PART II: SECOND ORDER EQUATIONS
Section 10
Language and Classification
James V. Herod*

*(c) Copyright 1993,1994,1995 by James V. Herod, herod@math.gatech.edu. All rights reserved.

Web page maintained by Evans M. Harrell, II, harrell@math.gatech.edu.


As we begin our study of second order equations, we will take a look at the language of partial differential equations. We should be sure that we all have the same vocabulary. Since there is some standard language for the identification of partial differential equations, let us agree from the start to use the language that everyone else uses.

Several parts are easy. Suppose that we have a partial differential equation in u. The order of the equation is the order of the highest partial derivative of u that appears, the number of variables is simply the number of independent variables for u in the equation, and the equation has constant coefficients if the coefficients of u and of all the partial derivatives of u are constant. If all the terms involving u are moved to the left side of the equation, then the equation is called homogeneous if the right side is zero, and nonhomogeneous if it is not zero. The equation is linear if that left side involves u in only a "linear way."

Examples:

(1) 4u_{xx} - 24 u_{xy} + 11 u_{yy} - 12 u_x - 9 u_y - 5u = 0

is a linear, homogeneous, second order partial differential equation in two variables with constant coefficients.

(2) u_t = x u_{xx} + y u_{yy} + u^2

is a nonlinear, second order partial differential equation in two variables and does not have constant coefficients.

(3) u_{tt} - u_{xx} = \sin(\pi t)

is a non-homogeneous equation.
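
For instance, in MAPLE, the computer algebra system whose syntax we will borrow later in these notes, equations such as (1) and (3) might be entered as follows; this is only a sketch, with u treated as a function of the indicated variables:

# a sketch of how equations (1) and (3) could be entered
pde1 := 4*diff(u(x,y),x,x) - 24*diff(u(x,y),x,y) + 11*diff(u(x,y),y,y)
        - 12*diff(u(x,y),x) - 9*diff(u(x,y),y) - 5*u(x,y) = 0;   # equation (1)
pde3 := diff(u(t,x),t,t) - diff(u(t,x),x,x) = sin(Pi*t);          # equation (3)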

In thinking of partial differential equations, it is common practice to carry over, insofar as possible, the language that has been used for matrix equations and for ordinary differential equations. Recall that one solved linear systems of algebraic equations such as the system

3x + 4y = 0

5x + 6y = 0.

This system could be re-written as the matrix equation

\begin{pmatrix} 3 & 4 \\ 5 & 6 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.

One talked of the matrix equation, or the linear homogeneous equation Au = 0, where u and 0 are understood to be vectors in the two dimensional vector space on which the linear operator A is defined.

In a similar manner, in sophomore differential equations, one considered equations such as

x' = 3x + 4y

y' = 5x + 6y

and, perhaps, re-wrote these equations as a system z' = Az where A is the matrix

A = \begin{pmatrix} 3 & 4 \\ 5 & 6 \end{pmatrix}.

This is a first order, linear system; A is a linear operator on R^2 and we follow the vector-valued function z(t) as it runs through that space, changing in time as prescribed by the differential equation

\frac{dz}{dt} = Az.
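
Again as a MAPLE sketch, the standard dsolve command will recover the general solution of this system; the particular lines below are only an illustration:

# the system z' = Az, written componentwise and handed to dsolve
sys := { diff(x(t),t) = 3*x(t) + 4*y(t), diff(y(t),t) = 5*x(t) + 6*y(t) }:
dsolve(sys, { x(t), y(t) });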

So, in partial differential equations, we consider linear equations

Lu = 0, or u' = Lu,

only now, L is a linear operator on a space of functions. For example, it may be that L(u) = u_{xx} + u_{yy}. And a corresponding notion for the equation u' = Lu is u_t = u_{xx} + u_{yy}.
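
As a concrete instance, take L(u) = u_{xx} + u_{yy} and u(x,y) = x^2 - y^2. Then

L(u) = \frac{\partial^2}{\partial x^2}(x^2 - y^2) + \frac{\partial^2}{\partial y^2}(x^2 - y^2) = 2 - 2 = 0,

so this u lies in the null space of the operator L, just as a nonzero vector may lie in the null space of a matrix.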

The notion that a linear operator can have as its domain a space of functions may seem alien at first, but the analogies with R^n are more than superficial. This is an idea worthy of consideration.

We will be interested in a rather general second order, differential operator. In two variables, we consider the operator

L(u) = \sum_{p=1}^{2} \sum_{q=1}^{2} A_{pq} \frac{\partial^2 u}{\partial x_p \partial x_q} + \sum_{p=1}^{2} B_p \frac{\partial u}{\partial x_p} + C u.

There is an analogous formula for three variables.

We suppose that u is smooth enough so that

\frac{\partial^2 u}{\partial x_1 \partial x_2} = \frac{\partial^2 u}{\partial x_2 \partial x_1} ;

that is, we can interchange the order of differentiation. In this first consideration, the matrix A, the vector B, and the number C do not depend on u; we take the matrix A to be symmetric.
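
Since A is symmetric, the two cross terms combine, and in two variables the operator written out is

L(u) = A_{11} \frac{\partial^2 u}{\partial x_1^2} + 2 A_{12} \frac{\partial^2 u}{\partial x_1 \partial x_2} + A_{22} \frac{\partial^2 u}{\partial x_2^2} + B_1 \frac{\partial u}{\partial x_1} + B_2 \frac{\partial u}{\partial x_2} + C u.

In particular, the coefficient of the mixed partial derivative is twice the off diagonal entry of A; this accounts for the entries -12 in the example that follows.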

Example: We write an example of the matrix representation for constant coefficient equations:

L[u] = 4u_{xx} - 24 u_{xy} + 11 u_{yy} - 12 u_x - 9 u_y - 5u

can be written as

L[u] = \begin{pmatrix} \frac{\partial}{\partial x} & \frac{\partial}{\partial y} \end{pmatrix} \begin{pmatrix} 4 & -12 \\ -12 & 11 \end{pmatrix} \begin{pmatrix} \frac{\partial}{\partial x} \\ \frac{\partial}{\partial y} \end{pmatrix} u + \begin{pmatrix} -12 & -9 \end{pmatrix} \begin{pmatrix} \frac{\partial}{\partial x} \\ \frac{\partial}{\partial y} \end{pmatrix} u - 5 u.

We shall be interested in an equation which has the form

L(u) = f

or u' = L(u) + f.

In this section, u is a function on R^2 or R^3 and the equations are to hold in an open, connected region D of the plane or of space. We will also assume that the boundary of the region is piecewise smooth, and denote this boundary by ∂D. Just as in ordinary differential equations, so in partial differential equations, some boundary conditions will be needed to solve the equations. We will take the boundary conditions to be linear and to have the general form

B(u) = a u + b \frac{\partial u}{\partial \eta},

where ∂u/∂η is the derivative taken in the direction of a normal to the boundary of the region. Taking b = 0 gives the familiar Dirichlet boundary condition, and taking a = 0 gives the Neumann condition.

The techniques for studying partial differential operators, and the properties of these operators, change depending on the "type" of the operator. These operators have been classified into three principal types. The classification is made according to the nature of the coefficients in the equation which defines the operator. The operator is called an elliptic operator if the eigenvalues of A are non-zero and have the same algebraic sign. The operator is hyperbolic if the eigenvalues have opposite signs, and it is parabolic if at least one of the eigenvalues is zero.
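
For instance, for the operator L[u] = 4u_{xx} - 24 u_{xy} + 11 u_{yy} - 12 u_x - 9 u_y - 5u considered above, the symmetric matrix is

A = \begin{pmatrix} 4 & -12 \\ -12 & 11 \end{pmatrix},

whose characteristic polynomial is \lambda^2 - 15\lambda - 100 = (\lambda - 20)(\lambda + 5). The eigenvalues 20 and -5 have opposite signs, so this operator is hyperbolic.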

These very names and ideas suggest a connection with quadratic forms in analytic geometry. We will make this connection a little clearer. Rather than finding a geometric understanding of the partial differential equation from this connection, we will more likely develop an algebraic understanding. In particular, we will see that there are some standard forms. Because of the nearly error-free arithmetic that MAPLE is able to do, we will offer syntax in MAPLE that enables the reader to use this computer algebra system to change second order, linear equations into the standard forms.
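
As one more small sample of that syntax, the eigenvalue criterion above is easy to check by machine; the following lines are only a sketch, using the standard LinearAlgebra package:

with(LinearAlgebra):
Eigenvalues(Matrix([[4, -12], [-12, 11]]));   # the example above: returns 20 and -5, opposite signs, so hyperbolic
# for the quadratic part a*uxx + b*uxy + c*uyy the symmetric matrix is [[a, b/2], [b/2, c]];
# its determinant, the product of the two eigenvalues, is a*c - b^2/4,
# which is negative exactly when b^2 - 4*a*c is positive
Determinant(Matrix([[a, b/2], [b/2, c]]));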

If presented with a quadratic equation in x and y, one could likely decide whether the equation represented a parabola, hyperbola, or ellipse in the plane. However, if asked to draw a graph of this conic section in the plane, one would start by recalling that there are several forms that are easy to draw:

a x^2 + b y^2 = c^2, and the special case x^2 + y^2 = c^2,

a x^2 - b y^2 = c^2, and the special case x^2 - y^2 = c^2,

or y - a x^2 = 0 and x - b y^2 = 0.

These quadratic equations represent the familiar conic sections: ellipses, hyperbolas and parabolas, respectively. If a quadratic equation is given that is not in these special forms, then one may recall procedures to transform the equations algebraically into these standard forms. This will be the topic of the next section.

The purpose of the classification is that the techniques for solving equations are different in the three classes, if it is possible to solve the equation at all. Even more, there are important resemblances among the solutions within one class, and there are striking differences between the solutions of one class and those of another class. The remainder of these notes will be primarily concerned with finding solutions to hyperbolic, second order, partial differential equations. As we progress, we will see that the equation must be hyperbolic in order for the techniques of these notes to apply.

Before comparing the procedures for changing a partial differential equation to standard form with the preceding arithmetic for conic sections, we pause to emphasize the differences in geometry among the three types: elliptic, hyperbolic, and parabolic.

Figure 10.1

Here are three equations from analytic geometry:

x^2 + y^2 = 4 is an ellipse,

x^2 - y^2 = 4 is a hyperbola,

and x^2 + 2xy + y^2 = 4 is a parabola.

Figure 10.1 contains the graphs of all three of these. Their shapes and their geometry are strikingly different. Even more, naively, one might say that the graph of the third of those above is not the graph of a parabola. Indeed. It does, however, meet the criterion b^2 - 4ac = 0. In fact, x^2 + 2xy + y^2 = 4 factors as (x + y)^2 = 4, so the graph is the pair of parallel lines x + y = 2 and x + y = -2; one might think of it as a parabola with vertex at the "point at infinity."

The criterion for classifying second order, partial differential equations is the same: ask about the sign of b^2 - 4ac in the equation

a \frac{\partial^2 u}{\partial x^2} + b \frac{\partial^2 u}{\partial x \partial y} + c \frac{\partial^2 u}{\partial y^2} + d \frac{\partial u}{\partial x} + e \frac{\partial u}{\partial y} + f u = 0.
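
For the three quadratic equations above, the computation gives

x^2 + y^2 = 4: \quad b^2 - 4ac = 0 - 4(1)(1) = -4 < 0,

x^2 - y^2 = 4: \quad b^2 - 4ac = 0 - 4(1)(-1) = 4 > 0,

x^2 + 2xy + y^2 = 4: \quad b^2 - 4ac = 4 - 4(1)(1) = 0,

and exactly the same three numbers classify the three partial differential equations that follow.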

We now present solutions for three equations that have the same start -- the same initial conditions. However, the equations are of the three different types. There is no reason you should know how to solve the three equations yet. There is no reason you should even understand that solving the hyperbolic equation by the method of characteristics is appropriate. But you should be able to check the solutions -- to see that they solve the specified equations. Each equation has initial conditions

u(0,y) = sin(y) and u_x(0,y) = 0.

The equations are

\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 is an elliptic partial differential equation,

\frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = 0 is a hyperbolic partial differential equation,

\frac{\partial^2 u}{\partial x^2} + 2 \frac{\partial^2 u}{\partial x \partial y} + \frac{\partial^2 u}{\partial y^2} = 0 is a parabolic partial differential equation.

These have solutions cosh(x) sin(y), cos(x) sin(y), and x cos(x-y) + sin(y-x), respectively. Figure 10.2 has the graphs of these three solutions in order.

Figure 10.2 a

Figure 10.2 b

Figure 10.2 c
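
As suggested above, the reader can verify these solutions directly; here is a short MAPLE sketch of such a check, in which each command after the three definitions should return 0:

u1 := (x,y) -> cosh(x)*sin(y):        # elliptic example
u2 := (x,y) -> cos(x)*sin(y):         # hyperbolic example
u3 := (x,y) -> x*cos(x-y) + sin(y-x): # parabolic example

simplify(diff(u1(x,y),x,x) + diff(u1(x,y),y,y));
simplify(diff(u2(x,y),x,x) - diff(u2(x,y),y,y));
simplify(diff(u3(x,y),x,x) + 2*diff(u3(x,y),x,y) + diff(u3(x,y),y,y));

# the common initial conditions u(0,y) = sin(y) and u_x(0,y) = 0
simplify(u1(0,y) - sin(y));  simplify(eval(diff(u1(x,y),x), x=0));
simplify(u2(0,y) - sin(y));  simplify(eval(diff(u2(x,y),x), x=0));
simplify(u3(0,y) - sin(y));  simplify(eval(diff(u3(x,y),x), x=0));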

10.1. Classify each of the following as hyperbolic, parabolic, or elliptic.

(A) \frac{\partial^2 u}{\partial x^2} + 3 \frac{\partial^2 u}{\partial x \partial y} + 2 \frac{\partial^2 u}{\partial y^2} - \frac{\partial u}{\partial x} - \frac{\partial u}{\partial y} = 0.

(B) \frac{\partial^2 u}{\partial x^2} + 3 \frac{\partial^2 u}{\partial x \partial y} + 2 \frac{\partial^2 u}{\partial y^2} - 2 \frac{\partial u}{\partial x} - 4 \frac{\partial u}{\partial y} = 0.

(C) \frac{\partial^2 u}{\partial x^2} + 2 \frac{\partial^2 u}{\partial x \partial y} + 2 \frac{\partial^2 u}{\partial y^2} = 0.

(D) \frac{\partial^2 u}{\partial x^2} + 2 \frac{\partial^2 u}{\partial x \partial y} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial u}{\partial x} + \frac{\partial u}{\partial y} = 0.

