James V. Herod*

Page maintained with additions by Evans M. Harrell, II, harrell@math.gatech.edu.

Section 3.1 Classification of Second Order, Linear Equations

In this section, we will be interested in a rather general second order, linear operator. In two variables, we consider the operator

L(u) = Σ_{p=1}^{2} Σ_{q=1}^{2} A_pq ∂²u/∂x_p∂x_q + Σ_{p=1}^{2} B_p ∂u/∂x_p + C u.

There is an analogous formula for three variables. We suppose that u is smooth
enough that ∂²u/∂x₁∂x₂ = ∂²u/∂x₂∂x₁; that is, we can interchange the order of
differentiation. The matrix A, the vector B, and the number C do not depend on
u; we take the matrix A to be symmetric.

We shall continue to be interested in an equation which has the form L(u) = f.
In this section, u is a function on R² or R³ and the
equations are to hold in an open, connected region D of the plane. We will
also assume that the boundary of the region is piecewise smooth, and denote
this boundary by [[partialdiff]]D. Just as in ordinary differential equations,
so in partial differential equations, some boundary conditions will be needed
to solve the equations. We will take the boundary conditions to be linear and
have the general form B(u) = a u + b ∂u/∂η, where ∂u/∂η is the derivative
taken in the direction of the outward normal to the region.

The techniques of studying partial differential operators and the properties
of these operators change depending on the "type" of operator. These operators
have been classified into three principal types. The classifications are made
according to the nature of the coefficients in the equation which defines the
operator. The operator is called an __elliptic__ operator if the
eigenvalues of A are all nonzero and have the same algebraic sign. The operator
is __hyperbolic__ if the eigenvalues are nonzero and have opposite signs, and
is __parabolic__ if at least one eigenvalue is zero.
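This classification rule can be expressed as a short computation. Here is a minimal sketch for the two-variable case; the helper name `classify` and the closed-form eigenvalues of a symmetric 2×2 matrix are our own, not part of the text:

```python
import math

def classify(a11, a12, a22):
    """Classify the operator whose symmetric matrix is A = (a11 a12; a12 a22)."""
    mean = (a11 + a22) / 2
    rad = math.sqrt(((a11 - a22) / 2) ** 2 + a12 ** 2)
    lam1, lam2 = mean + rad, mean - rad       # the two (real) eigenvalues
    if lam1 * lam2 > 0:
        return "elliptic"        # nonzero, same algebraic sign
    if lam1 * lam2 < 0:
        return "hyperbolic"      # nonzero, opposite signs
    return "parabolic"           # at least one eigenvalue is zero
```

For instance, A = (1 0; 0 1) (the Laplacian) is elliptic, A = (1 0; 0 −1) is hyperbolic, and A = (0 0; 0 −1) (the heat operator below) is parabolic.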

**EXAMPLES**

(a) We consider an example which arises in a physical situation and is called
the __one dimensional heat equation__. Here, u(t,x) represents the temperature
on a line at time t and position x. One should be given an initial distribution
of temperature which is denoted u(0,x), and some boundary conditions which
arise in the context of the problem. For example, it might be assumed that the
ends are held at some fixed temperature for all time. In this case, boundary
conditions for a line of length L would be u(t,0) = α and u(t,L) =
β. Or, one might assume that the ends are insulated. A mathematical
statement of this is that the rate of flow of heat over the ends is zero:

∂u/∂x (t,0) = ∂u/∂x (t,L) = 0.

The manner in which u changes in time is derived from the physical principle that the heat flux at any point is proportional to the temperature gradient at that point; this leads to the equation

∂u/∂t (t,x) = ∂²u/∂x² (t,x).

Geometrically, one may think of the problem as one of defining the graph of u whose domain is the infinite strip bounded in the first quadrant by the parallel lines x = 0 and x = L. The function u is known along the x axis between x = 0 and x = L. To define u on the infinite strip, move in the t direction according to the equation

∂u/∂t (t,x) = ∂²u/∂x² (t,x),

while maintaining the boundary conditions.
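The marching picture just described can be sketched numerically with an explicit finite-difference scheme. The grid, time step, and initial profile below are our own illustrative choices, not part of the text:

```python
import math

def heat_step(u, dt, dx, alpha=0.0, beta=0.0):
    """One explicit Euler step of u_t = u_xx, holding u = alpha, beta at the ends."""
    r = dt / dx**2                      # stability requires r <= 1/2
    interior = [u[i] + r * (u[i+1] - 2*u[i] + u[i-1]) for i in range(1, len(u) - 1)]
    return [alpha] + interior + [beta]

# march an initial sine profile forward in t; it should decay toward zero
L, n = 1.0, 20
dx = L / n
u = [math.sin(math.pi * i * dx) for i in range(n + 1)]
for _ in range(200):
    u = heat_step(u, dt=0.4 * dx**2, dx=dx)
```

Each pass of the loop advances the graph of u one level in the t direction while the end conditions u(t,0) = u(t,L) = 0 are maintained.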

We could also have a source term. Physically, this could be thought of as a heater (or refrigerator) adding or removing heat at some rate along the strip. Such an equation could be written as

∂u/∂t (t,x) = ∂²u/∂x² (t,x) + F(t,x).

Boundary and initial conditions would be as before. In order to rewrite this equation in the context of this course, we should conceive of the equation as L[u] = f, with appropriate boundary conditions. The operator L is

L[u] = ∂u/∂t (t,x) − ∂²u/∂x² (t,x).

This is a parabolic operator; in fact, the matrix A of the above definition is given by

A = ( 0    0
      0   −1 ).

(b) Another common operator is the __Laplacian operator__:

L[u] = ∂²u/∂x² + ∂²u/∂y².

A physical situation in which it arises is the problem of finding the shape of a drum under force. Suppose that the bottom of the drum sits on the unit disc in the xy-plane and that the sides of the drum lie above the unit circle. We do not suppose that the sides are at a uniform height, but that the height is specified.

That is, we know u(x,y) for {x,y} on the boundary of the drum. We also suppose that there is a force pulling down, or pushing up, on the drum at each point and that this force is not changing in time. An example of such a force might be the pull of gravity. The question is, what is the shape of the drum? As we shall see, the appropriate equations take the form: Find u if

∂²u/∂x² + ∂²u/∂y² = f(x,y) for x² + y² < 1

with u(x,y) specified for x² + y² = 1.

This is an elliptic problem, for the Laplacian operator is an elliptic operator.

(c) An example of a hyperbolic problem is the __one dimensional wave
equation__. One can think of this equation as describing the motion of a
taut string after an initial perturbation and subject to some outside force.
Appropriate boundary conditions are given. To think of this as being a plucked
string with the observer watching the up and down motion in time is not a bad
perspective, and certainly gives intuitive understanding. Here is another
perspective, however, which will be more useful in the context of finding the
Green's function to solve this one dimensional wave equation:

∂²u/∂t² = ∂²u/∂x² + f(t,x) for 0 < x < L,

u(t,0) = u(t,L) = 0 for t > 0,

u(0,x) = g(x)

and ∂u/∂t (0,x) = h(x) for 0 < x < L.

As in example (a), the problem is to describe u in the infinite strip within the first quadrant of the xt-plane bounded by the x axis and the lines x = 0 and x = L. Both u and its first derivative in the t direction are known along the x axis. Along the other boundaries, u is zero. What must be the shape of the graph above the infinite strip?

To classify this as a hyperbolic problem, think of the operator L as

L[u] = ∂²u/∂t² − ∂²u/∂x²

and re-write it in the appropriate form for classification.

There are standard procedures for changing more general partial differential equations to the familiar standard forms.

Section 3.2: A Standard Form for Second Order Linear Equations

The ideas of the previous section suggested a connection with quadratic forms in analytic geometry. If presented with a quadratic equation in two variables, one could likely decide if the equation represented a parabola, hyperbola, or ellipse in the plane. However, if asked to draw a graph of this conic section in the plane, one would start recalling that there are several forms that are easy to draw:

ax² + by² = c², and the special case x² + y² = c²,

ax² − by² = c², and the special case x² − y² = c²,

or y − ax² = 0 and x − by² = 0.

These quadratic equations represent the familiar conic sections: ellipses, hyperbolas and parabolas, respectively. If a quadratic equation is given that is not in these special forms, then one must recall procedures to transform the equations algebraically into these standard forms. Performing these algebraic procedures corresponds to a geometric idea of translations and rotations.

For example, the equation

x² − 3y² − 8x + 30y = 60

represents a hyperbola. To draw the graph of the hyperbola, one algebraically completes the square or, geometrically, translates the axes:

(x − 4)² − 3(y − 5)² = 1.

Now, the response that this is a hyperbola with center {4,5} is expected. More detailed information about the direction of the major and minor axes could be given, but these are not notions that we will wish to carry over to the process of getting second order partial differential equations into standard forms.

There is another idea more appropriate. Rather than keeping the hyperbola in the Euclidean plane where it now has the equation

x² − 3y² = 1

in the translated form, think of this hyperbola in the Cartesian plane, and do not insist that the x axis and the y axis have the same scale. In this particular case, keep the x axis the same size and expand the y axis so that every unit is the old unit multiplied by √3. Algebraically, one commonly writes that there are new coordinates {x',y'} related to the old coordinates by

x = x',   √3 y = y'.

The algebraic effect is that the equation is transformed into an equation in {x',y'} coordinates:

x'² − y'² = 1.

Pay attention to the fact that it is now a mistake to carry over too much of the geometric language for the form. For example, if the original quadratic equation had been

x² + 3y² − 8x + 30y = 50

and we had translated axes to produce

x² + 3y² = 141,

and then rescaled the axes to get

x² + y² = 141,

we have not changed an ellipse into a circle, for a circle is a geometric object whose very definition involves the notion of distance. The process of changing the scale on the x axis and the y axis certainly destroys the entire notion of distance being the same in all directions.

Rather, the rescaling is an idea that is __algebraically__ simplifying.

Before we pursue the idea of rescaling and translating in second order partial differential equations in order to come up with standard forms, we need to recall that there is also the troublesome need to rotate the axes in order to get some quadratic forms into the standard one. For example, if the equation is

xy = 2,

we quickly recognize this as a quadratic equation. Even more, we could draw the graph. If pushed, we would identify the resulting geometric figure as a hyperbola. We ask for more here since these geometric ideas are more readily transformed into ideas about partial differential equations if they are converted into algebraic ideas. The question, then, is how do we achieve the algebraic representation of the hyperbola in standard form?

One recalls from analytic geometry, or recognizes from simply looking at the picture of the graph of the equation, that this is a hyperbola that has been rotated out of standard form. To see it in standard form, we must rotate the axes. One may forget the details of how this rotation is performed, but should know a reference in which to find the scheme.

Here is the rotation needed to remove the xy term in the equation

ax² + bxy + cy² + dx + ey + f = 0.

The new coordinates {x', y'} are given by

( x' )   (  cos α   sin α ) ( x )
( y' ) = ( −sin α   cos α ) ( y ),

where α is π/4 if a = c and is (1/2) arctan( b/(a−c) ) if a ≠ c. What is the same,

( x )   ( cos α   −sin α ) ( x' )
( y ) = ( sin α    cos α ) ( y' ).

Thus, substitute x = x' cos α − y' sin α and y = x' sin α + y' cos α into the equation, where α is as indicated above, and the cross term, bxy, will disappear.
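This recipe can be checked numerically. The helper below is our own; it applies the substitution above to the quadratic coefficients and returns the rotated coefficients, whose cross term should vanish:

```python
import math

def rotate_conic(a, b, c):
    """Return the x'^2, x'y', y'^2 coefficients after rotating by the alpha above."""
    alpha = math.pi / 4 if a == c else 0.5 * math.atan(b / (a - c))
    s, co = math.sin(alpha), math.cos(alpha)
    a2 = a * co**2 + b * s * co + c * s**2           # coefficient of x'^2
    b2 = b * (co**2 - s**2) + 2 * (c - a) * s * co   # coefficient of x'y'
    c2 = a * s**2 - b * s * co + c * co**2           # coefficient of y'^2
    return a2, b2, c2
```

For xy = 2 (a = c = 0, b = 1) this gives coefficients 1/2, 0, −1/2, i.e. the standard hyperbola x'² − y'² = 4; for a = 4, b = −24, c = 11 it returns −5, 0, 20.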

Given a general quadratic, there are three things that need to be done to get
it into standard form: get rid of the xy terms, factor all the x terms and the
y terms separately, and rescale the axes so that the coefficients of the
x^{2} term and the y^{2} terms are the same. Geometrically
this corresponds, as we have recalled, to a rotation, translation, and
expansion, respectively. From the geometric point of view, it does not matter
which is done first: the rotation and then the translation, or vice versa.
Algebraically, it is easier to remove the xy terms first, for then the
factoring is easier.

The purpose of the previous paragraphs recalling how to change algebraic equations representing two dimensional conic sections into standard form was to suggest that the same ideas carry over almost unchanged for the second degree partial differential equations. The techniques will change these equations into the standard forms for elliptic, hyperbolic, or parabolic partial differential equations. The purpose for doing this is that the techniques for solving equations are different in the three classes, if this is possible at all. Even more, there are important resemblances among the solutions of one class and striking differences between the solutions of one class and those of another class. The remainder of these notes will be primarily concerned with finding solutions to elliptic equations, but will discuss the standard form given by the Laplacian

∇²u = ∂²u/∂x² + ∂²u/∂y².

If one has an elliptic equation that is not in the standard form of the Laplacian, the purpose of the remainder of this section is to present methods to change it into this form. The techniques are similar to those used in the analytic geometry. Having the standard form, one might then solve the equation involving the Laplacian. Finally, the solution should be transformed back into the original coordinate system.

We will illustrate the procedure for transformation of a second order equation into standard form. Consider the equation

4 ∂²u/∂x² − 24 ∂²u/∂x∂y + 11 ∂²u/∂y² − 12 ∂u/∂x − 9 ∂u/∂y − 5u = 0.

We would like to transform the equation into the form

∇²u + cu = 0.

In the original equation, think of the equation as having the form

a ∂²u/∂x² + 2b ∂²u/∂x∂y + c ∂²u/∂y² + d ∂u/∂x + e ∂u/∂y + f u = 0.

Then a = 4, b = −12, c = 11, so that b² − ac = 100 and the
equation is hyperbolic. The transformations will be made in three steps which
will correspond to the rotation, translation, and rescaling of the earlier
discussion.

The first step is the introduction of new coordinates ([[xi]],[[eta]]) by rotation of axes so that in the transformed equation the mixed second partial derivative does not appear. Let

( ξ )   (  cos α   sin α ) ( x )
( η ) = ( −sin α   cos α ) ( y )

or,

( x )   ( cos α   −sin α ) ( ξ )
( y ) = ( sin α    cos α ) ( η ).

Using the chain rule, ∂/∂x = cos α ∂/∂ξ − sin α ∂/∂η

and ∂/∂y = sin α ∂/∂ξ + cos α ∂/∂η.

It follows that

∂²/∂x² = (∂/∂x)(∂/∂x) = ( cos α ∂/∂ξ − sin α ∂/∂η )( cos α ∂/∂ξ − sin α ∂/∂η )

so that

∂²/∂x² = cos²α ∂²/∂ξ² − 2 sin α cos α ∂²/∂ξ∂η + sin²α ∂²/∂η².

In a similar manner,

∂²/∂x∂y = sin α cos α ∂²/∂ξ² + (cos²α − sin²α) ∂²/∂ξ∂η − sin α cos α ∂²/∂η²,

and

∂²/∂y² = sin²α ∂²/∂ξ² + 2 sin α cos α ∂²/∂ξ∂η + cos²α ∂²/∂η².

The original equation described u as a function of x and y. We now
define v as a function of [[xi]] and [[eta]] by v([[xi]],[[eta]]) = u(x,y).
The variables [[xi]] and [[eta]] are related to x and y as described by the
rotation above. Of course, we have not specified [[alpha]] yet. This comes
next.

The equation satisfied by v is

[4c² − 24sc + 11s²] ∂²v/∂ξ² + [14sc − 24(c² − s²)] ∂²v/∂ξ∂η

+ [4s² + 24sc + 11c²] ∂²v/∂η² + [−12c − 9s] ∂v/∂ξ

+ [12s − 9c] ∂v/∂η − 5v = 0,

where we have used the abbreviations s = sin([[alpha]]) and c = cos([[alpha]]). The coefficient of the mixed partials will vanish if [[alpha]] is chosen so that

14 sin(α) cos(α) − 24( cos²(α) − sin²(α) ) = 0,

that is,

tan(2α) = 24/7.

Since tan(2α) = 24/7 gives cos(2α) = 7/25, we have cos²(α) = (1 + cos(2α))/2 = 16/25; thus we may take sin(α) = 3/5 and cos(α) = 4/5.

After substitution of these values and division by the common factor −5, the equation satisfied by v becomes

∂²v/∂ξ² − 4 ∂²v/∂η² + 3 ∂v/∂ξ + v = 0.
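The arithmetic of this rotation step can be verified with exact rational arithmetic; the variable names below are our own:

```python
from fractions import Fraction

s, c = Fraction(3, 5), Fraction(4, 5)          # sin(alpha), cos(alpha)
xi_xi   = 4*c**2 - 24*s*c + 11*s**2            # coefficient of v_xi_xi
mixed   = 14*s*c - 24*(c**2 - s**2)            # coefficient of v_xi_eta
eta_eta = 4*s**2 + 24*s*c + 11*c**2            # coefficient of v_eta_eta
v_xi    = -12*c - 9*s                          # coefficient of v_xi
v_eta   = 12*s - 9*c                           # coefficient of v_eta
```

The mixed coefficient comes out exactly 0, and the remaining coefficients −5, 20, −15, 0 (with constant term −5) reduce, after division by −5, to 1, −4, 3, 0 and 1.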

This special example, together with the foregoing discussions of analytic geometry, makes the following statement believable: Every second order partial differential equation with constant coefficients can be transformed into one in which mixed partials are absent.

We are now ready for the second step: to remove the first order term. For economy of notation, let us assume that the given equation is already in the form

∂²u/∂x² − 4 ∂²u/∂y² + 3 ∂u/∂x + u = 0.

Define v by

v(x,y) = e^{−βx} u(x,y) or u(x,y) = e^{βx} v(x,y),

where [[beta]] will be chosen so that the transformed equation will have the first order derivative removed. Differentiating u and substituting into the equation we get that

∂²v/∂x² − 4 ∂²v/∂y² + (2β + 3) ∂v/∂x + (β² + 3β + 1)v = 0.

If we choose β = −3/2, we have

∂²v/∂x² − 4 ∂²v/∂y² − (5/4) v = 0.

Notice that this transformation to achieve an equation lacking the first derivative with respect to x is generally possible when the coefficient on the second derivative with respect to x is not zero, and is otherwise impossible. The same statements hold for derivatives with respect to y.

The final step is rescaling. We choose variables ξ and η by

ξ = μx and η = νy, where μ and ν are chosen so that in
the transformed equation the coefficients of ∂²v/∂ξ², ∂²v/∂η², and v are equal
in absolute value. We have

∂²v/∂x² = μ² ∂²v/∂ξ² and ∂²v/∂y² = ν² ∂²v/∂η²,

where on the right-hand sides v is regarded as a function of ξ and η.

Our equation becomes

μ² ∂²v/∂ξ² − 4ν² ∂²v/∂η² − (5/4) v = 0.

The condition that

μ² = 4ν² = 5/4

will be satisfied if μ = √5/2 and ν = √5/4. Then, dividing by the common value 5/4, we obtain the standard form

∂²v/∂ξ² − ∂²v/∂η² − v = 0.
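A quick numerical check (our own) that the stated values μ = √5/2 and ν = √5/4 do make the three coefficients equal in absolute value:

```python
import math

mu, nu = math.sqrt(5) / 2, math.sqrt(5) / 4

# coefficients (in absolute value) of the rescaled equation
# mu^2 v_xixi - 4 nu^2 v_etaeta - (5/4) v = 0
coeffs = (mu**2, 4 * nu**2, 5 / 4)
```

All three entries of `coeffs` equal 5/4, so dividing through produces the standard form above.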

**EXERCISES**:

I. Transform the following equations into standard form:

(a) 3 ∂²u/∂x² + 4 ∂²u/∂y² − u = 0.

(b) 4 ∂²u/∂x² + ∂²u/∂x∂y + 4 ∂²u/∂y² + u = 0.

(c) ∂²u/∂x² + ∂²u/∂y² + 3 ∂u/∂x − 4 ∂u/∂y + 25 u = 0.

(d) ∂²u/∂x² − 3 ∂²u/∂y² + 2 ∂u/∂x − ∂u/∂y + u = 0.

(e) ∂²u/∂x² − 2 ∂²u/∂x∂y + ∂²u/∂y² + 3 u = 0.

II. Show that the equation

∂²u/∂x² − ∂u/∂y + γ u = f(x,y),

where γ is any constant, can be transformed into

∂²v/∂x² − ∂v/∂y = g(x,y).

III. Show that by rotation of the axes by 45° the equations

∂²u/∂x² − ∂²u/∂y² = 0 and ∂²u/∂x∂y = 0

can be transformed into one another. Find the general solution of both equations.

Section 3.3: A CALCULUS REVIEW.

Throughout this chapter, there are some ideas from the multi-dimensional calculus which we will use. In addition, the following notational agreements should be recalled:

grad u = ∇u = {∂u/∂x, ∂u/∂y},

div **v** = ∇·**v** = ∂v₁/∂x + ∂v₂/∂y,

∇²u = ∇·∇u = ∂²u/∂x² + ∂²u/∂y² in rectangular coordinates,

∇²u = (1/r) ∂/∂r ( r ∂u/∂r ) + (1/r²) ∂²u/∂θ² in polar coordinates,

∂u/∂η = < ∇u, η >, where η is a vector, and

curl **F** = det ( i       j       k
                   ∂/∂x    ∂/∂y    ∂/∂z
                   F₁      F₂      F₃ )

= { ∂F₃/∂y − ∂F₂/∂z, ∂F₁/∂z − ∂F₃/∂x, ∂F₂/∂x − ∂F₁/∂y }.
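The determinant formula for the curl can be sanity-checked against central differences; the field, the point, and the step size below are our own illustrative choices:

```python
def curl(F, p, h=1e-5):
    """Central-difference curl of a field F : R^3 -> R^3 at the point p."""
    def d(i, j):                        # dF_i / dx_j at p
        q1, q2 = list(p), list(p)
        q1[j] += h
        q2[j] -= h
        return (F(q1)[i] - F(q2)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

# an illustrative polynomial field; by the formula, curl = {-3y^2, 2z, 2}
F = lambda p: (p[2]**2, 2 * p[0], -p[1]**3)
```

At the point {1, 2, 3} the numerical curl agrees with {−3·2², 2·3, 2} = {−12, 6, 2}.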

There is also information from the integral calculus which we should recall. Some of the important ideas involve methods of calculating surface integrals. Recall that if D is an open, connected region in the plane and f is a function defined on D which has continuous partial derivatives, and if S is the surface which is the graph of f, then we can define a surface integral

∫∫_S H(x,y,z) dA

where H is a continuous function with domain S. This integral over the surface
S in R³ can be evaluated by changing it to an integral over the
2-dimensional region D as

∫∫_D H(x,y,f(x,y)) √[ (∂f/∂x)² + (∂f/∂y)² + 1 ] dx dy.

For such a surface, a unit normal is given by

η = { −∂f/∂x, −∂f/∂y, 1 } / √[ (∂f/∂x)² + (∂f/∂y)² + 1 ].

The unit normal to a plane curve described by {x(t), y(t)} is

η = { y'(t), −x'(t) } / √[ x'(t)² + y'(t)² ].

The following are fundamental ideas used in vector analysis.

STOKES THEOREM IN 2D. Suppose that D is a region in the plane with a
piecewise smooth boundary and that **F**(x,y) = {P(x,y), Q(x,y)} has
continuous second partial derivatives and is a function from R² to
R². Then

∮_∂D P dx + Q dy = ∫∫_D ( ∂Q/∂x − ∂P/∂y ) dx dy.
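Here is a small numerical check (our own) of the 2-D theorem with P = −y and Q = x on the unit disk, where both sides equal 2π:

```python
import math

n = 100000
# line integral of P dx + Q dy over the unit circle, midpoint rule in t
line = 0.0
for k in range(n):
    t = 2 * math.pi * (k + 0.5) / n
    x, y = math.cos(t), math.sin(t)
    dxdt, dydt = -math.sin(t), math.cos(t)   # derivative of the parameterization
    line += (-y * dxdt + x * dydt) * (2 * math.pi / n)

# double integral of dQ/dx - dP/dy = 2 over the disk: constant times area pi
double = 2 * math.pi
```

The two quantities agree to within the quadrature error of the line integral.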

STOKES THEOREM IN 3D. Suppose that D is a smooth surface with unit
normal η and a smooth boundary. Suppose, also, that **F** is a function
from R³ to R³ which has continuous second partial derivatives. Then

∮_∂D < **F**, d**R** > = ∫∫_D < curl **F**, η > dA.

DIVERGENCE THEOREM. With proper suppositions on D and **F**,

∮_∂D < **F**, η > ds = ∫∫_D div **F** dA = ∫∫_D ∇·**F** dA

in two dimensions, while

∫∫_∂D < **F**, η > dA = ∫∫∫_D div **F** dV = ∫∫∫_D ∇·**F** dV

in three dimensions.

**REMARKS**.

1. Here is a suggestion for the proof of the Divergence Theorem which uses
Stokes Theorem: Take **R**(t) to be a parameterization of the boundary
given by { Φ(t), Ψ(t) }. Then d**R**(t) = { Φ'(t), Ψ'(t) } dt and
η ds = { Ψ'(t), −Φ'(t) } dt.

Thus,

∫∫_D div **F** dA = ∫∫_D [ ∂F₁/∂x + ∂F₂/∂y ] dA

= ∫∫_D [ ∂F₁/∂x − ∂(−F₂)/∂y ] dx dy.

Recall that Stokes theorem in 2-D says that this last two dimensional integral can be changed to a line integral:

∫∫_D [ ∂F₁/∂x − ∂(−F₂)/∂y ] dx dy

= ∮_∂D [ −F₂ dx + F₁ dy ]

= ∮_∂D < { F₁, F₂ }, { Ψ', −Φ' } > dt

= ∮_∂D < **F**, η > ds. ∎

2. The Divergence Theorem is a generalization of the fundamental
theorem of integral calculus in the following sense: Let D be the rectangle
(a,b)×(c,d) and **F**(x,y) = { u(x), 0 }. Then div **F** = u'(x) and

(d − c) ∫_a^b u'(x) dx = ∫∫_D div **F** dA.

By the Divergence Theorem, this latter is

∮_∂D < **F**, η > ds = ∫_a^b < {u, 0}, {0, −1} > dx + ∫_c^d < {u(b), 0}, {1, 0} > dy

+ ∫_a^b < {u, 0}, {0, 1} > dx + ∫_c^d < {u(a), 0}, {−1, 0} > dy

= u(b)(d − c) − u(a)(d − c) = [u(b) − u(a)](d − c).

Thus, in one dimension, the divergence theorem specializes to

∫_a^b u'(x) dx = u(b) − u(a). ∎

**EXERCISE**:

1. Suppose that A = [ 1 2; 2 3 ], B = {5,7}, and C = 11. Show that

∇·A∇u + B·∇u + Cu = ∂²u/∂x² + 4 ∂²u/∂x∂y + 3 ∂²u/∂y² + 5 ∂u/∂x + 7 ∂u/∂y + 11 u.

2. Suppose that L[u] = ∂²u/∂x² + 2 ∂²u/∂x∂y + 3 ∂²u/∂y² + 4 ∂u/∂x + 5 ∂u/∂y + 6 u. Find A, B, and C such that L[u] = ∇·A∇u + B·∇u + Cu.

3. Suppose that P(x,y) and Q(x,y) have continuous partial derivatives and that
**F** is defined by **F**(x,y,z) = {P(x,y), Q(x,y), 0}. Show that

< curl **F**, {0,0,1} > = ∂Q/∂x − ∂P/∂y.

4. Application of Stokes Theorem in the plane: Integrate < curl **F**, η > over D.

a. **F**(x,y) = {x, y}, D = D1(0) (= the unit disk).

b. **F**(x,y) = {−y, x}, D = D1(0).

c. **F**(x,y) = {3y, 5x}, D = D1(0). ans: 2π.

d. **F**(x,y) = {0, x²}, D is the rectangle with vertices at {0,0}, {a,0}, {a,b}, and {0,b}. ans: a²b.

e. **F**(x,y) = {3xy + y², 2xy + 5x²}, D = D1({1,2}) (= the disk with radius 1 and center {1,2}). ans: 7π.
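As a check of the stated answer for part (e): there ∂Q/∂x − ∂P/∂y = (2y + 10x) − (3x + 2y) = 7x, and midpoint quadrature in polar coordinates about the center {1,2} (our own setup) reproduces 7π:

```python
import math

n = 400
total = 0.0
for i in range(n):                       # radial midpoints
    r = (i + 0.5) / n
    for j in range(n):                   # angular midpoints
        t = 2 * math.pi * (j + 0.5) / n
        x = 1 + r * math.cos(t)          # x over the disk centered at {1,2}
        total += 7 * x * r * (1.0 / n) * (2 * math.pi / n)
```

The factor r is the polar-coordinate Jacobian; the angular average of x is 1 (the x coordinate of the center), so the integral is 7 · 1 · π.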

5. Application of Stokes Theorem in 3D: Integrate < curl **F**, η > over D.

a. **F**(x,y,z) = {x, y, z}, D is the upper half of the unit sphere. ans: 0.

b. **F**(x,y,z) = {z², 2x, −y³}, D is as above. ans: 2π.

c. **F**(x,y,z) = {2z, −y, x}, D is the triangle with vertices at {2,0,0}, {0,2,0}, {0,0,2}. ans: 2.

d. **F**(x,y,z) = {x⁴, xy, z⁴}, D is as above. ans: 4/3.

6. Application of the Divergence Theorem. Verify the Divergence Theorem on these regions.

a. **F**(x,y,z) = {x, y, z}, D = the unit sphere. ans: 4π.

b. **F**(x,y,z) = {x², y², z²}, D = the unit sphere. ans: 0.

c. **F**(x,y,z) = {x, y, z}, D = the unit cube in the 1st octant. ans: 3.

d. **F**(x,y,z) = {x², −xz, z²}, D = the unit cube in the 1st octant. ans: 2.

Section 3.4: GREEN'S IDENTITIES

As students study the integration identities in the multi-dimensional calculus, the chief applications they see for these formulas are likely the computation of work along a path, flux through a surface, or circulation around a curve. In this section, we will show that these identities may be used to derive the formulas which are used to study other physical phenomena. Also, one of the Green's identities is a multi-dimensional version of integration-by-parts. Recalling that integration-by-parts played such an important role in defining the adjoint of differential operators, it is no surprise that the corresponding identity plays a similar role here.

THEOREM (__GREEN'S FIRST IDENTITY__) Suppose that D is a region in the
plane with a piecewise smooth boundary and that U and V have continuous second
partial derivatives. Then,

∫∫_D V ∇²U dA + ∫∫_D < ∇V, ∇U > dA = ∮_∂D < V∇U, η > ds.


Suggestion for proof: We will use the Divergence theorem. Let

P = V U_x and Q = V U_y.

Then

∂P/∂x + ∂Q/∂y = V U_xx + V_x U_x + V U_yy + V_y U_y

= V ∇²U + < ∇V, ∇U >.

Is it clear how this is an application of the Divergence Theorem?
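The identity can also be verified numerically on a simple case; the test functions U = x² and V = y on the unit square are our own choices, and both sides come out equal to 1:

```python
n = 100
h = 1.0 / n
mid = [(k + 0.5) * h for k in range(n)]

# left side: integral over D of V*laplacian(U) + <grad V, grad U>
#            = y*2 + <{0,1},{2x,0}> = 2y
left = sum(2 * y * h * h for x in mid for y in mid)

# right side: only the edge x = 1 contributes, where <V grad U, eta> = 2y
right = sum(2 * y * h for y in mid)
```

On the other three edges either ∂U/∂η = 0 or the normal is perpendicular to ∇U, so they contribute nothing to the boundary integral.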

REMARK: Just as the divergence theorem generalizes the fundamental theorem of integral calculus, so Green's First Identity generalizes the integration-by-parts formula: take V(x,y) = f(x) and U(x,y) = g(x) on the rectangle [a,b]×[c,d]. Then

∫∫_D V ∇²U dA + ∫∫_D < ∇U, ∇V > dA

= ∫∫_D f(x) g''(x) dA + ∫∫_D < {f', 0}, {g', 0} > dA

= (d − c) [ ∫_a^b f(x) g''(x) dx + ∫_a^b f'(x) g'(x) dx ].

On the other hand,

∮_∂D < V∇U, η > ds = (d − c)[ f(b) g'(b) − f(a) g'(a) ].

THEOREM (__GREEN'S SECOND IDENTITY__) With D, U, and V as before

∫∫_D [ (∇²U) V − U ∇²V ] dA = ∮_∂D [ (∂U/∂η) V − U (∂V/∂η) ] ds.

Suggestion of a proof. ∫∫_D [ (∇²U) V − U ∇²V ] dA

= ∫∫_D ∇·[ (∇U) V − U ∇V ] dA = ∮_∂D [ (∂U/∂η) V − U (∂V/∂η) ] ds.

This last equality is a result of the divergence theorem. ∎

**EXERCISE**

(1) In the same sense that the Divergence Theorem generalizes the fundamental theorem of integral calculus, and Green's First Identity generalizes integration-by-parts, show that Green's Second Identity leads to

∫_a^b f''(x) g(x) dx − ∫_a^b f(x) g''(x) dx

= [ f'(b) g(b) − f'(a) g(a) ] − [ f(b) g'(b) − f(a) g'(a) ].

(2) A. Find a matrix A, a vector B, and a number C such that

∇·[∇ + {4,5}] u = ∇·A∇u + B·∇u + Cu.

B. Suppose that D is a region in the plane with a piecewise smooth boundary. Fill in the blank:

∫∫_D ( ∇·[∇ + {4,5}] u ) dA = ∮_∂D [ blank ].

(3) A. Find F such that

[ 3 ∂²u/∂x² + 5 ∂²u/∂y² ] v − u [ 3 ∂²v/∂x² + 5 ∂²v/∂y² ] = ∇·F.

B. Suppose that D is a region in the plane with a piecewise smooth boundary. Fill in the blank:

òòD ( [3
F([[partialdiff]]^{2}u,[[partialdiff]]x^{2}) + 5
F([[partialdiff]]^{2}u,[[partialdiff]]y^{2) }] v^{ }-
u [3 F([[partialdiff]]^{2}v,[[partialdiff]]x^{2}) + 5
F([[partialdiff]]^{2}v,[[partialdiff]]y^{2) }] ) =
ò[[partialdiff]]D [ blank ].

(4) A. Suppose that A is a matrix, B is a vector, and C is a number. Let

L[u] = ∇·A∇u + B·∇u + Cu

and M[v] = ∇·A∇v - B·∇v + Cv.

Find F such that L[u] v - u M[v] = ∇·F.

Section 3.5 ADJOINTS OF DIFFERENTIAL OPERATORS IN TWO DIMENSIONS

In the chapters that came before, an understanding of the adjoints of linear functions was critical in determining when certain linear equations would have a solution, and even in computing the solutions in some cases. It is, then, no surprise that we shall be interested in the computation of adjoints in this setting, too.

In the general inner product space, the adjoint of the linear operator L is defined in terms of the dot product:

< L(u) , v > = < u , L*(v) >.

For ordinary differential equations boundary value problems, the dot product
came with the problem in a sense: it was an integral over an appropriate
interval on which the functions were defined. For partial differential
equations with boundary conditions, the dot product will be over the
*region* of interest:

< f , g > = ∫∫_D f(x,y) g(x,y) dx dy.

For ordinary differential equations, integration-by-parts played a key role in
deciding the appropriate boundary conditions to impose so that the formal
adjoint would be the __real__ adjoint. Now, the Green's theorems provide
the appropriate calculus.

In fact, Green's Second Identity can be used to compute adjoints of the Laplacian. We will see that the Divergence Theorem is useful for the more general second-order, differential operators.

EXAMPLE: Consider the problem

∇²u = f,

u(x,0) = u(x,b) = 0,

∂u/∂x(0,y) = ∂u/∂x(a,y) = 0

on the rectangle [0,a] x [0,b] in the plane. This problem invites consideration of the operator L defined on the manifold M given below:

L(u) = ∇²u

and M = {u: u(x,0) = u(x,b) = 0, ∂u/∂x(0,y) = ∂u/∂x(a,y) = 0}.

The Second Identity presents a natural setting in which the operator L is self-adjoint in the sense that L = L*. Let u and v be in M:

∫∫_D [v L(u) - L(v) u] dA = ∫_∂D [v ∂u/∂η - (∂v/∂η) u] ds = 0.

This last equality follows because, on each piece of the boundary, either u and v vanish or ∂u/∂η and ∂v/∂η do. ∎

In order to discuss adjoints of more general second order partial
differential equations, let A, B, C, and c be scalar valued functions. Let
**b** be a vector valued function. Let L(u) be given by

L(u) = A ∂²u/∂x² + 2B ∂²u/∂x∂y + C ∂²u/∂y² + < **b** , ∇u > + cu.

DEFINITION: The __FORMAL__ ADJOINT is given by

L*(v) = ∂²(Av)/∂x² + 2 ∂²(Bv)/∂x∂y + ∂²(Cv)/∂y² - ∇·(**b**v) + cv.

Take A, B, and C to be constant. What would it mean to say that L is formally self-adjoint, that is, L = L* (formally)? Then < **b** , ∇u > must equal -∇·(**b**u) = < -**b** , ∇u > - u ∇·**b**. Thus, 2 < **b** , ∇u > = -u (∇·**b**) for all u. Since this must hold for all u, it must hold in the special case u ≡ 1, which implies that ∇·**b** = 0. Taking u(x,y) to be x, or to be y, gives that each of **b**1 and **b**2 is 0. Hence, if L is formally self-adjoint, then **b** = **0**.

EXAMPLES:

1. Let L[u] = 3 ∂²u/∂x² + 5 ∂²u/∂y². The formal adjoint of L is L. Note that

L[u] v - u L[v] = ∂/∂x ( 3[ (∂u/∂x) v - u (∂v/∂x) ] ) + ∂/∂y ( 5[ (∂u/∂y) v - u (∂v/∂y) ] )

= ∇·( 3[ (∂u/∂x) v - u (∂v/∂x) ] , 5[ (∂u/∂y) v - u (∂v/∂y) ] ).

2. Let L[u] = 3 ∂²u/∂x² + 5 ∂²u/∂y² + 7 ∂u/∂x + 11 ∂u/∂y + 13u. The formal adjoint of L is

L*[v] = 3 ∂²v/∂x² + 5 ∂²v/∂y² - 7 ∂v/∂x - 11 ∂v/∂y + 13v. Note that

L[u] v - u L*[v] = ∂/∂x ( 3[ (∂u/∂x) v - u (∂v/∂x) ] + 7uv ) + ∂/∂y ( 5[ (∂u/∂y) v - u (∂v/∂y) ] + 11uv )

= ∇·( 3[ (∂u/∂x) v - u (∂v/∂x) ] + 7uv , 5[ (∂u/∂y) v - u (∂v/∂y) ] + 11uv ).

3. Let L[u] = e^x ∂²u/∂x² + 5 ∂u/∂y + 3u. The formal adjoint of L is L* given by

L*[v] = e^x ∂²v/∂x² + 2e^x ∂v/∂x - 5 ∂v/∂y + (e^x + 3) v. Note that

L[u] v - u L*[v] = ∂/∂x ( e^x v (∂u/∂x) - u ∂(e^x v)/∂x ) + ∂/∂y (5uv)

= ∇·( e^x v (∂u/∂x) - u ∂(e^x v)/∂x , 5uv ).
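Identities like the one in Example 3 are easy to mis-copy, so a symbolic check is worthwhile. The sketch below (assuming SymPy is available) confirms that L[u]v - uL*[v] really is the divergence of the displayed field:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)
v = sp.Function('v')(x, y)

# Example 3 operator and its formal adjoint
L  = sp.exp(x)*sp.diff(u, x, 2) + 5*sp.diff(u, y) + 3*u
Ls = sp.exp(x)*sp.diff(v, x, 2) + 2*sp.exp(x)*sp.diff(v, x) \
     - 5*sp.diff(v, y) + (sp.exp(x) + 3)*v

# The field F with L[u] v - u L*[v] = div F
Fx = sp.exp(x)*v*sp.diff(u, x) - u*sp.diff(sp.exp(x)*v, x)
Fy = 5*u*v

assert sp.simplify(L*v - u*Ls - (sp.diff(Fx, x) + sp.diff(Fy, y))) == 0
```

The same pattern checks Examples 1 and 2: write down L, L*, and the candidate field, and ask SymPy whether the difference simplifies to zero.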

THE CONSTRUCTION OF M*

We now come to the important part of the construction of the __real__
adjoint: how to construct the appropriate adjoint boundary conditions. We are
given L and M; we have discussed how to construct L*. We now construct M*.
To see what is M* in the general case, the divergence theorem is recalled:

∫∫_D ∇·F dx dy = ∫_∂D < F , η > ds.

The hope, then, is to write v L(u) - L*(v) u as ∇·F for some suitably chosen F.

**Theorem**. If L is a second order differential operator and L* is the formal adjoint, then there is F such that v L(u) - L*(v) u = ∇·F.

Here's how to see that. Note that

(v A ∂²u/∂x² + v C ∂²u/∂y²) - (u ∂²(Av)/∂x² + u ∂²(Cv)/∂y²)

= ∂/∂x( vA ∂u/∂x - u ∂(Av)/∂x ) + ∂/∂y( vC ∂u/∂y - u ∂(Cv)/∂y )

= ∇·{ v(A ∂u/∂x, C ∂u/∂y) - u(∂(Av)/∂x, ∂(Cv)/∂y) }.

Also, v < **b** , ∇u > + u ∇·(**b**v)

= v **b**1 ∂u/∂x + v **b**2 ∂u/∂y + u ∂(**b**1 v)/∂x + u ∂(**b**2 v)/∂y

= ∇·(v**b**u).

**COROLLARY**: ∫∫_D [vL(u) - uL*(v)] dA

= ∫∫_D ∇·{ v(A ∂u/∂x, C ∂u/∂y) - u(∂(Av)/∂x, ∂(Cv)/∂y) + v**b**u } dA

= ∫_∂D < v{A ∂u/∂x, C ∂u/∂y} - u{∂(Av)/∂x, ∂(Cv)/∂y} + v**b**u , η > ds.

**EXAMPLES**.

1(cont). Let L[u] be as in example 1 above for {x,y} in the rectangle D = [0,1]x[0,1] and M = {u: u=0 on [[partialdiff]]D}. Then, according to Example 1, L = L* and

∫∫_D [vL(u) - uL*(v)] dA =

∫_∂D < { 3[ (∂u/∂x) v - u (∂v/∂x) ] , 5[ (∂u/∂y) v - u (∂v/∂y) ] } , η > ds.

Recalling that the unit outward normals to the faces of the rectangle D are {0,-1}, {1,0}, {0,1}, and {-1,0}, and that u = 0 on ∂D, we have that

∫∫_D [vL(u) - uL*(v)] dA

= - ∫_0^1 5 (∂u/∂y) v dx + 3 ∫_0^1 (∂u/∂x) v dy

+ 5 ∫_1^0 (∂u/∂y) v |dx| - 3 ∫_1^0 (∂u/∂x) v |dy|.

In order for this integral to be zero for all u in M, it must be that v = 0 on [[partialdiff]]D. And M = M*. Hence, {L, M} is (really) self adjoint.

2(cont). Let L[u] be as in Example 2 above and M = {u: u(x,0) = u(0,y) = 0, and (∂u/∂x)(1,y) = (∂u/∂y)(x,1) = 0, 0 < x < 1, 0 < y < 1}. Using the results from above,

∫∫_D [L(u)v - uL*(v)] dA =

- ∫_0^1 5 (∂u/∂y) v dx + ∫_0^1 [-3u (∂v/∂x) + 7uv] dy

+ ∫_1^0 [-5u (∂v/∂y) + 11uv] |dx| - ∫_1^0 3 (∂u/∂x) v |dy|.

It follows that M* = {v: v(x,0) = 0, v(1,y) = (3/7)(∂v/∂x)(1,y), v(x,1) = (5/11)(∂v/∂y)(x,1), and v(0,y) = 0}.

3(cont). Let L[u] be as in Example 3 above for {x,y} in the first quadrant. Let M = {u: u = 0 on ∂D}. Then

∫∫_D [vL(u) - uL*(v)] dA

= ∫_∂D < { e^x v (∂u/∂x) - u ∂(e^x v)/∂x , 5uv } , η > ds = - ∫_0^∞ v (∂u/∂x) dy.

Thus, M* = {v: v(0,y) = 0 for y > 0}.

**EXERCISES**

1. Suppose that L(u) = ∂²u/∂x² - ∂²u/∂t², restricted to M = {u: u(0,t) = u(a,t) = 0 and u(x,0) = ∂u/∂t(x,0) = 0}. Classify L as parabolic, hyperbolic, or elliptic. Find L*. Find F such that v L[u] - L*[v] u = ∇·F. What is M*?

2. Suppose that L[u] = ∂²u/∂x² + ∂²u/∂y² - ∂²u/∂z², restricted to

M = {u: u(0,y,z) = 0, u(1,y,z) = 0, u(x,y,0) = u(x,y,1), ∂u/∂z(x,y,0) = ∂u/∂z(x,y,1),

∂u/∂y(x,0,z) = 3 u(x,0,z), ∂u/∂y(x,1,z) = 5 u(x,1,z)}.

Classify L as parabolic, hyperbolic, or elliptic. Give L*. Find F such that v L[u] - L*[v] u = ∇·F. What is M*?

SECTION 3.6 THE CONSTRUCTION OF THE GREEN'S FUNCTION FOR THE TWO DIMENSIONAL OPERATOR WITH DIRICHLET BOUNDARY CONDITIONS

We now will construct a Green's function for the following problem: Find u such that

∇²u = f on D, u = g on ∂D.

In order to simplify notation, we will let P or Q represent points in the plane. For example, P might represent {x,y} and Q represent {a,b}. We indicate that ∇²G(P,Q) is ∂²G/∂x² + ∂²G/∂y², instead of partials with respect to a and b, by writing ∇_P²G(P,Q).

The function G is to be constructed to have these properties:

∇_P² G(P,Q) = δ(P,Q) and G(·,Q) = 0 on ∂D.

Having such a G, the following applications of the Green's identities show that we can determine the solution to the partial differential equation:

∫∫ G(P,Q) f(P) dA_P = ∫∫ G(P,Q) ∇_P²u(P) dA_P

= ∫∫ ∇_P·[ G(P,Q) ∇_P u - (∇_P G(P,Q)) u ] dA_P + ∫∫ ∇_P²G(P,Q) u(P) dA_P

= - ∫ (∂G/∂η)(P,Q) u(P) ds_P + ∫∫ δ(P,Q) u(P) dA_P

= - ∫ (∂G/∂η)(P,Q) g(P) ds_P + ∫∫ δ(P,Q) u(P) dA_P.

Thus, having such a G and knowing f and g, we have a formula for u which provides a solution to the problem:

u(Q) = ∫∫ G(P,Q) f(P) dA_P + ∫ (∂G/∂η)(P,Q) g(P) ds_P.

How is such a G constructed? We will do it in two pieces. We construct G =
F + R where F is the *fundamental* or *singular* part and
satisfies

∇²F = δ on D and

F(·,Q) is independent of θ (in polar coordinates). (*)

R is the *regular* part and satisfies

∇²R = 0 on D,

R = - F on ∂D. (**)

THE FUNDAMENTAL PART

We begin by constructing F. Recall the formula for ∇² in polar coordinates:

∇²u = (1/r) ∂(r ∂u/∂r)/∂r + (1/r²) ∂²u/∂θ².

In seeking F such that ∇²F = δ, we recall that, in the sense of distributions, δ({x,y},{a,b}) = 0 if {x,y} ≠ {a,b}. Also, δ is radially symmetric. Thus,

0 = ∇²F = (1/r) ∂(r ∂F/∂r)/∂r + (1/r²) ∂²F/∂θ²

= (1/r) ∂(r ∂F/∂r)/∂r.

This last equality holds because F is independent of θ. Thus r ∂F/∂r is constant and F(r) = A ln(r) + B for some A and B. It remains to find A and B. To this point we have not used information about F at {a,b}, only at {x,y} different from {a,b}. Information about F at {a,b} comes through the integral. Integrate over any disk centered at {a,b} with radius c > 0:

1 = ∫∫ ∇²F dA = ∫∫ 1·∇²F dA = ∫∫ (∇²1) F dA + ∫ [ 1·∂F/∂η - (∂1/∂η) F ] ds

= ∫ A/r ds = 2πA.

Thus, A = 1/2π and B is undetermined.


Taking B = 0 gives

F({x,y},{a,b}) = ln[ (x-a)² + (y-b)² ] / 4π.
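As a sanity check (a sketch assuming SymPy is available), the fundamental part is harmonic away from the source point {a,b}:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b', real=True)

# Fundamental part F = ln((x-a)^2 + (y-b)^2) / (4 pi)
F = sp.log((x - a)**2 + (y - b)**2) / (4*sp.pi)

lap_F = sp.diff(F, x, 2) + sp.diff(F, y, 2)
assert sp.simplify(lap_F) == 0   # harmonic wherever {x,y} != {a,b}
```

The delta function at {a,b} is invisible to this pointwise computation; it shows up only through the integral normalization 2πA = 1 above.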

Now we seek R.

THE REGULAR PART: METHODS OF CONFORMAL MAPPING

Recall this complex arithmetic:

z = x + iy = √(x² + y²) exp( i arctan(y/x) ) = |z| exp( i arg(z) ),

ln(z) = ln(|z|) + i arg(z), and Re ln(z) = ln(|z|).

Let D be a simply connected region different from the entire plane and let α be a point of D. Let w_α(z) be an analytic function from D into the unit disk D₁(0) for which w_α′(z) ≠ 0 for any z and for which w_α(α) = 0. Then

w_α(z) = (z - α) P(z)

where P is analytic on D and never zero. Define

G(z,α) = ln( |w_α(z)| )/2π = (1/2π) [ ln( |z - α| ) + Re ln( P(z) ) ].

We show that this G satisfies (*) and (**) above. In fact, F((x,y),(a,b)) = ln( √[(x-a)² + (y-b)²] )/2π. To see that R = Re ln(P(z))/2π satisfies ∇²R = 0, recall that f = u + iv is analytic if and only if the Cauchy-Riemann equations hold: ∂u/∂x = ∂v/∂y and ∂u/∂y = -∂v/∂x. Thus ∂²u/∂x² + ∂²u/∂y² = 0. For this reason R(z) = Re ln(P(z))/2π satisfies ∇²R = 0. Also, |w_α(z)| = 1 on ∂D, so that G(z,α) = 0 for z on ∂D.

**EXAMPLE**: A function that maps the disk with radius R onto the unit disk
taking [[alpha]] to zero is

w_α(z) = R (z - α)/(R² - α* z)

(Reference: Churchill, p. 83, 3rd edition.) Thus, G(z,α) = ln(|w_α(z)|)/2π. In order to do the calculus of ∇²u, we change G from being a function from C×C to a function from R² × R² into R. Typically, u for the disk is given in polar coordinates: u(r,θ). To make this change, let z = r e^{iθ} and α = ρ e^{iφ}. We change |w_α(z)| to polar coordinates. First, the numerator:

|R (z - α)|² = |R(r e^{iθ} - ρ e^{iφ})|² = |Rρ - Rr e^{i(φ-θ)}|²

= |Rρ - Rr( cos(φ-θ) + i sin(φ-θ) )|²

= [Rρ - Rr cos(φ-θ)]² + R²r² sin²(φ-θ)

= R²ρ² - 2R²rρ cos(φ-θ) + R²r²

= R²( r² + ρ² - 2rρ cos(θ-φ) ).

^{}

The denominator is

|R² - α* z|² = |R² - rρ e^{i(θ-φ)}|²

= |R² - rρ( cos(θ-φ) + i sin(θ-φ) )|²

= ( R² - rρ cos(θ-φ) )² + r²ρ² sin²(θ-φ)

= R⁴ + r²ρ² - 2R²rρ cos(θ-φ).

Thus,

|w_α(z)|² = R²( r² + ρ² - 2rρ cos(θ-φ) ) / ( ρ²[ (R²/ρ)² + r² - 2r(R²/ρ) cos(θ-φ) ] )

and

G(r,θ,ρ,φ) = ln(|w_α(z)|)/2π

= (1/2π) ln(R/ρ) + (1/4π) ln( r² + ρ² - 2rρ cos(θ-φ) ) - (1/4π) ln( (R²/ρ)² + r² - 2r(R²/ρ) cos(θ-φ) ).

We note the values on the boundary in this last formulation:

G(R, θ, ρ, φ) = ln(1)/2π = 0,

and

∂G/∂η = [ ∂G/∂r ]_{r=R} = [ 0 + (1/4π) (2r - 2ρ cos(θ-φ)) / (r² + ρ² - 2rρ cos(θ-φ))

- (1/4π) (2r - 2(R²/ρ) cos(θ-φ)) / ((R²/ρ)² + r² - 2r(R²/ρ) cos(θ-φ)) ]_{r=R}

= (1/4π) (2R - 2ρ cos(θ-φ)) / (R² + ρ² - 2Rρ cos(θ-φ))

- (1/4π) (2R - 2(R²/ρ) cos(θ-φ)) / (R⁴/ρ² + R² - 2(R³/ρ) cos(θ-φ))

= (1/4π) (2R - 2ρ cos(θ-φ) - 2ρ²/R + 2ρ cos(θ-φ)) / (R² + ρ² - 2Rρ cos(θ-φ))

= (1/2π) (R² - ρ²) / [ R( R² + ρ² - 2Rρ cos(θ-φ) ) ].

This last equality gives the familiar Poisson formula:

The solution for

∇²u = 0 on D_R(0),

u = g on ∂D,

is

u(ρ,φ) = (1/2π) ∫_0^{2π} (R² - ρ²)/[ R² + ρ² - 2Rρ cos(θ-φ) ] g(R,θ) dθ.

This follows from the equation for ∂G/∂η by remembering that

ds = R dθ.
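The Poisson formula can be tested numerically. With boundary data g(R,θ) = sin θ, the known harmonic extension is u = (ρ/R) sin φ; the sketch below (an illustration assuming NumPy is available; R = 2 and the midpoint Riemann sum are arbitrary choices) recovers it:

```python
import numpy as np

def poisson_disk(g, R, rho, phi, n=4000):
    """Evaluate u(rho, phi) by the Poisson integral on the disk of radius R."""
    theta = (np.arange(n) + 0.5) * 2*np.pi / n     # midpoint rule on [0, 2pi]
    kernel = (R**2 - rho**2) / (R**2 + rho**2 - 2*R*rho*np.cos(theta - phi))
    return kernel.dot(g(theta)) * (2*np.pi/n) / (2*np.pi)

R = 2.0
rho, phi = 1.3, 0.7
u = poisson_disk(np.sin, R, rho, phi)              # g(R, theta) = sin(theta)
exact = (rho/R) * np.sin(phi)                      # harmonic: u = (rho/R) sin(phi)
assert abs(u - exact) < 1e-8
```

The midpoint rule converges very quickly here because the integrand is smooth and periodic.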

**EXAMPLE**: We construct the Green's function for the Dirichlet problem in the upper half plane. That is, we give a formula for u such that ∇²u = f(x,y) if y > 0 and u(x,0) = g(x) for all real x.

Consider w_α(z) given by (z - α)/(z - α*) for z and α in the upper half plane. Two things must be established: |w_α(z)| < 1 and, if x is in R, then |w_α(x)| = 1. Suppose that z = x + iy and α = a + ib. Then

|z - α|² / |z - α*|² = [ (x-a)² + (y-b)² ] / [ (x-a)² + (y+b)² ] = [ x² + y² + a² + b² - 2(ax + yb) ] / [ x² + y² + a² + b² - 2(ax - yb) ].

Note that since y > 0 and b > 0, we have -2yb < 2yb, so |z - α| < |z - α*|; and, for real x, |x - (a + bi)|² = |x - (a - bi)|².

Hence, G(z,α) = ln(|w_α(z)|)/2π

= [ ln(|z - α|) - ln(|z - α*|) ]/2π.

Or, in R² coordinates,

G(x,y,a,b) = [ ln( (x-a)² + (y-b)² ) - ln( (x-a)² + (y+b)² ) ]/4π.

EXAMPLES OF CONFORMAL MAPPINGS

Some examples of conformal mappings from complex variables can be found in the appendix to Churchill's complex variables text.

**EXERCISE ** Give a conformal mapping from the fourth quadrant onto the
unit disk.

Ans. Give a sequence of maps and take the composite to get

(z² + i)/(z² - i).

Here are two general comments about finding the Green's function for a simple region D.

(a) if β is in the complex unit disk, then f(u) = (u - β)/(1 - β* u) maps the unit disk onto the unit disk taking β to 0.

(b) if f takes the region D onto the unit disk, then

( f(z) - f(α) ) / ( 1 - f(α)* f(z) )

takes D onto the unit disk and α in D to zero.

Example. With f(z) = (z² + i)/(z² - i), the Green's function for the fourth quadrant is given by

w_α(z) = [ (z²+i)/(z²-i) - (α²+i)/(α²-i) ] / [ 1 - ((z²+i)/(z²-i)) ((α²+i)/(α²-i))* ]

and G(z,α) = ln(|w_α(z)|)/2π = (1/2π) [ ln(|f(z) - f(α)|) - ln(|1 - f(α)* f(z)|) ].

**EXERCISE**. Find w[[alpha]](z) for the disk with center a and radius b.

**EXAMPLE**: We find u such that ∇²u = 0 in the right half plane and u(0,y) = 1 if |y| < 1, with u(0,y) = 0 if |y| > 1.

Note that all the parts of the problem are here. There is the domain of the functions, which is the right half plane and which is denoted by D. There is the linear operator L, which is the Laplacian: L(u) = ∇²u. There is the linear equation we wish to solve: L(u) = 0. And there are the boundary conditions, which we denote as u(0,y) = g(y), where g(y) = 0 or 1 according to whether |y| > 1 or |y| < 1. We lay out how to get u in "five easy steps".

Step 1. Get a one-to-one map from D to the unit disk:

f(z) = (1-z)/(1+z).

Step 2. Get a one-to-one map from D onto the unit disk which takes the point α in the right half plane to 0:

w_α(z) = [ (1-z)/(1+z) - (1-α)/(1+α) ] / [ 1 - ((1-z)/(1+z)) ((1-α*)/(1+α*)) ] = [ (α - z)/(α* + z) ] [ (1+α*)/(1+α) ].

Step 3. Get G(z,α) = ln(|w_α(z)|)/2π = [ ln(|z - α|) - ln(|z + α*|) ]/2π. Note that G is broken into the fundamental and regular parts.

Step 4. We change G to rectangular coordinates:

G({x,y},{a,b}) = [ ln( (x-a)² + (y-b)² ) - ln( (x+a)² + (y-b)² ) ]/4π.

Step 5. Compute ∂G/∂η on the boundary: ∂G/∂η = - ∂G/∂x |_{x=0}

= (1/π) a/[ a² + (y-b)² ].

Finally we are ready to give the formula for u:

u(a,b) = ∫∫_D G(x,y,a,b)·0 dx dy + ∫_∂D (∂G/∂η)(x,y,a,b) g(y) ds

= (1/π) ∫_{-1}^{1} a/[ a² + (y-b)² ] dy.

Remark: The student may be uncomfortable with the direction of this last integral, thinking that the direction should be taken so that D is on the left. That's partially correct. The student should also remember that ds = -dy so that the sign of the integral is correct as stated.
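The last integral can be evaluated in closed form: since the antiderivative of a/(a² + (y-b)²) is arctan((y-b)/a), we get u(a,b) = (1/π)[arctan((1-b)/a) + arctan((1+b)/a)]. A quick numeric sketch (assuming NumPy and SciPy are available) checks this and the boundary behavior:

```python
import numpy as np
from scipy.integrate import quad

def u(a, b):
    """u(a,b) = (1/pi) * integral_{-1}^{1} a/(a^2 + (y-b)^2) dy, numerically."""
    val, _ = quad(lambda y: a / (a**2 + (y - b)**2), -1.0, 1.0)
    return val / np.pi

def u_closed(a, b):
    """Antiderivative of a/(a^2 + (y-b)^2) is arctan((y-b)/a)."""
    return (np.arctan((1 - b)/a) + np.arctan((1 + b)/a)) / np.pi

# Quadrature and closed form agree at an interior point:
assert abs(u(0.5, 0.2) - u_closed(0.5, 0.2)) < 1e-10

# As a -> 0+ the solution approaches the boundary data g:
assert abs(u_closed(1e-6, 0.3) - 1.0) < 1e-3   # |b| < 1, where g = 1
assert abs(u_closed(1e-6, 2.0) - 0.0) < 1e-3   # |b| > 1, where g = 0
```

The limits as a → 0⁺ recover the prescribed boundary data, as the representation formula promises.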

Section 3.7 A MAXIMUM PRINCIPLE

At the beginning of calculus, one obtains a first understanding of the importance of the second derivative. The second derivative predicts where the graph of a function is concave up or down. In particular, if u is a function defined on the interval [0,1], and u'' is continuous and not negative, then the curve is concave up. As a consequence of this, the values of u on [0,1] do not exceed the maximum of its values at the end points 0 and 1.

A similar result holds for a function of more variables. The role of the second derivative is played by the Laplacian operator. We assume ∇²u ≥ 0 in a bounded region in R^n and show that u must take on its maximum value on the boundary.

__THEOREM (MAXIMUM PRINCIPLE)__. Let D be a bounded region and u be continuous on the closed set D ∪ ∂D with

∇²u = f on D,

u = g on ∂D.

If f(x,y) ≥ 0, then u assumes its maximum on ∂D.

Here's a way to think of why this is so. First suppose that f > 0. Any function which is continuous on a closed and bounded set in R^n has a maximum value on that set. Since u is continuous on the closed and bounded set D ∪ ∂D, it has a maximum on D ∪ ∂D. Suppose {α,β} is in the interior of D and the maximum of u occurs at that place: max u = u(α,β). Then

(∂u/∂x)(α,β) = 0 = (∂u/∂y)(α,β), and (∂²u/∂x²)(α,β) ≤ 0, (∂²u/∂y²)(α,β) ≤ 0.

This contradicts f > 0 since

(∂²u/∂x²)(α,β) + (∂²u/∂y²)(α,β) = f(α,β).

Therefore, {α,β} must be on the boundary of D.

Now assume f ≥ 0. We will show that it remains true that u must have its maximum on ∂D. For, suppose u satisfies

∇²u = f on D, with u = g on ∂D.

Let

V_ε(x,y) = u(x,y) + ε (x² + y²).

Then

∇²V_ε = ∇²( u + ε (x² + y²) ) = f + 4ε > 0 on D.

By the previous paragraph, V_ε has a maximum on ∂D. Let this maximum occur at {c_ε, d_ε}, and let u(α,β) = max u. We have

u(α,β) ≤ u(α,β) + ε(α² + β²) = V_ε(α,β) ≤ V_ε(c_ε, d_ε) = u(c_ε, d_ε) + ε(c_ε² + d_ε²).

Also, because D ∪ ∂D is bounded, lim_{ε→0} ε(c_ε² + d_ε²) = 0, so that

u(α,β) = max u ≤ u( lim(c_ε, d_ε) ) ≤ max u.

Since ∂D is closed, lim{c_ε, d_ε} will be in ∂D, and u( lim{c_ε, d_ε} ) = max u; we have the result.

EXAMPLE. Consider

∇²u = 0 on the unit disk in the plane,

u(1,θ) = sin(θ) on the unit circle.

Since the maximum value of u occurs on the boundary and u = sin(θ) on the boundary of D, we have -1 ≤ u(x,y) ≤ +1. (Actually, u(r,θ) = r sin(θ).)

EXAMPLE. The proof of the maximum principle just given used that D is bounded. The result may fail to be true if D is not bounded, as this example shows. Let D be the strip D = { {x,y}: 0 < x < π and y > 0 }. Let g(x,y) = sin(x) on the boundary of D. A solution of ∇²u = 0 on D with u = g on ∂D is u(x,y) = sin(x) e^y, and this function is unbounded in D, certainly not taking on its maximum value on the boundary.
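A short numeric check (a sketch assuming NumPy is available) makes the failure concrete: the boundary values of u(x,y) = sin(x)e^y never exceed 1, yet the interior values are unbounded:

```python
import numpy as np

u = lambda x, y: np.sin(x) * np.exp(y)   # the solution on the strip

# On the boundary: u = 0 on the sides x = 0 and x = pi, u = sin(x) on y = 0,
# so the boundary values never exceed 1.
x = np.linspace(0.0, np.pi, 1001)
assert np.max(np.abs(u(x, 0.0))) <= 1.0
assert abs(u(0.0, 5.0)) < 1e-9 and abs(u(np.pi, 5.0)) < 1e-9

# Yet interior values are unbounded: no maximum principle on this strip.
assert u(np.pi/2, 10.0) > 1e4            # e^10 is about 2.2e4
```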

Several questions now come to mind. Is there a minimum principle in case f ≤ 0? Yes; see Exercise 1 below. What can be done in case f is positive at some places in D and negative at others? The next theorem addresses this question and bounds the maximum value of |u| in terms of f, g, and D. This is a useful result for, in addition to saying something about the maximum value of |u|, we will see that it provides a chance to see how u changes with small changes in f, g, or D. Also, this theorem is used in establishing that solutions for this type of problem are unique.

__THEOREM (Continuous Dependence on the Data)__. Let D be a bounded region and u be continuous on the closed set D ∪ ∂D. There is a number K such that if f and g are continuous on D and

∇²u = f on D,

u = g on ∂D,

then |u| ≤ max |g| + K max |f|.

Here's a way to see this:

∇²( ±u + (x² + y²) max |f|/4 ) = ±∇²u + max |f| = ±f + max |f| ≥ 0.

Therefore, by the maximum principle, ±u + (x² + y²) max |f|/4 assumes its maximum on the boundary of D. Since D is bounded, let K ≥ x² + y² for {x,y} in ∂D; then

|u(x,y)| ≤ max( ±u + (x² + y²) max |f|/4 ) ≤ max |g| + K max |f|.

It follows that the solution depends continuously on the data f and g. [[florin]]

The ideas in this section have been laying out what we would always hope for in a PDE problem: we would hope that the problem has a solution, that the solution changes little if we make small errors in the data, and that the problem only has one solution. (Being able to find that solution is important, of course.) To this point we have not addressed the question of uniqueness of solutions.

**THEOREM** (UNIQUENESS OF SOLUTIONS). If each of u and v is a continuous function on the bounded region D with a piecewise smooth boundary,

∇²u = f on D and ∇²v = f on D,

with u(x,y) = g(x,y) on ∂D and v(x,y) = g(x,y) on ∂D,

then u = v.

Here's why. Let W = u - v. Note the equations that W must satisfy:

0 = ∇²W = ∂²W/∂x² + ∂²W/∂y², with W = 0 on ∂D.

By Green's First Identity, 0 = ∫∫_D W·0 dA

= ∫∫_D W ∇²W dA = ∫_∂D < W∇W , η > ds - ∫∫_D < ∇W , ∇W > dA.

Since W = 0 on ∂D, the boundary term vanishes, so ∫∫_D < ∇W , ∇W > dA = 0. Thus (∂W/∂x)² + (∂W/∂y)² = 0, and ∂W/∂x = 0 = ∂W/∂y. It follows that W is constant. Since it is zero on the boundary of D, it must be zero everywhere. Hence, u - v = W = 0. ∎

These three ideas of existence, uniqueness, and continuous dependence on the data are desirable properties for a differential equation to have. One says that a partial differential equation is *WELL-POSED* if it has these three properties.

A problem is __well-posed__ if

(a) it has a solution,

(b) the solution is unique, and

(c) the solution depends continuously on the initial data.

The PDE in this section is called a Dirichlet problem. We have shown that it is well posed. That is, if D is a bounded region with a piecewise smooth boundary, then the Dirichlet problem, ∇²u = f on D with u = g on ∂D, is well posed.

We have seen examples of differential equations which did not have a solution and for which the solution was not unique. Here is an example of a partial differential equation where the solution does not depend continuously on the data.

EXAMPLE. The problem

∇²u(x,y) = 0 for y > 0 and x in R,

u(x,0) = 0 for x in R,

∂u/∂y(x,0) = sin(nx)/n for x in R

has a solution u_n(x,y) = sin(nx) sinh(ny)/n².

Note that sup_x |sin(nx) sinh(ny)/n²| → ∞ as n → ∞ for each y > 0, while the boundary data sin(nx)/n tends uniformly to 0. But

∇²u(x,y) = 0 for y > 0 and x in R,

u(x,0) = 0 for x in R,

∂u/∂y(x,0) = 0 for x in R

has u(x,y) = 0 for a solution. ∎
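A numeric sketch of this Hadamard-type example (assuming NumPy is available; the heights and values of n are arbitrary choices): as n grows, the boundary data sin(nx)/n shrinks uniformly while the solution sin(nx) sinh(ny)/n² blows up at any fixed height y > 0:

```python
import numpy as np

y = 1.0                                    # any fixed height y > 0
ns = (1, 10, 50)

data = {n: 1.0/n for n in ns}              # sup over x of |sin(nx)/n|
soln = {n: np.sinh(n*y)/n**2 for n in ns}  # sup over x of |sin(nx) sinh(ny)/n^2|

assert data[50] < data[10] < data[1]       # the data tends uniformly to 0 ...
assert soln[1] < 2 and soln[50] > 1e17     # ... while the solutions blow up
```

Arbitrarily small perturbations of the data produce arbitrarily large changes in the solution, so this Cauchy problem for the Laplacian is not well posed.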

**EXERCISE:**

1. Suppose u is as in the Maximum Principle Theorem and that f ≤ 0.

Show that the minimum value of u occurs on ∂D. (Hint: consider v = -u.)

2. Suppose that |g1(θ) - g2(θ)| < .01 for 0 ≤ θ ≤ 2π and that u1 and u2 are continuous on ∂D and satisfy

∇²u1 = 0 on D and ∇²u2 = 0 on D,

with u1 = g1 on ∂D and u2 = g2 on ∂D.

Show that

max |u1(r,θ) - u2(r,θ)| < .01 for 0 < r < 1 and 0 ≤ θ ≤ 2π.

3. Suppose that u(x,y) = e^{-y} sin(x) for y ≥ 0 and all real x.

a. Show that ∇²u = 0 and u(x,0) = sin(x).

b. If possible give { [[alpha]], [[beta]] } such that u([[alpha]],[[beta]]) is minimum.

c. If possible give { [[alpha]], [[beta]] } such that u([[alpha]],[[beta]]) is maximum.

d. Show that u(x,y) = e^{y} sin(x) solves the same equation.

e. Explain why this does not contradict the Uniqueness Theorem.

4. Let u(r,θ) = r sin(θ) for 0 ≤ r ≤ 1 and 0 ≤ θ ≤ 2π.

a. Show that ∇²u = 0 and u(1,θ) = sin(θ).

b. If possible give { [[alpha]], [[beta]] } such that u([[alpha]],[[beta]]) is minimum.

c. If possible give { [[alpha]], [[beta]] } such that u([[alpha]],[[beta]]) is maximum.

d. Show that if u(r,θ) = (1/r) sin(θ), then ∇²u = 0.

e. Explain why this does not contradict the Uniqueness Theorem.

Section 3.8: AN EXAMPLE: THE SHAPE OF A DRUM

We are now in a position to determine the equation which describes the shape of a drum. Let D be a region in the plane and U(x,y) be the height of a membrane with a prescribed boundary. That is, we assume that the values of U are known on the boundary of D. If we take g to be that function which describes the values of U on the boundary of D, then we have assumed that U(x,y) = g(x,y) for {x,y} on [[partialdiff]]D. For the interior of D, we assume that the potential energy of the membrane is proportioned to the surface area

E(U) = K òòD sqrt( 1 + ([[partialdiff]]U/[[partialdiff]]x)^{2} + ([[partialdiff]]U/[[partialdiff]]y)^{2} ) dA.

The question is, how can U be chosen to minimize E? Let U be a surface that minimizes energy. Let [[Phi]] be any smooth function with [[Phi]](x,y) = 0 on [[partialdiff]]D. Then, a possible shape is U + [[epsilon]][[Phi]], where [[epsilon]] is any real number. The potential energy of this surface changes with [[epsilon]] and is given as a function of [[epsilon]] by the equation

E(U+[[epsilon]][[Phi]]) = K òòD sqrt( 1 + ([[partialdiff]](U+[[epsilon]][[Phi]])/[[partialdiff]]x)^{2} + ([[partialdiff]](U+[[epsilon]][[Phi]])/[[partialdiff]]y)^{2} ) dA.

By hypothesis, E(U) <= E(U + [[epsilon]][[Phi]]) for every [[epsilon]], so [[epsilon]] = 0 minimizes this function of [[epsilon]]. That is,

[[partialdiff]]E/[[partialdiff]][[epsilon]] |[[epsilon]]=0 = 0.

Use the approximation sqrt(1+z) ~ 1 + z/2. Then

E(U) ~ K òòD [ 1 +
(([[partialdiff]]U/[[partialdiff]]x)^{2} +
([[partialdiff]]U/[[partialdiff]]y)^{2})/2 ] dA

and

E(U+ [[epsilon]][[Phi]]) ~ K òòD [ 1 +
(([[partialdiff]](U+[[epsilon]][[Phi]])/[[partialdiff]]x)^{2} +
([[partialdiff]](U+[[epsilon]][[Phi]])/[[partialdiff]]y)^{2})/2 ] dA.

Now, compute dE/d[[epsilon]] and evaluate at [[epsilon]]=0. Since E(U) is minimum, this derivative should be zero.

0 = dE/d[[epsilon]]|[[epsilon]]=0 = K/2 òòD[ 2 [[partialdiff]]U/[[partialdiff]]x [[partialdiff]][[Phi]]/[[partialdiff]]x + 2 [[partialdiff]]U/[[partialdiff]]y [[partialdiff]][[Phi]]/[[partialdiff]]y ] dA

= K òòD <--U, --[[Phi]] > dA.

Use Green's First Identity to get

dE/d[[epsilon]]|[[epsilon]]=0 = K ò[[partialdiff]]D [[Phi]] < --U , [[eta]] > ds - K òòD [[Phi]] --^{2}U dA.

Since [[Phi]] = 0 on [[partialdiff]]D, we have 0 = -K òòD [[Phi]] --^{2}U dA for all smooth [[Phi]] with [[Phi]] = 0 on [[partialdiff]]D. Therefore, --^{2}U = 0.
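The variational argument above can be watched numerically. The sketch below is an illustration, not part of the original notes; the grid size and the boundary data g are arbitrary choices. It minimizes the discretized, linearized energy by Gauss-Seidel relaxation (each pointwise update is the exact minimizer of the energy in that one unknown, so the energy never increases) and confirms that the minimizer satisfies the discrete Laplace equation.

```python
n = 20                      # grid spacing h = 1/n; the size is an arbitrary choice
h = 1.0 / n

def g(x, y):                # hypothetical boundary data
    return x * x - y * y

# start from the boundary values of g with zeros in the interior
U = [[g(i * h, j * h) if i in (0, n) or j in (0, n) else 0.0
      for j in range(n + 1)] for i in range(n + 1)]

def energy(U):
    # discrete version of (1/2) òò |--U|^2 ; the factor K and the additive
    # constant term do not affect the minimizer
    return 0.5 * sum((U[i+1][j] - U[i][j]) ** 2 + (U[i][j+1] - U[i][j]) ** 2
                     for i in range(n) for j in range(n))

E_start = energy(U)
for sweep in range(2000):       # Gauss-Seidel relaxation: each update is the
    for i in range(1, n):       # exact minimizer of the energy in that single
        for j in range(1, n):   # unknown, so the energy can only decrease
            U[i][j] = 0.25 * (U[i+1][j] + U[i-1][j] + U[i][j+1] + U[i][j-1])
E_end = energy(U)

# the minimizer satisfies the discrete Laplace equation in the interior
residual = max(abs(U[i+1][j] + U[i-1][j] + U[i][j+1] + U[i][j-1] - 4 * U[i][j])
               for i in range(1, n) for j in range(1, n))
print(E_end <= E_start, residual < 1e-8)
```

Each relaxation sweep replaces an interior value by the average of its four neighbors, which is exactly the discrete analogue of --^{2}U = 0.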

This result on the shape of a drum shows that a drum at rest, not changing in time, will be situated so that it satisfies a Dirichlet problem. It is resting in the steady state. A drum in the transition state - moving from some initial conditions to the steady state - will satisfy

[[partialdiff]]^{2}u/[[partialdiff]]t^{2} = --^{2}u in D

u = g on [[partialdiff]]D

u(0,x,y) = initial distribution on D,

[[partialdiff]]u/[[partialdiff]]t (0,x,y) = initial velocity on D.

Other physical situations which arrange themselves in order to minimize energy will often lead to elliptic problems, too. For example, consider the following heat problem: Suppose the temperature at each point in the upper-right quarter plane has assumed a value u(x,y) such that u has continuous second partial derivatives and the temperature is constantly 0 along the positive x axis and constantly 1 along the positive y axis. What must be the values of u for other {x,y}? Or, what is the shape of the graph of u?

The mathematical formulation of this problem is as follows:

Find u such that

--^{2}u = 0 for x > 0 and y > 0,

with u(x,0) = 0 for x > 0,

u(0,y) = 1 for y > 0.

The answer is u(x,y) = (2/[[pi]]) arctan( y/x ). (Exercise 1 below asks you to check this.)
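As a quick sanity check (a numerical sketch, not part of the notes), one can verify with centered finite differences that this u is harmonic in the quarter plane and approaches the stated boundary values:

```python
import math

def u(x, y):
    return (2.0 / math.pi) * math.atan2(y, x)

def laplacian(f, x, y, h=1e-4):
    # second-order centered approximation to u_xx + u_yy
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / (h * h)

# the discrete Laplacian is nearly zero at a few interior sample points
for (x, y) in [(1.0, 1.0), (0.5, 2.0), (3.0, 0.2)]:
    assert abs(laplacian(u, x, y)) < 1e-5

# boundary behavior: u -> 0 on the positive x axis, u -> 1 on the positive y axis
print(abs(u(1.0, 1e-9)) < 1e-8, abs(u(1e-9, 1.0) - 1.0) < 1e-8)
```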

Because the steady state - time independent - heat equation leads to the same Dirichlet problem, all the results of this and the previous section apply to the heat equation, as well as to the equation for a drum.

We now investigate the uniqueness of solutions for the problem

[[partialdiff]]u/[[partialdiff]]t = --^{2}u on D

with u(x,y,0) = f(x,y) on D,

and u(x,y,t) = g(x,y,t) on [[partialdiff]]D.

This equation represents the transition of temperature from the initial distribution f to the steady state with prescribed boundary values g. The claim is that this problem has no more than one solution.

**THEOREM** (UNIQUENESS OF SOLUTION FOR THE TIME DEPENDENT HEAT EQUATION)

There is only one solution to the equation

[[partialdiff]]u/[[partialdiff]]t = --^{2}u on D

with u(x,y,0) = f(x,y) on D,

and u(x,y,t) = g(x,y,t) on [[partialdiff]]D.

Here's a way to see that solutions are unique. Let u and v be solutions and w = u - v. Then

[[partialdiff]]w/[[partialdiff]]t - --^{2}w = 0 on D

with w(x,y,0) = 0 on D,

and w(x,y,t) = 0 on [[partialdiff]]D.

Consider E(t) = òòD w^{2}(x,y,t) dx dy / 2.

We have E'(t) = òòD w(x,y,t) [[partialdiff]]w/[[partialdiff]]t(x,y,t) dx dy

= òòD w(x,y,t) --^{2}w(x,y,t) dx dy

= ò[[partialdiff]]D w < --w , [[eta]] > ds - òòD ||--w||^{2} dx dy.

This last comes from Green's First Identity. Recall w = 0 on [[partialdiff]]D, so the boundary integral vanishes. Then E'(t) <= 0 and E is not increasing. Also E(t) >= 0 and E(0) = 0. This means E(t) = 0 for all t, so w = 0 and u = v. [[florin]]
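The energy argument can be illustrated numerically. The sketch below is not from the notes; the grid size, time step, and initial data are arbitrary choices. It evolves w by an explicit finite-difference scheme for the heat equation with w = 0 on the boundary of the unit square and confirms that E(t) never increases:

```python
import math

n = 20
h = 1.0 / n
dt = 0.2 * h * h            # explicit scheme; stable since dt < h^2/4

# arbitrary initial data vanishing (exactly) on the boundary of the unit square
w = [[0.0 if i in (0, n) or j in (0, n)
      else math.sin(math.pi * i * h) * math.sin(2 * math.pi * j * h)
      for j in range(n + 1)] for i in range(n + 1)]

def energy(w):
    # discrete version of E(t) = (1/2) òò w^2 dx dy
    return 0.5 * h * h * sum(w[i][j] ** 2
                             for i in range(n + 1) for j in range(n + 1))

energies = [energy(w)]
for step in range(200):
    new = [row[:] for row in w]         # boundary rows/columns stay 0
    for i in range(1, n):
        for j in range(1, n):
            new[i][j] = w[i][j] + dt / (h * h) * (
                w[i+1][j] + w[i-1][j] + w[i][j+1] + w[i][j-1] - 4 * w[i][j])
    w = new
    energies.append(energy(w))

# E is nonincreasing and decays toward 0, as the energy argument predicts
print(all(b <= a for a, b in zip(energies, energies[1:])),
      energies[-1] < 0.01 * energies[0])
```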

**EXERCISE.**

1. Show that if u(x,y) = (2/[[pi]]) arctan( y/x ) for x > 0, y > 0, then

--^{2}u = 0 for x > 0 and y > 0,

with u(x,0) = 0 for x > 0,

u(0,y) = 1 for y > 0.

2. Show that if u(r,[[theta]]) = ln(r)/ln(2) then

--^{2}u = 0 for 1 < r < 2, 0 <= [[theta]] <= 2[[pi]],

with u(1,[[theta]]) = 0 and u(2,[[theta]]) = 1 for 0 <= [[theta]] <= 2[[pi]].

3. Let K(t,x) = exp(-x^{2}/4t) / sqrt(4[[pi]]t).

a. Show that [[partialdiff]]K/[[partialdiff]]t = [[partialdiff]]^{2}K/[[partialdiff]]x^{2} for t > 0 and all real x.

b. Sketch the graph of K(t,x) for t = 1, 1/2, 1/4.

c. Suppose that f is continuous and bounded for all real numbers and that u(t,x) = ò K( t , x-y ) f(y) dy, the integral taken over all real y, for t > 0 and all real x.

Show that [[partialdiff]]u/[[partialdiff]]t = [[partialdiff]]^{2}u/[[partialdiff]]x^{2}. [Using the methods of Laplace transforms, one can show that u(0,x) = f(x). Take MATH 4581 or see page 230 of __AN INTRODUCTION TO PARTIAL DIFFERENTIAL EQUATIONS AND HILBERT SPACE METHODS__ by Karl E. Gustafson.]
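The claim in exercise 3a can at least be checked numerically. A sketch (not from the notes): approximate both sides of the heat equation with finite differences, and also confirm that K integrates to 1 in x for fixed t:

```python
import math

def K(t, x):
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def K_t(t, x, h=1e-5):
    # centered difference in t
    return (K(t + h, x) - K(t - h, x)) / (2.0 * h)

def K_xx(t, x, h=1e-4):
    # centered second difference in x
    return (K(t, x + h) + K(t, x - h) - 2.0 * K(t, x)) / (h * h)

# K_t = K_xx at a few sample points
for (t, x) in [(1.0, 0.5), (0.5, -1.0), (2.0, 3.0)]:
    assert abs(K_t(t, x) - K_xx(t, x)) < 1e-5

# K(t, .) integrates to 1 for fixed t (Riemann sum; the tails are negligible)
m, a = 4000, 20.0
dx = 2 * a / m
total = sum(K(1.0, -a + k * dx) for k in range(m + 1)) * dx
print(abs(total - 1.0) < 1e-6)
```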

4. Let K(x,y) = (1/[[pi]]) y/(x^{2} + y^{2}).

a. Show that --^{2}K = 0 for x > 0, y > 0 and K(x,0) = 0 for x != 0.

b. Sketch the graph of K(x,y) for y = 1, 1/2, 1/4.

c. Suppose that f is continuous and bounded for all real numbers and that u(x,y) = ò K( x-s , y ) f(s) ds, the integral taken over all real s, for x > 0, y > 0.

Show that --^{2}u = 0 for x > 0, y > 0. [We will show that
u(x,0) = f(x). See also page 128 of the book cited above or Chapter 11 of
Churchill & Brown.]

Section 3.9: THE CONSTRUCTION OF GREEN'S FUNCTIONS FOR NEUMANN PROBLEMS

We now consider the Neumann problem

--^{2}u = f on D

[[partialdiff]]u/[[partialdiff]][[eta]] = g on [[partialdiff]]D.

As for the Dirichlet Problem, we use this fundamental identity: For all u and v in the domain of --^{2},

òòD [u --^{2}v - --^{2}u v] dA = òòD --.[u --v - --u v] dA

= ò[[partialdiff]]D [u [[partialdiff]]v/[[partialdiff]][[eta]] - [[partialdiff]]u/[[partialdiff]][[eta]] v ] ds.

Choose u to satisfy the Neumann Problem and v to be 1. Then

- òòD f = - ò[[partialdiff]]D g, that is, òòD f = ò[[partialdiff]]D g.

Thus, if one poses a Neumann problem and this relationship does not hold, the problem cannot have a solution.

In particular, if g = 0 then òòD f = 0. Physically, this is reasonable for if nothing crosses the boundary, then the total input from the forcing function must sum to zero.

More important in the context of this course, one could have predicted this from the Fredholm Alternative theorems. Recall that Ly = f has a solution if and only if < f , z > = 0 for all solutions z of L*z = 0. Then realize that L*z = --^{2}z = 0 on M* = { z : [[partialdiff]]z/[[partialdiff]][[eta]] = 0 on [[partialdiff]]D } has the solution z = 1. Hence, the Fredholm Alternative theorem requires that òòD 1 f = 0.
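A discrete analogue makes the Fredholm condition concrete. In the sketch below (an illustration, not from the notes), the one-dimensional second-difference operator with zero-flux (Neumann) ends annihilates constants, and everything in its range sums to zero, which is exactly the condition < f , 1 > = 0:

```python
n = 8   # small example; the size is an arbitrary choice

def neumann_laplacian(u):
    """Second difference with reflecting (zero normal derivative) ends."""
    v = []
    for i in range(n):
        left = u[i - 1] if i > 0 else u[i]        # ghost point = reflection
        right = u[i + 1] if i < n - 1 else u[i]
        v.append(left - 2 * u[i] + right)
    return v

# L(constant) = 0:  z = 1 solves the adjoint problem
assert all(x == 0 for x in neumann_laplacian([3.0] * n))

# anything in the range of L has zero sum (the second differences telescope),
# so L u = f cannot be solved unless sum(f) = 0
u = [i * i * 0.5 - i for i in range(n)]           # arbitrary test vector
f = neumann_laplacian(u)
print(abs(sum(f)) < 1e-12)
```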

The problem before us is to construct a Green's function for this Neumann problem. In constructing the Green's function for a Dirichlet problem, G was made to satisfy

--^{2}G = d on D

G(P,Q) = 0 for P in [[partialdiff]]D.

Thus, one might guess that the Green's function for the Neumann problem should satisfy

--^{2}N = d on D

[[partialdiff]]N/[[partialdiff]][[eta]] = < --PN , [[eta]] > = 0 on [[partialdiff]]D.

But, this would be a mistake! For, it would imply that

0 = ò[[partialdiff]]D [[partialdiff]]N/[[partialdiff]][[eta]] ds = òòD --^{2}N = òòD d = 1,

as we saw from the above work. This can be repaired by asking that [[partialdiff]]N/[[partialdiff]][[eta]] should be k(p) where k satisfies

ò[[partialdiff]]D k(p) ds = 1.

Here is the important result from conformal mapping. Suppose that f is a conformal mapping from D onto the unit disk. Then, the Green's function for the Neumann problem is

N(z,[[alpha]]) = [ln( |f(z) - f([[alpha]])| ) + ln( |1 - f(z) f([[alpha]])*| )] / 2[[pi]].

**Example**. Give N for the unit disk and verify that
ò[[partialdiff]]N/[[partialdiff]][[eta]] ds = 1.

To do this example, we take f(z) = z. Then

N(z,[[alpha]]) = [ ln( |z-[[alpha]]| ) + ln( |1-z[[alpha]]*| ) ] / 2[[pi]]

= [ ln([[rho]]^{2} + r^{2} - 2[[rho]]r cos([[theta]]-[[phi]])) + ln(1 + [[rho]]^{2}r^{2} - 2[[rho]]r cos([[theta]]-[[phi]])) ] / 4[[pi]].

Here, again, z = re^{i[[theta]]} and [[alpha]] =
[[rho]]e^{i[[phi]]}. The normal derivative is found as follows:

[[partialdiff]]N/[[partialdiff]][[eta]] = [[partialdiff]]N/[[partialdiff]]r|r=1

= (1/4[[pi]]) ( [2r - 2[[rho]]cos([[theta]]-[[phi]])] / [[[rho]]^{2} + r^{2} - 2[[rho]]r cos([[theta]]-[[phi]])] |r=1

+ [2r[[rho]]^{2} - 2[[rho]]cos([[theta]]-[[phi]])] / [1 + [[rho]]^{2}r^{2} - 2[[rho]]r cos([[theta]]-[[phi]])] |r=1 )

= (1/4[[pi]]) ( [2 - 2[[rho]]cos([[theta]]-[[phi]])] / [1 + [[rho]]^{2} - 2[[rho]]cos([[theta]]-[[phi]])]

+ [2[[rho]]^{2} - 2[[rho]]cos([[theta]]-[[phi]])] / [1 + [[rho]]^{2} - 2[[rho]]cos([[theta]]-[[phi]])] )

= 1/2[[pi]].

Thus, ò[[partialdiff]]N/[[partialdiff]][[eta]] ds = ò_{0}^{2[[pi]]} d[[theta]] / 2[[pi]] = 1.
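The computation above can be double-checked numerically (a sketch, not from the notes): differentiate N in r across r = 1 and confirm that the result is 1/(2[[pi]]) regardless of [[theta]] and of the source point:

```python
import math

def N(r, theta, rho, phi):
    # Neumann function for the unit disk in polar coordinates
    c = math.cos(theta - phi)
    return (math.log(rho**2 + r**2 - 2*rho*r*c)
            + math.log(1 + (rho*r)**2 - 2*rho*r*c)) / (4*math.pi)

def dN_dr_on_boundary(theta, rho, phi, h=1e-6):
    # centered difference for dN/dr at r = 1
    return (N(1 + h, theta, rho, phi) - N(1 - h, theta, rho, phi)) / (2*h)

target = 1.0 / (2.0 * math.pi)
ok = all(abs(dN_dr_on_boundary(theta, rho, phi) - target) < 1e-6
         for theta in (0.3, 1.7, 4.0)
         for (rho, phi) in [(0.2, 0.5), (0.7, 2.0)])
print(ok)
```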

EXAMPLE. Give N for the upper half plane. Compute [[partialdiff]]N/[[partialdiff]][[eta]]|[[partialdiff]]D.

For this example, N(z,[[alpha]]) = [ln|f(z) - f([[alpha]])| + ln(| 1-f(z)f([[alpha]])*|)]/2[[pi]]

where f(z) = (z-i)/(z+i).

Then f(z) - f([[alpha]]) = (z-i)/(z+i) - ([[alpha]]-i)/([[alpha]]+i) = 2i(z-[[alpha]])/[(z+i)([[alpha]]+i)]

and 1 - f(z) f([[alpha]])* = -2i(z-[[alpha]]*)/[(z+i)([[alpha]]*-i)].

So, N(z,[[alpha]]) = ln( |(z - [[alpha]])(z - [[alpha]]*) 4 / ((z + i)^{2}([[alpha]] + i)([[alpha]]* - i))| ) / 2[[pi]]

= [ ln((x-a)^{2}+(y+b)^{2}) + ln((x-a)^{2}+(y-b)^{2}) - 2 ln(x^{2}+(y+1)^{2}) + 2 ln(4) - 2 ln(a^{2}+(b+1)^{2}) ] / 4[[pi]],

where z = x + iy and [[alpha]] = a + ib.

Finally, we compute [[partialdiff]]N/[[partialdiff]][[eta]] for the upper half plane.

[[partialdiff]]N/[[partialdiff]][[eta]] = - [[partialdiff]]N/[[partialdiff]]y|y=0

= - [ 2b/((x-a)^{2} + b^{2}) - 2b/((x-a)^{2} + b^{2}) - 4/(x^{2}+1) ] / 4[[pi]] = 1/([[pi]](x^{2}+1)).

Thus,

ò_{-[[infinity]]}^{[[infinity]]} [[partialdiff]]N/[[partialdiff]][[eta]](x,0) dx = (1/[[pi]]) ò_{-[[infinity]]}^{[[infinity]]} dx/(x^{2}+1) = (1/[[pi]]) arctan(x) |_{-[[infinity]]}^{[[infinity]]} = 1.
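The same flux integral can be confirmed by quadrature (a sketch, not from the notes); substituting x = tan(s) maps the line onto (-[[pi]]/2, [[pi]]/2) and turns the integrand into the constant 1/[[pi]]:

```python
import math

def flux_density(x):
    # the normal derivative of N on the boundary of the upper half plane
    return 1.0 / (math.pi * (x * x + 1.0))

# substitute x = tan(s): dx = sec(s)^2 ds, and the integral over the whole
# line becomes an integral of the constant 1/pi over (-pi/2, pi/2)
m = 100000
total = 0.0
for k in range(m):
    s = -math.pi / 2 + (k + 0.5) * math.pi / m   # midpoint rule
    total += flux_density(math.tan(s)) / math.cos(s) ** 2
total *= math.pi / m

print(abs(total - 1.0) < 1e-9)
```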

We can also check that --^{2}N = d: we already know that --^{2} [ (1/2[[pi]]) ln(|z-[[alpha]]|) ] = d(z,[[alpha]]), and the remaining terms in N are harmonic in the upper half plane.

**EXERCISE**: Get the Green's function for the right half plane. Verify
that

[[partialdiff]]N/[[partialdiff]][[eta]]({x,y},{a,b}) is independent of {a,b}.

**SUMMARY. FORMULAS FOR U AND V WHICH SOLVE**

** --^{2}u = f on D and --^{2}v = f on D**

** u = g on [[partialdiff]]D [[partialdiff]]v/[[partialdiff]][[eta]] = h on
[[partialdiff]]D**.

Recall the fundamental identity

òò U --^{2}V - òò --^{2}U V = òò --.[U --V - --U V] = ò[U [[partialdiff]]V/[[partialdiff]][[eta]] - [[partialdiff]]U/[[partialdiff]][[eta]] V].

Thus, if U = u and V = G, then

òò u(P) d(P,Q) dA - òò f(P) G(P,Q) dA = ò g(P) [[partialdiff]]G(P,Q)/[[partialdiff]][[eta]] ds,

the integrals taken in the P variable, so that

u(Q) = òò f(P) G(P,Q) dA + ò g(P) [[partialdiff]]G(P,Q)/[[partialdiff]][[eta]] ds.

On the other hand, if U = N and V = v, then

òò f(P) N(P,Q) dA - òò v(P) d(P,Q) dA

= ò N(P,Q) h(P) ds - ò [[partialdiff]]N/[[partialdiff]][[eta]](P,Q) v(P) ds,

so that

v(Q) = òò f(P) N(P,Q) dA - ò h(P) N(P,Q) ds + ò [[partialdiff]]N/[[partialdiff]][[eta]](P,Q) v(P) ds.

Since [[partialdiff]]N/[[partialdiff]][[eta]](P,Q) = k(P) does not depend on Q, the integral ò [[partialdiff]]N/[[partialdiff]][[eta]](P,Q) v(P) ds is independent of Q. It follows that

v(Q) = òò f(P) N(P,Q) dA - ò h(P) N(P,Q) ds + constant.

**A COMPENDIUM OF PROBLEMS**

Consider L(U) = [[partialdiff]]^{2}U/[[partialdiff]]x^{2} +
[[partialdiff]]^{2}U/[[partialdiff]]y^{2} -
[[partialdiff]]^{2}U/[[partialdiff]]z^{2} -
[[partialdiff]]U/[[partialdiff]]x restricted to

M= { U(x,y,z): U(0,y,z) =0, U(1,y,z) = 0, [[partialdiff]]U/[[partialdiff]]y (x,0,z) = 3 U(x,0,z),

[[partialdiff]]U/[[partialdiff]]y (x,1,z) = 5 U(x,1,z), U(x,y,0) = U(x,y,1), [[partialdiff]]U/[[partialdiff]]z (x,y,0) = [[partialdiff]]U/[[partialdiff]]z (x,y,1)}

(1) L is a (parabolic, hyperbolic, or elliptic) operator.

(2) The formal adjoint, L* , of L is given by......

(3) The divergence theorem implies that if D is the box described by D = {
(x,y,z): 0 __<__ x __<__ 1, 0 __<__ y __<__ 1, 0
__<__ z __<__ 1 }

and F has continuous partials then

òòòD--*F dV = .........

(4) We will use the following identity: V L(U) - L*(V) U =

--*[..........].

(5) This identity is established as follows:..........

(6) Considering (3) and (4) we guess that M* = { U(x,y,z): .....}.

(7) That this is M* may be established as follows: ......

(8) Explain why this problem is "really" self-adjoint, or not:

3 [[partialdiff]]^{2}U/[[partialdiff]]x^{2} - 5 [[partialdiff]]^{2}U/[[partialdiff]]y^{2} = 0 on 0 < x < a, 0 < y < b,

U(x,y) = 0 for {x,y} on the boundary of the rectangle.

(9) Suppose that D is a region in the plane and that h is a one-to-one
analytic function from D onto the unit disk. Explain, in some detail, how to
make up the Green's function and the solution for the problem --^{2}U
= f on D, with U = g on [[partialdiff]]D by using this f, g, and h.

(10) Let L(U) = [[partialdiff]]^{2}U/[[partialdiff]]x^{2} + [[partialdiff]]^{2}U/[[partialdiff]]y^{2} - [[partialdiff]]^{2}U/[[partialdiff]]z^{2} - [[partialdiff]]U/[[partialdiff]]x + U. Find F such that

--.F = V L(U) - L*(V) U.

(11) Let L(U) = --^{2}U on the rectangle [0,1] x [0,1] and

M = {U: U(x,0) = 0, U(1,y) - 3 [[partialdiff]]U/[[partialdiff]]x (1,y) = 0, U(x,1) = 0, U(0,y) + 5 [[partialdiff]]U/[[partialdiff]]x (0,y) = 0}. Give L* and M*.

(12) Find F(x,y) in terms of f so that the solution of --^{2}W + W = F provides the solution for

5 [[partialdiff]]^{2}u/[[partialdiff]]x^{2} + 4 [[partialdiff]]^{2}u/[[partialdiff]]y^{2} + 3 [[partialdiff]]u/[[partialdiff]]x + 2 [[partialdiff]]u/[[partialdiff]]y + u = f.
