Section 3.8 A MAXIMUM PRINCIPLE

At the beginning of calculus, one gains a first appreciation of the importance of the second derivative: it predicts where the graph of a function is concave up or concave down. In particular, if u is a function defined on the interval [0,1] and u'' is continuous and nonnegative, then the graph of u is concave up. As a consequence, the values of u on [0,1] do not exceed the maximum of its values at the endpoints 0 and 1.

A similar result holds for functions of several variables. The role of the second derivative is played by the Laplacian. We assume ∇²u ≥ 0 in a bounded region in Rⁿ and show that u must take on its maximum value on the boundary.

__THEOREM (MAXIMUM PRINCIPLE)__. Let D be a bounded region and u be
continuous on the closed set D ∪ ∂D with

∇²u = f on D,

u = g on ∂D.

If f(x,y) ≥ 0, then u assumes its maximum on ∂D.

Here's a way to think of why this is so. First suppose that f > 0. Any
function which is continuous on a closed and bounded set in Rⁿ has a
maximum value on that set. Since u is continuous on the closed and bounded set
D ∪ ∂D, it has a maximum on D ∪ ∂D.
Suppose (α,β) is in the interior of D and the maximum of u
occurs there: max u = u(α,β). Then

∂u/∂x(α,β) = 0 = ∂u/∂y(α,β)

and

∂²u/∂x²(α,β) ≤ 0, ∂²u/∂y²(α,β) ≤ 0.

This contradicts f > 0, since

∂²u/∂x²(α,β) + ∂²u/∂y²(α,β) = f(α,β).

Therefore, (α,β) must be on the boundary of D.

Now assume only f ≥ 0. We will show that it remains true that u must have
its maximum on ∂D. For, suppose u satisfies

∇²u = f on D, with u = g on ∂D.

Let

V_ε(x,y) = u(x,y) + ε (x² + y²).

Then

∇²V_ε = ∇²(u + ε (x² + y²)) = f + 4ε > 0 on D.

By the previous paragraph, V_ε has its maximum on ∂D. Let this maximum occur at (c_ε, d_ε), and let u(α,β) = max u. We have

u(α,β) ≤ u(α,β) + ε (α² + β²) = V_ε(α,β) ≤ V_ε(c_ε, d_ε) = u(c_ε, d_ε) + ε (c_ε² + d_ε²).

Also, because D ∪ ∂D is bounded, ε (c_ε² + d_ε²) → 0 as ε → 0. Passing to a sequence of values of ε along which (c_ε, d_ε) converges, we get

u(α,β) = max u ≤ u(lim (c_ε, d_ε)) ≤ max u.

Since ∂D is closed, lim (c_ε, d_ε) is in ∂D, and since u(lim (c_ε, d_ε)) = max u, we have the result.
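The conclusion of the theorem can be illustrated numerically. The sketch below (our own construction, not part of the text) solves a discrete version of ∇²u = f with f = 4 > 0 on the unit square by Jacobi iteration and confirms that the discrete maximum is attained on the boundary; the grid size, boundary data g = xy, and iteration count are all hypothetical choices.

```python
import numpy as np

# Discrete maximum principle check (illustrative choices throughout):
# solve u_xx + u_yy = f with f = 4 > 0 on the unit square, boundary data
# g(x, y) = x*y, using plain Jacobi iteration on a 41 x 41 grid.
n = 41
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

u = X * Y                       # boundary rows/columns hold g; interior is the initial guess
f = 4.0 * np.ones((n, n))       # a strictly positive right-hand side

for _ in range(20000):
    # Jacobi sweep: each interior value becomes the average of its four
    # neighbors minus h^2 f / 4, the five-point discretization of the PDE.
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                            u[1:-1, 2:] + u[1:-1, :-2] - h**2 * f[1:-1, 1:-1])

interior_max = u[1:-1, 1:-1].max()
boundary_max = max(u[0, :].max(), u[-1, :].max(), u[:, 0].max(), u[:, -1].max())
print(interior_max <= boundary_max)   # True: the maximum sits on the boundary
```

Because f > 0, each sweep pushes every interior value strictly below the running maximum, so the inequality in fact holds after every sweep, not just at convergence.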

EXAMPLE. Consider

∇²u = 0 on the open unit disk in the plane,

u(1,θ) = sin(θ) on the unit circle.

Since the maximum value of u occurs on the boundary and u = sin(θ) on
the boundary of D, then −1 ≤ u(x,y) ≤ +1. (Actually,
u(r,θ) = r sin(θ).)
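A quick numerical sanity check of this example (our own illustration; the sample count and seed are arbitrary): in Cartesian coordinates u(r,θ) = r sin(θ) is just u(x,y) = y, and sampling the disk shows |u| never exceeds the boundary extremes ±1.

```python
import numpy as np

# Sample random points of the closed unit disk and check |u| <= 1,
# where u(r, theta) = r*sin(theta); sample count and seed are arbitrary.
rng = np.random.default_rng(0)
r = np.sqrt(rng.uniform(0.0, 1.0, 10000))       # sqrt gives area-uniform radii
theta = rng.uniform(0.0, 2.0 * np.pi, 10000)
u = r * np.sin(theta)
print(np.abs(u).max() <= 1.0)                   # True: interior values stay within [-1, 1]
```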

EXAMPLE. The proof of the maximum principle just given used the hypothesis that D is
bounded. The result may fail if D is not bounded, as this example
shows. Let D be the strip D = { (x,y): 0 < x < π and y > 0 }. Let g(x,y) = sin(x) on the boundary of D. A solution of ∇²u = 0 on D with u = g on ∂D is u(x,y) = sin(x) eʸ, and
this function is unbounded in D, certainly not taking on its maximum value
on the boundary.
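This example is easy to probe numerically (a sketch of our own; the probe point and step size are arbitrary choices): a centered five-point difference confirms that sin(x) eʸ is harmonic up to truncation error, while evaluating it high up in the strip gives values far above the boundary maximum of 1.

```python
import numpy as np

# Check the strip example u(x, y) = sin(x) * e^y: its five-point finite
# difference Laplacian is near zero, yet u is unbounded as y grows.
def u(x, y):
    return np.sin(x) * np.exp(y)

h = 1e-4                   # finite-difference step (arbitrary small value)
x0, y0 = 1.0, 2.0          # an arbitrary interior point of the strip
lap = (u(x0 + h, y0) + u(x0 - h, y0) +
       u(x0, y0 + h) + u(x0, y0 - h) - 4.0 * u(x0, y0)) / h**2
print(abs(lap) < 1e-4)            # True: the Laplacian vanishes (up to truncation error)
print(u(np.pi / 2, 20.0) > 1e8)   # True: e^20 dwarfs the boundary maximum of 1
```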

Several questions now come to mind. Is there a minimum principle in case f ≤ 0? Yes; see Exercise 1 below. What can be done in case f is positive at some places in D and negative at others? The next theorem addresses this question and bounds the maximum value of |u| in terms of f, g, and D. This is a useful result: in addition to saying something about the maximum value of |u|, it shows how u changes with small changes in f, g, or D. This theorem is also used in establishing that solutions of this type of problem are unique.

__THEOREM (Continuous Dependence on the Data)__. Let D be a bounded region
and u be continuous on the closed set D ∪ ∂D. There is a
number K such that if f and g are continuous and

∇²u = f on D,

u = g on ∂D,

then |u| ≤ max |g| + K max |f|.

Here's a way to see this:

∇²( ±u + (x² + y²) max |f| / 4 ) = ±∇²u + max |f| = ±f + max |f| ≥ 0.

Therefore, by the maximum principle, ±u + (x² + y²) max |f| / 4 assumes its maximum on the boundary of
D. Since D is bounded, let K ≥ (x² + y²)/4 for (x,y) in ∂D; then

±u(x,y) ≤ max over ∂D of ( ±u + (x² + y²) max |f| / 4 ) ≤ max |g| + K max |f|,

so |u(x,y)| ≤ max |g| + K max |f|.

It follows that the solution depends continuously on the data f and g. ∎
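The bound can be tested on a concrete case of our own devising: on the unit square take u = x²y², so that f = ∇²u = 2x² + 2y² and g = x²y² on the boundary; there (x² + y²)/4 ≤ 1/2, so K = 1/2 works, and the estimate reads max |u| ≤ 1 + (1/2)·4 = 3.

```python
import numpy as np

# Verify |u| <= max|g| + K max|f| for u = x^2 y^2 on the unit square,
# where f = Laplacian(u) = 2x^2 + 2y^2, g = u on the boundary, K = 1/2.
t = np.linspace(0.0, 1.0, 201)
X, Y = np.meshgrid(t, t, indexing="ij")
u = X**2 * Y**2
f = 2.0 * X**2 + 2.0 * Y**2
max_g = max(u[0, :].max(), u[-1, :].max(), u[:, 0].max(), u[:, -1].max())
K = 0.5                                   # bounds (x^2 + y^2)/4 on the boundary
bound = max_g + K * np.abs(f).max()
print(np.abs(u).max() <= bound)           # True: 1 <= 1 + 0.5 * 4 = 3
```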

The ideas in this section lay out what we would always hope for in a PDE problem: that the problem has a solution, that the solution changes little if we make small errors in the data, and that the problem has only one solution. (Being able to find that solution is important, of course.) To this point we have not addressed the question of uniqueness of solutions.

**THEOREM** (UNIQUENESS OF SOLUTIONS). Suppose D is a bounded region with a
piecewise smooth boundary, and each of u and v is a continuous function on D ∪ ∂D with

∇²u = f on D and ∇²v = f on D,

with u(x,y) = g(x,y) on ∂D and v(x,y) = g(x,y) on ∂D.

Then u = v.

Here's why. Let W = u − v. Note the equations that W must satisfy:

0 = ∇²W = ∂²W/∂x² + ∂²W/∂y² on D, with W = 0 on ∂D.

By Green's first identity,

0 = ∬_D W·0 dA = ∬_D W ∇²W dA = ∮_{∂D} ⟨ W ∇W, η ⟩ ds − ∬_D ⟨ ∇W, ∇W ⟩ dA.

Since W = 0 on ∂D, the boundary integral vanishes, so ∬_D ⟨ ∇W, ∇W ⟩ dA = 0. The integrand is continuous and nonnegative, so

0 = ⟨ ∇W, ∇W ⟩ = (∂W/∂x)² + (∂W/∂y)²,

and ∂W/∂x = 0 = ∂W/∂y. It follows that W is constant. Since it is zero on the boundary of D, it must be zero everywhere. Hence, u − v = W = 0. ∎
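Uniqueness can also be observed numerically. In this sketch (grid, data, and iteration count are our own arbitrary choices) two Jacobi solves of the same discrete Dirichlet problem, started from different interior guesses, converge to the same grid function.

```python
import numpy as np

# Two iterative solves of the same discrete problem Laplacian(u) = f,
# u = g on the boundary of the unit square, from different starting guesses.
n = 21
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
g = X * Y                                 # boundary data
f = np.ones((n, n))                       # right-hand side

def solve(start):
    u = g.copy()
    u[1:-1, 1:-1] = start                 # interior initial guess
    for _ in range(20000):                # Jacobi sweeps
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] - h**2 * f[1:-1, 1:-1])
    return u

u1, u2 = solve(0.0), solve(5.0)
print(np.abs(u1 - u2).max() < 1e-10)      # True: same data, same solution
```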

These three ideas of existence, uniqueness, and continuous dependence on the
data are desirable properties for a differential equation to have. One says
that a partial differential equation is *WELL-POSED* if it has these
three properties.

A problem is __well-posed__ if

(a) it has a solution,

(b) the solution is unique, and

(c) the solution depends continuously on the data.

The PDE in this section is called a Dirichlet problem. We have shown that it
is well-posed. That is, if D is a bounded region with a piecewise smooth
boundary, then the Dirichlet problem ∇²u = f on D, with u = g on
∂D, is well-posed.

We have seen examples of differential equations which did not have a solution and for which the solution was not unique. Here is an example of a partial differential equation where the solution does not depend continuously on the data.

EXAMPLE. Consider the problem

∇²u(x,y) = 0 for y > 0 and x in R,

u(x,0) = 0 for x in R,

∂u/∂y(x,0) = sin(nx)/n for x in R.

A solution is u_n(x,y) = sin(nx) sinh(ny)/n².

Note that the data sin(nx)/n tends uniformly to 0 as n → ∞, while for each fixed y > 0, the maximum over x of |u_n(x,y)| is sinh(ny)/n² → ∞.

But the problem

∇²u(x,y) = 0 for y > 0 and x in R,

u(x,0) = 0 for x in R,

∂u/∂y(x,0) = 0 for x in R

has u(x,y) = 0 for a solution. ∎
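This instability can be made concrete with a small computation (our own illustration; n = 100 and y = 1 are arbitrary): the boundary derivative data sin(nx)/n is uniformly tiny, while the corresponding solution at height y is enormous.

```python
import numpy as np

# For the Cauchy data du/dy(x, 0) = sin(n x)/n, the solution is
# u_n(x, y) = sin(n x) sinh(n y) / n^2; compare data size to solution size.
def data_size(n):
    return 1.0 / n                        # max over x of |sin(n x)/n|

def solution_size(n, y=1.0):
    return np.sinh(n * y) / n**2          # max over x of |u_n(x, y)|

print(data_size(100))                     # 0.01: the data is tiny
print(solution_size(100) > 1e30)          # True: the solution at y = 1 is astronomically large
```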

**EXERCISES:**

1. Suppose u is as in the Maximum Principle theorem and that f ≤ 0.

Show that the minimum value of u occurs on ∂D. (Hint: consider v = −u.)

2. Suppose that D is the unit disk, that |g1(θ) − g2(θ)| < .01 for 0 ≤ θ ≤ 2π, and that u1 and u2 are continuous on D ∪ ∂D and satisfy

∇²u1 = 0 on D and ∇²u2 = 0 on D,

with u1 = g1 on ∂D and u2 = g2 on ∂D.

Show that

max |u1(r,θ) − u2(r,θ)| < .01 for 0 < r < 1 and 0 ≤ θ ≤ 2π.

3. Suppose that u(x,y) = e^{−y} sin(x) for y ≥ 0 and all real x.

a. Show that ∇²u = 0 and u(x,0) = sin(x).

b. If possible, give (α,β) such that u(α,β) is a minimum.

c. If possible, give (α,β) such that u(α,β) is a maximum.

d. Show that u(x,y) = e^{y} sin(x) satisfies the same equations.

e. Explain why this does not contradict the uniqueness theorem.

4. Let u(r,θ) = r sin(θ) for 0 ≤ r ≤ 1 and 0 ≤ θ ≤ 2π.

a. Show that ∇²u = 0 and u(1,θ) = sin(θ).

b. If possible, give (α,β) such that u(α,β) is a minimum.

c. If possible, give (α,β) such that u(α,β) is a maximum.

d. Show that if u(r,θ) = (1/r) sin(θ), then ∇²u = 0.

e. Explain why this does not contradict the uniqueness theorem.