Section 17

Uniqueness: Alternate Methods, Same Solutions

James V. Herod*

Web page maintained by Evans M. Harrell, II, harrell@math.gatech.edu.

Consider the infinite series

$$
w(t,x) = \frac{8}{\pi^{2}} \sum_{n=1}^{\infty} \frac{\sin(n\pi/2)}{n^{2}}\, \sin\!\left(\frac{n\pi x}{a}\right) \cos\!\left(\frac{n\pi\gamma t}{a}\right). \tag{17.1}
$$

The first cursory look at this series raises a question: who says this thing converges anyway? It is easy to write down an infinite sum, but does this sum make sense for a given t and x?

We argue that the series in (17.1) converges by appealing to the comparison test familiar from calculus. Here is how the test goes:

**Comparison Test for Convergent Series:**

If

$$
\sum_{n=1}^{\infty} b_{n}
$$

is a convergent series and $|a_{n}| \le b_{n}$ for every n, then the series

$$
\sum_{n=1}^{\infty} a_{n}
$$

also converges.

To apply this test, choose t and x, define

$$
a_{n} = \frac{\sin(n\pi/2)}{n^{2}}\, \sin\!\left(\frac{n\pi x}{a}\right) \cos\!\left(\frac{n\pi\gamma t}{a}\right)
$$

and note that

$$
|a_{n}| \le \frac{1}{n^{2}}.
$$

Further, taking $b_{n} = 1/n^{2}$, we have a convergent series that
*dominates* (17.1). Hence, the series defined by (17.1) converges.
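As a numerical sanity check of the comparison argument (not part of the argument itself), one can verify the domination bound and watch distant partial sums cluster together; the values a = 1, γ = 1, t = 0.3, x = 0.7 below are arbitrary sample choices, not anything fixed by the notes:

```python
import math

a, gamma, t, x = 1.0, 1.0, 0.3, 0.7  # arbitrary sample values

def term(n):
    # n-th summand of (17.1), without the leading 8/pi^2 factor
    return (math.sin(n * math.pi / 2) / n**2
            * math.sin(n * math.pi * x / a)
            * math.cos(n * math.pi * gamma * t / a))

def partial_sum(N):
    return sum(term(n) for n in range(1, N + 1))

# each term is dominated by b_n = 1/n^2 ...
assert all(abs(term(n)) <= 1.0 / n**2 + 1e-15 for n in range(1, 1000))

# ... so distant partial sums differ by at most a tail of sum 1/n^2
tail = sum(1.0 / n**2 for n in range(1000, 2000))
assert abs(partial_sum(1999) - partial_sum(999)) <= tail + 1e-12
```

The tail bound is what the comparison test exploits: the tail of $\sum 1/n^{2}$ can be made as small as we please, uniformly in t and x.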

While we now know that the series converges, who can tell what the limit is? Suppose we define a function

$$
w(t,x) = \frac{8}{\pi^{2}} \sum_{n=1}^{\infty} \frac{\sin(n\pi/2)}{n^{2}}\, \sin\!\left(\frac{n\pi x}{a}\right) \cos\!\left(\frac{n\pi\gamma t}{a}\right).
$$

This really is a function for all t and x, for we have shown that the series converges for all t and x. To build intuition, we consider a graph.

Figure 17.1

This is a set of notes on partial differential equations, so we ask whether this function is somehow related to a partial differential equation. The first impulse is to take derivatives with respect to t and x. But questions of convergence arise again. The relevant result is this: since (17.1) converges at a rate independent of t and x -- that is, uniformly in t and x -- a series of term-by-term derivatives converges to the corresponding derivative of w wherever the differentiated series itself converges uniformly. That is, wherever this series converges

$$
\frac{8}{\pi^{2}} \sum_{n=1}^{\infty} \frac{n\pi}{a}\, \frac{\sin(n\pi/2)}{n^{2}}\, \cos\!\left(\frac{n\pi x}{a}\right) \cos\!\left(\frac{n\pi\gamma t}{a}\right),
$$

it converges to $w_{x}(t,x)$. Not very satisfactory, is it?
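A small consistency check, with hypothetical sample values a = 1 and γ = 1: for a partial sum of (17.1), the series obtained by differentiating term by term in x agrees with a centered finite-difference derivative of that partial sum. This illustrates, but of course does not prove, the differentiation claim:

```python
import math

a, gamma, N = 1.0, 1.0, 20  # hypothetical sample values

def wN(t, x):
    # partial sum of (17.1)
    return (8 / math.pi**2) * sum(
        math.sin(n * math.pi / 2) / n**2 * math.sin(n * math.pi * x / a)
        * math.cos(n * math.pi * gamma * t / a) for n in range(1, N + 1))

def wN_x(t, x):
    # differentiate each term of the partial sum in x
    return (8 / math.pi**2) * sum(
        (n * math.pi / a) * math.sin(n * math.pi / 2) / n**2
        * math.cos(n * math.pi * x / a)
        * math.cos(n * math.pi * gamma * t / a) for n in range(1, N + 1))

t0, x0, h = 0.2, 0.6, 1e-5
fd = (wN(t0, x0 + h) - wN(t0, x0 - h)) / (2 * h)  # centered difference
assert abs(fd - wN_x(t0, x0)) < 1e-6
```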

Why not perform an experiment: take the derivatives and see what happens.

$$
w_{tt}(t,x) = \frac{8}{\pi^{2}} \sum_{n=1}^{\infty} \frac{-n^{2}\pi^{2}\gamma^{2}}{a^{2}}\, \frac{\sin(n\pi/2)}{n^{2}}\, \sin\!\left(\frac{n\pi x}{a}\right) \cos\!\left(\frac{n\pi\gamma t}{a}\right),
$$

wherever this series converges.

And,

$$
w_{xx}(t,x) = \frac{8}{\pi^{2}} \sum_{n=1}^{\infty} \frac{-n^{2}\pi^{2}}{a^{2}}\, \frac{\sin(n\pi/2)}{n^{2}}\, \sin\!\left(\frac{n\pi x}{a}\right) \cos\!\left(\frac{n\pi\gamma t}{a}\right),
$$

wherever this series converges. Close comparison of the two suggests that

$$
w_{tt} - \gamma^{2} w_{xx} = 0. \tag{17.2}
$$

A familiar equation indeed.

Note that $w(t,0) = 0$ and $w(t,a) = 0$, since $\sin(0) = 0$ and $\sin(n\pi) = 0$.
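There is a way to make the "experiment" concrete without worrying about convergence at all: each partial sum of (17.1) is a finite trigonometric sum, and its exact term-by-term derivatives satisfy (17.2) identically, along with the boundary conditions. A sketch, with hypothetical sample values a = 1 and γ = 2:

```python
import math

a, gamma, N = 1.0, 2.0, 30  # hypothetical sample values

def c(n):
    # coefficient of the n-th term of (17.1)
    return (8 / math.pi**2) * math.sin(n * math.pi / 2) / n**2

def w(t, x):
    return sum(c(n) * math.sin(n*math.pi*x/a) * math.cos(n*math.pi*gamma*t/a)
               for n in range(1, N + 1))

def wtt(t, x):
    # differentiate each term twice in t
    return sum(-(n*math.pi*gamma/a)**2 * c(n)
               * math.sin(n*math.pi*x/a) * math.cos(n*math.pi*gamma*t/a)
               for n in range(1, N + 1))

def wxx(t, x):
    # differentiate each term twice in x
    return sum(-(n*math.pi/a)**2 * c(n)
               * math.sin(n*math.pi*x/a) * math.cos(n*math.pi*gamma*t/a)
               for n in range(1, N + 1))

t0, x0 = 0.37, 0.52
# w_tt - gamma^2 w_xx vanishes term by term for the partial sum
assert abs(wtt(t0, x0) - gamma**2 * wxx(t0, x0)) < 1e-9
# boundary conditions: sin(0) = 0 and sin(n pi) = 0
assert abs(w(t0, 0.0)) < 1e-9 and abs(w(t0, a)) < 1e-9
```

What remains open, and what the text is worrying about, is whether these facts about partial sums survive the passage to the limit.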

We know that solutions for this equation have a special form. Could it be that we can get this form by arithmetic on (17.1)?

Indeed. Recall that

$$
2 \sin\!\left(\frac{n\pi x}{a}\right) \cos\!\left(\frac{n\pi\gamma t}{a}\right) = \sin\!\left(\frac{n\pi(x - \gamma t)}{a}\right) + \sin\!\left(\frac{n\pi(x + \gamma t)}{a}\right).
$$

There are further convergence questions about adding and rearranging convergent series, but the suggestion is that (17.1) can be rewritten as

$$
\frac{4}{\pi^{2}} \sum_{n=1}^{\infty} \frac{\sin(n\pi/2)}{n^{2}}\, \sin\!\left(\frac{n\pi(x - \gamma t)}{a}\right) + \frac{4}{\pi^{2}} \sum_{n=1}^{\infty} \frac{\sin(n\pi/2)}{n^{2}}\, \sin\!\left(\frac{n\pi(x + \gamma t)}{a}\right).
$$
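The rewriting rests entirely on the product-to-sum identity above, which is easy to spot-check numerically; the values a = 1 and γ = 2 below are hypothetical sample choices:

```python
import math

a, gamma = 1.0, 2.0  # hypothetical sample values

def lhs(n, t, x):
    # 2 sin(n pi x / a) cos(n pi gamma t / a)
    return 2 * math.sin(n*math.pi*x/a) * math.cos(n*math.pi*gamma*t/a)

def rhs(n, t, x):
    # sin(n pi (x - gamma t)/a) + sin(n pi (x + gamma t)/a)
    return (math.sin(n*math.pi*(x - gamma*t)/a)
            + math.sin(n*math.pi*(x + gamma*t)/a))

for n in range(1, 20):
    for t, x in [(0.1, 0.3), (0.7, 0.9)]:
        assert abs(lhs(n, t, x) - rhs(n, t, x)) < 1e-12
```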

This looks more and more like a solution to the wave equation. Is it possible to evaluate the sum of these series when t = 0? In that case we have a function

$$
f(x) = \frac{8}{\pi^{2}} \sum_{n=1}^{\infty} \frac{\sin(n\pi/2)}{n^{2}}\, \sin\!\left(\frac{n\pi x}{a}\right)
$$
with graph approximately as drawn in Figure 17.2, taking a = 1.

Figure 17.2
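A quick check on the height of the graph at the midpoint, taking a = 1 as in the figure: at x = a/2 the factor $\sin(n\pi/2)\sin(n\pi/2)$ equals 1 for odd n and 0 for even n, so $f(a/2) = \frac{8}{\pi^{2}} \sum_{n \text{ odd}} \frac{1}{n^{2}} = \frac{8}{\pi^{2}} \cdot \frac{\pi^{2}}{8} = 1$. Numerically:

```python
import math

# f(a/2) = (8/pi^2) * sum over odd n of 1/n^2; the odd-n sum is pi^2/8,
# so the peak of the tent-shaped graph has height 1
f_mid = (8 / math.pi**2) * sum(1.0 / n**2 for n in range(1, 200001, 2))
assert abs(f_mid - 1.0) < 1e-4
```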

To see what $w_{t}(0,x)$ is, we repeat the process. The function $w_{t}(t,x)$ would be given by the series

$$
-\frac{8}{\pi^{2}} \sum_{n=1}^{\infty} \frac{\sin(n\pi/2)}{n^{2}}\, \sin\!\left(\frac{n\pi x}{a}\right) \frac{n\pi\gamma}{a}\, \sin\!\left(\frac{n\pi\gamma t}{a}\right).
$$

At $t = 0$ every term is 0, so that $g(x) = 0$ for all x.

Here's the dilemma: we already knew how to write down a solution to the partial differential equation

$$
u_{tt} - \gamma^{2} u_{xx} = 0
$$

with $u(0,x) = f(x)$ and $u_{t}(0,x) = g(x)$,

and $u(t,0) = 0$, $u(t,a) = 0$.

Suppose that somehow all the questions about series can be resolved and (17.1)
really does represent a solution to this familiar partial differential
equation. Certainly, it is *formally* a solution. Could it be that this
solution is the same as the one that we already know about?

It's time to address the question of **uniqueness** of solutions for the
partial differential equation.

**Theorem:** There exists at most one solution of the wave equation

$$
u_{tt} = \gamma^{2} u_{xx} \quad \text{for } 0 < x < a,\; t \ge 0
$$

with

$$
u(0,x) = f(x) \quad \text{and} \quad u_{t}(0,x) = g(x)
$$

and

$$
u(t,0) = 0 = u(t,a).
$$

Here is a suggestion for a proof of the uniqueness:

Suppose there are two solutions v and w, and let u = v - w. Note that u is a
solution of the problem $u_{tt} = \gamma^{2} u_{xx}$ with $u(0,x) = 0$ and
$u_{t}(0,x) = 0$, and with $u(t,0) = 0 = u(t,a)$.

We prove that the function u obtained by subtracting the two solutions must be identically zero. Consider the energy function

$$
Z(t) = \frac{1}{2} \int_{0}^{a} \left( \gamma^{2} u_{x}^{2}(t,x) + u_{t}^{2}(t,x) \right) dx.
$$

Since the function u is twice differentiable, we may differentiate Z with respect to t under the integral sign. Thus,

$$
Z'(t) = \int_{0}^{a} \left( \gamma^{2} u_{x} u_{xt} + u_{t} u_{tt} \right) dx.
$$

Integrating by parts, we have

$$
\int_{0}^{a} \gamma^{2} u_{x} u_{xt}\, dx = \gamma^{2} u_{x}(t,a)\, u_{t}(t,a) - \gamma^{2} u_{x}(t,0)\, u_{t}(t,0) - \int_{0}^{a} \gamma^{2} u_{t} u_{xx}\, dx.
$$

But from the condition u(t,a) = 0 for all t, we have $u_{t}(t,a) = 0$, and similarly $u_{t}(t,0) = 0$. Hence the boundary terms vanish and the equation becomes

$$
Z'(t) = \int_{0}^{a} u_{t} \left( u_{tt} - \gamma^{2} u_{xx} \right) dx. \tag{17.3}
$$

Since $u_{tt} - \gamma^{2} u_{xx} = 0$, we have $Z'(t) = 0$, so Z(t) is constant. Since
$u(0,x) = 0$ for all x, we have $u_{x}(0,x) = 0$. Together with the fact that
$u_{t}(0,x) = 0$, this gives $Z(0) = 0$, so the constant is zero. Because the
integrand of Z is nonnegative, Z can vanish identically only if $u_{x}$ and $u_{t}$
are 0 everywhere, so that u is constant; since $u(0,x) = 0$, we conclude u = 0.

This finishes the proof that the wave equation on a finite interval, as described in the statement of the theorem, has only one solution.
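As an illustration of the energy argument (not a substitute for it), the following sketch checks numerically that Z(t) stays constant for a partial sum of (17.1), which is a smooth exact solution of (17.2); the values a = 1 and γ = 1 are hypothetical sample choices, and the integral is approximated by the trapezoid rule:

```python
import math

a, gamma, N = 1.0, 1.0, 10  # hypothetical sample values

def c(n):
    # coefficient of the n-th term of (17.1)
    return (8 / math.pi**2) * math.sin(n * math.pi / 2) / n**2

def ux(t, x):
    return sum(c(n) * (n*math.pi/a) * math.cos(n*math.pi*x/a)
               * math.cos(n*math.pi*gamma*t/a) for n in range(1, N + 1))

def ut(t, x):
    return -sum(c(n) * (n*math.pi*gamma/a) * math.sin(n*math.pi*x/a)
                * math.sin(n*math.pi*gamma*t/a) for n in range(1, N + 1))

def Z(t, M=10000):
    # trapezoid rule for (1/2) * integral over [0, a] of gamma^2 ux^2 + ut^2
    h = a / M
    vals = [0.5 * (gamma**2 * ux(t, i*h)**2 + ut(t, i*h)**2)
            for i in range(M + 1)]
    return h * (sum(vals) - 0.5 * vals[0] - 0.5 * vals[-1])

# the energy at two different times agrees up to quadrature error
assert abs(Z(0.0) - Z(0.37)) < 1e-3
assert Z(0.0) > 0
```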

**Exercises**

17.1 Consider the partial differential equation

$$
u_{tt} - 9 u_{xx} = 0 \quad \text{for } 0 < x < 2\pi,\; 0 < t
$$

with $u(t,0) = 0 = u(t,2\pi)$

and $u(0,x) = \sin(x)$, $u_{t}(0,x) = 0$.

(a) Compute the solution for this equation.

(b) For this solution u, compute the *energy* norm

$$
Z(t) = \frac{1}{2} \int_{0}^{a} \left( \gamma^{2} u_{x}^{2}(t,x) + u_{t}^{2}(t,x) \right) dx.
$$

17.2 (a) Show that the energy norm as defined in the previous problem is constant for all solutions for equations of the form

$$
u_{tt} - \gamma^{2} u_{xx} = 0 \quad \text{for } 0 < x < 2\pi,\; 0 < t
$$

with $u(t,0) = 0 = u(t,2\pi)$

and $u(0,x) = f(x)$, $u_{t}(0,x) = 0$.

(b) Repeat (a) in case u(0,x) = 0 and ut(0,x) = g(x).

17.3 Consider the series

$$
\sum_{n=1}^{\infty} \frac{1}{n}\, \sin\!\left(\frac{n\pi x}{a}\right) \cos\!\left(\frac{n\pi\gamma t}{a}\right).
$$

(a) Show that this series formally satisfies the wave equation.

(b) Draw the graph of enough terms of the series to convince yourself that

$$
\frac{\pi - x}{2} = \sum_{n=1}^{\infty} \frac{1}{n}\, \sin\!\left(\frac{n\pi x}{a}\right), \quad \text{if } 0 < x < 2\pi.
$$

(c) Compute the energy norm for the partial sum

$$
\sum_{n=1}^{N} \frac{1}{n}\, \sin\!\left(\frac{n\pi x}{a}\right), \quad \text{if } 0 < x < 2\pi.
$$

**Remark:** This problem is included as a reminder that just because a
series formally satisfies a partial differential equation does not mean it
really represents the solution of the wave equation for any physical string.