
PART II: SECOND ORDER EQUATIONS
Section 20
Domain of Dependence
James V. Herod*

*(c) Copyright 1993,1994,1995 by James V. Herod, herod@math.gatech.edu. All rights reserved.

Web page maintained by Evans M. Harrell, II, harrell@math.gatech.edu.



In the chapter "D'Alembert's Solution on an Interval", we considered the problem

u_xx - u_yy = sin(x)

with boundary and initial conditions. It is of value to come back to this problem here. The way that we considered the problem in that chapter is as follows: make the change of variables ξ = y - x and η = y + x. (This means that y = (η + ξ)/2 and x = (η - ξ)/2.) This change of variables is suggested by the characteristics and has the effect of replacing the two second-order terms with a single mixed partial term.

u(x,y) = v(ξ,η) = v(y-x, y+x).

u_x = - v_ξ + v_η

u_y = v_ξ + v_η

u_xx = v_ξξ - 2 v_ξη + v_ηη

u_yy = v_ξξ + 2 v_ξη + v_ηη

Subtracting, u_xx - u_yy = -4 v_ξη, so the problem has been changed to solving

-4 v_ξη = sin((η - ξ)/2)
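This reduction is easy to check symbolically. The sketch below (using sympy, an addition to these notes, with one arbitrarily chosen test function v) confirms that u_xx - u_yy = -4 v_ξη under the change of variables:

```python
import sympy as sp

x, y, xi, eta = sp.symbols('x y xi eta')

# An arbitrary smooth test function v(xi, eta); the identity should hold for any such v.
v = sp.exp(xi) * sp.sin(eta) + xi**2 * eta**3

# u(x, y) = v(y - x, y + x), the change of variables from the text
sub = {xi: y - x, eta: y + x}
u = v.subs(sub)

lhs = sp.diff(u, x, 2) - sp.diff(u, y, 2)   # u_xx - u_yy
rhs = -4 * sp.diff(v, xi, eta).subs(sub)    # -4 v_xi_eta evaluated at (y-x, y+x)

assert sp.simplify(lhs - rhs) == 0
```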

together with appropriate boundary conditions. One solution to the partial differential equation can be obtained by integrating twice. We find

v(ξ,η) = -sin((η - ξ)/2).

Hence, u(x,y) = - sin(x).

Thus, this equation could be solved by changing the partial differential equation to this standard form. But we must rethink how to connect this to initial conditions.

Suppose we require that u(0,y) = 0 and u_x(0,y) = 0. It seems we must be a little more careful and keep track of the constants of integration. Better, we take

v(ξ,η) = - sin((η - ξ)/2) + f(ξ) + g(η)

and u(x,y) = - sin(x) + f(y-x) + g(y+x).

We choose f and g to meet the boundary conditions:

0 = u(0,y) = f(y) + g(y)

0 = u_x(0,y) = -cos(0) - f '(y) + g '(y) = -1 - f '(y) + g '(y)

from which we get f(y) = -y/2 and g(y) = y/2, so that f(y-x) = (x-y)/2 and g(y+x) = (x+y)/2.

Hence a solution that meets the requirements is

u(x,y) = - sin(x) + x.
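As a sanity check on this particular solution, the PDE and both side conditions can be verified symbolically (a sympy sketch, not part of the original notes):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = -sp.sin(x) + x   # the candidate solution found above

# PDE: u_xx - u_yy = sin(x)
assert sp.simplify(sp.diff(u, x, 2) - sp.diff(u, y, 2) - sp.sin(x)) == 0
# side conditions on the line x = 0
assert u.subs(x, 0) == 0                 # u(0, y) = 0
assert sp.diff(u, x).subs(x, 0) == 0     # u_x(0, y) = 0
```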

If u(0,y) = F(y) and u_x(0,y) = G(y), we would solve two problems and add their solutions, using the superposition of solutions for linear problems.

There is another idea at the edge here that needs to be made clear: on what interval of the initial curve do the given initial values influence the value of the solution u at a particular point (x,y)?

We restate the question. Suppose we consider a partial differential equation

u_xx - u_yy = 0

u(x,0) = f(x)

u_y(x,0) = g(x).

Think about the value of u at a particular point (a,b).

You will recall d'Alembert's formula for the solution u at (a,b):

u(a,b) = [f(a+b) + f(a-b)]/2 + (1/2) ∫_{a-b}^{a+b} g(ξ) dξ.

Thus u(a,b) is dependent only on the initial values of f and g on the segment from (a-b, 0) to (a+b,0). This segment is defined to be the domain of dependence of (a,b) on the initial curve. The initial values outside this segment have no effect on the value of u at (a,b). The concept of a finite domain of dependence is common to hyperbolic equations.
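The finite domain of dependence can be illustrated numerically. In the sketch below (an addition to these notes; the function names are made up), two choices of g agree on [a-b, a+b] but differ wildly outside it, and d'Alembert's formula gives the same u(a,b) for both:

```python
import math

def dalembert(f, g, a, b, n=4000):
    """u(a,b) = [f(a+b) + f(a-b)]/2 + (1/2) * integral of g over [a-b, a+b],
    with the integral computed by the trapezoid rule."""
    lo, hi = a - b, a + b
    h = (hi - lo) / n
    integral = h * (0.5 * g(lo) + 0.5 * g(hi) + sum(g(lo + i * h) for i in range(1, n)))
    return 0.5 * (f(a + b) + f(a - b)) + 0.5 * integral

f = math.sin
g1 = math.cos

# g2 agrees with g1 on [a-b, a+b] = [1, 5] but is very different outside that segment
def g2(s):
    return math.cos(s) if 1.0 <= s <= 5.0 else 100.0

a, b = 3.0, 2.0
assert abs(dalembert(f, g1, a, b) - dalembert(f, g2, a, b)) < 1e-12
```

Only the data on the segment [a-b, a+b] ever enters the formula, so the two answers agree exactly.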

Figure 20.1

Values of u at points lying within the triangle formed by the three points (a-b,0), (a+b,0), and (a,b) are determined by the initial data on that segment. Let the triangle formed by these three points, oriented counterclockwise, have sides ∂B_0 (the base on the initial line), ∂B_1 (from (a+b,0) up to (a,b)), and ∂B_2 (from (a,b) down to (a-b,0)). Let B be the region interior to the triangle.

We now consider the equation

u_xx - u_yy = h(x,y)

u(x,0) = f(x)     (20.1)

u_y(x,0) = g(x).

Integrating both sides of this equation over B, we obtain

∬_B [u_xx - u_yy] dx dy = ∬_B h(x,y) dx dy.     (20.2)

Now we apply Green's Theorem:

∬_B [u_xx - u_yy] dx dy = ∮_{∂B} (u_y dx + u_x dy).     (20.3)

Since B has boundary ∂B_0, ∂B_1, and ∂B_2, we note that

∫_{∂B_0} (u_y dx + u_x dy) = ∫_{a-b}^{a+b} u_y dx, because on that curve dy = 0.

∫_{∂B_1} (u_y dx + u_x dy) = ∫_{∂B_1} (-u_y dy - u_x dx), because on that curve dy = -dx,

= - ∫_{∂B_1} du = - [u(a,b) - u(a+b,0)].

∫_{∂B_2} (u_y dx + u_x dy) = ∫_{∂B_2} (u_y dy + u_x dx), because on that curve dy = dx,

= ∫_{∂B_2} du = u(a-b,0) - u(a,b).

Hence

∮_{∂B} (u_y dx + u_x dy) = -2 u(a,b) + u(a-b,0) + u(a+b,0) + ∫_{a-b}^{a+b} u_y dx

or, using (20.2) and (20.3),

∬_B h(x,y) dx dy = -2 u(a,b) + u(a-b,0) + u(a+b,0) + ∫_{a-b}^{a+b} u_y dx.

This provides a formula for u that generalizes d'Alembert's result:

2 u(x,y) = u(x-y,0) + u(x+y,0) + ∫_{x-y}^{x+y} u_y(ξ,0) dξ - ∬_B h(ξ,η) dξ dη

= f(x+y) + f(x-y) + ∫_{x-y}^{x+y} g(ξ) dξ - ∬_B h(ξ,η) dξ dη,     (20.4)

where B is now the triangle with vertices (x-y,0), (x+y,0), and (x,y).
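Formula (20.4) can be tested by manufacturing a solution: pick a smooth u, read off h, f, and g from it, and check that the right-hand side reproduces u. A sympy sketch (an addition to these notes; the particular u is chosen arbitrarily):

```python
import sympy as sp

x, y, xi, eta = sp.symbols('x y xi eta')
a, b = sp.symbols('a b', positive=True)

# Manufacture a test case: choose u, then derive h, f, g from it.
u = x**2 * y + sp.cos(x) * sp.exp(y)
h = sp.diff(u, x, 2) - sp.diff(u, y, 2)
f = u.subs(y, 0)              # u(x, 0)
g = sp.diff(u, y).subs(y, 0)  # u_y(x, 0)

# Double integral of h over the triangle B with vertices (a-b, 0), (a+b, 0), (a, b):
# at height eta, xi runs from a - b + eta to a + b - eta.
double_int = sp.integrate(
    sp.integrate(h.subs({x: xi, y: eta}), (xi, a - b + eta, a + b - eta)),
    (eta, 0, b))

rhs = (f.subs(x, a + b) + f.subs(x, a - b)
       + sp.integrate(g.subs(x, xi), (xi, a - b, a + b))
       - double_int)

# (20.4): 2 u(a, b) equals the right-hand side
assert sp.simplify(2 * u.subs({x: a, y: b}) - rhs) == 0
```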

It is of value to apply this result to an example.

Example: Solve the partial differential equation

u_xx - u_yy = 1

u(x,0) = sin(x)

u_y(x,0) = x.

For this example, one can solve the problem with (20.4). The double integral, after an examination of Figure 20.1, is just the area of the triangle B, namely y², so it contributes -y²/2 to u, and

u(x,y) = [sin(x+y) + sin(x-y)]/2 + (1/2) ∫_{x-y}^{x+y} ξ dξ - y²/2 = sin(x) cos(y) + xy - y²/2.
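Carrying out (20.4) for this example gives, after simplification, u(x,y) = sin(x)cos(y) + xy - y²/2. That claim can be checked directly (a sympy sketch, not part of the original notes):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.sin(x) * sp.cos(y) + x * y - y**2 / 2   # proposed closed form

assert sp.simplify(sp.diff(u, x, 2) - sp.diff(u, y, 2) - 1) == 0  # u_xx - u_yy = 1
assert sp.simplify(u.subs(y, 0) - sp.sin(x)) == 0                 # u(x, 0) = sin(x)
assert sp.simplify(sp.diff(u, y).subs(y, 0) - x) == 0             # u_y(x, 0) = x
```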

Exercise:

1. Make an example whose initial data are 0 for positive x and something else for negative x. Ask how long it is before u(t,5) is not zero and when it achieves its maximum value. This contrasts with equations in which a disturbance spreads immediately.

2. Make two examples that have the same initial data on the positive reals and different data on the negative reals, to emphasize the domain of dependence.

Work in Progress: Make more exercises that use these techniques.

