XII. Geometry and integral operators

Linear Methods of Applied Mathematics

Evans M. Harrell II and James V. Herod*

*(c) Copyright 1994,1995,1996 by Evans M. Harrell II and James V. Herod. All rights reserved.


version of 1 June 1996


The following three problems illustrate, in a simple way, the primary concerns of the next several chapters. The first is a problem about matrices and vectors, and it will be our guide to solving integral equations and differential equations.

Model Problem XII.1. Let

B = \begin{pmatrix} 3 & -2 \\ -1 & 1 \end{pmatrix}.

Suppose v is a vector in R^2. If u is a vector, then

(a)   \begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix} u = v

if and only if

(b) v is a vector and u = Bv.

The equivalence of these two is easy to establish. Even more, given only statement (a), you should be able to construct B such that statement (b) is equivalent to statement (a).
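For readers who like to check the arithmetic by machine, here is a minimal numerical sketch (it assumes numpy is available; the right-hand side v is an arbitrary illustrative choice):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [1.0, 3.0]])     # the matrix in statement (a)
    B = np.array([[3.0, -2.0],
                  [-1.0, 1.0]])    # the matrix B of Model Problem XII.1

    print(np.allclose(A @ B, np.eye(2)))   # True: B is the inverse of A

    v = np.array([5.0, -7.0])              # an arbitrary vector v in R^2
    u = B @ v                              # statement (b): u = Bv
    print(np.allclose(A @ u, v))           # True: u satisfies A u = v, statement (a)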

Model Problem XII.2. Let K(x,t) = 1 + x t. The function u is a solution for

(a)   u(x) = \int_0^1 K(x,t) u(t) dt + x^2   for x in [0,1]

if and only if

(b)  		u(x) = x^2 - (25+12x)/18  for  x in [0,1].

If one supposes u is as in (b), then elementary integral calculus shows that u satisfies (a). On the other hand, the task of deriving a formula for u from the relationship in (a) involves techniques which we will discuss in this course.
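The verification just mentioned can also be carried out numerically. A small sketch (assuming numpy; the grid and the sample points x are illustrative choices):

    import numpy as np

    t = np.linspace(0.0, 1.0, 2001)
    u = lambda s: s**2 - (25.0 + 12.0*s)/18.0       # the function in statement (b)

    for x in [0.0, 0.3, 0.7, 1.0]:
        integral = np.trapz((1.0 + x*t) * u(t), t)  # int_0^1 K(x,t) u(t) dt
        print(x, u(x), integral + x**2)             # the last two columns agree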

Model Problem XII.3. Let

K(x,t) = \begin{cases} x(1-t) & \text{if } x \le t, \\ t(1-x) & \text{if } t \le x. \end{cases}

Suppose f is continuous on [0,1]. The function g is a solution for

(a) g''= -f and g(0) = g(1) = 0

if and only if

(b)  g(x) = \int_0^1 K(x,t) f(t) dt.

VERIFICATION OF MODEL PROBLEM XII.3.

(a)=>(b) Suppose that f is continuous on [0,1] and g'' = -f with g(0) = g(1) = 0. Suppose also that K is as given in Model Problem XII.3. Then

\int_0^1 K(x,t) f(t) dt = -\int_0^1 K(x,t) g''(t) dt = -(1-x) \int_0^x t g''(t) dt - x \int_x^1 (1-t) g''(t) dt.

Using integration by parts this last line can be rewritten as

-(1-x)\left( [x g'(x) - 0 \cdot g'(0)] - \int_0^x g'(t) dt \right) - x \left( [(1-1) g'(1) - (1-x) g'(x)] + \int_x^1 g'(t) dt \right)

= -(1-x) [x g'(x) - (g(x) - g(0))] - x [-(1-x) g'(x) + (g(1) - g(x))]

= (1-x) g(x) + x g(x) = g(x).

To get the last line we used the assumption that g(1) = g(0) = 0.

(b)=>(a) Again, suppose that f is continuous and, now, suppose that

g(x) = \int_0^1 K(x,t) f(t) dt = (1-x) \int_0^x t f(t) dt + x \int_x^1 (1-t) f(t) dt.

Setting x = 0 and x = 1 gives g(0) = g(1) = 0, and differentiating twice shows that g'' = -f.

As you can see, it is not hard to show that these two statements are equivalent. Before the course is over, given statement (a), you should be able to construct K such that statement (b) is equivalent to statement (a). Perhaps you can do this already.
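A numerical check of Model Problem XII.3 is just as easy. The sketch below (assuming numpy) uses the illustrative choice f(t) = 1, for which g(x) = x(1-x)/2 solves g'' = -f with g(0) = g(1) = 0:

    import numpy as np

    def K(x, t):
        # the kernel of Model Problem XII.3
        return np.where(x <= t, x*(1.0 - t), t*(1.0 - x))

    t = np.linspace(0.0, 1.0, 2001)
    f = np.ones_like(t)                        # illustrative choice f(t) = 1

    for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
        g = np.trapz(K(x, t) * f, t)           # the integral in statement (b)
        print(x, g, x*(1.0 - x)/2.0)           # matches the known solution of (a)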

Model Problem XII.4. Let u(x,y) = e^{-y} sin(x) for y >= 0 and all x. Then

u_{xx} + u_{yy} = 0

and

u(x,0) = sin(x).
	This result can be verified by simple calculus.
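For instance, a one-line symbolic verification (a sketch, assuming the sympy package is available):

    import sympy as sp

    x, y = sp.symbols('x y')
    u = sp.exp(-y) * sp.sin(x)
    print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))   # prints 0
    print(u.subs(y, 0))                                       # prints sin(x)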
It gives insight into the unifying ideas of this course to recognize that each of these model problems concerns an equation of the form Lu = v. It is a worthwhile exercise to reformulate each of the problems in this form.
Model Problem XII.1'.

Find B such that, if v is in R^2, then these are equivalent:

(a) u is a vector and Au = v.

(b) v is a vector and u = Bv.

Model Problem XII.2'. Let K be as in Model Problem XII.2. Show that if

u(x) = 3x^2 - (25 + 12x)/6,

then u solves the equation

u(x) = \int_0^1 K(x,t) u(t) dt + 3x^2.

Model Problem XII.3'. Let

K(x,t) = \begin{cases} 0 & \text{if } 0 < x < t < 1, \\ e^{t-x} - e^{2(t-x)} & \text{if } 0 < t < x < 1. \end{cases}

Suppose that f is continuous on [0,1]. Show these are equivalent:

(a)  y'' + 3y' + 2y = f,   y(0) = y'(0) = 0

and

(b)  y(x) = \int_0^1 K(x,t) f(t) dt.

Model Problem XII.4'. Let u(r,\theta) = r sin(\theta). Show that

\frac{1}{r} \frac{\partial}{\partial r}\left( r \frac{\partial u}{\partial r} \right) + \frac{1}{r^2} \frac{\partial^2 u}{\partial \theta^2} = 0   with   u(1,\theta) = sin(\theta).


Most often, we will take the interval on which our functions are defined to be [0,1]. Of course, we will not work in the class of all functions on [0,1]; rather, in the spirit of Chapters I-II, we ask that the linear space should consist of functions f for which

\int_0^1 |f(x)|^2 dx < \infty.

Then we have an inner product space. This space is called L^2([0,1]). The dot product of two functions is given by

< f, g > = \int_0^1 f(x) g(x) dx

and the norm of f is defined in terms of the dot product:

|| f ||^2 = \int_0^1 |f(x)|^2 dx.

(Compare with the norm in R^n.)

It does not seem appropriate to study in detail the nature of L^2[0,1] at this time. Rather, suffice it to say that the space is large enough to contain all continuous functions, and even functions which are continuous except at a finite number of places. The interested student can find a precise description of L^2[0,1] in standard books on real analysis.

Having an inner product space, we can now decide if f and g in the space are perpendicular. The distance and the angle between f and g are given by the same formulas as we understood from the previous section: the distance from f to g is || f - g || and the angle \alpha between f and g satisfies

cos(\alpha) = < f, g > / ( || f || || g || )

provided neither f nor g is zero.
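These formulas are easy to evaluate numerically. A sketch (assuming numpy), using f(x) = x and g(x) = x^2 on [0,1] as an illustrative pair (compare Exercise XII.2 below):

    import numpy as np

    x = np.linspace(0.0, 1.0, 2001)
    f = x
    g = x**2

    inner = lambda a, b: np.trapz(a*b, x)              # < a, b > on L^2[0,1]
    norm  = lambda a: np.sqrt(inner(a, a))

    print(norm(f - g))                                 # distance, about 1/sqrt(30)
    print(np.arccos(inner(f, g) / (norm(f)*norm(g))))  # angle, about arccos(sqrt(15)/4)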

Suppose {f_p} is a sequence of functions in L^2([0,1]). It is valuable to consider the possible meanings for lim_p f_p(x) = g(x). There are three meanings.

The sequence {f_p} converges pointwise to g on [0,1] provided that for each x in [0,1],

lim_p f_p(x) = g(x).

The sequence converges to g uniformly on [0,1] provided that

lim_p sup_x |f_p(x) - g(x)| = 0.

And, the sequence converges to g in norm if

lim_p || f_p - g || = 0.

The student should seek an understanding of these three modes of convergence; these ideas recur throughout mathematics.

(Compare with the notions of convergence for sequences of vectors in R^n.)
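To see how the three modes can differ, consider the sequence f_p(x) = x^p on [0,1] (an illustrative example, with a small numerical sketch assuming numpy). Pointwise, f_p(x) -> 0 for each x < 1 while f_p(1) = 1; in the norm of L^2[0,1], || f_p || = 1/\sqrt{2p+1} -> 0; but sup_x |f_p(x)| = 1 for every p, so the convergence is not uniform.

    import numpy as np

    x = np.linspace(0.0, 1.0, 5001)
    for p in [1, 5, 25, 125]:
        fp = x**p
        sup_dist  = np.max(np.abs(fp))             # stays equal to 1
        norm_dist = np.sqrt(np.trapz(fp**2, x))    # tends to 0 like 1/sqrt(2p+1)
        print(p, sup_dist, norm_dist)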

In this section we will study a type of integral equation. For example, given a function called the kernel,

K : [0,1] \times [0,1] \to R,

and a function f : [0,1] \to R, we seek a function y such that for each x in [0,1],

y(x) = \int_0^1 K(x,t) y(t) dt + f(x).

Such equations are called Fredholm equations of the second kind. An equation of the form

0 = \int_0^1 K(x,t) y(t) dt + f(x),

in which the unknown function y appears only under the integral sign, is a Fredholm equation of the first kind.

The requirements in this section on K and f will be that

\int_0^1 \int_0^1 |K(x,t)|^2 dx dt < \infty   and   \int_0^1 |f(x)|^2 dx < \infty.

These requirements are met if K and f are continuous.

For simplicity, we denote by K the linear function given by

K(y)(x) = \int_0^1 K(x,t) y(t) dt.

Note that K has a domain large enough to contain all functions y which are continuous on [0,1]. Also, if y is continuous then K(y) is a function and its value at x is denoted K(y)(x). In spoken conversation, it is not so easy to distinguish the number-valued function K (the kernel) from the function-valued K (the integral operator). The bold character K will be used in these notes to denote the latter.

It is well to note the resemblance of this function K to the multiplication of a matrix A by a vector u:

A(u)(p) = \sum_q A(p,q) u(q).

This formula has the same form as that for K given above: the sum over q plays the role of the integral over t.
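The analogy can be made concrete by discretizing: sample y at quadrature points and replace the integral by a weighted sum, and K becomes an ordinary matrix. A sketch (assuming numpy), using the kernel K(x,t) = 1 + x t of Model Problem XII.2 and the illustrative function y(t) = 3 - t:

    import numpy as np

    n = 401
    t, h = np.linspace(0.0, 1.0, n, retstep=True)
    w = np.full(n, h); w[0] = w[-1] = h/2.0        # trapezoid-rule weights

    A = (1.0 + np.outer(t, t)) * w                 # A[p,q] ~ K(x_p, t_q) * weight_q
    y = 3.0 - t                                    # samples of y(t) = 3 - t

    Ky = A @ y                                     # approximates K(y)(x_p)
    print(Ky[:3])                                  # exact values: 5/2 + 7 x_p / 6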

It is a historical accident that differential equations were understood before integral equations. Often an integral equation can be converted into a differential equation or vice versa, so many of the laws of nature which we think of as differential equations might just as well have been developed as integral equations initially. In some instances it is easier to differentiate than to integrate, but at other times integral operators are more tractable.

In this course integral operators will be called upon to solve differential equations, and this is one of their main uses. They have many other uses as well, most notably in the theory of filtering and signal processing. In most of these applications the integral and differential operators are linear transformations. The analogy between linear transformations and matrices is deep and useful.

Just as a matrix has an adjoint, the integral operator K has an adjoint, denoted K*. The adjoint plays an important role in the theory and use of integral equations. (Review the adjoint for matrices.)

In order to understand K*, one must consider < K(f), g > and seek K* such that < Kf, g > = < f, K*g >.

< K(f), g > = \int_0^1 K(f)(x) g(x) dx = \int_0^1 \int_0^1 K(x,t) f(t) g(x) dt dx.

An examination of these last equations leads one to guess that K* is given by

K*(g)(t) = \int_0^1 K(x,t) g(x) dx,

or, keeping t as the variable of integration,

K*(g)(x) = \int_0^1 K(t,x) g(t) dt.

These last equations verify that

       < K(f), g > = < f, K*(g) >.

Care had to be taken to watch whether the "variable of integration" is t or x in the integrals involved.

In summary, if K is the kernel associated with the linear operator K, then the kernel associated with K* is given by K*(x,y) \equiv K(y,x). It is of value to compare how to get K* from K with the process of how to get A* from A:

A*_{p,q} = A_{q,p}.
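One can also test the defining identity < K(f), g > = < f, K*(g) > numerically. A sketch (assuming numpy), with the kernel K(x,t) = 1 + x t and the illustrative choices f(t) = t and g(x) = e^x:

    import numpy as np

    s = np.linspace(0.0, 1.0, 801)
    X, T = np.meshgrid(s, s, indexing='ij')        # X varies down rows, T across columns
    K = 1.0 + X*T                                  # K[i,j] = K(x_i, t_j)

    f = s
    g = np.exp(s)

    Kf      = np.trapz(K * f[np.newaxis, :], s, axis=1)   # K(f)(x)  = int K(x,t) f(t) dt
    Kstar_g = np.trapz(K * g[:, np.newaxis], s, axis=0)   # K*(g)(t) = int K(x,t) g(x) dx

    print(np.trapz(Kf * g, s))        # < K(f), g >
    print(np.trapz(f * Kstar_g, s))   # < f, K*(g) > -- the two numbers agree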

Consistent with the rather standard notation we have adopted above, it is clear that a briefer representation of the equation

y(x) = \int_0^1 K(x,t) y(t) dt + f(x)

is the concise equation y = K(y) + f, or (1 - K ) y = f.

EXAMPLE: Suppose that

K(x,t) = \begin{cases} (x-t)^2 & \text{if } 0 < x < t < 1, \\ 0 & \text{if } 0 < t < x < 1. \end{cases}

To get K*, let's use other letters for the arguments of K* and K to avoid confusion. Suppose that 0 < u < v < 1. Then K*(u,v) = K(v,u) = 0. In a similar manner, K*(u,v) = (u-v)^2 if 0 < v < u < 1. Note that K* is not K:

K*(x,t) = \begin{cases} 0 & \text{if } 0 < x < t < 1, \\ (x-t)^2 & \text{if } 0 < t < x < 1. \end{cases}

The discussion of this example has been algebraic to this point. There is also a geometric notion, suggested by the alternate name for "self-adjoint": some call K "symmetric" if K(x,t) = K(t,x). The geometric name suggests a picture, namely the graph of K. The K of this example is not symmetric in x and t: its graph is not symmetric about the line x = t, and the function K is different from the function K*.


THE FREDHOLM ALTERNATIVE THEOREMS

A first understanding of the problem of solving an integral equation

       y = Ky + f

can be gained by reviewing the Fredholm Alternative Theorems in this context.

(Review the alternative theorem for matrices.)

I. Exactly one of the following holds:

(a) (First Alternative) If f is in L^2[0,1], then the equation

y(x) = \int_0^1 K(x,t) y(t) dt + f(x)

has one and only one solution.

(b) (Second Alternative) The homogeneous equation

y(x) = \int_0^1 K(x,t) y(t) dt

has a nontrivial solution.

II. (a) If the first alternative holds for the equation

y(x) = \int_0^1 K(x,t) y(t) dt + f(x),

then it also holds for the equation

z(x) = \int_0^1 K(t,x) z(t) dt + g(x).

(b) In either alternative, the equation

y(x) = \int_0^1 K(x,t) y(t) dt

and its adjoint equation

z(x) = \int_0^1 K(t,x) z(t) dt

have the same number of linearly independent solutions.

III. Suppose the second alternative holds. Then

y(x) = \int_0^1 K(x,t) y(t) dt + f(x)

has a solution if and only if

\int_0^1 f(x) z(x) dx = 0

for each solution z of the adjoint equation

z(x) = \int_0^1 K(t,x) z(t) dt.

Comparing this context for the Fredholm Alternative Theorems with an understanding of matrix examples seems irresistible. Since these ideas will recur in each section, the student should pause to make these comparisons.
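As a concrete illustration of the alternatives, take the simple kernel K(x,t) = 1 on [0,1] \times [0,1] (a choice made here purely for illustration). Then K(y)(x) = \int_0^1 y(t) dt, every constant function solves y = Ky, and the second alternative holds; by the third theorem, y = Ky + f can have a solution only when < f, z > = 0 for the constant solutions z of the adjoint equation, that is, only when \int_0^1 f(x) dx = 0. A numerical sketch (assuming numpy) of the discretized problem:

    import numpy as np

    n = 201
    x, h = np.linspace(0.0, 1.0, n, retstep=True)
    w = np.full(n, h); w[0] = w[-1] = h/2.0     # trapezoid-rule weights

    A = np.outer(np.ones(n), w)                 # discretization of K with kernel 1
    M = np.eye(n) - A                           # discretized (1 - K)

    for f in (x**2, x - 0.5):                   # int f = 1/3 (no solution) vs. 0 (solvable)
        y, *_ = np.linalg.lstsq(M, f, rcond=None)
        print(np.linalg.norm(M @ y - f))        # large for x**2, essentially 0 for x - 0.5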

EXAMPLE: Suppose that E is the linear space of continuous functions on the interval [-1,1], with

< f, g > = \int_{-1}^1 f(x) g(x) dx,

and that K is a kernel on [-1,1] \times [-1,1] for which

\int_{-1}^1 K(x,t) dt = 1   for each x in [-1,1].

The equation y = K(y) has a non-trivial solution: the constant function 1. To see this, one computes

\int_{-1}^1 K(x,t) \cdot 1 dt = 1   for each x in [-1,1].

One implication of these computations is that the problem y = Ky + f is a second alternative problem. It may be verified that y(x) = 1 is also a nontrivial solution for y = K*y. It follows from the third of the Fredholm alternative theorems that a necessary condition for y = Ky + f to have a solution is that

\int_{-1}^1 f(x) dx = 0.

Note that one such f is f(x) = x + x^3.

Exercises XII

XII.1.   (a) Find the distance from sin(\pi x) to cos(\pi x) in L^2[0,1] and in L^2[-1,1].

Ans: 1, \sqrt{2}.

(b) Find the angle between sin(\pi x) and cos(\pi x) in L^2[0,1] and L^2[-1,1].

Ans: \pi/2, \pi/2.

XII.2.   Repeat 1. (a) and (b) for x and x^2.

Ans: 1/\sqrt{30}, 4/\sqrt{15};  arccos(\sqrt{15}/4), \pi/2.

XII.3.   Suppose K(x,t) = 1 + 2 x t^2 on [0,1] \times [0,1] and y(x) = 3 - x. Compute K(y) and K*(y).

Ans: (5 + 3x)/2,  (15 + 14x^2)/6.
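The answers to XII.3 are easy to confirm numerically; a sketch assuming numpy:

    import numpy as np

    t = np.linspace(0.0, 1.0, 2001)
    y = 3.0 - t

    for x in [0.0, 0.5, 1.0]:
        Ky      = np.trapz((1.0 + 2.0*x*t**2) * y, t)   # K(y)(x),  kernel K(x,t)
        Kstar_y = np.trapz((1.0 + 2.0*t*x**2) * y, t)   # K*(y)(x), kernel K(t,x)
        print(x, Ky, (5.0 + 3.0*x)/2.0, Kstar_y, (15.0 + 14.0*x**2)/6.0)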

XII.4.   Suppose

K(x,t) = \begin{cases} x t & \text{if } 0 < x < t < 1, \\ x t^2 & \text{if } 0 < t < x < 1. \end{cases}

For y(x) = 3 - x, compute K(y) and K*(y).

Ans: K(y)(x) = -\frac{x^5}{4} + \frac{4 x^4}{3} - \frac{3 x^3}{2} + \frac{7 x}{6}.

XII.5.

(1) Suppose that E is the linear space of continuous functions on [0,1] with

< f, g > = \int_0^1 f(x) g(x) dx,

and that K is an integral operator on E.

(2) Show that y = Ky has the constant function 1 as a nontrivial solution.

(3) Show that y = K*y has the function \pi + 2 cos(\pi x) as a nontrivial solution.

(4) What conditions must hold on f in order that

       y = Ky + f

should have a solution?

