Integral Equations

Integral Equations and the Method of Green's Functions

James V. Herod*

*(c) Copyright 1993,1994,1995 by James V. Herod, herod@math.gatech.edu. All rights reserved.

Page maintained by Evans M. Harrell, II, harrell@math.gatech.edu.


CHAPTER I. INTEGRAL EQUATIONS

SECTION 1.1. GEOMETRY AND INTEGRAL OPERATORS

In this section, instead of working in the space $\mathbf{R}^n$, we will work in a space of functions defined on an interval. At an abstract level, many sets of functions have the same properties as a vector space like $\mathbf{R}^n$, and this analogy will be extremely useful in this section. It will be developed rather rapidly. If you would prefer a somewhat more detailed discussion of vector spaces, read the first two sections of this link before proceeding. Most often, we will take the interval on which our functions are defined to be [0,1]. Of course, we will not work in the class of all functions on [0,1]; rather, in the spirit of the previous section, we ask that the linear space consist of functions f for which

$$\int_0^1 |f(x)|^2 \, dx < \infty.$$

Then we have an inner product space, as we did in the previous section. This space is called $L^2([0,1])$. The dot product of two functions is given by

$$\langle f, g \rangle = \int_0^1 f(x)\, g(x) \, dx,$$

and the norm of f is defined in terms of the dot product:

$$\| f \|^2 = \int_0^1 |f(x)|^2 \, dx.$$

(Compare with the norm in $\mathbf{R}^n$.)

It does not seem appropriate to study the nature of $L^2[0,1]$ in detail at this time. Rather, suffice it to say that the space is large enough to contain all continuous functions, and even functions which are continuous except at a finite number of places. The interested student can find a precise description of $L^2[0,1]$ in standard books on real analysis.

Having an inner product space, we can now decide whether f and g in the space are perpendicular. The distance and the angle between f and g are given by the same formulas as in the previous section: the distance from f to g is $\| f - g \|$, and the angle $\alpha$ between f and g satisfies

$$\cos(\alpha) = \frac{\langle f, g \rangle}{\|f\| \, \|g\|},$$

provided neither f nor g is zero.
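
As a concrete numerical illustration (an added sketch, not part of the original notes), the following Python fragment approximates the inner product, norms, distance, and angle by a trapezoid rule on a uniform grid. The particular functions f(x) = sin(pi x) and g(x) = x^2 are arbitrary choices made here for the example.

    import numpy as np

    # Approximate the L^2[0,1] geometry described above on a uniform grid,
    # replacing each integral by the composite trapezoid rule.
    x = np.linspace(0.0, 1.0, 2001)
    dx = x[1] - x[0]
    f = np.sin(np.pi * x)          # f(x) = sin(pi x)
    g = x**2                       # g(x) = x^2

    def inner(u, v):
        """<u, v> = integral_0^1 u(x) v(x) dx (trapezoid rule)."""
        w = u * v
        return dx * (w.sum() - 0.5 * (w[0] + w[-1]))

    norm_f = np.sqrt(inner(f, f))                        # || f ||
    norm_g = np.sqrt(inner(g, g))                        # || g ||
    dist   = np.sqrt(inner(f - g, f - g))                # || f - g ||
    alpha  = np.arccos(inner(f, g) / (norm_f * norm_g))  # angle between f and g

    print(norm_f, norm_g, dist, alpha)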

Suppose $\{ f_p \}$ is a sequence of functions in $L^2([0,1])$. It is valuable to consider the possible meanings for $\lim_p f_p(x) = g(x)$. There are three meanings.

The sequence $\{ f_p \}$ converges pointwise to g on [0,1] provided that, for each x in [0,1],

$$\lim_p f_p(x) = g(x).$$

The sequence converges to g uniformly on [0,1] provided that

$$\lim_p \, \sup_x |f_p(x) - g(x)| = 0.$$

And the sequence converges to g in norm if

$$\lim_p \| f_p - g \| = 0.$$

An understanding of these three modes of convergence should be sought; these are ideas that recur throughout mathematics.

(Compare with the notions of convergence for sequences of vectors in $\mathbf{R}^n$.)
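
A standard illustration (added here, not taken from these notes) is $f_p(x) = x^p$ on [0,1]. The sequence converges pointwise: $\lim_p f_p(x) = 0$ for each x in $[0,1)$, while $\lim_p f_p(1) = 1$. The convergence is not uniform, since $\sup_x |f_p(x) - g(x)| = 1$ for every p (look just to the left of x = 1). On the other hand, $\| f_p - 0 \|^2 = \int_0^1 x^{2p}\,dx = 1/(2p+1) \to 0$, so the sequence converges in norm to the zero function; the pointwise limit differs from the zero function only at the single point x = 1, which does not affect the integral.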

In this section we study a particular type of integral equation. For example, given a function called the kernel,

$$K: [0,1] \times [0,1] \to \mathbf{R}$$

and a function $f: [0,1] \to \mathbf{R}$, we seek a function y such that, for each x in [0,1],

$$y(x) = \int_0^1 K(x,t)\, y(t)\, dt + f(x).$$

Such equations are called Fredholm equations of the second kind. An equation of the form

$$0 = \int_0^1 K(x,t)\, y(t)\, dt + f(x)$$

is a Fredholm equation of the first kind.

The requirements in this section on K and f will be that

$$\int_0^1 \!\! \int_0^1 |K(x,t)|^2 \, dx \, dt < \infty \qquad \text{and} \qquad \int_0^1 |f(x)|^2 \, dx < \infty.$$

These requirements are met if K and f are continuous.

For simplicity, we denote by $\mathbf{K}$ the linear function given by

$$\mathbf{K}(y)(x) = \int_0^1 K(x,t)\, y(t)\, dt.$$

Note that $\mathbf{K}$ has a domain large enough to contain all functions y which are continuous on [0,1]. Also, if y is continuous then $\mathbf{K}(y)$ is a function and its value at x is denoted $\mathbf{K}(y)(x)$. In spoken conversation, it is not so easy to distinguish the number-valued function K from the function-valued $\mathbf{K}$; the bold character will be used in these notes to denote the latter.

It is well to note the resemblance of this function $\mathbf{K}$ to the multiplication of a matrix A by a vector u:

$$A(u)(p) = \sum_q A(p,q)\, u(q).$$

This formula has the same form as that for $\mathbf{K}$ given above.
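
The analogy can be made concrete by discretization. The sketch below is an added illustration: sampling a kernel on a uniform grid turns $\mathbf{K}(y)(x) = \int_0^1 K(x,t)\,y(t)\,dt$ into (approximately) a matrix-vector product. The kernel and the function y are those of Exercise I.3 below, so the output can be compared with the stated answer (5 + 3x)/2.

    import numpy as np

    # Discretize K(y)(x) = int_0^1 K(x,t) y(t) dt as a matrix-vector product.
    # Kernel K(x,t) = 1 + 2 x t^2 and y(t) = 3 - t are taken from Exercise I.3.
    n = 201
    t = np.linspace(0.0, 1.0, n)
    dt = t[1] - t[0]
    w = np.full(n, dt)
    w[0] = w[-1] = dt / 2.0                  # trapezoid-rule quadrature weights

    X, T = np.meshgrid(t, t, indexing="ij")  # X[i,j] = t_i (the "x" grid), T[i,j] = t_j
    A = (1.0 + 2.0 * X * T**2) * w           # A[i,j] ~ K(x_i, t_j) w_j, like A(p,q)

    y = 3.0 - t                              # y(t) = 3 - t
    Ky = A @ y                               # (K y)(x_i) ~ sum_j A[i,j] y_j

    # compare with the closed form (5 + 3x)/2; the difference is small
    print(np.max(np.abs(Ky - (5.0 + 3.0 * t) / 2.0)))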

It is a historical accident that differential equations were understood before integral equations. Often an integral equation can be converted into a differential equation or vice versa, so many of the laws of nature which we think of as differential equations might just as well have been developed as integral equations initially. In some instances it is easier to differentiate than to integrate, but at other times integral operators are more tractable.

In this course integral operators will be called upon to solve differential equations, and this is one of their main uses. They have many other uses as well, most notably in the theory of filtering and signal processing. In most of these applications the integral and differential operators are linear transformations. The analogy between linear transformations and matrices is deep and useful.

Just as a matrix has an adjoint, the integral operator $\mathbf{K}$ has an adjoint, denoted $\mathbf{K}^*$. The adjoint plays an important role in the theory and use of integral equations. (Review the adjoint for matrices.)

In order to understand $\mathbf{K}^*$, one must consider $\langle \mathbf{K}(f), g \rangle$ and seek $\mathbf{K}^*$ such that $\langle \mathbf{K}f, g \rangle = \langle f, \mathbf{K}^*g \rangle$.

$$\langle \mathbf{K}(f), g \rangle = \int_0^1 \mathbf{K}(f)(x)\, g(x)\, dx = \int_0^1 \!\! \int_0^1 K(x,t)\, f(t)\, g(x)\, dt\, dx.$$

An examination of these last equations leads one to guess that $\mathbf{K}^*$ is given by

$$\mathbf{K}^*(g)(t) = \int_0^1 K(x,t)\, g(x)\, dx,$$

or, keeping t as the variable of integration,

$$\mathbf{K}^*(g)(x) = \int_0^1 K(t,x)\, g(t)\, dt.$$

These last equations verify that

$$\langle \mathbf{K}(f), g \rangle = \langle f, \mathbf{K}^*(g) \rangle.$$

Care must be taken to watch whether the "variable of integration" is t or x in the integrals involved.
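
As a numerical sanity check (an added sketch, not part of the original notes), one can discretize as in the matrix analogy above: the sampled kernel for $\mathbf{K}^*$ is the transpose of the sampled kernel for $\mathbf{K}$, and the two inner products then agree up to round-off. The kernel $e^{xt}$ and the functions f and g below are arbitrary smooth choices.

    import numpy as np

    # Numerical check that < K(f), g > = < f, K*(g) >.
    n = 401
    t = np.linspace(0.0, 1.0, n)
    dt = t[1] - t[0]
    w = np.full(n, dt)
    w[0] = w[-1] = dt / 2.0                    # trapezoid weights

    X, S = np.meshgrid(t, t, indexing="ij")
    K = np.exp(X * S)                          # K(x,t) = e^{x t}, an illustrative kernel

    f = np.sin(np.pi * t)
    g = t * (1.0 - t)

    def inner(u, v):
        """<u, v> = integral_0^1 u v dx, via the trapezoid rule."""
        return float(np.sum(w * u * v))

    Kf  = (K   * w) @ f                        # (K f)(x)  ~ int_0^1 K(x,t) f(t) dt
    Ksg = (K.T * w) @ g                        # (K* g)(x) ~ int_0^1 K(t,x) g(t) dt

    print(inner(Kf, g), inner(f, Ksg))         # the two values agree up to round-off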

In summary, if K is the kernel associated with the linear operator $\mathbf{K}$, then the kernel associated with $\mathbf{K}^*$ is given by $K^*(x,y) \equiv K(y,x)$. It is of value to compare how to get $K^*$ from K with the process of how to get $A^*$ from A:

$$A^*_{p,q} = A_{q,p}.$$

Consistent with the rather standard notation we have adopted above, it is clear that a briefer representation of the equation

$$y(x) = \int_0^1 K(x,t)\, y(t)\, dt + f(x)$$

is the concise equation $y = \mathbf{K}(y) + f$, or $(1 - \mathbf{K})\, y = f$.
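
Written this way, the equation invites the same discretization used above: replace the integral by a quadrature sum and solve a finite linear system for the samples of y. The sketch below is an added illustration, not a method prescribed by the notes; the kernel K(x,t) = x t / 2 and right-hand side f(x) = 1 are arbitrary choices, for which the exact solution works out to y(x) = 1 + 3x/10 and is used as a check.

    import numpy as np

    # Solve (1 - K) y = f after discretizing the integral operator.
    n = 301
    x = np.linspace(0.0, 1.0, n)
    dt = x[1] - x[0]
    w = np.full(n, dt)
    w[0] = w[-1] = dt / 2.0                     # trapezoid weights

    X, T = np.meshgrid(x, x, indexing="ij")
    A = (0.5 * X * T) * w                       # discretized integral operator K

    f = np.ones(n)                              # f(x) = 1
    y = np.linalg.solve(np.eye(n) - A, f)       # solve (1 - K) y = f

    print(np.max(np.abs(y - (A @ y + f))))      # residual for y = K(y) + f: ~ round-off
    print(np.max(np.abs(y - (1.0 + 0.3 * x))))  # error vs. exact solution: small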

EXAMPLE: Suppose that

$$K(x,t) = \begin{cases} (x-t)^2 & \text{if } 0 < x < t < 1, \\ 0 & \text{if } 0 < t < x < 1. \end{cases}$$

To get $K^*$, let's use other letters for the arguments of $K^*$ and K to avoid confusion. Suppose that $0 < u < v < 1$. Then $K^*(u,v) = K(v,u) = 0$. In a similar manner, $K^*(u,v) = (u-v)^2$ if $0 < v < u < 1$. Note that $K^*$ is not K.

$$K^*(x,t) = \begin{cases} 0 & \text{if } 0 < x < t < 1, \\ (x-t)^2 & \text{if } 0 < t < x < 1. \end{cases}$$

The discussion of this example has been algebraic to this point. Consider the geometric notion suggested by the alternate name for "self-adjoint": some call K "symmetric" if K(x,t) = K(t,x). The geometric name suggests a picture, and the picture is the graph of K. The K of this example is not symmetric in x and t; its graph is not symmetric about the line x = t, and the function K is different from the function $K^*$.
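
A quick way to see this on a computer (an added check, not in the original notes) is to sample both kernels on a grid: the matrix built from $K^*$ is exactly the transpose of the matrix built from K.

    import numpy as np

    # Sample the example kernel and its adjoint on a grid and compare.
    n = 101
    x = np.linspace(0.0, 1.0, n)
    X, T = np.meshgrid(x, x, indexing="ij")      # X[i,j] = x_i, T[i,j] = x_j

    K     = np.where(X < T, (X - T)**2, 0.0)     # K(x,t)  = (x-t)^2 if x < t, else 0
    Kstar = np.where(T < X, (X - T)**2, 0.0)     # K*(x,t) = (x-t)^2 if t < x, else 0

    print(np.allclose(Kstar, K.T))               # True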

Exercises.

I.1. (a) Find the distance from $\sin(\pi x)$ to $\cos(\pi x)$ in $L^2[0,1]$ and in $L^2[-1,1]$.

Ans: $1$, $\sqrt{2}$.

(b) Find the angle between $\sin(\pi x)$ and $\cos(\pi x)$ in $L^2[0,1]$ and $L^2[-1,1]$.

Ans: $\pi/2$, $\pi/2$.

I.2. Repeat 1. (a) and (b) for $x$ and $x^2$.

Ans: $1/\sqrt{30}$, $4/\sqrt{15}$; $\arccos(\sqrt{15}/4)$, $\pi/2$.

I.3. Suppose $K(x,t) = 1 + 2xt^2$ on $[0,1] \times [0,1]$ and $y(x) = 3 - x$. Compute $\mathbf{K}(y)$ and $\mathbf{K}^*(y)$. Ans: $(5+3x)/2$, $(15+14x^2)/6$. (A symbolic check of this exercise appears after Exercise I.4.)

I.4. Suppose

$$K(x,t) = \begin{cases} x\,t & \text{if } 0 < x < t < 1, \\ x\,t^2 & \text{if } 0 < t < x < 1. \end{cases}$$

For $y(x) = 3 - x$, compute $\mathbf{K}(y)$ and $\mathbf{K}^*(y)$. Ans: $\mathbf{K}[y](x) = -\frac{x^5}{4} + \frac{4x^4}{3} - \frac{3x^3}{2} + \frac{7x}{6}$.
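
For readers who want to confirm the answers stated in Exercise I.3, here is a short symbolic check (an added sketch; it assumes the sympy library is available).

    import sympy as sp

    # Symbolic check of Exercise I.3: K(x,t) = 1 + 2 x t^2, y(t) = 3 - t.
    x, t = sp.symbols("x t")

    def K(a, b):                   # kernel K(a, b) = 1 + 2 a b^2
        return 1 + 2 * a * b**2

    y = 3 - t

    Ky  = sp.integrate(K(x, t) * y, (t, 0, 1))   # (K y)(x)  = int_0^1 K(x,t) y(t) dt
    Ksy = sp.integrate(K(t, x) * y, (t, 0, 1))   # (K* y)(x) = int_0^1 K(t,x) y(t) dt

    print(sp.simplify(Ky))    # 3*x/2 + 5/2        i.e. (5 + 3x)/2
    print(sp.simplify(Ksy))   # 7*x**2/3 + 5/2     i.e. (15 + 14x^2)/6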

