Linear Methods of Applied Mathematics

Evans M. Harrell II and James V. Herod*

version of 1 June 1996

The following problems illustrate, in a simple way, the primary concerns of the next several chapters. The first is a problem about matrices and vectors, and it will be our guide to solving integral equations and differential equations.

**Model Problem XII.1**. Let A be a 2×2 matrix.

Suppose v is a vector in R^{2}. These are equivalent:

(a) u is a vector and Au = v;

(b) v is a vector and u = Bv.

The equivalence of these two is easy to establish. Even more, given only statement (a), you should be able to construct B such that statement (b) is equivalent to statement (a).

**Model Problem XII.2**. Let K(x,t) = 1 + x t. The function u is a solution for

(a) u(x) = ∫_{0}^{1} (1 + x t) u(t) dt + 3x^{2}

if and only if

(b) u(x) = 3x^{2} - (25 + 12x)/6.

If one supposes u is as in (b), then elementary integral calculus shows that u satisfies (a). On the other hand, the task of deriving a formula for u from the relationship in (a) involves techniques which we will discuss in this course.
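The equivalence is easy to test numerically. In the sketch below, the forcing term f(x) = 3x^{2} is an assumption inferred from Model Problem XII.2' later in this section; with it, the formula in (b) should satisfy the integral equation at every x:

```python
# Check (a sketch): u(x) = 3x^2 - (25 + 12x)/6 should satisfy
# u(x) = \int_0^1 (1 + x t) u(t) dt + f(x) for K(x,t) = 1 + x t.
# The forcing term f(x) = 3x^2 is an assumption inferred from Model Problem XII.2'.

def u(x):
    return 3 * x**2 - (25 + 12 * x) / 6

def f(x):
    return 3 * x**2

def simpson(h, a, b, n=1000):
    """Composite Simpson rule for h on [a, b]; n must be even."""
    dx = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(a + i * dx)
    return s * dx / 3

for x in [0.0, 0.3, 0.7, 1.0]:
    Ku = simpson(lambda t: (1 + x * t) * u(t), 0, 1)
    print(f"x = {x:.1f}  residual = {u(x) - (Ku + f(x)):.2e}")
```

Since the integrand is a polynomial in t, Simpson's rule computes the integral essentially exactly, and the residual is at the level of rounding error.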

**Model Problem XII.3**. Let

K(x,t) = t(1 - x) if 0 <= t <= x <= 1, and K(x,t) = x(1 - t) if 0 <= x <= t <= 1.

Suppose f is continuous on [0,1]. The function g is a solution for

(a) g'' = -f and g(0) = g(1) = 0

if and only if

(b) g(x) = ∫_{0}^{1} K(x,t) f(t) dt.

VERIFICATION OF MODEL PROBLEM XII.3.

(a)=>(b) Suppose that f is continuous on [0,1] and g'' = -f with g(0) = g(1) = 0. Suppose also that K is as given in Model Problem XII.3. Then

∫_{0}^{1} K(x,t) f(t) dt = -(1 - x) ∫_{0}^{x} t g''(t) dt - x ∫_{x}^{1} (1 - t) g''(t) dt.

Using integration by parts, this last line can be rewritten as

= -(1 - x)[x g'(x) - (g(x) - g(0))]

  - x[-(1 - x) g'(x) + (g(1) - g(x))]

= (1 - x) g(x) + x g(x) = g(x).

To get the last line we used the assumption that g(1) = g(0) = 0.
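The verification can also be checked numerically. The sketch below assumes the triangular kernel K(x,t) = t(1-x) for t <= x and x(1-t) for t >= x, the kernel the integration-by-parts steps above presuppose, and uses the test case f(t) = π^{2} sin(πt), for which the solution of g'' = -f, g(0) = g(1) = 0 is g(x) = sin(πx):

```python
import math

def K(x, t):
    # triangular kernel: the Green's function for -g'' = f, g(0) = g(1) = 0
    return t * (1 - x) if t <= x else x * (1 - t)

def f(t):
    return math.pi**2 * math.sin(math.pi * t)   # chosen so that g(x) = sin(pi x)

def simpson(h, a, b, n=2000):
    """Composite Simpson rule for h on [a, b]; n must be even."""
    dx = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(a + i * dx)
    return s * dx / 3

for x in [0.25, 0.5, 0.75]:
    g = simpson(lambda t: K(x, t) * f(t), 0, 1)
    print(f"x = {x:.2f}  integral = {g:.6f}  sin(pi x) = {math.sin(math.pi * x):.6f}")
```

The two columns agree to quadrature accuracy, illustrating that integrating f against the kernel reproduces the solution of the boundary value problem.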

(b)=>(a) Again, suppose that f is continuous and, now, suppose that

g(x) = ∫_{0}^{1} K(x,t) f(t) dt.

Splitting the integral at t = x and differentiating twice recovers g'' = -f, while setting x = 0 and x = 1 gives g(0) = g(1) = 0.
As you can see, it is not hard to show that these two statements are equivalent. Before the course is over, you should be able, given statement (a), to construct K such that statement (b) is equivalent to statement (a). Perhaps you can do this already.

**Model Problem XII.4**. Let u(x,y) = e^{-y} sin(x) for y >= 0 and all x. Then

∂^{2}u/∂x^{2} + ∂^{2}u/∂y^{2} = 0, with u(x,0) = sin(x).

**Model Problem XII.1'**. Let A be as in Model Problem XII.1. Find B such that, if v is in R^{2}, then these are equivalent:

(a) u is a vector and Au = v;

(b) v is a vector and u = Bv.

**Model Problem XII.2'**. Let K be as in Model Problem XII.2. Show that if u(x) = 3x^{2} - (25 + 12x)/6, then u solves the equation

u(x) = ∫_{0}^{1} K(x,t) u(t) dt + 3x^{2}.

**Model Problem XII.3'**. Let K be as in Model Problem XII.3, and suppose that f is continuous on [0,1]. Show these are equivalent:

(a) g'' = -f and g(0) = g(1) = 0;

(b) g(x) = ∫_{0}^{1} K(x,t) f(t) dt.

**Model Problem XII.4'**. Let u(r,θ) = r sin(θ). Show that

∂^{2}u/∂r^{2} + (1/r) ∂u/∂r + (1/r^{2}) ∂^{2}u/∂θ^{2} = 0

with u(1,θ) = sin(θ).

Most often, we will take the interval on which our functions are defined to be [0,1]. We will work in the class of functions f for which

∫_{0}^{1} |f(x)|^{2} dx < ∞.

Then we have an inner product space. This space is called L^{2}([0,1]). The dot product of two functions is given by

< f, g > = ∫_{0}^{1} f(x) g(x) dx,

and the norm of f is defined in terms of the dot product:

|| f || = < f, f >^{1/2}.

(Compare with the norm in R^{n}.)

It does not seem appropriate to study the nature of L^{2}[0,1] in detail at this time. Suffice it to say that the space is large enough to contain all continuous functions, and even functions which are continuous except at a finite number of places. The interested student can find out exactly what L^{2}[0,1] is by looking in standard books on real analysis.

Having an inner product space, we can now decide whether f and g in the space are perpendicular. The distance and the angle between f and g are given by the same formulas as in the previous section: the distance from f to g is || f - g ||, and the angle α between f and g satisfies

cos(α) = < f, g >/(||f|| ||g||)

provided neither f nor g is zero.

Suppose { f_{p} } is a sequence of functions in L^{2}( [0,1]). It is
valuable to consider the possible meanings for lim_{p} f_{p}(x) = g(x). There are
three meanings.

The sequence {f_{p}} converges *pointwise* to g on [0,1]
provided that for each x in [0,1],

lim_{p} f_{p}(x) = g(x).

The sequence converges to g *uniformly* on [0,1] provided that

lim_{p} sup_{x} |f_{p}(x) - g(x)| = 0.

And, the sequence converges to g *in norm* if

lim_{p} || f_{p} - g || = 0.

These three modes of convergence are worth understanding well; they are ideas that recur throughout mathematics.

(Compare with the notions of
convergence for sequences of vectors
in R^{n}.)
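The standard example f_{p}(x) = x^{p} on [0,1] separates the three modes: it converges pointwise (to 0 for x < 1 and to 1 at x = 1), it does not converge uniformly to the zero function (the sup stays 1), yet it converges to 0 in norm, since ||f_{p}||^{2} = ∫_{0}^{1} x^{2p} dx = 1/(2p+1) → 0. A quick sketch:

```python
# f_p(x) = x^p on [0,1]: pointwise limit is 0 for x < 1 (and 1 at x = 1),
# sup-distance to the zero function stays 1, but the L^2 norm tends to 0.

def f(p, x):
    return x**p

for p in [1, 10, 100]:
    pointwise = f(p, 0.9)                                  # -> 0 as p grows
    sup_dist = max(f(p, x / 1000) for x in range(1001))    # always 1, attained at x = 1
    l2_norm = (1 / (2 * p + 1)) ** 0.5                     # exact value of ||f_p||
    print(f"p = {p:3d}  f_p(0.9) = {pointwise:.2e}  sup = {sup_dist:.1f}  norm = {l2_norm:.4f}")
```

The printout shows the value at a fixed point and the L^{2} norm shrinking while the sup-distance does not budge.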

A type of integral equation will be studied in this section. Given a function called the kernel,

K: [0,1]×[0,1] -> R,

and a function f: [0,1] -> R, we seek a function y such that for each x in [0,1],

y(x) = ∫_{0}^{1} K(x,t) y(t) dt + f(x).

Such equations are called Fredholm equations of the second kind. An equation of the form

f(x) = ∫_{0}^{1} K(x,t) y(t) dt

is a Fredholm equation of the first kind.

The requirements in this section on K and f will be that

∫_{0}^{1} ∫_{0}^{1} K(x,t)^{2} dx dt < ∞ and ∫_{0}^{1} f(x)^{2} dx < ∞.

These requirements are met if K and f are continuous.

For simplicity, we denote by **K** the linear function given by

**K**(y)(x) = ∫_{0}^{1} K(x,t) y(t) dt.

Note that **K** has a domain large enough to contain all functions y which
are continuous on [0,1]. Also, if y is continuous then **K**(y) is a
function and its value at x is denoted **K**(y)(x). In spoken conversation
it is not easy to distinguish the number-valued function K from the function-valued
**K**. The bold character will be used in these notes to denote the latter.

It is well to note the resemblance of this function **K** to the
multiplication of a matrix A by a vector u:

(Au)_{i} = Σ_{j} A_{ij} u_{j}.

This formula has the same form as that for **K** given above.
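The analogy can be made literal by discretization: sample y at n points and replace the integral by a Riemann sum, and **K** becomes multiplication by the n×n matrix with entries K(x_{i}, t_{j})Δt. A sketch using the kernel K(x,t) = 1 + xt of Model Problem XII.2 and the illustrative choice y(t) = 3 - t:

```python
# Discretize K(y)(x) = \int_0^1 K(x,t) y(t) dt by a Riemann (midpoint) sum:
# the integral operator becomes an n x n matrix acting on sampled values of y.

def K(x, t):
    return 1 + x * t              # kernel of Model Problem XII.2

n = 2000
dt = 1 / n
ts = [(j + 0.5) * dt for j in range(n)]    # midpoints of [0,1]
y = [3 - t for t in ts]                    # samples of y(t) = 3 - t

def Ky(x):
    # (A y)_i = sum_j K(x_i, t_j) y_j dt, one row of the matrix-vector product
    return sum(K(x, t) * yj for t, yj in zip(ts, y)) * dt

# Exact value for comparison: \int_0^1 (1 + x t)(3 - t) dt = 5/2 + 7x/6
for x in [0.0, 0.5, 1.0]:
    print(f"x = {x}  discretized: {Ky(x):.6f}  exact: {5/2 + 7*x/6:.6f}")
```

As n grows, the matrix-vector product converges to the integral, which is the precise sense in which integral operators are "continuous" analogues of matrices.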

It is a historical accident that differential equations were understood before integral equations. Often an integral equation can be converted into a differential equation or vice versa, so many of the laws of nature which we think of as differential equations might just as well have been developed as integral equations initially. In some instances it is easier to differentiate than to integrate, but at other times integral operators are more tractable.

In this course integral operators will be called upon to solve differential equations, and this is one of their main uses. They have many other uses as well, most notably in the theory of filtering and signal processing. In most of these applications the integral and differential operators are linear transformations. The analogy between linear transformations and matrices is deep and useful.

Just as a matrix has an adjoint, the integral operator **K** has an adjoint,
denoted **K***. The adjoint plays an important role in the theory and use of
integral equations.
(Review the
adjoint
for matrices.)

In order to understand **K***, one must consider < **K**(f), g >
and seek **K*** such that < **K**f, g > = < f, **K***g >. Interchanging the order of integration,

< **K**(f), g > = ∫_{0}^{1} (∫_{0}^{1} K(x,t) f(t) dt) g(x) dx = ∫_{0}^{1} f(t) (∫_{0}^{1} K(x,t) g(x) dx) dt.

An examination of these last equations leads one to guess that **K*** is given by

**K***(g)(t) = ∫_{0}^{1} K(x,t) g(x) dx,

or, keeping t as the variable of integration,

**K***(g)(x) = ∫_{0}^{1} K(t,x) g(t) dt.

These last equations verify that

< **K**(f), g > = < f, **K***(g) >.

Care must be taken to watch whether the "variable of integration" is t or x in the integrals involved.

In summary, if K is the kernel associated with the linear operator **K**,
then the kernel associated with **K*** is given by K*(x,t) ≡ K(t,x). It is of value to compare how to get **K*** from **K** with the
process of how to get A* from A:

(A*)_{pq} = A_{qp}.
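The defining identity < **K**f, g > = < f, **K***g > can be spot-checked numerically for a concrete non-symmetric kernel; the sketch below uses K(x,t) = 1 + 2xt^{2} (the kernel of Exercise XII.3) and arbitrary test functions f and g:

```python
# Spot-check <Kf, g> = <f, K*g>, where K*(x,t) = K(t,x), for K(x,t) = 1 + 2 x t^2.

def K(x, t):
    return 1 + 2 * x * t**2

def simpson(h, a, b, n=200):
    """Composite Simpson rule for h on [a, b]; n must be even."""
    dx = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(a + i * dx)
    return s * dx / 3

f = lambda x: x**2        # arbitrary test functions
g = lambda x: 1 + x

Kf = lambda x: simpson(lambda t: K(x, t) * f(t), 0, 1)
Ksg = lambda x: simpson(lambda t: K(t, x) * g(t), 0, 1)   # kernel with arguments swapped

lhs = simpson(lambda x: Kf(x) * g(x), 0, 1)
rhs = simpson(lambda x: f(x) * Ksg(x), 0, 1)
print(f"<Kf, g> = {lhs:.10f}   <f, K*g> = {rhs:.10f}")
```

The two inner products agree to quadrature accuracy, as the interchange-of-integration argument above predicts.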

Consistent with the rather standard notation we have adopted above, a briefer representation of the equation

y(x) = ∫_{0}^{1} K(x,t) y(t) dt + f(x)

is the concise equation y = **K**(y) + f, or (1 - **K**) y = f.

**EXAMPLE:** Suppose that

K(x,t) = (x - t)^{2} if 0 < x < t < 1, and K(x,t) = 0 if 0 < t < x < 1.

To get K*, let us use other letters for the arguments of K* and K to avoid
confusion. Suppose that 0 < u < v < 1. Then K*(u,v) = K(v,u) = 0. In
a similar manner, K*(u,v) = (u - v)^{2} if 0 < v < u < 1. Note
that K* is not K.

The discussion of this example has been algebraic so far. Consider the geometric notion suggested by the alternate name for "self-adjoint": some call K "symmetric" if K(x,t) = K(t,x). The geometric name suggests a picture, namely the graph of K. The K of this example is not symmetric in x and t: its graph is not symmetric about the line x = t, and the function K is different from the function K*.

A first understanding of the problem of solving an integral equation

y = **K**y + f

can be made by reviewing the Fredholm Alternative Theorems in this context.

(Review the alternative theorem for matrices.)

I. Exactly one of the following holds:

(a) (**First Alternative**) If f is in L^{2}[0,1], then

y(x) = ∫_{0}^{1} K(x,t) y(t) dt + f(x)

has one and only one solution.

(b) (**Second Alternative**) The homogeneous equation

y(x) = ∫_{0}^{1} K(x,t) y(t) dt

has a nontrivial solution.

II. (a) If the first alternative holds for the equation

y(x) = ∫_{0}^{1} K(x,t) y(t) dt + f(x),

then it also holds for the adjoint equation

z(x) = ∫_{0}^{1} K(t,x) z(t) dt + g(x).

(b) In either alternative, the homogeneous equation

y(x) = ∫_{0}^{1} K(x,t) y(t) dt

and its adjoint equation

z(x) = ∫_{0}^{1} K(t,x) z(t) dt

have the same number of linearly independent solutions.

III. Suppose the second alternative holds. Then

y(x) = ∫_{0}^{1} K(x,t) y(t) dt + f(x)

has a solution if and only if

∫_{0}^{1} f(t) z(t) dt = 0

for each solution z of the adjoint homogeneous equation

z(x) = ∫_{0}^{1} K(t,x) z(t) dt.
Comparing this context for the Fredholm Alternative Theorems with an understanding of matrix examples seems irresistible. Since these ideas will recur in each section, the student should pause to make these comparisons.

**EXAMPLE**: Suppose that E is the linear space of continuous functions on
the interval [-1,1], with inner product

< f, g > = ∫_{-1}^{1} f(x) g(x) dx,

and that **K** is an integral operator on E whose kernel K satisfies

∫_{-1}^{1} K(x,t) dt = 1 for each x in [-1,1].

The equation y = **K**(y) then has a non-trivial solution, the constant function 1: for y = 1, one computes

**K**(y)(x) = ∫_{-1}^{1} K(x,t) dt = 1 = y(x).
One implication of these computations is that the problem y = **K**y + f is
a second-alternative problem. It may be verified that y(x) = 1 is also a
nontrivial solution of y = **K***y. It follows from the third of the
Fredholm alternative theorems that a necessary condition for y = **K**y + f
to have a solution is that

∫_{-1}^{1} f(t) dt = 0.

Note that one such f is f(x) = x + x^{3}, since it is odd.

**Exercises XII**

**XII.1**. (a) Find the distance from sin(πx) to cos(πx) in L^{2}[0,1] and in L^{2}[-1,1].

Ans: 1, √2.

(b) Find the angle between sin(πx) and cos(πx) in L^{2}[0,1] and L^{2}[-1,1].

Ans: π/2, π/2.

**XII.2**. Repeat 1(a) and (b) for x and x^{2}.

Ans: 1/√30, 4/√15; arccos(√15/4), π/2.
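These answers are straightforward to confirm numerically. The sketch below checks two of the L^{2}[0,1] values: the distance between sin(πx) and cos(πx), and the angle arccos(√15/4) between x and x^{2}:

```python
import math

def simpson(h, a, b, n=1000):
    """Composite Simpson rule for h on [a, b]; n must be even."""
    dx = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(a + i * dx)
    return s * dx / 3

def inner(f, g, a=0, b=1):
    return simpson(lambda x: f(x) * g(x), a, b)

def norm(f, a=0, b=1):
    return inner(f, f, a, b) ** 0.5

dist = norm(lambda x: math.sin(math.pi * x) - math.cos(math.pi * x))   # XII.1(a) in L^2[0,1]
cos_angle = inner(lambda x: x, lambda x: x**2) / (norm(lambda x: x) * norm(lambda x: x**2))
angle = math.acos(cos_angle)                                           # XII.2 in L^2[0,1]

print(f"distance = {dist:.6f}  (claimed: 1)")
print(f"angle = {angle:.6f}  arccos(sqrt(15)/4) = {math.acos(15**0.5 / 4):.6f}")
```

The same `inner` and `norm` helpers with a = -1 confirm the L^{2}[-1,1] answers.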

**XII.3**. Suppose K(x,t) = 1 + 2xt^{2} on [0,1]×[0,1] and y(x) = 3 - x. Compute **K**(y) and **K***(y).

Ans: (5 + 3x)/2, (15 + 14x^{2})/6.

**XII.4**. Suppose K(x,t) = xt if 0 < x < t < 1 and K(x,t) = xt^{2} if 0 < t < x < 1. For y(x) = 3 - x, compute **K**(y) and **K***(y).

Ans: **K**[y](x) = -x^{5}/4 + 4x^{4}/3 - 3x^{3}/2 + 7x/6.
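The stated answer for **K**[y] can be confirmed by splitting the integral at t = x, where the kernel changes formula; a numerical sketch:

```python
# K(x,t) = x t for x < t, and x t^2 for t < x; y(t) = 3 - t.
# Claimed answer: K[y](x) = -x^5/4 + 4x^4/3 - 3x^3/2 + 7x/6.

def simpson(h, a, b, n=1000):
    """Composite Simpson rule for h on [a, b]; n must be even."""
    dx = (b - a) / n
    s = h(a) + h(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * h(a + i * dx)
    return s * dx / 3

def Ky(x):
    # split the integral at t = x, where the kernel changes formula
    below = simpson(lambda t: x * t**2 * (3 - t), 0, x)   # region t < x
    above = simpson(lambda t: x * t * (3 - t), x, 1)      # region t > x
    return below + above

def claimed(x):
    return -x**5 / 4 + 4 * x**4 / 3 - 3 * x**3 / 2 + 7 * x / 6

for x in [0.2, 0.5, 0.8]:
    print(f"x = {x}  K[y](x) = {Ky(x):.8f}  claimed = {claimed(x):.8f}")
```

Both integrands are polynomials of degree at most three, so Simpson's rule evaluates them exactly up to rounding.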
**XII.5**.

(1) Suppose that E is the linear space of continuous functions on [0,1], with inner product

< f, g > = ∫_{0}^{1} f(x) g(x) dx,

and that **K** is the integral operator associated with a given kernel K.

(2) Show that y = **K**y has as a non-trivial solution the constant function 1.

(3) Show that y = **K***y has as a non-trivial solution the function π + 2 cos(πx).

(4) What conditions must hold on f in order that

y = **K**y + f

should have a solution?
