
7.38, §6 Notes

(The material in this section is elective).

It may be interesting to see exactly how functions are similar to vectors. Let's start with a vector in two dimensions, like the vector $\vec v = (3, 4)$. I can represent this vector graphically as a point in a plane, but I can also represent it as the 'spike function' shown in the first figure below.

The first coefficient, $v_1$, is 3, giving a spike of height 3 when the subscript, call it $i$, is 1. The second coefficient is $v_2 = 4$, so we have a spike of height 4 at $i = 2$. Similarly, a three-dimensional vector can be graphed as the three-spike function in the second figure. If I keep adding more dimensions, going to the limit of infinite-dimensional space, my spike graph $v_i$ becomes the graph of a function $f$ of a continuous coordinate $x$ instead of $i$. You can think of the function $f(x)$ as a column vector of numbers, with the numbers being the successive values of $f(x)$. In this way, vectors become functions, and vector analysis turns into functional analysis.
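To make that limit a bit more concrete, here is one way to picture it (the interval and spacing are just an illustrative choice, not anything from the problem): sample a function at $N$ evenly spaced points $x_i = i\,\Delta x$ with $\Delta x = 2\pi/N$ on $(0, 2\pi)$, and collect the samples into an ordinary $N$-dimensional vector,

$$\big(f(x_1),\; f(x_2),\; \ldots,\; f(x_N)\big).$$

The spike graph of this vector is a bar-chart approximation of the graph of $f$; as $N \to \infty$ the spikes crowd together and the spike graph turns into the graph of $f(x)$.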

To take the dot product of two vectors $\vec v$ and $\vec w$, we multiply corresponding coefficients and sum:

$$\vec v \cdot \vec w = \sum_i v_i w_i.$$

For functions $f(x)$ and $g(x)$, the sum over $i$ becomes an integral over $x$:

$$(f, g) = \int f(x)\, g(x)\, dx.$$
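As a concrete example (taking the interval to be $(-\pi, \pi)$, which is just a convenient choice here), the inner product of $f(x) = x$ and $g(x) = \sin x$ is

$$(f, g) = \int_{-\pi}^{\pi} x \sin x \, dx = \Big[-x\cos x\Big]_{-\pi}^{\pi} + \int_{-\pi}^{\pi} \cos x \, dx = 2\pi,$$

a single number, just like the dot product of two vectors.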

Since we now have a dot product, or inner product, for functions, we can define the ``norm'' of a function, $\|f\| = \sqrt{(f, f)}$, corresponding to the length of a vector. More importantly, we can define orthogonality for functions: functions $f$ and $g$ are orthogonal if the integral above is zero.
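For example, on the interval $(-\pi, \pi)$ (again just an illustrative choice of interval),

$$(\sin x, \cos x) = \int_{-\pi}^{\pi} \sin x \cos x \, dx = 0$$

since the integrand is odd, so $\sin x$ and $\cos x$ are orthogonal there; and $\|\sin x\| = \sqrt{(\sin x, \sin x)} = \sqrt{\pi}$, since $\int_{-\pi}^{\pi} \sin^2 x \, dx = \pi$.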

For vectors we have matrices that turn vectors into other vectors: a matrix $A$ turns a vector $\vec v$ into another vector $A\vec v$. For functions we have ``operators'' that turn functions into other functions. For example, the operator $\partial^2/\partial x^2$ turns a function $f(x)$ into another function $f''(x)$. Among the functions of period $2\pi$, a function such as $\sin(nx)$ is an eigenfunction of this operator:

$$\frac{\partial^2}{\partial x^2} \sin(nx) = -n^2 \sin(nx).$$

The eigenvalue is $-n^2$.
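Note that $\cos(nx)$ is an eigenfunction with the very same eigenvalue, since $\frac{\partial^2}{\partial x^2}\cos(nx) = -n^2\cos(nx)$. That is why these eigenvalues are ``double'': there are two independent eigenfunctions for each of them, as in the note at the end of this section.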

Symmetry for matrices can be expressed as $(A\vec v) \cdot \vec w = \vec v \cdot (A\vec w)$, because this can be written using matrix multiplication as $\vec v^{\,T} A^T \vec w = \vec v^{\,T} A \vec w$, which can only be true for all vectors $\vec v$ and $\vec w$ if $A = A^T$. And since

$$\left(\frac{\partial^2 f}{\partial x^2},\, g\right) = \left(f,\, \frac{\partial^2 g}{\partial x^2}\right),$$

as can be seen from integration by parts, $\partial^2/\partial x^2$ is a symmetric, or ``self-adjoint'', operator, with orthogonal eigenfunctions.
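Spelled out (a sketch, integrating over one period, which for the period-$2\pi$ functions above can be taken as $(-\pi, \pi)$), two integrations by parts give

$$\int_{-\pi}^{\pi} f''\, g \, dx = \Big[f' g\Big]_{-\pi}^{\pi} - \int_{-\pi}^{\pi} f'\, g' \, dx = \Big[f' g - f\, g'\Big]_{-\pi}^{\pi} + \int_{-\pi}^{\pi} f\, g'' \, dx = \int_{-\pi}^{\pi} f\, g'' \, dx,$$

where the boundary terms drop out because periodic $f$ and $g$ (and their derivatives) take the same values at $-\pi$ and $\pi$. What is left is exactly $(f'', g) = (f, g'')$.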

How about orthogonality relations? Given eigenfunctions $X_n$, we have seen that you get the Fourier coefficients of an arbitrary function $f(x)$ from the following formula:

$$f_n = \frac{(X_n, f)}{(X_n, X_n)}.$$
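As an example of what this formula produces (using the eigenfunctions $X_n = \sin(nx)$ on $(-\pi, \pi)$ and picking $f(x) = x$ purely for illustration),

$$f_n = \frac{(\sin(nx),\, x)}{(\sin(nx),\, \sin(nx))} = \frac{1}{\pi}\int_{-\pi}^{\pi} x \sin(nx) \, dx = \frac{2(-1)^{n+1}}{n},$$

which are the familiar Fourier sine series coefficients of $x$.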

But where does this come from? Remember that we get the coordinates in the new coordinate system (here the Fourier coefficients $f_n$) by multiplying the original vector (here $f(x)$) by the inverse transformation matrix $P^{-1}$.

Now the transformation matrix $P$ has the eigenfunctions as its columns:

$$P = \begin{pmatrix} \vdots & \vdots & \vdots & \\ X_1 & X_2 & X_3 & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix},$$

where each column is to be read as the column of successive values of the eigenfunction $X_n(x)$.

Since the eigenfunctions are orthogonal, you get $P^{-1}$ by simply taking the transpose, so:

$$\begin{pmatrix} f_1 \\ f_2 \\ f_3 \\ \vdots \end{pmatrix} = P^{-1} f = P^T f = \begin{pmatrix} \cdots & X_1 & \cdots \\ \cdots & X_2 & \cdots \\ \cdots & X_3 & \cdots \\ & \vdots & \end{pmatrix} f,$$

giving $f_n = (X_n, f)$: row $n$ of $P^T$ times the column $f$ is just the inner product. The difference from $f_n = (X_n, f)/(X_n, X_n)$ above is simply due to the fact that we usually do not normalize the eigenfunctions to norm 1.
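The same bookkeeping in two dimensions, as a quick sanity check with made-up numbers: the symmetric matrix $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ has orthogonal eigenvectors $\vec e_1 = (1, 1)$ and $\vec e_2 = (1, -1)$. For the vector $\vec v = (3, 4)$ from the start of this section, the coefficients in the eigenvector basis are

$$v_1' = \frac{\vec e_1 \cdot \vec v}{\vec e_1 \cdot \vec e_1} = \frac{7}{2}, \qquad v_2' = \frac{\vec e_2 \cdot \vec v}{\vec e_2 \cdot \vec e_2} = -\frac{1}{2},$$

and indeed $\tfrac{7}{2}(1, 1) - \tfrac{1}{2}(1, -1) = (3, 4)$. Normalizing the eigenvectors to length 1 would get rid of the denominators, just as for the eigenfunctions above.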

Note: I simply told you that the proper orthogonal eigenfunctions for the double eigenvalues in 7.38 are $\cos(nx)$ and $\sin(nx)$, but I could actually have derived them using Gram-Schmidt! There is really nothing new for PDE, if you think of it this way.
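A sketch of how that would go, starting from a hypothetical non-orthogonal pair of eigenfunctions for the same eigenvalue, say $Y_1 = \cos(nx)$ and $Y_2 = \cos(nx) + \sin(nx)$ (any two independent combinations would do; these are just chosen to keep the arithmetic short). Gram-Schmidt keeps $X_1 = Y_1$ and subtracts from $Y_2$ its component along $X_1$:

$$X_2 = Y_2 - \frac{(X_1, Y_2)}{(X_1, X_1)}\, X_1 = \big(\cos(nx) + \sin(nx)\big) - \frac{\pi}{\pi}\cos(nx) = \sin(nx),$$

using $(\cos(nx), \sin(nx)) = 0$ and $(\cos(nx), \cos(nx)) = \pi$ on $(-\pi, \pi)$. The orthogonal pair comes out automatically.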

