(The material in this section is elective).
It may be interesting to see exactly how functions are similar to vectors. Let's start with a vector in two dimensions, like the vector $\vec v = (v_1, v_2)$. I can represent this vector graphically as a point in a plane, but I can also represent it as the ``spike function'' in the first figure below: a function that takes the value $v_1$ at index 1 and the value $v_2$ at index 2.

[Figure not reproduced: spike-function representation of a two-dimensional vector.]
To take the dot product of two vectors $\vec v$ and $\vec w$, we multiply corresponding coefficients and sum:

$$\vec v \cdot \vec w = v_1 w_1 + v_2 w_2.$$

A function $f(x)$ is like a vector with a continuum of coefficients, one value $f(x)$ for each position $x$; multiplying corresponding coefficients and summing then becomes an integral,

$$\langle f, g \rangle = \int f(x)\, g(x)\, \mathrm{d}x.$$
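To see the analogy numerically, here is a minimal sketch (the grid resolution and the pair $\sin x$, $\sin 2x$ are illustrative choices, not from the notes): sampling two functions on a fine grid turns each into an ordinary vector, and the vector dot product, scaled by the grid spacing, approximates the integral inner product.

```python
import numpy as np

# Sample two functions on a fine grid over one period [0, 2*pi):
# each sampled function is literally just a long vector of coefficients.
n = 100_000  # illustrative grid size
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]

f = np.sin(x)
g = np.sin(2.0 * x)

# The vector dot product, scaled by dx, is a Riemann sum for the
# integral inner product <f, g>.
print(np.dot(f, g) * dx)   # ~0:  sin(x) and sin(2x) are orthogonal
print(np.dot(f, f) * dx)   # ~pi: the squared norm of sin(x)
```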
Since we now have a dot product, or inner product, for functions, we can define the ``norm'' of a function, $\|f\| = \sqrt{\langle f, f\rangle}$, corresponding to length for vectors. More importantly, we can define orthogonality for functions: functions $f$ and $g$ are orthogonal if the integral above, $\langle f, g\rangle$, is zero.
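For example, over one period $[0, 2\pi]$ (the interval is my choice here, matching the periodic functions used below), $\sin x$ and $\cos x$ are orthogonal:

$$\int_0^{2\pi} \sin x \cos x \,\mathrm{d}x = \Bigl[\tfrac12 \sin^2 x\Bigr]_0^{2\pi} = 0.$$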
For vectors we have matrices that turn vectors into other vectors: a matrix $A$ turns a vector $\vec v$ into another vector $A\vec v$. For functions we have ``operators'' that turn functions into other functions. For example, the operator

$$\frac{\mathrm{d}^2}{\mathrm{d}x^2}$$

turns a function $f(x)$ into another function $f''(x)$. Among the functions of period $2\pi$, a function such as $\sin x$ is an eigenfunction of this operator:

$$\frac{\mathrm{d}^2}{\mathrm{d}x^2} \sin x = -\sin x,$$

with eigenvalue $-1$.
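The same check works for every integer $n$ (a one-line verification, not in the original text): differentiating twice just multiplies the function by the eigenvalue $-n^2$,

$$\frac{\mathrm{d}^2}{\mathrm{d}x^2} \sin(nx) = \frac{\mathrm{d}}{\mathrm{d}x}\bigl(n \cos(nx)\bigr) = -n^2 \sin(nx),$$

and likewise for $\cos(nx)$.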
Symmetry for matrices can be expressed as $(A\vec v) \cdot \vec w = \vec v \cdot (A\vec w)$, because this can be written using matrix multiplication as $\vec v^{\,T} A^T \vec w = \vec v^{\,T} A \vec w$, which can only be true for all vectors $\vec v$ and $\vec w$ if $A = A^T$. And since

$$\int f''(x)\, g(x)\, \mathrm{d}x = \int f(x)\, g''(x)\, \mathrm{d}x,$$

as can be seen from integration by parts, $\mathrm{d}^2/\mathrm{d}x^2$ is a symmetric, or ``self-adjoint'', operator, with orthogonal eigenfunctions.
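In detail (a worked version of the integration-by-parts step; I am assuming period-$2\pi$ functions here, so the boundary terms cancel because everything repeats):

$$\int_0^{2\pi} f'' g \,\mathrm{d}x = \Bigl[f'g - fg'\Bigr]_0^{2\pi} + \int_0^{2\pi} f g'' \,\mathrm{d}x = \int_0^{2\pi} f g'' \,\mathrm{d}x.$$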
How about orthogonality relations? Given eigenfunctions $X_n$, we have seen that you get the Fourier coefficients of an arbitrary function $f(x)$ by the following formula:

$$c_n = \frac{\langle f, X_n\rangle}{\langle X_n, X_n\rangle} = \frac{\int f(x)\, X_n(x)\, \mathrm{d}x}{\int X_n(x)^2\, \mathrm{d}x}.$$
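As a concrete instance (taking $X_n = \sin(nx)$ on $[0, 2\pi]$, consistent with the examples above), $\int_0^{2\pi} \sin^2(nx)\,\mathrm{d}x = \pi$, so the formula reduces to the familiar

$$c_n = \frac{1}{\pi} \int_0^{2\pi} f(x)\, \sin(nx)\, \mathrm{d}x.$$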
Note: I simply told you that the proper orthogonal eigenfunctions for the double eigenvalues in 7.38 are $\cos(nx)$ and $\sin(nx)$, but I could actually have derived them with Gram-Schmidt! There is really nothing new for PDE, if you think of it this way.
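To back up the Gram-Schmidt remark, here is a hedged numerical sketch (NumPy again; the starting pair, the value $n = 3$, and the helper name `inner` are all illustrative choices, not from the notes). Inside a double eigenspace, any two independent eigenfunctions can be orthogonalized with Gram-Schmidt under the integral inner product, and the procedure recovers the ``proper'' pair:

```python
import numpy as np

# Discretize one period so the integral inner product becomes dot(f, g) * dx.
m = 100_000
x = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
dx = x[1] - x[0]

def inner(f, g):
    """Riemann-sum approximation of the integral of f(x) g(x) dx."""
    return np.dot(f, g) * dx

# Two independent but non-orthogonal eigenfunctions sharing the double
# eigenvalue -n**2 of d^2/dx^2 (n = 3 chosen arbitrarily).
n = 3
u1 = np.cos(n * x)
u2 = np.cos(n * x) + np.sin(n * x)

# Gram-Schmidt: normalize u1, then strip u2 of its component along u1.
e1 = u1 / np.sqrt(inner(u1, u1))
v2 = u2 - inner(u2, e1) * e1
e2 = v2 / np.sqrt(inner(v2, v2))

print(inner(e1, e2))                                        # ~0: orthogonal
print(np.max(np.abs(e2 - np.sin(n * x) / np.sqrt(np.pi))))  # ~0: e2 is sin(nx)/sqrt(pi)
```

The second print confirms that Gram-Schmidt turns the skewed pair back into (normalized multiples of) $\cos(nx)$ and $\sin(nx)$, just as the note claims.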