If your advisor actually asks you to do the above thing, your problem might of course be more complicated than mine. One problem would be if the boundary conditions on the unknown, call it $u$, are not homogeneous. For example, if the top wall of the duct in the previous section moved horizontally with a given speed $U$, you would have the boundary condition $u = 0$ at the stationary walls but $u = U$ at the moving wall.
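As a sketch of the standard fix (with an assumed duct height $h$, wall speed $U$, and the moving wall at $y = h$): subtract from $u$ any convenient function that satisfies the inhomogeneous condition, say the linear profile

$$u(x,y,t) = \frac{U y}{h} + v(x,y,t), \qquad v = 0 \text{ at } y = 0 \text{ and at } y = h.$$

The remainder $v$ then satisfies homogeneous boundary conditions, and the method of the previous section applies to $v$ (with a correspondingly modified forcing term in the equation).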
The next thing is finding those eigenfunctions. If the operator $L$ is a constant times the second derivative, the eigenfunctions are sines and cosines. Then you look at the boundary conditions to figure out just which ones. But suppose you have something like

$$L = a \frac{d^2}{dx^2} + b \frac{d}{dx} + c.$$
In the simplest case that $b = 0$ and $a$ and $c$ are constants, the eigenfunctions are still sines and cosines. The constant $c$ will just change the eigenvalues. So that is relatively trivial.
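As a quick sketch of that simplest case (using the generic constants $a$ and $c$ from above): the eigenvalue problem reads

$$a f'' + c f = \lambda f \quad\Longleftrightarrow\quad f'' = \frac{\lambda - c}{a}\, f,$$

so the eigenfunctions are exactly the same sines and cosines as for $c = 0$; the constant $c$ merely shifts every eigenvalue by the same amount.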
In any other case, you will need to solve the basic eigenvalue problem $L f = \lambda f$, an ordinary differential equation, symbolically. But if $a$, $b$, and/or $c$ depend on $x$, I never taught you how to do that! Then you will need to search through mathematical handbooks. Look under Bessel functions, Gamma function, error function, orthogonal polynomials such as those of Legendre and Hermite, etcetera. Note that you often need to play around a bit with your equation to get it in a form that you can find in the tables. Or look a bit deeper; common conversions are often mentioned somewhere.
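A computer algebra system can help check such a handbook lookup. As a sketch (my choice of tool, not part of the original notes), sympy recognizes Bessel's equation of order 1, which arises for the radial direction in pipe flow:

```python
# Sketch: verifying a handbook lookup with sympy.
# Bessel's equation of order 1:  x^2 f'' + x f' + (x^2 - 1) f = 0
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

bessel_eq = sp.Eq(x**2*f(x).diff(x, 2) + x*f(x).diff(x)
                  + (x**2 - 1)*f(x), 0)
sol = sp.dsolve(bessel_eq, f(x))
print(sol)  # general solution in terms of besselj(1, x) and bessely(1, x)
```

This confirms that the general solution is a combination of the Bessel functions of the first and second kind, exactly what the tables give.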
There is now also another problem. The orthogonality property no longer applies in the form used in the previous section. There is a theorem, called the Sturm-Liouville theorem, that says that you have to find a positive solution $w$ to the differential equation

$$(a w)' = b w,$$

and that the eigenfunctions are then orthogonal with respect to the weight function $w$.
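To see where that weight function comes from (a standard manipulation, written with the generic coefficients $a$, $b$, $c$ above): multiply the eigenvalue problem $a f'' + b f' + c f = \lambda f$ by a positive function $w$ chosen so that $(a w)' = b w$. Since then $w a f'' + w b f' = (a w f')'$, the problem takes the self-adjoint Sturm-Liouville form

$$(a w f')' + c w f = \lambda w f,$$

and eigenfunctions belonging to different eigenvalues are orthogonal with the weight $w$:

$$\int f_n f_m \, w \, dx = 0 \qquad (n \ne m).$$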
If you are in two spatial dimensions $x$ and $y$ and time, you will have separate $L_x$ and $L_y$ operators. Taking $L_x$ as the simpler operator, after you switch to the basis of eigenfunctions $g_m(x)$ of $L_x$, the equations for the coefficients $u_m$ will still contain both $y$ and $t$. You will now need to find the eigenfunctions of the $L_y$ operator. Note that you may be forced to include the eigenvalue of the first operator inside the definition of the second. For example, that happens in polar coordinates (flow in a pipe), where $\theta$ is the angular coordinate and $r$ the radial one. The net description of $u$ then involves terms of the form $u_{mn}(t)\, f_{mn}(r)\, g_m(\theta)$ that must be summed over both $m$ and $n$. And the orthogonality integrals become double integrals over both $r$ and $\theta$. It is all a whole lot of work, but not fundamentally different from what I told you.
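To make the pipe-flow example concrete (a standard sketch; the symbols $m$, $\kappa_{mn}$, and the pipe radius $R$ are my own notation): separating off the angular direction first gives eigenfunctions $\cos m\theta$ and $\sin m\theta$, and the eigenvalue $-m^2$ then appears inside the radial operator,

$$L_r f = f'' + \frac{1}{r} f' - \frac{m^2}{r^2} f.$$

The solutions of $L_r f = -\kappa^2 f$ that stay bounded at $r = 0$ are the Bessel functions $f_{mn}(r) = J_m(\kappa_{mn} r)$, with the $\kappa_{mn}$ chosen so that the boundary condition at the pipe wall $r = R$ is met, for example $J_m(\kappa_{mn} R) = 0$ for a homogeneous condition.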
As far as I can think of right now, the above covers all there is to say about the method of separation of variables. Not extremely difficult, but it sure requires a graduate student with a lot of time to carefully get all the details right.
Let me finally warn you about some common mistakes. One mistake that I see a lot is where the student leaves out an eigenfunction with eigenvalue 0. You need all the eigenfunctions. Remember that, say, an eigenfunction equal to 1 is indeed a function: it is the function that is 1 at every position $x$.

Another mistake that I see a lot is that a student tries to treat a function like 1 as just a number. It is a function, and you still need to write it as an expansion in the eigenfunctions. And do the integrals for the coefficients.

Then there are the boundary conditions. If the original problem has an inhomogeneous boundary condition at some $x$-boundary, then you should subtract a function that satisfies that boundary condition, as described above. And then you should discover that the remainder satisfies the homogeneous boundary condition, equal to zero at that boundary. Your eigenfunctions had better satisfy that homogeneous boundary condition too, or forget it.

Don't try to define an eigenfunction expansion for a time-like variable that has initial conditions. If you are tempted to do that, try a Laplace transform in time instead. That is another way to solve a lot of simple partial differential equations. For separation of variables as explained here, you really want boundary conditions for the eigenfunctions.
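For example (a sketch with an assumed interval $0 < x < \ell$ and eigenfunctions $\sin(n\pi x/\ell)$): to use the function 1 in the expansion, you must write

$$1 = \sum_{n=1}^{\infty} c_n \sin\frac{n\pi x}{\ell}, \qquad c_n = \frac{2}{\ell}\int_0^\ell \sin\frac{n\pi x}{\ell}\,dx = \frac{2}{n\pi}\bigl(1 - (-1)^n\bigr),$$

so $c_n = 4/(n\pi)$ for odd $n$ and zero for even $n$. You cannot just carry the 1 along as if it were a number.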