17.2 Solving Partial Differential Equations

Figure 17.1: Laminar viscous flow in a long duct.

Consider unsteady viscous laminar flow of, say, water, in a long and thin horizontal two-dimensional duct. The velocity $u$ depends on the time $t$ and the vertical position $y$, so $u = u(t;y)$. However, for a very long duct, it does not depend on the streamwise coordinate $x$.

Fluid mechanics says that the velocity evolves according to the equation

\begin{displaymath}
\dot u = \nu \frac{\partial^2}{\partial y^2} u + f
\end{displaymath}

Here $f$ is some given function of $t$ and $y$ that accounts for forces such as gravity or electromagnetic ones. The equation above is called a partial differential equation because it contains derivatives with respect to two variables: not just $t$ but also $y$. To solve it, you also need an initial condition:

\begin{displaymath}
u(0;y) = g
\end{displaymath}

where $g$ is some given function.

Note that so far, the above system looks almost exactly like the first order system of ordinary differential equations in the previous section. However, where the system of ordinary differential equations has vectors, the scalar partial differential equation above has functions of $y$. The only other difference is that where the system of ordinary differential equations had some matrix $A$, the partial differential equation above has an operator

\begin{displaymath}
L \equiv \nu \frac{\partial^2}{\partial y^2}
\end{displaymath}

But that is no big difference: when you apply a matrix $A$ to a vector $\vec v$, you get a new vector $A\vec v$. In exactly the same way, if you apply $L$ above to a function $F(y)$, you get a new function of $y$ equal to $\nu F''(y)$. It is the same thing.
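Indeed, if you write the previous section's system as $\dot{\vec u} = A \vec u + \vec f$ with initial condition $\vec u(0) = \vec g$, the parallel with the present problem is exact:

\begin{displaymath}
\dot{\vec u} = A \vec u + \vec f, \quad \vec u(0) = \vec g
\qquad\Longleftrightarrow\qquad
\dot u = L u + f, \quad u(0;y) = g
\end{displaymath}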

There is however one thing really different for the partial differential equation: it has boundary conditions in $y$. The fluid must be at rest at the walls of the duct. With the walls at $y = 0$ and $y = \ell$, where $\ell$ is the height of the duct, that means

\begin{displaymath}
\mbox{$y=0$:}\quad u(t;0) = 0 \qquad \mbox{$y=\ell$:}\quad u(t;\ell) = 0
\end{displaymath}

(It is as if the first and the last components of the vector $\vec u$ had to be zero.)

Still, you can solve the partial differential equation much like the system of ordinary differential equations in the previous section. I will now show you how.

First, we need the eigenfunctions of the operator $L$. Now a simple second-order derivative operator has eigenfunctions that are sines and cosines. So here the eigenfunctions could be sines or cosines of $y$. But the eigenfunctions must also satisfy the above boundary conditions for $u$. And those boundary conditions had better be homogeneous! (I will tell you in the next section what to do if the boundary conditions for $u$ at $y = 0$ and $y = \ell$ are not homogeneous.) Fortunately, the ones above are homogeneous; there are no terms independent of $u$. So we can proceed. The cosines of $y$ are out: cosines are 1 at $y = 0$, not 0. Sines are always 0 at $y = 0$, so that is OK. But they must also be 0 at $y = \ell$, and that requires the argument of the sine to be a whole multiple of $\pi$ there. That only happens for

\begin{displaymath}
\begin{array}{cccc}
Y_1 = \sin(\pi y/\ell) & Y_2 = \sin(2\pi y/\ell) &
Y_3 = \sin(3\pi y/\ell) & \ldots \\[5pt]
\lambda_1 = -\nu\pi^2/\ell^2 & \lambda_2 = -\nu 2^2\pi^2/\ell^2 &
\lambda_3 = -\nu 3^2\pi^2/\ell^2 & \ldots
\end{array}
\end{displaymath}

You find the eigenvalues by simply computing $L Y_i$ for $i = 1,2,3,\ldots$. That also verifies that the $Y_i$ are really eigenfunctions like I told you.
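Indeed, differentiating $Y_i$ twice just brings down a factor $-(i\pi/\ell)^2$:

\begin{displaymath}
L Y_i = \nu \frac{\partial^2}{\partial y^2}
\sin\left(\frac{i\pi y}{\ell}\right)
= -\nu \frac{i^2\pi^2}{\ell^2} \sin\left(\frac{i\pi y}{\ell}\right)
= \lambda_i Y_i
\end{displaymath}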

(Note that $\sin(-\pi y/\ell) = -\sin(\pi y/\ell)$, so that is not an additional independent eigenfunction. That is just as $-\vec e_1$ was not an additional eigenvector in the previous section.)

Next you write everything in terms of these eigenfunctions:

\begin{eqnarray*}
u & = & u_1 Y_1 + u_2 Y_2 + u_3 Y_3 + \ldots \\
f & = & f_1 Y_1 + f_2 Y_2 + f_3 Y_3 + \ldots \\
g & = & g_1 Y_1 + g_2 Y_2 + g_3 Y_3 + \ldots
\end{eqnarray*}

I can leave out the primes of the previous section, because for a function of $y$ there is no standard notation for components that the coefficients could be confused with. Allow me to write out the first of the expansions above, showing the arguments of the functions:

\begin{displaymath}
u = u_1(t) Y_1(y) + u_2(t) Y_2(y) + u_3(t) Y_3(y) + \ldots
\end{displaymath}

I do that so that you can see why this solution method is called separation of variables (the method for a partial differential equation, that is, not the one for an ordinary differential equation). Each term separates into a function of $t$ times a function of $y$.

Once again, you have to compute the coefficients $f_1,f_2,\ldots$ and $g_1,g_2,\ldots$. But how do you do that? You can hardly invert an infinite matrix $E = (Y_1,Y_2,Y_3,\ldots)$ of eigenfunctions as in the previous section. Well, for an operator like $L$, just a constant multiple of the second derivative, there is a trick: you can integrate to find them. In particular,

\begin{displaymath}
f_i = \frac{\int_{y=0}^\ell Y_i(y) f(t;y) {\,\rm d}y}
{\int_{y=0}^\ell Y_i^2(y) {\,\rm d}y}
\qquad
\mbox{for all } i = 1,2,3,\ldots
\end{displaymath}

If you are astonished by that, don't be. The second derivative operator is a real symmetric one, so in the vector case you would find $\vec f' = E^{\rm T}\vec f$. In other words, you would find each $f_i'$ as the dot product of a row of $E^{\rm T}$, which is an eigenvector, with the given column vector $\vec f$. In the eigenfunction case, the dot-product summation becomes integration over $y$. And the bottom factors in the ratios above are just correction factors for the fact that I did not normalize the eigenfunctions in any way. You can see why the justification for the formulae above is called the orthogonality property of the eigenfunctions.
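For the sines above, you can check the orthogonality explicitly:

\begin{displaymath}
\int_{y=0}^\ell \sin\left(\frac{i\pi y}{\ell}\right)
\sin\left(\frac{j\pi y}{\ell}\right) {\,\rm d}y
= \left\{
\begin{array}{cl}
0 & \mbox{if } i \neq j \\[3pt]
\ell/2 & \mbox{if } i = j
\end{array}
\right.
\end{displaymath}

So the bottom integral equals $\ell/2$ for every $i$, and the formula simplifies to $f_i = (2/\ell)\int_{y=0}^\ell Y_i(y) f(t;y) {\,\rm d}y$; the $g_i$ follow in the same way with $f(t;y)$ replaced by $g(y)$.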

Much like in the previous section, the basis of eigenfunctions makes $L$ diagonal, with the eigenvalues on the main diagonal: substitute the expansions into the partial differential equation, use $L Y_i = \lambda_i Y_i$, and equate the coefficients of each individual $Y_i$. The partial differential equation then becomes a system of independent equations for the coefficients of $u$:

\begin{eqnarray*}
\dot u_1 & = & \lambda_1 u_1 + f_1 \qquad u_1(0) = g_1 \\
\dot u_2 & = & \lambda_2 u_2 + f_2 \qquad u_2(0) = g_2 \\
\dot u_3 & = & \lambda_3 u_3 + f_3 \qquad u_3(0) = g_3 \\
\vdots & & \qquad\qquad \vdots
\end{eqnarray*}

These equations are no more difficult to solve than the corresponding ones for the system of ordinary differential equations in the previous section.
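Each is a first order linear ordinary differential equation; multiplying by the integrating factor $e^{-\lambda_i t}$ and integrating gives

\begin{displaymath}
u_i(t) = g_i e^{\lambda_i t}
+ \int_{\tau=0}^t e^{\lambda_i(t-\tau)} f_i(\tau) {\,\rm d}\tau
\end{displaymath}

Since the eigenvalues $\lambda_i$ are negative, the first term is the decaying memory of the initial condition, and the integral is the accumulated, equally decaying, effect of the forcing.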

Afterwards, you can find $u$ at any time $t$ and position $y$ you want by summing:

\begin{displaymath}
u(t;y) = \sum_{i=1}^\infty u_i(t) Y_i(y)
\end{displaymath}

Of course, you cannot normally sum infinitely many terms, even on a computer. You will need to instruct the computer to stop at some large value of $i$, call it $I$. The same holds if you cannot do the earlier integrals for the $f_i$ and $g_i$ analytically; then you will need to evaluate them numerically for each $i$ up to $I$. And you may even have to solve the ordinary differential equations numerically. (Note that a first order linear equation can be reduced to an integral, so you would not need to use an ODE solver from your library.)
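As a minimal computational sketch of this truncated procedure, in Python: it assumes for simplicity that $f = 0$, a hypothetical initial profile $g(y) = y(\ell - y)$, and made-up values for $\nu$ and $\ell$. With $f = 0$, each coefficient is simply $u_i(t) = g_i e^{\lambda_i t}$.

\begin{verbatim}
import numpy as np

# Made-up parameters (assumptions, not from the text).
nu  = 1.0e-6        # kinematic viscosity
ell = 0.01          # duct height
I   = 50            # truncation index: keep Y_1 ... Y_I

def g(y):
    """Hypothetical initial velocity profile, zero at both walls."""
    return y * (ell - y)

# Grid and trapezoid-rule weights for the coefficient integrals.
y  = np.linspace(0.0, ell, 1001)
dy = y[1] - y[0]
w  = np.full_like(y, dy)
w[0] = w[-1] = dy / 2.0

i   = np.arange(1, I + 1)
Y   = np.sin(np.outer(i, np.pi * y / ell))  # Y_i(y) on the grid
lam = -nu * (i * np.pi / ell) ** 2          # eigenvalues lambda_i

# g_i = integral(Y_i g) / integral(Y_i^2); the bottom integral is ell/2.
g_i = (Y * g(y) * w).sum(axis=1) / (ell / 2.0)

def u(t, y_eval):
    """Truncated series u(t;y); with f = 0, u_i(t) = g_i exp(lambda_i t)."""
    Y_eval = np.sin(np.outer(i, np.pi * np.atleast_1d(y_eval) / ell))
    return (g_i * np.exp(lam * t)) @ Y_eval

print(u(5.0, ell / 2.0))   # velocity at mid-duct after five seconds
\end{verbatim}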