Notations

Below are the simplest possible descriptions of various symbols, just to help you keep reading if you do not remember or know what they stand for.

Watch it. There are so many ad hoc usages of symbols that some will have been overlooked here. Always use common sense first in guessing what a symbol means in a given context.

$\cdot$    
A dot might indicate a multiplication, and also many more prosaic things (punctuation signs, decimal points, ...).

$\times$    
Multiplication symbol. May indicate:

$!$    
Might be used to indicate a factorial. Example: 5! $=$ 1 $\times$ 2 $\times$ 3 $\times$ 4 $\times$ 5 $=$ 120.

The function that generalizes $n!$ to noninteger values of $n$ is called the gamma function; $n!$ $=$ $\Gamma(n+1)$. The gamma function generalization is due to, who else, Euler. (However, the fact that $n!$ $=$ $\Gamma(n+1)$ instead of $n!$ $=$ $\Gamma(n)$ is due to the idiocy of Legendre.) In Legendre-resistant notation,

\begin{displaymath}
n!=\int_0^{\infty}t^ne^{-t}{ \rm d}{t}
\end{displaymath}
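For example, straightforward integration for $n$ $=$ 0, and an integration by parts for general $n$, give:

\begin{displaymath}
0! = \int_0^{\infty}e^{-t}{ \rm d}{t} = \left[-e^{-t}\right]_0^{\infty} = 1
\qquad
(n+1)! = \left[-t^{n+1}e^{-t}\right]_0^{\infty}
+ (n+1)\int_0^{\infty}t^ne^{-t}{ \rm d}{t} = (n+1) n!
\end{displaymath}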

The first result shows that 0! is 1 as it should be, and the second that $(n+1)!$ $=$ $(n+1)n!$, which ensures that the integral also produces the correct value of $n!$ for any integer value of $n$ above 0. The integral, however, exists for any real value of $n$ above $-1$, not just integers. The values of the integral are always positive, tending to positive infinity both for $n\downarrow-1$ (because the integral then blows up at small values of $t$), and for $n\uparrow\infty$ (because the integral then blows up at medium-large values of $t$). In particular, Stirling’s formula says that for large positive $n$, $n!$ can be approximated as

\begin{displaymath}
n! \sim \sqrt{2\pi n} n^n e^{-n} \left[1 + \ldots\right]
\end{displaymath}

where the value indicated by the dots becomes negligibly small for large $n$. The function $n!$ can be extended further to any complex value of $n$, except the negative integer values of $n$, where $n!$ is infinite, but it is then no longer positive.

Euler’s integral can be done for $n$ $=$ $-\frac12$ by making the change of variables $\sqrt{t}$ $=$ $u$, producing the integral $\int_0^\infty2e^{-u^2}{ \rm d}{u}$, or $\int_{-\infty}^{\infty}e^{-u^2}{ \rm d}{u}$, which equals $\sqrt{\int_{-\infty}^{\infty}e^{-x^2}{ \rm d}{x}\int_{-\infty}^{\infty}e^{-y^2}{ \rm d}{y}}$, and the integral under the square root can be done analytically using polar coordinates. The result is that

\begin{displaymath}
(-\frac12)! = \int_{-\infty}^{\infty}e^{-u^2}{ \rm d}{u} = \sqrt{\pi}
\end{displaymath}
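Spelled out, the polar-coordinate evaluation behind this result runs:

\begin{displaymath}
\int_{-\infty}^{\infty}e^{-x^2}{ \rm d}{x}\int_{-\infty}^{\infty}e^{-y^2}{ \rm d}{y}
= \int_0^{2\pi}\int_0^{\infty} e^{-r^2} r { \rm d}{r} {\rm d}\theta
= 2\pi\left[-{\textstyle\frac12}e^{-r^2}\right]_0^{\infty} = \pi
\end{displaymath}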

To get $\frac12!$, multiply by $\frac12$, since $n!$ $=$ $n(n-1)!$; that gives $\frac12!$ $=$ $\frac12\sqrt{\pi}$.

A double exclamation mark may mean that every second item is skipped, e.g. 5!! $=$ 1 $\times$ 3 $\times$ 5. In general, $(2n+1)!!$ $=$ $(2n+1)!/2^nn!$. Of course, 5!! should logically mean (5!)!. Logic would indicate that 5 $\times$ 3 $\times$ 1 should be indicated by something like 5!$'$. But what is logic in physics?
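As a quick check of the general formula, take $n$ $=$ 2:

\begin{displaymath}
5!! = \frac{5!}{2^2 2!} = \frac{120}{8} = 15 = 1\times3\times5
\end{displaymath}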

$\vert$    
May indicate:

$\sum$    
Summation symbol. Example: if in three-dimensional space a vector $\vec{f}$ has components $f_1$ $=$ 2, $f_2$ $=$ 1, $f_3$ $=$ 4, then $\sum_{{\rm {all }}i}f_i$ stands for $2+1+4$ $=$ 7.

One important thing to remember: the symbol used for the summation index does not make a difference: $\sum_{{\rm {all }}j}f_j$ is exactly the same as $\sum_{{\rm {all }}i}f_i$. So freely rename the index, but always make sure that the new name is not already used for something else in the part that it appears in. If you use the same name for two different things, it becomes a mess.

Related to that, $\sum_{{\rm {all }}i}f_i$ is not something that depends on an index $i$. It is just a single combined number, like 7 in the example above. It is commonly said that the summation index sums away.

$\prod$    
Product symbol. Example: if in three-dimensional space a vector $\vec{f}$ has components $f_1$ $=$ 2, $f_2$ $=$ 1, $f_3$ $=$ 4, then $\prod_{{\rm {all }}i}f_i$ stands for $2\times1\times4$ $=$ 8.

One important thing to remember: the symbol used for the multiplication index does not make a difference: $\prod_{{\rm {all }}j}f_j$ is exactly the same as $\prod_{{\rm {all }}i}f_i$. So freely rename the index, but always make sure that the new name is not already used for something else in the part that it appears in. If you use the same name for two different things, it becomes a mess.

Related to that, $\prod_{{\rm {all }}i}f_i$ is not something that depends on an index $i$. It is just a single combined number, like 8 in the example above. It is commonly said that the multiplication index factors away. (By whom?)

$\int$    
Integration symbol, the continuous version of the summation symbol. For example,

\begin{displaymath}
\int_{\mbox{\scriptsize all }x} f(x){ \rm d}x
\end{displaymath}

is the summation of $f(x){ \rm d}{x}$ over all infinitesimally small fragments ${\rm d}{x}$ that make up the entire $x$-range. For example, $\int_{x=0}^2(2+x){ \rm d}{x}$ equals 3 $\times$ 2 $=$ 6; the average value of $2+x$ between $x$ $=$ 0 and $x$ $=$ 2 is 3, and the sum of all the infinitesimally small segments ${\rm d}{x}$ gives the total length 2 of the range in $x$ from 0 to 2.
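The same answer follows, of course, from the antiderivative:

\begin{displaymath}
\int_{x=0}^2(2+x){ \rm d}{x} = \left[2x + {\textstyle\frac12}x^2\right]_0^2 = 4 + 2 = 6
\end{displaymath}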

One important thing to remember: the symbol used for the integration variable does not make a difference: $\int_{{\rm {all }}y}f(y){ \rm d}{y}$ is exactly the same as $\int_{{\rm {all }}x}f(x){ \rm d}{x}$. So freely rename the integration variable, but always make sure that the new name is not already used for something else in the part it appears in. If you use the same name for two different things, it becomes a mess.

Related to that, $\int_{{\rm {all }}x}f(x){ \rm d}{x}$ is not something that depends on a variable $x$. It is just a combined number, like 6 in the example above. It is commonly said that the integration variable integrates away.

$\to$    
May indicate:

$\vec{\phantom{a}}$    
Vector symbol. An arrow above a letter indicates it is a vector. A vector is a quantity that requires more than one number to be characterized. Typical vectors in physics include position ${\skew0\vec r}$, velocity $\vec{v}$, linear momentum ${\skew0\vec p}$, acceleration $\vec{a}$, force $\vec{F}$, angular momentum $\vec{L}$, etcetera.

$'$    
May indicate:

$\nabla$    
The spatial differentiation operator nabla. In Cartesian coordinates:

\begin{displaymath}
\nabla \equiv
\left(
\frac{\partial}{\partial x},
\frac{\partial}{\partial y},
\frac{\partial}{\partial z}
\right)
= {\hat\imath}\frac{\partial}{\partial x} +
{\hat\jmath}\frac{\partial}{\partial y} +
{\hat k}\frac{\partial}{\partial z}
\end{displaymath}

Nabla can be applied to a scalar function $f$ in which case it gives a vector of partial derivatives called the gradient of the function:

\begin{displaymath}
\mathop{\rm grad}\nolimits f = \nabla f =
{\hat\imath}\frac{\partial f}{\partial x} +
{\hat\jmath}\frac{\partial f}{\partial y} +
{\hat k}\frac{\partial f}{\partial z}
\end{displaymath}

Nabla can be applied to a vector in a dot product multiplication, in which case it gives a scalar function called the divergence of the vector:

\begin{displaymath}
\mathop{\rm div}\nolimits \vec v = \nabla\cdot\vec v =
\frac{\partial v_x}{\partial x} +
\frac{\partial v_y}{\partial y} +
\frac{\partial v_z}{\partial z}
\end{displaymath}

or in index notation

\begin{displaymath}
\mathop{\rm div}\nolimits \vec v = \nabla\cdot\vec v =
\sum_{i=1}^3 \frac{\partial v_i}{\partial x_i}
\end{displaymath}

Nabla can also be applied to a vector in a vectorial product multiplication, in which case it gives a vector function called the curl or rot of the vector. In index notation, the $i$-th component of this vector is

\begin{displaymath}
\left(\mathop{\rm curl}\nolimits \vec v\right)_i =
\left(\nabla\times\vec v\right)_i =
\frac{\partial v_{{\overline{\overline{\imath}}}}}{\partial x_{{\overline{\imath}}}}
- \frac{\partial v_{{\overline{\imath}}}}{\partial x_{{\overline{\overline{\imath}}}}}
\end{displaymath}

where ${\overline{\imath}}$ is the index following $i$ in the sequence 123123..., and ${\overline{\overline{\imath}}}$ the one preceding it (or the second following it).
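For example, the first component of the curl is

\begin{displaymath}
\left(\mathop{\rm curl}\nolimits \vec v\right)_1 =
\frac{\partial v_3}{\partial x_2} - \frac{\partial v_2}{\partial x_3}
= \frac{\partial v_z}{\partial y} - \frac{\partial v_y}{\partial z}
\end{displaymath}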

The operator $\nabla^2$ is called the Laplacian. In Cartesian coordinates:

\begin{displaymath}
\nabla^2 \equiv
\frac{\partial^2}{\partial x^2}+
\frac{\partial^2}{\partial y^2}+
\frac{\partial^2}{\partial z^2}
\end{displaymath}

Sometimes the Laplacian is indicated as $\Delta$.

In non-Cartesian coordinates, don’t guess; look these operators up in a table book, [4, pp. 124-126]. For example, in spherical coordinates,

\begin{displaymath}
\nabla = {\hat\imath}_r \frac{\partial}{\partial r} +
{\hat\imath}_\theta \frac{1}{r} \frac{\partial}{\partial \theta} +
{\hat\imath}_\phi \frac{1}{r \sin\theta}
\frac{\partial}{\partial \phi} %
\end{displaymath} (N.1)

That allows the gradient of a scalar function $f$, i.e. $\nabla{f}$, to be found immediately. But if you apply $\nabla$ on a vector, you have to be very careful because you also need to differentiate ${\hat\imath}_r$, ${\hat\imath}_\theta$, and ${\hat\imath}_\phi$. In particular, the correct divergence of a vector $\vec{v}$ is
\begin{displaymath}
\nabla \cdot \vec v = \frac{1}{r^2} \frac{\partial r^2 v_r}{\partial r}
+ \frac{1}{r\sin\theta} \frac{\partial \sin\theta v_\theta}{\partial\theta}
+ \frac{1}{r\sin\theta}
\frac{\partial v_\phi}{\partial\phi} %
\end{displaymath} (N.2)

The curl $\nabla$ $\times$ $\vec{v}$ of the vector is
\begin{displaymath}
\nabla\times\vec v =
\frac{{\hat\imath}_r}{r\sin\theta} \left(
\frac{\partial \sin\theta v_\phi}{\partial\theta}
- \frac{\partial v_\theta}{\partial\phi}
\right)
+ \frac{{\hat\imath}_\theta}{r} \left(
\frac{1}{\sin\theta} \frac{\partial v_r}{\partial\phi}
- \frac{\partial r v_\phi}{\partial r}
\right)
+ \frac{{\hat\imath}_\phi}{r} \left(
\frac{\partial r v_\theta}{\partial r}
- \frac{\partial v_r}{\partial\theta}
\right) %
\end{displaymath} (N.3)

Finally the Laplacian is:
\begin{displaymath}
\nabla^2 = \frac{1}{r^2}
\left\{
\frac{\partial}{\partial r}
\left( r^2 \frac{\partial}{\partial r} \right)
+ \frac{1}{\sin\theta} \frac{\partial}{\partial \theta}
\left( \sin\theta \frac{\partial}{\partial \theta} \right)
+ \frac{1}{\sin^2\theta}
\frac{\partial^2}{\partial \phi^2}
\right\} %
\end{displaymath} (N.4)

See also spherical coordinates.

Cylindrical coordinates are usually indicated as $r$, $\theta$ and $z$. Here $z$ is the Cartesian coordinate, while $r$ is the distance from the $z$-axis and $\theta$ the angle around the $z$ axis. In two dimensions, i.e. without the $z$ terms, they are usually called polar coordinates. In cylindrical coordinates:

\begin{displaymath}
\nabla = {\hat\imath}_r \frac{\partial}{\partial r} +
{\hat\imath}_\theta \frac{1}{r} \frac{\partial}{\partial \theta} +
{\hat\imath}_z \frac{\partial}{\partial z} %
\end{displaymath} (N.5)


\begin{displaymath}
\nabla \cdot \vec v = \frac{1}{r} \frac{\partial r v_r}{\partial r}
+ \frac{1}{r} \frac{\partial v_\theta}{\partial\theta}
+ \frac{\partial v_z}{\partial z} %
\end{displaymath} (N.6)


\begin{displaymath}
\nabla\times\vec{v} =
{\hat\imath}_r \left(
\frac{1}{r} \frac{\partial v_z}{\partial\theta}
- \frac{\partial v_\theta}{\partial z}
\right)
+ {\hat\imath}_\theta \left(
\frac{\partial v_r}{\partial z}
- \frac{\partial v_z}{\partial r}
\right)
+ \frac{{\hat\imath}_z}{r} \left(
\frac{\partial r v_\theta}{\partial r}
- \frac{\partial v_r}{\partial\theta}
\right) %
\end{displaymath} (N.7)


\begin{displaymath}
\nabla^2 =
\frac{1}{r} \frac{\partial}{\partial r}
\left( r \frac{\partial}{\partial r} \right)
+ \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2}
+ \frac{\partial^2}{\partial z^2} %
\end{displaymath} (N.8)

$\mathop{\Box}\nolimits$    
The D'Alembertian is defined as

\begin{displaymath}
\frac{1}{c^2}\frac{\partial^2}{\partial t^2}
- \frac{\partial^2}{\partial x^2}
- \frac{\partial^2}{\partial y^2}
- \frac{\partial^2}{\partial z^2}
\end{displaymath}

where $c$ is a constant called the wave speed.

$^*$    
A superscript star normally indicates a complex conjugate. In the complex conjugate of a number, every ${\rm i}$ is changed into $-{\rm i}$. For example, the complex conjugate of $2+3{\rm i}$ is $2-3{\rm i}$.

$<$    
Less than.

$\leqslant$    
Less than or equal.

$>$    
Greater than.

$\geqslant$    
Greater than or equal.

$=$    
Equals sign. The quantity to the left is the same as the one to the right.

$\equiv$    
Emphatic equals sign. Typically means “by definition equal” or “everywhere equal.”

$\approx$    
Indicates approximately equal. Read it as “is approximately equal to.”

$\sim$    
Indicates approximately equal. Often used when the approximation applies only when something is small or large. Read it as “is approximately equal to” or as “is asymptotically equal to.”

$\propto$    
Proportional to. The two sides are equal except for some unknown constant factor.

$\Gamma$    
(Gamma) May indicate:

$\Delta$    
(capital delta) May indicate:

$\delta$    
(delta) May indicate:

$\partial$    
(partial) Indicates a vanishingly small change or interval of the following variable. For example, $\partial{f}$/$\partial{x}$ is the ratio of a vanishingly small change in function $f$ divided by the vanishingly small change in variable $x$ that causes this change in $f$. Such ratios define derivatives, in this case the partial derivative of $f$ with respect to $x$.

$\varepsilon$    
(variant of epsilon) May indicate:

$\eta$    
(eta) May be used to indicate a $y$-position.

$\Theta$    
(capital theta) Used in this book to indicate some function of $\theta$ to be determined.

$\theta$    
(theta) May indicate:

$\vartheta$    
(variant of theta) An alternate symbol for $\theta$.

$\lambda$    
(lambda) May indicate:

$\xi$    
(xi) May indicate:

$\pi$    
(pi) May indicate:

$\rho$    
(rho) May indicate:

$\tau$    
(tau) May indicate:

$\Phi$    
(capital phi) May indicate:

$\phi$    
(phi) May indicate:

$\varphi$    
(variant of phi) May indicate:

$\omega$    
(omega) May indicate:

$A$    
May indicate:

$a$    
May indicate:

absolute    
May indicate:

adjoint    
The adjoint $A^H$ or $A^\dagger$ of a matrix is the complex-conjugate transpose of the matrix.
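For example, for a 2 $\times$ 2 matrix:

\begin{displaymath}
A =
\left(
\begin{array}{ll}
1 & {\rm i} \\
2 & 3+{\rm i}
\end{array}
\right)
\qquad
A^H =
\left(
\begin{array}{ll}
1 & 2 \\
-{\rm i} & 3-{\rm i}
\end{array}
\right)
\end{displaymath}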

Alternatively, it is the matrix you get if you take it to the other side of an inner product. (While keeping the value of the inner product the same regardless of whatever two vectors or functions may be involved.)

Hermitian matrices are self-adjoint; they are equal to their adjoint. Skew-Hermitian matrices are the negative of their adjoint.

Unitary matrices are the inverse of their adjoint. Unitary matrices generalize rotations and reflections of vectors. Unitary operators preserve inner products.

Fourier transforms are unitary operators on account of the Parseval equality that says that inner products are preserved.

angle    
Consider two semi-infinite lines extending from a common intersection point. Then the angle between these lines is defined in the following way: draw a unit circle in the plane of the lines and centered at their intersection point. The angle is then the length of the circular arc that is in between the lines. More precisely, this gives the angle in radians, rad. Sometimes an angle is expressed in degrees, where $2\pi$ rad is taken to be 360$^\circ$. However, using degrees is usually a very bad idea in science.

In three dimensions, you may be interested in the so-called solid angle $\Omega$ inside a conical surface. This angle is defined in the following way: draw a sphere of unit radius centered at the apex of the conical surface. Then the solid angle is the area of the spherical surface that is inside the cone. Solid angles are in steradians. The cone does not need to be a circular one (i.e. have a circular cross section) for this to apply. In fact, the most common case is the solid angle corresponding to an infinitesimal element ${\rm d}\theta$ $\times$ ${\rm d}\phi$ of spherical coordinate system angles. In that case the surface of the unit sphere inside the conical surface is approximately rectangular, with sides ${\rm d}\theta$ and $\sin(\theta){\rm d}\phi$. That makes the enclosed solid angle equal to ${\rm d}\Omega$ $=$ $\sin(\theta){\rm d}\theta{\rm d}\phi$.

$B$    
May indicate:

$b$    
May indicate:

basis    
A basis is a minimal set of vectors or functions that you can write all other vectors or functions in terms of. For example, the unit vectors ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ are a basis for normal three-dimensional space. Every three-dimensional vector can be written as a linear combination of the three.

$C$    
May indicate:

Cauchy-Schwarz inequality    
The Cauchy-Schwarz inequality describes a limitation on the magnitude of inner products. In particular, it says that for any vectors $\vec{v}$ and $\vec{w}$

\begin{displaymath}
\vert\vec v^H \vec w\vert \le \vert\vec v\vert \vert\vec w\vert
\end{displaymath}

For example, if $\vec v$ and $\vec w$ are real vectors, the inner product is the dot product and we have

\begin{displaymath}
\vec v\cdot \vec w = \vert\vec v\vert \vert\vec w\vert\cos\theta
\end{displaymath}

where $\vert\vec v\vert$ is the length of vector $\vec v$ and $\vert\vec w\vert$ that of $\vec w$, and $\theta$ is the angle between the two vectors. Since a cosine is less than one in magnitude, the Cauchy-Schwarz inequality is therefore true for such vectors.

$\cos$    
The cosine function, a periodic function oscillating between 1 and -1 as shown in [4, pp. 40-]. See also sin.

curl    
The curl of a vector $\vec{v}$ is defined as $\mathop{\rm curl}\nolimits \vec{v}$ $=$ $\mathop{\rm rot}\vec{v}$ $=$ $\nabla$ $\times$ $\vec{v}$.

${\rm d}$    
Indicates a vanishingly small change or interval of the following variable. For example, ${\rm d}{x}$ can be thought of as a small segment of the $x$-axis.

In three dimensions, ${\rm d}^3{\skew0\vec r}$ $\equiv$ ${\rm d}{x}{\rm d}{y}{\rm d}{z}$ is an infinitesimal volume element. The symbol $\int$ means that you sum over all such infinitesimal volume elements.

derivative    
A derivative of a function is the ratio of a vanishingly small change in a function divided by the vanishingly small change in the independent variable that causes the change in the function. The derivative of $f(x)$ with respect to $x$ is written as ${\rm d}{f}$/${\rm d}{x}$, or also simply as $f'$. Note that the derivative of function $f(x)$ is again a function of $x$: a ratio $f'$ can be found at every point $x$. The derivative of a function $f(x,y,z)$ with respect to $x$ is written as $\partial{f}$/$\partial{x}$ to indicate that there are other variables, $y$ and $z$, that do not vary.

determinant    
The determinant of a square matrix $A$ is a single number indicated by $\vert A\vert$. If this number is nonzero, $A\vec{v}$ can be any vector $\vec{w}$ for the right choice of $\vec{v}$. Conversely, if the determinant is zero, $A\vec{v}$ can only produce a very limited set of vectors; though if it can produce a vector $\vec{w}$, it can do so for multiple vectors $\vec{v}$.

There is a recursive algorithm that allows you to compute determinants from increasingly bigger matrices in terms of determinants of smaller matrices. For a 1 $\times$ 1 matrix consisting of a single number, the determinant is simply that number:

\begin{displaymath}
\left\vert a_{11} \right\vert = a_{11}
\end{displaymath}

(This determinant should not be confused with the absolute value of the number, which is written the same way. Since you normally do not deal with 1 $\times$ 1 matrices, there is normally no confusion.) For 2 $\times$ 2 matrices, the determinant can be written in terms of 1 $\times$ 1 determinants:

\begin{displaymath}
\left\vert
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right\vert
=
a_{11} \left\vert a_{22} \right\vert
- a_{12} \left\vert a_{21} \right\vert
\end{displaymath}

so the determinant is $a_{11}a_{22}-a_{12}a_{21}$ in short. For 3 $\times$ 3 matrices, you have

\begin{eqnarray*}
\lefteqn{
\left\vert
\begin{array}{lll}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{array}
\right\vert
= } \\
&& a_{11}
\left\vert
\begin{array}{ll}
a_{22} & a_{23} \\
a_{32} & a_{33}
\end{array}
\right\vert
- a_{12}
\left\vert
\begin{array}{ll}
a_{21} & a_{23} \\
a_{31} & a_{33}
\end{array}
\right\vert
+ a_{13}
\left\vert
\begin{array}{ll}
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{array}
\right\vert
\end{eqnarray*}

and you already know how to work out those 2 $\times$ 2 determinants, so you now know how to do 3 $\times$ 3 determinants. Written out fully:

\begin{displaymath}
a_{11}(a_{22}a_{33}-a_{23}a_{32})
-a_{12}(a_{21}a_{33}-a_{23}a_{31})
+a_{13}(a_{21}a_{32}-a_{22}a_{31})
\end{displaymath}

For 4 $\times$ 4 determinants,

\begin{eqnarray*}
\lefteqn{
\left\vert
\begin{array}{llll}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{array}
\right\vert
= } \\
&& a_{11}
\left\vert
\begin{array}{lll}
a_{22} & a_{23} & a_{24} \\
a_{32} & a_{33} & a_{34} \\
a_{42} & a_{43} & a_{44}
\end{array}
\right\vert
- a_{12}
\left\vert
\begin{array}{lll}
a_{21} & a_{23} & a_{24} \\
a_{31} & a_{33} & a_{34} \\
a_{41} & a_{43} & a_{44}
\end{array}
\right\vert
\\
&& + a_{13}
\left\vert
\begin{array}{lll}
a_{21} & a_{22} & a_{24} \\
a_{31} & a_{32} & a_{34} \\
a_{41} & a_{42} & a_{44}
\end{array}
\right\vert
- a_{14}
\left\vert
\begin{array}{lll}
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33} \\
a_{41} & a_{42} & a_{43}
\end{array}
\right\vert
\end{eqnarray*}

Etcetera. Note the alternating sign pattern of the terms.

As you might infer from the above, computing a good size determinant takes a large amount of work. Fortunately, it is possible to simplify the matrix to put zeros in suitable locations, and that can cut down the work of finding the determinant greatly. You are allowed to use the following manipulations without seriously affecting the computed determinant:

  1. You can transpose the matrix, i.e. change its columns into its rows.
  2. You can create zeros in a row by subtracting a suitable multiple of another row.
  3. You can also swap rows, as long as you remember that each time that you swap two rows, it will flip over the sign of the computed determinant.
  4. You can also multiply an entire row by a constant, but that will multiply the computed determinant by the same constant.
Applying these tricks in a systematic way, called “Gaussian elimination” or “reduction to lower triangular form”, you can eliminate all matrix coefficients $a_{ij}$ for which $j$ is greater than $i$, and that makes evaluating the determinant pretty much trivial.
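As a minimal illustration, subtracting 3 times the first row from the second row creates a zero without changing the determinant:

\begin{displaymath}
\left\vert
\begin{array}{ll}
1 & 2 \\
3 & 4
\end{array}
\right\vert
=
\left\vert
\begin{array}{rr}
1 & 2 \\
0 & -2
\end{array}
\right\vert
= 1 \times (-2) = -2
\end{displaymath}

which agrees with $a_{11}a_{22}-a_{12}a_{21}$ $=$ $1\times4-2\times3$ $=$ $-2$.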

div(ergence)    
The divergence of a vector $\vec{v}$ is defined as $\mathop{\rm div}\nolimits \vec{v}$ $=$ $\nabla\cdot\vec{v}$.

$e$    
May indicate:

$e^{{\rm i}ax}$    
Assuming that $a$ is an ordinary real number and $x$ a real variable, $e^{{{\rm i}}ax}$ is a complex function of magnitude one, since $e^{{{\rm i}}ax}$ $=$ $\cos(ax)+{\rm i}\sin(ax)$. The derivative of $e^{{{\rm i}}ax}$ with respect to $x$ is ${{\rm i}}ae^{{{\rm i}}ax}$.

eigenvector    
A concept from linear algebra. A vector $\vec{v}$ is an eigenvector of a matrix $A$ if $\vec{v}$ is nonzero and $A\vec{v}$ $=$ $\lambda\vec{v}$ for some number $\lambda$ called the corresponding eigenvalue.

exponential function    
A function of the form $e^{\ldots}$, also written as $\exp(\ldots)$. See function and $e$.

$F$    
May indicate:

$f$    
May indicate:

function    
A mathematical object that associates values with other values. A function $f(x)$ associates every value of $x$ with a value $f$. For example, the function $f(x)$ $=$ $x^2$ associates $x$ $=$ 0 with $f$ $=$ 0, $x$ $=$ $\frac12$ with $f$ $=$ $\frac14$, $x$ $=$ 1 with $f$ $=$ 1, $x$ $=$ 2 with $f$ $=$ 4, $x$ $=$ 3 with $f$ $=$ 9, and more generally, any arbitrary value of $x$ with the square of that value $x^2$. Similarly, function $f(x)$ $=$ $x^3$ associates any arbitrary $x$ with its cube $x^3$, $f(x)$ $=$ $\sin(x)$ associates any arbitrary $x$ with the sine of that value, etcetera.

One way of thinking of a function is as a procedure that allows you, whenever given a number, to compute another number.

functional    
A functional associates entire functions with single numbers. For example, the expectation energy is mathematically a functional: it associates any arbitrary wave function with a number: the value of the expectation energy if physics is described by that wave function.

$g$    
May indicate:

Gauss' Theorem    
This theorem, also called divergence theorem or Gauss-Ostrogradsky theorem, says that for a continuously differentiable vector $\vec{v}$,

\begin{displaymath}
\int_V \nabla \cdot \vec v { \rm d}V
=
\int_A \vec v \cdot \vec n { \rm d}A
\end{displaymath}

where the first integral is over the volume of an arbitrary region and the second integral is over the entire surface area of that region; $\vec{n}$ is the unit vector normal to the surface at each point.
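As a quick sanity check, take $\vec v$ $=$ ${\skew0\vec r}$ and a sphere of unit radius centered at the origin; then $\nabla\cdot{\skew0\vec r}$ $=$ 3 everywhere and ${\skew0\vec r}\cdot\vec n$ $=$ 1 on the surface, so

\begin{displaymath}
\int_V \nabla \cdot {\skew0\vec r} { \rm d}V = 3 \times \frac{4\pi}{3} = 4\pi
\qquad
\int_A {\skew0\vec r} \cdot \vec n { \rm d}A = 1 \times 4\pi = 4\pi
\end{displaymath}

and the two sides indeed agree.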

grad(ient)    
The gradient of a scalar $f$ is defined as $\mathop{\rm grad}\nolimits {f}$ $=$ $\nabla{f}$.

hypersphere    
A hypersphere is the generalization of the normal three-dimensional sphere to $n$-dimensional space. A sphere of radius $R$ in three-dimensional space consists of all points satisfying

\begin{displaymath}
r_1^2 + r_2^2 + r_3^2 \leqslant R^2
\end{displaymath}

where $r_1$, $r_2$, and $r_3$ are Cartesian coordinates with origin at the center of the sphere. Similarly a hypersphere in $n$-dimensional space is defined as all points satisfying

\begin{displaymath}
r_1^2 + r_2^2 + \ldots + r_n^2 \leqslant R^2
\end{displaymath}

So a two-dimensional hypersphere of radius $R$ is really just a circle of radius $R$. A one-dimensional hypersphere is really just the line segment $-R$ $\leqslant$ $x$ $\leqslant$ $R$.

The “volume” $V_n$ and surface “area” $A_n$ of an $n$-dimensional hypersphere are given by

\begin{displaymath}
V_n = C_n R^n \qquad A_n = n C_n R^{n-1}
\end{displaymath}


\begin{displaymath}
C_n = \left\{
\begin{array}{ll}
\displaystyle\frac{(2\pi)^{n/2}}{2 \times 4 \times \ldots \times n}
& \mbox{if $n$ is even} \\[10pt]
\displaystyle\frac{2 (2\pi)^{(n-1)/2}}{1 \times 3 \times \ldots \times n}
& \mbox{if $n$ is odd}
\end{array}
\right.
\end{displaymath}

(This is readily derived recursively. For a sphere of unit radius, note that the $n$-dimensional volume is an integration of $n{-}1$-dimensional volumes with respect to $r_1$. Then substitute $r_1=\sin\phi$ and look up the resulting integral in a table book. The formula for the area follows because $V=\int{A}{\rm d}{r}$, where $r$ is the distance from the origin.) In three dimensions, $C_3=4\pi/3$ according to the above formula. That makes the three-dimensional volume $4\pi{R}^3/3$ equal to the actual volume of the sphere, and the three-dimensional area $4{\pi}R^2$ equal to the actual surface area. On the other hand, in two dimensions $C_2=\pi$. That makes the two-dimensional volume ${\pi}R^2$ really the area of the circle. Similarly the two-dimensional surface area $2{\pi}R$ is really the perimeter of the circle. In one dimension $C_1=2$, and the volume $2R$ is really the length of the interval, and the area 2 is really its number of end points.
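For the record, the first few values the formula produces are

\begin{displaymath}
C_1 = 2 \qquad C_2 = \pi \qquad C_3 = \frac{4\pi}{3} \qquad C_4 = \frac{\pi^2}{2}
\end{displaymath}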

Often the infinitesimal $n$-dimensional volume element ${\rm d}^n{\skew0\vec r}$ is needed. This is the infinitesimal integration element for integration over all coordinates. It is:

\begin{displaymath}
{\rm d}^n{\skew0\vec r}= {\rm d}r_1 {\rm d}r_2 \ldots {\rm d}r_n = {\rm d}A_n {\rm d}r
\end{displaymath}

Specifically, in two dimensions:

\begin{displaymath}
{\rm d}^2{\skew0\vec r}= {\rm d}r_1 {\rm d}r_2 = {\rm d}x {\rm d}y = (r { \rm d}\theta) {\rm d}r
= {\rm d}A_2 {\rm d}r
\end{displaymath}

while in three dimensions:

\begin{displaymath}
{\rm d}^3{\skew0\vec r}= {\rm d}r_1 {\rm d}r_2 {\rm d}r_3 = {\rm d}x {\rm d}y {\rm d}z
= (r^2 \sin\theta { \rm d}\theta {\rm d}\phi) {\rm d}r = {\rm d}A_3 {\rm d}r
\end{displaymath}

The expressions in parentheses are ${\rm d}{A}_2$ in polar coordinates, respectively ${\rm d}{A}_3$ in spherical coordinates.

$\Im$    
The imaginary part of a complex number. If $c$ $=$ $c_r+{{\rm i}}c_i$ with $c_r$ and $c_i$ real numbers, then $\Im(c)$ $=$ $c_i$. Note that $c-c^*$ $=$ $2{\rm i}\Im(c)$.

$i$    
May indicate: Not to be confused with ${\rm i}$.

${\hat\imath}$    
The unit vector in the $x$-direction.

${\rm i}$    
The standard square root of minus one: ${\rm i}$ $=$ $\sqrt{-1}$, ${\rm i}^2$ $=$ $-1$, 1/${\rm i}$ $=$ $-{\rm i}$, ${\rm i}^*$ $=$ $-{\rm i}$.

index notation    
A more concise and powerful way of writing vector and matrix components by using a numerical index to indicate the components. For Cartesian coordinates, you might number the coordinates $x$ as 1, $y$ as 2, and $z$ as 3. In that case, a sum like $v_x+v_y+v_z$ can be more concisely written as $\sum_i{v}_i$. And a statement like $v_x$ $\ne$ 0, $v_y$ $\ne$ 0, $v_z$ $\ne$ 0 can be more compactly written as $v_i$ $\ne$ 0. To really see how it simplifies the notations, have a look at the matrix entry. (And that one shows only 2 by 2 matrices. Just imagine 100 by 100 matrices.)

iff    
Emphatic if. Should be read as “if and only if.”

integer    
Integer numbers are the whole numbers: $\ldots,-2,-1,0,1,2,3,4,\ldots$.

inverse    
(Of matrices.) If a matrix $A$ converts a vector $\vec{v}$ into a vector $\vec{w}$, then the inverse of the matrix, $A^{-1}$, converts $\vec{w}$ back into $\vec{v}$.

In other words, $A^{-1} A = A A^{-1} = I$ with $I$ the unit, or identity, matrix.

The inverse of a matrix only exists if the matrix is square and has nonzero determinant.
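For a 2 $\times$ 2 matrix, the inverse can be written out explicitly:

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)^{-1}
= \frac{1}{a_{11}a_{22}-a_{12}a_{21}}
\left(
\begin{array}{rr}
a_{22} & -a_{12} \\
-a_{21} & a_{11}
\end{array}
\right)
\end{displaymath}

Note that the determinant shows up in the denominator, consistent with the requirement that it be nonzero.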

irrotational    
A vector $\vec{v}$ is irrotational if its curl $\nabla$ $\times$ $\vec{v}$ is zero.

$j$    
May indicate:

${\hat\jmath}$    
The unit vector in the $y$-direction.

$k$    
May indicate:

${\hat k}$    
The unit vector in the $z$-direction.

$l$    
May indicate:

$\ell$    
May indicate:

$\lim$    
Indicates the final result of an approaching process. $\lim_{\varepsilon\to0}$ indicates for practical purposes the value of the following expression when $\varepsilon$ is extremely small.

linear combination    
A very generic concept indicating sums of objects times coefficients. For example, a position vector ${\skew0\vec r}$ is the linear combination $x{\hat\imath}+y{\hat\jmath}+z{\hat k}$ with the objects the unit vectors ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ and the coefficients the position coordinates $x$, $y$, and $z$. A linear combination of a set of functions $f_1(x),f_2(x),f_3(x),\ldots,f_n(x)$ would be the function

\begin{displaymath}
c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + \ldots + c_n f_n(x)
\end{displaymath}

where $c_1,c_2,c_3,\ldots,c_n$ are constants, i.e. independent of $x$.

linear dependence    
A set of vectors or functions is linearly dependent if at least one of the set can be expressed in terms of the others. Consider the example of a set of functions $f_1(x),f_2(x),\ldots,f_n(x)$. This set is linearly dependent if

\begin{displaymath}
c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + \ldots + c_n f_n(x) = 0
\end{displaymath}

where at least one of the constants $c_1,c_2,c_3,\ldots,c_n$ is nonzero. To see why, suppose that, say, $c_2$ is nonzero. Then you can divide by $c_2$ and rearrange to get

\begin{displaymath}
f_2(x) = - \frac{c_1}{c_2} f_1(x) - \frac{c_3}{c_2} f_3(x) - \ldots
- \frac{c_n}{c_2} f_n(x)
\end{displaymath}

That expresses $f_2(x)$ in terms of the other functions.

linear independence    
A set of vectors or functions is linearly independent if none of the set can be expressed in terms of the others. Consider the example of a set of functions $f_1(x),f_2(x),\ldots,f_n(x)$. This set is linearly independent if

\begin{displaymath}
c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + \ldots + c_n f_n(x) = 0
\end{displaymath}

only if every one of the constants $c_1,c_2,c_3,\ldots,c_n$ is zero. To see why, assume that say $f_2(x)$ could be expressed in terms of the others,

\begin{displaymath}
f_2(x) = C_1 f_1(x) + C_3 f_3(x) + \ldots + C_n f_n(x)
\end{displaymath}

Then taking $c_2$ $=$ 1, $c_1$ $=$ $-C_1$, $c_3$ $=$ $-C_3$, ..., $c_n$ $=$ $-C_n$, the condition above would be violated. So $f_2$ cannot be expressed in terms of the others.

$m$    
May indicate:

matrix    
A table of numbers.

As a simple example, a two-dimensional matrix $A$ is a table of four numbers called $a_{11}$, $a_{12}$, $a_{21}$, and $a_{22}$:

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\end{displaymath}

unlike a two-dimensional (ket) vector $\vec{v}$, which would consist of only two numbers $v_1$ and $v_2$ arranged in a column:

\begin{displaymath}
\left(
\begin{array}{l}
v_1 \\
v_2
\end{array}
\right)
\end{displaymath}

(Such a vector can be seen as a rectangular matrix of size 2 $\times$ 1, but let’s not get into that.)

In index notation, a matrix $A$ is a set of numbers $\{a_{ij}\}$ indexed by two indices. The first index $i$ is the row number, the second index $j$ is the column number. A matrix turns a vector $\vec{v}$ into another vector $\vec{w}$ according to the recipe

\begin{displaymath}
w_i = \sum_{\mbox{{\scriptsize all }}j} a_{ij} v_j \quad \mbox{for all $i$}
\end{displaymath}

where $v_j$ stands for “the $j$-th component of vector $\vec{v}$,” and $w_i$ for “the $i$-th component of vector $\vec{w}$.”

As an example, the product of $A$ and $\vec{v}$ above is by definition

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\left(
\begin{array}{l}
v_1 \\
v_2
\end{array}
\right)
=
\left(
\begin{array}{l}
a_{11} v_1 + a_{12} v_2 \\
a_{21} v_1 + a_{22} v_2
\end{array}
\right)
\end{displaymath}

which is another two-dimensional ket vector.

Note that in matrix multiplications like the example above, in geometric terms you take dot products between the rows of the first factor and the column of the second factor.

To multiply two matrices together, just think of the columns of the second matrix as separate vectors. For example:

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\left(
\begin{array}{ll}
b_{11} & b_{12} \\
b_{21} & b_{22}
\end{array}
\right)
=
\left(
\begin{array}{ll}
a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\
a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22}
\end{array}
\right)
\end{displaymath}

which is another two-dimensional matrix. In index notation, the $ij$ component of the product matrix has value $\sum_ka_{ik}b_{kj}$.

The zero matrix is like the number zero; it does not change a matrix it is added to and turns whatever it is multiplied with into zero. A zero matrix is zero everywhere. In two dimensions:

\begin{displaymath}
\left(
\begin{array}{ll}
0 & 0 \\
0 & 0
\end{array}
\right)
\end{displaymath}

A unit matrix is the equivalent of the number one for matrices; it does not change the quantity it is multiplied with. A unit matrix is one on its “main diagonal” and zero elsewhere. The 2 by 2 unit matrix is:

\begin{displaymath}
\left(
\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}
\right)
\end{displaymath}

More generally, the coefficients $\{\delta_{ij}\}$ of a unit matrix are one if $i$ $=$ $j$ and zero otherwise.

The transpose of a matrix $A$, $A^{\rm {T}}$, is what you get if you switch the two indices. Graphically, it turns its rows into its columns and vice versa. The Hermitian adjoint $A^H$ is what you get if you switch the two indices and then take the complex conjugate of every element. If you want to take a matrix to the other side of an inner product, you will need to change it to its Hermitian adjoint. Hermitian matrices are equal to their Hermitian adjoint, so this does nothing for them.

See also determinant and eigenvector.

$n$    
May indicate: and maybe some other stuff.

natural    
Natural numbers are the numbers: $1,2,3,4,\ldots$.

normal    
A normal operator or matrix is one that has orthonormal eigenfunctions or eigenvectors. Since eigenvectors are not orthonormal in general, a normal operator or matrix is abnormal!

For an operator or matrix $A$ to be normal, it must commute with its Hermitian adjoint, $[A,A^\dagger]$ $=$ 0. Hermitian matrices are normal since they are equal to their Hermitian adjoint. Skew-Hermitian matrices are normal since they are equal to the negative of their Hermitian adjoint. Unitary matrices are normal because they are the inverse of their Hermitian adjoint.

O    
May indicate the origin of the coordinate system.

opposite    
The opposite of a number $a$ is $-a$. In other words, it is the additive inverse.

perpendicular bisector    
For two given points $P$ and $Q$, the perpendicular bisector consists of all points $R$ that are equally far from $P$ as they are from $Q$. In two dimensions, the perpendicular bisector is the line that passes through the point exactly half way in between $P$ and $Q$, and that is orthogonal to the line connecting $P$ and $Q$. In three dimensions, the perpendicular bisector is the plane that passes through the point exactly half way in between $P$ and $Q$, and that is orthogonal to the line connecting $P$ and $Q$. In vector notation, the perpendicular bisector of points $P$ and $Q$ is all points $R$ whose radius vector ${\skew0\vec r}$ satisfies the equation:

\begin{displaymath}
({\skew0\vec r}-{\skew0\vec r}_P)\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)
= {\textstyle\frac{1}{2}}
({\skew0\vec r}_Q-{\skew0\vec r}_P)\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)
\end{displaymath}

(Note that the halfway point, for which ${\skew0\vec r}-{\skew0\vec r}_P$ $=$ ${\textstyle\frac{1}{2}}({\skew0\vec r}_Q-{\skew0\vec r}_P)$, is included in this formula, as is the halfway point plus any vector that is normal to $({\skew0\vec r}_Q-{\skew0\vec r}_P)$.)

phase angle    
Any complex number can be written in polar form as $c$ $=$ $\vert c\vert e^{{\rm i}\alpha}$, where both the magnitude $\vert c\vert$ and the phase angle $\alpha$ are real numbers. Note that when the phase angle varies from zero to $2\pi$, the complex number $c$ varies from positive real to positive imaginary to negative real to negative imaginary and back to positive real. When the complex number is plotted in the complex plane, the phase angle is the direction of the number relative to the origin. The phase angle $\alpha$ is often called the argument, but so is about everything else in mathematics, so that is not very helpful.

In complex time-dependent waves of the form $e^{{\rm i}({\omega}t-\phi)}$, and its real equivalent $\cos({\omega}t-\phi)$, the phase angle $\phi$ gives the angular argument of the wave at time zero.

$q$    
May indicate:

$R$    
May indicate:

$\Re$    
The real part of a complex number. If $c$ $=$ $c_r+{{\rm i}}c_i$ with $c_r$ and $c_i$ real numbers, then $\Re(c)$ $=$ $c_r$. Note that $c+c^*$ $=$ $2\Re(c)$.

$r$    
May indicate:

${\skew0\vec r}$    
The position vector. In Cartesian coordinates $(x,y,z)$ or $x{\hat\imath}+y{\hat\jmath}+z{\hat k}$. In spherical coordinates $r\hat\imath_r$. Its three Cartesian components may be indicated by $r_1,r_2,r_3$ or by $x,y,z$ or by $x_1,x_2,x_3$.

reciprocal    
The reciprocal of a number $a$ is 1/$a$. In other words, it is the multiplicative inverse.

rot    
The rot of a vector $\vec{v}$ is defined as $\mathop{\rm curl}\nolimits \vec{v}$ $\equiv$ $\mathop{\rm rot}\vec{v}$ $\equiv$ $\nabla$ $\times$ $\vec{v}$.

scalar    
A quantity characterized by a single number.

$\sin$    
The sine function, a periodic function oscillating between 1 and $-1$ as shown in [4, pp. 40-]. Good to remember: $\cos^2\alpha+\sin^2\alpha$ $=$ 1 and $\sin2\alpha$ $=$ $2\sin\alpha\cos\alpha$ and $\cos2\alpha$ $=$ $\cos^2\alpha-\sin^2\alpha$.

spherical coordinates    
The spherical coordinates $r$, $\theta$, and $\phi$ of an arbitrary point P are defined as shown in figure N.1; here $r$ is the distance from the origin to P, $\theta$ is the angle between the position vector of P and the $z$-axis, and $\phi$ is the angle of rotation of the position vector around the $z$-axis.

Figure N.1: Spherical coordinates of an arbitrary point P.

In Cartesian coordinates, the unit vectors in the $x$, $y$, and $z$ directions are called ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$. Similarly, in spherical coordinates, the unit vectors in the $r$, $\theta$, and $\phi$ directions are called ${\hat\imath}_r$, ${\hat\imath}_\theta$, and ${\hat\imath}_\phi$. Here, say, the $\theta$ direction is defined as the direction of the change in position if you increase $\theta$ by an infinitesimally small amount while keeping $r$ and $\phi$ the same. Note therefore in particular that the direction of ${\hat\imath}_r$ is the same as that of ${\skew0\vec r}$; radially outward.

An arbitrary vector $\vec{v}$ can be decomposed in components $v_r$, $v_\theta$, and $v_\phi$ along these unit vectors. In particular

\begin{displaymath}
\vec v \equiv v_r {\hat\imath}_r + v_\theta {\hat\imath}_\theta + v_\phi {\hat\imath}_\phi
\end{displaymath}

Recall from calculus that in spherical coordinates, a volume integral of an arbitrary function $f$ takes the form

\begin{displaymath}
\int f { \rm d}^3{\skew0\vec r}= \int\int\int f r^2 \sin\theta { \rm d}r {\rm d}\theta {\rm d}\phi
\end{displaymath}

In other words, the volume element in spherical coordinates is

\begin{displaymath}
{\rm d}V = {\rm d}^3 {\skew0\vec r}= r^2 \sin\theta { \rm d}r {\rm d}\theta {\rm d}\phi
\end{displaymath}

Often it is convenient to think of volume integrations as a two-step process: first perform an integration over the angular coordinates $\theta$ and $\phi$. Physically, that integrates over spherical surfaces. Then perform an integration over $r$ to integrate all the spherical surfaces together. The combined infinitesimal angular integration element

\begin{displaymath}
{\rm d}\Omega = \sin\theta {\rm d}\theta {\rm d}\phi
\end{displaymath}

is called the infinitesimal solid angle ${\rm d}\Omega$. In two-dimensional polar coordinates $r$ and $\theta$, the equivalent would be the infinitesimal polar angle ${\rm d}\theta$. Recall that ${\rm d}\theta$ (in proper radians, of course) equals the arc length of an infinitesimal part of the circle of integration divided by the circle radius. Similarly, ${\rm d}\Omega$ is the surface of an infinitesimal part of the sphere of integration divided by the square of the sphere radius.
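Integrated over all directions, the infinitesimal solid angles add up to the total area of the unit sphere:

\begin{displaymath}
\int_0^{2\pi}\int_0^{\pi} \sin\theta { \rm d}\theta {\rm d}\phi = 2\pi \times 2 = 4\pi
\end{displaymath}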

See the $\nabla$ entry for the gradient operator and Laplacian in spherical coordinates.

Stokes' Theorem    
This theorem, first derived by Kelvin and first published by someone else I cannot recall, says that for any reasonably smoothly varying vector $\vec{v}$,

\begin{displaymath}
\int_A \left(\nabla \times \vec v\right) \cdot \vec n { \rm d}A
=
\oint \vec v \cdot {\rm d}\vec r
\end{displaymath}

where the first integral is over any smooth surface area $A$ and the second integral is over the edge of that surface; $\vec{n}$ is the unit vector normal to the surface. How did Stokes get his name on it? He tortured his students with it, that’s how!

One important consequence of the Stokes theorem is for vector fields $\vec{v}$ that are irrotational, i.e. that have $\nabla$ $\times$ $\vec{v}$ $=$ 0. Such fields can be written as

\begin{displaymath}
\vec v = \nabla f \qquad
f({\skew0\vec r}) \equiv \int_{{\skew0\vec r}_{\rm ref}}^{{\skew0\vec r}}
\vec v({\underline{\skew0\vec r}})\cdot{\rm d}{\underline{\skew0\vec r}}
\end{displaymath}

Here ${\skew0\vec r}_{\rm {ref}}$ is the position of an arbitrarily chosen reference point, usually the origin. The reason the field $\vec{v}$ can be written this way is the Stokes theorem. Because of the theorem, it does not make a difference along which path from ${\skew0\vec r}_{\rm {ref}}$ to ${\skew0\vec r}$ you integrate. (Any two paths give the same answer, as long as $\vec{v}$ is irrotational everywhere in between the paths.) So the definition of $f$ is unambiguous. And you can verify that the partial derivatives of $f$ give the components of $\vec{v}$ by approaching the final position ${\skew0\vec r}$ in the integration from the corresponding direction.

symmetry    
A symmetry is an operation under which an object does not change. For example, a human face is almost, but not completely, mirror symmetric: it looks almost the same in a mirror as when seen directly. The electrical field of a single point charge is spherically symmetric; it looks the same from whatever angle you look at it, just like a sphere does. A simple smooth glass (like a glass of water) is cylindrically symmetric; it looks the same whatever way you rotate it around its vertical axis.

$t$    
May indicate:

triple product    
A product of three vectors. There are two different versions: the scalar triple product $\vec a\cdot(\vec b\times\vec c)$ and the vectorial triple product $\vec a\times(\vec b\times\vec c)$.

$u$    
May indicate:

$V$    
May indicate:

$v$    
May indicate:

$\vec{v}$    
May indicate:

vector    
A quantity characterized by a list of numbers. A vector $\vec v$ in index notation is a set of numbers $\{v_i\}$ indexed by an index $i$. In normal three-dimensional Cartesian space, $i$ takes the values 1, 2, and 3, making the vector a list of three numbers, $v_1$, $v_2$, and $v_3$. These numbers are called the three components of $\vec{v}$.

vectorial product    
A vectorial product, or cross product, is a product of vectors that produces another vector. If

\begin{displaymath}
\vec c=\vec a\times\vec b,
\end{displaymath}

it means in index notation that the $i$-th component of vector $\vec{c}$ is

\begin{displaymath}
c_i = a_{{\overline{\imath}}} b_{{\overline{\overline{\imath}}}}
- a_{{\overline{\overline{\imath}}}} b_{{\overline{\imath}}}
\end{displaymath}

where ${\overline{\imath}}$ is the index following $i$ in the sequence 123123..., and ${\overline{\overline{\imath}}}$ the one preceding it. For example, $c_1$ will equal $a_2b_3-a_3b_2$.
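Written out in full, the three components are

\begin{displaymath}
c_1 = a_2 b_3 - a_3 b_2 \qquad
c_2 = a_3 b_1 - a_1 b_3 \qquad
c_3 = a_1 b_2 - a_2 b_1
\end{displaymath}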

$w$    
May indicate:

$\vec{w}$    
Generic vector.

$X$    
Used in this book to indicate a function of $x$ to be determined.

$x$    
May indicate:

$Y$    
Used in this book to indicate a function of $y$ to be determined.

$y$    
May indicate:

$Z$    
Used in this book to indicate a function of $z$ to be determined.

$z$    
May indicate: