Notations

Below are the simplest possible descriptions of various symbols, just to help you keep reading if you do not remember/know what they stand for. Don't cite them on a math test and then blame this book for your grade.

Watch it. There are so many ad hoc usages of symbols that some will have been overlooked here. Always use common sense first in guessing what a symbol means in a given context.

The quoted values of physical constants are usually taken from NIST CODATA in 2012 or later. The final digit of the listed value is normally doubtful. (It corresponds to the first nonzero digit of the standard deviation.) Numbers ending in triple dots are exact and could be written down to more digits than listed if needed.

$\cdot$    
A dot might indicate a multiplication, such as the dot product of two vectors, and also many more prosaic things (punctuation signs, decimal points, ...).

$\times$    
Multiplication symbol. May indicate:

$!$    
Might be used to indicate a factorial. Example: 5! $=$ 1 $\times$ 2 $\times$ 3 $\times$ 4 $\times$ 5 $=$ 120.

The function that generalizes $n!$ to noninteger values of $n$ is called the gamma function; $n!$ $=$ $\Gamma(n+1)$. The gamma function generalization is due to, who else, Euler. (However, the fact that $n!$ $=$ $\Gamma(n+1)$ instead of $n!$ $=$ $\Gamma(n)$ is due to the idiocy of Legendre.) In Legendre-resistant notation,

\begin{displaymath}
n!=\int_0^{\infty}t^ne^{-t}{ \rm d}{t}
\end{displaymath}

Straightforward integration shows that 0! is 1 as it should be, and integration by parts shows that $(n+1)!$ $=$ $(n+1)n!$, which ensures that the integral also produces the correct value of $n!$ for any integer value of $n$ higher than 0. The integral, however, exists for any real value of $n$ above $-1$, not just integers. The values of the integral are always positive, tending to positive infinity both for $n\downarrow-1$ (because the integral then blows up at small values of $t$) and for $n\uparrow\infty$ (because the integral then blows up at medium-large values of $t$). In particular, Stirling's formula says that for large positive $n$, $n!$ can be approximated as

\begin{displaymath}
n! \sim \sqrt{2\pi n} n^n e^{-n} \left[1 + \ldots\right]
\end{displaymath}

where the value indicated by the dots becomes negligibly small for large $n$. The function $n!$ can be extended further to any complex value of $n$, except the negative integer values of $n$, where $n!$ is infinite, but is then no longer positive. Euler's integral can be done for $n$ $=$ $-\frac12$ by making the change of variables $\sqrt{t}$ $=$ $u$, producing the integral $\int_0^\infty2e^{-u^2}{\rm d}{u}$, or $\int_{-\infty}^{\infty}e^{-u^2}{\rm d}{u}$, which equals $\sqrt{\int_{-\infty}^{\infty}e^{-x^2}{\rm d}{x}\int_{-\infty}^{\infty}e^{-y^2}{\rm d}{y}}$, and the integral under the square root can be done analytically using polar coordinates. The result is that

\begin{displaymath}
(-\frac12)! = \int_{-\infty}^{\infty}e^{-u^2}{ \rm d}{u} = \sqrt{\pi}
\end{displaymath}

To get $\frac12!$, multiply by $\frac12$, since $n!$ $=$ $n(n-1)!$.
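Worked out, that gives

\begin{displaymath}
{\textstyle\frac12}! = {\textstyle\frac12} \left(-{\textstyle\frac12}\right)! =
{\textstyle\frac12} \sqrt{\pi} \approx 0.886
\end{displaymath}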

A double exclamation mark may mean every second item is skipped, e.g. 5!! $=$ 1 $\times$ 3 $\times$ 5. In general, $(2n+1)!!$ $=$ $(2n+1)!$/$2^nn!$. Of course, 5!! should logically mean (5!)!. Logic would indicate that 5 $\times$ 3 $\times$ 1 should be indicated by something like 5!'. But what is logic in physics?
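As a quick check of that general formula, take $n$ $=$ 2:

\begin{displaymath}
(2n+1)!! = 5!! = 1 \times 3 \times 5 = 15
\qquad
\frac{(2n+1)!}{2^n n!} = \frac{5!}{2^2 \: 2!} = \frac{120}{8} = 15
\end{displaymath}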

$\vert$    
May indicate:

$\vert\ldots\rangle$    
A ket is used to indicate some state. For example, ${\left\vert l\:m\right\rangle}$ indicates an angular momentum state with azimuthal quantum number $l$ and magnetic quantum number $m$. Similarly, ${\left\vert\frac12\;{-}\frac12\right\rangle}$ is the spin-down state of a particle with spin $\frac12$. Other common ones are ${\left\vert{\underline x}\right\rangle}$ for the position eigenfunction ${\underline x}$, i.e. $\delta(x-{\underline x})$, ${\left\vert 1{\rm {s}}\right\rangle}$ for the 1s or $\psi_{100}$ hydrogen state, ${\left\vert 2{\rm {p}}_z\right\rangle}$ for the 2p$_z$ or $\psi_{210}$ state, etcetera. In short, whatever can indicate some state can be pushed into a ket.

$\langle\ldots\vert$    
A bra is like a ket ${\left\vert\ldots\right\rangle}$, but appears in the left side of inner products, instead of the right one.

$\uparrow$    
Indicates the spin-up state. Mathematically, equals the function ${\uparrow}(S_z)$ which is by definition equal to 1 at $S_z$ $=$ $\frac12\hbar$ and equal to 0 at $S_z$ $=$ $-\frac12\hbar$. A spatial wave function multiplied by $\uparrow$ is a particle in that spatial state with its spin up. For multiple particles, the spins are listed with particle 1 first.

$\downarrow$    
Indicates the spin-down state. Mathematically, equals the function ${\downarrow}(S_z)$ which is by definition equal to 0 at $S_z$ $=$ $\frac12\hbar$ and equal to 1 at $S_z$ $=$ $-\frac12\hbar$. A spatial wave function multiplied by $\downarrow$ is a particle in that spatial state with its spin down. For multiple particles, the spins are listed with particle 1 first.

$\sum$    
Summation symbol. Example: if in three-dimensional space a vector $\vec{f}$ has components $f_1$ $=$ 2, $f_2$ $=$ 1, $f_3$ $=$ 4, then $\sum_{{\rm all\ }i}f_i$ stands for $2+1+4$ $=$ 7.

One important thing to remember: the symbol used for the summation index does not make a difference: $\sum_{{\rm all\ }j}f_j$ is exactly the same as $\sum_{{\rm all\ }i}f_i$. So freely rename the index, but always make sure that the new name is not already used for something else in the part that it appears in. If you use the same name for two different things, it becomes a mess.

Related to that, $\sum_{{\rm all\ }i}f_i$ is not something that depends on an index $i$. It is just a single combined number, like 7 in the example above. It is commonly said that the summation index sums away.

$\prod$    
(Not to be confused with ${\mit\Pi}$ further down.) Multiplication symbol. Example: if in three-dimensional space a vector $\vec{f}$ has components $f_1$ $=$ 2, $f_2$ $=$ 1, $f_3$ $=$ 4, then $\prod_{{\rm all\ }i}f_i$ stands for $2\times1\times4$ $=$ 8.

One important thing to remember: the symbol used for the multiplication index does not make a difference: $\prod_{{\rm all\ }j}f_j$ is exactly the same as $\prod_{{\rm all\ }i}f_i$. So freely rename the index, but always make sure that the new name is not already used for something else in the part that it appears in. If you use the same name for two different things, it becomes a mess.

Related to that, $\prod_{{\rm all\ }i}f_i$ is not something that depends on an index $i$. It is just a single combined number, like 8 in the example above. It is commonly said that the multiplication index factors away. (By whom?)

$\int$    
Integration symbol, the continuous version of the summation symbol. For example,

\begin{displaymath}
\int_{\mbox{\scriptsize all }x} f(x){ \rm d}x
\end{displaymath}

is the summation of $f(x){\rm d}{x}$ over all infinitesimally small fragments ${\rm d}{x}$ that make up the entire $x$-range. For example, $\int_{x=0}^2(2+x){\rm d}{x}$ equals 3 $\times$ 2 $=$ 6; the average value of $2+x$ between $x$ $=$ 0 and $x$ $=$ 2 is 3, and the sum of all the infinitesimally small segments ${\rm d}{x}$ gives the total length 2 of the range in $x$ from 0 to 2.
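The same value follows from the antiderivative, as a check:

\begin{displaymath}
\int_{x=0}^2 (2+x) {\rm d}{x} =
\left[ 2x + {\textstyle\frac12} x^2 \right]_{x=0}^2 = 4 + 2 = 6
\end{displaymath}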

One important thing to remember: the symbol used for the integration variable does not make a difference: $\int_{{\rm all\ }y}f(y){\rm d}{y}$ is exactly the same as $\int_{{\rm all\ }x}f(x){\rm d}{x}$. So freely rename the integration variable, but always make sure that the new name is not already used for something else in the part it appears in. If you use the same name for two different things, it becomes a mess.

Related to that, $\int_{{\rm all\ }x}f(x){\rm d}{x}$ is not something that depends on a variable $x$. It is just a combined number, like 6 in the example above. It is commonly said that the integration variable integrates away.

$\to$    
May indicate:

$\vec{\phantom{a}}$    
Vector symbol. An arrow above a letter indicates it is a vector. A vector is a quantity that requires more than one number to be characterized. Typical vectors in physics include position ${\skew0\vec r}$, velocity $\vec{v}$, linear momentum ${\skew0\vec p}$, acceleration $\vec{a}$, force $\vec{F}$, angular momentum $\vec{L}$, etcetera.

$\widehat{\phantom{a}}$    
A hat over a letter in this book indicates that it is an operator, which turns functions into other functions.

$'$    
May indicate:

$\nabla$    
The spatial differentiation operator nabla. In Cartesian coordinates:

\begin{displaymath}
\nabla \equiv
\left(
\frac{\partial}{\partial x},
\frac{\partial}{\partial y},
\frac{\partial}{\partial z}
\right)
= {\hat\imath}\frac{\partial}{\partial x} +
{\hat\jmath}\frac{\partial}{\partial y} +
{\hat k}\frac{\partial}{\partial z}
\end{displaymath}

Nabla can be applied to a scalar function $f$, in which case it gives a vector of partial derivatives called the gradient of the function:

\begin{displaymath}
\mathop{\rm grad}\nolimits f = \nabla f =
{\hat\imath}\frac{\partial f}{\partial x} +
{\hat\jmath}\frac{\partial f}{\partial y} +
{\hat k}\frac{\partial f}{\partial z}
\end{displaymath}

Nabla can be applied to a vector in a dot product multiplication, in which case it gives a scalar function called the divergence of the vector:

\begin{displaymath}
\mathop{\rm div}\nolimits \vec v = \nabla\cdot\vec v =
\frac{\partial v_x}{\partial x} +
\frac{\partial v_y}{\partial y} +
\frac{\partial v_z}{\partial z}
\end{displaymath}

or in index notation

\begin{displaymath}
\mathop{\rm div}\nolimits \vec v = \nabla\cdot\vec v =
\sum_{i=1}^3 \frac{\partial v_i}{\partial x_i}
\end{displaymath}

Nabla can also be applied to a vector in a vectorial product multiplication, in which case it gives a vector function called the curl or rot of the vector. In index notation, the $i$-th component of this vector is

\begin{displaymath}
\left(\mathop{\rm curl}\nolimits \vec v\right)_i =
\left(\nabla\times\vec v\right)_i =
\frac{\partial v_{{\overline{\overline{\imath}}}}}{\partial x_{{\overline{\imath}}}}
- \frac{\partial v_{{\overline{\imath}}}}{\partial x_{{\overline{\overline{\imath}}}}}
\end{displaymath}

where ${\overline{\imath}}$ is the index following $i$ in the sequence 123123..., and ${\overline{\overline{\imath}}}$ the one preceding it (or the second following it).
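For instance, for $i$ $=$ 1 this recipe gives the familiar first component

\begin{displaymath}
\left(\nabla\times\vec v\right)_1 =
\frac{\partial v_3}{\partial x_2} - \frac{\partial v_2}{\partial x_3}
\end{displaymath}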

The operator $\nabla^2$ is called the Laplacian. In Cartesian coordinates:

\begin{displaymath}
\nabla^2 \equiv
\frac{\partial^2}{\partial x^2}+
\frac{\partial^2}{\partial y^2}+
\frac{\partial^2}{\partial z^2}
\end{displaymath}

Sometimes the Laplacian is indicated as $\Delta$. In relativistic index notation it is equal to $\partial_i\partial^i$, with maybe a minus sign depending on who you talk with.

In non-Cartesian coordinates, don't guess; look these operators up in a table book [41, pp. 124-126]. For example, in spherical coordinates,

\begin{displaymath}
\nabla = {\hat\imath}_r \frac{\partial}{\partial r} +
{\hat\imath}_\theta \frac{1}{r} \frac{\partial}{\partial \theta} +
{\hat\imath}_\phi \frac{1}{r \sin\theta}
\frac{\partial}{\partial \phi} %
\end{displaymath} (N.2)

That allows the gradient of a scalar function $f$, i.e. $\nabla{f}$, to be found immediately. But if you apply $\nabla$ on a vector, you have to be very careful because you also need to differentiate ${\hat\imath}_r$, ${\hat\imath}_\theta$, and ${\hat\imath}_\phi$. In particular, the correct divergence of a vector $\vec{v}$ is
\begin{displaymath}
\nabla \cdot \vec v = \frac{1}{r^2} \frac{\partial r^2 v_r}{\partial r}
+ \frac{1}{r\sin\theta} \frac{\partial \sin\theta\, v_\theta}{\partial\theta}
+ \frac{1}{r\sin\theta}
\frac{\partial v_\phi}{\partial\phi} %
\end{displaymath} (N.3)

The curl $\nabla$ $\times$ $\vec{v}$ of the vector is
\begin{displaymath}
\nabla\times\vec v =
\frac{{\hat\imath}_r}{r\sin\theta} \left(
\frac{\partial \sin\theta\, v_\phi}{\partial\theta}
- \frac{\partial v_\theta}{\partial\phi}
\right)
+ \frac{{\hat\imath}_\theta}{r} \left(
\frac{1}{\sin\theta} \frac{\partial v_r}{\partial\phi}
- \frac{\partial r v_\phi}{\partial r}
\right)
+ \frac{{\hat\imath}_\phi}{r} \left(
\frac{\partial r v_\theta}{\partial r}
- \frac{\partial v_r}{\partial\theta}
\right) %
\end{displaymath} (N.4)

Finally the Laplacian is:
\begin{displaymath}
\nabla^2 = \frac{1}{r^2}
\left\{
\frac{\partial}{\partial r}
\left( r^2 \frac{\partial}{\partial r} \right)
+ \frac{1}{\sin\theta} \frac{\partial}{\partial \theta}
\left( \sin\theta \frac{\partial}{\partial \theta} \right)
+ \frac{1}{\sin^2\theta}
\frac{\partial^2}{\partial \phi^2}
\right\} %
\end{displaymath} (N.5)

See also spher­i­cal co­or­di­nates.
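As a simple illustration of (N.5), the function $1/r$ has zero Laplacian away from the origin:

\begin{displaymath}
\nabla^2 \frac{1}{r} = \frac{1}{r^2}
\frac{\partial}{\partial r}
\left( r^2 \frac{\partial}{\partial r} \frac{1}{r} \right)
= \frac{1}{r^2} \frac{\partial}{\partial r} \left( -1 \right)
= 0
\qquad \mbox{for } r \ne 0
\end{displaymath}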

Cylindrical coordinates are usually indicated as $r$, $\theta$ and $z$. Here $z$ is the Cartesian coordinate, while $r$ is the distance from the $z$-axis and $\theta$ the angle around the $z$-axis. In two dimensions, i.e. without the $z$ terms, they are usually called polar coordinates. In cylindrical coordinates:

\begin{displaymath}
\nabla = {\hat\imath}_r \frac{\partial}{\partial r} +
{\hat\imath}_\theta \frac{1}{r} \frac{\partial}{\partial \theta} +
{\hat\imath}_z \frac{\partial}{\partial z} %
\end{displaymath} (N.6)


\begin{displaymath}
\nabla \cdot \vec v = \frac{1}{r} \frac{\partial r v_r}{\partial r}
+ \frac{1}{r} \frac{\partial v_\theta}{\partial\theta}
+ \frac{\partial v_z}{\partial z} %
\end{displaymath} (N.7)


\begin{displaymath}
\nabla\times\vec{v} =
{\hat\imath}_r \left(
\frac{1}{r} \frac{\partial v_z}{\partial\theta}
- \frac{\partial v_\theta}{\partial z}
\right)
+ {\hat\imath}_\theta \left(
\frac{\partial v_r}{\partial z}
- \frac{\partial v_z}{\partial r}
\right)
+ \frac{{\hat\imath}_z}{r} \left(
\frac{\partial r v_\theta}{\partial r}
- \frac{\partial v_r}{\partial\theta}
\right) %
\end{displaymath} (N.8)


\begin{displaymath}
\nabla^2 =
\frac{1}{r} \frac{\partial}{\partial r}
\left( r \frac{\partial}{\partial r} \right)
+ \frac{1}{r^2} \frac{\partial^2}{\partial \theta^2}
+ \frac{\partial^2}{\partial z^2} %
\end{displaymath} (N.9)

$\mathop{\Box}\nolimits$    
The D'Alembertian is defined as

\begin{displaymath}
\frac{1}{c^2}\frac{\partial^2}{\partial t^2}
- \frac{\partial^2}{\partial x^2}
- \frac{\partial^2}{\partial y^2}
- \frac{\partial^2}{\partial z^2}
\end{displaymath}

where $c$ is a constant called the wave speed. In relativistic index notation, $\mathop{\Box}\nolimits$ is equal to $-\partial_\mu\partial^\mu$.

$^*$    
A superscript star normally indicates a complex conjugate. In the complex conjugate of a number, every ${\rm i}$ is changed into a $-{\rm i}$.
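For example,

\begin{displaymath}
(2 + 3{\rm i})^* = 2 - 3{\rm i}
\qquad
c c^* = \vert c \vert^2 \mbox{ for any complex number $c$}
\end{displaymath}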

$<$    
Less than.

$\leqslant$    
Less than or equal.

$\langle\ldots\rangle$    
May indicate:

$>$    
Greater than.

$\geqslant$    
Greater than or equal.

$[\ldots]$    
May indicate:

$=$    
Equals sign. The quantity to the left is the same as the one to the right.

$\equiv$    
Emphatic equals sign. Typically means “by definition equal” or “everywhere equal.”

$\approx$    
Indicates approximately equal. Read it as “is approximately equal to.”

$\sim$    
Indicates approximately equal. Often used when the approximation applies only when something is small or large. Read it as “is approximately equal to” or as “is asymptotically equal to.”

$\propto$    
Proportional to. The two sides are equal except for some unknown constant factor.

$\alpha$    
(alpha) May indicate:

$\beta$    
(beta) May indicate:

$\Gamma$    
(Gamma) May indicate:

$\gamma$    
(gamma) May indicate:

$\Delta$    
(capital delta) May indicate:

$\delta$    
(delta) May indicate:

$\partial$    
(partial) Indicates a vanishingly small change or interval of the following variable. For example, $\partial{f}$/$\partial{x}$ is the ratio of a vanishingly small change in function $f$ divided by the vanishingly small change in variable $x$ that causes this change in $f$. Such ratios define derivatives, in this case the partial derivative of $f$ with respect to $x$.

Also used in relativistic index notation, chapter 1.2.5.

$\epsilon$    
(epsilon) May indicate:

$\varepsilon$    
(variant of epsilon) May indicate:

$\eta$    
(eta) May be used to indicate a $y$-position of a particle.

$\Theta$    
(capital theta) Used in this book to indicate some function of $\theta$ to be determined.

$\theta$    
(theta) May indicate:

$\vartheta$    
(variant of theta) An alternate symbol for $\theta$.

$\kappa$    
(kappa) May indicate:

$\Lambda$    
(Lambda) May indicate:

$\lambda$    
(lambda) May indicate:

$\mu$    
(mu) May indicate:

$\nu$    
(nu) May indicate:

$\xi$    
(xi) May indicate:

${\mit\Pi}$    
(oblique Pi) (Not to be confused with $\prod$ described higher up.) Parity operator. Replaces ${\skew0\vec r}$ by $-{\skew0\vec r}$. That is equivalent to a mirroring in a mirror through the origin, followed by a 180$^{\circ}$ rotation around the axis normal to the mirror.

$\pi$    
(pi) May indicate:

$\tilde\pi$    
Canonical momentum density.

$\rho$    
(rho) May indicate:

$\sigma$    
(sigma) May indicate:

$\tau$    
(tau) May indicate:

$\Phi$    
(capital phi) May indicate:

$\phi$    
(phi) May indicate:

$\varphi$    
(variant of phi) May indicate:

$\chi$    
(chi) May indicate:

$\Psi$    
(capital psi) Upper case psi is used for the wave function.

$\psi$    
(psi) Typically used to indicate an energy eigenfunction. Depending on the system, indices may be added to distinguish different ones. If you find a case where $\psi$ is used instead of $\Psi$ to indicate a system in an energy eigenstate, let me know and I will change it. A system in an energy eigenstate should be written as $\Psi$ $=$ $c\psi$, not $\psi$, with $c$ a constant of magnitude 1.

$\Omega$    
(Omega) May indicate:

$\omega$    
(omega) May indicate:

$A$    
May indicate:

Å    
Ångstrom. Equal to $10^{-10}$ m.

$a$    
May indicate:

$a_0$    
May indicate:

absolute    
May indicate:

adiabatic    
An adiabatic process is a process in which there is no heat transfer with the surroundings. If the process is also reversible, it is called isentropic. Typically, these processes are fairly quick, in order not to give heat conduction enough time to do its stuff, but not so excessively quick that they become irreversible.

Adiabatic processes in quantum mechanics are defined quite differently, to keep students on their toes. See chapter 7.1.5. These processes are very slow, to give the system all possible time to adjust to its surroundings. Of course, quantum physicists were not aware that the same term had already been used for a hundred years or so for relatively fast processes. They assumed they had just invented a great new term!

adjoint    
The adjoint $A^H$ or $A^\dagger$ of an operator is the one you get if you take it to the other side of an inner product. (While keeping the value of the inner product the same regardless of whatever two vectors or functions may be involved.) Hermitian operators are self-adjoint; they do not change if you take them to the other side of an inner product. Skew-Hermitian operators just change sign. Unitary operators change into their inverse when taken to the other side of an inner product. Unitary operators generalize rotations of vectors: an inner product of vectors is the same whether you rotate the first vector one way, or the second vector the opposite way. Unitary operators preserve inner products (when applied to both vectors or functions). Fourier transforms are unitary operators on account of the Parseval equality that says that inner products are preserved.

amplitude    
Everything in quantum mechanics is an amplitude. However, most importantly, the quantum amplitude gives the coefficient of a state in a wave function. For example, the usual quantum wave function gives the quantum amplitude that the particle is at the given position.

angle    
Consider two semi-infinite lines extending from a common intersection point. Then the angle between these lines is defined in the following way: draw a unit circle in the plane of the lines and centered at their intersection point. The angle is then the length of the circular arc that is in between the lines. More precisely, this gives the angle in radians, rad. Sometimes an angle is expressed in degrees, where $2\pi$ rad is taken to be 360$^{\circ}$. However, using degrees is usually a very bad idea in science.

In three dimensions, you may be interested in the so-called solid angle $\Omega$ inside a conical surface. This angle is defined in the following way: draw a sphere of unit radius centered at the apex of the conical surface. Then the solid angle is the area of the spherical surface that is inside the cone. Solid angles are in steradians. The cone does not need to be a circular one, (i.e. have a circular cross section), for this to apply. In fact, the most common case is the solid angle corresponding to an infinitesimal element ${\rm d}\theta$ $\times$ ${\rm d}\phi$ of spherical coordinate system angles. In that case the surface of the unit sphere inside the conical surface is approximately rectangular, with sides ${\rm d}\theta$ and $\sin(\theta){\rm d}\phi$. That makes the enclosed solid angle equal to ${\rm d}\Omega$ $=$ $\sin(\theta){\rm d}\theta{\rm d}\phi$.
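As a check, summing these infinitesimal solid angles over all directions recovers the full unit-sphere area:

\begin{displaymath}
\int {\rm d}\Omega =
\int_{\phi=0}^{2\pi} \int_{\theta=0}^{\pi} \sin(\theta) {\rm d}\theta {\rm d}\phi
= 2 \times 2\pi = 4\pi
\end{displaymath}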

$B$    
May indicate:

${\cal B}$    
May indicate:

$b$    
May indicate:

basis    
A basis is a minimal set of vectors or functions that you can write all other vectors or functions in terms of. For example, the unit vectors ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ are a basis for normal three-dimensional space. Every three-dimensional vector can be written as a linear combination of the three.

$C$    
May indicate:

$^{\circ}$C    
Degrees Centigrade. A commonly used temperature scale that has the value $-$273.15 $^{\circ}$C instead of zero when systems are in their ground state. Recommendation: use degrees Kelvin (K) instead. However, differences in temperature are the same in Centigrade as in Kelvin.

$c$    
May indicate:

Cauchy-Schwarz inequality    
The Cauchy-Schwarz inequality describes a limitation on the magnitude of inner products. In particular, it says that for any $f$ and $g$,

\begin{displaymath}
\vert\langle f \vert g \rangle\vert \leqslant
\sqrt{\langle f \vert f \rangle} \sqrt{\langle g \vert g \rangle}
\end{displaymath}

In words, the magnitude of an inner product $\langle f \vert g \rangle$ is at most the magnitude (i.e. the length or norm) of $f$ times the one of $g$. For example, if $f$ and $g$ are real vectors, the inner product is the dot product and you have $f\cdot{g}$ $=$ $\vert f\vert\vert g\vert\cos\theta$, where $\vert f\vert$ is the length of vector $f$ and $\vert g\vert$ the one of $g$, and $\theta$ is the angle in between the two vectors. Since a cosine is less than one in magnitude, the Cauchy-Schwarz inequality is therefore true for vectors.

But it is true even if $f$ and $g$ are functions. To prove it, first recognize that $\langle f \vert g \rangle$ may in general be a complex number, which according to (2.6) must take the form $e^{{\rm i}\alpha}\vert\langle f \vert g \rangle\vert$ where $\alpha$ is some real number whose value is not important, and that $\langle g \vert f \rangle$ is its complex conjugate $e^{-{\rm i}\alpha}\vert\langle f \vert g \rangle\vert$. Now, (yes, this is going to be some convoluted reasoning), look at

\begin{displaymath}
\langle f + \lambda e^{-{\rm i}\alpha} g \vert
f + \lambda e^{-{\rm i}\alpha} g \rangle
\end{displaymath}

where $\lambda$ is any real number. The above dot product gives the square magnitude of $f+{\lambda}e^{-{\rm i}\alpha}g$, so it can never be negative. But if you multiply out, you get

\begin{displaymath}
\langle f \vert f \rangle
+ 2 \vert\langle f \vert g \rangle\vert \lambda
+ \langle g \vert g \rangle \lambda^2
\end{displaymath}

and if this quadratic form in $\lambda$ is never negative, its discriminant must be less than or equal to zero:

\begin{displaymath}
\vert\langle f \vert g \rangle\vert^2 \leqslant
\langle f \vert f \rangle \langle g \vert g \rangle
\end{displaymath}

and taking square roots gives the Cauchy-Schwarz inequality.

Classical    
Can mean any older theory. In this work, most of the time it either means nonquantum or nonrelativistic.

$\cos$    
The cosine function, a periodic function oscillating between 1 and $-1$ as shown in [41, pp. 40-]. See also sin.

curl    
The curl of a vector $\vec{v}$ is defined as $\mathop{\rm curl}\nolimits \vec{v}$ $=$ $\mathop{\rm rot}\nolimits \vec{v}$ $=$ $\nabla$ $\times$ $\vec{v}$.

$D$    
May indicate:

$\vec{D}$    
Primitive (translation) vector of a reciprocal lattice.

${\cal D}$    
Density of states.

D    
Often used to indicate a state with two units of orbital angular momentum.

$d$    
May indicate:

$\vec{d}$    
Primitive (translation) vector of a crystal lattice.

${\rm d}$    
Indicates a vanishingly small change or interval of the following variable. For example, ${\rm d}{x}$ can be thought of as a small segment of the $x$-axis.

In three dimensions, ${\rm d}^3{\skew0\vec r}$ $\equiv$ ${\rm d}{x}{\rm d}{y}{\rm d}{z}$ is an infinitesimal volume element. The symbol $\int$ means that you sum over all such infinitesimal volume elements.

derivative    
A derivative of a function is the ratio of a vanishingly small change in a function divided by the vanishingly small change in the independent variable that causes the change in the function. The derivative of $f(x)$ with respect to $x$ is written as ${\rm d}{f}$/${\rm d}{x}$, or also simply as $f'$. Note that the derivative of function $f(x)$ is again a function of $x$: a ratio $f'$ can be found at every point $x$. The derivative of a function $f(x,y,z)$ with respect to $x$ is written as $\partial{f}$/$\partial{x}$ to indicate that there are other variables, $y$ and $z$, that do not vary.

determinant    
The determinant of a square matrix $A$ is a single number indicated by $\vert A\vert$. If this number is nonzero, $A\vec{v}$ can be any vector $\vec{w}$ for the right choice of $\vec{v}$. Conversely, if the determinant is zero, $A\vec{v}$ can only produce a very limited set of vectors, though if it can produce a vector $\vec{w}$, it can do so for multiple vectors $\vec{v}$.

There is a recursive algorithm that allows you to compute the determinants of increasingly bigger matrices in terms of the determinants of smaller matrices. For a 1 $\times$ 1 matrix consisting of a single number, the determinant is simply that number:

\begin{displaymath}
\left\vert a_{11} \right\vert = a_{11}
\end{displaymath}

(This determinant should not be confused with the absolute value of the number, which is written the same way. Since you normally do not deal with 1 $\times$ 1 matrices, there is normally no confusion.) For 2 $\times$ 2 matrices, the determinant can be written in terms of 1 $\times$ 1 determinants:

\begin{displaymath}
\left\vert
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right\vert
=
a_{11} \left\vert a_{22} \right\vert
-
a_{12} \left\vert a_{21} \right\vert
\end{displaymath}

so the determinant is $a_{11}a_{22}-a_{12}a_{21}$ in short. For 3 $\times$ 3 matrices, you have

\begin{eqnarray*}
\lefteqn{
\left\vert
\begin{array}{lll}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{array}
\right\vert
=
} \\
&&
a_{11}
\left\vert
\begin{array}{ll}
a_{22} & a_{23} \\
a_{32} & a_{33}
\end{array}
\right\vert
-
a_{12}
\left\vert
\begin{array}{ll}
a_{21} & a_{23} \\
a_{31} & a_{33}
\end{array}
\right\vert
+
a_{13}
\left\vert
\begin{array}{ll}
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{array}
\right\vert
\end{eqnarray*}

and you already know how to work out those 2 $\times$ 2 determinants, so you now know how to do 3 $\times$ 3 determinants. Written out fully:

\begin{displaymath}
a_{11}(a_{22}a_{33}-a_{23}a_{32})
-a_{12}(a_{21}a_{33}-a_{23}a_{31})
+a_{13}(a_{21}a_{32}-a_{22}a_{31})
\end{displaymath}
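As a quick numeric check of that formula:

\begin{displaymath}
\left\vert
\begin{array}{lll}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 10
\end{array}
\right\vert
= 1(50-48) - 2(40-42) + 3(32-35) = 2 + 4 - 9 = -3
\end{displaymath}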

For 4 $\times$ 4 determinants,

\begin{eqnarray*}
\lefteqn{
\left\vert
\begin{array}{llll}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{array}
\right\vert
=
} \\
&&
a_{11}
\left\vert
\begin{array}{lll}
a_{22} & a_{23} & a_{24} \\
a_{32} & a_{33} & a_{34} \\
a_{42} & a_{43} & a_{44}
\end{array}
\right\vert
-
a_{12}
\left\vert
\begin{array}{lll}
a_{21} & a_{23} & a_{24} \\
a_{31} & a_{33} & a_{34} \\
a_{41} & a_{43} & a_{44}
\end{array}
\right\vert
\\ && \mbox{}
+
a_{13}
\left\vert
\begin{array}{lll}
a_{21} & a_{22} & a_{24} \\
a_{31} & a_{32} & a_{34} \\
a_{41} & a_{42} & a_{44}
\end{array}
\right\vert
-
a_{14}
\left\vert
\begin{array}{lll}
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33} \\
a_{41} & a_{42} & a_{43}
\end{array}
\right\vert
\end{eqnarray*}

Etcetera. Note the alternating sign pattern of the terms.

As you might infer from the above, computing a good size determinant takes a large amount of work. Fortunately, it is possible to simplify the matrix to put zeros in suitable locations, and that can cut down the work of finding the determinant greatly. You are allowed to use the following manipulations without seriously affecting the computed determinant:

  1. You can transpose the matrix, i.e. change its columns into its rows.
  2. You can create zeros in a row by subtracting a suitable multiple of another row.
  3. You can also swap rows, as long as you remember that each time that you swap two rows, it will flip over the sign of the computed determinant.
  4. You can also multiply an entire row by a constant, but that will multiply the computed determinant by the same constant.
Applying these tricks in a systematic way, called “Gaussian elimination” or “reduction to upper triangular form”, you can eliminate all matrix coefficients $a_{ij}$ for which $j$ is less than $i$, and that makes evaluating the determinant pretty much trivial.

div(ergence)    
The divergence of a vector $\vec{v}$ is defined as $\mathop{\rm div}\nolimits \vec{v}$ $=$ $\nabla\cdot\vec{v}$.

$E$    
May indicate:

${\cal E}$    
May indicate:

$e$    
May indicate:

e    
May indicate:

$e^{{\rm i}ax}$    
Assuming that $a$ is an ordinary real number, and $x$ a real variable, $e^{{\rm i}ax}$ is a complex function of magnitude one. The derivative of $e^{{\rm i}ax}$ with respect to $x$ is ${\rm i}ae^{{\rm i}ax}$.
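The magnitude-one claim follows from the Euler formula:

\begin{displaymath}
e^{{\rm i}ax} = \cos(ax) + {\rm i}\sin(ax)
\qquad
\vert e^{{\rm i}ax}\vert = \sqrt{\cos^2(ax)+\sin^2(ax)} = 1
\end{displaymath}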

eigenvector    
A concept from linear algebra. A vector $\vec{v}$ is an eigenvector of a matrix $A$ if $\vec{v}$ is nonzero and $A\vec{v}$ $=$ $\lambda\vec{v}$ for some number $\lambda$ called the corresponding eigenvalue.

The basic quantum mechanics section of this book avoids linear algebra completely, and the advanced part almost completely. The few exceptions are almost all two-dimensional matrix eigenvalue problems. In case you did not have any linear algebra, here is the solution: the two-dimensional matrix eigenvalue problem

\begin{displaymath}
\left(\begin{array}{cc} a_{11}&a_{12} \\ a_{21}&a_{22} \end{array}\right)
\vec v = \lambda \vec v
\end{displaymath}

has eigen­val­ues that are the two roots of the qua­dratic equa­tion

\begin{displaymath}
\lambda^2 - (a_{11}+a_{22})\lambda + a_{11}a_{22}-a_{12}a_{21} = 0
\end{displaymath}

The corresponding eigenvectors are

\begin{displaymath}
\vec v_1 =
\left(\begin{array}{c} a_{12} \\ \lambda_1-a_{11}\end{array}\right)
\qquad
\vec v_2 =
\left(\begin{array}{c} \lambda_2-a_{22} \\ a_{21}\end{array}\right)
\end{displaymath}

On occasion you may have to swap $\lambda_1$ and $\lambda_2$ to use these formulae. If $\lambda_1$ and $\lambda_2$ are equal, there might not be two eigenvectors that are not multiples of each other; then the matrix is called defective. However, Hermitian matrices are never defective.
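As an illustration, take the Hermitian matrix with $a_{11}$ $=$ $a_{22}$ $=$ 2 and $a_{12}$ $=$ $a_{21}$ $=$ 1. The quadratic equation becomes $\lambda^2-4\lambda+3$ $=$ 0, with roots $\lambda_1$ $=$ 1 and $\lambda_2$ $=$ 3, and the formulae above give

\begin{displaymath}
\vec v_1 =
\left(\begin{array}{c} 1 \\ -1 \end{array}\right)
\qquad
\vec v_2 =
\left(\begin{array}{c} 1 \\ 1 \end{array}\right)
\end{displaymath}

Multiplying either eigenvector by the matrix reproduces that eigenvector times its eigenvalue, as it should.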

See also “matrix” and “determinant.”

eV    
The electron volt, a commonly used unit of energy. Its value is equal to 1.602 176 57 $10^{-19}$ J.

exponential function    
A function of the form $e^{\ldots}$, also written as $\exp(\ldots)$. See function and $e$.

$F$    
May indicate:

${\cal F}$    
Fock operator.

$f$    
May indicate:

function    
A mathematical object that associates values with other values. A function $f(x)$ associates every value of $x$ with a value $f$. For example, the function $f(x)$ $=$ $x^2$ associates $x$ $=$ 0 with $f$ $=$ 0, $x$ $=$ $\frac12$ with $f$ $=$ $\frac14$, $x$ $=$ 1 with $f$ $=$ 1, $x$ $=$ 2 with $f$ $=$ 4, $x$ $=$ 3 with $f$ $=$ 9, and more generally, any arbitrary value of $x$ with the square of that value $x^2$. Similarly, function $f(x)$ $=$ $x^3$ associates any arbitrary $x$ with its cube $x^3$, $f(x)$ $=$ $\sin(x)$ associates any arbitrary $x$ with the sine of that value, etcetera.

One way of thinking of a function is as a procedure that allows you, whenever given a number, to compute another number.

A wave function $\Psi(x,y,z)$ associates each spatial position $(x,y,z)$ with a wave function value. Going beyond mathematics, its square magnitude associates any spatial position with the relative probability of finding the particle near there.

functional    
A functional associates entire functions with single numbers. For example, the expectation energy is mathematically a functional: it associates any arbitrary wave function with a number: the value of the expectation energy if physics is described by that wave function.

$G$    
May indicate:

$g$    
May indicate:

Gauss' Theorem    
This theorem, also called divergence theorem or Gauss-Ostrogradsky theorem, says that for a continuously differentiable vector $\vec{v}$,

\begin{displaymath}
\int_V \nabla \cdot \vec v { \rm d}V
=
\int_A \vec v \cdot \vec n { \rm d}A
\end{displaymath}

where the first integral is over the volume of an arbitrary region and the second integral is over all the surface area of that region; $\vec{n}$ is at each point found as the unit vector that is normal to the surface at that point.
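As a simple check, take $\vec v$ $=$ ${\skew0\vec r}$ and the region to be a sphere of radius $R$ centered at the origin. Then $\nabla\cdot{\skew0\vec r}$ $=$ 3 and ${\skew0\vec r}\cdot\vec n$ $=$ $R$ on the surface, and both sides agree:

\begin{displaymath}
\int_V 3 { \rm d}V = 3 \: \frac{4\pi R^3}{3} = 4\pi R^3
\qquad
\int_A R { \rm d}A = R \: 4\pi R^2 = 4\pi R^3
\end{displaymath}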

grad(ient)    
The gradient of a scalar $f$ is defined as $\mathop{\rm grad}\nolimits {f}$ $=$ $\nabla{f}$.

$H$    
May indicate:

$h$    
May indicate:

$\hbar$    
The reduced Planck constant, equal to 1.054 571 73 $10^{-34}$ J s. A measure of the uncertainty of nature in quantum mechanics. Multiply by $2\pi$ to get the original Planck constant $h$. For nuclear physics, a frequently helpful value is $\hbar{c}$ $=$ 197.326 972 MeV fm.

hypersphere    
A hypersphere is the generalization of the normal three-dimensional sphere to $n$-dimensional space. A sphere of radius $R$ in three-dimensional space consists of all points satisfying

\begin{displaymath}
r_1^2 + r_2^2 + r_3^2 \mathrel{\raisebox{-.7pt}{$\leqslant$}}R^2
\end{displaymath}

where $r_1$, $r_2$, and $r_3$ are Cartesian coordinates with origin at the center of the sphere. Similarly a hypersphere in $n$-dimensional space is defined as all points satisfying

\begin{displaymath}
r_1^2 + r_2^2 + \ldots + r_n^2 \mathrel{\raisebox{-.7pt}{$\leqslant$}}R^2
\end{displaymath}

So a two-dimensional hypersphere of radius $R$ is really just a circle of radius $R$. A one-dimensional hypersphere is really just the line segment $-R$ $\leqslant$ $x$ $\leqslant$ $R$.

The “volume” ${\cal V}_n$ and surface “area” $A_n$ of an $n$-dimensional hypersphere are given by

\begin{displaymath}
{\cal V}_n = C_n R^n \qquad A_n = n C_n R^{n-1}
\end{displaymath}


\begin{displaymath}
C_n = \left\{
\begin{array}{l}
\strut (2\pi)^{n/2} / 2 \times 4 \times \ldots \times n
\quad \mbox{if $n$ is even} \\
\strut 2 (2\pi)^{(n-1)/2} / 1 \times 3 \times \ldots \times n
\quad \mbox{if $n$ is odd}
\end{array}
\right.
\end{displaymath}

(This is readily derived recursively. For a sphere of unit radius, note that the $n$-dimensional volume is an integration of $n{-}1$-dimensional volumes with respect to $r_1$. Then renotate $r_1$ as $\sin\phi$ and look up the resulting integral in a table book. The formula for the area follows because ${\cal V}=\int{A}{\rm d}{r}$ where $r$ is the distance from the origin.) In three dimensions, $C_3=4\pi/3$ according to the above formula. That makes the three-dimensional volume $4\pi{R}^3/3$ equal to the actual volume of the sphere, and the three-dimensional area $4{\pi}R^2$ equal to the actual surface area. On the other hand in two dimensions, $C_2=\pi$. That makes the two-dimensional volume ${\pi}R^2$ really the area of the circle. Similarly the two-dimensional surface area $2{\pi}R$ is really the perimeter of the circle. In one dimension $C_1=2$ and the volume $2R$ is really the length of the interval, and the area 2 is really its number of end points.
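For instance, in four dimensions the formula for even $n$ gives

\begin{displaymath}
C_4 = \frac{(2\pi)^2}{2 \times 4} = \frac{\pi^2}{2}
\qquad
{\cal V}_4 = \frac{\pi^2}{2} R^4
\qquad
A_4 = 2\pi^2 R^3
\end{displaymath}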

Often the infinitesimal $n$-dimensional volume element ${\rm d}^n{\skew0\vec r}$ is needed. This is the infinitesimal integration element for integration over all coordinates. It is:

\begin{displaymath}
{\rm d}^n{\skew0\vec r}= {\rm d}r_1 {\rm d}r_2 \ldots {\rm d}r_n = {\rm d}A_n {\rm d}r
\end{displaymath}

Specifically, in two dimensions:

\begin{displaymath}
{\rm d}^2{\skew0\vec r}= {\rm d}r_1 {\rm d}r_2 = {\rm d}x {\rm d}y = (r { \rm d}\theta) {\rm d}r
= {\rm d}A_2 {\rm d}r
\end{displaymath}

while in three dimensions:

\begin{displaymath}
{\rm d}^3{\skew0\vec r}= {\rm d}r_1 {\rm d}r_2 {\rm d}r_3 = {\rm d}x {\rm d}y {\rm d}z
= (r^2 \sin\theta { \rm d}\theta {\rm d}\phi) {\rm d}r = {\rm d}A_3 {\rm d}r
\end{displaymath}

The expressions in parentheses are ${\rm d}{A}_2$ in polar coordinates, respectively ${\rm d}{A}_3$ in spherical coordinates.

$I$    
May indicate:

$\Im$    
The imaginary part of a complex number. If $c$ $=$ $c_r+{\rm i}c_i$ with $c_r$ and $c_i$ real numbers, then $\Im(c)$ $=$ $c_i$. Note that $c-c^*$ $=$ $2{\rm i}\Im(c)$.

${\cal I}$    
May indicate:

$i$    
May indicate: Not to be confused with ${\rm i}$.

${\hat\imath}$    
The unit vector in the $x$-direction.

${\rm i}$    
The standard square root of minus one: ${\rm i}$ $=$ $\sqrt{-1}$, ${\rm i}^2$ $=$ $-1$, 1/${\rm i}$ $=$ $-{\rm i}$, ${\rm i}^*$ $=$ $-{\rm i}$.

index notation    
A more concise and powerful way of writing vector and matrix components by using a numerical index to indicate the components. For Cartesian coordinates, you might number the coordinates $x$ as 1, $y$ as 2, and $z$ as 3. In that case, a sum like $v_x+v_y+v_z$ can be more concisely written as $\sum_i{v}_i$. And a statement like $v_x$ $\ne$ 0, $v_y$ $\ne$ 0, $v_z$ $\ne$ 0 can be more compactly written as $v_i$ $\ne$ 0. To really see how it simplifies the notations, have a look at the matrix entry. (And that one shows only 2 by 2 matrices. Just imagine 100 by 100 matrices.)

iff    
Emphatic if. Should be read as “if and only if.”

integer    
Integer numbers are the whole numbers: $\ldots,-2,-1,0,1,2,3,4,\ldots$.

inverse    
(Of matrices or operators.) If an operator $A$ converts a vector or function $f$ into a vector or function $g$, then the inverse of the operator, $A^{-1}$, converts $g$ back into $f$. For example, the operator 2 converts vectors or functions into two times themselves, and its inverse operator $\frac12$ converts these back into the originals. Some operators do not have inverses. For example, the operator 0 converts all vectors or functions into zero. But given zero, there is no way to figure out what function or vector it came from; the inverse operator does not exist.

irrotational    
A vector $\vec{v}$ is irrotational if its curl $\nabla$ $\times$ $\vec{v}$ is zero.

iso    
Means “equal” or “constant.”

isolated    
An isolated system is one that does not interact with its surroundings in any way. No heat is transferred with the surroundings, no work is done on or by the surroundings.

$J$    
May indicate:

$j$    
May indicate:

${\hat\jmath}$    
The unit vector in the $y$-direction.

$K$    
May indicate:

${\mathscr K}$    
Thomson (Kelvin) coefficient.

K    
May indicate:

$k$    
May indicate:

${\hat k}$    
The unit vector in the $z$-direction.

$k_{\rm B}$    
Boltzmann constant. Equal to 1.380 649 $10^{-23}$ J/K. Relates absolute temperature to a typical unit of heat motion energy.

kmol    
A kilo mole refers to 6.022 141 3 $10^{26}$ atoms or molecules. The weight of this many particles, in kg, is about the number of protons and neutrons in the atom nucleus/molecule nuclei. So a kmol of hydrogen atoms has a mass of about 1 kg, and a kmol of hydrogen molecules about 2 kg. A kmol of helium atoms has a mass of about 4 kg, since helium has two protons and two neutrons in its nucleus. These numbers are not very accurate, not just because the electron masses are ignored, and the free neutron and proton masses are somewhat different, but also because of relativity effects that cause actual nuclear masses to deviate from the sum of the free proton and neutron masses.

$L$    
May indicate:

${\cal L}$    
Lagrangian.

L    
The atomic states or orbitals with theoretical Bohr energy $E_2$.

$l$    
May indicate:

$\ell$    
May indicate:

$\pounds$    
Lagrangian density. This is best understood in the UK.

$\lim$    
Indicates the final result of an approaching process. $\lim_{\varepsilon\to0}$ indicates for practical purposes the value of the following expression when $\varepsilon$ is extremely small.

linear combination    
A very generic concept indicating sums of objects times coefficients. For example, a position vector ${\skew0\vec r}$ in basic physics is the linear combination $x{\hat\imath}+y{\hat\jmath}+z{\hat k}$ with the objects the unit vectors ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ and the coefficients the position coordinates $x$, $y$, and $z$. A linear combination of a set of functions $f_1(x),f_2(x),f_3(x),\ldots,f_n(x)$ would be the function

\begin{displaymath}
c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + \ldots + c_n f_n(x)
\end{displaymath}

where $c_1,c_2,c_3,\ldots,c_n$ are constants, i.e. independent of $x$.

linear dependence    
A set of vectors or functions is linearly dependent if at least one of the set can be expressed in terms of the others. Consider the example of a set of functions $f_1(x),f_2(x),\ldots,f_n(x)$. This set is linearly dependent if

\begin{displaymath}
c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + \ldots + c_n f_n(x) = 0
\end{displaymath}

where at least one of the constants $c_1,c_2,c_3,\ldots,c_n$ is nonzero. To see why, suppose that say $c_2$ is nonzero. Then you can divide by $c_2$ and rearrange to get

\begin{displaymath}
f_2(x) = - \frac{c_1}{c_2} f_1(x) - \frac{c_3}{c_2} f_3(x) - \ldots
- \frac{c_n}{c_2} f_n(x)
\end{displaymath}

That expresses $f_2(x)$ in terms of the other functions.

linear independence    
A set of vectors or functions is linearly independent if none of the set can be expressed in terms of the others. Consider the example of a set of functions $f_1(x),f_2(x),\ldots,f_n(x)$. This set is linearly independent if

\begin{displaymath}
c_1 f_1(x) + c_2 f_2(x) + c_3 f_3(x) + \ldots + c_n f_n(x) = 0
\end{displaymath}

only if every one of the constants $c_1,c_2,c_3,\ldots,c_n$ is zero. To see why, assume that say $f_2(x)$ could be expressed in terms of the others,

\begin{displaymath}
f_2(x) = C_1 f_1(x) + C_3 f_3(x) + \ldots + C_n f_n(x)
\end{displaymath}

Then taking $c_2$ $=$ 1, $c_1$ $=$ $-C_1$, $c_3$ $=$ $-C_3$, ..., $c_n$ $=$ $-C_n$, the condition above would be violated. So $f_2$ cannot be expressed in terms of the others.

$M$    
May indicate:

${\cal M}$    
Mirror operator.

M    
The atomic states or orbitals with theoretical Bohr energy $E_3$.

$m$    
May indicate:

ma­trix    
A ta­ble of num­bers.

As a sim­ple ex­am­ple, a two-di­men­sional (or $2\times2$) ma­trix $A$ is a ta­ble of four num­bers called $a_{11}$, $a_{12}$, $a_{21}$, and $a_{22}$:

\begin{displaymath}
A =
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\end{displaymath}

un­like a two-di­men­sional vec­tor $\vec{v}$, which would con­sist of only two num­bers $v_1$ and $v_2$ arranged in a col­umn:

\begin{displaymath}
\vec v =
\left(
\begin{array}{l}
v_1 \\
v_2
\end{array}
\right)
\end{displaymath}

(Such a vec­tor can be seen as a rec­tan­gu­lar ma­trix of size 2 $\times$ 1, but let’s not get into that.) (Note that in quan­tum me­chan­ics, if a vec­tor is writ­ten as a col­umn, con­sid­ered the nor­mal case, it is called a ket vec­tor. If the com­plex con­ju­gates of its num­bers are writ­ten as a row, it is called a bra vec­tor.)

In index notation, a matrix $A$ is a set of numbers, or coefficients, $\{a_{ij}\}$, indexed by two indices. The first index $i$ is the row number at which the coefficient $a_{ij}$ is found in matrix $A$, and the second index $j$ is the column number. In index notation, a matrix turns a vector $\vec{v}$ into another vector $\vec{w}=A\vec{v}$ according to the recipe

\begin{displaymath}
w_i = \sum_{\mbox{{\scriptsize all }}j} a_{ij} v_j \quad \mbox{for all $i$}
\end{displaymath}

where $v_j$ stands for “the $j$-th component of vector $\vec{v}$,” and $w_i$ for “the $i$-th component of vector $\vec{w}$.”

As an ex­am­ple, the prod­uct of $A$ and $\vec{v}$ above is by de­f­i­n­i­tion

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\left(
\begin{array}{l}
v_1 \\
v_2
\end{array}
\right)
=
\left(
\begin{array}{l}
a_{11} v_1 + a_{12} v_2 \\
a_{21} v_1 + a_{22} v_2
\end{array}
\right)
\end{displaymath}

which is just an­other two-di­men­sional ket vec­tor.

Note that in matrix multiplications, as in the example above, in geometric terms you take dot products between the rows of the first factor and the columns of the second factor.

To mul­ti­ply two ma­tri­ces to­gether, just think of the columns of the sec­ond ma­trix as sep­a­rate vec­tors. For ex­am­ple, to mul­ti­ply two $2\times2$ ma­tri­ces $A$ and $B$ to­gether:

\begin{displaymath}
\left(
\begin{array}{ll}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\left(
\begin{array}{ll}
b_{11} & b_{12} \\
b_{21} & b_{22}
\end{array}
\right)
=
\left(
\begin{array}{ll}
a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\
a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22}
\end{array}
\right)
\end{displaymath}

which is an­other two-di­men­sional ma­trix.

(Note that you cannot normally swap the order of matrix multiplication. The matrix $BA$ is different from matrix $AB$. In the special case that $AB$ and $BA$ are the same and $A$ and $B$ have complete sets of eigenvectors, then they have a common complete set of eigenvectors, {D.18}.)

In in­dex no­ta­tion, if $C=AB$, then each co­ef­fi­cient $c_{ij}$ of ma­trix $C$ is given in terms of the co­ef­fi­cients of $A$ and $B$ as

\begin{displaymath}
c_{ij} = \sum_k a_{ik} b_{kj}
\end{displaymath}

Note that the in­dex $k$ that you sum over is the sec­ond of $A$ but the first of $B$. In short, you sum over “neigh­bor­ing in­dices.” Since you sum over all $k$, the re­sult does not de­pend on $k$.
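These two recipes translate directly into code. A minimal sketch, assuming NumPy (used here only for array storage; the matrices and vector are made up for illustration):

\begin{verbatim}
import numpy as np

def matvec(a, v):
    """w_i = sum over j of a_ij v_j, written out as explicit loops."""
    n, m = a.shape
    w = np.zeros(n)
    for i in range(n):
        for j in range(m):
            w[i] += a[i, j] * v[j]
    return w

def matmat(a, b):
    """c_ij = sum over the neighboring index k of a_ik b_kj."""
    n, m = a.shape[0], b.shape[1]
    c = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for k in range(a.shape[1]):
                c[i, j] += a[i, k] * b[k, j]
    return c

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
v = np.array([1.0, -1.0])

print(matvec(A, v))   # same as A @ v
print(matmat(A, B))   # same as A @ B; note A @ B is not B @ A
\end{verbatim}

The explicit loops are of course just \verb|A @ v| and \verb|A @ B| in NumPy; they are spelled out here only to match the index notation.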

The zero ma­trix, usu­ally called $Z$, is like the num­ber zero; it does not change a ma­trix it is added to. And it turns what­ever it is mul­ti­plied with into zero. A zero ma­trix has every co­ef­fi­cient zero. For ex­am­ple, in two di­men­sions:

\begin{displaymath}
Z =
\left(
\begin{array}{ll}
0 & 0 \\
0 & 0
\end{array}
\right)
\end{displaymath}

A unit, or identity, matrix, usually called $I$, is the equivalent of the number one for matrices; it does not change the vector or matrix it is multiplied with. A unit matrix has ones on its main diagonal $i=j$ and zeros elsewhere. The 2 by 2 unit matrix is:

\begin{displaymath}
I =
\left(
\begin{array}{ll}
1 & 0 \\
0 & 1
\end{array}
\right)
\end{displaymath}

More gen­er­ally the co­ef­fi­cients, $\{\delta_{ij}\}$, of a unit ma­trix are one if $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ $j$ and zero oth­er­wise.

The transpose of a matrix $A$, $A^{\rm {T}}$, is what you get if you swap the two indices. Graphically, it turns its rows into its columns and vice versa. The adjoint or Hermitian adjoint matrix $A^\dagger$ is what you get if you both swap the two indices in a matrix $A$ and then take the complex conjugate of every coefficient. If you want to take a matrix to the other side of an inner product, you will need to change it to its Hermitian adjoint. Hermitian matrices are equal to their Hermitian adjoint, so this does nothing for them.

The inverse of a matrix $A$, $A^{-1}$, is the matrix such that $A^{-1}A$ equals the identity matrix $I$. That is much like how the inverse of a simple number times that number gives one. And, just like the number zero has no inverse, a matrix with zero determinant has no inverse. Otherwise, you can swap the order; $AA^{-1}$ equals the unit matrix too. (For numbers this is trivial; for matrices you need to look a bit closer to understand why it is true.)
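A quick numerical illustration of the transpose, the Hermitian adjoint, and the inverse, assuming NumPy (the matrix is made up for illustration; note that its determinant, $(1+{\rm i})(1-{\rm i})=2$, is nonzero):

\begin{verbatim}
import numpy as np

A = np.array([[1.0 + 1.0j, 2.0],
              [0.0,        1.0 - 1.0j]])

print(A.T)           # transpose: rows become columns
print(A.conj().T)    # Hermitian adjoint: transpose plus conjugate

Ainv = np.linalg.inv(A)    # would fail for zero determinant
print(np.allclose(Ainv @ A, np.eye(2)))   # True
print(np.allclose(A @ Ainv, np.eye(2)))   # True; order can be swapped
\end{verbatim}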

See also de­ter­mi­nant and eigen­vec­tor.

met­ric pre­fixes    
In the metric system, the prefixes Y, Z, E, P, T, G, M, and k stand for $10^{i}$ with $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 24, 21, 18, 15, 12, 9, 6, and 3, respectively. Similarly, d, c, m, $\mu$, n, p, f, a, z, y stand for $10^{-i}$ with $i$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1, 2, 3, 6, 9, 12, 15, 18, 21, and 24 respectively. For example, 1 ns is $10^{-9}$ seconds. The English letter u is often used instead of Greek $\mu$. The names corresponding to the mentioned prefixes Y-k are yotta, zetta, exa, peta, tera, giga, mega, kilo, and corresponding to d-y are deci, centi, milli, micro, nano, pico, femto, atto, zepto, and yocto.
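For what it is worth, here is that table as a quick conversion sketch in Python (with u standing in for $\mu$ as mentioned above; the function name is made up for illustration):

\begin{verbatim}
# Powers of ten for the metric prefixes listed above.
PREFIX_POWER = {
    'Y': 24, 'Z': 21, 'E': 18, 'P': 15, 'T': 12, 'G': 9, 'M': 6, 'k': 3,
    'd': -1, 'c': -2, 'm': -3, 'u': -6,    # 'u' standing in for mu
    'n': -9, 'p': -12, 'f': -15, 'a': -18, 'z': -21, 'y': -24,
}

def to_base_units(value, prefix):
    """Convert e.g. 1 ns to seconds: to_base_units(1, 'n') -> 1e-09."""
    return value * 10.0 ** PREFIX_POWER[prefix]

print(to_base_units(1, 'n'))   # 1 ns = 1e-09 s
\end{verbatim}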

mol­e­c­u­lar mass    
Typ­i­cal ther­mo­dy­nam­ics books for en­gi­neers tab­u­late val­ues of the mol­e­c­u­lar mass, as a nondi­men­sional num­ber. The bot­tom line first: these num­bers should have been called the mo­lar mass of the sub­stance, for the nat­u­rally oc­cur­ring iso­tope ra­tio on earth. And they should have been given units of kg/kmol. That is how you use these num­bers in ac­tual com­pu­ta­tions. So just ig­nore the fact that what these books re­ally tab­u­late is of­fi­cially called the rel­a­tive mol­e­c­u­lar mass for the nat­ural iso­tope ra­tio.

Don’t blame these text­books too much for mak­ing a mess of things. Physi­cists have his­tor­i­cally bandied about a zil­lion dif­fer­ent names for what is es­sen­tially a sin­gle num­ber. Like mol­e­c­u­lar mass, “rel­a­tive mol­e­c­u­lar mass,” mol­e­c­u­lar weight, “atomic mass,” rel­a­tive atomic mass, “atomic weight,” mo­lar mass, “rel­a­tive mo­lar mass,” etcetera are ba­si­cally all the same thing.

All of these have values that equal the mass of a molecule relative to a reference value for a single nucleon. So these values are about equal to the number of nucleons (protons and neutrons) in the nuclei of a single molecule. (For an isotope ratio, that becomes the average number of nucleons. Do note that nuclei are sufficiently relativistic that a proton or neutron can be noticeably heavier in one nucleus than another, and that neutrons are a bit heavier than protons even in isolation.) The official reference nucleon weight is defined based on the most common carbon isotope carbon-12. Since carbon-12 has 6 protons plus 6 neutrons, the reference nucleon weight is taken to be one twelfth of the carbon-12 atomic weight. That is called the unified atomic mass unit (u) or Dalton (Da). The atomic mass unit (amu) is an older virtually identical unit, but physicists and chemists could never quite agree on what its value was. No kidding.

If you want to be politically correct, the deal is as follows. Molecular mass is just what the term says, the mass of a molecule, in mass units. (I found zero evidence in either the IUPAC Gold Book or NIST SP811 for the claim of Wikipedia that it must always be expressed in u.) Molar mass is just what the words say, the mass of a mole. Official SI units are kg/mol, but you will find it in g/mol, equivalent to kg/kmol. (You cannot expect enough brains from international committees to realize that if you define the kg and not the g as unit of mass, then it would be a smart idea to also define kmol instead of mol as unit of particle count.) Simply ignore relative atomic and molecular masses; you do not care about them. (I found zero evidence in either the IUPAC Gold Book or NIST SP811 for the claims of Wikipedia that the molecular mass cannot be an average over isotopes or that the molar mass must be for a natural isotope ratio. In fact, NIST uses molar mass of carbon-12 and specifically includes the possibility of an average in the relative molecular mass.)

See also the atomic mass con­stant $m_{\rm {u}}$.

$
\setbox 0=\hbox{$N$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

N    
May in­di­cate

$
\setbox 0=\hbox{$n$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate: and maybe some other stuff.

n    
May in­di­cate

nat­ural    
Nat­ural num­bers are the num­bers: $1,2,3,4,\ldots$.

nor­mal    
A nor­mal op­er­a­tor or ma­trix is one that has or­tho­nor­mal eigen­func­tions or eigen­vec­tors. Since eigen­vec­tors are not or­tho­nor­mal in gen­eral, a nor­mal op­er­a­tor or ma­trix is ab­nor­mal! An­other ex­am­ple of a highly con­fus­ing term. Such a ma­trix should have been called or­tho­di­ag­o­nal­iz­able or some­thing of the kind. To be fair, the au­thor is not aware of any physi­cists be­ing in­volved in this par­tic­u­lar term; it may be the math­e­mati­cians that are to blame here.

For an op­er­a­tor or ma­trix $A$ to be nor­mal, it must com­mute with its Her­mit­ian ad­joint, $[A,A^\dagger]$ $\vphantom0\raisebox{1.5pt}{$=$}$ 0. Her­mit­ian ma­tri­ces are nor­mal since they are equal to their Her­mit­ian ad­joint. Skew-Her­mit­ian ma­tri­ces are nor­mal since they are equal to the neg­a­tive of their Her­mit­ian ad­joint. Uni­tary ma­tri­ces are nor­mal be­cause they are the in­verse of their Her­mit­ian ad­joint.
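A minimal numerical sketch of this commutator test, assuming NumPy (the matrices are made up for illustration):

\begin{verbatim}
import numpy as np

def is_normal(a, tol=1e-12):
    """Check that a commutes with its Hermitian adjoint."""
    adj = a.conj().T
    return np.allclose(a @ adj, adj @ a, atol=tol)

H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])   # Hermitian, so normal
N = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # not normal

print(is_normal(H), is_normal(N))   # True False
\end{verbatim}

The second matrix is indeed not normal; it does not even have a complete set of eigenvectors, let alone orthonormal ones.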

O    
May in­di­cate the ori­gin of the co­or­di­nate sys­tem.

op­po­site    
The op­po­site of a num­ber $a$ is $-a$. In other words, it is the ad­di­tive in­verse.

$
\setbox 0=\hbox{$P$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{${\cal P}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Par­ti­cle ex­change op­er­a­tor. Ex­changes the po­si­tions and spins of two iden­ti­cal par­ti­cles.

$
\setbox 0=\hbox{${\mathscr P}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Peltier co­ef­fi­cient.

P    
Of­ten used to in­di­cate a state with one unit of or­bital an­gu­lar mo­men­tum.

$
\setbox 0=\hbox{$p$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

p    
May in­di­cate

per­pen­dic­u­lar bi­sec­tor    
For two given points $P$ and $Q$, the per­pen­dic­u­lar bi­sec­tor con­sists of all points $R$ that are equally far from $P$ as they are from $Q$. In two di­men­sions, the per­pen­dic­u­lar bi­sec­tor is the line that passes through the point ex­actly half way in be­tween $P$ and $Q$, and that is or­thog­o­nal to the line con­nect­ing $P$ and $Q$. In three di­men­sions, the per­pen­dic­u­lar bi­sec­tor is the plane that passes through the point ex­actly half way in be­tween $P$ and $Q$, and that is or­thog­o­nal to the line con­nect­ing $P$ and $Q$. In vec­tor no­ta­tion, the per­pen­dic­u­lar bi­sec­tor of points $P$ and $Q$ is all points $R$ whose ra­dius vec­tor ${\skew0\vec r}$ sat­is­fies the equa­tion:

\begin{displaymath}
({\skew0\vec r}-{\skew0\vec r}_P)\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)
= {\textstyle\frac{1}{2}}
({\skew0\vec r}_Q-{\skew0\vec r}_P)\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)
\end{displaymath}

(Note that the halfway point ${\skew0\vec r}-{\skew0\vec r}_P$ $\vphantom0\raisebox{1.5pt}{$=$}$ ${\textstyle\frac{1}{2}}({\skew0\vec r}_Q-{\skew0\vec r}_P)$ is included in this formula, as is the halfway point plus any vector that is normal to $({\skew0\vec r}_Q-{\skew0\vec r}_P)$.)
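To see where the formula comes from, write out the requirement that $R$ is equally far from $P$ as from $Q$: $({\skew0\vec r}-{\skew0\vec r}_P)\cdot({\skew0\vec r}-{\skew0\vec r}_P)=({\skew0\vec r}-{\skew0\vec r}_Q)\cdot({\skew0\vec r}-{\skew0\vec r}_Q)$. Multiply out both sides; the ${\skew0\vec r}\cdot{\skew0\vec r}$ terms cancel, leaving

\begin{displaymath}
2 {\skew0\vec r}\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)
= {\skew0\vec r}_Q\cdot{\skew0\vec r}_Q - {\skew0\vec r}_P\cdot{\skew0\vec r}_P
\end{displaymath}

Subtract $2{\skew0\vec r}_P\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)$ from both sides; the right hand side then collapses into $({\skew0\vec r}_Q-{\skew0\vec r}_P)\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)$, and dividing by 2 gives the formula above.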

phase an­gle    
Any com­plex num­ber can be writ­ten in po­lar form as $c$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\vert c\vert e^{{\rm i}\alpha}$ where both the mag­ni­tude $\vert c\vert$ and the phase an­gle $\alpha$ are real num­bers. Note that when the phase an­gle varies from zero to $2\pi$, the com­plex num­ber $c$ varies from pos­i­tive real to pos­i­tive imag­i­nary to neg­a­tive real to neg­a­tive imag­i­nary and back to pos­i­tive real. When the com­plex num­ber is plot­ted in the com­plex plane, the phase an­gle is the di­rec­tion of the num­ber rel­a­tive to the ori­gin. The phase an­gle $\alpha$ is of­ten called the ar­gu­ment, but so is about every­thing else in math­e­mat­ics, so that is not very help­ful.

In com­plex time-de­pen­dent waves of the form $e^{{\rm i}({\omega}t-\phi)}$, and its real equiv­a­lent $\cos({\omega}t-\phi)$, the phase an­gle $\phi$ gives the an­gu­lar ar­gu­ment of the wave at time zero.
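In Python, the polar form can be read off with the standard cmath module; a quick sketch:

\begin{verbatim}
import cmath

c = -1.0 + 1.0j
magnitude, alpha = cmath.polar(c)   # |c| and the phase angle
print(magnitude, alpha)             # 1.414..., 2.356... (3 pi / 4)
print(magnitude * cmath.exp(1j * alpha))   # recovers c
\end{verbatim}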

pho­ton    
Unit of electromagnetic radiation (which includes light, x-rays, microwaves, etcetera). A photon has an energy $\hbar\omega$, where $\omega$ is its angular frequency, and a wavelength $2\pi{c}/\omega$ where $c$ is the speed of light.

po­ten­tial    
In order to optimize confusion, pretty much everything in physics that is scalar is called potential. Potential energy is routinely concisely referred to as potential. It is the energy that a particle can pick up from a force field by changing its position. It is in joules. But an electric potential is taken to be per unit charge, which gives it units of volts. Then there are thermodynamic potentials like the chemical potential.

$
\setbox 0=\hbox{$p_x$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Lin­ear mo­men­tum in the $x$-di­rec­tion. (In the one-di­men­sional cases at the end of the un­steady evo­lu­tion chap­ter, the $x$ sub­script is omit­ted.) Com­po­nents in the $y$- and $z$-di­rec­tions are $p_y$ and $p_z$. Clas­si­cal New­ton­ian physics has $p_x$ $\vphantom0\raisebox{1.5pt}{$=$}$ $mu$ where $m$ is the mass and $u$ the ve­loc­ity in the $x$-di­rec­tion. In quan­tum me­chan­ics, the pos­si­ble val­ues of $p_x$ are the eigen­val­ues of the op­er­a­tor ${\widehat p}_x$ which equals $\hbar\partial$/${\rm i}\partial{x}$. (But which be­comes canon­i­cal mo­men­tum in a mag­netic field.)

$
\setbox 0=\hbox{$Q$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate

$
\setbox 0=\hbox{$q$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{$R$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{${\cal R}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Ro­ta­tion op­er­a­tor.

$
\setbox 0=\hbox{$\Re$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The real part of a com­plex num­ber. If $c$ $\vphantom0\raisebox{1.5pt}{$=$}$ $c_r+{{\rm i}}c_i$ with $c_r$ and $c_i$ real num­bers, then $\Re(c)$ $\vphantom0\raisebox{1.5pt}{$=$}$ $c_r$. Note that $c+c^*$ $\vphantom0\raisebox{1.5pt}{$=$}$ $2\Re(c)$.

$
\setbox 0=\hbox{$r$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{${\skew0\vec r}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The po­si­tion vec­tor. In Carte­sian co­or­di­nates $(x,y,z)$ or $x{\hat\imath}+y{\hat\jmath}+z{\hat k}$. In spher­i­cal co­or­di­nates $r\hat\imath_r$. Its three Carte­sian com­po­nents may be in­di­cated by $r_1,r_2,r_3$ or by $x,y,z$ or by $x_1,x_2,x_3$.

rec­i­p­ro­cal    
The rec­i­p­ro­cal of a num­ber $a$ is 1/$a$. In other words, it is the mul­ti­plica­tive in­verse.

rel­a­tiv­ity    
The special theory of relativity accounts for the experimental observation that the speed of light $c$ is the same in all local coordinate systems. It necessarily drops the basic concepts of absolute time and length that were cornerstones in Newtonian physics.

Al­bert Ein­stein should be cred­ited with the bold­ness to squarely face up to the un­avoid­able where oth­ers wa­vered. How­ever, he should also be cred­ited for the bold­ness of swip­ing the ba­sic ideas from Lorentz and Poin­caré with­out giv­ing them proper, or any, credit. The ev­i­dence is very strong he was aware of both works, and his var­i­ous ar­gu­ments are al­most car­bon copies of those of Poin­caré, but in his pa­per it looks like it all came from Ein­stein, with the ex­is­tence of the ear­lier works not men­tioned. (Note that the gen­eral the­ory of rel­a­tiv­ity, which is of no in­ter­est to this book, is al­most surely prop­erly cred­ited to Ein­stein. But he was a lot less hun­gry then.)

Rel­a­tiv­ity im­plies that a length seen by an ob­server mov­ing at a speed $v$ is shorter than the one seen by a sta­tion­ary ob­server by a fac­tor $\sqrt{1-(v/c)^2}$ as­sum­ing the length is in the di­rec­tion of mo­tion. This is called Lorentz-Fitzger­ald con­trac­tion. It makes galac­tic travel some­what more con­ceiv­able be­cause the size of the galaxy will con­tract for an as­tro­naut in a rocket ship mov­ing close to the speed of light. Rel­a­tiv­ity also im­plies that the time that an event takes seems to be slower by a fac­tor $1/\sqrt{1-(v/c)^2}$ if the event is seen by an ob­server in mo­tion com­pared to the lo­ca­tion where the event oc­curs. That is called time di­la­tion. Some high-en­ergy par­ti­cles gen­er­ated in space move so fast that they reach the sur­face of the earth though this takes much more time than the par­ti­cles would last at rest in a lab­o­ra­tory. The de­cay time in­creases be­cause of the mo­tion of the par­ti­cles. (Of course, as far as the par­ti­cles them­selves see it, the dis­tance to travel is a lot shorter than it seems to be to earth. For them, it is a mat­ter of length con­trac­tion.)

The fol­low­ing for­mu­lae give the rel­a­tivis­tic mass, mo­men­tum, and ki­netic en­ergy of a par­ti­cle in mo­tion:

\begin{displaymath}
m= \frac{m_0}{\sqrt{1-(v/c)^2}}
\qquad
p = m v
\qquad
T = mc^2 - m_0c^2
\end{displaymath}

where $m_0$ is the rest mass of the particle, i.e. the mass as measured by an observer to whom the particle seems at rest. The formula for kinetic energy reflects the fact that even if a particle is at rest, it still has an amount of built-in energy equal to $m_0c^2$ left. The total energy of a particle in empty space, being kinetic and rest mass energy, is given by

\begin{displaymath}
E = m c^2 = \sqrt{(m_0c^2)^2 + c^2p^2}
\end{displaymath}

as can be verified by substituting in the expression for the momentum, in terms of the rest mass, and then taking both terms inside the square root under a common denominator. For small linear momentum $p$, the kinetic energy $T$ can be approximated as $\frac12m_0v^2$.
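Spelled out, that verification is a one-line computation:

\begin{displaymath}
(m_0c^2)^2 + c^2p^2
= \frac{m_0^2c^4\left[1-(v/c)^2\right] + m_0^2c^2v^2}{1-(v/c)^2}
= \frac{m_0^2c^4}{1-(v/c)^2}
= \left(mc^2\right)^2
\end{displaymath}

And expanding the square root for small $p$ gives $E \approx m_0c^2 + p^2/2m_0$, so that $T = E - m_0c^2 \approx p^2/2m_0 = \frac12m_0v^2$ indeed.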

Relativity seemed quite a dramatic departure from Newtonian physics when it was developed. Then quantum mechanics started to emerge...

rot    
The rot of a vec­tor $\vec{v}$ is de­fined as $\mathop{\rm curl}\nolimits \vec{v}$ $\vphantom0\raisebox{1.5pt}{$\equiv$}$ $\mathop{\rm {rot}}\vec{v}$ $\vphantom0\raisebox{1.5pt}{$\equiv$}$ $\nabla$ $\times$ $\vec{v}$.

$
\setbox 0=\hbox{$S$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{${\cal S}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The ac­tion in­te­gral of La­grangian me­chan­ics, {A.1}

$
\setbox 0=\hbox{${\mathscr S}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
See­beck co­ef­fi­cient.

S    
Of­ten used to in­di­cate a state of zero or­bital an­gu­lar mo­men­tum.

$
\setbox 0=\hbox{$s$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

s    
May in­di­cate:

scalar    
A quan­tity that is not a vec­tor, a quan­tity that is just a sin­gle num­ber.

$
\setbox 0=\hbox{$\sin$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The sine func­tion, a pe­ri­odic func­tion os­cil­lat­ing be­tween 1 and -1 as shown in [41, pp. 40-]. Good to re­mem­ber: $\cos^2\alpha+\sin^2\alpha$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1 and $\sin2\alpha$ $\vphantom0\raisebox{1.5pt}{$=$}$ $2\sin\alpha\cos\alpha$ and $\cos2\alpha$ $\vphantom0\raisebox{1.5pt}{$=$}$ $\cos^2\alpha-\sin^2\alpha$.

so­le­noidal    
A vec­tor $\vec{v}$ is so­le­noidal if its di­ver­gence $\nabla\cdot\vec{v}$ is zero.

spec­trum    
In this book, a spec­trum nor­mally means a plot of en­ergy lev­els along the ver­ti­cal axis. Of­ten, the hor­i­zon­tal co­or­di­nate is used to in­di­cate a sec­ond vari­able, such as the den­sity of states or the par­ti­cle ve­loc­ity.

For light (pho­tons), a spec­trum can be ob­tained ex­per­i­men­tally by send­ing the light through a prism. This sep­a­rates the col­ors in the light, and each color means a par­tic­u­lar en­ergy of the pho­tons.

The word spec­trum is also of­ten used in a more gen­eral math­e­mat­i­cal sense, but not in this book as far as I can re­mem­ber.

spher­i­cal co­or­di­nates    
The spher­i­cal co­or­di­nates $r$, $\theta$, and $\phi$ of an ar­bi­trary point P are de­fined as

Figure N.3: Spherical coordinates of an arbitrary point P.

In Cartesian coordinates, the unit vectors in the $x$, $y$, and $z$ directions are called ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$. Similarly, in spherical coordinates, the unit vectors in the $r$, $\theta$, and $\phi$ directions are called ${\hat\imath}_r$, ${\hat\imath}_\theta$, and ${\hat\imath}_\phi$. Here, say, the $\theta$ direction is defined as the direction of the change in position if you increase $\theta$ by an infinitesimally small amount while keeping $r$ and $\phi$ the same. Note therefore in particular that the direction of ${\hat\imath}_r$ is the same as that of ${\skew0\vec r}$; radially outward.

An ar­bi­trary vec­tor $\vec{v}$ can be de­com­posed in com­po­nents $v_r$, $v_\theta$, and $v_\phi$ along these unit vec­tors. In par­tic­u­lar

\begin{displaymath}
\vec v \equiv v_r {\hat\imath}_r + v_\theta {\hat\imath}_\theta + v_\phi {\hat\imath}_\phi
\end{displaymath}

Re­call from cal­cu­lus that in spher­i­cal co­or­di­nates, a vol­ume in­te­gral of an ar­bi­trary func­tion $f$ takes the form

\begin{displaymath}
\int f { \rm d}^3{\skew0\vec r}= \int\int\int f r^2 \sin\theta { \rm d}r {\rm d}\theta {\rm d}\phi
\end{displaymath}

In other words, the vol­ume el­e­ment in spher­i­cal co­or­di­nates is

\begin{displaymath}
{\rm d}V = {\rm d}^3 {\skew0\vec r}= r^2 \sin\theta { \rm d}r {\rm d}\theta {\rm d}\phi
\end{displaymath}

Often it is convenient to think of volume integrations as a two-step process: first perform an integration over the angular coordinates $\theta$ and $\phi$. Physically, that integrates over spherical surfaces. Then perform an integration over $r$ to integrate all the spherical surfaces together. The combined infinitesimal angular integration element

\begin{displaymath}
{\rm d}\Omega = \sin\theta {\rm d}\theta {\rm d}\phi
\end{displaymath}

is called the infinitesimal solid angle ${\rm d}\Omega$. In two-dimensional polar coordinates $r$ and $\theta$, the equivalent would be the infinitesimal polar angle ${\rm d}\theta$. Recall that ${\rm d}\theta$, (in proper radians of course), equals the arclength of an infinitesimal part of the circle of integration divided by the circle radius. Similarly ${\rm d}\Omega$ is the surface of an infinitesimal part of the sphere of integration divided by the square of the sphere radius.

See the $\nabla$ en­try for the gra­di­ent op­er­a­tor and Lapla­cian in spher­i­cal co­or­di­nates.
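As a sanity check on the volume element, one can do the integral numerically for the simplest case $f=1$ over a sphere; a minimal sketch, assuming SciPy is available:

\begin{verbatim}
import numpy as np
from scipy import integrate

# Volume of a sphere of radius 2 via the spherical volume element:
# f = 1 and dV = r^2 sin(theta) dr dtheta dphi.
R = 2.0
vol, _ = integrate.tplquad(
    lambda phi, theta, r: r**2 * np.sin(theta),
    0.0, R,              # r from 0 to R
    0.0, np.pi,          # theta from 0 to pi
    0.0, 2.0 * np.pi)    # phi from 0 to 2 pi

print(vol, 4.0 / 3.0 * np.pi * R**3)   # both about 33.51
\end{verbatim}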

Stokes' The­o­rem    
This the­o­rem, first de­rived by Kelvin and first pub­lished by some­one else I can­not re­call, says that for any rea­son­ably smoothly vary­ing vec­tor $\vec{v}$,

\begin{displaymath}
\int_A \left(\nabla \times \vec v\right) \cdot {\rm d}\vec A
=
\oint \vec v \cdot {\rm d}\vec r
\end{displaymath}

where the first in­te­gral is over any smooth sur­face area $A$ and the sec­ond in­te­gral is over the edge of that sur­face. How did Stokes get his name on it? He tor­tured his stu­dents with it, that’s how!

One im­por­tant con­se­quence of the Stokes the­o­rem is for vec­tor fields $\vec{v}$ that are ir­ro­ta­tional, i.e. that have $\nabla$ $\times$ $\vec{v}$ $\vphantom0\raisebox{1.5pt}{$=$}$ 0. Such fields can be writ­ten as

\begin{displaymath}
\vec v = \nabla f \qquad
f({\skew0\vec r}) \equiv \int_{{\skew0\vec r}_{\rm ref}}^{{\skew0\vec r}}
\vec v({\underline{\skew0\vec r}})\cdot{\rm d}{\underline{\skew0\vec r}}
\end{displaymath}

Here ${\skew0\vec r}_{\rm {ref}}$ is the po­si­tion of an ar­bi­trar­ily cho­sen ref­er­ence point, usu­ally the ori­gin. The rea­son the field $\vec{v}$ can be writ­ten this way is the Stokes the­o­rem. Be­cause of the the­o­rem, it does not make a dif­fer­ence along which path from ${\skew0\vec r}_{\rm {ref}}$ to ${\skew0\vec r}$ you in­te­grate. (Any two paths give the same an­swer, as long as $\vec{v}$ is ir­ro­ta­tional every­where in be­tween the paths.) So the de­f­i­n­i­tion of $f$ is un­am­bigu­ous. And you can ver­ify that the par­tial de­riv­a­tives of $f$ give the com­po­nents of $\vec{v}$ by ap­proach­ing the fi­nal po­si­tion ${\skew0\vec r}$ in the in­te­gra­tion from the cor­re­spond­ing di­rec­tion.
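A minimal numerical illustration of that path independence, with an irrotational field made up for the occasion, $\vec v = \nabla(x^2y)$, and brute-force segment integration:

\begin{verbatim}
import numpy as np

# Irrotational example field: v = (2xy, x^2, 0), the gradient of x^2 y.
def v(r):
    x, y, z = r
    return np.array([2.0 * x * y, x * x, 0.0])

def line_integral(path, n=20000):
    """Integrate v . dr along a sequence of straight segments."""
    total = 0.0
    for a, b in zip(path[:-1], path[1:]):
        for s in np.linspace(0.0, 1.0, n, endpoint=False):
            r = a + s * (b - a)
            total += v(r) @ (b - a) / n
    return total

start = np.array([0.0, 0.0, 0.0])
end = np.array([1.0, 2.0, 0.0])
corner = np.array([1.0, 0.0, 0.0])

# Two different paths from start to end give the same f = x^2 y = 2.
print(line_integral([start, end]))           # about 2.0
print(line_integral([start, corner, end]))   # about 2.0
\end{verbatim}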

sym­me­try    
A sym­me­try is an op­er­a­tion un­der which an ob­ject does not change. For ex­am­ple, a hu­man face is al­most, but not com­pletely, mir­ror sym­met­ric: it looks al­most the same in a mir­ror as when seen di­rectly. The elec­tri­cal field of a sin­gle point charge is spher­i­cally sym­met­ric; it looks the same from what­ever an­gle you look at it, just like a sphere does. A sim­ple smooth glass (like a glass of wa­ter) is cylin­dri­cally sym­met­ric; it looks the same what­ever way you ro­tate it around its ver­ti­cal axis.

$
\setbox 0=\hbox{$T$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{${\cal T}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Trans­la­tion op­er­a­tor that trans­lates a wave func­tion through space. The amount of trans­la­tion is usu­ally in­di­cated by a sub­script.

$
\setbox 0=\hbox{$t$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

tem­per­a­ture    
A mea­sure of the heat mo­tion of the par­ti­cles mak­ing up macro­scopic ob­jects. At ab­solute zero tem­per­a­ture, the par­ti­cles are in the ground state of low­est pos­si­ble en­ergy.

triple prod­uct    
A prod­uct of three vec­tors. There are two dif­fer­ent ver­sions:

$
\setbox 0=\hbox{$U$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{${\cal U}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
The time shift op­er­a­tor: ${\cal U}(\tau,t)$ changes the wave func­tion $\Psi(\ldots;t)$ into $\Psi(\ldots;t+\tau)$. If the Hamil­ton­ian is in­de­pen­dent of time

\begin{displaymath}
{\cal U}(\tau,t) = {\cal U}_\tau = e^{-{\rm i}H \tau/\hbar}
\end{displaymath}
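A minimal sketch of this operator at work for a made-up two-state Hamiltonian, assuming SciPy for the matrix exponential and units in which $\hbar$ $\vphantom0\raisebox{1.5pt}{$=$}$ 1:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Toy two-level Hamiltonian (Hermitian); numbers made up for illustration.
hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
tau = 0.3

U = expm(-1j * H * tau / hbar)   # the time shift operator

# U is unitary, so it preserves the norm of the wave function.
psi = np.array([1.0, 0.0], dtype=complex)
psi_later = U @ psi
print(np.linalg.norm(psi), np.linalg.norm(psi_later))   # both 1.0
\end{verbatim}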

$
\setbox 0=\hbox{$u$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

u    
May in­di­cate the atomic mass con­stant, equiv­a­lent to 1.660 538 92 10$\POW9,{-27}$ kg or 931.494 06 MeV/$c^2$.

$
\setbox 0=\hbox{$V$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{${\cal V}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Vol­ume.

$
\setbox 0=\hbox{$v$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{$\vec{v}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

vec­tor    
Sim­ply put, a list of num­bers. A vec­tor $\vec{v}$ in in­dex no­ta­tion is a set of num­bers $\{v_i\}$ in­dexed by an in­dex $i$. In nor­mal three-di­men­sional Carte­sian space, $i$ takes the val­ues 1, 2, and 3, mak­ing the vec­tor a list of three num­bers, $v_1$, $v_2$, and $v_3$. These num­bers are called the three com­po­nents of $\vec{v}$. The list of num­bers can be vi­su­al­ized as a col­umn, and is then called a ket vec­tor, or as a row, in which case it is called a bra vec­tor. This con­ven­tion in­di­cates how mul­ti­pli­ca­tion should be con­ducted with them. A bra times a ket pro­duces a sin­gle num­ber, the dot prod­uct or in­ner prod­uct of the vec­tors:

\begin{displaymath}
\left(1,3,5\right)\left(\begin{array}{c}7 \\ 11 \\ 13\end{array}\right)
= 1\times 7 + 3\times 11 + 5\times 13 = 105
\end{displaymath}

To turn a ket into a bra for pur­poses of tak­ing in­ner prod­ucts, write the com­plex con­ju­gates of its com­po­nents as a row.
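In code, this is just a dot product with a conjugated bra; a minimal sketch assuming NumPy (the first pair repeats the display above, the complex pair is made up for illustration):

\begin{verbatim}
import numpy as np

ket = np.array([7.0, 11.0, 13.0])
bra = np.array([1.0, 3.0, 5.0])
print(bra @ ket)        # 105, as in the display above

# With complex components, conjugate the bra side first:
a = np.array([1.0 + 1.0j, 2.0])
b = np.array([3.0, 1.0 - 2.0j])
print(np.vdot(a, b))    # conj(a) . b = (5-7j)
\end{verbatim}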

For­mal de­f­i­n­i­tions of vec­tors vary, but real math­e­mati­cians will tell you that vec­tors are ob­jects that can be ma­nip­u­lated in cer­tain ways (ad­di­tion and mul­ti­pli­ca­tion by a scalar). Some physi­cists de­fine vec­tors as ob­jects that trans­form in a cer­tain way un­der co­or­di­nate trans­for­ma­tion (one-di­men­sion­al ten­sors); that is not the same thing.

vec­to­r­ial prod­uct    
A vectorial product, or cross product, is a product of vectors that produces another vector. If

\begin{displaymath}
\vec c=\vec a\times\vec b,
\end{displaymath}

it means in in­dex no­ta­tion that the $i$-th com­po­nent of vec­tor $\vec{c}$ is

\begin{displaymath}
c_i = a_{{\overline{\imath}}} b_{{\overline{\overline{\imath}}}}
- a_{{\overline{\overline{\imath}}}} b_{{\overline{\imath}}}
\end{displaymath}

where ${\overline{\imath}}$ is the in­dex fol­low­ing $i$ in the se­quence 123123..., and ${\overline{\overline{\imath}}}$ the one pre­ced­ing it. For ex­am­ple, $c_1$ will equal $a_2b_3-a_3b_2$.
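A minimal sketch of this cyclic-index recipe, assuming NumPy and checked against NumPy's own cross product (the input vectors are made up for illustration):

\begin{verbatim}
import numpy as np

def cross(a, b):
    """c_i = a_ibar b_ibarbar - a_ibarbar b_ibar, indices cyclic."""
    c = np.empty(3)
    for i in range(3):
        j = (i + 1) % 3   # the index following i in the cycle
        k = (i + 2) % 3   # the index preceding i in the cycle
        c[i] = a[j] * b[k] - a[k] * b[j]
    return c

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(cross(a, b))      # [-3.  6. -3.]
print(np.cross(a, b))   # same
\end{verbatim}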

W    
May in­di­cate:

$
\setbox 0=\hbox{$w$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{$\vec{w}$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Generic vec­tor.

$
\setbox 0=\hbox{$X$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Used in this book to in­di­cate a func­tion of $x$ to be de­ter­mined.

$
\setbox 0=\hbox{$x$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{$Y$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Used in this book to in­di­cate a func­tion of $y$ to be de­ter­mined.

$
\setbox 0=\hbox{$Y_l^m$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
Spher­i­cal har­monic. Eigen­func­tion of both an­gu­lar mo­men­tum in the $z$-di­rec­tion and of to­tal square an­gu­lar mo­men­tum.

$
\setbox 0=\hbox{$y$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{$Z$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate:

$
\setbox 0=\hbox{$z$}\kern-.025em\copy0\kern-\wd0 \kern.05em\copy0\kern-\wd0\kern-.025em\raise.0433em\box0 $    
May in­di­cate: