

2.7 Additional Points

This subsection describes a few further issues of importance for this book.


2.7.1 Dirac notation

Physicists like to write inner products such as $\langle f\vert Ag\rangle$ in Dirac notation:

\begin{displaymath}
\langle f \vert A \vert g\rangle \equiv \langle f \vert Ag\rangle
\end{displaymath}

since this conforms more closely to how you would think of it in linear algebra:

\begin{displaymath}
\begin{array}{ccc}
\langle f \vert & A & \vert g\rangle \\
\mbox{bra} & \mbox{operator} & \mbox{ket}
\end{array}
\end{displaymath}
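The correspondence can be checked numerically. In the sketch below, finite-dimensional vectors stand in for the functions: the bra is the conjugated row vector, the operator a matrix, and the ket a column vector. The particular numbers are of course just illustrative.

```python
import numpy as np

# Finite-dimensional stand-ins: kets are column vectors, operators matrices.
f = np.array([1.0 + 2.0j, 0.5j])
g = np.array([3.0, -1.0 + 1.0j])
A = np.array([[1.0, 2.0j], [0.0, 1.0j]])

bra_f = f.conj()   # <f| : the complex conjugate (row) vector
ket_g = g          # |g> : the plain (column) vector

# <f| A |g>, read left to right: bra times operator times ket.
sandwich = bra_f @ A @ ket_g

# It is the same number as the inner product <f|Ag>.
print(np.isclose(sandwich, np.vdot(f, A @ g)))  # True
```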

The various advanced ideas of linear algebra can be extended to operators in this way, but they will not be needed in this book.

One thing will be needed in some more advanced addenda, however: the case that the operator $A$ is not Hermitian. In that case, if you want to take $A$ to the other side of the inner product, you need to change it into a different operator. That operator is called the “Hermitian conjugate” of $A$. In physics, it is almost always indicated as $A^\dagger$. So, simply by definition,

\begin{displaymath}
\langle f \vert Ag\rangle \equiv \langle A^\dagger f \vert g\rangle
\end{displaymath}
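For matrices, the Hermitian conjugate is simply the complex conjugate transpose, which makes the definition easy to spot-check. A minimal sketch, again with finite-dimensional vectors standing in for the functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random complex "operator" A and complex "functions" f and g.
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
f = rng.standard_normal(3) + 1j * rng.standard_normal(3)
g = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# For matrices, the Hermitian conjugate is the conjugate transpose.
A_dagger = A.conj().T

# <f|Ag> equals <A† f|g>; note np.vdot conjugates its first argument.
lhs = np.vdot(f, A @ g)
rhs = np.vdot(A_dagger @ f, g)
print(np.isclose(lhs, rhs))  # True
```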

Then there are some more things that this book will not use. However, you will almost surely encounter these when you read other books on quantum mechanics.

First, the dagger is used much like a generalization of the complex conjugate,

\begin{displaymath}
f^\dagger \equiv f^* \qquad {\left\vert f\right\rangle}^\dagger \equiv {\left\langle f\hspace{0.3pt}\right\vert}
\end{displaymath}

etcetera. Applying a dagger a second time gives the original back. Also, if you work out the dagger on a product, you need to reverse the order of the factors. For example,

\begin{displaymath}
\Big(A^\dagger \vert f\rangle\Big)^\dagger \vert g\rangle
= \langle f \vert A \vert g\rangle
\end{displaymath}

In words, putting $A^\dagger\vert f\rangle$ into the left side of an inner product gives $\langle f\vert A$.
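Both rules can be verified the same way as before; note in particular how the order of the factors reverses when the dagger is worked out on a product. The matrices and vectors are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
f = rng.standard_normal(3) + 1j * rng.standard_normal(3)
g = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def dag(M):
    """Hermitian conjugate of a matrix: the conjugate transpose."""
    return M.conj().T

# The dagger of a product reverses the order of the factors.
assert np.allclose(dag(A @ B), dag(B) @ dag(A))

# Putting A†|f> into the left side of an inner product gives <f|A:
lhs = np.vdot(dag(A) @ f, g)   # (A†|f>)† |g>
rhs = np.vdot(f, A @ g)        # <f| A |g>
assert np.isclose(lhs, rhs)
```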

The second point will be illustrated for the case of vectors in three dimensions. Such a vector can be written as

\begin{displaymath}
\vec v = {\hat\imath}v_x + {\hat\jmath}v_y + {\hat k}v_z
\end{displaymath}

Here ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ are the three unit vectors in the axial directions. The components $v_x$, $v_y$, and $v_z$ can be found using dot products, $v_x = {\hat\imath}\cdot\vec v$ etcetera, so:

\begin{displaymath}
\vec v = {\hat\imath}({\hat\imath}\cdot\vec v) + {\hat\jmath}({\hat\jmath}\cdot\vec v)
+ {\hat k}({\hat k}\cdot\vec v)
\end{displaymath}

Symbolically, you can write this as

\begin{displaymath}
\vec v = ({\hat\imath}{\hat\imath}\cdot + {\hat\jmath}{\hat\jmath}\cdot + {\hat k}{\hat k}\cdot)\vec v
\end{displaymath}

In fact, the operator in parentheses can be defined by saying that for any vector $\vec{v}$, it gives the exact same vector back. Such an operator is called an “identity operator.”

The relation

\begin{displaymath}
({\hat\imath}{\hat\imath}\cdot + {\hat\jmath}{\hat\jmath}\cdot + {\hat k}{\hat k}\cdot) = 1
\end{displaymath}

is called the “completeness relation.” To see why, suppose you leave off the third part of the operator. Then

\begin{displaymath}
({\hat\imath}{\hat\imath}\cdot + {\hat\jmath}{\hat\jmath}\cdot) \vec v = {\hat\imath}v_x + {\hat\jmath}v_y
\end{displaymath}

The $z$-component is gone! Now the vector $\vec{v}$ gets projected onto the $x,y$-plane. The operator has become a “projection operator” instead of an identity operator by not summing over the complete set of unit vectors.
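In matrix terms, each piece like ${\hat\imath}{\hat\imath}\cdot$ is an outer product, so the completeness relation and the projection can be verified directly. A small sketch, with the vector $\vec v$ chosen arbitrarily:

```python
import numpy as np

# The three unit vectors in the axial directions.
i_hat = np.array([1.0, 0.0, 0.0])
j_hat = np.array([0.0, 1.0, 0.0])
k_hat = np.array([0.0, 0.0, 1.0])

# Each term like "i i·" is an outer product; summing all three gives
# the identity operator (the completeness relation).
identity = (np.outer(i_hat, i_hat) + np.outer(j_hat, j_hat)
            + np.outer(k_hat, k_hat))
assert np.allclose(identity, np.eye(3))

# Leaving off the k part gives a projection onto the x,y-plane instead.
proj_xy = np.outer(i_hat, i_hat) + np.outer(j_hat, j_hat)
v = np.array([2.0, -3.0, 5.0])
print(proj_xy @ v)  # [ 2. -3.  0.]  -- the z-component is gone
```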

You will almost always find these things in terms of bras and kets. To see how that looks, define

\begin{displaymath}
{\hat\imath}\equiv \vert 1\rangle \qquad
{\hat\jmath}\equiv \vert 2\rangle \qquad
{\hat k}\equiv \vert 3\rangle \qquad
\vec v \equiv \vert v\rangle
\end{displaymath}

Then

\begin{displaymath}
\vert v\rangle
= \vert 1\rangle\langle 1\vert v\rangle
+ \vert 2\rangle\langle 2\vert v\rangle
+ \vert 3\rangle\langle 3\vert v\rangle
= \sum_i \vert i\rangle\langle i\vert \, \vert v\rangle
\end{displaymath}

so the completeness relation looks like

\begin{displaymath}
\sum_{\mbox{\scriptsize all }i} \vert i\rangle\langle i\vert = 1
\end{displaymath}

If you do not sum over the complete set of kets, you get a projection operator instead of an identity one.
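The completeness relation holds for any complete orthonormal set of kets, not just the axis vectors. As a sketch, the orthonormal eigenvectors of an arbitrarily chosen Hermitian matrix work just as well:

```python
import numpy as np

# An arbitrary Hermitian matrix; its eigenvectors form a complete
# orthonormal set of kets.
H = np.array([[2.0, 1.0j], [-1.0j, 3.0]])
_, vecs = np.linalg.eigh(H)    # columns are the orthonormal kets |i>

# Summing |i><i| over all kets gives the identity operator ...
complete = sum(np.outer(vecs[:, i], vecs[:, i].conj()) for i in range(2))
assert np.allclose(complete, np.eye(2))

# ... while leaving a ket out gives a projection operator instead.
proj = np.outer(vecs[:, 0], vecs[:, 0].conj())
assert np.allclose(proj @ proj, proj)   # projecting twice changes nothing
```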


2.7.2 Additional independent variables

In many cases, the functions involved in an inner product may depend on more than a single variable $x$. For example, they might depend on the position $(x,y,z)$ in three-dimensional space.

The rule to deal with that is to ensure that the inner product integrations are over all independent variables. For example, in three spatial dimensions:

\begin{displaymath}
\langle f \vert g\rangle =
\int_{\mbox{\scriptsize all }x}
\int_{\mbox{\scriptsize all }y}
\int_{\mbox{\scriptsize all }z}
f^*(x,y,z)\, g(x,y,z) \,{\rm d}x\,{\rm d}y\,{\rm d}z
\end{displaymath}

Note that the time $t$ is a somewhat different variable from the rest; it is not included in the inner product integrations.
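Numerically, such an inner product becomes a sum over a three-dimensional grid of points. As a sketch, with a grid and test function chosen for illustration, a normalized Gaussian should give $\langle f\vert f\rangle = 1$ after integrating over all three variables:

```python
import numpy as np

# Grid covering "all" space well enough for a fast-decaying function.
x = np.linspace(-6.0, 6.0, 81)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')

# A normalized 3-D Gaussian, so <f|f> should come out as 1.
f = np.pi**-0.75 * np.exp(-(X**2 + Y**2 + Z**2) / 2)

# Inner product: integrate f* times f over all x, all y, and all z.
inner = np.sum(f.conj() * f) * dx**3
print(inner)   # close to 1
```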