7.5 Symmetric Two-State Systems

This section will look at the simplest quantum systems that can have nontrivial time variation. They are called symmetric two-state systems. Despite their simplicity, a lot can be learned from them.

Symmetric two-state systems were encountered before in chapter 5.3. They describe such systems as the hydrogen molecule and molecular ion, chemical bonds, and ammonia. This section will show that they can also be used as a model for the fundamental forces of nature, and for the spontaneous emission of radiation by, say, excited atoms or atomic nuclei.

Two-state systems are characterized by just two basic states; these states will be called $\psi_1$ and $\psi_2$. For symmetric two-state systems, these two states must be physically equivalent. Or at least they must have the same expectation energy. And the Hamiltonian must be independent of time.

For example, for the hydrogen molecular ion $\psi_1$ is the state where the electron is in the ground state around the first proton. And $\psi_2$ is the state in which it is in the ground state around the second proton. Since the two protons are identical in their properties, there is no physical difference between the two states. So they have the same expectation energy.

The interesting quantum mechanics arises from the fact that the two states $\psi_1$ and $\psi_2$ are not energy eigenstates. The ground state of the system, call it $\psi_{\rm {gs}}$, is a symmetric combination of the two states. And there is also an excited energy eigenstate $\psi_{\rm {as}}$ that is an antisymmetric combination, chapter 5.3, {N.11}:

\begin{displaymath}
\psi_{\rm {gs}} = \frac{\psi_1+\psi_2}{\sqrt2}
\qquad
\psi_{\rm {as}} = \frac{\psi_1-\psi_2}{\sqrt2}
\end{displaymath}

The above expressions may be inverted to give the states $\psi_1$ and $\psi_2$ in terms of the energy states:

\begin{displaymath}
\psi_1 = \frac{\psi_{\rm {gs}}+\psi_{\rm {as}}}{\sqrt2}
\qquad
\psi_2 = \frac{\psi_{\rm {gs}}-\psi_{\rm {as}}}{\sqrt2}
\end{displaymath}

It follows that $\psi_1$ and $\psi_2$ are a 50/50 mixture of the low and high energy states. That means that they have uncertainty in energy. In particular they have a 50% chance for the ground state energy $E_{\rm {gs}}$ and a 50% chance for the elevated energy $E_{\rm {as}}$.

That makes their expectation energy $\langle{E}\rangle$ equal to the average of the two energies, and their uncertainty in energy $\Delta{E}$ equal to half the difference:

\begin{displaymath}
\langle{E}\rangle = \frac{E_{\rm {gs}}+E_{\rm {as}}}{2}
\qquad
\Delta E = \frac{E_{\rm {as}}-E_{\rm {gs}}}{2}
\end{displaymath}
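For the record, these values follow directly from the definitions of expectation value and standard deviation applied to a 50/50 mixture:

\begin{displaymath}
\langle E \rangle = {\textstyle\frac{1}{2}} E_{\rm {gs}} + {\textstyle\frac{1}{2}} E_{\rm {as}}
\qquad
\Delta E = \sqrt{{\textstyle\frac{1}{2}}\left(E_{\rm {gs}}-\langle E \rangle\right)^2
+ {\textstyle\frac{1}{2}}\left(E_{\rm {as}}-\langle E \rangle\right)^2}
= \frac{E_{\rm {as}}-E_{\rm {gs}}}{2}
\end{displaymath}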

The question in this section is how the system evolves in time. In general the wave function is, section 7.1,

\begin{displaymath}
\Psi =
c_{\rm {gs}} e^{-{\rm i}E_{\rm {gs}} t/\hbar} \psi_{\rm {gs}}
+ c_{\rm {as}} e^{-{\rm i}E_{\rm {as}} t/\hbar} \psi_{\rm {as}}
\end{displaymath}

Here $c_{\rm {gs}}$ and $c_{\rm {as}}$ are constants that are arbitrary except for the normalization requirement.

However, this section will be more concerned with what happens to the basic states $\psi_1$ and $\psi_2$, rather than to the energy eigenstates. So, it is desirable to rewrite the wave function above in terms of $\psi_1$ and $\psi_2$ and their properties. That produces:

\begin{displaymath}
\Psi = e^{-{\rm i}\langle E \rangle t/\hbar}
\left[
c_{\rm {gs}} e^{{\rm i}\Delta E\,t/\hbar} \frac{\psi_1 + \psi_2}{\sqrt2}
+ c_{\rm {as}} e^{-{\rm i}\Delta E\,t/\hbar} \frac{\psi_1 - \psi_2}{\sqrt2}
\right]
\end{displaymath}

This expression is of the general form

\begin{displaymath}
\Psi = c_1 \psi_1 + c_2 \psi_2
\end{displaymath}

According to the ideas of quantum mechanics, $\vert c_1\vert^2$ gives the probability that the system is in state $\psi_1$ and $\vert c_2\vert^2$ that it is in state $\psi_2$.

The most interesting case is the one in which the system is in the state $\psi_1$ at time zero. In that case the probabilities of the states $\psi_1$ and $\psi_2$ vary with time as

\begin{displaymath}
\fbox{$\displaystyle
\vert c_1\vert^2 = \cos^2(\Delta E \, t/\hbar)
\qquad
\vert c_2\vert^2 = \sin^2(\Delta E \, t/\hbar)
$} %
\end{displaymath} (7.27)

To verify this, first note from the general wave function that if the system is in state $\psi_1$ at time zero, the coefficients $c_{\rm {gs}}$ and $c_{\rm {as}}$ must be equal. Then identify what $c_1$ and $c_2$ are and compute their square magnitudes using the Euler formula (2.5).
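Spelled out, with $c_{\rm {gs}} = c_{\rm {as}} = 1/\sqrt{2}$ for proper normalization, the rewritten wave function above gives

\begin{displaymath}
c_1 = e^{-{\rm i}\langle E \rangle t/\hbar}\,
\frac{e^{{\rm i}\Delta E\,t/\hbar} + e^{-{\rm i}\Delta E\,t/\hbar}}{2}
= e^{-{\rm i}\langle E \rangle t/\hbar} \cos(\Delta E \, t/\hbar)
\qquad
c_2 = {\rm i}\, e^{-{\rm i}\langle E \rangle t/\hbar} \sin(\Delta E \, t/\hbar)
\end{displaymath}

Since the exponentials have magnitude one, taking square magnitudes produces (7.27).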

At time zero, the above probabilities produce state $\psi_1$ with 100% probability as they should. And so they do whenever the sine in the second expression is zero. However, at times at which the cosine is zero, the system is fully in state $\psi_2$. It follows that the system is oscillating between the states $\psi_1$ and $\psi_2$.


Key Points

•  Symmetric two-state systems are described by two quantum states $\psi_1$ and $\psi_2$ that have the same expectation energy $\langle{E}\rangle$.

•  The two states have an uncertainty in energy $\Delta{E}$ that is not zero.

•  The probabilities of the two states are given in (7.27). This assumes that the system is initially in state $\psi_1$.

•  The system oscillates between states $\psi_1$ and $\psi_2$.


7.5.1 A graphical example

Consider a simple example of the oscillatory behavior of symmetric two-state systems. The example system is the particle inside a closed pipe as discussed in chapter 3.5. It will be assumed that the wave function is of the form

\begin{displaymath}
\Psi=\sqrt{{\textstyle\frac{4}{5}}} e^{-{\rm i}E_{111}t/\hbar} \psi_{111}
+ \sqrt{{\textstyle\frac{1}{5}}} e^{-{\rm i}E_{211}t/\hbar} \psi_{211}
\end{displaymath}

Here $\psi_{111}$ and $\psi_{211}$ are the ground state and the second lowest energy state, and $E_{111}$ and $E_{211}$ are the corresponding energies, as given in chapter 3.5.

The above wave function is a valid solution of the Schrödinger equation since the two terms have the correct exponential dependence on time. And since the two terms have different energies, there is uncertainty in energy.

The relative probability to find the particle at a given position is given by the square magnitude of the wave function. That works out to

\begin{displaymath}
\vert\Psi\vert^2=\Psi^*\Psi =
{\textstyle\frac{4}{5}} \vert\psi_{111}\vert^2
+ {\textstyle\frac{4}{5}} \cos\left((E_{211}-E_{111})t/\hbar\right) \psi_{111}\psi_{211}
+ {\textstyle\frac{1}{5}} \vert\psi_{211}\vert^2
\end{displaymath}

Note that this result is time dependent. If there were no uncertainty in energy, which would be true if $E_{111}$ $=$ $E_{211}$, the square wave function would be independent of time.
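In particular, the pattern repeats whenever the relative phase $(E_{211}-E_{111})t/\hbar$ of the two terms increases by $2\pi$. So the period of the motion is

\begin{displaymath}
T = \frac{2\pi\hbar}{E_{211}-E_{111}}
\end{displaymath}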

Figure 7.6: A combination of two energy eigenfunctions seen at some typical times. [Animated figure; frames (a) through (d) show the probability distribution at four times.]

The probability for finding the particle is plotted at four representative times in figure 7.6. After time (d) the evolution repeats at (a). The wave function blob is sloshing back and forth in the pipe. That is much like the way a classical frictionless particle with kinetic energy would bounce back and forth between the ends of the pipe.

In terms of symmetric two-state systems, you can take the state $\psi_1$ to be the one in which the blob is at its leftmost position, figure 7.6(a). Then $\psi_2$ is the state in which the blob is at its rightmost position, figure 7.6(c). Note from the figure that these two states are physically equivalent. And they have uncertainty in energy.
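If you want to recreate figure 7.6 yourself, a minimal numerical sketch follows. It keeps only the variation along the pipe and assumes the standard pipe eigenfunctions $\sqrt{2/\ell}\,\sin(n\pi x/\ell)$ of chapter 3.5, in hypothetical convenience units with $\hbar=1$, $m=\frac12$, and $\ell=1$, so that $E_n=(n\pi)^2$. The quarter-period plot times are chosen in the spirit of panels (a) through (d).

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Convenience units: hbar = 1, m = 1/2, pipe length ell = 1.
# Then the pipe energy eigenvalues are E_n = (n pi)^2.
ell = 1.0
x = np.linspace(0.0, ell, 400)

def psi(n):
    # Energy eigenfunctions of the particle in a closed pipe.
    return np.sqrt(2.0 / ell) * np.sin(n * np.pi * x / ell)

def energy(n):
    return (n * np.pi / ell) ** 2

# The evolution repeats with the period of the relative phase.
T = 2.0 * np.pi / (energy(2) - energy(1))

for frac, label in [(0.0, "a"), (0.25, "b"), (0.5, "c"), (0.75, "d")]:
    t = frac * T
    Psi = (np.sqrt(4 / 5) * np.exp(-1j * energy(1) * t) * psi(1)
           + np.sqrt(1 / 5) * np.exp(-1j * energy(2) * t) * psi(2))
    plt.plot(x, np.abs(Psi) ** 2, label=f"({label}) t = {frac:g} T")

plt.xlabel("x")
plt.ylabel("probability density")
plt.legend()
plt.show()
\end{verbatim}

At $t=0$ the cross term pushes the blob toward one end of the pipe; half a period later the cosine has changed sign and the blob is at the other end.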


Key Points

•  A graphical example of a simple two-state system was given.


7.5.2 Particle exchange and forces

An important two-state system very similar to the simple example in the previous subsection is the hydrogen molecular ion. This ion consists of two protons and one electron.

The molecular ion can show oscillatory behavior very similar to that of the example. In particular, assume that the electron is initially in the ground state around the first proton, corresponding to state $\psi_1$. In that case, after some time interval $\Delta{t}$, the electron will be found in the ground state around the second proton, corresponding to state $\psi_2$. After another time interval $\Delta{t}$, the electron will be back around the first proton, and the cycle repeats. In effect, the two protons play catch with the electron!

That may be fun, but there is something more serious that can be learned. As is, there is no (significant) force between the two protons. However, there is a second similar play-catch solution in which the electron is initially around the second proton instead of around the first. If these two solutions are symmetrically combined, the result is the ground state of the molecular ion. In this state of lowered energy, the protons are bound together. In other words, there is now a force that holds the two protons together:

If two particles play catch, it can produce forces between these two particles.

A play-catch mechanism as described above is used in more advanced quantum mechanics to explain the forces of nature. For example, consider the correct, relativistic, description of electromagnetism, given by “quantum electrodynamics.” In it, the electromagnetic interaction between two charged particles comes about largely through processes in which one particle creates a photon that the other particle absorbs and vice versa. Charged particles play catch using photons.

That is much like how the protons in the molecular ion get bound together by exchanging the electron. Note however that the solution for the ion was based on the Coulomb potential. This potential implies instantaneous interaction at a distance: if, say, the first proton is moved, the electron and the other proton notice this instantaneously in the force that they experience. Classical relativity, however, does not allow effects that propagate at infinite speed. The highest possible propagation speed is the speed of light. In classical electromagnetics, charged particles do not really interact instantaneously. Instead charged particles interact with the electromagnetic field at their location. The electromagnetic field then communicates this to the other charged particles, at the speed of light. The Coulomb potential is merely a simple approximation, for cases in which the particle velocities are much less than the speed of light.

In a relativistic quantum description, the electromagnetic field is quantized into photons. (A concise introduction to this advanced topic is in addendum {A.23}.) Photons are bosons with spin 1. Similarly to classical electrodynamics, in the quantum description charged particles interact with photons at their location. They do not interact directly with other charged particles.

These are three-particle interactions, a boson and two fermions. For example, if an electron absorbs a photon, the three particles involved are the photon, the electron before the absorption, and the electron after the absorption. (Since in relativistic applications particles may be created or destroyed, a particle after an interaction should be counted separately from an identical particle that may exist before it.)

The ideas of quantum electrodynamics trace back to the early days of quantum mechanics. Unfortunately, there was the practical problem that the computations came up with infinite values. A theory that got around this problem was formulated in 1948 independently by Julian Schwinger and Sin-Itiro Tomonaga. A different theory was proposed that same year by Richard Feynman based on a more pictorial approach. Freeman Dyson showed that the two theories were in fact equivalent. Feynman, Schwinger, and Tomonaga received the Nobel prize in 1965 for this work; Dyson was not included. (The Nobel prize in physics is limited to a maximum of three recipients.)

Following the ideas of quantum electrodynamics and pioneering work by Sheldon Glashow, Steven Weinberg and Abdus Salam in 1967 independently developed a particle exchange model for the so-called “weak force.” All three received the Nobel prize for that work in 1979. Gerardus ’t Hooft and Martinus Veltman received the 1999 Nobel Prize for a final formulation of this theory that allows meaningful computations.

The weak force is responsible for the beta decay of atomic nuclei, among other things. It is of key importance for such nuclear reactions as the hydrogen fusion that keeps our sun going. In weak interactions, the exchanged particles are not photons, but one of three different bosons of spin 1: the negatively charged W$^{-}$ (think W for weak force), the positively charged W$^{+}$, and the neutral Z$^{0}$ (think Z for zero charge). You might call them the massives because they have a nonzero rest mass, unlike the photons of electromagnetic interactions. In fact, they have gigantic rest masses. The W$^{\pm}$ have an experimental rest mass energy of about 80 GeV (giga-electron-volt) and the Z$^{0}$ about 91 GeV. Compare that with the rest mass energy of a proton or neutron, less than a GeV, or an electron, less than a thousandth of a GeV. However, a memorable name like massives is of course completely unacceptable in physics. And neither would be weak-force carriers, because it is accurate and to the point. So physicists call them the “intermediate vector bosons.” That is also three words, but completely meaningless to most people and almost meaningless to the rest, {A.20}. It meets the requirements of physics well.

A typical weak interaction might involve the creation of say a W$^{-}$ by a quark inside a neutron and its absorption in the creation of an electron and an antineutrino. Now for massive particles like the intermediate vector bosons to be created out of nothing requires a gigantic quantum uncertainty in energy. Following the idea of the energy-time equality (7.9), such particles can only exist for extremely short times. And that makes the weak force of extremely short range.
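A crude estimate of that range can be made from the energy-time equality. A particle with rest mass energy $mc^2$ borrowed out of nothing can exist only for a time of rough order $\hbar/mc^2$. Even moving at nearly the speed of light, in that time it travels only a distance of order

\begin{displaymath}
r \sim \frac{\hbar c}{m c^2} \approx
\frac{197~\mbox{MeV fm}}{80\,000~\mbox{MeV}} \approx 0.0025~\mbox{fm}
\end{displaymath}

for a W. That is far smaller than the roughly 1 fm size of a proton.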

The theory of “quantum chromodynamics” describes the so-called “strong force” or “color force.” This force is responsible for such things as keeping atomic nuclei together.

The color force acts between “quarks.” Quarks are the constituents of “baryons” like the proton and the neutron, and of “mesons” like the pions. In particular, baryons consist of three quarks, while mesons consist of a quark and an antiquark. For example, a proton consists of two so-called up quarks and a third down quark. Since up quarks have electric charge $\frac23e$ and down quarks $-\frac13e$, the net charge of the proton $\frac23e+\frac23e-\frac13e$ equals $e$. Similarly, a neutron consists of one up quark and two down quarks. That makes its net charge $\frac23e-\frac13e-\frac13e$ equal to zero. As another example, a so-called $\pi^+$ meson consists of an up quark and an antidown quark. An antiparticle has the opposite charge from the corresponding particle, so the charge of the $\pi^+$ meson $\frac23e+\frac13e$ equals $e$, the same as the proton. Three antiquarks make up an antibaryon. That gives an antibaryon the opposite charge of the corresponding baryon. More exotic baryons and mesons may involve the strange, charm, bottom, and top flavors of quarks. (Yes, there are six of them. You might well ask, “Who ordered that?” as the physicist Rabi did in 1936 upon the discovery of the muon, a heavier version of the electron. He did not know the least of it.)

Quarks are fermions with spin $\frac12$ like electrons. However, quarks have an additional property called “color charge.” (This color charge has nothing to do with the colors you can see. There are just a few superficial similarities. Physicists love to give completely different things identical names because it promotes such hilarious confusion.) There are three quark colors called, you guessed it, red, green and blue. There are also three corresponding anticolors called cyan, magenta, and yellow.

Now the electric charge of quarks can be observed, for example in the form of the charge of the proton. But their color charge cannot be observed in our macroscopic world. The reason is that quarks can only be found in colorless combinations. In particular, in baryons each of the three quarks takes a different color. (For comparison, on a video screen full-blast red, green and blue produces a colorless white.) Similarly, in antibaryons, each of the antiquarks takes on a different anticolor. In mesons the quark takes on a color and the antiquark the corresponding anticolor. (For example on a video screen, if you define antigreen as magenta, i.e. full-blast red plus blue, then green and antigreen again produce white.)

Actually, it is a bit more complicated still than that. If you had a green and magenta flag, you might call it color-balanced, but you would definitely not call it colorless. At least not in this book. Similarly, a green-antigreen meson would not be colorless, and such a meson does not exist. An actual meson is a quantum superposition of the three possibilities red-antired, green-antigreen, and blue-antiblue. The meson color state is

\begin{displaymath}
\frac{1}{\sqrt3}(r\bar r + g \bar g + b \bar b)
\end{displaymath}

where a bar indicates an anticolor. Note that the quark has equal probabilities of being observed as red, green, or blue. Similarly the antiquark has equal probabilities of being observed as antired, antigreen, or antiblue, but always the anticolor of the quark.

In addition, the meson color state above is a one-of-a-kind, or “singlet,” state. To see why, suppose that, say, the final $b\bar{b}$ term had a minus sign instead of a plus sign. Then surely, based on symmetry arguments, there should also be states where the $g\bar{g}$ or $r\bar{r}$ term has the minus sign. And that cannot be true, because linear combinations of such states would produce states like the green-antigreen meson that are not colorless. So the only truly colorless possibility is the state above, where all three color-anticolor states have the same coefficient. (Do recall that a constant of magnitude one is indeterminate in quantum states. So if all three color-anticolor states had a minus sign, it would still be the same state.)

Similarly, an rgb baryon with the first quark red, the second green, and the third blue would be color-balanced but not colorless. So such a baryon does not exist. For baryons there are six different possible color combinations: there are three possibilities for which of the three quarks is red, times two possibilities for which of the remaining two quarks is green. An actual baryon is a quantum superposition of these six possibilities. Moreover, the combination is antisymmetric under color exchange:

\begin{displaymath}
\frac{1}{\sqrt6}(rgb - rbg + gbr - grb + brg - bgr)
\end{displaymath}

Equivalently, the combination is antisymmetric under quark exchange. That explains why the so-called $\Delta^{++}$ delta baryon can exist. This baryon consists of three up quarks in a symmetric spatial ground state and a symmetric spin $\frac32$ state, like ${\uparrow}{\uparrow}{\uparrow}$. Because of the antisymmetric color state, the antisymmetrization requirements for the three quarks can be satisfied. The color state above is again a singlet one. In terms of chapter 5.7, it is the unique Slater determinant that can be formed from three states for three particles.

It is believed that baryons and mesons cannot be taken apart into separate quarks to study quarks in isolation. In other words, quarks are subject to “confinement” inside colorless baryons and mesons. The problem with trying to take these apart is that the force between quarks does not become zero with distance like other forces. If you try to take a quark out of a baryon or meson, presumably eventually you will put in enough energy to create a quark-antiquark pair in between. That kills off the quark separation that you thought you had achieved.

The color force between quarks is due to the exchange of so-called “gluons.” Gluons are massless bosons with spin 1 like photons. However, photons do not carry electric charge. Gluons do carry color/anticolor combinations. That is one reason that quantum chromodynamics is enormously more difficult than quantum electrodynamics. Photons cannot move electric charge from one fermion to the next. But gluons allow the interchange of colors between quarks.

Also, because photons have no charge, they do not interact with other photons. But since gluons themselves carry color, gluons do interact with other gluons. In fact, both three-gluon and four-gluon interactions are possible. In principle, this makes it conceivable that “glueballs,” colorless combinations of gluons, might exist. However, at the time of writing, 2012, only baryons, antibaryons, and mesons have been solidly established.

Gluon-gluon interactions are related to an effective strengthening of the color force at larger distances. Or as physicists prefer to say, to an effective weakening of the interactions at short distances, called “asymptotic freedom.” This helps a bit because it allows some analysis to be done at very short distances, i.e. at very high energies.

Normally you would expect nine independent color/anticolor gluon states: there are three colors times three anticolors. But in fact only eight independent gluon states are believed to exist. Recall the colorless meson state described above. If a gluon could be in such a colorless state, it would not be subject to confinement. It could then be exchanged between distant protons and neutrons, giving rise to a long-range nuclear force. Since such a force is not observed, it must be concluded that gluons cannot be in the colorless state. So if the nine independent orthonormal color states are taken to be the colorless state plus eight more states orthogonal to it, then only the latter eight states can be observable. In terms of section 7.3, the relevant symmetry of the color force must be SU(3), not U(3).

Many people contributed to the theory of quantum chromodynamics. However, Murray Gell-Mann seemed to be involved in pretty much every stage. He received the 1969 Nobel Prize at least in part for his work on quantum chromodynamics. It is also he who came up with the name quark. The name is really not bad compared to many other terms in physics. However, Gell-Mann is also responsible for not spelling “color” as “qolor.” That would have saved countless feeble explanations that, “No, this color has absolutely nothing to do with the color that you see in nature.” So far nobody has been able to solve that problem, but David Gross, David Politzer and Frank Wilczek did manage to discover the asymptotic freedom mentioned above. For that they were awarded the 2004 Nobel Prize in Physics.

It may be noted that Gell-Mann initially called the three colors red, white, and blue. Just like the colors of the US flag, in short. Or of the Netherlands and Taiwan, to mention a few others. Huang, [27, p. 167], born in China, with a red and yellow flag, claims red, yellow and green are now the conventional choice. He must live in a world different from ours. Sorry, but the honor of having the color-balanced (but not colorless) flag goes to Azerbaijan.

The force of gravity is supposedly due to the exchange of particles called “gravitons.” They should be massless bosons with spin 2. However, it is hard to experiment with gravity because of its weakness on human scales. The graviton remains unconfirmed. Worse, the exact place of gravity in quantum mechanics remains very controversial.


Key Points

•  The fundamental forces are due to the exchange of particles.

•  The particles are photons for electromagnetism, intermediate vector bosons for the weak force, gluons for the color force, and presumably gravitons for gravity.


7.5.3 Spontaneous emission

Symmetric two-state systems provide the simplest model for spontaneous emission of radiation by atoms or atomic nuclei. The general ideas are the same whether it is an atom or nucleus, and whether the radiation is electromagnetic (like visible light) or nuclear alpha or beta radiation. But to be specific, this subsection will use the example of an excited atomic state that decays to a lower energy state by releasing a photon of electromagnetic radiation. The conservation laws applicable to this process were discussed earlier in section 7.4. This subsection wants to examine the actual mechanics of the emission process.

First, there are some important terms and concepts that must be mentioned. You will encounter them all the time in decay processes.

The big thing is that decay processes are random. A typical atom in an excited state $\psi_{\rm {H}}$ will after some time transition to a state of lower energy $\psi_{\rm {L}}$ while releasing a photon. But if you take a second identical atom in the exact same excited state, the time after which this atom transitions will be different.

Still, the decay process is not completely unpredictable. Averages over large numbers of atoms have meaningful values. In particular, suppose that you have a very large number $I$ of identical excited atoms. Then the “decay rate” is by definition

\begin{displaymath}
\fbox{$\displaystyle
\lambda = - \frac{1}{I} \frac{{\rm d}I}{{\rm d}t}
$} %
\end{displaymath} (7.28)

It is the relative fraction $-{\rm d}I/I$ of excited atoms that disappears per unit time through transitions to a lower energy state. The decay rate has a precise value for a given atomic state. It is not a random number.

To be precise, the above decay rate is better called the specific decay rate. The actual decay rate is usually defined to be simply $-{\rm d}I/{\rm d}t$. But anyway, decay rate is not a good term to use in physics. It is much too clear. Sometimes the term “spontaneous emission rate” or “transition rate” is used, especially in the context of atoms. But that is even worse. A better and very popular choice is “decay constant.” But, while constant is a term that can mean anything, it really is still far too transparent. How does “disintegration constant” sound? Especially since the atom hardly disintegrates in the transition? Why not call it the [specific] “activity,” come to think of it? Activity is another of these vague terms. Another good one is “transition probability,” because a probability should be nondimensional and $\lambda$ is per unit time. May as well call it “radiation probability” then. Actually, many references will use a bunch of these terms interchangeably on the same page.

In fact, would it not be a good thing to take the inverse of the decay rate? That allows another term to be defined for essentially the same thing: the [mean] “lifetime” of the excited state:

\begin{displaymath}
\fbox{$\displaystyle
\tau \equiv \frac{1}{\lambda}
$} %
\end{displaymath} (7.29)

Do remember that this is not really a lifetime. Each individual atom has its own lifetime. (However, if you average the lifetimes of a large number of identical atoms, you will in fact get the mean lifetime above.)

Also remember, if more than one decay process occurs for the excited state:

Add decay rates, not lifetimes.

The sum of the decay rates gives the total decay rate of the atomic state. The reciprocal of that total is the correct lifetime.
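For example, as a quick sanity check with made-up numbers: if an excited state can decay through two different processes that each by themselves would give it a lifetime of 2 ns, then

\begin{displaymath}
\lambda = \frac{1}{2~\mbox{ns}} + \frac{1}{2~\mbox{ns}} = \frac{1}{1~\mbox{ns}}
\qquad
\tau = \frac{1}{\lambda} = 1~\mbox{ns}
\end{displaymath}

The correct lifetime is 1 ns, not the 4 ns that adding the lifetimes would give.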

Now suppose that initially there is a large number $I_0$ of excited atoms. Then the number of excited atoms $I$ left at a later time $t$ is

\begin{displaymath}
\fbox{$\displaystyle
I = I_0 e^{-\lambda t}
$} %
\end{displaymath} (7.30)

So the number of excited atoms left decays exponentially in time. To check this expression, just check that it is right at time zero and plug it into the definition for the decay rate.
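Spelled out, at time zero the exponential is one, giving $I = I_0$ as it should. And substitution into the definition (7.28) gives the correct decay rate:

\begin{displaymath}
- \frac{1}{I} \frac{{\rm d}I}{{\rm d}t}
= - \frac{1}{I_0 e^{-\lambda t}} \left(-\lambda I_0 e^{-\lambda t}\right)
= \lambda
\end{displaymath}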

A quantity with a clearer physical meaning than lifetime is the time for about half of a given large sample of excited atoms to decay. This time is called the “half-life” $\tau_{1/2}$. From (7.30) and (7.29) above, it follows that the half-life is shorter than the lifetime by a factor $\ln2$:

\begin{displaymath}
\fbox{$\displaystyle
\tau_{1/2} = \tau \ln 2
$} %
\end{displaymath} (7.31)

Note that $\ln2$ is less than one.
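To verify (7.31), set $I = \frac12 I_0$ in (7.30) and solve:

\begin{displaymath}
e^{-\lambda \tau_{1/2}} = {\textstyle\frac{1}{2}}
\quad\Longrightarrow\quad
\tau_{1/2} = \frac{\ln 2}{\lambda} = \tau \ln 2
\end{displaymath}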

The purpose in this subsection is now to understand some of the above concepts in decays using the model of a symmetric two-state system.

The initial state $\psi_1$ of the system is taken to be an atom in a high-energy atomic state $\psi_{\rm {H}}$, figure 7.3. The state seems to be a state of definite energy. That would make it a stationary state, section 7.1.4, and hence it would not decay. However, $\psi_1$ is not really an energy eigenstate, because an atom is always perturbed by a certain amount of ambient electromagnetic radiation. The actual state $\psi_1$ therefore has some uncertainty in energy $\Delta{E}$.

The decayed state $\psi_2$ consists of an atomic state of lowered energy $\psi_{\rm {L}}$ plus an emitted photon. This state seems to have the same combined energy as the initial state $\psi_1$. It too, however, is not really an energy eigenstate. Otherwise it would always have existed. In fact, it has the same expectation energy and uncertainty in energy as the initial state, section 7.1.3.

The probabilities of the two states were given at the start of this section. They were:

\begin{displaymath}
\vert c_1\vert^2 = \cos^2(\Delta E  t/\hbar)
\qquad
\vert c_2\vert^2 = \sin^2(\Delta E  t/\hbar) %
\end{displaymath} (7.32)

At time zero, the system is in state $\psi_1$ for sure, but after a time interval $\Delta{t}$ it is in state $\psi_2$ for sure. The atom has emitted a photon and decayed. An expression for the time that this takes can be found by setting the angle in the sine equal to $\frac12\pi$. That gives:

\begin{displaymath}
\Delta t = {\textstyle\frac{1}{2}} \pi \hbar / \Delta E
\end{displaymath}

But note that there is a problem. According to (7.32), after another time interval $\Delta{t}$ the probabilities of the two states will revert back to the initial ones. That means that the low energy atomic state absorbs the photon again and so returns to the excited state!

Effects like that do occur in nuclear magnetic resonance, chapter 13.6, or for atoms in strong laser light and high vacuum, [52, pp. 147-152]. But normally, decayed atoms stay decayed.

To explain that, it must be assumed that the state of the system is measured according to the rules of quantum mechanics, chapter 3.4. The macroscopic surroundings observes that a photon is released well before the original state can be restored. In the presence of such significant interaction with the macroscopic surroundings, the two-state evolution as described above is no longer valid. In fact, the macroscopic surroundings will have become firmly committed to the fact that the photon has been emitted. Little chance for the atom to get it back under such conditions.

In an improved model of the transition process, section 7.6.1, the need for measurement remains. However, the reasons get more complex.

Interactions with the surroundings are generically called collisions. For example, a real-life atom in a gas will periodically collide with neighboring atoms and other particles. If a process is fast enough that no interactions with the surroundings occur during the time interval of interest, then the process takes place in the so-called “collisionless regime.” Nuclear magnetic resonance and atoms in strong laser light and high vacuum may be in this regime.

However, normal atomic decays take place in the so-called “collision-dominated regime.” Here collisions with the surroundings occur almost immediately.

To model that, take the time interval between collisions to be $t_{\rm {c}}$. Assume that the atom evolves as an unperturbed two-state system until time $t_{\rm {c}}$. At that time however, the atom is measured by its surroundings and it is either found to be in the initial excited state $\psi_1$ or in the decayed state with photon $\psi_2$. According to the rules of quantum mechanics the result is random. However, it is not completely random. The probability $P_{1\to2}$ for the atom to be found to be decayed is the square magnitude $\vert c_2\vert^2$ of the coefficient of state $\psi_2$.

That square magnitude was given in (7.32). But it may be approximated to:

\begin{displaymath}
P_{1\to2} = \frac{\vert\Delta E\vert^2}{\hbar^2} t_{\rm {c}}^2
\end{displaymath}

This approximates the sine in (7.32) by its argument, since the time $t_{\rm {c}}$ is assumed small enough that the argument is small.

Note that the decay process has become probabilistic. You cannot say for sure whether the atom will be decayed or not at time $t_{\rm {c}}$. You can only give the chances. See chapter 8.6 for a further discussion of that philosophical issue.

However, if you have not just one excited atom, but a large number $I$ of them, then $P_{1\to2}$ above is the relative fraction that will be found to be decayed at time $t_{\rm {c}}$. The remaining atoms, which are found to be in the excited state, (or rather, have been pushed back into the excited state), start from scratch. Then at time $2t_{\rm {c}}$, a fraction $P_{1\to2}$ of these will be found to be decayed. And so on. Over time the number $I$ of excited atoms decreases to zero.
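A minimal numerical sketch of this repeated-measurement picture follows. It checks that the process indeed produces the exponential decay (7.30). The parameter values are hypothetical, chosen only so that $\Delta E\,t_{\rm c}/\hbar$ is small, and the units are taken so that $\hbar=1$:

\begin{verbatim}
import numpy as np

# Hypothetical parameter values, in units with hbar = 1:
dE = 0.05     # uncertainty in energy, Delta E
tc = 1.0      # time between collisions (measurements)
I0 = 100_000  # initial number of excited atoms

P = np.sin(dE * tc) ** 2  # decay probability per interval, from (7.32)
lam = P / tc              # predicted decay rate lambda

rng = np.random.default_rng(1)
I = I0
for step in range(1, 1001):
    I -= rng.binomial(I, P)  # each collision finds a fraction P decayed
    if step % 250 == 0:
        expected = I0 * np.exp(-lam * step * tc)
        print(f"t = {step * tc:6.0f}  simulated {I:6d}  "
              f"exponential {expected:9.1f}")
\end{verbatim}

Each run scatters a bit around the exponential, but the trend is unmistakable. Note also that halving $t_{\rm c}$ in this sketch halves the resulting decay rate; that troublesome dependence on the surroundings is discussed below.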

As mentioned earlier, the relative fraction of excited atoms that disappears per unit time is called the decay rate $\lambda$. That can be found by simply dividing the decay probability $P_{1\to2}$ above by the time $t_{\rm {c}}$ that the evolution took. So

\begin{displaymath}
\lambda_{1\to2} = \frac{\vert H_{21}\vert^2}{\hbar^2} t_{\rm {c}}
\qquad H_{21} = \Delta E = \langle\psi_2\vert H\vert\psi_1\rangle.
\end{displaymath}

Here the uncertainty in energy $\Delta{E}$ was identified in terms of the Hamiltonian $H$ using the analysis of chapter 5.3.

Physicists call $H_{21}$ the “matrix element.” That is well below their usual form, because it really is a matrix element. But before you start seriously doubting the capability of physicists to invariably come up with confusing terms, note that there are lots of different matrices in any advanced physical analysis. So the name does not give its secret away to nonspecialists. To enforce that, many physicists write matrix elements in the form $M_{21}$, because, hey, the word matrix starts with an m. That hides the fact that it is an element of a Hamiltonian matrix pretty well.

The good news is that the assumption of collisions has solved the problem of decayed atoms undecaying again. Also, the decay process is now probabilistic. And the decay rate $\lambda_{1\to2}$ above is a normal number, not a random one.

Unfortunately, there are a couple of major new problems. One problem is that the state $\psi_2$ has one more particle than state $\psi_1$: the emitted photon. That makes it impossible to evaluate the matrix element using nonrelativistic quantum mechanics as covered in this book. Nonrelativistic quantum mechanics does not allow for new particles to be created or old ones to be destroyed. To evaluate the matrix element, you need relativistic quantum mechanics. Section 7.8 will eventually manage to work around that limitation using a dirty trick. Addendum {A.24} gives the actual relativistic derivation of the matrix element. However, to really understand that addendum, you may have to read a couple of others.

An even bigger problem is that the decay rate above is proportional to the collision time $t_{\rm {c}}$. That makes it completely dependent on the details of the surroundings of the atom. But that is wrong. Atoms have very specific decay rates. These rates are the same under a wide variety of environmental conditions.

The basic problem is that in reality there is not just a single decay process for an excited atom; there are infinitely many. The derivation above assumed that the photon has an energy exactly given by the difference between the atomic states. However, there is uncertainty in energy one way or the other. Decays that produce photons whose frequency is ever so slightly different will occur too. To deal with that complication, asymmetric two-state systems must be considered. That is done in the next section.

Finally, a few words should probably be said about what collisions really are. Darn. Typically, they are pictured as atomic collisions. But that may be in large part because atomic collisions are quite well understood from classical physics. Atomic collisions do occur, and definitely need to be taken into account, like later in the derivations of {D.41}. But in the above description, collisions take on a second role as doing quantum mechanical measurements. In that second role, a collision has occurred if the system has been measured to be in one state or the other. Following the analysis of chapter 8.6, measurement should be taken to mean that the surroundings has become firmly committed that the system has decayed. In principle, that does not require any actual collision with the atom; the surroundings could simply observe that the photon is present. The bad news is that the entire process of measurement is really not well understood at all. In any case, the bottom line to remember is that collisions do not necessarily represent what you would intuitively call collisions. Their dual role is to represent the typical moment that the surroundings commits itself to the fact that a transition has occurred.


Key Points

•  The two-state system provides a model for the decay of excited atoms or nuclei.

•  Interaction with the surroundings is needed to make the decay permanent. That makes decays probabilistic.

•  The [specific] decay rate $\lambda$ is the relative fraction of particles that decays per unit time. Its inverse is the mean lifetime $\tau$ of the particles. The half-life $\tau_{1/2}$ is the time it takes for half the particles in a big sample to decay. It is shorter than the mean lifetime by a factor $\ln2$.

•  Always add decay rates, not lifetimes.