7.6 Asymmetric Two-State Systems

Two-state systems are quantum systems for which just two states $\psi_1$ and $\psi_2$ are relevant. If the two states have different expectation energy, or if the Hamiltonian depends on time, the two-state system is asymmetric. Such systems must be considered to fix the problems in the description of spontaneous emission that turned up in the previous section.

The wave function of a two-state system is of the form

\begin{displaymath}
\Psi = c_1 \psi_1 + c_2 \psi_2
\end{displaymath} (7.33)

where $\vert c_1\vert^2$ and $\vert c_2\vert^2$ are the probabilities that the system is in state $\psi_1$, respectively $\psi_2$.

The coefficients $c_1$ and $c_2$ evolve in time according to

\begin{displaymath}
\fbox{$\displaystyle
{\rm i}\hbar \dot c_1 = \langle{E}_1\rangle c_1 + H_{12} c_2
\qquad
{\rm i}\hbar \dot c_2 = H_{21} c_1 + \langle{E}_2\rangle c_2
$} %
\end{displaymath} (7.34)

where

\begin{displaymath}
\langle{E}_1\rangle = \langle\psi_1\vert H\psi_1\rangle, \qquad
H_{12} = \langle\psi_1\vert H\psi_2\rangle, \qquad
H_{21} = \langle\psi_2\vert H\psi_1\rangle, \qquad
\langle{E}_2\rangle = \langle\psi_2\vert H\psi_2\rangle
\end{displaymath}

with $H$ the Hamiltonian. The Hamiltonian coefficients $\langle{E}_1\rangle$ and $\langle{E}_2\rangle$ are the expectation energies of states $\psi_1$ and $\psi_2$. The Hamiltonian coefficients $H_{12}$ and $H_{21}$ are complex conjugates. Either one is often referred to as the matrix element. To derive the above evolution equations, plug the two-state wave function $\Psi$ into the Schrödinger equation and take inner products with $\langle\psi_1\vert$ and $\langle\psi_2\vert$, using orthonormality of the states.
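Written out for the first coefficient, that derivation is short. Substituting $\Psi=c_1\psi_1+c_2\psi_2$ into the Schrödinger equation ${\rm i}\hbar\,\partial\Psi/\partial t=H\Psi$ and taking the inner product with $\langle\psi_1\vert$, using $\langle\psi_1\vert\psi_1\rangle=1$ and $\langle\psi_1\vert\psi_2\rangle=0$, gives:

```latex
% substitute the two-state wave function into the Schrodinger equation
{\rm i}\hbar\left(\dot c_1 \psi_1 + \dot c_2 \psi_2\right)
   = c_1 H\psi_1 + c_2 H\psi_2
% take the inner product with <psi_1| and use orthonormality
{\rm i}\hbar \dot c_1
   = c_1 \langle\psi_1\vert H\psi_1\rangle
   + c_2 \langle\psi_1\vert H\psi_2\rangle
   = \langle{E}_1\rangle c_1 + H_{12} c_2
```

Taking the inner product with $\langle\psi_2\vert$ instead produces the second evolution equation the same way.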

It will be assumed that the Hamiltonian is independent of time. In that case the evolution equations can be solved analytically. To do so, the analysis of chapter 5.3 can be used to find the energy eigenstates, and then the solution is given by the Schrödinger equation, section 7.1.2. However, the final solution is messy. The discussion here will restrict itself to some general observations about it.

It will be assumed that the solution starts out in the state $\psi_1$. That means that initially $\vert c_1\vert^2=1$ and $\vert c_2\vert^2=0$. Then in the symmetric case discussed in the previous section, the system oscillates between the two states. But that requires that the states have the same expectation energy.

This section addresses the asymmetric case, in which there is a nonzero difference $E_{21}$ between the two expectation energies:

\begin{displaymath}
\fbox{$\displaystyle
E_{21} \equiv \langle{E}_2\rangle - \langle{E}_1\rangle
$} %
\end{displaymath} (7.35)

In the asymmetric case, the system never gets into state $\psi_2$ completely. There is always some probability for state $\psi_1$ left. That can be seen from energy conservation: the expectation value of energy must stay the same during the evolution, and it would not if the system went fully into state 2. However, the system will periodically return fully to the state $\psi_1$. That is all that will be said about the exact solution here.
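These observations are easy to check numerically. The sketch below (illustrative numbers only; $\hbar$ is set to 1) solves the evolution equations (7.34) exactly through the energy eigenstates of the Hamiltonian matrix. The maximum probability of state $\psi_2$ comes out as $\vert H_{21}\vert^2/(\vert H_{21}\vert^2+(E_{21}/2)^2)$, the standard result for the exact two-state solution; that is below 1 for any nonzero $E_{21}$, while the system periodically returns to state $\psi_1$.

```python
import numpy as np

# Illustrative two-state system (hbar = 1; numbers are arbitrary).
E1, E2 = 0.0, 0.8            # expectation energies, so E21 = 0.8
H12 = 0.3                    # matrix element, taken real so H21 = H12
H = np.array([[E1, H12],
              [H12, E2]])    # matrix of the Hamiltonian coefficients

# Exact solution of i hbar c' = H c via the energy eigenstates.
w, V = np.linalg.eigh(H)     # eigenvalues w, orthonormal eigenvectors V
c0 = np.array([1.0, 0.0])    # system starts out fully in state psi_1
a0 = V.T @ c0                # initial amplitudes of the energy states

t = np.linspace(0.0, 60.0, 20001)
phases = np.exp(-1j * np.outer(t, w))      # exp(-i E_n t / hbar)
c = (phases * a0) @ V.T                    # c_1(t), c_2(t) at all times
P2 = np.abs(c[:, 1])**2                    # probability of state psi_2

# The exact solution caps the state-2 probability below 1:
E21 = E2 - E1
P2_max = H12**2 / (H12**2 + (E21 / 2)**2)  # = 0.36 for these numbers
```

Making $E_{21}$ larger at fixed $H_{12}$ pushes the cap further below 1, while $E_{21}=0$ recovers the symmetric case where the system oscillates fully between the two states.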

The remainder of this section will use an approximation called “time-dependent perturbation theory.” It assumes that the system stays close to a given state. In particular, it will be assumed that the system starts out in state $\psi_1$ and stays close to it.

That assumption results in the following probability for the system to be in the state $\psi_2$, {D.38}:

\begin{displaymath}
\fbox{$\displaystyle
\vert c_2\vert^2 \approx
\left(\frac{\vert H_{21}\vert t}{\hbar}\right)^2
\frac{\sin^2(E_{21}t/2\hbar)}{(E_{21}t/2\hbar)^2}
$} %
\end{displaymath} (7.36)

For this expression to be a valid approximation, the parenthetical ratio $\vert H_{21}\vert t/\hbar$ must be small. Note that the final factor shows the effect of the asymmetry of the two-state system; $E_{21}$ is the difference in expectation energy between the states. For a symmetric two-state system, the final factor would be 1, as follows from l’Hôpital’s rule.
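As a sanity check on (7.36), the sketch below (with $\hbar=1$ and arbitrary illustrative numbers) compares the perturbation-theory probability against the exact solution of the evolution equations for a case where $\vert H_{21}\vert t/\hbar$ stays small; the two agree closely:

```python
import numpy as np

# Small matrix element so that |H21| t / hbar stays small (hbar = 1).
E21 = 1.0                   # difference in expectation energies
H21 = 0.01                  # matrix element (taken real, illustrative)
H = np.array([[0.0, H21],
              [H21, E21]])

# Exact evolution through the energy eigenstates of H.
w, V = np.linalg.eigh(H)
a0 = V.T @ np.array([1.0, 0.0])       # start out in state psi_1

t = np.linspace(0.0, 20.0, 4001)      # |H21| t <= 0.2 throughout
c = (np.exp(-1j * np.outer(t, w)) * a0) @ V.T
P2_exact = np.abs(c[:, 1])**2

# Time-dependent perturbation theory, equation (7.36); note that
# numpy's sinc(x) is sin(pi x)/(pi x), hence the division by pi.
P2_approx = (H21 * t)**2 * np.sinc(E21 * t / 2 / np.pi)**2
```

For these numbers the parenthetical ratio never exceeds 0.2, and the two curves differ by far less than the (already tiny) probability itself.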


Key Points

- If the states in a two-state system have different expectation energies, the system is asymmetric.

- If the system is initially in the state $\psi_1$, it will never fully get into the state $\psi_2$.

- If the system is initially in the state $\psi_1$ and remains close to it, then the probability of the state $\psi_2$ is given by (7.36).


7.6.1 Spontaneous emission revisited

Decay of excited atomic or nuclear states was addressed in the previous section using symmetric two-state systems. But there were some issues. They can now be addressed.

The example is again an excited atomic state that transitions to a lower energy state by emitting a photon. The state $\psi_1$ is the excited atomic state. The state $\psi_2$ is the atomic state of lowered energy plus the emitted photon. These states seem to be states of definite energy, but if they really were, there would not be any decay. Energy states are stationary. There is a slight uncertainty in energy in the states.

Since there is such an uncertainty, it clearly does not make much sense to say that the initial and final expectation energies must be exactly the same.

In decay processes, a bit of energy slop $E_{21}$ must be allowed between the initial and final expectation values of energy. In practical terms, that means that the energy of the emitted photon can vary a bit. So its frequency can vary a bit.

Now in infinite space, the possible photon frequencies are infinitely close together. So you are now suddenly dealing with not just one possible decay process, but infinitely many. That would require messy, poorly justified mathematics full of so-called delta functions.

Instead, in this subsection it will be assumed that the atom is not in infinite space, but in a very large periodic box, chapter 6.17. The decay rate in infinite space can then be found by taking the limit that the box size becomes infinite. The advantage of a finite box is that the photon frequencies, and so the corresponding energies, are discrete. So you can sum over them rather than integrate.

Each possible photon state corresponds to a different final state $\psi_2$, each with its own coefficient $c_2$. The square magnitude of that coefficient gives the probability that the system can be found in that state $\psi_2$. And in the approximation of time-dependent perturbation theory, the coefficients $c_2$ do not interact; the square magnitude of each is given by (7.36). The total probability that the system can be found in some decayed state at a time $t_{\rm c}$ is then

\begin{displaymath}
P_{1\to{\rm all\ }2} = \sum_{\rm all\ states\ 2}
\left(\frac{\vert H_{21}\vert t_{\rm c}}{\hbar}\right)^2
\frac{\sin^2(E_{21}t_{\rm c}/2\hbar)}{(E_{21}t_{\rm c}/2\hbar)^2}
\end{displaymath}

The time $t_{\rm c}$ will again model the time between collisions, interactions with the surroundings that measure whether the atom has decayed or not. The decay rate, the number of transitions per unit time, is found by dividing by that time:

\begin{displaymath}
\lambda = \sum_{\rm all\ states\ 2}
\frac{\vert H_{21}\vert^2}{\hbar^2}\, t_{\rm c}\,
\frac{\sin^2(E_{21}t_{\rm c}/2\hbar)}{(E_{21}t_{\rm c}/2\hbar)^2}
\end{displaymath}

The final factor in the sum for the decay rate depends on the energy slop $E_{21}$. This factor is plotted graphically in figure 7.7. Notice that only a limited range around the point of zero slop contributes much to the decay rate. The spikes in the figure are intended to qualitatively indicate the discrete photon frequencies that are possible in the box that the atom is in. If the box is extremely big, then these spikes will be extremely close together.

Figure 7.7: Energy slop diagram: the factor $\sin^2(E_{21}t_{\rm c}/2\hbar)/(E_{21}t_{\rm c}/2\hbar)^2$ plotted against the scaled energy slop.

Now suppose that you plot the energy slop diagram against the actual photon energy instead of the scaled energy slop $E_{21}t_{\rm c}/2\hbar$. Then the center of the diagram will be at the nominal energy of the emitted photon and $E_{21}$ will be the deviation from that nominal energy. The spike at the center then represents the transition of atoms where the photon comes out with exactly the nominal energy. And those surrounding spikes whose height is not negligible represent slightly different photon energies that have a reasonable probability of being observed. So the energy slop diagram, plotted against photon energy, graphically represents the uncertainty in energy of the final state that will be observed.

Normally, the observed uncertainty in energy is very small in physical terms. The energy of the emitted photon is almost exactly the nominal one; that is what allows spectral analysis to identify atoms so well. So the entire diagram figure 7.7 is extremely narrow horizontally when plotted against the photon energy.

Figure 7.8: Schematized energy slop diagram: equal to 1 for energy slop below $\pi\hbar/t_{\rm c}$ in magnitude, and 0 beyond.

That suggests that you can simplify things by replacing the energy slop diagram by the schematized one of figure 7.8. This diagram is zero if the magnitude of the energy slop is greater than $\pi\hbar/t_{\rm c}$, and otherwise it is one. And it integrates to the same value as the original function. So, if the spikes are very closely spaced, they still sum to the same value as before. To be sure, if the square matrix element $\vert H_{21}\vert^2$ varied nonlinearly over the typical width of the diagram, the transition rate would now sum to something else. But it should not: if the variation in photon energy is negligible, then the variation in the matrix element should be negligible too.
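That the schematized diagram integrates to the same value can be verified directly. In the scaled variable $x=E_{21}t_{\rm c}/2\hbar$, the original factor is $\sin^2 x/x^2$, whose integral over all $x$ equals $\pi$; the box, of height 1 for $\vert x\vert<\pi/2$ (that is, $\vert E_{21}\vert<\pi\hbar/t_{\rm c}$), has the same integral. A crude numerical check over a finite range:

```python
import numpy as np

# Integrate sin^2(x)/x^2 over a long but finite range with a midpoint
# sum; the tail beyond X contributes roughly 1/(2X) per side, which is
# negligible for X = 1000.
dx = 1e-3
x = np.arange(dx / 2, 1000.0, dx)     # midpoints, avoiding x = 0
I_half = np.sum((np.sin(x) / x)**2) * dx
I_full = 2.0 * I_half                 # the integrand is even

# The schematized box: height 1 for |x| < pi/2, zero beyond.
I_box = 1.0 * np.pi                   # width pi times height 1
```

Both come out at $\pi$ to within the truncated tail, confirming that the box replacement leaves the summed transition rate unchanged.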

Using the schematized energy slop diagram, you only need to sum over the states whose spikes are equal to 1. Those are the states 2 whose expectation energy is no more than $\pi\hbar/t_{\rm c}$ different from the initial expectation energy. And inside this summation range, the final factor can be dropped because it is now 1. That gives:

\begin{displaymath}
\fbox{$\displaystyle
\lambda =
\sum_{\scriptstyle\strut {\rm all\ states\ 2\ with}\atop
\scriptstyle\strut \vert E_{21}\vert \,<\, \pi\hbar/t_{\rm c}}
\frac{\vert H_{21}\vert^2}{\hbar^2}\, t_{\rm c}
$} %
\end{displaymath} (7.37)

This can be cleaned up further, assuming that $H_{21}$ is constant and can be taken out of the sum:

\begin{displaymath}
\fbox{$\displaystyle
\lambda = 2 \pi \frac{\vert H_{21}\vert^2}{\hbar}
\frac{{\rm d}N}{{\rm d}\langle E_2\rangle}
$} %
\end{displaymath} (7.38)

This formula is known as “Fermi’s golden rule.” The final factor is the number of photon states per unit energy range. It is to be evaluated at the nominal photon energy. The formula simply observes that the number of terms in the sum is the number of photon states per unit energy range times the energy range. The equation is considered to originate from Dirac, but Fermi is the one who named it “golden rule number two.”

Actually, the original sum (7.37) may be easier to handle in practice, since the number of photon states per unit energy range is not needed. But Fermi’s rule is important because it shows that the big problem of the previous section with decays has been resolved. The decay rate no longer depends on the time between collisions $t_{\rm c}$. Atoms can have specific values for their decay rates regardless of the minute details of their surroundings. Shorter collision times do produce fewer transitions per unit time for a given state. But they also allow more slop in energy, so the number of states that achieve a significant amount of transitions per unit time goes up. The net effect is that the decay rate stays the same, though the uncertainty in energy goes up.
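The cancellation of $t_{\rm c}$ can also be seen numerically. The sketch below (with $\hbar=1$; the state spacing, matrix element, and collision times are arbitrary illustrative numbers) sums the full decay rate expression over a dense grid of final states with constant $\vert H_{21}\vert^2$, and compares against Fermi's golden rule (7.38) for two quite different collision times:

```python
import numpy as np

hbar = 1.0
dE = 1e-3                           # spacing of the discrete photon states
E21 = np.arange(-200.0, 200.0, dE)  # energy slop of each final state
rho = 1.0 / dE                      # density of states dN/dE
H21sq = 1e-6                        # constant squared matrix element

def decay_rate(tc):
    """Sum |H21|^2/hbar^2 * tc * sin^2(E21 tc/2 hbar)/(E21 tc/2 hbar)^2
    over all final states (np.sinc(x) = sin(pi x)/(pi x))."""
    f = np.sinc(E21 * tc / (2.0 * hbar * np.pi))**2
    return np.sum(H21sq / hbar**2 * tc * f)

golden = 2.0 * np.pi * H21sq / hbar * rho   # Fermi's golden rule (7.38)
lam_short = decay_rate(1.0)
lam_long = decay_rate(3.0)
```

Tripling the collision time narrows the energy slop factor by a factor three but weights each remaining state three times more heavily, so both sums land on the golden rule value.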

The other problem remains; the evaluation of the matrix element $H_{21}$ requires relativistic quantum mechanics. But it is not hard to guess the general ideas. When the size of the periodic box that holds the system increases, the electromagnetic field of the photons decreases; they have the same energy in a larger volume. That results in smaller values for the matrix element $H_{21}$. On the other hand, the number of photon states per unit energy range ${\rm d}{N}/{\rm d}\langle{E}_2\rangle$ increases, chapter 6.3. The net result will be that the decay rate remains finite when the box becomes infinite.

That is verified by the relativistic analysis in addendum {A.24}. That addendum completes the analysis in this section by computing the matrix element using relativistic quantum mechanics. Using a description in terms of photon states of definite linear momentum, the matrix element is inversely proportional to the volume of the box, but the density of states is directly proportional to it. (It is somewhat different using a description in terms of photon states of definite angular momentum, {A.25}. But the idea remains the same.)

One problem of section 7.5.3 that has now disappeared is the photon being reabsorbed again. For each individual transition process, the interaction is too weak to produce a finite reversal time. But quantum measurement remains required to explain the experiments. The time-dependent perturbation theory used does not apply if the quantum system is allowed to evolve undisturbed over a time long enough for a significant transition probability (to any state) to develop, {D.38}. That would affect the specific decay rate. If you are merely interested in the average emission and absorption of a large number of atoms, it is not a big problem. Then you can substitute a classical description in terms of random collisions for the quantum measurement process. That will be done in derivation {D.41}. But to describe what happens to individual atoms one at a time, while still explaining the observed statistics of many such individual atoms, is another matter.

So far it has been assumed that there is only one atomic initial state of interest and only one final state. However, either state might have a net angular momentum quantum number $j$ that is not zero. In that case, there are $2j+1$ atomic states that differ only in magnetic quantum number. The magnetic quantum number describes the component of the angular momentum in the chosen $z$-direction. Now if the atom is in empty space, the direction of the $z$-axis should not make a difference. Then these $2j+1$ states will have the same energy. So you cannot include one and not the other. If this happens to the initial atomic state, you will need to average the decay rates over the magnetic states. The physical reason is that if you have a large number $I$ of excited atoms in the given energy state, their magnetic quantum numbers will be randomly distributed. So the average decay rate of the total sample is the average over the initial magnetic quantum numbers. But if it happens to the final state, you have to sum over the final magnetic quantum numbers. Each final magnetic quantum number gives an initial excited atom one more state that it can decay to. The general rule is:

Sum over the final atomic states, then average over the initial atomic states.
The averaging over the initial states is typically trivial. Without a preferred direction, the decay rate will not depend on the initial orientation.
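As a minimal sketch of that rule (the individual rates below are hypothetical numbers, not computed from any actual matrix elements), the total decay rate sums the rates to all final magnetic states and averages over the $2j+1$ initial ones:

```python
# Hypothetical decay rates lam[(m_i, m_f)] between individual magnetic
# states, in some arbitrary unit; the numbers are made up for illustration.
def total_decay_rate(lam, j_i):
    """Sum over final magnetic states, average over the 2 j_i + 1
    initial magnetic states."""
    n_init = 2 * j_i + 1
    return sum(lam.values()) / n_init

# Example: j_i = 1 (three initial magnetic states), decaying to a
# single final state with j_f = 0 (one final magnetic state each).
lam = {(-1, 0): 2.0, (0, 0): 3.0, (1, 0): 1.0}
rate = total_decay_rate(lam, j_i=1)     # (2 + 3 + 1) / 3
```

In empty space the average over initial orientations is trivial in the sense noted above: each initial magnetic state ends up contributing the same total rate.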

It is interesting to examine the limitations of the analysis in this subsection. First, time-dependent perturbation theory has to be valid. It might seem that the requirement of (7.36) that $H_{21}t_{\rm c}/\hbar$ is small is automatically satisfied, because the matrix element $H_{21}$ goes to zero for infinite box size. But then the number of states 2 goes to infinity. And if you look a bit closer at the analysis, {D.38}, the requirement is really that there is little probability of any transition in time interval $t_{\rm c}$. So the time between collisions must be small compared to the lifetime of the state. With typical lifetimes in the range of nanoseconds, atomic collisions are typically a few orders of magnitude more rapid. However, that depends on how good the vacuum is.

Second, the energy slop diagram figure 7.7 has to be narrow on the scale of the photon energy. It can be seen that this is true if the time between collisions $t_{\rm c}$ is large compared to the inverse of the photon frequency. For emission of visible light, that means that the collision time must be large when expressed in femtoseconds. Collisions between atoms will easily meet that requirement.

The width of the energy slop diagram figure 7.7 should give the observed variation $E_{21}$ in the energy of the final state. The diagram shows that roughly

\begin{displaymath}
E_{21} t_{\rm {c}} \sim \pi \hbar
\end{displaymath}

Note that this takes the form of the all-powerful energy-time uncertainty equality (7.9). To be sure, the equality above involves the artificial time between collisions, or measurements, $t_{\rm c}$. But you could assume that this time is comparable to the mean lifetime $\tau$ of the state. Essentially that supposes that interactions with the surroundings are infrequent enough that the atom can evolve undisturbed for about the typical decay time, but that nature will definitely commit itself whether or not a decay has occurred as soon as there is a fairly reasonable probability that a photon has been emitted.

That argument then leads to the definition of the typical uncertainty in energy, or width, of a state as $\Gamma=\hbar/\tau$, as mentioned in section 7.4.1. In addition, if there are frequent interactions between the atom and its surroundings, the shorter collision time $t_{\rm c}$ should be expected to increase the uncertainty in energy to more than the width.

Note that the wavy nature of the energy slop diagram figure 7.7 is due to the assumption that the time between collisions is always the same. If you start averaging over a more physical random set of collision times, the waves will smooth out. The actual energy slop diagram as usually given is of the form

\begin{displaymath}
\frac{1}{1+(E_{21}/\Gamma)^2} %
\end{displaymath} (7.39)

That is commonly called a [Cauchy] “Lorentz[ian] profile” or distribution or function, or a “Breit-Wigner distribution.” Hey, don’t blame the messenger. In any case, it still has the same inverse quadratic decay for large energy slop as the diagram figure 7.7. That means that if you start computing the standard deviation in energy, you end up with infinity. That would be a real problem for versions of the energy-time relationship like the one of Mandelshtam and Tamm. Such versions take the uncertainty in energy to be the standard deviation in energy. But it is no problem for the all-powerful energy-time uncertainty equality (7.9), because the standard deviation in energy is not needed.
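The divergence of the standard deviation is easy to confirm numerically: the Lorentzian profile itself integrates to the finite value $\pi\Gamma$, but its second moment in energy keeps growing in proportion to the integration cutoff, so it never converges:

```python
import numpy as np

Gamma = 1.0
dE = 0.01

def moments(cutoff):
    """Normalization integral and second moment of the Lorentzian
    1/(1+(E21/Gamma)^2), taken over |E21| < cutoff (midpoint sum)."""
    E = np.arange(dE / 2, cutoff, dE)   # midpoints; the profile is even
    f = 1.0 / (1.0 + (E / Gamma)**2)
    norm = 2.0 * np.sum(f) * dE         # converges to pi * Gamma
    second = 2.0 * np.sum(E**2 * f) * dE  # grows like 2 * cutoff
    return norm, second

norm1, sec1 = moments(100.0)
norm2, sec2 = moments(1000.0)
```

Multiplying the cutoff by ten barely changes the normalization but multiplies the second moment by about ten, which is the infinite standard deviation in discrete form.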


Key Points

- Some energy slop occurs in decays.

- Taking that into account, meaningful decay rates may be computed following Fermi’s golden rule.