A.11 Thermoelectric effects

This note gives additional information on thermoelectric effects.


A.11.1 Peltier and Seebeck coefficient ballparks

The approximate expressions for the semiconductor Peltier coefficients come from [29]. Straub et al (Appl. Phys. Lett. 95, 052107, 2009) note that to better approximation, $\frac32{k_{\rm B}}T$ should be $(\frac52+r){k_{\rm B}}T$ with $r$ typically $-\frac12$. Also, a phonon contribution should be added.

The estimate for the Peltier coefficient of a metal assumes that the electrons form a free-electron gas. The conduction will be assumed to be in the $x$-direction. To ballpark the Peltier coefficient requires the average charge flow per electron $\overline{-ev_x}$ and the average energy flow per electron $\overline{E^{\rm p}v_x}$. Here $v_x$ is the electron velocity in the $x$-direction, $-e$ the electron charge, $E^{\rm p}$ the electron energy, and an overline indicates an average over all electrons. To find ballparks for the two averages, assume the model of conduction of the free-electron gas as given in chapter 6.20. Conduction occurs because the Fermi sphere is displaced slightly toward the right in the wave number space figure 6.17. Call the small amount of displacement $k_{\rm d}$. Assume for simplicity that in a coordinate system $k_xk_yk_z$ with origin at the center of the displaced Fermi sphere, the occupation of the single-particle states by electrons is still exactly given by the equilibrium Fermi-Dirac distribution. However, due to the displacement $k_{\rm d}$ along the $k_x$ axis, the velocities and energies of the single-particle states are now given by

\begin{displaymath}
v_x = \frac{\hbar}{m}(k_x+k_{\rm d})
\qquad
E^{\rm p} = \frac{\hbar^2}{2m}\left(k^2+ 2k_xk_{\rm d}+k_{\rm d}^2\right)
\end{displaymath}

To simplify the notations, the above expressions will be abbreviated to

\begin{displaymath}
v_x = C_v(k_x+k_{\rm d})
\qquad
E^{\rm p} = C_E(k^2+ 2k_xk_{\rm d}+k_{\rm d}^2)
\end{displaymath}

In this notation, the average charge and energy flows per electron become

\begin{displaymath}
\overline{-ev_x} = \overline{-eC_v(k_x+k_{\rm d})}
\qquad
\overline{E^{\rm p}v_x} = \overline{C_E(k^2+ 2k_xk_{\rm d}+k_{\rm d}^2)\,C_v(k_x+k_{\rm d})}
\end{displaymath}

Next note that the averages involving odd powers of $k_x$ are zero, because for every state of positive $k_x$ in the Fermi sphere there is a corresponding state of negative $k_x$. Also the constants, including $k_{\rm d}$, can be taken out of the averages. So the flows simplify to

\begin{displaymath}
\overline{-ev_x} = -e C_v k_{\rm d}
\qquad
\overline{E^{\rm p}v_x} =
C_E(2\overline{k_x^2}+\overline{k^2})C_vk_{\rm d}
\end{displaymath}

where the term cubically small in $k_{\rm d}$ was ignored. Now by symmetry the averages of $k_x^2$, $k_y^2$, and $k_z^2$ are equal, so each must be one third of the average of $k^2$. And $C_E$ times the average of $k^2$ is the average energy per electron $E^{\rm p}_{\rm ave}$ in the absence of conduction. Also, by definition $C_vk_{\rm d}$ is the drift velocity $v_{\rm d}$ that produces the current. Therefore:

\begin{displaymath}
\overline{-ev_x} = -e v_{\rm d}
\qquad
\overline{v_x E^{\rm p}} = {\textstyle\frac{5}{3}} E^{\rm p}_{\rm ave} v_{\rm d}
\end{displaymath}

Note that if you would simply have ballparked the average of $v_xE^{\rm p}$ as the average of $v_x$ times the average of $E^{\rm p}$, you would have missed the factor 5/3. That would produce a Peltier coefficient that would be gigantically wrong.
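If you want to check the factor 5/3 without redoing the sphere averages, a quick numerical sketch will do. The following Python fragment (the unit sphere radius and sample size are arbitrary choices) samples wave number vectors uniformly over the interior of the Fermi sphere, the zero-temperature occupation, and confirms that $2\overline{k_x^2}+\overline{k^2}$ equals $\frac53\overline{k^2}$:

\begin{verbatim}
import numpy as np

# Check of the factor 5/3: sample wave number vectors uniformly over the
# interior of the Fermi sphere (the zero-temperature occupation) and compare
# 2*mean(kx^2) + mean(k^2) against (5/3)*mean(k^2).
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(1_000_000, 3))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]   # keep the interior points

kx2 = np.mean(pts[:, 0]**2)                     # average of kx^2, ~1/5
k2 = np.mean(np.sum(pts**2, axis=1))            # average of k^2, ~3/5

print(2*kx2 + k2, 5/3*k2)                       # both ~1.0: same answer
\end{verbatim}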

To get the heat flow, the energy must be taken relative to the Fermi level $\mu$. In other words, the energy flow $\overline{v_x\mu}$ must be subtracted from $\overline{v_xE^{\rm p}}$. The Peltier coefficient is the ratio of that heat flow to the charge flow:

\begin{displaymath}
{\mathscr P}= \frac{\overline{v_x(E^{\rm p}-\mu)}}{\overline{-ev_x}}
= \frac{{\textstyle\frac{5}{3}}E^{\rm p}_{\rm ave}-\mu}{-e}
\end{displaymath}

If you plug in the expressions for the average energy per electron and the chemical potential found in derivation {D.62}, you get the Peltier ballpark listed in the text.

To get Seebeck coefficient ballparks, simply divide the Peltier coefficients by the absolute temperature. That works because of Kelvin's second relationship discussed below. To get the Seebeck coefficient ballpark for a metal directly from the Seebeck effect, equate the increase in electrostatic potential energy of an electron migrating from hot to cold to the decrease in average electron kinetic energy. Using the average kinetic energy of derivation {D.62}:

\begin{displaymath}
- e {\rm d}\varphi = - {\rm d}\frac{\pi^2}{4} \frac{(k_{\rm B}T)^2}{E^{\rm p}_{\rm F}}
\end{displaymath}

Divide by $e\,{\rm d}T$ to get the Seebeck coefficient.
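For what it is worth, the metal ballparks are easily put to numbers. Plugging the expansions of derivation {D.62} into the Peltier ratio above works out to ${\mathscr P}\approx-\frac{\pi^2}{2}(k_{\rm B}T)^2/eE^{\rm p}_{\rm F}$, and division by $T$ gives the Seebeck coefficient. A minimal Python sketch, with an assumed, roughly copper-like Fermi energy:

\begin{verbatim}
import numpy as np

kB = 8.617e-5   # Boltzmann constant, eV/K
T = 300.0       # temperature, K
EF = 7.0        # illustrative Fermi energy, eV (roughly copper-like)

# Peltier ballpark in volts: P = -(pi^2/2) (kB T)^2 / (e EF); with the
# energies in eV, the division by e drops out numerically.
P = -(np.pi**2 / 2) * (kB*T)**2 / EF
S = P / T       # Seebeck ballpark from Kelvin's second relationship

print(f"Peltier ~ {P*1e3:.2f} mV")            # ~ -0.5 mV
print(f"Seebeck ~ {S*1e6:.2f} microvolt/K")   # metal-sized: ~ -1.6 microvolt/K
\end{verbatim}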


A.11.2 Figure of merit

To compare thermoelectric materials, an important quantity is the figure of merit of the material. The figure of merit is by convention written as $M^2$ where

\begin{displaymath}
M = {\mathscr P}\sqrt{\frac{\sigma}{\kappa T}}
\end{displaymath}

The temperature $T$ of the material should typically be taken as the average temperature in the device being examined. The reason that $M$ is important has to do with units. The number $M$ is nondimensional; it has no units. In SI units, the Peltier coefficient ${\mathscr P}$ is in volts, the electrical conductivity $\sigma$ in ampere/volt-meter, the temperature in Kelvin, and the thermal conductivity $\kappa$ in watt/Kelvin-meter with watt equal to volt ampere. That makes the combination above nondimensional.
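As a quick illustration that the combination is indeed a pure number of reasonable size, here is the figure of merit evaluated in Python for assumed, roughly Bi$_2$Te$_3$-like room-temperature values (illustrative numbers, not measured data):

\begin{verbatim}
import math

S = 200e-6       # Seebeck coefficient, volt/Kelvin
sigma = 1.0e5    # electrical conductivity, ampere/volt-meter
kappa = 1.5      # thermal conductivity, watt/Kelvin-meter
T = 300.0        # Kelvin

P = S*T                                  # Peltier coefficient in volts,
                                         # from Kelvin's second relationship
M = P*math.sqrt(sigma/(kappa*T))         # nondimensional
print(f"M = {M:.2f}, M^2 = {M*M:.2f}")   # M^2 ~ 0.8, a good thermoelectric
\end{verbatim}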

To see why that is relevant, suppose you have a material with a low Peltier coefficient. You might consider compensating for that by, say, scaling up the size of the material or the current through it. And maybe that does give you a better device than you would get with a material with a higher Peltier coefficient. Maybe not. How do you know?

Dimensional analysis can help answer that question. It says that nondimensional quantities depend only on nondimensional quantities. For example, for a Peltier cooler you might define an efficiency as the heat removed from your ice cubes per unit electrical energy used. That is a nondimensional number. It will not depend on, say, the actual size of the semiconductor blocks, but it will depend on such nondimensional parameters as their shape, and their size relative to the overall device. Those are within your complete control during the design of the cooler. But the efficiency will also depend on the nondimensional figure of merit $M$ above, and there you are limited to the available materials. Having a material with a higher figure of merit would give you a higher thermoelectric effect for the same losses due to electrical resistance and heat leaks.

To be sure, it is somewhat more complicated than that because two different materials are involved. That makes the efficiency depend on at least two nondimensional figures of merit, one for each material. And it might also depend on other nondimensional numbers that can be formed from the properties of the materials. For example, the efficiency of a simple thermoelectric generator turns out to depend on a net figure of merit given by, [9],

\begin{displaymath}
M_{\rm net} =
M_{\rm A}
\frac{\sqrt{\kappa_{\rm A}/\sigma_{\rm A}}}
{\sqrt{\kappa_{\rm A}/\sigma_{\rm A}}+\sqrt{\kappa_{\rm B}/\sigma_{\rm B}}}
- M_{\rm B}
\frac{\sqrt{\kappa_{\rm B}/\sigma_{\rm B}}}
{\sqrt{\kappa_{\rm A}/\sigma_{\rm A}}+\sqrt{\kappa_{\rm B}/\sigma_{\rm B}}}
\end{displaymath}

It shows that the figures of merit $M_{\rm A}$ and $M_{\rm B}$ of the two materials get multiplied by nondimensional fractions. These fractions are in the range from 0 to 1, and they sum to one. To get the best efficiency, you would like $M_{\rm A}$ to be as large positive as possible, and $M_{\rm B}$ as large negative as possible. That is as noted in the text. But all else being the same, the efficiency also depends to some extent on the nondimensional fractions multiplying $M_{\rm A}$ and $M_{\rm B}$. It helps if the material with the larger figure of merit $\vert M\vert$ also has the larger ratio of $\kappa/\sigma$. If say $M_{\rm A}$ exceeds $-M_{\rm B}$ for the best materials A and B, then you could potentially replace B by a cheaper material with a much lower figure of merit, as long as that replacement material has a very low value of $\kappa/\sigma$ relative to A. In general, the more nondimensional numbers there are that are important, the harder it is to analyze the efficiency theoretically.
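A short Python sketch of the net figure of merit as reconstructed above, with assumed values for the two materials; note that with equal ratios $\kappa/\sigma$ the two fractions are each one half:

\begin{verbatim}
import math

# Net figure of merit of a couple: each material's M is weighted by a
# fraction built from its own sqrt(kappa/sigma). All values are assumed.
def m_net(MA, MB, kappaA, sigmaA, kappaB, sigmaB):
    rA = math.sqrt(kappaA/sigmaA)
    rB = math.sqrt(kappaB/sigmaB)
    return MA*rA/(rA + rB) - MB*rB/(rA + rB)

# Equal kappa/sigma ratios, so the fractions are each 1/2:
print(m_net(0.9, -0.7, 1.5, 1.0e5, 1.5, 1.0e5))   # 0.8
\end{verbatim}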


A.11.3 Physical Seebeck mechanism

The given qualitative description of the Seebeck mechanism is very crude. For example, for semiconductors it ignores variations in the number of charge carriers. Even for a free-electron gas model for metals, there may be variations in charge carrier density that offset velocity effects. Worse, for metals it ignores the exclusion principle that restricts the motion of the electrons. And it ignores that the hotter side does not just have electrons with higher energy relative to the Fermi level than the colder side, it also has electrons with lower energy that can be excited to move. If the lower energy electrons have a larger mean free path, they can come from larger distances than the higher energy ones. And for metal electrons in a lattice, the velocity might easily go down with energy instead of up. That is readily appreciated from the spectra in chapter 6.22.2.

For a much more detailed description, see “Thermoelectric Effects in Metals: Thermocouples” by S. O. Kasap, 2001. This paper is available on the web for personal study. It includes actual data for metals compared to the simple theory.


A.11.4 Full thermoelectric equations

To understand the Peltier, Seebeck, and Thomson effects more precisely, the full equations of heat and charge flow are needed. That is classical thermodynamics, not quantum mechanics. However, standard undergraduate thermodynamics classes do not cover it, and even the thick standard undergraduate textbooks do not provide much more than a superficial mention that thermoelectric effects exist. Therefore this subsection will describe the equations of thermoelectrics in a nutshell.

The discussion will be one-dimensional. Think of a bar of material aligned in the $x$-direction. If the full three-dimensional equations of charge and heat flow are needed, for isotropic materials you can simply replace the $x$ derivatives by gradients.

Heat flow is primarily driven by variations in temperature, and electric current by variations in the chemical potential of the electrons. The question is first of all what is the precise relation between those variations and the heat flow and current that they cause.

Now the microscopic scales that govern the motion of atoms and electrons are normally extremely small. Therefore an atom or electron sees only a very small portion of the macroscopic temperature and chemical potential distributions. The atoms and electrons do notice that the distributions are not constant, otherwise they would not conduct heat or current at all. But they see so little of the distributions that to them they appear to vary linearly with position. As a result it is simple gradients, i.e. first derivatives, of the temperature and potential distributions that drive heat flow and current in common solids. Symbolically:

\begin{displaymath}
q = f_1\left(\frac{{\rm d}T}{{\rm d}x},\frac{{\rm d}\varphi_\mu}{{\rm d}x}\right)
\qquad
j = f_2\left(\frac{{\rm d}T}{{\rm d}x},\frac{{\rm d}\varphi_\mu}{{\rm d}x}\right)
\end{displaymath}

Here $q$ is the heat flux density; “flux” is a fancy word for flow, and the qualifier density indicates that it is per unit cross-sectional area of the bar. Similarly $j$ is the current density, the current per unit cross-sectional area. If you want, it is the charge flux density. Further $T$ is the temperature, and $\varphi_\mu$ is the chemical potential $\mu$ per unit electron charge $-e$. That includes the electrostatic potential (simply put, the voltage) as well as an intrinsic chemical potential of the electrons. The unknown functions $f_1$ and $f_2$ will be different for different materials and different conditions.

The above equations are not valid if the temperature and potential distributions change nontrivially on microscopic scales. For example, shock waves in supersonic flows of gases are extremely thin; therefore you cannot use equations of the type above for them. Another example is highly rarefied flows, in which the molecules move long distances without collisions. Such extreme cases can really only be analyzed numerically and they will be ignored here. It is also assumed that the materials maintain their internal integrity under the conduction processes.

Under normal conditions, a further approximation can be made. The functions $f_1$ and $f_2$ in the expressions for the heat flux and current densities would surely depend nonlinearly on their two arguments if these appeared finite on a microscopic scale. But on a microscopic scale, temperature and potential hardly change. (Supersonic shock waves and the like are again excluded.) Therefore, the gradients appear small in microscopic terms. And if that is true, the functions $f_1$ and $f_2$ can be linearized using Taylor series expansion. That gives:

\begin{displaymath}
q = A_{11}\frac{{\rm d}T}{{\rm d}x} + A_{12}\frac{{\rm d}\varphi_\mu}{{\rm d}x}
\qquad
j = A_{21}\frac{{\rm d}T}{{\rm d}x}+A_{22}\frac{{\rm d}\varphi_\mu}{{\rm d}x}
\end{displaymath}

The four coefficients $A_{..}$ will normally need to be determined experimentally for a given material at a given temperature. The properties of solids normally vary little with pressure.
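To make the experimental determination concrete, here is a minimal Python sketch: impose two independent combinations of gradients, record the resulting flux densities, and solve for the four coefficients. All numbers are synthetic, semiconductor-like guesses, for illustration only:

\begin{verbatim}
import numpy as np

# Assumed "true" coefficients, to generate synthetic measurements:
true_A = np.array([[-2.1, -3000.0],    # q = A11 dT/dx + A12 dphi/dx
                   [-10.0, -5.0e4]])   # j = A21 dT/dx + A22 dphi/dx

G = np.array([[100.0, 0.0],            # setting 1: pure temperature gradient
              [0.0, 0.1]])             # setting 2: pure potential gradient
F = G @ true_A.T                       # the (q, j) pairs a test would measure

A = np.linalg.solve(G, F).T            # recover the coefficient matrix
print(np.allclose(A, true_A))          # True
\end{verbatim}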

By convention, the four coefficients are rewritten in terms of four other, more intuitive, ones:

\begin{displaymath}
\fbox{$\displaystyle
q = -(\kappa+{\mathscr P}{\mathscr S}\sigma)\frac{{\rm d}T}{{\rm d}x}
- {\mathscr P}\sigma\frac{{\rm d}\varphi_\mu}{{\rm d}x}
\qquad
j = -\sigma{\mathscr S}\frac{{\rm d}T}{{\rm d}x}
-\sigma\frac{{\rm d}\varphi_\mu}{{\rm d}x}
$} %
\end{displaymath} (A.31)

This defines the heat conductivity $\kappa$, the electrical conductivity $\sigma$, the Seebeck coefficient ${\mathscr S}$, and the Peltier coefficient ${\mathscr P}$ of the material. (The signs of the Peltier and Seebeck coefficients vary considerably between references.)

If conditions are isothermal, the second equation is Ohm's law for a unit cube of material, with $\sigma$ the usual conductivity, the inverse of the resistance of the unit cube. The Seebeck effect corresponds to the case that there is no current. In that case, the second equation produces

\begin{displaymath}
\fbox{$\displaystyle
\frac{{\rm d}\varphi_\mu}{{\rm d}x} = - {\mathscr S}\frac{{\rm d}T}{{\rm d}x}
$} %
\end{displaymath} (A.32)

To see what this means, integrate this along a closed circuit all the way from lead 1 of a voltmeter through a sample to the other lead 2. That gives
\begin{displaymath}
\varphi_{\mu,2} - \varphi_{\mu,1} = - \int_1^2 {\mathscr S}{\rm d}T
\end{displaymath} (A.33)

Assuming that the two leads of the voltmeter are at the same temperature, their intrinsic chemical potentials are the same. In that case, the difference in potentials is equal to the difference in electrostatic potentials. In other words, the integral gives the difference between the voltages inside the two leads. And that is the voltage that will be displayed by the voltmeter.
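A minimal numerical sketch of (A.33) in Python, with an assumed metal-like Seebeck coefficient that grows linearly in temperature. The coefficient value and the temperatures are illustrative only, and the sketch integrates through the sample alone; segments of the path running through the lead material would contribute with the lead's own ${\mathscr S}$:

\begin{verbatim}
import numpy as np

a = -5.0e-9                  # V/K^2, assumed coefficient in S(T) = a*T
T1, T2 = 300.0, 400.0        # cold and hot end temperatures, K

T = np.linspace(T1, T2, 1001)
S = a*T                      # assumed Seebeck coefficient along the path
V = -np.sum(0.5*(S[1:] + S[:-1])*np.diff(T))   # trapezoid rule, -int S dT
print(f"V = {V*1e6:.1f} microvolt")            # ~175 microvolt
\end{verbatim}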

It is often convenient to express the heat flux density $q$ in terms of the current density instead of the gradient of the potential $\varphi_\mu$. Eliminating this gradient from the equations (A.31) produces

\begin{displaymath}
\fbox{$\displaystyle
q = -\kappa\frac{{\rm d}T}{{\rm d}x} + {\mathscr P}j
$} %
\end{displaymath} (A.34)

In case there is no current, this is the well-known Fourier's law of heat conduction, with $\kappa$ the usual thermal conductivity. Note that the heat flux density is often simply called the heat flux, even though it is per unit area. In the presence of current, the heat flux density is augmented by the Peltier effect, the second term.

The total energy flowing through the bar is the sum of the thermal heat flux and the energy carried along by the electrons:

\begin{displaymath}
j_E = q + j \varphi_\mu
\end{displaymath}

If the energy flow is constant, the same energy flows out of a piece ${\rm d}x$ of the bar as flows into it. Otherwise the negative $x$-derivative of the energy flux density gives the net energy accumulation $\dot{e}$ per unit volume:

\begin{displaymath}
\dot e = -\frac{{\rm d}j_E}{{\rm d}x}
= - \frac{{\rm d}q}{{\rm d}x} - j \frac{{\rm d}\varphi_\mu}{{\rm d}x}
\end{displaymath}

where it was assumed that the electric current is constant, as it must be for a steady state. Of course, in a steady state any nonzero $\dot{e}$ must be removed through heat conduction through the sides of the bar of material being tested, or through some alternative means. Substituting in from (A.34) for $q$ and from the second of (A.31) for the gradient of the potential gives:

\begin{displaymath}
\dot e =
\frac{{\rm d}}{{\rm d}x} \left(\kappa \frac{{\rm d}T}{{\rm d}x}\right)
+ \frac{j^2}{\sigma}
- {\mathscr K}j\frac{{\rm d}T}{{\rm d}x}
\qquad
{\mathscr K}\equiv \frac{{\rm d}{\mathscr P}}{{\rm d}T} - {\mathscr S}
\end{displaymath}

The final term in the energy accumulation is the Thomson effect or Kelvin heat. The Kelvin (Thomson) coefficient ${\mathscr K}$ can be cleaned up using the second Kelvin relationship given in a later subsection.
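To get a feel for the relative sizes of the three contributions to $\dot{e}$, here is a Python sketch with assumed, metal-like numbers; all values are illustrative guesses, not data from the text:

\begin{verbatim}
kappa = 400.0     # W/(K m), thermal conductivity
sigma = 6.0e7     # A/(V m), electrical conductivity
K = 1.6e-6        # V/K, Kelvin (Thomson) coefficient, metal-sized guess

j = 1.0e6         # A/m^2, current density
dTdx = 1.0e3      # K/m, temperature gradient
d2Tdx2 = 1.0e4    # K/m^2, curvature of the temperature profile

fourier = kappa*d2Tdx2          # heat-conduction term, W/m^3
joule = j**2/sigma              # Joule heating, always positive, W/m^3
thomson = -K*j*dTdx             # Kelvin heat; sign flips with the current

print(fourier, joule, thomson)  # ~4e6, ~1.7e4, ~-1.6e3 W/m^3
\end{verbatim}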

The equations (A.31) are often said to be representative of nonequilibrium thermodynamics. However, they correspond to a vanishingly small perturbation from thermodynamical equilibrium. The equations would more correctly be called quasi-equilibrium thermodynamics. Nonequilibrium thermodynamics is what you have inside a shock wave.


A.11.5 Charge locations in thermoelectrics

The statement that the charge density is neutral inside the material comes from [8].

A simplified macroscopic derivation can be given based on the thermoelectric equations (A.31). The derivation assumes that the temperature and chemical potential are almost constant. That means that derivatives of thermodynamic quantities and electric potential are small. That makes the heat flux and current also small.

Next, in three dimensions replace the $x$ derivatives in the thermoelectric equations (A.31) by the gradient operator $\nabla$. Now under steady-state conditions, the divergence of the current density must be zero, or there would be an unsteady local accumulation or depletion of net charge, chapter 13.2. Similarly, the divergence of the heat flux density must be zero, or there would be an accumulation or depletion of thermal energy. (This ignores local heat generation as an effect that is quadratically small for small currents and heat fluxes.)

Therefore, taking the divergence of the equations (A.31) and ignoring the variations of the coefficients, which give again quadratically small contributions, it follows that the Laplacians of both the temperature and the chemical potential are zero.

Now the chemical potential includes both the intrinsic chemical potential and the additional electrostatic potential. The intrinsic chemical potential depends on temperature. Using again the assumption that quadratically small terms can be ignored, the Laplacian of the intrinsic potential is proportional to the Laplacian of the temperature and therefore zero.

Then the Laplacian of the electrostatic potential must be zero too, to make the Laplacian of the total potential zero. And that then implies the absence of net charge inside the material according to Maxwell's first equation, chapter 13.2. Any net charge must accumulate at the surfaces.


A.11.6 Kelvin relationships

This subsection gives an explanation of the definition of the thermal heat flux in thermoelectrics. It also explains that the Kelvin (or Thomson) relationships are a special case of the more general Onsager reciprocal relations. If you do not know what thermodynamical entropy is, you should not be reading this subsection. Not before reading chapter 11, at least.

For simplicity, the discussion will again assume one-dimensional conduction of heat and current. The physical picture is therefore conduction along a bar aligned in the $x$-direction. It will be assumed that the bar is in a steady state, in other words, that the temperature and chemical potential distributions, heat flux and current through the bar all do not change with time.

Figure A.1: Analysis of conduction. [Sketch: a bar segment of length ${\rm d}x$ sandwiched between reservoir 1, at temperature $T$ and chemical potential $\mu$, and reservoir 2, at $T_2 = T + \frac{{\rm d}T}{{\rm d}x}{\rm d}x$ and $\mu_2 = \mu + \frac{{\rm d}\mu}{{\rm d}x}{\rm d}x$.]

The primary question is what is going on in a single short segment ${\rm d}x$ of such a bar. Here ${\rm d}x$ is assumed to be small on a macroscopic scale, but large on a microscopic scale. To analyze the segment, imagine it taken out of the bar and sandwiched between two big idealized reservoirs 1 and 2 of the same material, as shown in figure A.1. The idealized reservoirs are assumed to remain at uniform, thermodynamically reversible, conditions. Reservoir 1 is at the considered time at the same temperature and chemical potential as the start of the segment, and reservoir 2 at the same temperature and chemical potential as the end of the segment. The reservoirs are assumed to be big enough that their properties change slowly in time. Therefore it is assumed that their time variations do not have an effect on what happens inside the bar segment at the considered time. For simplicity, it will also be assumed that the material consists of a single particle type. Some of these particles are allowed to move through the bar segment from reservoir 1 to reservoir 2.

In other words, there is a flow, or flux, of particles through the bar segment. The corresponding particle flux density $j_I$ is the particle flow per unit area. For simplicity, it will be assumed that the bar has unit area. Then there is no difference between the particle flow and the particle flux density. Note that the same flow of particles $j_I$ must enter the bar segment from reservoir 1 as must exit from the segment into reservoir 2. If that was not the case, there would be a net accumulation or depletion of particles inside the bar segment. That is not possible, because the bar segment is assumed to be in a steady state. Therefore the flow of particles through the bar segment decreases the number of particles $I_1$ in reservoir 1, but increases the number $I_2$ in reservoir 2 correspondingly:

\begin{displaymath}
j_I = - \frac{{\rm d}I_1}{{\rm d}t} = \frac{{\rm d}I_2}{{\rm d}t}
\end{displaymath}

Further, due to the energy carried along by the moving particles, as well as due to thermal heat flow, there will be a net energy flow $j_E$ through the bar segment. Like the particle flow, the energy flow comes out of reservoir 1 and goes into reservoir 2:

\begin{displaymath}
j_E = - \frac{{\rm d}E_1}{{\rm d}t} = \frac{{\rm d}E_2}{{\rm d}t}
\end{displaymath}

Here $E_1$ is the total energy inside reservoir 1, and $E_2$ that inside reservoir 2. It is assumed that the reservoirs are kept at constant volume and are thermally insulated except at the junction with the bar, so that no energy is added due to pressure work or heat conduction elsewhere. Similarly, the sides of the bar segment are assumed thermally insulated.

One question is how to define the heat flux through the bar segment. In the absence of particle motion, the second law of thermodynamics allows an unambiguous answer. The heat flux $q$ through the bar enters reservoir 2, and the second law of thermodynamics then says:

\begin{displaymath}
q_2 = T_2\frac{{\rm d}S_2}{{\rm d}t}
\end{displaymath}

Here $S_2$ is the entropy of reservoir 2. In the presence of particles moving through the bar, the definition of thermal energy, and so the corresponding heat flux, becomes more ambiguous. The particles also carry along nonthermal energy. The question then becomes what should be counted as thermal energy, and what as nonthermal. To resolve that, the heat flux into reservoir 2 will be defined by the expression above. Note that the heat flux out of reservoir 1 might be slightly different because of variations in energy carried by the particles. It is the total energy flow $j_E$, not the heat flow $q$, that must be exactly constant.

To understand the relationship between heat flux and energy flux more clearly, some basic thermodynamics can be used. See chapter 11.12 for more details, including generalization to more than one particle type. A combination of the first and second laws of thermodynamics produces

\begin{displaymath}
T { \rm d}\bar s = { \rm d}\bar e + P { \rm d}\bar v
\qquad
S = \bar sI \quad E = \bar e I \quad V = \bar v I
\end{displaymath}

in which $\bar{s}$, $\bar{e}$, and $\bar{v}$ are the entropy, internal energy, and volume per particle, and $P$ is the pressure. That can be used to rewrite the derivative of entropy in the definition of the heat flux above:

\begin{displaymath}
T {\rm d}S = T {\rm d}(\bar s I) = T ({\rm d}\bar s) I + T \bar s ({\rm d}I)
= ({\rm d}\bar e + P {\rm d}\bar v) I + T \bar s ({\rm d}I)
\end{displaymath}

That can be rewritten as

\begin{displaymath}
T { \rm d}S = {\rm d}E + P {\rm d}V - (\bar e + P\bar v -T\bar s) {\rm d}I
\end{displaymath}

as can be verified by writing $E$ and $V$ as $\bar{e}I$ and $\bar{v}I$ and differentiating out. The parenthetical expression in the above equation is in thermodynamics known as the Gibbs free energy. Chapter 11.13 explains that it is the same as the chemical potential $\mu$ in the distribution laws. Therefore:

\begin{displaymath}
T { \rm d}S = {\rm d}E + P {\rm d}V - \mu {\rm d}I
\end{displaymath}
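The differentiating-out step mentioned above can also be left to a computer. A small sympy sketch, treating the differentials as independent symbols and using the per-particle relation $T{\rm d}\bar{s} = {\rm d}\bar{e}+P{\rm d}\bar{v}$:

\begin{verbatim}
import sympy as sp

# Write S = sbar*I, E = ebar*I, V = vbar*I and expand the differentials.
sbar, ebar, vbar, I, T, P = sp.symbols('sbar ebar vbar I T P')
debar, dvbar, dI = sp.symbols('debar dvbar dI')

dsbar = (debar + P*dvbar)/T               # first + second law per particle

TdS = T*(dsbar*I + sbar*dI)               # T d(sbar*I) by the product rule
dE = debar*I + ebar*dI                    # d(ebar*I)
dV = dvbar*I + vbar*dI                    # d(vbar*I)
rhs = dE + P*dV - (ebar + P*vbar - T*sbar)*dI

print(sp.simplify(TdS - rhs))             # 0: the two sides agree
\end{verbatim}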

(Chapter 11.13 does not include an additional electrostatic energy due to an ambient electric field. But an intrinsic chemical potential can be defined by subtracting the electrostatic potential energy. The corresponding intrinsic energy also excludes the electrostatic potential energy. That makes the expression for the chemical potential the same in terms of intrinsic quantities as in terms of nonintrinsic ones. See also the discussion in chapter 6.14.)

Using the above expression for the change in entropy in the definition of the heat flux gives, noting that the volume is constant,

\begin{displaymath}
q_2 = \frac{{\rm d}E_2}{{\rm d}t} - \mu_2 \frac{{\rm d}I_2}{{\rm d}t}
= j_E - \mu_2 j_I
\end{displaymath}

It can be concluded from this that the nonthermal energy carried along per particle is $\mu$. The rest of the net energy flow is thermal energy.

The Kelvin relationships are related to the net entropy generated by the segment of the bar. The second law implies that irreversible processes always increase the net entropy in the universe. And by definition, the complete system of figure A.1 examined here is isolated. It does not exchange work nor heat with its surroundings. Therefore, the entropy of this system must increase in time due to irreversible processes. More specifically, the net system entropy must go up due to the irreversible heat conduction and particle transport in the segment of the bar. The reservoirs are taken to be thermodynamically reversible; they do not create entropy out of nothing. But the heat conduction in the bar is irreversible; it goes from hot to cold, not the other way around, in the absence of other effects. Similarly, the particle transport goes from higher chemical potential to lower.

While the conduction processes in the bar create net entropy, the entropy of the bar still does not change. The bar is assumed to be in a steady state. Instead the entropy created in the bar causes a net increase in the combined entropy of the reservoirs. Specifically,

\begin{displaymath}
\frac{{\rm d}S_{\rm net}}{{\rm d}t} = \frac{{\rm d}S_2}{{\rm d}t} + \frac{{\rm d}S_1}{{\rm d}t}
\end{displaymath}

By definition of the heat flux,

\begin{displaymath}
\frac{{\rm d}S_{\rm net}}{{\rm d}t} = \frac{q_2}{T_2} - \frac{q_1}{T_1}
\end{displaymath}

Substituting in the expression for the heat flux in terms of the energy and particle fluxes gives

\begin{displaymath}
\frac{{\rm d}S_{\rm net}}{{\rm d}t} =
\left(\frac{1}{T_2} j_E - \frac{\mu_2}{T_2} j_I\right) -
\left(\frac{1}{T_1} j_E - \frac{\mu_1}{T_1} j_I\right)
\end{displaymath}

Since the area of the bar is one, its volume is ${\rm d}x$. Therefore, the entropy generation per unit volume is:
\begin{displaymath}
\frac{1}{{\rm d}x} \frac{{\rm d}S_{\rm net}}{{\rm d}t} =
\frac{{\rm d}(1/T)}{{\rm d}x} j_E + \frac{{\rm d}(-\mu/T)}{{\rm d}x} j_I %
\end{displaymath} (A.35)

That used the fact that for the short segment, any expression of the form $(f_2-f_1)/{\rm d}x$ is by definition the derivative of $f$.

The above expression for the entropy generation implies that a nonzero derivative of $1/T$ must cause an energy flow of the same sign. Otherwise the entropy of the system would decrease if the derivative in the second term is zero. Similarly, a nonzero derivative of $-\mu/T$ must cause a particle flow of the same sign. Of course, that does not exclude that the derivative of $1/T$ may also cause a particle flow as a secondary effect, or a derivative of $-\mu/T$ an energy flow. Using the same reasoning as in an earlier subsection gives:

\begin{displaymath}
j_E = L_{11} \frac{{\rm d}(1/T)}{{\rm d}x} + L_{12} \frac{{\rm d}(-\mu/T)}{{\rm d}x}
\qquad
j_I = L_{21} \frac{{\rm d}(1/T)}{{\rm d}x}
+ L_{22} \frac{{\rm d}(-\mu/T)}{{\rm d}x} %
\end{displaymath} (A.36)

where the $L_{..}$ are again coefficients to be determined experimentally. But in this case, the coefficients $L_{11}$ and $L_{22}$ must necessarily be positive. That can provide a sanity check on the experimental results. It is an advantage gained from taking the flows and derivatives directly from the equation of entropy generation. In fact, somewhat stronger constraints apply. If the expressions for $j_E$ and $j_I$ are plugged into the expression for the entropy generation, the result must be positive regardless of what the values of the derivatives are. That requires not just that $L_{11}$ and $L_{22}$ are positive, but also that the average of $L_{12}$ and $L_{21}$ is smaller in magnitude than the geometric mean of $L_{11}$ and $L_{22}$.
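In matrix language, the entropy generation is the quadratic form of the two gradients with the matrix $L_{..}$, and only the symmetric part of the matrix contributes to it. A minimal numerical sanity check in Python, with illustrative coefficient values in arbitrary consistent units:

\begin{verbatim}
import numpy as np

def entropy_generation_ok(L):
    # Nonnegative x.L.x for all gradients x means the symmetric part of L
    # must have no negative eigenvalues.
    Lsym = 0.5*(L + L.T)
    return bool(np.all(np.linalg.eigvalsh(Lsym) >= 0))

L_good = np.array([[2.0, 0.5],
                   [0.4, 1.0]])   # mean cross coefficient 0.45 < sqrt(2*1)
L_bad = np.array([[2.0, 3.0],
                  [3.0, 1.0]])    # cross coefficients too big

print(entropy_generation_ok(L_good), entropy_generation_ok(L_bad))  # True False
\end{verbatim}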

The so-called Onsager reciprocal relations provide a further, and much more specific, constraint. They say that the coefficients of the secondary effects, $L_{12}$ and $L_{21}$, must be equal. In terms of linear algebra, the matrix $L_{..}$ must be symmetric and positive definite. In real life, it means that only three, not four, coefficients have to be determined experimentally. That is very useful because the experimental determination of secondary effects is often difficult.

The Onsager relations remain valid for much more general systems, involving flows of other quantities. Their validity can be argued based on experimental evidence, or also theoretically based on the symmetry of the microscopic dynamics with respect to time reversal. If there is a magnetic field involved, a coefficient $L_{ij}$ will only equal $L_{ji}$ after the magnetic field has been reversed: time reversal causes the electrons in your electromagnet to go around the opposite way. A similar observation holds if Coriolis forces are a factor in a rotating system.

The equations (A.36) for $j_E$ and $j_I$ above can readily be converted into expressions for the heat flux density $q = j_E-{\mu}j_I$ and the current density $j = -ej_I$. If you do so, then differentiate out the derivatives, and compare with the thermoelectric equations (A.31) given earlier, you find that the Onsager relation $L_{12} = L_{21}$ translates into the second Kelvin relation

\begin{displaymath}
{\mathscr P}={\mathscr S}T
\end{displaymath}

That allows you to clean up the Kelvin coefficient to the first Kelvin relationship:

\begin{displaymath}
{\mathscr K}\equiv \frac{{\rm d}{\mathscr P}}{{\rm d}T} - {\mathscr S}
= T\frac{{\rm d}{\mathscr S}}{{\rm d}T}
= \frac{{\rm d}{\mathscr S}}{{\rm d}\ln T}
\end{displaymath}
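The conversion from (A.36) to (A.31) sketched above is mechanical enough to hand to a computer algebra system. A sympy sketch, using $\mu=-e\varphi_\mu$ to rewrite the driving forces in terms of the gradients of $T$ and $\varphi_\mu$, confirms that $L_{12}=L_{21}$ gives ${\mathscr P}={\mathscr S}T$:

\begin{verbatim}
import sympy as sp

T, mu, e = sp.symbols('T mu e', positive=True)
dT, dphi = sp.symbols('dT dphi')                 # dT/dx and dphi_mu/dx
L11, L12, L21, L22 = sp.symbols('L11 L12 L21 L22')

# Driving forces of (A.36), with mu = -e*phi_mu so that dmu/dx = -e*dphi:
X1 = -dT/T**2                                    # d(1/T)/dx
X2 = e*dphi/T + mu*dT/T**2                       # d(-mu/T)/dx

jE = L11*X1 + L12*X2                             # energy flux density
jI = L21*X1 + L22*X2                             # particle flux density

q = sp.expand(jE - mu*jI)                        # heat flux density
j = sp.expand(-e*jI)                             # current density

# Read off the (A.31) coefficients: j = -sigma*S dT/dx - sigma dphi/dx,
# and the dphi/dx term of q is -P*sigma dphi/dx.
sigma = -j.coeff(dphi)
S = -j.coeff(dT)/sigma
P = -q.coeff(dphi)/sigma

# The second Kelvin relation P = S*T holds exactly when L12 = L21:
print(sp.simplify((P - S*T).subs(L12, L21)))     # 0
\end{verbatim}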

It should be noted that while the second Kelvin relationship is named after Kelvin, he never gave a valid proof of the relationship. Neither did many other authors that tried. It was Onsager who first succeeded in giving a more or less convincing theoretical justification. Still, the most convincing support for the reciprocal relations remains the overwhelming experimental data. See Miller (Chem. Rev. 60, 15, 1960) for examples. Therefore, the reciprocal relationships are commonly seen as an additional axiom to be added to thermodynamics to allow quasi-equilibrium systems to be treated.