11.12 The New Variables

The new kid on the block is the entropy $S$. For an adiabatic system the entropy is always increasing. That is highly useful information if you want to know what thermodynamically stable final state an adiabatic system will settle down into. No need to try to figure out the complicated time evolution leading to the final state. Just find the state that has the highest possible entropy $S$; that will be the stable final state.

But a lot of systems of interest are not well described as being adiabatic. A typical alternative case might be a system in a rigid box in an environment that is big enough, and conducts heat well enough, that it can at all times be taken to be at the same temperature $T_{\rm{surr}}$. Also assume that initially the system itself is in some state 1 at the ambient temperature $T_{\rm{surr}}$, and that it ends up in a state 2 again at that temperature. In the evolution from 1 to 2, however, the system temperature could be different from the surroundings, or even undefined; no thermal equilibrium is assumed. The first law, energy conservation, says that the heat $Q_{12}$ added to the system from the surroundings equals the change in internal energy $E_2-E_1$ of the system. Also, the entropy change in the isothermal environment will be $-Q_{12}/T_{\rm{surr}}$, so the system entropy change $S_2-S_1$ must be at least $Q_{12}/T_{\rm{surr}}$ in order for the net entropy in the universe not to decrease. From that it can be seen by simply writing it out that the “Helmholtz free energy”

\begin{displaymath}
\fbox{$\displaystyle
F = E - TS
$} %
\end{displaymath} (11.21)

is smaller for the final system 2 than for the starting one 1. In particular, if the system ends up in a stable final state that can no longer change, it will be the state of smallest possible Helmholtz free energy. So, if you want to know what will be the final fate of a system in a rigid, heat conducting box in an isothermal environment, just find the state of lowest possible Helmholtz free energy. That will be the one.
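Writing it out, as claimed: states 1 and 2 are both at temperature $T_{\rm{surr}}$, and the box is rigid, so $Q_{12} = E_2 - E_1$ and $S_2 - S_1 \ge Q_{12}/T_{\rm{surr}}$ give

\begin{displaymath}
F_2 - F_1 = (E_2 - E_1) - T_{\rm{surr}}(S_2 - S_1)
= Q_{12} - T_{\rm{surr}}(S_2 - S_1) \le 0
\end{displaymath}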

A slightly different version occurs even more often in real applications. In these the system is not in a rigid box, but instead its surface is at all times exposed to ambient atmospheric pressure. Energy conservation now says that the heat added $Q_{12}$ equals the change in internal energy $E_2-E_1$ plus the work done expanding against the atmospheric pressure, which is $P_{\rm{surr}}(V_2-V_1)$. Assuming that both the initial state 1 and final state 2 are at ambient atmospheric pressure, as well as at ambient temperature as before, then it is seen that the quantity that decreases is the “Gibbs free energy”

\begin{displaymath}
\fbox{$\displaystyle
G = H - TS
$} %
\end{displaymath} (11.22)

in terms of the enthalpy $H$ defined as $H = E + PV$. As an example, phase equilibria are at the same pressure and temperature. In order for them to be stable, the phases need to have the same specific Gibbs energy. Otherwise all particles would end up in whatever phase has the lower Gibbs energy. Similarly, chemical equilibria are often posed at an ambient pressure and temperature.
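The argument is the same as before, now running through the enthalpy: since states 1 and 2 are at the ambient pressure, $Q_{12} = (E_2 - E_1) + P_{\rm{surr}}(V_2 - V_1) = H_2 - H_1$, and the entropy condition $S_2 - S_1 \ge Q_{12}/T_{\rm{surr}}$ then gives

\begin{displaymath}
G_2 - G_1 = (H_2 - H_1) - T_{\rm{surr}}(S_2 - S_1)
= Q_{12} - T_{\rm{surr}}(S_2 - S_1) \le 0
\end{displaymath}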

There are a number of differential expressions that are very useful in doing thermodynamics. The primary one is obtained by combining the differential first law (11.11) with the differential second law (11.19) for reversible processes:

\begin{displaymath}
\fbox{$\displaystyle
{\rm d}E = T { \rm d}S - P { \rm d}V
$} %
\end{displaymath} (11.23)

This no longer involves the heat transferred from the surroundings, just state variables of the system itself. The equivalent one using the enthalpy $H$ instead of the internal energy $E$ is
\begin{displaymath}
\fbox{$\displaystyle
{\rm d}H = T { \rm d}S + V { \rm d}P
$} %
\end{displaymath} (11.24)
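This follows directly from the definition $H = E + PV$ and (11.23):

\begin{displaymath}
{\rm d}H = {\rm d}(E + PV) = {\rm d}E + P\,{\rm d}V + V\,{\rm d}P
= T\,{\rm d}S + V\,{\rm d}P
\end{displaymath}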

The differentials of the Helmholtz and Gibbs free energies are, after cleaning up with (11.23) and (11.24) above:

\begin{displaymath}
\fbox{$\displaystyle
{\rm d}F = - S { \rm d}T - P { \rm d}V
$} %
\end{displaymath} (11.25)

and
\begin{displaymath}
\fbox{$\displaystyle
{\rm d}G = - S { \rm d}T + V { \rm d}P
$} %
\end{displaymath} (11.26)
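The cleaning up is the same in both cases; for the Helmholtz free energy, for example,

\begin{displaymath}
{\rm d}F = {\rm d}(E - TS) = {\rm d}E - T\,{\rm d}S - S\,{\rm d}T
= - S\,{\rm d}T - P\,{\rm d}V
\end{displaymath}

and replacing $E$ by $H$ and (11.23) by (11.24) gives (11.26) in the same way.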

Expression (11.25) shows that the work obtainable in an isothermal reversible process is given by the decrease in Helmholtz free energy. That is why Helmholtz called it “free energy” in the first place. The Gibbs free energy is applicable to steady flow devices such as compressors and turbines; the first law for these devices must be corrected for the “flow work” done by the pressure forces on the substance entering and leaving the device. The effect is to turn $P\,{\rm d}V$ into $-V\,{\rm d}P$ as the differential for the actual work obtainable from the device. (This assumes that the kinetic and/or potential energy that the substance picks up while going through the device is not a factor.)
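A minimal sketch of where the $-V\,{\rm d}P$ comes from, assuming a reversible steady-flow process and negligible kinetic and potential energy changes: per unit mass the steady-flow first law gives the work as $w = q - (h_2 - h_1)$, and for a reversible process $q = \int T\,{\rm d}s$, so by (11.24)

\begin{displaymath}
w = \int_1^2 \left(T\,{\rm d}s - {\rm d}h\right) = - \int_1^2 v\,{\rm d}P
\end{displaymath}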

Maxwell noted that, according to the total differential of calculus, the coefficients of the differentials in the right hand sides of (11.23) through (11.26) must be the partial derivatives of the quantity in the left hand side:

\begin{displaymath}
\left(\frac{\partial E}{\partial S}\right)_V = \phantom{-} T
\qquad
\left(\frac{\partial E}{\partial V}\right)_S = - P
\qquad
\left(\frac{\partial T}{\partial V}\right)_S
= - \left(\frac{\partial P}{\partial S}\right)_V
\end{displaymath} (11.27)
\begin{displaymath}
\left(\frac{\partial H}{\partial S}\right)_P = \phantom{-} T
\qquad
\left(\frac{\partial H}{\partial P}\right)_S = \phantom{-} V
\qquad
\left(\frac{\partial T}{\partial P}\right)_S
= \phantom{-} \left(\frac{\partial V}{\partial S}\right)_P
\end{displaymath} (11.28)
\begin{displaymath}
\left(\frac{\partial F}{\partial T}\right)_V = - S
\qquad
\left(\frac{\partial F}{\partial V}\right)_T = - P
\qquad
\left(\frac{\partial S}{\partial V}\right)_T
= \phantom{-} \left(\frac{\partial P}{\partial T}\right)_V
\end{displaymath} (11.29)
\begin{displaymath}
\left(\frac{\partial G}{\partial T}\right)_P = - S
\qquad
\left(\frac{\partial G}{\partial P}\right)_T = \phantom{-} V
\qquad
\left(\frac{\partial S}{\partial P}\right)_T
= - \left(\frac{\partial V}{\partial T}\right)_P
\end{displaymath} (11.30)

The final equation in each line can be verified by substituting in the previous two and noting that the order of differentiation does not make a difference. Those are called the “Maxwell relations.” They have a lot of practical uses. For example, either of the final equations in the last two lines allows the entropy to be found if the relationship between the normal variables $P$, $V$, and $T$ is known, assuming that at least one data point at every temperature is already available. Even more important from an applied point of view, the Maxwell relations allow whatever data you find about a substance in literature to be stretched thin. Approximate the derivatives above with difference quotients, and you can compute a host of information not initially in your table or graph.
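As an illustrative sketch of that idea (the specific volume numbers below are made-up placeholders, not real table entries), the final equation of (11.30), taken per unit mass, turns a difference quotient of tabulated $v(T,P)$ data into an estimate of how the entropy varies with pressure:

\begin{verbatim}
# Stretch P-v-T data thin using the Maxwell relation
#   (ds/dP)_T = -(dv/dT)_P
# approximated by a difference quotient. All property values
# below are hypothetical placeholders, per unit mass, SI units.

T1, T2 = 500.0, 520.0            # two tabulated temperatures [K]
v_T1, v_T2 = 0.2327, 0.2429      # v(T1,P), v(T2,P) [m^3/kg], same P

dv_dT = (v_T2 - v_T1) / (T2 - T1)   # (dv/dT)_P  [m^3/(kg K)]
ds_dP = -dv_dT                      # (ds/dP)_T  [J/(kg K) per Pa]

dP = 50e3                           # a modest pressure increase [Pa]
ds = ds_dP * dP                     # estimated entropy change [J/(kg K)]
print(f"entropy change for +50 kPa at constant T: {ds:.1f} J/(kg K)")
\end{verbatim}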

There are two even more remarkable relations along these lines. They follow from dividing (11.23) and (11.24) by $T$ and rearranging so that $S$ becomes the quantity differentiated. That produces

\begin{displaymath}
\left(\frac{\partial S}{\partial T}\right)_V
= \frac{1}{T}\left(\frac{\partial E}{\partial T}\right)_V
\qquad
\left(\frac{\partial S}{\partial V}\right)_T
= \frac{1}{T}\left(\frac{\partial E}{\partial V}\right)_T + \frac{P}{T}
\qquad
\frac{1}{T^2}\left(\frac{\partial E}{\partial V}\right)_T
= \left(\frac{\partial\, P/T}{\partial T}\right)_V
\end{displaymath} (11.31)

\begin{displaymath}
\left(\frac{\partial S}{\partial T}\right)_P
= \frac{1}{T}\left(\frac{\partial H}{\partial T}\right)_P
\qquad
\left(\frac{\partial S}{\partial P}\right)_T
= \frac{1}{T}\left(\frac{\partial H}{\partial P}\right)_T - \frac{V}{T}
\qquad
- \frac{1}{T^2}\left(\frac{\partial H}{\partial P}\right)_T
= \left(\frac{\partial\, V/T}{\partial T}\right)_P
\end{displaymath} (11.32)

What is so remarkable is the final equation in each case: they do not involve entropy in any way, just the normal variables $P$, $V$, $T$, $H$, and $E$. Merely because entropy exists, there must be relationships between these variables which seemingly have absolutely nothing to do with the second law.

As an example, consider an ideal gas, more precisely, any substance that satisfies the ideal gas law

\begin{displaymath}
\fbox{$\displaystyle
Pv = RT \quad\mbox{with}\quad R = \frac{k_{\rm B}}{m} = \frac{R_{\rm{u}}}{M}
\qquad R_{\rm{u}} = \mbox{8.314 472 kJ/kmol K}
$} %
\end{displaymath} (11.33)

The constant $R$ is called the specific gas constant; it can be computed from the ratio of the Boltzmann constant $k_{\rm B}$ and the mass of a single molecule $m$. Alternatively, it can be computed from the “universal gas constant” $R_{\rm{u}} = I_{\rm A}k_{\rm B}$ and the molar mass $M = I_{\rm A}m$. For an ideal gas like that, the equations above show that the internal energy and enthalpy are functions of temperature only. And then so are the specific heats $C_v$ and $C_p$, because those are their temperature derivatives:
\begin{displaymath}
\fbox{$\displaystyle
\mbox{For ideal gases:}\quad
e,h,C_v,C_p=e,h,C_v,C_p(T)
\quad
C_p = C_v + R
$} %
\end{displaymath} (11.34)

(The final relation is because $C_p = {\rm d}h/{\rm d}T = {\rm d}(e+Pv)/{\rm d}T$ with ${\rm d}e/{\rm d}T = C_v$ and $Pv = RT$.) Ideal gas tables can therefore be tabulated by temperature only; there is no need to include a second independent variable. You might think that entropy should be tabulated against both varying temperature and varying pressure, because it does depend on both pressure and temperature. However, the Maxwell equation (11.30) may be used to find the entropy at any pressure as long as it is listed for just one pressure, say for one bar.
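For example, for an ideal gas the final equation in (11.30) gives, per unit mass, $(\partial s/\partial P)_T = -(\partial v/\partial T)_P = -R/P$, so integrating at constant temperature from the listed pressure, here called $P_{\rm{ref}}$:

\begin{displaymath}
s(T,P) = s(T,P_{\rm{ref}}) - R \ln\frac{P}{P_{\rm{ref}}}
\end{displaymath}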

There is a sleeper among the Maxwell equations: the very first one, in (11.27). Turned on its head, it says that

\begin{displaymath}
\fbox{$\displaystyle
\frac{1}{T} =
\left(
\frac{\partial S}{\partial E}
\right)_{V\ {\rm and\ other\ external\ parameters\ fixed}}
$} %
\end{displaymath} (11.35)

This can be used as a definition of temperature. Note that in taking the derivative, the volume of the box, the number of particles, and other external parameters, like maybe an external magnetic field, must be held constant. To understand qualitatively why the above derivative defines a temperature, consider two systems $A$ and $B$ for which $A$ has the larger temperature according to the definition above. If these two systems are brought into thermal contact, then net messiness increases when energy flows from the high temperature system $A$ to the low temperature system $B$, because system $B$, with the higher value of the derivative, increases its entropy more than $A$ decreases its own.
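Written out, if an amount of energy ${\rm d}Q$ leaves $A$ and enters $B$ while the volumes and other external parameters stay fixed, the definition above gives for the net change in entropy

\begin{displaymath}
{\rm d}S_A + {\rm d}S_B = - \frac{{\rm d}Q}{T_A} + \frac{{\rm d}Q}{T_B}
= \left(\frac{1}{T_B} - \frac{1}{T_A}\right) {\rm d}Q
\end{displaymath}

which is indeed positive when $T_A > T_B > 0$.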

Of course, this new definition of temperature is completely consistent with the ideal gas one; it was derived from it. However, the new definition also works fine for negative temperatures. Assume a system $A$ has a negative temperature according to the definition above. Then its messiness (entropy) increases if it gives up heat. That is in stark contrast to normal substances at positive temperatures that increase in messiness if they take in heat. So assume that system $A$ is brought into thermal contact with a normal system $B$ at a positive temperature. Then $A$ will give off heat to $B$, and both systems increase their messiness, so everyone is happy. It follows that $A$ will give off heat no matter how hot the normal system it is brought into contact with may be. While the temperature of $A$ may be negative, it is hotter than any substance with a normal positive temperature!

And now the big question: what is that “chemical potential” you hear so much about? Nothing new, really. For a pure substance with a single constituent like this chapter is supposed to discuss, the chemical potential is just the specific Gibbs free energy on a molar basis, $\bar\mu = \bar{g}$. More generally, if there is more than one constituent the chemical potential $\bar\mu_c$ of each constituent $c$ is best defined as

\begin{displaymath}
\fbox{$\displaystyle
\bar\mu_c \equiv
\left(
\frac{\partial G}{\partial \bar\imath_c}
\right)_{P,T}
$} %
\end{displaymath} (11.36)

(If there is only one constituent, then $G = \bar\imath\bar{g}$ and the derivative does indeed produce $\bar{g}$. Note that an intensive quantity like $\bar{g}$, when considered to be a function of $P$, $T$, and $\bar\imath$, only depends on the two intensive variables $P$ and $T$, not on the amount of particles $\bar\imath$ present.) If there is more than one constituent, and assuming that their Gibbs free energies simply add up, as in

\begin{displaymath}
G = \bar\imath_1 \bar g_1 + \bar\imath_2 \bar g_2 + \ldots
= \sum_c \bar\imath_c \bar g_c,
\end{displaymath}

then the chemical potential $\bar\mu_c$ of each constituent is simply the molar specific Gibbs free energy $\bar{g}_c$ of that constituent.

The partial derivatives described by the chemical potentials are important for figuring out the stable equilibrium state a system will achieve in an isothermal, isobaric environment, i.e. in an environment that is at constant temperature and pressure. As noted earlier in this section, the Gibbs free energy must be as small as it can be in equilibrium at a given temperature and pressure. Now according to calculus, the full differential for a change in Gibbs free energy is

\begin{displaymath}
{\rm d}G(P,T,\bar\imath_1,\bar\imath_2,\ldots) =
\frac{\partial G}{\partial T}\, {\rm d}T
+ \frac{\partial G}{\partial P}\, {\rm d}P
+ \frac{\partial G}{\partial \bar\imath_1}\, {\rm d}\bar\imath_1
+ \frac{\partial G}{\partial \bar\imath_2}\, {\rm d}\bar\imath_2
+ \ldots
\end{displaymath}

The first two partial derivatives, which keep the number of particles fixed, were identified in the discussion of the Maxwell equations as $-S$ and $V$; also the partial derivatives with respect to the numbers of particles of the constituents have been defined as the chemical potentials $\bar\mu_c$. Therefore more shortly,
\begin{displaymath}
\fbox{$\displaystyle
{\rm d}G =
- S\, {\rm d}T + V\, {\rm d}P + \sum_c \bar\mu_c\, {\rm d}\bar\imath_c
$} %
\end{displaymath} (11.37)

This generalizes (11.26) to the case that the numbers of constituents change. At equilibrium at given temperature and pressure, the Gibbs energy must be minimal. It means that ${\rm d}G$ must be zero whenever ${\rm d}T = {\rm d}P = 0$, regardless of any infinitesimal changes in the amounts of the constituents. That gives a condition on the fractions of the constituents present.

Note that there are typically constraints on the changes ${\rm d}\bar\imath_c$ in the amounts of the constituents. For example, in a liquid-vapor “phase equilibrium,” any additional amount of particles ${\rm d}\bar\imath_{\rm{f}}$ that condenses to liquid must equal the amount $-{\rm d}\bar\imath_{\rm{g}}$ of particles that disappears from the vapor phase. (The subscripts follow the unfortunate convention liquid=fluid=f and vapor=gas=g. Don’t ask.) Putting this relation in (11.37), it can be seen that the liquid and vapor phases must have the same chemical potential, $\bar\mu_{\rm{f}} = \bar\mu_{\rm{g}}$. Otherwise the Gibbs free energy would get smaller when more particles enter whatever is the phase of lowest chemical potential, and the system would collapse completely into that phase alone.
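In formulas, at constant temperature and pressure (11.37) reduces here to

\begin{displaymath}
{\rm d}G = \bar\mu_{\rm{f}}\, {\rm d}\bar\imath_{\rm{f}}
+ \bar\mu_{\rm{g}}\, {\rm d}\bar\imath_{\rm{g}}
= \left(\bar\mu_{\rm{f}} - \bar\mu_{\rm{g}}\right) {\rm d}\bar\imath_{\rm{f}}
\end{displaymath}

and that can only be zero for arbitrary ${\rm d}\bar\imath_{\rm{f}}$ if $\bar\mu_{\rm{f}} = \bar\mu_{\rm{g}}$.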

The equality of chemical potentials suffices to derive the famous Clausius-Clapeyron equation relating pressure changes under two-phase, or “saturated,” conditions to the corresponding temperature changes. For, the changes in the chemical potentials must be equal too, ${\rm d}\bar\mu_{\rm{f}} = {\rm d}\bar\mu_{\rm{g}}$, and substituting in the differential (11.26) for the Gibbs free energy, taking it on a molar basis since $\bar\mu = \bar{g}$,

\begin{displaymath}
- \bar s_{\rm {f}} {\rm d}T + \bar v_{\rm {f}} {\rm d}P
=
- \bar s_{\rm {g}} {\rm d}T + \bar v_{\rm {g}} {\rm d}P
\end{displaymath}

and rearranging gives the Clausius-Clapeyron equation:

\begin{displaymath}
\frac{{\rm d}P}{{\rm d}T} = \frac{s_{\rm {g}} - s_{\rm {f}}}{v_{\rm {g}} - v_{\rm {f}}}
\end{displaymath}

Note that since the right-hand side is a ratio, it does not make a difference whether you take the entropies and volumes on a molar basis or on a mass basis. The mass basis is shown since that is how you will typically find the entropy and volume tabulated. Typical engineering thermodynamic textbooks will also tabulate $s_{\rm{fg}} = s_{\rm{g}}-s_{\rm{f}}$ and $v_{\rm{fg}} = v_{\rm{g}}-v_{\rm{f}}$, making the formula above very convenient.

In case your tables do not have the entropies of the liquid and vapor phases, they often still have the “latent heat of vaporization,” also known as “enthalpy of vaporization” or similar, and in engineering thermodynamics books typically indicated by $h_{\rm{fg}}$. That is the difference between the enthalpies of the saturated vapor and liquid phases, $h_{\rm{fg}} = h_{\rm{g}}-h_{\rm{f}}$. If saturated liquid is turned into saturated vapor by adding heat under conditions of constant pressure and temperature, (11.24) shows that the change in enthalpy $h_{\rm{g}}-h_{\rm{f}}$ equals $T(s_{\rm{g}}-s_{\rm{f}})$. So the Clausius-Clapeyron equation can be rewritten as

\begin{displaymath}
\fbox{$\displaystyle
\frac{{\rm d}P}{{\rm d}T} = \frac{h_{\rm{fg}}}{T(v_{\rm{g}} - v_{\rm{f}})}
$} %
\end{displaymath} (11.38)

Because $T\,{\rm d}s$ is the heat added, the physical meaning of the latent heat of vaporization is the heat needed to turn saturated liquid into saturated vapor while keeping the temperature and pressure constant.
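As a rough numerical check (a sketch; the property values below are approximate steam-table numbers for saturated water at 100 $^\circ$C and are only meant to be illustrative):

\begin{verbatim}
# Clausius-Clapeyron slope dP/dT = h_fg / (T (v_g - v_f))
# for saturated water near 100 degC; values are approximate.

T = 373.15        # saturation temperature [K]
h_fg = 2257e3     # latent heat of vaporization [J/kg]
v_f = 0.001       # specific volume of saturated liquid [m^3/kg]
v_g = 1.673       # specific volume of saturated vapor [m^3/kg]

dP_dT = h_fg / (T * (v_g - v_f))                  # [Pa/K]
print(f"dP/dT is about {dP_dT/1000:.1f} kPa/K")   # roughly 3.6 kPa/K
\end{verbatim}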

For chemical reactions, like maybe

\begin{displaymath}
2 {\rm H}_2 + {\rm O}_2 \Longleftrightarrow 2 {\rm H}_2{\rm O},
\end{displaymath}

the changes in the amounts of the constituents are related as

\begin{displaymath}
{\rm d}\bar\imath_{{\rm H}_2} = - 2\, {\rm d}\bar r \qquad
{\rm d}\bar\imath_{{\rm O}_2} = - {\rm d}\bar r \qquad
{\rm d}\bar\imath_{{\rm H}_2{\rm O}} = 2\, {\rm d}\bar r
\end{displaymath}

where ${\rm d}\bar{r}$ is the additional number of times the forward reaction takes place from the starting state. The constants $-2$, $-1$, and 2 are called the “stoichiometric coefficients.” They can be used when applying the condition that at equilibrium, the change in Gibbs energy due to an infinitesimal amount of further reactions ${\rm d}\bar{r}$ must be zero.
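Carrying that out for the reaction above: at constant temperature and pressure, (11.37) gives ${\rm d}G = \left(- 2\bar\mu_{{\rm H}_2} - \bar\mu_{{\rm O}_2} + 2\bar\mu_{{\rm H}_2{\rm O}}\right){\rm d}\bar r$, so the equilibrium condition ${\rm d}G = 0$ becomes

\begin{displaymath}
2 \bar\mu_{{\rm H}_2{\rm O}} = 2 \bar\mu_{{\rm H}_2} + \bar\mu_{{\rm O}_2}
\end{displaymath}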

However, chemical reactions are often posed in a context of constant volume rather than constant pressure, for one because it simplifies the reaction kinetics. For constant volume, the Helmholtz free energy must be used instead of the Gibbs one. Does that mean that a second set of chemical potentials is needed to deal with those problems? Fortunately, the answer is no, the same chemical potentials will do for Helmholtz problems. To see why, note that by definition $F = G - PV$, so ${\rm d}F = {\rm d}G - P\,{\rm d}V - V\,{\rm d}P$, and substituting for ${\rm d}G$ from (11.37), that gives

\begin{displaymath}
\fbox{$\displaystyle
{\rm d}F =
- S\, {\rm d}T - P\, {\rm d}V + \sum_c \bar\mu_c\, {\rm d}\bar\imath_c
$} %
\end{displaymath} (11.39)

Under isothermal and constant volume conditions, the first two terms in the right hand side will be zero and $F$ will be minimal when the differentials with respect to the amounts of particles add up to zero.

Does this mean that the chemical potentials are also specific Helmholtz free energies, just like they are specific Gibbs free energies? Of course the answer is no, and the reason is that the partial derivatives of $F$ represented by the chemical potentials keep the extensive volume $V$, instead of the intensive molar specific volume $\bar{v}$, constant. A single-constituent molar specific Helmholtz energy $\bar{f}$ can be considered to be a function $\bar{f}(T,\bar{v})$ of temperature and molar specific volume, two intensive variables, and then $F = \bar\imath\bar{f}(T,\bar{v})$, but $\Big(\partial\bar\imath\bar{f}(T,V/\bar\imath)/\partial\bar\imath\Big)_{T,V}$ does not simply produce $\bar{f}$, even though $\Big(\partial\bar\imath\bar{g}(T,P)/\partial\bar\imath\Big)_{T,P}$ does produce $\bar{g}$.
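In fact, working that derivative out confirms the earlier identification of the chemical potential with $\bar g$: using $\bar v = V/\bar\imath$ and $(\partial \bar f/\partial \bar v)_T = -P$ from the molar form of (11.25),

\begin{displaymath}
\left(\frac{\partial\, \bar\imath \bar f(T,V/\bar\imath)}{\partial \bar\imath}\right)_{T,V}
= \bar f - \bar v \left(\frac{\partial \bar f}{\partial \bar v}\right)_T
= \bar f + P \bar v = \bar g
\end{displaymath}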