
Lecture notes on Statistical Physics

by Peter Johansson, Department of Natural Sciences, Örebro University, August 25, 2005

Contents

1 Introduction
2 Basics: energy, entropy, and temperature
   2.1 Introductory example
   2.2 Multiplicity and entropy
   2.3 Two systems in thermal contact
   2.4 Thermal equilibrium and temperature
   2.5 Problems for Chapter 2
3 The Boltzmann distribution
   3.1 Derivation of the Boltzmann distribution
   3.2 Application to a harmonic oscillator
   3.3 Two-dimensional harmonic oscillator
   3.4 Solved examples
   3.5 Problems for Chapter 3
4 Thermal radiation
   4.1 Photon modes
   4.2 Stefan-Boltzmann law
   4.3 The Planck radiation law
   4.4 Problems for Chapter 4
5 The ideal gas
   5.1 The Boltzmann distribution applied to a particle in a box
   5.2 Boltzmann distribution and thermodynamics
   5.3 Many particles in a box: the ideal gas
   5.4 Heat and work in statistical physics
   5.5 Problems for Chapter 5
6 Chemical potential and Gibbs distribution
   6.1 Chemical equilibrium and chemical potential
   6.2 Chemical potential and thermodynamics
   6.3 Internal and external chemical potential
   6.4 Derivation of the Gibbs distribution
   6.5 The Fermi-Dirac and Bose-Einstein distributions
   6.6 The Fermi gas
   6.7 Problems for Chapter 6
7 Phase transitions
   7.1 Introduction
   7.2 Phase diagrams
   7.3 Applications to practical situations
   7.4 The van der Waals equation of state
   7.5 Problems for Chapter 7
Appendix
A Summation of geometric series
Index

Chapter 1 Introduction
These lecture notes are intended to be used in the course on Statistical Physics at Örebro University. The course starts with a refresher of classical thermodynamics, for which University Physics by H. Benson (J. Wiley & Sons, 1996) is used. The reader should have some basic knowledge of classical thermodynamics and classical mechanics. No prior, detailed knowledge of quantum mechanics is needed, even though almost all students following the course have taken a previous course on quantum mechanics; the basic facts about the energy levels etc. of various models, such as the harmonic oscillator and the particle in a box, are introduced in the text as we go along. The primary source of inspiration in writing these notes has been Thermal Physics, by C. Kittel and H. Kroemer (W. H. Freeman, 1980). I recommend this book wholeheartedly to anyone who wants to further his or her knowledge of the subject.

Chapter 2 Basics: energy, entropy, and temperature


2.1 Introductory example
Statistical physics deals with the properties of systems that are so large that it is both impossible and meaningless to try to find out and describe the exact state of the system. In chemistry and solid state physics, to take a couple of examples, one often is faced with systems that contain some 10^23 particles. It is then out of the question to ask exactly how all these particles move or in what quantum states they are. It is, on the other hand, perfectly reasonable to try to answer questions like: "What is the average energy of a particle in the system?" or "What is the average number of particles in a unit volume?"

1st sequence

H H H H H H H H H H H H H H H H H H H H H H H H H

2nd sequence

T H H H T H T H H H T H T H H T T H H T H H T H T

Figure 2.1: Two possible results of an experiment with 25 coin tosses.

To try to establish a link between statistics and physics, let us first consider an example that has no direct connection with physics. We toss a coin 25 times. Each time the result can be either heads (H) or tails (T). Two possible outcomes of such an experiment are illustrated in Fig. 2.1. Which of these results is more probable than the other? Actually, the answer to this question depends on how we view the result: how much information about the result do we want to retain? On one hand, the two results are two unique sequences of H and T; thus even if the first sequence is special and it seems unlikely that it should occur, the second sequence is equally special and unlikely. At a detailed, microscopic level the two sequences are equally probable. If we instead look at things from a macroscopic point of view, just counting how many times the coin toss gave the result H, then the second result (15 H) is much more probable than the first result (25 H). The two probabilities are, respectively,
P(25 H) = 1/2^25 ≈ 3 × 10^−8,    (2.1)

since there are in total 2^25 = 33554432 possible outcomes and only one of these gives us 25 H in a row, and

P(15 H) = (25!/(15! 10!))/2^25 ≈ 0.097,    (2.2)

because in this case there are 25!/(15! 10!) = 3268760 different sequences giving 15 H. Of course, we have tacitly assumed that the coin is perfect so that H and T events are equally probable.

By finding analogies with the physical world the above example can tell us a lot about what statistical physics is. In physics one can make two kinds of measurements. It is in some cases possible to measure the exact microscopic state of a system, for example an atom. This kind of measurement corresponds to observing the exact sequence of H and T in the coin toss experiment. However, for a system with 10^23 atoms, microscopic measurements that tell us everything about the state of every single atom are no longer possible. What we can do is, for example, to put a thermometer into a glass of water and thereby measure the average energy of the water molecules. This kind of experiment corresponds to counting the number of H in the coin toss experiment. In statistical physics one makes the assumption that all microscopic states that a system can attain (given certain constraints such as energy conservation) are equally probable. From this very fundamental assumption one can derive all the laws of statistical physics and thermodynamics. One consequence of this is that when a macroscopic measurement is made, some results are enormously more probable than others.
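The two probabilities above are easy to check with a few lines of Python (a quick sketch; `math.comb` is available from Python 3.8):

```python
from math import comb

tosses = 25
outcomes = 2 ** tosses            # 2^25 equally probable sequences

# Macroscopic view of the 1st sequence: 25 heads, only one such sequence.
p_25_heads = 1 / outcomes

# Macroscopic view of the 2nd sequence: 15 heads, in any order.
p_15_heads = comb(25, 15) / outcomes

print(outcomes)        # 33554432
print(comb(25, 15))    # 3268760
print(p_25_heads)      # ≈ 2.98e-08
print(p_15_heads)      # ≈ 0.0974
```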

2.2 Multiplicity and entropy


We will now consider a physical model system and from there develop an understanding of some of the basic quantities in statistical physics and thermodynamics. The model system consists of N two-level systems, i.e. quantum systems that can be in either of two different quantum states, the ground state and a single excited state. In the real world the two-level systems could be atoms with a spin that can point either up or down. If these atoms are placed in a magnetic field B, the spins (just like small magnets) find it energetically favorable to align themselves with the external magnetic field, because this results in a lower energy. To be specific, the spin of a two-level system can take the two values m = 1/2 (spin-up) and m = −1/2 (spin-down), respectively. Assuming that the magnetic field points upwards, the corresponding energies can be written ε = −2mμB, where μ is a constant; thus ε↑ = −μB and ε↓ = +μB. This is illustrated in Fig. 2.2.

Figure 2.2: The two different states of a 1/2 spin in an external magnetic field, and the corresponding energy levels.

We now consider a larger system consisting of N spins (two-level systems). Of these, N↑ point upwards and N↓ point downwards; of course N↑ + N↓ = N. The total, net spin of the system is just half the difference between the number of spins up and down,

s = (N↑ − N↓)/2,    (2.3)

and the total internal energy of the system is

U = N↑ ε↑ + N↓ ε↓ = −(N↑ − N↓) μB = −2sμB.    (2.4)

Multiplicity. In the following it will be of considerable interest to know how many different microscopic states yield a certain value for the total spin s. This is in principle the same thing as asking how many different sequences of heads and tails give a certain number of heads in the coin toss experiment. We will call this quantity the multiplicity function and denote it g(N, N↓). If the total number of spins is N and N↓ of them point down, then N↑ = N − N↓ must point upwards. To determine in how many ways one can select N↓ down-spins among the N spins one reasons in the following way. Number all the spins from 1 through N and select one of them that should point downwards; there are N different spins to choose between. Then choose a second down-spin; this time there are only N − 1 different possibilities to choose between. Moreover, we could have chosen the two down-spins in the opposite order and come to the same result, therefore there are

N(N − 1)/2    (2.5)

different ways of choosing 2 down-spins. Generalizing the argument to any number of down-spins one finds that there are

g(N, N↓) = N!/(N↓! (N − N↓)!)    (2.6)

different possibilities. This can be reexpressed in terms of N and the total spin s as

g(N, s) = N!/[(N/2 + s)! (N/2 − s)!].    (2.7)

In Table 2.1 we display calculated values of g for the case N = 40. Even for such a small N the multiplicity becomes very large for situations with equal or nearly equal numbers of spins up and down.

Table 2.1: Calculated values for the multiplicity for N = 40 and different values of N↓ and s.

N↓   0    1    10           20             30           39    40
s    20   19   10           0              −10          −19   −20
g    1    40   847660528    137846528820   847660528    40    1

Entropy. The multiplicity tells how many different states are available to our system given certain macroscopic constraints on U and N. Thus, g is a measure of uncertainty or, in other words, disorder. In classical thermodynamics the entropy S is a measure of disorder, and it turns out that the two are related through

S = k_B ln g,    (2.8)

where k_B = 1.38 × 10^−23 J/K is the Boltzmann constant. (Note that the universal gas constant R = N_A k_B, where N_A is Avogadro's number.) In other words, multiplicity, which at first sight appears to be a rather abstract concept, has a direct connection with classical thermodynamics. We note that our spin system has zero entropy (S = 0) in case all the spins are aligned with (N↓ = 0), or anti-aligned with (N↓ = N), the magnetic field, since g = 1 in these two cases.
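The entries of Table 2.1, and the corresponding entropies, can be reproduced with a short script (a sketch; variable names are our own and the numerical value of k_B is the one quoted above):

```python
from math import comb, log

N = 40
kB = 1.38e-23  # Boltzmann constant, J/K

for n_down in (0, 1, 10, 20, 30, 39, 40):
    s = (N - 2 * n_down) / 2    # total spin, with N_up = N - N_down
    g = comb(N, n_down)         # multiplicity N!/(N_down! (N - N_down)!)
    S = kB * log(g)             # entropy S = kB ln g; zero when g = 1
    print(n_down, s, g, S)
```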

2.3 Two systems in thermal contact


We now consider a situation where two systems are in thermal contact. This means that they can exchange energy with each other. The energy of one system increases while that of the other decreases so that the total energy is conserved. We will use the spin two-level systems as models also for this discussion. Suppose we have two spin systems with N₁ and N₂ spins, respectively. The total energy is such that in total N↓ spins are in the excited state, whereas N↑ = N₁ + N₂ − N↓ are in the ground state. Figure 2.3 shows such a system.

Figure 2.3: A spin model system divided into two subsystems, with N₁ and N₂ spins respectively, that are in thermal contact. Energy exchange can take place at the interface between them.

We now want to see how the spin-downs are typically divided between the two subsystems. The multiplicity of states yielding N↓1 spin-downs in subsystem 1 is

g_tot(N↓1) = g(N₁, N↓1) g(N₂, N↓ − N↓1).    (2.9)

In Fig. 2.4 we have plotted the results for the multiplicity as a function of N↓1 for two cases, N₁ = N₂ = 40 with N↓ = 20 (shown in panel a), and N₁ = N₂ = 400 with N↓ = 200 (panel b), respectively. In both cases one quarter of the total number of spins are in the high-energy state. The two diagrams show that the multiplicity (for the system as a whole) reaches a maximum when N↓1 = N↓/2, as one may expect. However, the multiplicities for values of N↓1 near the expected one are also rather large. There are fluctuations in the value of N↓1. With an increasing number of spins in the system the fluctuations, in absolute numbers, become larger; the width of the peak (at half maximum) is clearly larger in panel (b) than in panel (a). However, more importantly, the relative width decreases when the total number of spins increases. Actually the width of the peak of the multiplicity curve scales as √N, and consequently the relative width scales as 1/√N. If we extend this reasoning to an everyday situation, where the thermodynamic systems we encounter contain 10^23 atoms or so, we find that the relative fluctuations for various quantities are extremely small, namely of the order of 1/√N. Therefore, provided that a macroscopic system, for example a liter of water, is in thermodynamic equilibrium (i.e. has the same temperature throughout), energy is practically homogeneously distributed in it.

Figure 2.4: The multiplicity calculated for two different model spin systems, N₁ = N₂ = 40 (panel a) and N₁ = N₂ = 400 (panel b).

As we have already seen considering the model spin system, and will see several times later, things are not quite so simple in the microscopic world. If a system contains just a few atoms there will be large fluctuations in quantities such as the total energy, and one can only discuss the energy content in statistical terms such as expectation values etc.
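The behaviour of these fluctuations can be illustrated numerically. The sketch below (function and variable names are our own) computes the combined multiplicity g(N₁, n₁) g(N₂, N↓ − n₁) for the two cases in Fig. 2.4 and estimates the full width at half maximum of the peak:

```python
from math import comb

def combined_multiplicity(N1, N2, n_down_total):
    """g_tot(n1) = g(N1, n1) * g(N2, n_down_total - n1) for all feasible n1."""
    out = {}
    for n1 in range(n_down_total + 1):
        n2 = n_down_total - n1
        if n1 <= N1 and n2 <= N2:
            out[n1] = comb(N1, n1) * comb(N2, n2)
    return out

for N1, N2, nd in ((40, 40, 20), (400, 400, 200)):
    g = combined_multiplicity(N1, N2, nd)
    peak_n1 = max(g, key=g.get)                        # most probable division
    half = g[peak_n1] / 2
    width = sum(1 for v in g.values() if v >= half)    # full width at half maximum
    # the absolute width grows, but the relative width shrinks ~1/sqrt(N)
    print(N1, peak_n1, width, width / N1)
```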

2.4 Thermal equilibrium and temperature


From the plots above we can also draw another interesting conclusion. A large system in thermal equilibrium ends up in a macroscopic state near the most likely one. By this we mean that since the relative fluctuations are very small in a large system, it is extremely unlikely to measure values for macroscopic quantities that differ from those corresponding to the maximum multiplicity. Thus, in mathematical terms thermal equilibrium means that the multiplicity should have a maximum. This fact will lead us to a theoretical definition of temperature. The starting point for this definition is Eq. (2.9). Of course this expression reaches a maximum when the derivative with respect to N↓1 vanishes,

(d/dN↓1) [g₁(N₁, N↓1) g₂(N₂, N↓ − N↓1)] = 0.    (2.10)

(While N↓1 of course is an integer, we can treat it as a continuous variable provided the system we look at contains many spins.) Now we rewrite this expression so that we can make better contact with physical quantities such as entropy and energy. We first use the definition of entropy, S = k_B ln g, to write

g₁ g₂ = e^{(S₁ + S₂)/k_B}    (2.11)

(note that the subsystem entropies add), which yields the derivative

(d/dN↓1)(g₁ g₂) = (1/k_B) e^{(S₁ + S₂)/k_B} (dS₁/dN↓1 − dS₂/dN↓2).    (2.12)

This vanishes when the expression in parenthesis is 0,

dS₁/dN↓1 − dS₂/dN↓2 = 0,    (2.13)

since dN↓2/dN↓1 = −1 (note that N↓2 = N↓ − N↓1). Also using the fact that the energies of the two subsystems can be written

U₁ = (2N↓1 − N₁) μB  and  U₂ = (2N↓2 − N₂) μB,    (2.14)

respectively, i.e. the energies are proportional to the number of down-spins, the condition for maximum multiplicity can be written

dS₁/dU₁ = dS₂/dU₂.    (2.15)

This last condition should characterize thermal equilibrium, which we know means that two systems have the same temperature. Therefore the derivative dS/dU, which is equal in the two subsystems at maximum multiplicity, must have some relation to the temperature. To get a theoretical definition that is consistent with experimental definitions and our everyday experiences of what temperature is, one must choose

1/T = ∂S/∂U    (2.16)

as the definition of absolute temperature (measured in kelvin). In this way, (i) temperatures will normally be positive, and (ii) in case thermal equilibrium is not yet established, energy will flow from the subsystem with the higher temperature to the one with the lower temperature. Suppose that the temperature of subsystem 1 is higher than that of subsystem 2, T₁ > T₂. Now if a small amount of energy ΔU is transferred from subsystem 1 to subsystem 2, there is a change in the total entropy S = S₁ + S₂,

ΔS = −ΔU/T₁ + ΔU/T₂ = ΔU (1/T₂ − 1/T₁) > 0.    (2.17)

Thus, when energy flows from higher to lower temperature, there is an increase in entropy, as there should be according to the second law of thermodynamics. Due to the close relation between entropy and multiplicity, an increase of entropy means that the system goes from a less likely to a more likely macroscopic state.
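The equilibrium condition, equal derivatives dS/dU in the two subsystems at the multiplicity maximum, can be checked numerically. In the sketch below (our own construction), since U is proportional to N↓ in the spin model, equal discrete slopes of ln g with respect to the number of down-spins are equivalent to equal dS/dU:

```python
from math import comb, log

N1, N2, nd = 400, 400, 200    # the larger system of Fig. 2.4

def g_tot(n1):
    return comb(N1, n1) * comb(N2, nd - n1)

# the most probable division of down-spins between the subsystems
n1_star = max(range(nd + 1), key=g_tot)

# discrete derivatives d(ln g)/dn_down for each subsystem at the maximum
slope1 = log(comb(N1, n1_star + 1)) - log(comb(N1, n1_star))
slope2 = log(comb(N2, nd - n1_star + 1)) - log(comb(N2, nd - n1_star))

print(n1_star)                 # 100: half the down-spins in each subsystem
print(abs(slope1 - slope2))    # 0.0: the two "temperatures" match at the maximum
```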

2.5 Problems for Chapter 2


2.1 Entropy and temperature Suppose the multiplicity of a system is g(U) = C U^{3N/2}, where C is a constant and N the number of particles. (a) Show that U = (3/2) N k_B T. (b) Show that ∂²S/∂U² is negative. This form of g(U) actually applies to an ideal gas.


2.2 The meaning of "never" It is sometimes said that six monkeys, set to type on word processors for millions of years, would eventually write all the books in the British Museum. This statement does not make sense, for it gives misleading conclusions about very, very large numbers. Could all the monkeys in the world have typed out a single specified book in the age of the universe? Suppose that 10^10 monkeys have been seated at word processors throughout the age of the universe, 10^18 s. This number of monkeys is almost twice that of the present human population. We suppose that a monkey can hit 10 keys per second. A keyboard may have 44 keys; we accept lowercase letters in place of capital letters. Assuming that Shakespeare's Hamlet has 10^5 characters, will the monkeys ever write Hamlet? (a) Show that the probability that any given sequence of 10^5 characters typed at random will come out in the correct sequence (Hamlet) is of the order of

44^{−100000} ≈ 10^{−164345},

where we have used that log₁₀ 44 = 1.64345. (b) Show that the probability that a monkey-Hamlet will be typed in the age of the universe is approximately 10^{−164316}. The probability of a monkey-Hamlet is therefore zero in any operational sense. The statement found at the beginning of this problem is nonsense; one book, much less a library, will never occur in the total literary production of the monkeys.

2.3 Actual variation of the number of states Consider any macroscopic system at room temperature. (a) Use the definition of absolute temperature to find the relative increase of the number of accessible states when the energy of the system is increased by 1 meV. (b) Suppose that the same system absorbs a photon of visible light with wavelength 500 nm. By what factor does the number of accessible states increase in this case?

Chapter 3 The Boltzmann distribution


3.1 Derivation of the Boltzmann distribution
The discussion in Chapter 2 mainly focused on a closed model system. Energy could be exchanged between subparts of the system, but we did not consider the possibility of exchanging energy with the external world at large. A system that cannot exchange energy with the surroundings is in statistical physics described by the so-called microcanonical ensemble. Put briefly, the microcanonical ensemble means that the system can only be in states which have a certain total energy consistent with the initial conditions of the system and the energy principle (energy conservation). In physics and in daily life we are normally much more interested in systems that can interact with the surroundings and change (increase or decrease) their energy. In statistical physics the probability distribution of finding such a system with a particular energy is described by the so-called Boltzmann distribution. The Boltzmann distribution is also said to describe the canonical ensemble; this ensemble is bigger than the microcanonical one in the sense that states of all energies are included. We will now show how one can derive the Boltzmann distribution. The starting point is similar to the one we took in Chapter 2 when introducing the concept of temperature. We consider a closed, very large system and divide it into two parts. However, this time we do not split it into two equally large parts. Instead we make one subsystem, S, small; this is the system we are actually interested in. The other subsystem, the reservoir R, is very large, much larger than S. A schematic illustration is given in Fig. 3.1.

Figure 3.1: The total system R + S can be divided into a reservoir R and a small system S that are in thermal contact and exchange energy with each other. Depending on the exact state of S (state 1 with energy ε₁, or state 2 with energy ε₂), the energy of the reservoir, U_tot − ε₁ or U_tot − ε₂, and hence its multiplicity, changes.

Typically the Boltzmann distribution is applied to situations where the system S is truly microscopic, for example an atom. We assume, furthermore, that we at least in principle know all the quantized energy levels that the system can be found in. The reservoir R, on the other hand, is so large that no matter which state system S is in, the repercussions on R are completely negligible. We also assume that the interaction with the reservoir is weak and therefore does not change the energy levels of S appreciably. The basic question is: What is the probability of finding the system S in a particular state with a certain energy? To answer this we must calculate the multiplicity for the total system given the constraint that the small system has a certain energy. Let us look at a concrete example where we have a total system with a total energy U_tot as illustrated in Fig. 3.1. We label two of the states that S can be found in 1 and 2, and their respective energies are ε₁ and ε₂. We denote the corresponding multiplicities (for the total system) g₁ and g₂. Since both the states 1 and 2 are completely unique, the multiplicities will be given by the multiplicities for the reservoir in the two situations, thus

g₁ = g_R(U_tot − ε₁)  and  g₂ = g_R(U_tot − ε₂).    (3.1)

All states that are available to the total system are equally probable; this is the fundamental assumption of statistical physics. Therefore the ratio between the probabilities of finding the system S in either state 1 or 2 is given by the ratio between the multiplicities,

P₂/P₁ = g₂/g₁ = g_R(U_tot − ε₂)/g_R(U_tot − ε₁).    (3.2)

We proceed by reexpressing the multiplicities in terms of the entropy, because this will make it easier to get a physically meaningful result. We know that S = k_B ln g for any system, and then g = e^{S/k_B}. Applying this to Eq. (3.2) we get

P₂/P₁ = e^{[S_R(U_tot − ε₂) − S_R(U_tot − ε₁)]/k_B}.    (3.3)

As we saw when introducing the definition of temperature, the entropy can be viewed as a function of the internal energy U, thus

S_R = S_R(U).    (3.4)

Furthermore, since the reservoir is very large, U_tot − ε₁ and U_tot − ε₂ are very close to U_tot, and the two different entropies can be calculated using a Taylor expansion around U_tot, where the reservoir temperature T emerges as a key quantity,

S_R(U_tot − ε₁) ≈ S_R(U_tot) − ε₁ (dS_R/dU) = S_R(U_tot) − ε₁/T,    (3.5)

S_R(U_tot − ε₂) ≈ S_R(U_tot) − ε₂/T.    (3.6)

With the aid of Eq. (3.3) the ratio between the two different probabilities can be written

P₂/P₁ = e^{−ε₂/k_BT} / e^{−ε₁/k_BT}.    (3.7)

The exponential functions appearing in Eq. (3.7) are called Boltzmann factors.

How can we calculate the probability of finding a system in a certain state? Equation (3.7) shows that the probability of finding a system in a certain state is proportional to that state's Boltzmann factor. Therefore, we get the probability of finding a certain state by dividing its Boltzmann factor by the sum of all the Boltzmann factors of states that the system can attain. The probability of finding the system in a state s is

P(s) = e^{−ε_s/k_BT} / Σ_{s'} e^{−ε_{s'}/k_BT}.    (3.8)

The sum in the denominator runs over all states s' that S can be found in. This sum is called the partition function (in Swedish: tillståndssumma). It is denoted Z and plays a very important role in statistical physics. We can thus rewrite Eq. (3.8) as

P(s) = e^{−ε_s/k_BT} / Z,    (3.9)

where

Z = Σ_s e^{−ε_s/k_BT}.    (3.10)

Equation (3.9) is undoubtedly the most important result presented in these lecture notes. The way the partition function is defined, the sum of the probabilities of all the states equals 1,

Σ_s P(s) = 1.    (3.11)

This is of course a necessary requirement for a set of probabilities. Physically, Eq. (3.9) roughly speaking means that the probability of finding a system thermally excited to a particular state with energy ε above the ground state depends on how large ε is compared with k_BT. If the energy difference is comparable with or smaller than k_BT, the probability of thermal excitation is large. In the opposite case, ε ≫ k_BT, the excitation probability becomes exponentially small.
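As an illustration of how the Boltzmann factors and the partition function combine, the sketch below computes the probabilities for a hypothetical three-level system (the level energies and the temperature are made-up values, not taken from the text):

```python
from math import exp

def boltzmann_probs(energies, kT):
    """P(s) = exp(-E_s/kT) / Z, with Z the sum of all Boltzmann factors."""
    factors = [exp(-E / kT) for E in energies]
    Z = sum(factors)
    return [f / Z for f in factors]

# hypothetical three-level system, energies in eV, at kT = 0.025 eV (room temp.)
levels = [0.0, 0.025, 0.2]
probs = boltzmann_probs(levels, 0.025)
print(probs)        # ground state is the most likely one
print(sum(probs))   # 1.0, as required for a set of probabilities
```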

3.2 Application to a harmonic oscillator


Harmonic oscillators are found everywhere in physics. You have already encountered several examples of this in classical mechanics (mass at the end of a spring, the simple pendulum, etc.). But systems in which kinetic and potential energy are alternately transformed into each other, while a system performs some kind of cyclic motion, are also very common at a microscopic scale. A diatomic molecule like CO (carbon monoxide) is perhaps the simplest example of this (see Fig. 3.2). The distance between the two atomic nuclei in the equilibrium state of such a molecule gives a minimum for the total energy of the nuclei and the electrons of the molecule. However, if the molecule is disturbed so that the nuclei start to move relative to each other, their kinetic energy is used to increase the total potential energy of the molecule by either compressing or stretching the bond between the atoms. Eventually all the kinetic energy of the nuclei is transformed into potential energy and the nuclei will start moving in the other direction. In this way the bond length oscillates, and the molecular vibration works as a harmonic oscillator.

 


Figure 3.2: The vibrations in a diatomic molecule can be modeled by a harmonic oscillator, where two masses oscillate back and forth while the bond length l is either shorter or longer than in equilibrium. In this way potential and kinetic energy are alternately transformed into each other. The model potential energy is V(x) = kx²/2 = mω²x²/2, where x = l − l₀ and l₀ is the equilibrium bond length. A quantum-mechanical treatment of this problem yields an infinite ladder of equidistant energy levels.

The quantum-mechanical energy levels of a harmonic oscillator turn out to form an equidistant ladder. The lowest level has the energy ℏω/2, where ω is the classical angular oscillation frequency of the oscillator. The first excited level has energy 3ℏω/2, the second excited level has energy 5ℏω/2, etc.; the energy difference between adjacent levels is ℏω. For a diatomic molecule, ℏω is on the order of 0.1 eV. Exciting the oscillator from one energy level to the next corresponds to increasing the amplitude of the oscillation in the classical case. Let us calculate the probability of finding a harmonic oscillator in a certain state. First we must find the partition function; without Z we cannot find the probabilities. Evidently, since there is an infinite number of states at hand, the partition function is a sum of infinitely many terms. It is, however, convergent and, measuring the energies from the ground state so that ε_n = nℏω, it can be written

Z = Σ_{n=0}^∞ e^{−nℏω/k_BT},    (3.12)

which is the sum of a geometric series, see Appendix A. The first term equals 1, and each consecutive term is found by multiplying the previous one by e^{−ℏω/k_BT}. This yields

Z = 1/(1 − e^{−ℏω/k_BT}),    (3.13)

and the probability of finding the oscillator in state n with energy ε_n is

P_n = e^{−nℏω/k_BT} (1 − e^{−ℏω/k_BT}).    (3.14)

The left panel in Fig. 3.3 shows how P_n varies with temperature. For a low temperature, P_n is very small for all but the lowest few energies. With increased temperature it is more likely to find the oscillator in a higher excited state. Next let us calculate the energy expectation value ⟨ε⟩ for an oscillator,

⟨ε⟩ = Σ_{n=0}^∞ ε_n P_n.    (3.15)

Figure 3.3: Calculated results for the probability P_n of finding a harmonic oscillator in state n at different temperatures (k_BT = 0.5 ℏω, 2.0 ℏω, and 5.0 ℏω). The right panel shows how the energy expectation value (neglecting the zero-point energy) and the heat capacity vary with temperature, given in units of ℏω/k_B. The straight line in that diagram shows the classical limit of the energy expectation value.

In order to make the notation less heavy, we introduce the symbol y,

y = e^{−ℏω/k_BT}.    (3.16)

Combining Eqs. (3.15), (3.14) and (3.16), we get

⟨ε⟩ = (1 − y) Σ_{n=0}^∞ nℏω yⁿ.    (3.17)

The sum appearing here can be evaluated by the methods discussed in Appendix A, and the result for the energy expectation value for a harmonic oscillator is

⟨ε⟩ = ℏω y/(1 − y) = ℏω/(e^{ℏω/k_BT} − 1).    (3.18)

This can be rewritten as

⟨ε⟩ = ⟨n⟩ ℏω,    (3.19)

where ⟨n⟩ is the expectation value for the quantum number n of the harmonic oscillator. Obviously

⟨n⟩ = 1/(e^{ℏω/k_BT} − 1),    (3.20)

and this very important distribution function is called the Planck distribution function. The diagram to the right in Fig. 3.3 shows how the energy expectation value varies with temperature. For large temperatures Eq. (3.20) yields

⟨n⟩ ≈ k_BT/ℏω,    (3.21)

so that the energy expectation value (not counting the zero-point energy) is ⟨ε⟩ ≈ k_BT. This is the first example of the equipartition principle that we encounter. This principle states that for every term in the expression for the classical energy that is quadratic in either a momentum or a coordinate, there is a contribution of k_BT/2 to the expectation value of the total energy in the classical limit corresponding to high excitation. The total energy for a harmonic oscillator can be written E = p²/(2m) + mω²x²/2 and thus has two such quadratic terms. The prediction of the equipartition principle only holds true at temperatures that are high compared with ℏω/k_B. At really low temperatures the fact that the energy of the oscillator can only be increased in steps of ℏω renders the classical prediction of the equipartition principle invalid; then ⟨ε⟩ ≪ k_BT. We have also plotted in Fig. 3.3 the heat capacity per oscillator, defined as

C = d⟨ε⟩/dT.    (3.22)
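The formulas of this section are easy to check numerically. The sketch below (our own code, with ℏω as the unit of energy) compares the closed form of the partition function with a truncated version of the sum, and verifies the high- and low-temperature behaviour of the energy expectation value and heat capacity:

```python
from math import exp

HW = 1.0  # the energy quantum hbar*omega; kT is measured in the same unit

def Z_closed(kT):
    return 1.0 / (1.0 - exp(-HW / kT))                      # geometric-series result

def Z_sum(kT, nmax=2000):
    return sum(exp(-n * HW / kT) for n in range(nmax + 1))  # truncated sum over states

def P(n, kT):
    return exp(-n * HW / kT) / Z_closed(kT)                 # occupation probability

def avg_energy(kT):
    return HW / (exp(HW / kT) - 1.0)                        # Planck result, no zero-point term

def heat_capacity(kT, dT=1e-5):
    # numerical derivative d<e>/dT, in units of kB since kT is the variable
    return (avg_energy(kT + dT) - avg_energy(kT - dT)) / (2 * dT)

for kT in (0.5, 2.0, 5.0):                 # the temperatures used in Fig. 3.3
    print(kT, Z_closed(kT), Z_sum(kT))     # closed form and truncated sum agree
    print(kT, P(0, kT))                    # ground state dominates only at low kT

print(avg_energy(50.0), heat_capacity(50.0))  # classical limit: <e> ~ kT, C ~ 1
print(avg_energy(0.1), heat_capacity(0.1))    # excitations frozen out: both ~ 0
```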

3.3 Two-dimensional harmonic oscillator


We now consider a two-dimensional harmonic oscillator. This means that oscillations can take place in two different directions, for example and as illustrated in Fig. 3.4. We assume that the oscillation frequency is the same in both directions. In the realm of classical mechanics the motion in the and directions are independent of each other. This is also true in quantum mechanics. The energy of any quantum state can be written as a sum of an -motion and a -motion energy. The only difference compared with the one dimensional case is that now two quantum numbers and are needed. The energy can be written (3.23)

Figure 3.4: Schematic illustration of a two-dimensional harmonic oscillator and its energy-level diagram. The figure to the right shows the probability \(P(N)\) of finding the oscillator in a certain energy level with energy \(N\hbar\omega\) at temperature \(T = 3\hbar\omega/k_B\). Since higher energy levels have a higher degeneracy, the lowest energy level does not yield the highest probability in this case.

Suppose we want to calculate the probability of finding the two-dimensional harmonic oscillator in a (any) state with energy \(N\hbar\omega\). We can follow the method used in the previous section; however, we must be clear over the difference between quantum states and

energy levels. In this particular case we have, for example, an energy level with the energy \(\hbar\omega\) corresponding to \(N = 1\). But the harmonic oscillator can have this energy either because \(n_x = 1\) and \(n_y = 0\), or because \(n_x = 0\) and \(n_y = 1\). These two different possibilities correspond to two different quantum states with the same energy. One says that the energy level is degenerate, and since two states have this energy the degeneracy is 2. Usually the degeneracy is denoted \(g\) (just as the multiplicity). Here, for a general value \(N\), the degeneracy \(g(N) = N + 1\). [For example, \(N = 2\) can be reached in three different ways; \((n_x, n_y)\) can be (2,0), (1,1) or (0,2).]

Let us now calculate \(P(N)\). All states with a certain \(N\), regardless of the \(n_x\) and \(n_y\) values, have the same Boltzmann factor \(e^{-N\hbar\omega/k_BT}\). The Boltzmann factor divided by the partition function gives the probability to be in a particular state. Then the probability of finding the oscillator in energy level \(N\) with energy \(N\hbar\omega\) is
\[ P(N) = g(N)\,\frac{e^{-N\hbar\omega/k_BT}}{Z} = (N+1)\,\frac{e^{-N\hbar\omega/k_BT}}{Z}. \]  (3.24)
The degeneracy appears in the formula because we must sum the probabilities of all the states in the energy level. It remains to calculate the partition function \(Z\). The sum-over-states of Boltzmann factors can be written
\[ Z = \sum_{n_x=0}^{\infty}\sum_{n_y=0}^{\infty} e^{-(n_x+n_y)\hbar\omega/k_BT}. \]  (3.25)
This sum can be evaluated in the same way as when calculating the expectation value for the energy for the one-dimensional harmonic oscillator (see Appendix A). The result is
\[ Z = \left(\frac{1}{1 - e^{-\hbar\omega/k_BT}}\right)^{2}. \]  (3.26)
Note that this is the square of the partition function we got in the one-dimensional case. This is a general phenomenon inasmuch as addition of energies implies multiplication of partition functions. Equation (3.26) together with Eq. (3.24) yields the final result
\[ P(N) = (N+1)\, e^{-N\hbar\omega/k_BT}\left(1 - e^{-\hbar\omega/k_BT}\right)^{2}. \]  (3.27)
Results for the case when \(k_BT = 3\hbar\omega\) are plotted in Fig. 3.4. Notice that here, unlike the one-dimensional case, the lowest energy level need not have the highest probability.
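Equation (3.27) can be checked numerically; the sketch below (ours, not from the notes) uses the same ratio \(k_BT = 3\hbar\omega\) as in Fig. 3.4:

```python
import math

def prob_level(N, x):
    """P(N) of Eq. (3.27) for the 2D oscillator: (N+1) e^{-N x} (1-e^{-x})^2,
    where x = hbar*omega/(k_B T) and N+1 is the degeneracy g(N)."""
    return (N + 1) * math.exp(-N * x) * (1.0 - math.exp(-x)) ** 2

x = 1.0 / 3.0                      # k_B T = 3 hbar*omega, as in Fig. 3.4
probs = [prob_level(N, x) for N in range(200)]

print(sum(probs))                  # the probabilities sum to 1
most_probable = max(range(200), key=lambda N: probs[N])
print(most_probable)               # the most probable level is not N = 0
```

The geometric-series identity \(\sum_N (N+1)q^N = (1-q)^{-2}\) guarantees the normalization; the growing degeneracy pushes the most probable level above the ground level, as the figure shows.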

3.4 Solved examples


EXAMPLE 3.1: A quantum-mechanical system has three nondegenerate energy levels with energies -50 meV, 0 meV, and 50 meV, as illustrated to the right. This system is in contact with a reservoir held at room temperature. (a) What is the probability of finding the system in its ground state? (b) Calculate the expectation value of the system's energy.

SOLUTION: (a) The probability of finding the system in the ground state, i.e. with energy \(E_1 = -50\) meV, is found as the ratio between the Boltzmann factor in question, \(e^{-E_1/k_BT}\), and the partition function \(Z\) for the system. The partition function is found as the sum of the three Boltzmann factors, one for each of the states,
\[ Z = e^{-E_1/k_BT} + e^{-E_2/k_BT} + e^{-E_3/k_BT}, \]  (3.28)
where \(E_1 = -50\) meV, \(E_2 = 0\), and \(E_3 = 50\) meV. When evaluating the Boltzmann factors we must make sure to use consistent units; the best is certainly to convert everything to SI units, thus multiplying the energies given in units of meV by \(1.602\times10^{-22}\) J/meV. At room temperature \(k_BT \approx 25\) meV, so that
\[ e^{-E_1/k_BT} \approx e^{2.0} \approx 7.4, \qquad e^{-E_2/k_BT} = 1, \qquad e^{-E_3/k_BT} \approx e^{-2.0} \approx 0.14, \]  (3.29)
etc. The partition function then becomes \(Z \approx 8.5\), (3.30) and the probability of finding the system in the ground state
\[ P_1 = \frac{e^{-E_1/k_BT}}{Z} \approx \frac{7.4}{8.5} \approx 0.87. \]  (3.31)
We see that the difference in energy between the different levels is rather large compared with the thermal energy \(k_BT\), and this means that the quantum system usually is not thermally excited. (b) The energy expectation value is given by the expression
\[ \langle E\rangle = \frac{1}{Z}\sum_{i=1}^{3} E_i\, e^{-E_i/k_BT} \approx \frac{(-50)\cdot 7.4 + 0\cdot 1 + 50\cdot 0.14}{8.5}\ \mathrm{meV} \approx -43\ \mathrm{meV}. \]  (3.32)
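The numbers in this example are easy to reproduce; the sketch below is ours (not from the notes), and it takes room temperature to be 293 K, which is an assumption on our part:

```python
import math

kB = 1.380649e-23         # Boltzmann constant, J/K
meV = 1.602176634e-22     # J per meV
T = 293.0                 # room temperature in K (assumed value)

energies = [-50.0, 0.0, 50.0]     # the three level energies in meV
boltz = [math.exp(-E * meV / (kB * T)) for E in energies]

Z = sum(boltz)                                             # partition function
P_ground = boltz[0] / Z                                    # part (a)
E_mean = sum(E * b for E, b in zip(energies, boltz)) / Z   # part (b), in meV

print(P_ground)   # about 0.86
print(E_mean)     # about -42 meV
```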


EXAMPLE 3.2: Consider a two-dimensional harmonic oscillator, with a given oscillator quantum \(\hbar\omega\), that is in contact with a thermal reservoir at temperature \(T\). Calculate the energy expectation value.

SOLUTION: We will look at a couple of ways of solving this problem. The first method makes use of the formulas we found above when investigating the two-dimensional harmonic oscillator. The energy levels are given by
\[ E_N = N\hbar\omega, \qquad N = n_x + n_y, \]  (3.33)
and the probability of finding the system in level \(N\) is
\[ P(N) = (N+1)\, e^{-N\hbar\omega/k_BT}\left(1 - e^{-\hbar\omega/k_BT}\right)^{2}. \]  (3.34)
With the parameter values given for the temperature and \(\hbar\omega\), we get numerically the probabilities \(P(0), P(1), P(2), \dots\), (3.35) and the remaining probabilities are very small. In this way we get an accurate approximation to the energy expectation value as
\[ \langle E\rangle \approx \sum_{N=0}^{N_{\max}} N\hbar\omega\, P(N). \]  (3.36)


By adding more terms a more accurate value can be found. There is, however, an even simpler, and for general cases more useful, way of dealing with a system like this one that is built up by a number of subsystems that are mutually independent of each other. In this particular case we have two subsystems, the oscillator in the \(x\) direction and the oscillator in the \(y\) direction, that are independent of each other. In a general case systems are independent of each other if they do not interact, meaning that the energy levels of one subsystem are unaffected by the state of another subsystem. When these conditions are fulfilled the energy of the system, in any state, is the sum of the energies of the subsystems. Moreover, the same holds true for the energy expectation value: it is the sum of the expectation values found when treating the different subsystems separately. Finally, the partition function for the entire system is the product of the partition functions for the subsystems. In the present case, where the subsystems are identical, this simplifies things quite a bit since it means that we only need to calculate the partition function and the expectation value for one single subsystem. If we apply these ideas here we find that by using Eq. (3.18) for the energy expectation value for one oscillator we get
\[ \langle\varepsilon_1\rangle = \frac{\hbar\omega}{e^{\hbar\omega/k_BT} - 1}. \]  (3.37)
This means that the expectation value for the system as a whole is
\[ \langle E\rangle = 2\,\langle\varepsilon_1\rangle = \frac{2\hbar\omega}{e^{\hbar\omega/k_BT} - 1}, \]  (3.38)
thus very close to the previously found approximate value.

EXAMPLE 3.3: The quantum system shown in the figure below consists of in total six separate, but identical, subsystems that are independent of each other. (In the figure each subsystem is inscribed in a frame of dashed lines.) The total energy of the system is thus the sum of the subsystem energies. Each subsystem has four nondegenerate quantum states with the eigenenergies \(0\), \(\varepsilon\), \(2\varepsilon\), and \(3\varepsilon\), respectively. The entire system, and thereby each of the subsystems, is in thermal contact with a reservoir with a temperature \(T\). We assume that the ratio \(\varepsilon/k_BT\) is given. (a) Calculate the expectation value for the energy for the system as a whole. (b) Calculate the probability for the entire system to have the energy \(2\varepsilon\). (c) Calculate the probability for the whole system to have an energy of at least \(3\varepsilon\).

SOLUTION: In this example, just as in the case of the two-dimensional harmonic oscillator, we are dealing with a system built up by a number of identical and independent subsystems. Thus to calculate in (a) the expectation value for the energy we apply the Boltzmann distribution to one subsystem and then get the final result by multiplying by the number of subsystems, 6.

" !

"

&

 "  

#"

#"

#"

(3.37)

(3.38)

&

CHAPTER 3. THE BOLTZMANN DISTRIBUTION


The partition function for a subsystem can be calculated as
 ( ( ( ( 5 3 2 @ 5 3   @ 5 3 2 @ 5 42   1  1  1 3  1 

18

(Keep in mind that even though the distance between the energy level is the same everywhere, a subsystem is not equvalent to a harmonic oscillator; the ladder of states ends after here, whereas it goes on forever for a harmonic oscillator.) Numerically the partition function becomes (3.40) At this point we can also calculate the partition function , for the entire system (which we will need later). Since the subsystems are independent of each other the total partition function is the product of the subsystem partition functions,
 


The energy expectation value for one subsystem is now given as
\[ \langle\varepsilon_1\rangle = \frac{1}{Z_1}\left(0 + \varepsilon\, e^{-\varepsilon/k_BT} + 2\varepsilon\, e^{-2\varepsilon/k_BT} + 3\varepsilon\, e^{-3\varepsilon/k_BT}\right), \]  (3.42)
and the expectation value for the system as a whole becomes
\[ \langle E\rangle = 6\,\langle\varepsilon_1\rangle. \]  (3.43)
The rest of the problem [parts (b) and (c)] consists in calculating the probability of finding the system as a whole with a particular energy \(E\). Such a probability is in general given by the expression
\[ P(E) = g(E)\,\frac{e^{-E/k_BT}}{Z}, \]  (3.44)
where \(g(E)\) is the degeneracy for the energy level in question. We have already calculated the partition function, and the Boltzmann factors are also easily calculated, so the remaining difficulty lies in determining the degeneracies \(g(E)\). For a complete solution we will need the degeneracies for the four lowest energy levels, \(g(0)\), \(g(\varepsilon)\), \(g(2\varepsilon)\), and \(g(3\varepsilon)\). It is obvious that we will need \(g(2\varepsilon)\) in order to solve part (b). In part (c) of the example we will calculate the probability for the total energy to be \(3\varepsilon\) or more as
\[ P(E \ge 3\varepsilon) = 1 - P(0) - P(\varepsilon) - P(2\varepsilon). \]  (3.45)

For the whole system to be in the ground state each and every one of the subsystems must be so too; the ground state is consequently nondegenerate,
\[ g(0) = 1. \]  (3.46)
The energy \(\varepsilon\) is reached if one of the subsystems is excited to the energy level with energy \(\varepsilon\) while the remaining five subsystems are in their ground state. Obviously this can be done in six different ways and
\[ g(\varepsilon) = 6. \]  (3.47)
There are a total of 21 quantum states that give a total energy of \(2\varepsilon\). In 6 of the cases one of the subsystems is excited to the level with energy \(2\varepsilon\) while the others are in their respective ground states. Furthermore there are 15 states in which 2 of the subsystems are excited to the level with energy \(\varepsilon\) while
  

"  # "

&

" !

&

5 42 3  1

"

 

5 3    1

 "

5 3    1

"

&

 

# "

 

" 

"

 %

# "

# "

(3.39)

(3.41)

(3.42)

(3.43)

CHAPTER 3. THE BOLTZMANN DISTRIBUTION

19

the remaining four subsystems are left in the ground state; there are 15 ways of picking two elements from a group of 6, as discussed in the previous chapter. Thus,
\[ g(2\varepsilon) = 6 + 15 = 21. \]  (3.48)
With a total energy of \(3\varepsilon\) to distribute between the different subsystems there are three distinct cases to consider: the energy can be given to just one of the subsystems (3), to two of them (2+1), or to three different subsystems (1+1+1). The first possibility gives us 6 different quantum states (pick one element among 6). The second possibility gives 30 different quantum states (first we have 6 different ways of picking the subsystem to give the energy \(2\varepsilon\), and after that there are 5 ways of selecting one of the remaining subsystems to give the energy \(\varepsilon\) to). Finally the energy distribution (1+1+1) gives us another 20 quantum states since there are \(\binom{6}{3} = 20\) ways of selecting 3 elements from a total of 6 elements. Thus we arrive at
\[ g(3\varepsilon) = 6 + 30 + 20 = 56. \]  (3.49)
Now the various probabilities are easy to calculate. In (b)
\[ P(2\varepsilon) = 21\,\frac{e^{-2\varepsilon/k_BT}}{Z_1^{\,6}}, \]  (3.50)
and in (c), from Eq. (3.45),
\[ P(E \ge 3\varepsilon) = 1 - \frac{1 + 6\, e^{-\varepsilon/k_BT} + 21\, e^{-2\varepsilon/k_BT}}{Z_1^{\,6}}. \]  (3.51)
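The degeneracy counting above can be verified by brute force; this sketch (ours, not part of the notes) enumerates all \(4^6 = 4096\) states of the six subsystems:

```python
from itertools import product
from collections import Counter

# Each subsystem has four states with energies 0, eps, 2*eps, 3*eps,
# represented here by the integers 0..3 (energies in units of eps).
levels = range(4)

# Count how many of the 4**6 = 4096 composite states give each total
# energy; the count is precisely the degeneracy g(E).
g = Counter(sum(state) for state in product(levels, repeat=6))

print(g[0], g[1], g[2], g[3])   # 1, 6, 21, 56 as found above
```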
3.5 Problems for Chapter 3


3.1 A two-state system
Consider the quantum system illustrated schematically to the right. It has two non-degenerate energy levels with energies \(0\) and \(\varepsilon\), respectively. The system is in thermal contact with the environment held at temperature \(T\). Determine the expectation value for the energy of the system for the three given values of the temperature: (a) …, (b) …, (c) ….

3.2 A three-level system
The figure shows a schematic sketch of a three-level system, where the levels have the energies \(\varepsilon_1\), \(\varepsilon_2\), and \(\varepsilon_3\), with \(\varepsilon_1 < \varepsilon_2 < \varepsilon_3\). (a) Derive an expression for the probability of finding the system in the state with energy \(\varepsilon_2\), given a temperature \(T\) of the environment. (b) Sketch a curve that roughly shows how this probability varies with temperature. Try to explain in words why it has this temperature dependence.

3.3 The Boltzmann distribution and the harmonic oscillator
As should be well known by now, the energy levels of a harmonic oscillator are given by \(\varepsilon_n = n\hbar\omega\) (counting the energy from the ground state). One wants to determine the frequency of a particular oscillator, and to this end an experiment has been carried out at a known temperature \(T\) that has given the probabilities \(P_n\) of finding the oscillator in each one of the four lowest states. The results of the measurements are given in the diagram to the right, which shows the measured values of \(\ln(P_n)\) for \(n = 0, 1, 2, 3\). Determine an approximate value for \(\hbar\omega\) based on the data.

3.4 \(\langle E\rangle\) for a two-dimensional harmonic oscillator
Consider a two-dimensional harmonic oscillator with a given oscillator quantum \(\hbar\omega\). Determine the energy expectation value at room temperature.

3.5 Vibrational energy of a CO\(_2\) molecule
A CO\(_2\) molecule has four different vibrational modes. One of these (a) corresponds to a vibrational motion in which the entire molecule is stretched; the motion of another of the modes (b) means that one of the bonds is stretched while the other is compressed. Moreover, the molecule has two vibrational modes, (c) and (d), in which the molecule is bent. These two modes have the same vibrational frequency. [In (d) the atoms move perpendicularly to the plane of the paper.] The total vibrational energy of the molecule can be written as a sum of energies for four different harmonic oscillators (we drop the zero-point energy here),
\[ E = n_a\hbar\omega_a + n_b\hbar\omega_b + (n_c + n_d)\,\hbar\omega_c, \]
where \(n_a, n_b, n_c, n_d = 0, 1, 2, \dots\). Through experiments one has determined the following values for the vibrational quanta: \(\hbar\omega_a\) = 166.3 meV, \(\hbar\omega_b\) = 291.4 meV, and \(\hbar\omega_c = \hbar\omega_d\) = 82.8 meV. Calculate the expectation value \(\langle E\rangle\) at temperature 500 K.

3.6 Average rotational energy of a diatomic molecule
The rotational energy of a diatomic molecule is in a quantum-mechanical description given by the expression
\[ E_{\rm rot} = \frac{\hbar^2\, l(l+1)}{2I}, \]
where \(I\) is the moment of inertia of the molecule. The quantum number \(l\), which determines the magnitude of the rotational angular momentum, can take the values \(l = 0, 1, 2, \dots\). For each value of \(l\) there are \(2l+1\) distinct possible quantum states corresponding to different orientations of the angular momentum (the quantum number \(m\) can take the values \(-l, \dots, l\)). Assume that the molecule is in a gas in thermal equilibrium at temperature \(T\). Calculate the average rotational energy of the molecule by proceeding as follows: (a) Calculate the partition function \(Z\), keeping in mind that it is a sum over all states, not just all energy levels. To do this, assume that the temperature is large enough that \(k_BT \gg \hbar^2/2I\), so that the sum defining \(Z\) can be well approximated by an integral using \(l\) as an integration variable. (b) Knowing \(Z\), calculate the expectation value for the rotational energy. (c) For what temperatures is the condition in (a) satisfied for a CO molecule?

3.7 Magnetic susceptibility
Consider the spin system discussed in Chapter 2 containing \(N\) spins. The system is in thermal contact with a reservoir at temperature \(T\). Calculate the magnetization \(M\) as a function of the applied magnetic field \(B\) by employing the Boltzmann distribution. Calculate also the magnetic susceptibility \(\chi = \partial M/\partial B\).

3.8 Probability of finding an adsorbed atom at different types of sites
Consider adsorption of gas molecules on a surface. Adsorbed molecules cannot move around freely on the surface, but are instead stuck in certain places and jump only once in a while between these special places. The figure shows a certain face of a metal surface; the open circles represent metal atoms. Suppose the adsorbed molecules can either sit right on top of a metal atom (top site) or in between two metal atoms (bridge site). All other positions on the surface are inaccessible for the atoms except for very short periods of time during jumps. Assume that there is an energy difference between the two types of sites; the energy of a molecule at a top site is 0, while that of a molecule at a bridge site is \(\varepsilon\). Calculate the probability of finding a molecule at a top site and bridge site, respectively, as a function of temperature. We assume that there are relatively few molecules on the surface so that they do not interact or get in the way of each other. Hint: How many different nearest neighbors does each metal atom have?

3.9 Zipper problem
A zipper has \(N\) links; each link has a state in which it is closed with energy 0 and a state in
which it is open with energy \(\varepsilon\). We require, however, that the zipper can only unzip from the left end, and that a link can only open if all the links to the left of it are already open. (a) Show that the partition function can be summed in the form
\[ Z = \frac{1 - e^{-(N+1)\varepsilon/k_BT}}{1 - e^{-\varepsilon/k_BT}}. \]
(b) In the limits \(\varepsilon \gg k_BT\) and \(\varepsilon \ll k_BT\), find the average number of open links. This is a very simplified model of the unwinding of two-stranded DNA molecules.

3.10 Degeneracy of a three-dimensional harmonic oscillator
In this chapter we discussed the two-dimensional harmonic oscillator. The energy levels are given by \(E = N\hbar\omega\) with \(N = n_x + n_y\), and the degeneracy of an energy level is
\[ g_2(N) = N + 1, \]
i.e. there is only one state with the ground state energy (\(N = 0\)), but there are for example 4 states with energy \(3\hbar\omega\) (\(N = 3\)). Now consider the three-dimensional harmonic oscillator for which the energy levels are given by \(E = (n_x + n_y + n_z)\hbar\omega\), where \(n_x\), \(n_y\), and \(n_z\) are all natural numbers. Show, by making use of the result for the two-dimensional harmonic oscillator, that the degeneracy in this case is given by
\[ g_3(N) = \frac{(N+1)(N+2)}{2}. \]

3.11 Degeneracy for a collection of identical harmonic oscillators
Consider now instead 5 separate, but identical, harmonic oscillators. To describe the quantum state of all the 5 oscillators we need five quantum numbers (non-negative integers), \(n_1\), \(n_2\), \(n_3\), \(n_4\), and \(n_5\). The total energy for this system can be expressed as
\[ E = (n_1 + n_2 + n_3 + n_4 + n_5)\,\hbar\omega = N\hbar\omega. \]
Calculate the degeneracy for the energy level corresponding to a given \(N\), with energy \(N\hbar\omega\).

Chapter 4 Thermal radiation


4.1 Photon modes
In this chapter we apply the Planck distribution of Eq. (3.20) to calculate the energy content of the electromagnetic field at a given temperature. We will also be able to establish a relation between the temperature of an object, like a piece of red-hot iron or the sun, and its color. The higher the temperature, the more radiation is sent out at higher photon frequencies. Therefore when a piece of metal is heated, it first starts to glow with a reddish color, but as the temperature is increased it becomes nearly white (a mixture of all colors). The starting point for the derivation is the Planck distribution valid for a harmonic oscillator. The reason for this is that the electromagnetic field can be viewed as a collection of harmonic oscillators of varying frequency. A photon transmitting blue light is an excitation of a harmonic oscillator with a relatively high frequency. A red photon is an excitation of another oscillator with lower frequency. Any kind of electromagnetic radiation (radio waves, microwaves, visible light, x-rays, and gamma rays) can be described as excitations of harmonic oscillators.
Figure 4.1: Illustration of the first few standing wave modes (\(n = 1, 2, 3\)) confined to the interval \(0 \le x \le L\).

Since the expectation value of the energy for a single harmonic oscillator was already derived in Chapter 3, the task at hand here is to label and count all the harmonic oscillators that together constitute the electromagnetic field. The total energy of the field is then found by


adding the contributions from the different oscillators. This can be done because the different oscillators or modes of the field are independent of each other.

To count the number of modes in the electromagnetic field one usually looks at a certain cubic volume \(V = L^3\) (\(L\) is the cube side). The allowed modes then correspond to standing waves that fit into the cubic volume, with a vanishing amplitude at the walls of the volume. [Note that the boundary conditions for the electromagnetic field in reality are a bit more complicated, but the present way of reasoning is enough for counting the photon modes, which is our immediate objective.] If we first restrict the attention to an example in one dimension, standing waves on the interval \(0 \le x \le L\) take the form
\[ E_n(x) = E_0 \sin\!\left(\frac{n\pi x}{L}\right), \qquad n = 1, 2, 3, \dots \]  (4.1)
Waves of this form are illustrated in Fig. 4.1. The wavelength for each of these modes is
\[ \lambda_n = \frac{2L}{n}, \]  (4.2)
and the corresponding wave number is
\[ k_n = \frac{2\pi}{\lambda_n} = \frac{n\pi}{L}. \]  (4.3)
Thus, the photon frequency is
\[ \nu = \frac{c}{\lambda_n}, \]  (4.4)
and the corresponding angular frequency
\[ \omega = 2\pi\nu = c\,k_n. \]  (4.5)
Let us generalize this to the real, three-dimensional case. Normally a wave will propagate in a general direction that is not parallel to any of the coordinate axes. The wave number then depends on three different integers \(n_x\), \(n_y\), and \(n_z\), one for each dimension,
\[ k = \frac{\pi}{L}\sqrt{n_x^2 + n_y^2 + n_z^2}. \]
In other words, \(k\) is the magnitude of a wave vector with Cartesian components \(k_x = n_x\pi/L\), \(k_y = n_y\pi/L\), and \(k_z = n_z\pi/L\). To specify a certain photon mode one needs a triplet of integers, for example \((n_x, n_y, n_z)\). In addition one must also specify a polarization direction. The electric field of a photon mode is a vector. This vector must be orthogonal to the propagation direction since electromagnetic waves are completely transverse. A wave propagating in the \(z\) direction can therefore have two different, independent polarization directions, \(x\) and \(y\). Thus in general for every triplet \((n_x, n_y, n_z)\) there are two different photon modes, and they both have the photon energy
\[ \varepsilon = \hbar\omega = \hbar c k = \frac{\hbar c\pi}{L}\sqrt{n_x^2 + n_y^2 + n_z^2}. \]  (4.6)

4.2 Stefan-Boltzmann law

Knowing the modes of the electromagnetic field and their photon energies we can proceed with the next step, the calculation of the total energy of the field in thermal equilibrium with the surrounding matter. We recall from Eq. (3.19) that the average energy in a harmonic oscillator mode, not counting the zero-point energy \(\hbar\omega/2\), is
\[ \langle\varepsilon\rangle = \frac{\hbar\omega}{e^{\hbar\omega/k_BT} - 1}. \]  (4.7)

Adding the energy of all the modes we get the internal energy of the electromagnetic field
\[ U = 2\sum_{n_x}\sum_{n_y}\sum_{n_z} \frac{\hbar\omega_n}{e^{\hbar\omega_n/k_BT} - 1}, \]  (4.8)
where according to Eq. (4.6) \(\omega_n = c\pi n/L\) with \(n = \sqrt{n_x^2 + n_y^2 + n_z^2}\). The first factor of 2 in Eq. (4.8) comes from counting the two polarization directions. Rather than evaluating the sum in Eq. (4.8) we will transform it to an integral, treating \(n_x\), \(n_y\), and \(n_z\) as continuous variables. This is perfectly reasonable since the volume is macroscopic, and \(L\) is much larger than typical photon wavelengths, which means that the \(n\)'s are very large integers. We make the replacement
\[ \sum_{n_x}\sum_{n_y}\sum_{n_z} \longrightarrow \int_0^\infty\! dn_x \int_0^\infty\! dn_y \int_0^\infty\! dn_z, \]  (4.9)
which yields
\[ U = 2\int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty \frac{\hbar c\pi n/L}{e^{\hbar c\pi n/(L k_BT)} - 1}\, dn_x\, dn_y\, dn_z. \]  (4.10)
To evaluate the integral, we make a change of variables to spherical coordinates, using \(n\) and the angles \(\theta\) and \(\varphi\) instead of \(n_x\), \(n_y\), and \(n_z\). Then the integrand only depends on \(n\),
\[ U = 2\,\frac{4\pi}{8}\int_0^\infty n^2\, \frac{\hbar c\pi n/L}{e^{\hbar c\pi n/(L k_BT)} - 1}\, dn, \]  (4.11)
where the factor \(4\pi/8\) appears because the integration only runs over the octant where all three \(n_i\) are positive. With the substitution \(x = \hbar c\pi n/(L k_BT)\), this yields
\[ U = \frac{(k_BT)^4 L^3}{\pi^2 (\hbar c)^3}\int_0^\infty \frac{x^3}{e^x - 1}\, dx. \]  (4.12)
The last integral has the numerical value \(\pi^4/15\), so that the energy of the electromagnetic field per unit volume is
\[ \frac{U}{V} = \frac{\pi^2 (k_BT)^4}{15\,(\hbar c)^3}. \]  (4.13)
Equation (4.13) is known as the Stefan-Boltzmann law. It says that the energy content of the electromagnetic field varies as the fourth power of the temperature.

If the photon field we are studying is confined to some volume (or cavity) and is in thermal equilibrium with the walls of the cavity, the radiation emerging from a small opening in the cavity wall is called black-body radiation. The opening into the cavity works as a perfectly black body. Any radiation that hits the opening from the outside will eventually be absorbed by the walls of the cavity, and any radiation coming out through the hole is in thermal equilibrium with the walls, see Fig. 4.2. The energy flux out from the cavity is proportional to the energy concentration \(U/V\) in Eq. (4.13), the speed of light \(c\), and a geometric factor equal to \(1/4\). Thus, the energy flux
\[ J_U = \frac{c}{4}\,\frac{U}{V} = \sigma T^4, \]  (4.14)
where
\[ \sigma = \frac{\pi^2 k_B^4}{60\,\hbar^3 c^2} = 5.67\times10^{-8}\ \mathrm{W/(m^2\,K^4)} \]
is called the Stefan-Boltzmann constant. Equation (4.14) is valid when calculating the energy flux from any surface or body that absorbs all radiation that impinges on it (a so-called black body).
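The numerical value of the Stefan-Boltzmann constant follows directly from the fundamental constants; the following few lines are an illustrative check (ours, not part of the notes):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
kB = 1.380649e-23        # Boltzmann constant, J/K

# Stefan-Boltzmann constant, sigma = pi^2 k_B^4 / (60 hbar^3 c^2).
sigma = math.pi ** 2 * kB ** 4 / (60 * hbar ** 3 * c ** 2)
print(sigma)                    # about 5.67e-8 W/(m^2 K^4)

# Black-body flux at the solar surface temperature of 5800 K.
print(sigma * 5800.0 ** 4)      # W/m^2
```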

Figure 4.2: The radiation emitted by a small opening of a cavity with a very irregular shape like this one has the same intensity and spectral distribution as that coming from a perfectly black body. Any radiation entering the cavity will be absorbed by the walls of the cavity before it has a chance of escaping. Since the cavity is a perfect absorber it must also be a perfect emitter; \(J_U\) is given by Eq. (4.14).

In practice all surfaces do reflect some of the radiation incident upon them. For example, a surface that only absorbs 70 % of the radiation incident upon it (the absorptivity \(a = 0.7\)) also has an emission rate that is reduced to 70 % of that given by Eq. (4.14) (the emissivity \(e = 0.7\)). If this was not so, a body that is initially in thermal equilibrium with the electromagnetic field would after a while be out of equilibrium with the field. This would of course be in conflict with the second law of thermodynamics.

4.3 The Planck radiation law


In addition to knowing the total energy of the electromagnetic field it is of considerable interest to know how this energy is distributed between different photon frequencies (or, for visible light, different colors). If we return to Eq. (4.11), and instead use the photon energy \(\varepsilon = \hbar c\pi n/L\) as an integration variable, we get
\[ U = V\int_0^\infty \frac{\varepsilon^2}{\pi^2(\hbar c)^3}\,\frac{\varepsilon}{e^{\varepsilon/k_BT} - 1}\, d\varepsilon. \]  (4.15)
Using the Planck distribution function from Eq. (3.20),
\[ \langle n(\varepsilon)\rangle = \frac{1}{e^{\varepsilon/k_BT} - 1}, \]  (4.16)
the total energy can be written as
\[ U = V\int_0^\infty D(\varepsilon)\,\varepsilon\,\langle n(\varepsilon)\rangle\, d\varepsilon. \]  (4.17)
The integrand in this equation has an appealingly simple and clear interpretation. The first factor is the density of photon modes per unit energy (and unit volume),
\[ D(\varepsilon) = \frac{\varepsilon^2}{\pi^2 (\hbar c)^3}, \]  (4.18)
the second factor is the energy of one photon, while \(\langle n(\varepsilon)\rangle\) is the number of photons we can expect to find in a mode with the particular energy \(\varepsilon\). The product of these three factors
Figure 4.3: Left panel: plot of the function \(x^3/(e^x - 1)\), with \(x = \varepsilon/k_BT\), that governs the energy dependence of the energy contents of the electromagnetic field, together with the Rayleigh-Jeans form \(x^2\). As is seen in this figure the energy density reaches a maximum at \(x \approx 2.82\). The right panel shows \(u(\varepsilon)\) at a few different temperatures (2000 K to 6000 K).

multiplied by a (small) energy interval \(d\varepsilon\) tells how much energy is carried by photons with an energy inside the interval \((\varepsilon, \varepsilon + d\varepsilon)\); the integral gives the total energy. Thus, the energy content of the electromagnetic field per unit volume and unit photon energy is
\[ u(\varepsilon) = \frac{1}{\pi^2 (\hbar c)^3}\,\frac{\varepsilon^3}{e^{\varepsilon/k_BT} - 1}. \]  (4.19)
This function is displayed to the left in Fig. 4.3. We see that the field energy per unit photon energy reaches a maximum when \(\varepsilon \approx 2.82\, k_BT\). For room temperature, \(T = 300\) K, this corresponds to a photon energy of only 0.073 eV, but for a temperature of 6000 K (approximately the surface temperature of the Sun) the maximum appears at 1.46 eV, which is inside the visible range. The right panel in Fig. 4.3 shows plots of \(u(\varepsilon)\) at different temperatures. We see that the energy dependence of \(u\) can be summarized in terms of the dimensionless variable
\[ x = \frac{\varepsilon}{k_BT} \]  (4.20)
as \(u \propto x^3/(e^x - 1)\). In the limit of large temperatures and small photon energies Eq. (4.19) yields
\[ u(\varepsilon) \approx \frac{\varepsilon^2\, k_BT}{\pi^2 (\hbar c)^3}. \]  (4.21)
This relation is known as the Rayleigh-Jeans law; a curve showing \(u\) from Eq. (4.21) is displayed in Fig. 4.3. The Rayleigh-Jeans law and the Planck law agree for photon energies that are small compared with \(k_BT\), but Rayleigh-Jeans overestimates the energy of the field at higher photon energies. Equation (4.21) results from a completely classical treatment of the electromagnetic field that does not take into account that the energy of a mode can only increase in steps of \(\hbar\omega\). The Rayleigh-Jeans law does not agree with experimental observations. In the late 19th century that was one of the first indications that classical physics could not describe all physical phenomena. Max Planck's derivation of Eq. (4.19) marked the beginning of quantum physics.
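The position of the maximum quoted above follows from maximising \(x^3/(e^x - 1)\): setting the derivative to zero gives the condition \(x = 3(1 - e^{-x})\), which is easily solved by fixed-point iteration. A short sketch (ours, not part of the notes):

```python
import math

# Solve x = 3(1 - e^{-x}), the condition for the maximum of x^3/(e^x - 1).
x = 3.0
for _ in range(100):               # simple fixed-point iteration
    x = 3.0 * (1.0 - math.exp(-x))
print(x)                           # about 2.82: the peak sits at eps = 2.82 k_B T

kB_eV = 8.617333e-5                # Boltzmann constant in eV/K
print(x * kB_eV * 300.0)           # peak photon energy at room temperature, eV
print(x * kB_eV * 6000.0)          # peak photon energy at 6000 K, eV
```

The two printed photon energies reproduce the values 0.073 eV and 1.46 eV quoted in the text.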


4.4 Problems for Chapter 4


4.1 Photon energy
A photon has wave vector \(\mathbf{k}\) in vacuum. What is the photon energy?

4.2 Number of thermal photons
Show that the number of photons \(\langle N\rangle\) in equilibrium at temperature \(T\) in a volume \(V\) is
\[ \langle N\rangle = \frac{2.404}{\pi^2}\, V \left(\frac{k_BT}{\hbar c}\right)^3. \]
(Hint: \(\int_0^\infty x^2\, dx/(e^x - 1) \approx 2.404\).)

4.3 Number of thermal visible photons
Estimate the number of photons with wavelengths between 400 nm and 800 nm (the visible range) that are thermally excited in a volume \(V\) at room temperature 293 K.

4.4 Number of thermal FM photons
Estimate the number of photons in the FM radio range, between 87.5 MHz and 108 MHz, that are thermally excited in a volume \(V\) at room temperature 293 K.

4.5 Surface temperature of the Earth
Calculate the surface temperature of the Earth on the assumption that, as a black body in thermal equilibrium, it reradiates as much thermal radiation as it receives from the Sun. Assume also that the surface temperature of the Earth is constant over the day-night cycle. Use \(T_\odot = 5800\) K, the solar radius \(R_\odot = 7\times10^{8}\) m, and the Earth-Sun distance \(1.5\times10^{11}\) m.

4.6 Surface temperature of Venus and Mars
Repeat the calculation of the previous problem for our neighbor planets Venus and Mars. You have to look up the data you need yourself. Try also to find information about the actual temperatures of those two planets.

4.7 Heat shield
A black, non-reflective plane at temperature \(T_u\) is parallel to a black plane at temperature \(T_l\). The net energy flux density in vacuum between the two planes is \(J_U = \sigma(T_u^4 - T_l^4)\), where \(\sigma\) is the Stefan-Boltzmann constant. A third black plane is inserted between the other two and is allowed to come to a steady temperature \(T_m\). Find \(T_m\) in terms of \(T_u\) and \(T_l\), and show that the net energy flux is cut in half because of the presence of the shielding plane. This is the principle of heat shields, and it is widely used to reduce radiative heat transfer.

4.8 Reflective heat shield
Consider a situation similar to the one in the previous problem; however, this time a plane that is partly reflecting is inserted between the black planes held at temperatures \(T_u\) and \(T_l\), respectively. We denote the reflectivity \(r\), so that the absorptivity and emissivity are \(a = e = 1 - r\). Show that the net flux density is reduced by a factor \(1 - r\) compared with when the inserted plane is black (\(r = 0\)).

4.9 Energy density per unit wavelength
In some cases one looks at the energy of the electromagnetic field per unit wavelength, \(u_\lambda\), instead of per unit photon energy. To every photon energy interval \(d\varepsilon\) corresponds a wavelength interval \(d\lambda\) (\(d\lambda\) will of course be negative if \(d\varepsilon\) is positive). The energy density per unit wavelength should satisfy the relation
\[ u_\lambda(\lambda)\,|d\lambda| = u(\varepsilon)\,|d\varepsilon|. \]
Show that this yields
\[ u_\lambda(\lambda) = \frac{16\pi^2 \hbar c}{\lambda^5}\,\frac{1}{e^{2\pi\hbar c/(\lambda k_BT)} - 1}. \]

Chapter 5 The ideal gas


In this chapter we are going to apply the Boltzmann distribution to an ideal gas. While the ideal gas is by all means a system for which the laws of classical physics are valid, we are just the same going to start from an equally valid quantum description since that makes it easier to generalize the results later on.

5.1 The Boltzmann distribution applied to a particle in a box


We need to know the energy levels of the gas atoms (we assume that the gas is monatomic for now). Normally one assumes that the gas is contained in a box. Then the quantum-mechanical wave functions of the gas atoms must form standing waves inside the box in very much the same way as the photon modes did in Chapter 4 (see in particular Fig. 4.1). A gas atom conned to a one-dimensional box would be described by a matter wave for which the de Broglie wavelength if is the length of the box and is a positive integer. We recall the relation between de Broglie wavelength and momentum for a particle,

This means that the energy of the atom can be written as


This reasoning can be generalized to three dimensions in the same way as in the photon case. Each energy level is characterized by three integers , , and (because there are three different directions in space), and the total energy of the particle is

Suppose now that we just have one particle in the volume of the box. By applying the Boltzmann distribution we can calculate the expectation value for the energy as well as other quantities. The partition function is


and as before ε(n_x, n_y, n_z) is given by Eq. (5.3). If the gas container is macroscopic in size, the atom may be excited into states with very high quantum numbers, so just as for the photons, the sums can be turned into integrals over n_x, n_y, and n_z,

    Z₁ = ∫₀^∞ dn_x ∫₀^∞ dn_y ∫₀^∞ dn_z e^{−π²ħ²(n_x²+n_y²+n_z²)/(2mL²k_BT)}.    (5.5)

By going over to spherical coordinates we then get a Gaussian integral yielding

    Z₁ = L³ (mk_BT/2πħ²)^{3/2} = V (mk_BT/2πħ²)^{3/2}.    (5.6)

The expectation value of the particle's energy can be written

    ⟨ε⟩ = (1/Z₁) Σ ε e^{−ε/k_BT}.    (5.7)

We note, comparing this to Eq. (5.4), that

    ⟨ε⟩ = k_BT² ∂(ln Z₁)/∂T = (3/2) k_BT,    (5.8)

where the last step follows since Z₁ ∝ T^{3/2}. Thus, the energy of a gas atom is directly proportional to the temperature. Each degree of freedom (each coordinate direction) contributes k_BT/2 to the expectation value; this is another example of the equipartition principle, valid in the classical limit, i.e. when one can expect a particle to be in a highly excited state. We discussed the equipartition principle earlier on page 13 in connection with the harmonic oscillator.
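The result (5.8) can be checked by summing the Boltzmann distribution directly. The sketch below (not part of the original notes) uses energies in units of k_BT, ε = a(n_x² + n_y² + n_z²) with a = π²ħ²/(2mL²k_BT); the small value of a is an arbitrary choice representing a macroscopic box.

```python
import numpy as np

# Energies per direction in units of k_B T: eps(n) = a*n^2, with
# a = pi^2 hbar^2 / (2 m L^2 k_B T) << 1 in the classical limit.
a = 1e-4

n = np.arange(1, 3001)          # quantum numbers n = 1, 2, 3, ...
boltz = np.exp(-a * n**2)       # Boltzmann factors for one direction

# The 3D partition function factorizes, Z1 = z^3, so the average
# energy is three times the one-dimensional average.
z = boltz.sum()
avg_1d = np.sum(a * n**2 * boltz) / z
avg_3d = 3 * avg_1d             # <eps> in units of k_B T

print(avg_3d)  # close to 3/2, as Eq. (5.8) predicts
```

The deviation from 3/2 shrinks as a → 0, i.e. as the box grows or the temperature increases.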

5.2 Boltzmann distribution and thermodynamics


Pressure. To be able to treat the case of gas atoms we will first try to establish better connections between statistical physics and thermodynamics. Consider a system, for example the particle in a box of the previous section. Suppose that we change the volume of the system by a small amount dV. To do this one must perform work on the system. From classical mechanics and thermodynamics the work is given by

    dW = −p dV,    (5.9)

where p is the pressure. No quantum transitions take place in the system during the volume change, but the internal energy of course increases by an amount equal to the performed work, i.e. dU = dW. What happens is that the energy levels of the system move (up) so as to increase the total energy. Looking at the total internal energy the change can be written

    dU = (∂U/∂V)_S dV,    (5.10)

which, in view of Eq. (5.9), means that

    p = −(∂U/∂V)_S.    (5.11)




The subscript S on the partial derivative means that the entropy should be kept constant when evaluating the derivative. Why should one do that? The reason is that when the volume is changed there are no transitions between different quantum states, and then neither the multiplicity nor the entropy changes.

Thermodynamic identity. The temperature was defined in Eq. (2.16) as

    1/T = (∂S/∂U)_V.    (5.12)

(In Chapter 2 we did not explicitly state that the volume should be kept constant since volume was not really a meaningful quantity for the model system studied there.) Now from Eq. (5.12) we conclude that energy and entropy changes in a slow, quasistatic process at a constant volume are related through

    dU = T dS.    (5.13)

Combining this with the energy change given by Eq. (5.10), which holds for changes at constant entropy, one can write

    dU = T dS − p dV    (5.14)

in the limit of infinitesimal changes. Equation (5.14) is known as the thermodynamic identity. The two different terms in Eq. (5.14) represent the two different possible ways of changing the energy of a system: heat and work. The heat supplied to the system in a quasistatic process is given by

    dQ = T dS.    (5.15)

It should be kept in mind that the thermodynamic identity is only valid for quasistatic processes that are slow enough that the state variables (T, p, V, etc.) are all the time well-defined. The first law of thermodynamics (energy conservation),

    dU = dQ + dW,    (5.16)

is, on the other hand, valid for any kind of changes even if it is not possible to monitor all the state variables during a fast process, for example a rapid expansion of a gas. We will come back to a more fundamental discussion of heat and work later in this chapter. In Eq. (5.14) one considers U to be a function of the entropy S and the volume V; we could also write

    dU = (∂U/∂S)_V dS + (∂U/∂V)_S dV,    (5.17)

which means that

    T = (∂U/∂S)_V  and  p = −(∂U/∂V)_S.    (5.18)

Helmholtz free energy. Sometimes it is not so practical to use S and V as independent variables. While the volume can be controlled in many experiments, that is seldom the case with the entropy S. It would be much more practical to use the temperature T as an independent variable. This can be achieved by introducing Helmholtz free energy F, defined as

    F = U − TS.    (5.19)

Often F is just called the free energy.

"

"

which means that




" 

@  

(5.12)

(5.16)

(5.17)

(5.18)

(5.19)



By differentiating Eq. (5.19) we find with the aid of the thermodynamic identity that

    dF = dU − T dS − S dT = −S dT − p dV    (5.20)

for quasistatic processes. Thus, F is a function for which T and V are the natural independent variables. One can also write

    dF = (∂F/∂T)_V dT + (∂F/∂V)_T dV,    (5.21)

and by comparing this with Eq. (5.20), we find that

    S = −(∂F/∂T)_V  and  p = −(∂F/∂V)_T.    (5.22)

One can show that a system that is held at constant temperature and volume, if it is not already in internal thermal equilibrium, will change its macroscopic state so as to minimize F. Thus, for a system in thermal contact with a reservoir F plays a role similar to the one the entropy S had in a completely isolated system (there all changes towards internal equilibrium drive S to a maximum). This is one reason why F is a very useful quantity.

The free energy and the partition function. Helmholtz free energy also provides a very useful link between thermodynamics and statistical physics because it can be calculated directly from the partition function Z,

    F = −k_BT ln Z.    (5.23)

This relation can be proved by inserting the expression for S in Eq. (5.22) into Eq. (5.19). This yields the differential equation

    U = F − T (∂F/∂T)_V,    (5.24)

and one can show that F from Eq. (5.23) is a solution to this equation. In the next section we will apply some of the relations found here to the ideal gas.
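A quick way to convince oneself that F = −k_BT ln Z solves Eq. (5.24) is to check it numerically for a simple system. The sketch below (not part of the original notes) uses a two-level system with energies 0 and ε, in units where k_B = 1 and ε = 1; these values are arbitrary choices.

```python
import numpy as np

eps = 1.0   # level spacing in units of k_B T-scale (arbitrary choice)

def Z(T):
    return 1.0 + np.exp(-eps / T)

def F(T):
    """Free energy from the partition function, Eq. (5.23)."""
    return -T * np.log(Z(T))

def U_direct(T):
    """U = sum_s eps_s P_s with Boltzmann probabilities."""
    return eps * np.exp(-eps / T) / Z(T)

T = 0.7
dT = 1e-6
dFdT = (F(T + dT) - F(T - dT)) / (2 * dT)   # numerical derivative
U_from_F = F(T) - T * dFdT                  # right-hand side of Eq. (5.24)

print(U_from_F, U_direct(T))  # the two values agree
```

The same check works for any level scheme; only the function Z(T) needs to change.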

5.3 Many particles in a box: the ideal gas

We return to the partition function for a gas atom in a container,

    Z₁ = V/V_Q,    (5.25)

where we have introduced the quantum volume

    V_Q = (2πħ²/mk_BT)^{3/2},    (5.26)

and its inverse, the quantum concentration

    n_Q = 1/V_Q = (mk_BT/2πħ²)^{3/2}.    (5.27)


"

$ %"

 

 

 "

"

for quasistatic processes. Thus, variables. One can also write




is a function for which

and

are the natural independent

"

 "

 " 


"

" 




 "  

"


(5.20)

(5.21)

(5.22)

CHAPTER 5. THE IDEAL GAS

34

The probability of finding the atom in the very lowest state, with an energy that is practically equal to zero, is then

    P(ε ≈ 0) = 1/Z₁ = V_Q/V.    (5.28)

If we have N atoms in the box the probability of having an atom in the lowest state is N V_Q/V = n V_Q = n/n_Q (n = N/V is the particle concentration) as long as the atoms can be considered to be independent of each other. A necessary requirement for having entirely independent particles is that n ≪ n_Q. In other words the particle concentration must be low enough that it is unlikely to find one particle in a volume V_Q. This condition is fulfilled for small particle concentrations and large temperatures.

Figure 5.1: This figure illustrates the difference between distinguishable and indistinguishable particles. The two states to the left, in which each of the two particles has a distinct identity, are different from each other. However, once it is impossible to tell the difference between the particles (as in the state to the right) only one state remains. This reduces the partition function by a factor of 2, and in general, for N particles, by a factor of N!.

Let us now evaluate the partition function for N atoms in a box. The total energy for the system is a sum of the single-atom energies,

    ε = Σ_{i=1}^{N} ε_i,    (5.29)

where i is an atom index. If the atoms were truly distinguishable, a little like differently colored billiard balls, the partition function could be written

    Z_dist = Σ e^{−(ε₁+ε₂+⋯+ε_N)/k_BT} = Z₁^N,    (5.30)

where the last result follows because the multiple sum factorizes into a product of N identical, simple sums. But atoms are not distinguishable; we cannot tell the difference between the situation where one atom is in one state and another atom in another state and the situation where the two atoms have traded places with each other (see Fig. 5.1). As a result of this the correct partition function for N gas atoms is a factor N! smaller than Z₁^N, to account for the fact that all the N! possible permutations of the particles are indistinguishable from each other, thus

    Z_N = Z₁^N / N!.    (5.31)

Helmholtz free energy for an ideal gas can now be written

    F = −k_BT ln Z_N = −k_BT (N ln Z₁ − ln N!).    (5.32)
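The N! correction in Eq. (5.31) can be tested by brute-force state counting for two particles. In the sketch below (not from the notes; the number of levels and their spacing are arbitrary illustrative choices) the exact sum over unordered pairs of occupied levels is compared with Z₁²/2!.

```python
import numpy as np
from itertools import combinations_with_replacement

# Two classical particles on M equally spaced levels (energies in k_B T units)
M = 200
levels = 0.01 * np.arange(M)

boltz = np.exp(-levels)
Z1 = boltz.sum()

# Exact sum over unordered pairs of occupied levels: configurations that
# differ only by swapping the two particles are counted once.
Z2_exact = sum(np.exp(-(levels[i] + levels[j]))
               for i, j in combinations_with_replacement(range(M), 2))

Z2_approx = Z1**2 / 2            # the N! = 2 correction of Eq. (5.31)

print(Z2_exact, Z2_approx)       # nearly equal in the dilute limit
```

The small residual difference comes from doubly occupied levels; it becomes negligible when many more levels are thermally accessible than there are particles, which is exactly the condition n ≪ n_Q.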



Using Stirling's formula, valid for large N, ln N! ≈ N ln N − N, the free energy becomes

    F = Nk_BT [ln(n V_Q) − 1],    (5.33)

where n = N/V is the particle concentration. Next we can calculate the internal energy, since Eq. (5.24) yields

    U = F − T (∂F/∂T)_V = (3/2) Nk_BT,    (5.34)

as one might expect from Eq. (5.8) as well as the equipartition principle. This means that the heat capacity at constant volume for a monatomic gas is

    C_V = (∂U/∂T)_V = (3/2) Nk_B.    (5.35)

For polyatomic gases there are additional contributions to both the internal energy and the heat capacity that are due to rotational motion and (at high temperatures) vibrational motion, previously discussed in Chapter 3.

It is also possible to calculate the pressure and in this way prove the ideal gas law. To evaluate the expression for the pressure, we rewrite Eq. (5.32) as

    F = −Nk_BT ln V + φ(N, T),    (5.36)

and note that φ(N, T) depends on N and T but not on V. Therefore

    p = −(∂F/∂V)_T,    (5.37)

so that

    p = Nk_BT/V,    (5.38)

i.e.

    p = nk_BT,    (5.39)

and consequently

    pV = Nk_BT,    (5.40)

which is the well-known ideal gas law. Boyle's law for changes at constant temperature, and Charles's law for changes at constant pressure, follow from Eq. (5.40).
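The chain from the free energy to the ideal gas law can be verified numerically (a sketch, not from the notes): differentiate Eq. (5.33) with respect to V and compare with Nk_BT/V. The particle mass and the "one mole in 24 liters" numbers below are illustrative choices.

```python
import numpy as np

kB = 1.380649e-23      # J/K
hbar = 1.054571817e-34

def free_energy(N, V, T, m):
    """Ideal-gas Helmholtz free energy, Eq. (5.33), with Stirling's formula."""
    nQ = (m * kB * T / (2 * np.pi * hbar**2)) ** 1.5   # quantum concentration
    return -N * kB * T * (np.log(nQ * V / N) + 1.0)

# Roughly one mole of O2-mass particles in 24 liters at 300 K
N, T, m = 6.022e23, 300.0, 5.31e-26
V = 0.024

dV = V * 1e-6
p = -(free_energy(N, V + dV, T, m) - free_energy(N, V - dV, T, m)) / (2 * dV)

print(p, N * kB * T / V)   # pressure from -(dF/dV)_T vs the ideal gas law
```

Both numbers come out close to atmospheric pressure, about 1.0 × 10⁵ Pa.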

5.4 Heat and work in statistical physics


In thermodynamics, one introduces the two concepts heat and work in order to describe how the total energy of a system changes. It is important to keep in mind that heat and work have to do with changes; heat and work have nothing to do with the state of a system. A physical system contains energy that has been brought there either because heat was supplied or because work was done on the system, but just looking at the final state of the system it is impossible to say how it reached that state.


From a practical point of view it is normally quite clear whether the energy changes because of heat transfer (e.g. boiling water) or because of work (e.g. compressing a gas). In statistical physics it is nevertheless possible to proceed further, and define heat and work from a microscopic point of view. The energy of a quantum system can change in two distinctly different ways (see Fig. 5.2), and this distinguishes heat transfer from work: (i) There can be quantum transitions in which the system changes from one state to another with a different energy. This is what happens when heat is transferred. (ii) The actual energy levels can move if some external condition changes gradually, and even if no transitions take place the total energy of the system changes. This is what happens when work is done on the system, or the system does work on the environment.

Initial state

Heat added

Work done

Figure 5.2: Illustration of the difference between heat and work at a microscopic level.
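The microscopic split of dU into work (levels move, occupations fixed) and heat (occupations change, levels fixed) can be illustrated numerically. This sketch is not from the notes; it uses a harmonic oscillator with levels ε_n = nw in units where k_B = 1, and the values of w, T and the small changes are arbitrary choices.

```python
import numpy as np

n = np.arange(200)

def probs(w, T):
    """Boltzmann occupation probabilities for levels eps_n = n*w."""
    e = np.exp(-n * w / T)
    return e / e.sum()

w, T = 1.0, 0.8
dw, dT = 1e-6, 1e-6

P0 = probs(w, T)
U0 = np.sum(P0 * n * w)

P1 = probs(w + dw, T + dT)
U1 = np.sum(P1 * n * (w + dw))

dU_work = np.sum(P0 * n * dw)        # levels move, occupations fixed
dU_heat = np.sum((P1 - P0) * n * w)  # occupations change, levels fixed

print(U1 - U0, dU_work + dU_heat)    # equal to first order in dw, dT
```

This is just the differential identity dU = Σ P_n dε_n + Σ ε_n dP_n: the first term is work, the second heat.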

5.5 Problems for Chapter 5

5.1 O₂ gas
1 liter of oxygen gas with an initial temperature of 40 °C and initial pressure … expands until the volume is 1.5 liter and the pressure is … .
(a) Calculate the number of moles of oxygen that are present.
(b) Calculate the final temperature after the expansion.

5.2 Quantum concentration
(a) Calculate the quantum concentration for oxygen gas O₂ at room temperature.
(b) Calculate the concentration of O₂ molecules in air at room temperature and atmospheric pressure.

5.3 Addition of pressure from different molecular species
The ideal gas law states that p = nk_BT, where n is the particle concentration.
(a) Calculate the particle concentration in air at … K at sea level.
(b) Using the theory developed in this chapter, try to explain why the ideal gas law works for air, which is a mixture of nitrogen and oxygen, so that p = (n_N₂ + n_O₂) k_BT.

5.4 Air bubble in water
We have an air bubble at the bottom of a 30 m deep freshwater lake. The bubble has a diameter of 1 cm and the temperature of the air is the same as that of the surrounding water, 4 °C. The bubble starts to rise towards the surface of the lake where the water temperature is 18 °C.
(a) Calculate the diameter of the air bubble when it reaches the surface assuming that it rises so slowly that the air temperature is all the time the same as the surrounding water temperature.
(b) Calculate the diameter of the air bubble when it reaches the surface assuming instead that the rise is so fast that no energy is exchanged between the water and the air in the bubble.
(c) What is the air temperature when the bubble reaches the surface in the second case?

5.5 Internal energy of indoor air
The temperature in a room with a volume of 25 m³ (at sea level) is increased from 15 °C to 20 °C. Determine the change of internal energy for the air in the room.

5.6 Maxwell-Boltzmann distribution
(a) Use the Boltzmann distribution for an ideal gas of particles with mass m to show that the distribution of speeds, D(v), for the particles follows

    D(v) = 4πv² (m/2πk_BT)^{3/2} e^{−mv²/2k_BT}.

This is the Maxwell velocity distribution. The quantity D(v) dv gives the probability of finding a particle with a speed in the interval [v, v + dv].
(b) For what velocity does D(v) have a maximum if the particles are O₂ molecules at 300 K?

5.7 Value of k_BT at room temperature
One mole of any gas at room temperature and atmospheric pressure occupies a volume of about 24 liters. Use this result to estimate k_BT at room temperature. Express the answer in units of eV.

5.8 Energy for an electron in a quantum well
An electron is confined in a cubic quantum well. We treat this problem as a particle in a box with hard walls so that the energy of a state is given as

    ε = (π²ħ²/2mL²)(n_x² + n_y² + n_z²).

Here the quantum numbers n_x, n_y, and n_z are positive integers, and the side of the cube is L = … . Calculate the energy expectation value for the electron at room temperature.
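One way to check a result for Problem 5.6(b) numerically (this sketch is not part of the notes): locate the maximum of D(v) on a fine grid and compare it with the analytic most-probable speed √(2k_BT/m).

```python
import numpy as np

kB = 1.380649e-23        # J/K
m = 32 * 1.66054e-27     # mass of an O2 molecule in kg
T = 300.0

v = np.linspace(1.0, 2000.0, 200_000)
D = 4 * np.pi * v**2 * (m / (2 * np.pi * kB * T))**1.5 \
    * np.exp(-m * v**2 / (2 * kB * T))

v_peak = v[np.argmax(D)]
v_analytic = np.sqrt(2 * kB * T / m)
print(v_peak, v_analytic)   # both close to 395 m/s
```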



5.9 Elasticity of polymers
Consider a simple model of a polymeric chain consisting of N links, each of length l. The chain is stretched by a mass M hanging from one end of it. We can treat each link as a two-level system with energy levels ε₋ = −Mgl and ε₊ = +Mgl depending on how the link is oriented (see the figure). The difference in energy is ultimately due to the potential energy of the mass M in the gravitational field. Derive an expression for the total length L of the chain as a function of the temperature T in the limit of a large temperature, k_BT ≫ Mgl. Thus, if the temperature is increased the polymer chain curls up and becomes shorter, thereby increasing its entropy.

5.10 Free energy of a two-state system
Helmholtz free energy, F = U − TS, is a thermodynamic state variable that is often useful.
(a) Find an expression for the free energy as a function of temperature of a system with two states, one at energy 0 and one at energy ε.
(b) From the free energy find expressions for the internal energy and the entropy of the system. Sketch a graph that shows how the entropy varies with temperature. It is also possible to calculate U once the partition function Z is known, since U = k_BT² ∂(ln Z)/∂T.

5.11 Free energy and entropy of a harmonic oscillator
(a) Show that the free energy of a harmonic oscillator, neglecting the zero-point energy, is

    F = k_BT ln(1 − e^{−ħω/k_BT}).

(b) Show that the entropy for the harmonic oscillator is

    S = k_B [ (ħω/k_BT)/(e^{ħω/k_BT} − 1) − ln(1 − e^{−ħω/k_BT}) ].

What result does this yield when T → 0? How do you interpret it?

5.12 Proof of Eq. (5.24)
Verify that Helmholtz free energy given by Eq. (5.23) satisfies Eq. (5.24).

5.13 Entropy in the canonical ensemble
Consider a system that can exchange energy with a reservoir at temperature T. The system can be in a number of different states s with probabilities given by the Boltzmann distribution,

    P_s = e^{−ε_s/k_BT}/Z,  with  Z = Σ_s e^{−ε_s/k_BT}.
!

"  " 



!

"  " 

$ %"

$ %" "

 "

"  

$ %"

! !

!

 

It is also possible to calculate

once the partition function

is known since

!

"

!

"


, and . Show that,

CHAPTER 5. THE IDEAL GAS

39 to show that

Use the relation between Helmholtz free energy and the partition function to show that the entropy of the system can be expressed in terms of the probabilities as

    S = −k_B Σ_s P_s ln P_s.

5.14 Free energy in the canonical ensemble
Consider again the system discussed in the previous problem. Derive an expression for Helmholtz free energy, F, in terms of the energies ε_s, the probabilities P_s, and the temperature T.

5.15 The free energy reaches a minimum in the Boltzmann distribution
Use the expression derived for F in the previous problem in order to show that F has a minimum when the probabilities are those given by the Boltzmann distribution. Hint: What change δF of the free energy do you get if you make changes δP_s of the probabilities away from the Boltzmann distribution? (The changes must be such that what you add in one place has to be taken from somewhere else, Σ_s δP_s = 0.) What is required to have δF > 0?

5.16 Pressure of thermal radiation I
In this chapter you have seen that work performed on a system changes the energy of that system by moving the energy levels while keeping all the occupation numbers and thus the entropy unchanged.
(a) Use these ideas to show that for the gas of photons confined inside a cavity the pressure can be written

    p = −Σ_j ⟨n_j⟩ dε_j/dV,

where the index j is shorthand for all the mode indices and ⟨n_j⟩ denotes the average number of photons in a certain mode.
(b) Show that this leads to

    p = U/(3V).

5.17 Pressure of thermal radiation II
(a) Show that the partition function of a photon gas is given by

    Z = Π_j [1 − e^{−ħω_j/k_BT}]^{−1},

where the product is over the modes j.
(b) The free energy is found directly from the partition function as

    F = −k_BT ln Z = k_BT Σ_j ln(1 − e^{−ħω_j/k_BT}).

Transform the sum to an integral and integrate by parts to find

    F = −U/3.

(c) By using the relation between the free energy and the pressure verify that

    p = U/(3V).

5.18 Pressure of thermal radiation III
Try to relate the pressure of a photon gas to the change of photon momentum that takes place when a photon with momentum ħk hits a cavity wall and is absorbed or reflected. Why is the pressure the same regardless of the reflectivity of the cavity walls?
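The integration-by-parts result of Problem 5.17(b) can be checked numerically. In dimensionless form (x = ħω/k_BT) the free energy and energy of the photon gas reduce to the integrals ∫x² ln(1 − e^{−x}) dx and ∫x³/(e^x − 1) dx, whose ratio should be −1/3. This sketch is not part of the notes; the grid is an arbitrary choice.

```python
import numpy as np

x = np.linspace(1e-6, 60.0, 2_000_000)
dx = x[1] - x[0]

# Free-energy integrand: x^2 ln(1 - e^{-x}); note 1 - e^{-x} = -expm1(-x)
f_int = np.sum(x**2 * np.log(-np.expm1(-x))) * dx

# Energy integrand: x^3 / (e^x - 1)
u_int = np.sum(x**3 / np.expm1(x)) * dx

print(f_int, u_int, f_int / u_int)   # ratio close to -1/3
```

As a bonus, u_int reproduces the Stefan-Boltzmann integral π⁴/15 ≈ 6.494.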


 

"
 

Chapter 6 Chemical potential and Gibbs distribution


6.1 Chemical equilibrium and chemical potential
So far we have only considered situations where the energy of a system can change either because heat is exchanged with the surroundings or because work is done on the system. In this chapter we are going to look at systems that can also exchange particles with the surroundings; i.e. the particle number will no longer be a constant.

Figure 6.1: Schematic illustration of how two different gases mix. Initially the left compartment only holds one species of molecules and the right compartment another species. A hole is opened between the two compartments, and the two gases mix. In the process the multiplicity, and hence the entropy, of the macroscopic state increases.

In reality particles often move from one place to another. Take for example one gas container with only O₂ molecules and another with only N₂ molecules. Once the two containers are connected to each other, gas molecules will diffuse between the containers. Eventually the system reaches a state of chemical equilibrium. Then the concentrations of oxygen and nitrogen molecules are uniform throughout the system; from a macroscopic point of view the system is chemically homogeneous. The above gas-mixing scenario is well-known to us from a lot of daily-life experiences, but how can we describe it in terms of statistical physics? Actually, the theoretical description of this phenomenon is similar to that of thermal equilibrium that we dealt with earlier. It is once again possible to determine the multiplicity of different states, and the result is that the multiplicity of the initial situation in Fig. 6.1 is much smaller than that of the final situation. When we discussed thermal equilibrium we found that two systems are in equilibrium when they have the same temperature. Furthermore, the macroscopic state in equilibrium corresponded to a maximum for the multiplicity function for the system as a whole. The same reasoning applies in principle also for chemical equilibrium. In this case one needs yet another



state variable to describe the macroscopic state, namely the number of particles N. So far we have only considered systems with a fixed number of particles and have not thought of N as a variable. (In a system with more than one kind of particles the number of each kind becomes a state variable.) Now the multiplicity function varies with N as well as with the internal energy U (we assume the volume is held constant). Two systems 1 and 2 that can exchange both energy and particles with each other will do so until they reach a macroscopic state that gives maximum multiplicity and entropy. Energy will, as before, be distributed so that

    (∂S₁/∂U₁)_{N₁} = (∂S₂/∂U₂)_{N₂}    (6.1)

at thermal equilibrium. But the entropy must also be maximized with respect to the number of particles in system 1, N₁, given the constraint that the total number of particles N₁ + N₂ is constant. This yields

    (∂S₁/∂N₁)_{U₁} = (∂S₂/∂N₂)_{U₂}.    (6.2)

At this point we define the chemical potential μ for the particles through

    μ = −T (∂S/∂N)_{U,V}.    (6.3)

This means that two systems in both thermal and chemical equilibrium have the same temperature and chemical potential (for each particle species in the general case). Suppose that subsystems 1 and 2 have the same temperature but that the chemical potential μ₁ in subsystem 1 is larger than the one (μ₂) in subsystem 2, i.e. μ₁ > μ₂ [remember there is a minus sign in Eq. (6.3)]. Then if one particle is transferred from subsystem 1 to subsystem 2 there is an increase of the total entropy,

    ΔS = −(∂S₁/∂N₁) + (∂S₂/∂N₂) = (μ₁ − μ₂)/T > 0.    (6.4)

In other words the entropy of the system increases if particles are transferred from higher to lower chemical potential. This particle transfer will proceed until chemical equilibrium, μ₁ = μ₂, has been established. In the next section we will see how the chemical potential can be calculated for a simple system like the ideal gas.

6.2 Chemical potential and thermodynamics


For a system with a constant number of particles the thermodynamic identity reads

    dU = T dS − p dV.    (6.5)

When the number of particles can vary there is obviously at least one more term in the expression for dU, namely μ dN, thus

    dU = T dS − p dV + μ dN.    (6.6)



This is consistent with our definition of μ, since Eq. (6.6) can be rewritten

    dS = (1/T) dU + (p/T) dV − (μ/T) dN,    (6.7)

and this implies μ = −T(∂S/∂N)_{U,V}, which is precisely Eq. (6.3). Equation (6.6) also yields an alternative expression for the chemical potential,

    μ = (∂U/∂N)_{S,V}.    (6.8)

Furthermore the definition of Helmholtz free energy,

    F = U − TS,    (6.9)

means that

    dF = −S dT − p dV + μ dN,    (6.10)

thus

    μ = (∂F/∂N)_{T,V}.    (6.11)

For practical calculations Eq. (6.11) is the most useful expression. Let us demonstrate how it can be used to calculate μ for an ideal gas. The free energy for a monatomic ideal gas is, from Eq. (5.32),

    F = Nk_BT [ln(n V_Q) − 1].    (6.12)

The corresponding chemical potential is then

    μ = (∂F/∂N)_{T,V} = k_BT ln(n V_Q) = k_BT ln(n/n_Q),    (6.13)

thus the larger the particle concentration, the larger the chemical potential. Since we have already concluded that the particle concentration in a classical ideal gas must be considerably lower than the quantum concentration, the chemical potential obtained from Eq. (6.13) must evidently be negative.
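A numerical illustration of Eq. (6.13), not from the notes: for O₂ at room conditions the ratio n/n_Q is tiny and μ comes out clearly negative. The chosen temperature and pressure are illustrative.

```python
import numpy as np

kB = 1.380649e-23
hbar = 1.054571817e-34
T = 300.0
m = 32 * 1.66054e-27          # O2 molecular mass, kg
p = 1.013e5                   # atmospheric pressure, Pa

n = p / (kB * T)              # particle concentration from p = n*kB*T
nQ = (m * kB * T / (2 * np.pi * hbar**2)) ** 1.5   # quantum concentration

mu = kB * T * np.log(n / nQ)  # Eq. (6.13)
print(n / nQ, mu / 1.602e-19) # n << n_Q, mu negative (value in eV)
```

The chemical potential comes out around −0.4 eV, a typical order of magnitude for a classical gas.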

6.3 Internal and external chemical potential


Equation (6.13) shows that the chemical potential of an ideal gas is a function of particle concentration and temperature. Then an ideal gas in thermal and chemical equilibrium should have a uniform concentration. However, we know that this is not always the case. The air of the atmosphere is certainly a nearly ideal gas, but it is not homogeneous. The air at the top of Mount Everest is much thinner than the air at sea level. Does this mean that the atmosphere is not in chemical equilibrium? To resolve this question we have to remember that our treatment of the ideal gas has only taken into account the kinetic energy of the gas atoms or molecules. But the atmospheric air is also affected by the gravitational field of the Earth. Thus, for every gas particle there is an additional contribution to the internal energy, and also to the free energy since F = U − TS. Hence the chemical potential also contains an external contribution, μ_ext, in addition to the


internal chemical potential given by Eq. (6.13). [We should point out here that Eq. (6.13) only deals with an ideal gas of completely structureless particles. For real gas particles there are other contributions to the internal chemical potential due to rotational motion, vibrational motion, chemical bonds, etc. The chemical potential after all is very relevant to chemistry even though we do not further deal with this aspect here.] The external chemical potential depends on the space coordinate (i.e. the height above sea level or whatever zero level one chooses for the gravitational potential),

    μ_ext = Mgh,    (6.14)

where M is the mass of the particles, and h the height above sea level.

Figure 6.2: Atmosphere generated with random numbers (4000 particles). The concentration of particles follows Eq. (6.17) with k_BT/Mg = 7900 m corresponding to a particle mass equal to that of O₂ molecules. (The vertical axis shows the altitude in km, from 0 to 50.)

Let us see what this means for the atmosphere; we assume that the temperature T is independent of h. The total chemical potential can now be written

    μ = μ_int + μ_ext = k_BT ln[n(h)/n_Q] + Mgh.    (6.15)

[For molecular gases there is a contribution to the free energy and the chemical potential from rotational motion, however, this contribution is independent of the position and will not change the conclusions we are about to draw.] For the atmosphere to be in chemical equilibrium μ must be independent of h. Inserting this condition into Eq. (6.15) and exponentiating we get

    n(h) e^{Mgh/k_BT} = n(0),    (6.16)

and this relation can be simplified to read

    n(h) = n(0) e^{−Mgh/k_BT}.    (6.17)

An oxygen molecule (O₂) has the mass M ≈ 5.3 × 10⁻²⁶ kg, which with T ≈ 300 K yields k_BT/Mg ≈ 7900 m. This means that the concentration of oxygen at an altitude of 3000 m is down to about 68% of that at sea level, and at the summit of Mount Everest (8848 m) to about 33%.

Whenever there is an external potential present, the chemical potential has an external contribution. This contribution will cause particles to flow towards the regions of space where the potential energy is lower. The internal contribution to μ, on the other hand, is largest where the particle concentration is largest, and thus strives to even out differences in particle concentration. The resulting distribution of particles in space strikes the balance between particle drift as a result of the forces associated with the external potential, and diffusion that tries to even out concentration differences.
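The barometric numbers quoted above can be reproduced directly from Eq. (6.17). This small sketch is not part of the notes; it assumes, as the text does, an isothermal atmosphere at roughly room temperature.

```python
import numpy as np

kB = 1.380649e-23
m = 32 * 1.66054e-27    # O2 mass, kg
g = 9.81
T = 300.0               # isothermal-atmosphere assumption

h0 = kB * T / (m * g)   # e-folding height, about 7.9 km
r3000 = np.exp(-3000.0 / h0)
r8848 = np.exp(-8848.0 / h0)
print(h0, r3000, r8848) # ~68% at 3000 m, ~33% at the top of Mount Everest
```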

6.4 Derivation of the Gibbs distribution


We next study the effects the chemical potential has on a microscopic system. We are basically going to repeat the steps of the derivation of the Boltzmann distribution from Chapter 3, however, now for a microscopic system that can exchange not only energy but also particles with a large reservoir.

Figure 6.3: The small system S can exchange both energy and particles with the reservoir R. The probability of finding S in a particular state with a given energy and particle number depends on the multiplicity g(U_TOT − ε, N_TOT − N) for the reservoir in the corresponding situation.

The reasoning is very similar to that previously used in Chapter 3. We take a large total system that can be split into a reservoir R and a small system S, see Fig. 6.3. The total, closed system R + S has the energy U_TOT and contains N_TOT particles. We compare the probabilities of the system S being in two different distinct states, state 1 with energy ε₁ and N₁ particles, and state 2 with energy ε₂ and N₂ particles. The ratio between the two probabilities is given by the ratio between the multiplicity function for the reservoir in the two situations, since we compare two unique states of the system so that g_S = 1 in both cases. Thus

    P(ε₁, N₁)/P(ε₂, N₂) = g_R(U_TOT − ε₁, N_TOT − N₁)/g_R(U_TOT − ε₂, N_TOT − N₂),    (6.18)

and as before the multiplicity is related to the entropy through g = e^{S/k_B}, so Eq. (6.18) gives

    P(ε₁, N₁)/P(ε₂, N₂) = e^{[S_R(U_TOT−ε₁, N_TOT−N₁) − S_R(U_TOT−ε₂, N_TOT−N₂)]/k_B}.    (6.19)

Since the reservoir is assumed to be very large, the entropy values entering Eq. (6.19) can be found by a Taylor expansion,

    S_R(U_TOT − ε, N_TOT − N) ≈ S_R(U_TOT, N_TOT) − ε (∂S_R/∂U) − N (∂S_R/∂N).    (6.20)

The partial derivatives in Eq. (6.20) are the ones that appeared in the definitions of (reservoir) temperature and chemical potential, respectively,

    (∂S_R/∂U) = 1/T,  (∂S_R/∂N) = −μ/T.    (6.21)


When this is inserted into Eq. (6.19) we get, after some simplifications, that the ratio between the two probabilities is

    P(ε₁, N₁)/P(ε₂, N₂) = e^{−(ε₁ − μN₁)/k_BT} / e^{−(ε₂ − μN₂)/k_BT}.    (6.22)

Thus, the probability of finding the small system in a state with energy ε and N particles is proportional to e^{−(ε − μN)/k_BT}. This exponential is called a Gibbs factor (analogous to Boltzmann factor). To find the absolute probability for a state the Gibbs factor must be divided by the sum of Gibbs factors of all possible states that the small system can be found in. This sum is called the Gibbs sum or the grand sum; we denote it Z_G here. The Gibbs sum can be written

    Z_G = Σ_{N=0}^{∞} Σ_{s(N)} e^{−(ε_{s(N)} − μN)/k_BT}.    (6.23)

Note that the structure of this sum is more complicated than that of the partition function Z; the sum here runs over all possible numbers of particles N one can find in the system, and for every N there is a sum over all possible energies ε_{s(N)}. This set of energies in general differs from one particle number to another. The probability of finding the system in a state with energy ε and N particles is now

    P(ε, N) = e^{−(ε − μN)/k_BT} / Z_G.    (6.24)

This is known as the Gibbs distribution or the grand canonical ensemble.

6.5 The Fermi-Dirac and Bose-Einstein distributions


We are going to apply the Gibbs distribution to two special and very fundamental cases. All particles found in Nature can be divided into two different categories in terms of statistics: fermions and bosons.

Fermions have half-integer spin: 1/2, 3/2, 5/2, etc. For simple particles, 1/2 is the typical case. According to the Pauli principle two fermions can never occupy the same quantum state; fermion states are always occupied by either 0 or 1 particle.

Bosons have integer spin; they are not subject to the Pauli principle. A quantum state can be occupied by an arbitrary (non-negative) number of bosons and, moreover, if there are already bosons in a certain state the probability for other identical bosons to undergo quantum transitions into that state increases.

The elementary particles building up matter (protons, neutrons, and electrons) are all fermions. The fact that they do not occupy the same state is instrumental for the stability of matter; the Pauli principle prevents the particles from collapsing into one state. Photons, the elementary particles of the electromagnetic field that we discussed in Chapter 4, are bosons. The fact that photons are bosons is essential for stimulated light emission, the physical principle behind lasers.

Fermi-Dirac. We begin by considering the distribution function for non-interacting fermions. By non-interacting we mean that the fermions do not affect each other by mutual forces. Suppose we have a large reservoir with temperature T and chemical potential μ for the fermions we are interested in. We want to know the probability of finding a fermion in a particular state with energy ε. To this end we choose the system S to be that very energy level and nothing more. The grand sum then only contains two terms, for there are only two possible states that S can be found in. The state is either empty or contains one particle with energy ε, thus

    Z_G = 1 + e^{−(ε−μ)/k_BT}.    (6.25)


The probability that the state is occupied by a particle is given by the second term divided by Z_G,

    P(1) = e^{−(ε−μ)/k_BT} / (1 + e^{−(ε−μ)/k_BT}).    (6.26)

This probability function is called the Fermi-Dirac distribution function, and is denoted

    f(ε) = 1 / (e^{(ε−μ)/k_BT} + 1).    (6.27)

Bose-Einstein. If we instead consider the case of non-interacting bosons, the starting point is the same in that we choose the system to be a single state with energy ε. The Gibbs sum will, however, in this case contain an infinite number of terms since there is no limit to the number of bosons that can occupy the state. With no particles the energy is 0, with one particle it is ε, with 2 particles it is 2ε, etc. This yields

    Z_G = Σ_{n=0}^{∞} e^{−n(ε−μ)/k_BT} = 1 / (1 − e^{−(ε−μ)/k_BT}),    (6.28)

where the last equality follows because this is a geometric series. The average number of particles occupying the state is an important quantity. It is found from the sum

    ⟨n⟩ = (1/Z_G) Σ_{n=0}^{∞} n e^{−n(ε−μ)/k_BT},    (6.29)

which can be evaluated by the same technique that we employed when calculating the expectation value for the energy of a harmonic oscillator (see Appendix A). The result here reads

    ⟨n⟩ = 1 / (e^{(ε−μ)/k_BT} − 1).    (6.30)

This is known as the Bose-Einstein distribution function. We observe several things worth pointing out. First of all the Bose-Einstein and Planck distribution functions are very similar. In fact the Planck distribution is a Bose-Einstein distribution with $\mu = 0$. A chemical potential does not make much sense for particles like photons, which can be created and destroyed in various processes; the number of photons is not constant. We also see that the algebraic forms of the Fermi-Dirac and Bose-Einstein distributions are similar; the sign in front of the 1 in the denominator is all that changes. Now, this change makes quite a large difference between the functions, which are plotted in Fig. 6.4. The difference between the two distributions shows up for states with an energy near and below the chemical potential. Since the Bose-Einstein distribution diverges when $\varepsilon$ approaches $\mu$, the chemical potential for bosons is always smaller than the energy of the lowest energy level (the number of particles is a finite number). In the case of fermions the chemical potential can very well be larger than the energy of some energy levels. This is for example the case for the electrons that form a gas inside a metal or in a white dwarf star, or the neutrons in a neutron star. In these cases the particle concentration (of electrons or neutrons) is so large that the condition $n \ll n_Q$ is no longer satisfied.
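The two occupation functions are easy to compare numerically. Below is a small Python sketch (not part of the original notes; the function names and the rounded value of $k_B$ in eV/K are my own choices) that evaluates both distributions and shows that they approach the classical factor $e^{-(\varepsilon - \mu)/k_BT}$ when $\varepsilon - \mu \gg k_BT$:

```python
import math

kB = 8.617e-5  # Boltzmann constant in eV/K (rounded)

def fermi_dirac(eps, mu, T):
    """Occupation of a single state at energy eps (eV) for fermions, Eq. (6.27)."""
    return 1.0 / (math.exp((eps - mu) / (kB * T)) + 1.0)

def bose_einstein(eps, mu, T):
    """Occupation for bosons, Eq. (6.30); requires eps > mu."""
    return 1.0 / (math.exp((eps - mu) / (kB * T)) - 1.0)

T, mu = 300.0, 0.0
for x in [0.2, 0.1, 0.05]:  # eps - mu in eV
    fd = fermi_dirac(mu + x, mu, T)
    be = bose_einstein(mu + x, mu, T)
    print(f"eps-mu = {x:5.2f} eV:  f_FD = {fd:.4f}  f_BE = {be:.4f}")
```

Close to $\mu$ the Bose-Einstein occupation grows without bound, while the Fermi-Dirac occupation saturates at 1; this is the behavior plotted in Fig. 6.4.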

 



Figure 6.4: The left panel shows a comparison between the Fermi-Dirac and Bose-Einstein distribution functions. A plot of the classical limit for the two functions is also displayed. The right panel shows plots of the Fermi-Dirac function at different temperatures.

6.6 The Fermi gas


Let us take a closer look at the fermion case. We return to the particle-in-a-box problem from Chapter 5 and see what happens if the particles are electrons. If they are confined to move in a cubic box with side $L$ the energy levels can be written
$$\varepsilon_{n_x n_y n_z} = \frac{\hbar^2 \pi^2}{2mL^2}\left(n_x^2 + n_y^2 + n_z^2\right). \qquad (6.31)$$


In addition to the three quantum numbers $n_x$, $n_y$, and $n_z$, a spin quantum number $\sigma$, equal to either 1/2 or $-1/2$, is needed to completely describe the state of an electron. The Pauli principle states that two electrons cannot have identical sets of the four quantum numbers $n_x$, $n_y$, $n_z$, and $\sigma$. A consequence of this is that even if there are many electrons in the box, only two of them can be in the lowest energy level, $\varepsilon_{111}$; one has $\sigma = 1/2$, the other $\sigma = -1/2$. The other electrons are forced to occupy higher energy levels. Now, suppose that the system is at zero temperature so that the total energy of all the electrons takes its lowest possible value. We wish to calculate in which energy level we then find the electron with the highest one-electron energy. To do this we write down a sum over the quantum numbers yielding the total number of electrons $N$,
$$N = 2 \sum_{n_x, n_y, n_z} f(\varepsilon_{n_x n_y n_z}). \qquad (6.32)$$


The first factor of 2 accounts for the two spin directions. As with the ideal gas we assume that the box is so large that the energy levels are closely spaced, and the sums can be transformed to integrals over $n_x$, $n_y$, and $n_z$, which are then transformed into an integral over spherical coordinates (the factor 1/8 appears because only positive quantum numbers are allowed),
$$N = 2 \cdot \frac{1}{8} \cdot 4\pi \int_0^\infty dn\, n^2 f(\varepsilon_n) = \pi \int_0^\infty dn\, n^2 f(\varepsilon_n). \qquad (6.33)$$
By yet another change of variables, replacing $n$ by the energy $\varepsilon = \hbar^2 \pi^2 n^2 / (2mL^2)$, we get
$$N = \int_0^\infty d\varepsilon\, D(\varepsilon) f(\varepsilon). \qquad (6.34)$$



In the same way as we introduced a density of photon modes in Chapter 4, we can introduce an electron density of states,
$$D(\varepsilon) = \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{3/2} \sqrt{\varepsilon}. \qquad (6.35)$$


If $T = 0$, $f(\varepsilon)$ is 1 for all energies below $\mu$, and 0 for all energies above $\mu$. In this case, the energy corresponding to the highest occupied single-electron level is called the Fermi energy, $\varepsilon_F$ ($\varepsilon_F = \mu$ at $T = 0$). Equation (6.34) then yields
$$N = \int_0^{\varepsilon_F} d\varepsilon\, D(\varepsilon) = \frac{V}{3\pi^2}\left(\frac{2m\varepsilon_F}{\hbar^2}\right)^{3/2}, \qquad (6.36)$$
and from this
$$\varepsilon_F = \frac{\hbar^2}{2m}\left(3\pi^2 \frac{N}{V}\right)^{2/3}. \qquad (6.37)$$

Here $V = L^3$ is the volume of the box, so the Fermi energy is set by the electron concentration $n = N/V$. The electrons in a metal like sodium (Na) behave almost like non-interacting fermions. The electron concentration is $2.64 \times 10^{28}$ m$^{-3}$ and this yields a Fermi energy $\varepsilon_F \approx 3.2$ eV. This is certainly much more than $k_BT$ at room temperature ($\approx 25$ meV). Thus, the Pauli principle lifts electrons to much higher energy levels than thermal excitation at room temperature can do.
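As a numerical check of Eq. (6.37), the Fermi energy for the sodium concentration quoted above can be evaluated directly. This sketch is not part of the original notes; the constants are standard values rounded to four digits:

```python
import math

hbar = 1.0546e-34   # reduced Planck constant, J s
m_e  = 9.109e-31    # electron mass, kg
eV   = 1.602e-19    # 1 eV in J

n = 2.64e28  # electron concentration of Na, m^-3 (value from the text)
# Eq. (6.37): eps_F = (hbar^2 / 2m) (3 pi^2 n)^(2/3)
eps_F = hbar**2 / (2 * m_e) * (3 * math.pi**2 * n) ** (2.0 / 3.0)
print(f"Fermi energy of Na: {eps_F / eV:.2f} eV")  # about 3.2 eV
```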

Figure 6.5: Plot of the density of states (normalized) and the product $D(\varepsilon) f(\varepsilon)$ for non-interacting fermions at $T = 0$ and $T = 1000$ K. The Fermi energy $\varepsilon_F \approx 3.2$ eV was calculated using the electron concentration of Na. The filled area represents occupied states.

With a non-zero temperature some electrons are thermally excited; the Fermi-Dirac distribution determines the probability for this thermal excitation. Since $f(\varepsilon)$ is practically equal to



1 far below the chemical potential and 0 far above it, thermal excitations will only affect the electrons with energies near $\mu$ (i.e. near $\varepsilon_F$) as long as the temperature is not too large. Figure 6.5 shows the density of states for electrons in Na and the occupied states at temperatures 0 K and 1000 K. At 0 K the borderline between occupied and unoccupied states is absolutely sharp. At 1000 K some states below the Fermi energy are empty and some states above $\varepsilon_F$ are occupied, but the changes are limited to a relatively small energy interval. Electrons with an energy far below $\varepsilon_F$ cannot be excited thermally because other electrons are in the way. If the temperature is increased further, so that $k_BT$ becomes comparable to $\varepsilon_F$, the chemical potential will decrease a lot; $\mu$ adjusts so that the total number of electrons remains equal to $N$. (For metallic electron concentrations the temperature must be of the order $10^5$ K before this is a major effect, so the following discussion is a bit academic in that context.) Eventually, $\mu$ drops to a value far below 0, and the Fermi distribution function can be approximated by
$$f(\varepsilon) \approx e^{(\mu - \varepsilon)/k_BT}, \qquad (6.38)$$

provided that $e^{(\mu - \varepsilon)/k_BT} \ll 1$. This is actually the same distribution function as we found for the classical ideal gas. Let us calculate what value the chemical potential should take in this limit in order to have the right expectation value for the number of particles,
$$N = \int_0^\infty d\varepsilon\, D(\varepsilon)\, e^{(\mu - \varepsilon)/k_BT}. \qquad (6.39)$$
This integral can be calculated by the variable substitution $x = \varepsilon/(k_BT)$, which yields
$$N = e^{\mu/k_BT}\, \frac{V}{2\pi^2}\left(\frac{2m k_BT}{\hbar^2}\right)^{3/2} \int_0^\infty dx\, \sqrt{x}\, e^{-x} = 2\, e^{\mu/k_BT}\, V n_Q, \qquad (6.40)$$
where $n_Q = (m k_BT / 2\pi\hbar^2)^{3/2}$ is the quantum concentration (cf. Chapter 5), and from this we get
$$\mu = k_BT\left[\ln\left(\frac{n}{n_Q}\right) - \ln 2\right], \qquad (6.41)$$
with $n = N/V$


as before. In other words, except for the last term $-k_BT \ln 2$, we recover the old expression for the chemical potential for the ideal gas in Eq. (6.13). We note that the requirement that $e^{(\mu - \varepsilon)/k_BT} \ll 1$ for all states means that $\mu$ must be negative and lie several units of $k_BT$ below the energy zero. This can only be achieved if the particle concentration $n$ is much less than the quantum concentration $n_Q$. Thus the two conditions $e^{(\mu - \varepsilon)/k_BT} \ll 1$ and $n \ll n_Q$ are consistent with each other, and both are basic requirements in order to have an ideal gas. Where did the last term in Eq. (6.41) come from? The reason we do not get exactly the same result as in Eq. (6.13) is that the particles we deal with here have spin, whereas the ideal gas particles in Chapter 5 were assumed to be spinless. The spin degree of freedom gives a shift $-k_BT \ln 2$ in the chemical potential.
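One can verify Eq. (6.41) numerically by inserting the resulting $\mu$ back into the integral (6.39). This is a sketch, not part of the notes; the concentration and volume below are arbitrary illustration values chosen so that $n \ll n_Q$:

```python
import math

hbar, m, kB = 1.0546e-34, 9.109e-31, 1.381e-23  # SI units

T = 300.0
n = 1e24           # dilute electron concentration, m^-3, so that n << n_Q
V = 1e-6           # volume, m^3
N = n * V

n_Q = (m * kB * T / (2 * math.pi * hbar**2)) ** 1.5       # quantum concentration
mu = kB * T * (math.log(n / n_Q) - math.log(2))           # Eq. (6.41)

# Numerically integrate Eq. (6.39): N = \int D(eps) e^{(mu-eps)/kBT} d eps,
# with D(eps) from Eq. (6.35).
def D(eps):
    return V / (2 * math.pi**2) * (2 * m / hbar**2) ** 1.5 * math.sqrt(eps)

de = kB * T / 2000.0
total = sum(D(i * de) * math.exp((mu - i * de) / (kB * T)) * de
            for i in range(1, 40000))     # integrate up to 20 kBT
print(total / N)  # close to 1
```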


6.7 Problems for Chapter 6


6.1 Centrifuge
A circular cylinder of radius $R$ rotates about the long axis with angular velocity $\omega$. The cylinder contains an ideal gas of atoms of mass $M$ at temperature $T$. Find an expression for the dependence of the concentration $n(r)$ on the radial distance $r$ from the axis, in terms of $n(0)$ (the concentration on the axis). Take $\mu$ as for an ideal gas.

6.2 Gibbs sum for a two-level system
Consider a system that may be unoccupied with zero energy or occupied by one particle in either of two states, one of zero energy, the other of energy $\varepsilon$.
(a) Show that the Gibbs sum for this system is
$$\mathcal{Z} = 1 + e^{\mu/k_BT} + e^{(\mu - \varepsilon)/k_BT}.$$
(b) Show that the average number of particles in the system is
$$\langle N \rangle = \frac{e^{\mu/k_BT} + e^{(\mu - \varepsilon)/k_BT}}{\mathcal{Z}}.$$
(c) Find an expression for the energy expectation value of the system.

6.3 Derivative of Fermi-Dirac function evaluated at the chemical potential
Show that
$$-\left.\frac{\partial f}{\partial \varepsilon}\right|_{\varepsilon = \mu} = \frac{1}{4 k_BT}.$$

6.4 Symmetry of filled and vacant orbitals
Let $f(\mu + \delta)$ be the value of the Fermi-Dirac function at the energy $\mu + \delta$, and $f(\mu - \delta)$ the value at $\mu - \delta$. Show that
$$f(\mu + \delta) = 1 - f(\mu - \delta).$$

6.5 Density of states in two dimensions
Show that the density of states for a free electron that moves in two dimensions on a square area $A$ is
$$D(\varepsilon) = \frac{A m}{\pi \hbar^2},$$
independent of the energy.

6.6 Energy and pressure of Fermi gas at $T = 0$
(a) $N$ fermions are confined to a volume $V$ and held at zero temperature. Show that the internal energy of this system is $U = \frac{3}{5} N \varepsilon_F$.
(b) Show that the pressure of the system has the value $p = \frac{2U}{3V} = \frac{2}{5} \frac{N}{V} \varepsilon_F$.

6.7 Fermi gases in astrophysics
(a) Given the mass of the Sun, $2 \times 10^{30}$ kg, estimate the number of electrons in the Sun.
(b) In a white dwarf star this number of electrons may be ionized and contained in a sphere of radius $2 \times 10^7$ m. Find the Fermi energy of the electrons in electron volts.
(c) If the above number of electrons were contained within a pulsar of radius 10 km, show that the Fermi energy would be of the order $10^8$ eV.

6.8 Mass-radius relationship for white dwarf stars
Consider a white dwarf of mass $M$ and radius $R$. The electrons form a degenerate Fermi gas (i.e. the chemical potential is positive). Due to gravity there will be a very high pressure inside the star; the pressure at the very center will be
$$p \approx \frac{G M \rho}{R},$$
where $G$ is the gravitational constant and $\rho$ is the mass density. Unless this gravitational pressure is balanced by something else, the star would collapse. The balancing pressure is provided by the degenerate electron gas.
(a) Show that pressure balance leads to the condition $M^{1/3} R = \mathrm{constant}$; thus the larger the mass, the smaller the radius of a white dwarf.
(b) A white dwarf is an old star that has used up most of its fuel (hydrogen). Estimate the white-dwarf radius of a star with the same mass as the Sun, taking into account that the most common nuclei in a white dwarf are C and O.

Chapter 7 Phase transitions


7.1 Introduction
Almost all substances can exist in different forms depending on the temperature and the pressure. In daily life we often encounter the phase transformations of water. When the outdoor temperature falls below 273.15 K, water in small pools freezes to ice; it does not rain, instead, in case of precipitation, there will be snow on the ground; and if the cold weather persists for several days, there will be ice on the surfaces of lakes. Moreover, in our kitchens we boil water so that the liquid is transformed to gas form. (One usually uses the word vapor, or in the case of H₂O, steam, to describe a gas that is in equilibrium with the liquid form of the substance.)

From our experience with water we also know that for a phase transformation to take place, usually quite a lot of heat must be supplied, i.e. even if the temperature of the substance does not change, its internal energy changes a lot. If one has 1 kg of ice at 0 °C, it takes 333 kJ to melt it, another 419 kJ to increase the temperature of the liquid from 0 °C to 100 °C, and finally as much as 2260 kJ to evaporate all the liquid. Thus, the average energies of the water molecules in gas phase and in liquid phase are quite different. When water is transformed from liquid to gas, the volume increases substantially and the distance between the molecules becomes much larger. This corresponds to an increase in potential energy for the molecules. Water molecules are polar, i.e. they are positively charged on one side and negatively charged on the other. In the liquid the molecules tend to arrange themselves so that the positive end of one molecule is near the negative end of another. This lowers the potential energy of the liquid. To form a gas, the molecules must be brought apart and the potential energy is then increased. The liquid-to-gas transformation for water is illustrated schematically in Fig. 7.1.
One thing that becomes apparent looking at this figure is that all the heat supplied during the transformation does not go into an increase of the internal energy. Instead, since the volume of the vapor is much larger than the volume of the water, the water-vapor system does work. This work equals $p(V_g - V_l)$. Thus the heat supplied, the latent heat $L$, is, per particle,
$$L = (u_g - u_l) + p\,(v_g - v_l), \qquad (7.1)$$

where $u_g$ and $u_l$, and $v_g$ and $v_l$, are the internal energies and volumes per particle in the gas and liquid, respectively. In addition to phase transformations in which a substance changes from solid to liquid form or from liquid to gas form, there are also other, less obvious, phase transformations. It is, for example, not unusual that a solid changes its crystal structure (its internal order) at certain


Figure 7.1: Water is heated at constant pressure. When the temperature reaches the boiling temperature $T_b$, liquid and gas can coexist and some of the water is transformed into vapor. At the same time the volume increases. Eventually, when enough heat has been supplied, all the water has been transformed into vapor. If further heat is supplied the temperature of the gas rises above $T_b$.

temperatures. Ice can for example exist in about 10 different structures (to obtain most of these structures requires rather large pressures). In other phase transformations internal properties, such as the magnetization, change. Simply speaking each of the atoms in a ferromagnetic material like iron carries a spin. At large temperatures these spins point in different directions so that the total magnetization of a sample of iron is zero. However, once the temperature is lowered below the Curie temperature (1043 K for iron), the spins of the different atoms tend to point more or less in the same direction, and a piece of iron spontaneously becomes magnetic; it has a net magnetization.

7.2 Phase diagrams


Let us return to the phase transformations of water and discuss them in more detail. Everything we said above assumed that the pressure was the normal atmospheric pressure at sea level, 101.3 kPa. The boiling point of water then lies at 100 °C. But as you probably know, water boils at lower temperatures than that at high altitude where the pressure from the air is lower (cf. the discussion in Chapter 6). If the pressure is 2/3 atmospheres, the boiling point is down to about 90 °C, and if the pressure is just 1/3 atmospheres it is not more than about 70 °C. If, on the other hand, the pressure is larger than 1 atmosphere the boiling temperature exceeds 100 °C. One can summarize these facts in a so called phase diagram where temperature is given on one axis, while the other axis gives the pressure. We then draw a curve corresponding to the boiling temperature as a function of the pressure. This is done for water in Fig. 7.2. Every point in the diagram corresponds to a system in thermal and mechanical equilibrium, because the temperature and pressure are assumed to be constant throughout the system. In most points only one phase can exist, the one in which the water molecules have the lowest chemical potential. On the curves dividing two phases the chemical potential for water molecules is the same in both of them; the two phases can coexist in chemical equilibrium at these particular combinations of temperature and pressure.


Figure 7.2: The phase diagram for water. The left panel gives an overview over a large range of temperatures and pressures, while the right panel gives a detailed view of the low-pressure range.

The curves in the left panel show at what temperature and pressure ice melts to water and when water boils. Thus, we have three different regions in the diagram, one for each of the three different states in which one can find H₂O. Apart from confirming what we know from our everyday experience there are a few points of special interest that can be found in Fig. 7.2. The lines dividing solid from liquid, solid from gas, and liquid from gas meet in one point, the so called triple point at $T_{tr} = 0.01$ °C and $p_{tr} = 0.6113$ kPa. At these particular conditions all three phases can coexist in thermal and chemical equilibrium. Figure 7.3 illustrates how the triple point conditions for water can be realized if an initially completely evacuated vessel is partially filled with water and then cooled until some, but not all, of the water has frozen to ice.
Figure 7.3: Illustration of how the triple point conditions for water can be realized.

The line dividing solid from liquid starts at the triple point and extends from there essentially to arbitrarily large pressures. This line has a negative slope: the melting temperature decreases from 0.01 °C at the triple point to 0 °C at atmospheric pressure. This behavior is unusual. For most substances this line has a positive slope. Then one can bring the substance from the liquid to the solid form at a constant temperature by increasing the pressure. With ice and water things work in the opposite way. By applying a pressure on a piece of ice, one can melt it even if the temperature does not change. This has to do with another unusual property of


water: its volume increases when it freezes. For other substances usually the volume decreases upon forming a solid. The curve dividing liquid phase from gas has a positive slope, i.e. a gas can be brought to liquid form by increasing the pressure at a constant temperature. The line dividing liquid from gas does not extend to arbitrarily large pressures and temperatures. Instead it ends at the so called critical point, $T_c = 374.14$ °C and $p_c = 22\,090$ kPa. Beyond this point there is no qualitative difference between liquid water and water vapor. Quite naturally this difference does not disappear in an entirely abrupt way at the critical point. Instead, as one approaches $T_c$ and $p_c$, the latent heat involved in transforming liquid to gas decreases steadily and becomes 0 right at the critical point. Likewise the volume difference (per particle) between the two phases decreases as one comes closer to the critical point. Right at the critical point, the two volumes are equal. As a matter of fact, since there is no difference between a liquid and a gas beyond the critical point, it is possible to transform a substance from liquid to gas twice, without transforming it back to a liquid in between. Suppose that we have water at a temperature and pressure corresponding to point A in Fig. 7.2. By increasing the temperature the liquid can be turned into gas (at point B in the phase diagram). If then both temperature and pressure first are increased and then decreased so that the state of the substance follows the dashed curve encircling the critical point, we return to the state A without going through a phase transformation from gas to liquid. Of course, one needs to reach rather extreme pressures to have water go through such a process. Under normal conditions there is a very distinct difference between liquid water and water vapor.

The slope of a coexistence curve. It is possible to express the slope $dp/dT$ of a coexistence curve in terms of the latent heat and the change of volume that occurs in connection with the phase transformation. The expression reads
$$\frac{dp}{dT} = \frac{L}{T\,\Delta v}, \qquad (7.2)$$

where $L$ is the latent heat per particle and $\Delta v$ (equal to $v_g - v_l$ in the gas-liquid case) is the change of volume per particle. This equation is called the Clausius-Clapeyron law. To derive Eq. (7.2) one looks at the variation of the chemical potential of both the coexisting phases, for example liquid and gas, along the coexistence curve. While both $\mu_l$ and $\mu_g$ vary with temperature and pressure, they must always equal each other, $\mu_l = \mu_g$, on the curve. To proceed with the derivation we introduce a new free energy, the so called Gibbs free energy,
$$G = U - TS + pV. \qquad (7.3)$$
In view of the thermodynamic identity, (6.8), we have the differential,


$$dG = -S\,dT + V\,dp + \mu\,dN. \qquad (7.4)$$

Gibbs free energy can thus be viewed as a function of the three independent variables $T$, $p$, and $N$. Since $G$ is a kind of energy, and just looking at the definition in Eq. (7.3), one realizes that $G$ is proportional to the size of the system. But of the variables $T$, $p$, and $N$, only $N$ has to do with the size of the system. (Such a variable is said to be extensive, in contrast with variables like $T$ and $p$ that are said to be intensive.) As a consequence one can write
$$G(T, p, N) = N\, g(T, p), \qquad (7.5)$$
where $g$ is a function of $T$ and $p$. From Eq. (7.4) we can conclude that
$$\mu = \left(\frac{\partial G}{\partial N}\right)_{T,p} = g(T, p); \qquad (7.6)$$
thus, the function $g$ is actually the chemical potential, and
$$\mu = \frac{G}{N}. \qquad (7.7)$$


Equation (7.7) says that the chemical potential is equal to Gibbs free energy per particle. In analogy with (7.4), the differential $d\mu$ can then be written
$$d\mu = -s\,dT + v\,dp,$$

where $s$ is the entropy per particle and $v$ the volume per particle. Now we must write down expressions for $d\mu$ both in the liquid phase and the gas phase,
$$d\mu_l = \left(-s_l\,dT + v_l\,dp\right)_{\mathrm{coex}}, \qquad (7.8)$$
$$d\mu_g = \left(-s_g\,dT + v_g\,dp\right)_{\mathrm{coex}}, \qquad (7.9)$$

where the subscript indicates that the changes in $T$ and $p$ should be made such that we stay on the coexistence curve. Requiring that $d\mu_l = d\mu_g$ now yields the slope of the curve,
$$\frac{dp}{dT} = \frac{s_g - s_l}{v_g - v_l}. \qquad (7.10)$$


The heat that must be supplied to bring a particle from the liquid phase to the gas phase is $L = T(s_g - s_l)$. This yields Eq. (7.2) if the numerator and denominator in (7.10) are multiplied by $T$,
$$\frac{dp}{dT} = \frac{T(s_g - s_l)}{T(v_g - v_l)} = \frac{L}{T(v_g - v_l)}. \qquad (7.11)$$


If one instead wants to look at the slope of the coexistence curve for the liquid and solid phase, all that needs to be done is to put in the corresponding entropy and volume differences into Eq. (7.11). For the case of water, this brings out the connection between the negative slope of this curve and the volume expansion when water freezes to ice. The latent heat, and thus $s_l - s_s$, is positive, but $v_l - v_s$ is negative, and the slope is negative.
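As a numerical illustration of Eq. (7.11), the slope at the normal boiling point of water follows from steam-table values (the same numbers quoted in Problem 7.1 below). This sketch is not part of the original notes:

```python
# Clausius-Clapeyron slope dT/dp at the boiling point of water,
# using per-kilogram quantities (the per-particle factors cancel).
L  = 2260e3          # latent heat, J/kg
T  = 373.15          # boiling temperature, K
vg = 1.6729          # vapor volume, m^3/kg
vl = 0.001044        # liquid volume, m^3/kg

dp_dT = L / (T * (vg - vl))        # Pa/K
dT_dp = 1.0 / dp_dT * 101325       # K per atmosphere
print(f"dT/dp = {dT_dp:.1f} K/atm")  # about 28 K/atm
```

This says the boiling point rises roughly 28 K for each additional atmosphere of pressure near 1 atm, which is why pressure cookers work.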

7.3 Applications to practical situations


The discussion above has always assumed that the system we look at contains only one substance, only one kind of molecules. The container in Fig. 7.1, for example, only contains H₂O molecules, either in gas phase or in the liquid phase. Normally when we for example boil water, we do this in air at atmospheric pressure. This means that the pressure that the liquid water feels from the surroundings equals the atmospheric pressure, 101.3 kPa, regardless of the temperature. Then the liquid may, or may not, be in equilibrium with the water vapor present in the atmosphere. The pressure from the vapor is, however, low compared with that coming from the nitrogen and oxygen molecules in the air. Typically what happens when one puts some water to boil on the stove is that already when the water temperature reaches 40–50 °C, one sees vapor rising from the surface of the




water. This is because the chemical potential in the liquid is higher than for water molecules in the air (there are very few water molecules there). The molecules that have evaporated from the liquid will quickly spread into the room and one will never reach an equilibrium situation. Thus, water can be transformed to gas form at atmospheric pressure also at rather low temperatures, much lower than 100 °C. In this way the water in rainwater pools evaporates after a while. If water can evaporate at temperatures lower than 100 °C, what is then so special about the boiling point? Well, at this temperature, the liquid can be in equilibrium with pure water vapor at atmospheric pressure. The concrete result of this is that gas bubbles containing nothing but H₂O molecules begin to form inside the liquid. The liquid boils. These issues are illustrated in Fig. 7.4.

Figure 7.4: When water is heated at atmospheric air pressure some of the liquid evaporates before the temperature reaches the boiling point, because the H₂O pressure in the air, and thus the chemical potential, is very low. Once the temperature reaches the boiling point, $T_b$, the liquid can be in chemical equilibrium with pure water vapor at atmospheric pressure. Then gas bubbles, containing only water vapor, start to form in the liquid.

7.4 The van der Waals equation of state


The van der Waals equation of state, briefly discussed in Benson's book, gives an opportunity to deal with a liquid-gas phase transition in a relatively simple way. This equation can be viewed as a generalization of the ideal gas law and reads
$$\left(p + \frac{N^2 a}{V^2}\right)\left(V - Nb\right) = N k_B T. \qquad (7.12)$$

Here $a$ and $b$ are coefficients that depend on what particles we are dealing with. The term $Nb$ can be thought of as a reduction of the total volume since part of it is occupied by particles. The term $N^2 a / V^2$ corresponds to a reduction of the pressure because normally gas atoms or molecules separated from each other interact attractively. If one introduces the critical temperature $T_c$, the critical pressure $p_c$, and the critical volume $V_c$, given by
$$k_B T_c = \frac{8a}{27b}, \qquad p_c = \frac{a}{27b^2}, \qquad V_c = 3Nb, \qquad (7.13)$$
the equation of state can be written
$$\left(\frac{p}{p_c} + \frac{3}{(V/V_c)^2}\right)\left(3\frac{V}{V_c} - 1\right) = 8\frac{T}{T_c}. \qquad (7.14)$$

If $p$ is plotted as a function of $V$ for different values of $T$ (these curves are called isotherms), as in the left panel in Fig. 7.5, one finds that for sufficiently high temperatures, $T > T_c$, there is a unique value for the volume for all possible values of the pressure. For $T = T_c$ the $p$-$V$ curve has a terrace point at $p = p_c$ and $V = V_c$, and when $T < T_c$ there is no longer a unique relation between pressure and volume for all values of the pressure.
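The reduced equation of state (7.14) makes it easy to explore the isotherms numerically. The following sketch (not part of the notes; variable names are my own) checks that an isotherm decreases monotonically with volume above $T_c$ but develops a rising, unstable part below $T_c$:

```python
def p_reduced(v, t):
    """Reduced van der Waals pressure p/pc for v = V/Vc, t = T/Tc, from Eq. (7.14)."""
    return 8.0 * t / (3.0 * v - 1.0) - 3.0 / v**2

# Scan volumes from 0.5 Vc to 6 Vc and look for a rising section.
vs = [0.5 + 0.01 * i for i in range(551)]
for t in (1.1, 1.0, 0.9):
    ps = [p_reduced(v, t) for v in vs]
    rising = any(b > a for a, b in zip(ps, ps[1:]))
    print(f"T/Tc = {t}: isotherm has a rising part: {rising}")
```

The rising section for $T < T_c$ is exactly the part of the isotherm that does not correspond to a stable state of matter and is replaced by the horizontal coexistence line in Fig. 7.5 (c).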
Figure 7.5: Panel (a) shows plots of the pressure as a function of the volume found from the van der Waals equation of state for different values of the temperature. The values given next to the curves are $T/T_c$. In panel (b), the thick curve shows how the volumes of liquid and gas, that are in equilibrium with each other, vary with pressure. Panel (c), finally, shows in more detail how the van der Waals equation of state leads to separation between different phases, liquid and gas. An isotherm with $T < T_c$ is shown, and at high pressure only the liquid phase can exist. However, at the vapor pressure $p_{vap}$, liquid and gas can coexist. The relation between pressure and volume is then not given by the van der Waals equation of state (thin curve) but follows instead a straight horizontal line. The volume can then vary between $V_l$ and $V_g$ depending on the proportions of gas and liquid. If the pressure is lower than $p_{vap}$, at this temperature only the gas phase can exist.

Every point on the isotherms for $T > T_c$ corresponds to a possible physical state for a gas that obeys the van der Waals equation of state. In case $T < T_c$, the part of an isotherm to the left of the left half of the thick full curve in Fig. 7.5 describes how the pressure varies with volume in the liquid state, and the part of an isotherm to the right of the right half of the thick curve gives pressure as a function of volume in the gas phase. However, the part of an isotherm that lies in between the two halves of the thick curve does not correspond to a stable state of matter. The physical state will instead be a mixture of liquid and gas at constant pressure. The exact value of the volume depends on the proportions between liquid and gas. When a liquid is transformed to gas at constant pressure, as illustrated in Fig. 7.1 (b), (c), and (d), the volume increases from $V_l$ to $V_g$ along a horizontal line in a $p$-$V$ diagram connecting two points on an isotherm. This is illustrated in some more detail in Fig. 7.5 (c).
(Note, however, that for a real substance the exact shape of the isotherm is different; no real substance follows the van der Waals equation of state very closely.)


7.5 Problems for Chapter 7


7.1 Slope of the coexistence curve for water
Calculate the slope $dT/dp$ for the coexistence curve for water and water vapor near atmospheric pressure. The latent heat is $L = 2260$ kJ/kg at 100 °C, and the volume of the liquid is 0.001044 m³/kg and that of the gas is 1.6729 m³/kg. Express the answer in K/atmosphere.

7.2 Internal evaporation energy for water
Use the data given in the previous problem to calculate the increase of the internal energy when 1 kg of water is evaporated at 100 °C.

7.3 Chemical potential for water at the boiling point
Use the data and findings from the two previous problems (7.1 and 7.2) to determine the increase of entropy for 1 kg of water when it is evaporated at 100 °C. Then verify that the chemical potentials for liquid and vapor equal each other under these conditions so that the liquid and gas phases can coexist.

7.4 Heat of vaporization of ice
The pressure of water vapor over ice is 3.88 mm Hg at −2 °C and 4.58 mm Hg at 0 °C. Estimate from this the heat of vaporization of ice at −1 °C. Hint: Assume that the volume of the ice can be neglected compared with that of the vapor, and that the vapor follows the ideal gas law.

7.5 Vapor-pressure equation over ice
Take the ideas from the previous problem one step further: Assume that the difference in volume between ice and water vapor in equilibrium is equal to the vapor volume and that the vapor can be treated as an ideal gas. Show that this leads to the following expression for the slope of the coexistence curve,
$$\frac{dp}{dT} = \frac{L\,p}{k_B T^2},$$
where $L$ is the latent heat per particle. If we neglect any temperature variations of the latent heat $L$, the above differential equation is separable and has the solution
$$p(T) = p_0\, e^{-(L/k_B)\left(1/T - 1/T_0\right)},$$
where $(T_0, p_0)$ is one point on the coexistence curve. Given that the latent heat for the solid-vapor transition at 0 °C is 2835 kJ/kg, estimate the vapor pressure over ice at −40 °C. Compare the result you get with the experimental value, 0.0129 kPa.
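As a sketch of how a vapor-pressure formula of this kind can be used numerically (taking the triple-point values of water as the reference point $(T_0, p_0)$; this choice and the molecular mass are my own assumptions, not from the notes):

```python
import math

kB = 1.381e-23              # Boltzmann constant, J/K
m_H2O = 18.02 * 1.661e-27   # mass of one water molecule, kg

L = 2835e3 * m_H2O          # latent heat per molecule, J (2835 kJ/kg)
T0, p0 = 273.16, 0.6113e3   # triple point of water: K, Pa

T = 233.15                  # -40 C in K
p = p0 * math.exp(-(L / kB) * (1.0 / T - 1.0 / T0))
print(f"p(-40 C) = {p / 1000:.4f} kPa")  # close to the experimental 0.0129 kPa
```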

Appendix A Summation of geometric series


The sum of a geometric series,
$$S = \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + \ldots \qquad (A.1)$$
(where we assume that $|x| < 1$) can be calculated by noting that
$$S = 1 + x\left(1 + x + x^2 + \ldots\right) = 1 + xS, \qquad (A.2)$$
which yields
$$S = \frac{1}{1 - x}. \qquad (A.3)$$
In these notes we have often (for example when dealing with the harmonic oscillator) encountered sums like
$$Z = \sum_{n=0}^{\infty} e^{-n\hbar\omega/k_BT}. \qquad (A.4)$$
In this case, by identifying $e^{-\hbar\omega/k_BT}$ with $x$, we get
$$Z = \frac{1}{1 - e^{-\hbar\omega/k_BT}}. \qquad (A.5)$$
When calculating the energy expectation value for a harmonic oscillator we also encountered the sum
$$Z' = \sum_{n=0}^{\infty} n\, e^{-n\hbar\omega/k_BT}. \qquad (A.6)$$
To evaluate $Z'$ we use the fact that it is related to the simpler sum $Z$. Termwise derivation of $Z$, written as a sum over powers of $x = e^{-\hbar\omega/k_BT}$, yields
$$\frac{dZ}{dx} = \sum_{n=1}^{\infty} n\, x^{n-1} = \frac{Z'}{x}. \qquad (A.7)$$
On the other hand we can also calculate $dZ/dx$ from the result of Eq. (A.5),
$$\frac{dZ}{dx} = \frac{d}{dx}\left(\frac{1}{1 - x}\right) = \frac{1}{(1 - x)^2}. \qquad (A.8)$$
Thus, by combining Eqs. (A.7) and (A.8), we get the final result
$$Z' = \sum_{n=0}^{\infty} n\, e^{-n\hbar\omega/k_BT} = \frac{e^{-\hbar\omega/k_BT}}{\left(1 - e^{-\hbar\omega/k_BT}\right)^2}. \qquad (A.9)$$
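The two closed forms are easy to check numerically (a sketch, not part of the notes):

```python
x = 0.3  # stands for e^{-hbar omega / kB T}; any 0 < x < 1 works

S_exact  = 1.0 / (1.0 - x)          # Eq. (A.3)
Sp_exact = x / (1.0 - x) ** 2       # Eq. (A.9), sum of n x^n

# Partial sums: 200 terms are far more than enough for x = 0.3.
S_num  = sum(x**n for n in range(200))
Sp_num = sum(n * x**n for n in range(200))
print(abs(S_num - S_exact), abs(Sp_num - Sp_exact))
```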


Index
atmosphere, 44 black body, 25 boiling point, 58 Boltzmann constant, 5 Boltzmann distribution, 9–15 mathematical expression, 11 Boltzmann factor, 10 Bose-Einstein distribution, 47 bosons, 46 Boyle's law, 35 canonical ensemble, 9 centrifuge, 51 Charles' law, 35 chemical equilibrium, 41, 54 chemical potential, 42, 54 external, 43 ideal gas, 43 internal, 43 Clausius-Clapeyron law, 56 coexistence curve, 56 critical point, 56 critical pressure, 58 critical temperature, 58 critical volume, 58 Curie temperature, 54 degeneracy, 15 degenerate energy level, 15 density of photon modes, 26 density of states two dimensions, 51 diatomic molecule, 11 diffusion, 45 distinguishable particles, 34 DNA, 22 drift, 45 electron density of states, 49 energy in electromagnetic field, 27 entropy, 38 definition, 4 entropy increase, 7, 42 equipartition principle, 13, 31 extensive variable, 56 Fermi energy, 49 Fermi gas energy, 51 pressure, 51 Fermi-Dirac distribution, 47 fermions, 46 ferromagnet, 54 first law, 32 fluctuations, 6 in macroscopic system, 6 fundamental assumption, 3, 10 Gibbs distribution, 46 Gibbs free energy, 56 and chemical potential, 57 Gibbs sum, 46 grand canonical ensemble, 46 grand sum, 46 Hamlet, 8 harmonic oscillator, 11 entropy, 38 free energy, 38 two-dimensional, 14 heat, 32, 35 heat capacity, 14 monatomic ideal gas, 35 Helmholtz free energy, 32 and partition function, 33 ideal gas, 34 ideal gas, 30 ideal gas law, 35 indistinguishable particles, 34 intensive variable, 56 internal energy ideal gas, 35 lasers, 46

latent heat, 53 magnetic susceptibility, 21 Max Planck, 27 Maxwell-Boltzmann distribution, 37 measurements macroscopic and microscopic, 3 metal, 47 microcanonical ensemble, 9 most important result, 11 Mount Everest, 44 multiplicity, 4 relation to entropy, 4 in spin system, 4 maximum, 5, 6 systems in thermal contact, 5 neutron star, 47 particle in a box, 30 thermal energy, 31 partition function definition, 11 for harmonic oscillator, 12 Pauli principle, 46, 48 phase diagram, 54 phase transition, 53 photon modes, 23 Planck distribution function, 13 polymer, 38 pressure, 31 quantum concentration, 33 quantum volume, 33 radiative energy flux, 25 Rayleigh-Jeans law, 27 reservoir, 9 second law, 7 spin system, 37 in magnetic field, 3 internal energy, 4 net, 4 steam, 53 Stefan-Boltzmann constant, 25 Stefan-Boltzmann law, 25 temperature theoretical definition, 7 thermal contact, 5 thermodynamic identity, 32 tillståndssumma, 11 triple point, 55 two-level system, 3 van der Waals equation of state, 58 vapor, 53 white dwarf, 47 work, 31, 32, 35 zipper, 21
