PHYSICS 213
Elements of Thermal Physics
3rd Edition

James P. Wolfe
Department of Physics
University of Illinois at Urbana – Champaign

Copyright © 2010 by James P. Wolfe
Copyright © 2010 by Hayden-McNeil, LLC on illustrations provided
Photos provided by Hayden-McNeil, LLC are owned or used under license
Permission in writing must be obtained from the publisher before any part of this work may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage or retrieval system.
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1

ISBN 978-0-7380-4119-3

Hayden-McNeil Publishing
14903 Pilot Drive
Plymouth, Michigan 48170
www.hmpublishing.com
Wolfe 4119-3 F10

Contents

Preface ......................................................................................................... vii
Definition of Symbols .................................................................................. ix
Table of Constants and Conversion Factors................................................ xi

Introduction and Overview
A. Classical to Quantum Physics ............................................................... xiii
B. Systems with Many Particles ..................................................................xiv
C. Statistics and Entropy..............................................................................xv
D. Road Map for this Course .....................................................................xvii
Chapter 1: Origins of Mechanical Energy
A. Kinetic Energy and Work ......................................................................... 1
B. Extension to Many-Particle Systems........................................................ 3
C. Internal Energy ....................................................................................... 5
D. Potential Energy ...................................................................................... 7
E. Vibrational Energy—Kinetic plus Potential ............................................ 8
Chapter 2: Irreversibility and the Second Law of Thermodynamics
A. Thermal Energy ..................................................................................... 13
B. Irreversibility of Many-Body Systems .................................................... 14
C. Entropy and the Approach to Equilibrium ............................................ 14
D. Entropy Maximization and the Calculus of Several Variables .............. 17
Chapter 3: Kinetic Theory of the Ideal Gas
A. Common Particles .................................................................................. 21
B. Pressure and Kinetic Energy .................................................................. 22
C. Equipartition Theorem .......................................................................... 23
D. Equipartition Applied to a Solid ........................................................... 25
E. Ideal Gas Law ......................................................................................... 26
F. Distribution of Energies in a Gas ........................................................... 27
Chapter 4: Ideal-Gas Heat Engines
A. The First Law of Thermodynamics ...................................................... 31
B. Quasi-static Processes and State Functions ........................................... 32

C. Isothermal and Adiabatic Processes—Reversibility ............................... 32
D. Entropy of the Ideal Gas—a First Look ................................................ 36
E. Converting Heat into Work ................................................................... 37
F. Refrigerators and Heat Pumps ................................................................ 40

Chapter 5: Statistical Processes I: Two-State Systems
A. Macrostates and Microstates .................................................................. 45
B. Multiple Spins ......................................................................................... 46
C. The Random Walk Problem—Diffusion of Particles ........................... 50
D. Heat Conduction.................................................................................... 54
Chapter 6: Statistical Processes II: Entropy and the Second Law
A. Meaning of Equilibrium ......................................................................... 59
B. Objects in Multiple Bins ........................................................................ 60
C. Application to a Gas of Particles ............................................................ 62
D. Volume Exchange and Entropy ............................................................. 64
E. Indistinguishable Particles ..................................................................... 68
F. Maximum Entropy in Equilibrium ......................................................... 68
Chapter 7: Energy Exchange
A. Model System for Exchanging Energy .................................................. 73
B. Thermal Equilibrium and Absolute Temperature.................................. 78
C. Equipartition Revisited .......................................................................... 79
D. Why Energy Flows from Hot to Cold .................................................. 81
E. Entropy of the Ideal Gas—Temperature Dependence .......................... 82
Chapter 8: Boltzmann Distribution
A. Concept of a Thermal Reservoir ............................................................ 87
B. The Boltzmann Factor............................................................................ 88
C. Paramagnetism ....................................................................................... 91
D. Elasticity in Polymers............................................................................. 94
E. Harmonic Oscillator .............................................................................. 95
Chapter 9: Distributions of Molecules and Photons
A. Applying the Boltzmann Factor ............................................................. 99
B. Particle States in a Classical Gas .......................................................... 100
C. Maxwell-Boltzmann Distribution ........................................................ 102
D. Photons ................................................................................................. 103
E. Thermal Radiation................................................................................ 105
F. Global Warming .................................................................................... 107
Chapter 10: Work and Free Energy
A. Heat Flow and Entropy ........................................................................ 111
B. Ideal Heat Engines ............................................................................... 112
C. Free Energy and Available Work ......................................................... 114
D. Free Energy Minimum in Equilibrium ............................................... 115

E. Principle of Minimum Free Energy .................................................. 116
F. Equipartition of Energy ........................................................................ 117
G. Paramagnetism—the Free Energy Approach ...................................... 119

Chapter 11: Equilibrium between Particles I
A. Free Energy and Chemical Potential ................................................... 123
B. Absolute Entropy of an Ideal Gas......................................................... 125
C. Chemical Potential of an Ideal Gas ..................................................... 128
D. Law of Atmospheres ............................................................................. 129
E. Physical Interpretations of Chemical Potential ................................... 130
Chapter 12: Equilibrium between Particles II
A. Ionization of Atoms .............................................................................. 135
B. Chemical Equilibrium in Gases............................................................ 137
C. Carrier Densities in a Semiconductor ................................................. 139
D. Law of Mass Action: Doped Semiconductors .................................... 142
Chapter 13: Adsorption of Atoms and Phase Transitions
A. Adsorption of Atoms on a Solid Surface .............................................. 145
B. Oxygen in Myoglobin ........................................................................... 147
C. Why Gases Condense .......................................................................... 148
D. Vapor Pressure of a Solid ..................................................................... 148
E. Solid/Liquid/Gas Phase Transitions .................................................... 151
F. Model of Liquid–Gas Condensation .................................................... 154
Chapter 14: Processes at Constant Pressure
A. Gibbs Free Energy................................................................................ 157
B. Vapor Pressures of Liquids—General Aspects ..................................... 160
C. Chemical Reactions at Constant Pressure ........................................... 161
Appendices
Appendix 1: Vibrations in Molecules and Solids—Normal Modes ......... 165
Appendix 2: The Stirling Cycle ................................................................ 169
Appendix 3: Statistical Tools ..................................................................... 173
Appendix 4: Table of Integrals .................................................................. 179
Appendix 5: Exclusion Principle and Identical Particles .......................... 181
Appendix 6: Sum over States and Average Energy ................................... 185
Appendix 7: Debye Specific Heat of a Solid ............................................. 189
Appendix 8: Absolute Entropy of an Ideal Gas ........................................ 191
Appendix 9: Entropy and Diatomic Molecules ........................................ 195
Appendix 10: Vapor Pressure of a Vibrating Solid ................................... 199
Solutions to Exercises ............................................................................... 201
Index .......................................................................................................... 215
*May not be covered in Physics 213

The central ideas in this course have a wide range of applications. For example:

• fabrication of materials
• chemical reactions
• biological processes
• phase transitions
• magnetism
• electrons and holes in semiconductors
• converting energy into work
• thermal radiation (global warming)
• thin films and surface chemistry
• and much more...

Preface

The unifying concepts of entropy and free energy are essential to the understanding of physical, chemical, and biological systems. Recognizing that these concepts permeate the undergraduate science and engineering curricula, the
Physics Department has created this sophomore-level course dealing with thermodynamics and statistical mechanics. Starting with a few basic principles, we introduce practical tools for solving a variety of problems in the areas of materials science, electrical engineering, chemistry, and biology.
These introductory notes on Thermal Physics are designed to be used in concert with the Physics 213 Lectures, Discussion Exercises, Homework Problems, and
Laboratory Exercises. The Lectures summarize the principal ideas of the course with live demonstrations and active-learning exercises. Discussion problems are solved cooperatively by students. The lab experiments lend reality to the basic principles.
Exercises at the end of each chapter are designed to complement discussion and homework problems. Solutions to most Exercises are provided in the back pages. Appendices (and Chapter 14) are included for students who want to dig a little deeper into the subjects of this course and gain additional links to advanced courses.

Acknowledgements
The precursor to this course was first taught in Fall 1997 and Spring 1998 by
Michael Weissman and Dale Van Harlingen. Subsequent versions of the course were developed by Doug Beck, Michael Weissman, Jon Thaler, Michael Stone,
Paul Debevec, Lance Cooper, Yoshi Oono, Paul Kwiat, and myself. I particularly wish to thank Mike Weissman, Lance Cooper, Yoshi Oono, Inga Karliner, and
Paul Kwiat for insightful suggestions and corrections to this book.


Reference Texts
In developing material for this course, I have drawn heavily from three excellent books.
For students who wish to extend their knowledge in this area, I highly recommend them:
C. Kittel and H. Kroemer, Thermal Physics, Second Edition (W. H. Freeman, 1980)
D.V. Schroeder, An Introduction to Thermal Physics (Addison-Wesley, 1999)
F. Reif, Fundamentals of Statistical and Thermal Physics (McGraw-Hill, 1965)
The following books also provide very useful perspectives:
Thomas A. Moore, Six Ideas That Shaped Physics, Unit T (McGraw-Hill, 1998)
F. Reif, Statistical Physics, Berkeley Physics Course—Vol. 5 (McGraw-Hill, 1965)
Steven S. Zumdahl, Chemical Principles, 5th Edition (Houghton Mifflin, 2005)
Professor Gino Segre has written a fascinating historical perspective of the world from the viewpoint of thermodynamics. It’s a “must read” for science and engineering majors:
Gino Segre, A Matter of Degrees: What Temperature Reveals about the Past and Future of Our Species, Planet, and Universe (Penguin, USA, 2003).


Definition of Symbols
(Alphabetically arranged)

A: area, or amplitude of vibration
<a>: thermal average of the variable a
α = U/pV = U/NkT: constant for an ideal gas over some T range; α = 3/2 (monatomic), 5/2 (diatomic)
β = 1/kT: shorthand used in the Boltzmann factor, exp(–βEn)
Cv (Cp): heat capacity at constant volume (pressure)
cv = Cv/n: molar specific heat (n = # moles)
E: energy of a single particle, or a single oscillator
ε = hf: quantum of energy for an oscillator with frequency f
ε = Wby/Qh: efficiency of a heat engine operating between Qh and Qc
KE: translational kinetic energy (= ½mv² = p²/2m for a single particle)
En: energy of quantum state labeled n (an integer); e.g., En = nε for an oscillator, nμB for a spin, n²h²/(8mL²) for a particle in a box, and –(13.6 eV)/n² for an H-atom
F = U – TS: Helmholtz free energy
F: force
G = U + pV – TS: Gibbs free energy
H = U + pV: enthalpy
γ = (α + 1)/α: adiabatic constant (pV^γ = constant for an adiabatic process with an ideal gas)
h: Planck’s constant = 6.63 × 10⁻³⁴ J·s; ℏ = h/2π
k: Boltzmann constant = 1.381 × 10⁻²³ J/K
ln(x): natural logarithm of x (base e = 2.7183)
log(x): base-10 logarithm of x
ℓx: step length in a 1-d random walk process
ℓ: mean free path of a particle = vτ
λm: wavelength of a standing wave with mode index m
M: # bins or cells for one particle, or # steps in a random walk
M = (Nup – Ndown)μ = mμ: total magnetic moment of N spins
μ: magnetic moment of one spin, or chemical potential
m: mass of a particle
m: integer = Nup – Ndown or Nleft – Nright in the binomial distribution
N: # particles, # oscillators, or # spins
n: # moles
n = N/V: number density of particles (p = nkT for an ideal gas)
nQ: quantum density of an ideal gas
NA: Avogadro’s constant = 6.02 × 10²³ molecules/mole
p = F/A: pressure
p = mv: momentum of a particle (distinguish from pressure by usage)
px = mvx: x-component of momentum of a particle
Pn: probability that a particle is in a quantum state labeled n
P(m): probability of sampling m = Nup – Ndown or Nleft – Nright
P(E), P(x), P(m): probability density (per unit energy, distance, or step)
P(E)dE: probability that a particle has energy between E and E + dE
q: number of energy quanta in an oscillator
Q: heat (positive if inflow, negative if outflow)
Qh and Qc: heat flow to/from hot and cold reservoirs (defined as positive)
R = NAk: gas constant = 8.314 J/mol·K
σ = ln Ω: (dimensionless) entropy of a system with Ω microstates
σSB: Stefan–Boltzmann constant
σR(UR) or SR(UR): entropy of a thermal reservoir with energy UR
S = kσ = k ln(Ω): conventional entropy (units J/K)
σd: standard deviation of a distribution
T: absolute temperature (Kelvin)
τ: mean collision time for a particle
U: energy of a many-particle system
UR: energy of a thermal reservoir
V: volume
v: speed of a particle
Ω: # microstates for a many-particle system
Ω = Ω1Ω2: # microstates of 2 combined systems that separately have Ω1 and Ω2 microstates
Wby or Won: work done by or on a system; Wby = ∫p dV
ΩR(UR): # microstates of a thermal reservoir with energy UR
Ω(E)dE: # microstates of a particle with energy between E and E + dE
x: position (e.g., distance in a random walk)


Table of Constants

k = 1.381 × 10⁻²³ J/K = 8.617 × 10⁻⁵ eV/K (Boltzmann’s constant)
NA = 6.022 × 10²³ (Avogadro’s constant)
R = NAk = 8.314 J/mol·K = 0.0821 liter·atm/mol·K (gas constant)
h = 6.626 × 10⁻³⁴ J·s = 4.136 × 10⁻¹⁵ eV·s (Planck’s constant)
ℏ = h/2π = 1.055 × 10⁻³⁴ J·s
c = 3.00 × 10⁸ m/s (speed of light)
g = 9.80 m/s² (acceleration due to earth’s gravity)
me = 9.11 × 10⁻³¹ kg (mass of an electron)
mp = 1.674 × 10⁻²⁷ kg = 1836 me (mass of a proton)
μe = 9.2848 × 10⁻²⁴ J/T = 57.95 μeV/tesla (electron magnetic moment)
σSB = 5.670 × 10⁻⁸ W/m²·K⁴ (Stefan–Boltzmann constant)

Conversion Factors

1 liter = 10³ cm³ = 10⁻³ m³
1 Pa = 1 N/m²
1 atm = 1.013 × 10⁵ Pa
1 cal = 4.184 J = energy to raise 1 g of H2O by 1 K
T(K) = T(°C) + 273 = (5/9)(T(°F) – 32) + 273
1 eV = 1.602 × 10⁻¹⁹ J
1 eV/particle = 96.5 kJ/mol
At 300 K, kT = 0.026 eV
At 300 K, RT = 2494 J/mol
1 liter·atm = 101.3 J

Temperature Scales

                     ABSOLUTE    CELSIUS    FAHRENHEIT
water boils          373 K       100°C      212°F
"room temperature"   ~293 K      ~20°C      ~68°F
water freezes        273 K       0°C        32°F
liquid nitrogen      77 K        -196°C     -320°F
absolute zero        0 K         -273°C     -460°F
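The conversions in the table above are easy to script. This is a minimal sketch (the function names are mine, not part of the text), using the rounded offset of 273 from the conversion table:

```python
def c_to_k(t_c):
    """Celsius to kelvin, using the rounded offset 273 from the table."""
    return t_c + 273

def f_to_c(t_f):
    """Fahrenheit to Celsius."""
    return (5.0 / 9.0) * (t_f - 32)

def f_to_k(t_f):
    """Fahrenheit to kelvin, combining the two conversions above."""
    return c_to_k(f_to_c(t_f))

print(c_to_k(100))          # water boils: 373 K
print(f_to_k(32))           # water freezes: 273.0 K
print(round(f_to_k(-320)))  # liquid nitrogen: 77 K
```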


Introduction and Overview

A. Classical to Quantum Physics
Physics seeks to explain and predict the world around us using precise mathematical tools. The math tools we develop are based on experimental observations. The essential test of a mathematical theory is its ability to predict the behavior of nature in new situations. The theories that allowed the engineers and scientists of the 1960s to put a man on the moon were developed and tested right here on earth. The future extension of microchips into the sub-micron regime (and the development of new data-storage media) will rely upon the creative application of the present theories of materials.
At the beginning of the 20th century, classical mechanics and electromagnetic theory were well developed, so when microscopic particles such as the electron were discovered, scientists were quick to apply these proven theories to the new regime. Explaining the atom in planetary terms soon failed. An electron circulating around a nucleus is continually accelerating, and an accelerating charge radiates electromagnetic energy like a radio antenna. Thus, according to classical theories, the electron should lose its energy and spiral into the nucleus. A resolution of this dilemma was provided by scientists in the early 1900s who proposed that an electron behaves more like a wave than a localized particle.


In essence, the classical orbits are replaced by stationary waves describing the probability of finding an electron in a certain place. Stationary charge means no radiation, and therefore the atom is stable. When a particle is confined in space, its wave nature gives rise to discrete energy levels, such as the electronic energy levels in atoms. Hence the name “quantum mechanics.”
In this course we shall see that macroscopic (large scale) properties of systems with many particles depend on the microscopic wave nature of the constituent particles. For example, the electrical conductivity of the silicon crystals in your watch or computer depends on the fundamental constant of quantum mechanics, h = Planck’s constant.

B. Systems with Many Particles
This course is an introduction to the physics of many-particle systems, also known as thermal physics. Traditionally this is the realm of classical thermodynamics, which approaches the subject from an empirical, or observational, point of view. As the word suggests, thermodynamics is the study of “heat” and “work.” Although steeped in mathematical formalism, classical thermodynamics has wide-ranging applications. The developments of modern machines—including your car, your computer, the plane overhead, etc.—are based on applications of classical thermodynamics. Modern chemistry and engineering rely on thermodynamic principles.
Some of the main questions of thermodynamics are: What are the practical limits in converting heat to work? Does energy always flow from hot to cold? What is the meaning of hot and cold—quantitatively? What determines the physical properties of a medium—for example, its heat capacity, its electrical conductivity, or its magnetic properties? Why does matter undergo phase transitions between gases, liquids, and solids?
Although the empirical laws of classical thermodynamics have wide-ranging utility, a basic understanding of many-particle systems requires an atomistic approach. Thermal physics begins at the microscopic level and applies statistical concepts to understand the macroscopic behavior of matter. The microscopic approach of thermal physics is also known as statistical mechanics.
Consider the magnitude of the problem: There are 6 × 10²² atoms in a cubic centimeter of silicon crystal, and 0.27 × 10²⁰ molecules in a cubic centimeter of air. Even with the biggest computer imaginable, you could not predict the motion of an individual particle in such a complicated system. How, then, can we begin to predict the behavior of the gas in this room, or the electrical and thermal properties of a solid, or phase transitions between solids, liquids, and gases?
The answer is that the world around us is governed by the random, statistical behavior of many, many particles. Neither classical mechanics nor quantum mechanics can predict the properties of many-particle systems without the help of statistical methods.


C. Statistics and Entropy
To appreciate the importance of statistics in describing many-particle systems, consider the case of 10 gas particles in a two-part container. Initially we put all 10 particles in the left side:

Now we watch while the particles move around with some thermal energy. As time proceeds, both sides of the container become populated with particles. If you were to take snapshots of the system as time progressed you might find the following results:

[Figure: "Number in left side, NL" (scale 0 to 10) plotted versus time for a sequence of snapshots.]
As you continue to take snapshots, you would get a pretty good idea what values of NL you are most likely to observe. If you take many snapshots and tabulate the number of occurrences for each value of NL, you would find, roughly:

[Figure: histogram of the number of occurrences of each value NL = 0, 1, 2, ..., 10.]

If you could do the experiment with 100 particles, then the result would be a more compressed histogram, something like this:

[Figure: histogram of the number of occurrences of NL = 0 to 100.]

Intuitively we can understand why the distribution is more compressed for a larger number of particles. It is extremely unlikely, for example, that the left half of this room would contain 90% of the gas particles. A general result is the following: For a total of N particles, the statistical variation in NL (i.e., the width of the distribution) is about N^(1/2).
For the 100-particle example above, the width of the distribution is about 10. For N = 10²⁰ the variation is only 10¹⁰ particles, or one part in ten billion: an extremely sharp distribution. That’s why the pressure in this room doesn’t fluctuate significantly.
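The N^(1/2) behavior is easy to check numerically. The sketch below (mine, not part of the original text) simulates many snapshots of N particles, each independently equally likely to be on either side, and estimates the spread of NL; for a 50/50 split the standard deviation is √N/2, of order N^(1/2) as claimed:

```python
import random

def std_of_NL(N, snapshots=5000, seed=1):
    """Monte Carlo estimate of the spread (std. dev.) of N_left over many snapshots."""
    rng = random.Random(seed)
    samples = [sum(rng.random() < 0.5 for _ in range(N))
               for _ in range(snapshots)]
    mean = sum(samples) / snapshots
    return (sum((s - mean) ** 2 for s in samples) / snapshots) ** 0.5

# For a 50/50 split the exact standard deviation is sqrt(N)/2, so the
# *fractional* fluctuation shrinks like 1/sqrt(N) as N grows.
print(std_of_NL(100))  # close to sqrt(100)/2 = 5
```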
The above example suggests two things: 1) as systems get bigger, the macroscopic properties (e.g., the fraction of particles on the left, or the gas pressure on the left side) become more certain, and 2) in order to quantitatively describe a system, we need to count the number of ways that particles can distribute themselves. In technical terms, we count the “number of accessible microstates,” denoted Ω. Counting microstates is a major topic in statistical mechanics. The “numbers of occurrences” plotted above are basically graphs of Ω for various values of NL.
The logarithm of the number of accessible states defines the entropy of a system. More specifically, entropy is S = k lnΩ, where k is a constant. Entropy is a fundamental property of a many-particle system. You may have heard the phrase, “entropy is disorder.”
Disorder is not really a well-defined concept; however, entropy does represent disorder in the sense that more entropy corresponds to a larger number of possible states.
To get a feeling for the importance of entropy, consider setting up the two-part container with 10 particles on the left and 90 particles on the right. This situation corresponds to NL = 10 on the 100-particle histogram above. Under the condition NL = 10 there are a limited number of “microstates” available. There are many more microstates associated with NL = 50 than NL = 10. The particles will diffuse around, sampling all microstates, until there is roughly an average of 50 particles on the right and 50 particles on the left. Eventually there is very little possibility of finding the system with only 10 particles on the left.
There are two fundamental observations that we can make here:
1) Many-particle systems exhibit irreversibility. While it is possible for the system to revert back to 10 particles on the left, it is extremely unlikely. If we were dealing with N = 10²³ particles, it would take longer than the age of the universe before NL = N/10 = 10²² is observed. That would be equivalent to the pressure in one of your lungs suddenly dropping to 0.2 atmospheres.
2) In equilibrium, entropy is maximized. The fundamental postulate of statistical mechanics is that each available microstate is equally likely. If we initiate a system in a restricted set of microstates and then remove the constraint, the system will redistribute, randomly occupying all accessible microstates. In equilibrium, therefore, the probability of measuring NL is proportional to the corresponding value of Ω.
These facts are the basis for the Second Law of Thermodynamics. In this course we will exploit this principle to solve many useful problems.
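The claim that the probability of measuring NL is proportional to Ω can be made concrete with a short calculation. This sketch (mine, not the author's) counts microstates with the binomial coefficient Ω(NL) = C(N, NL) for N = 100 distinguishable particles:

```python
from math import comb

# Omega(N_L) = C(N, N_L): the number of microstates with N_L of the
# N distinguishable particles on the left side of the container.
N = 100
omega_10 = comb(N, 10)   # microstates with N_L = 10
omega_50 = comb(N, 50)   # microstates with N_L = 50

# In equilibrium P(N_L) is proportional to Omega(N_L), so N_L = 50 is
# overwhelmingly more likely than N_L = 10:
print(omega_50 / omega_10)  # a factor of order 10**15
```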

D. Road Map for this Course
The following section is provided to give you an overview of the contents and goals of this course. I suggest that you read it briefly now and refer to it frequently as the course progresses.
Before attacking the statistical aspects of thermodynamics, we will bolster your intuition about the macroscopic world. In Chapter 1 the concept of internal energy is introduced for many-particle systems. The ideas of kinetic energy and potential energy are reviewed, and we consider a system that has both: the harmonic oscillator. The harmonic oscillator is the basis for vibrations in molecules and solids and comes up often in this course.
The underlying principle of thermal physics is the irreversibility of many-particle systems. In Chapter 2 we discuss this concept as the Second Law of Thermodynamics and introduce the concept of entropy. Entropy is a maximum for systems in equilibrium, leading to an important relationship between entropy and absolute temperature.
In Chapter 3 we investigate the ideal gas—a dilute collection of free particles with negligible interactions. Kinetic theory provides us with a microscopic model of pressure.
The Equipartition Theorem provides us with a working definition of temperature.
Using these concepts we derive the Ideal Gas Law, pV = NkT.
In Chapter 4, we examine the thermal cycles of ideal gases in the context of heat engines and discover that the most efficient engine is the Carnot engine. The Carnot cycle is the standard against which all other thermal cycles are measured. Heat engines that run in reverse are refrigerators or heat pumps, providing us with many useful applications.
Statistical concepts are introduced in Chapter 5. We begin with a system of spins, providing the basis for paramagnetism. The spin system clearly illustrates the concepts of microstates and macrostates. A mathematically similar two-state problem is the random walk, which is the basis for particle diffusion and heat conduction. The math tools in this chapter are the binomial and Gaussian distributions.
Chapter 6 extends our statistical tools to systems with multiple bins or cells, allowing us to treat the particles in an ideal gas. We examine in detail a basic problem of statistical mechanics: the equilibrium between two systems that exchange volume. The underlying principle is that the most likely configuration of an isolated system corresponds to a maximum in total entropy, S = k ln⍀.
In Chapter 7 we consider the exchange of energy between two systems, leading to the general definition of absolute temperature in terms of the derivative of entropy with respect to energy U,

1/T = dS/dU,

at constant volume V and particle number N.

In Chapter 8 we note that if a large system (a “thermal reservoir”) at temperature T is in thermal contact with a small system with variable energy U, then the entropy of the reservoir is just given by a Taylor expansion, So – (dS/dU)U, or in terms of temperature, Sres = So – U/T, where So is a constant. We shall see that this relation (and Sres = k lnΩres) leads directly to the probability Pn of finding a particle in a quantum state (labeled n) with energy En:

Pn = C e^(–En/kT),

which is the famous Boltzmann distribution. This basic result is applied to paramagnetic spins, elasticity in polymers, vibrations in molecules, and electronic states.
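As a preview of how the Boltzmann distribution is used, the following sketch (illustrative only; the constants match the tables in the front matter, and the two-level system is hypothetical) normalizes the Boltzmann factors over a set of energy levels:

```python
from math import exp

k = 1.381e-23   # Boltzmann constant, J/K
eV = 1.602e-19  # J per electron-volt

def boltzmann_probs(energies, T):
    """Normalize P_n = C * exp(-E_n / kT) over the given energy levels (in J)."""
    weights = [exp(-E / (k * T)) for E in energies]
    Z = sum(weights)          # the normalization fixes C = 1/Z
    return [w / Z for w in weights]

# Hypothetical two-level system split by 0.026 eV (about kT at 300 K):
probs = boltzmann_probs([0.0, 0.026 * eV], T=300)
print(probs)  # the upper level holds a bit over a quarter of the population
```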
In Chapter 9 the Boltzmann approach is used to predict the energy distribution of particles in an ideal gas—the so-called Maxwell–Boltzmann Distribution. A second application is the frequency distribution of thermal radiation, leading to the Stefan–Boltzmann
Law for the power radiated from a hot object, such as a light bulb, your body, or the sun.
The application of these statistical concepts is greatly facilitated by defining what is known as the “free energy,”
F = U – TS, and its derivative with respect to particle number at constant V and T,
μ = dF/dN, known as the “chemical potential.” Equilibrium conditions are determined by minimizing the free energy of a system, which leads to simple relations between the chemical potentials of its subsystems. In Chapters 10–13 we will apply the Principle of Free Energy Minimum to the following problems:
I. Ideal Gases
II. Paramagnetic Spins
III. Law of Atmospheres
IV. Ionization of Atoms
V. Chemical Equilibrium in Gases
VI. Carrier Densities in Semiconductors
VII. Adsorption of Particles on Surfaces
VIII. Phase Transitions

The problems chosen for this course represent many important processes in the world around us. You will learn quantitative methods for studying a broad range of physical, chemical, and biological materials.


Here is some practical information about this book:
a) It is a good idea to read the assigned chapter before lecture and come to class with questions.
b) Appendices generally are optional reading and some include advanced material that may be useful in preparation for upper division courses.
c) Chapter 14 (Gibbs free energy) may not be covered in this course, but it may be of specific interest to chemistry, materials science, and physics majors.
d) Exercise problems are provided at the end of each chapter. Solutions to most problems are given at the back of the book.
Enjoy your adventure into Thermal Physics!

Jim Wolfe, UIUC


Physics 213 Elements of Thermal Physics

Exercises
1) An appreciation of the concept of microstates can be gained by considering a two-cell container with 10 distinguishable objects, labeled A through J:
[Figure: a two-cell container with the ten objects A–J distributed between the left and right cells.]

For a total of N objects, the number of ways of arranging the system with NL objects on the left and NR = N – NL objects on the right is the binomial distribution:
Ω = N! / (NL! NR!)

Complete the following table for the above system: (By definition: 0! = 1.)

NL =   0    1    2    3    4    5    6    7    8    9    10
Ω  =                       210

We say that Ω(NL) is the “number of microstates in the macrostate labeled NL”. Compare to the graph in Section C. [In reality one must determine Ω for identical particles (atoms or molecules) in a given volume. Interestingly, the binomial distribution still applies to identical particles in a two-part container, as discussed in Chapter 6 (E).]
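If you want to check your table entries, the binomial coefficients for N = 10 can be generated in a few lines of Python (a sketch, not part of the text):

```python
from math import factorial

# Ω = N!/(NL! NR!) for N = 10 and NL = 0, 1, ..., 10
N = 10
omegas = [factorial(N) // (factorial(nL) * factorial(N - nL))
          for nL in range(N + 1)]
print(omegas)  # the middle entry, NL = 5, is the largest
```

Note that the entries sum to 2^10 = 1024, the total number of microstates.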


2) Logarithms and exponents are not just mathematical conveniences; they are an integral part of thermal physics. The common integral ∫dx/x equals the natural logarithm, ln x. Conversely, the derivative of the natural logarithm, d(ln x)/dx, equals 1/x. Using the ideal gas law, pV = NkT, show that the work W done by an ideal gas expanding from volume V1 to V2 at constant temperature is NkT ln(V2/V1).
(p = pressure, T = temperature, Nk = constant)
Wby = ∫ p dV (from V1 to V2) =
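A numerical integration confirms the result you should obtain in Exercise 2; the values of NkT, V1, and V2 below are illustrative, not from the text:

```python
import math

# Integrate p dV for an isothermal ideal gas, p = NkT/V,
# and compare with NkT ln(V2/V1).
NkT = 1.0            # joules
V1, V2 = 1.0, 3.0    # initial and final volumes

steps = 100_000
dV = (V2 - V1) / steps
# Midpoint-rule approximation of the integral of p dV from V1 to V2
W = sum(NkT / (V1 + (i + 0.5) * dV) * dV for i in range(steps))

assert abs(W - NkT * math.log(V2 / V1)) < 1e-6
```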

CHAPTER 1

Origins of Mechanical Energy

A. Kinetic Energy and Work
Energy may have been the first concept in mechanics that you found difficult to understand intuitively. In contrast, momentum is not a particularly difficult concept for anyone who has bumped into something.
The pain or damage from a collision increases with both the mass and the velocity of the offending object. Momentum is simply mass times velocity. Energy, on the other hand, may have a different meaning (or many meanings) for each of us. Energy is what we are supposed to feel after eating a certain bowl of cereal in the morning, or what we don’t feel the day after cramming for a physics exam. Energy is what lights our lights, warms our dorms, propels our cars and our bodies, and costs money.
Hopefully, your mechanics and E & M courses provided you with a practical understanding of energy. But, just in case you don’t quite remember where ½ mv2 came from, here is how the idea crept into your mechanics course…


You were dealing with an object moving in a force field (say the earth’s gravitational field), and you wanted to predict the velocity of the object at different positions. Well, that’s just a physicist’s way of saying, “Drop a ball and describe what happens.”

[Figure: a ball of mass m dropped from height h, falling from yi to yf under the force F.]

To make things simple, we consider motion in one dimension. (No vectors here.)
The gravitational force on the object with mass m is F = mg, where g = 9.8 m/s2 is the acceleration due to gravity. We invoke Newton’s Second Law (and the definition of acceleration) to describe the motion of the object:
F = ma = m dv/dt = dp/dt,

(1-1)

where a and v are the instantaneous acceleration and velocity, and p = mv defines the momentum. Now, here is where some creative math comes into play. We start with
F dt = m dv

(1-2)

and take the rather unpredictable step,
F v dt = m v dv.

(1-3)

Why would we want to do that? The reason is that this step allows us to change the differential on the left from time to distance, namely: v dt = dy. Now, we have,
F dy = m v dv.

(1-4)

Now just integrate this equation to get the final result (remembering that ∫v dv = ½ v2):
∫F dy = ½ mvf2 – ½ mvi2 ,

(1-5)

where the subscripts i and f refer to the initial and final velocities of the object. Because
F is a constant in this case, the integral on the left becomes mgh, where h is the distance the ball drops. (The integral is positive because F and dy are in the same direction.)
Now you can set vi = 0 and solve for vf.
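As a concrete check (with an illustrative height and mass, not from the text), a few lines of Python confirm that mgh = ½ mvf2, so that vf = (2gh)^(1/2) independent of the mass:

```python
import math

# Dropping a ball from rest: the work mgh equals the kinetic energy gained.
g = 9.8    # m/s^2
h = 20.0   # m (illustrative)
m = 0.5    # kg (illustrative; note that it cancels)

v_f = math.sqrt(2 * g * h)
assert abs(m * g * h - 0.5 * m * v_f**2) < 1e-9
```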
Why didn’t we just integrate F dt = m dv to get
mg(tf – ti) = m(vf – vi) ?

(1-6)

Origins of Mechanical Energy Chapter 1

If the ball is dropped with vi = 0 at ti = 0, then mgtf = mvf. This approach gives us the final velocity in terms of the final time, which we don’t know. Furthermore, if the force were a function of position (such as in the proximity of planets or electrical charges), we would not have enough information to do the integral of Fdt.
The neat trick about multiplying both sides of Eq. (1-2) by v, is that it turns dt into the differential of a quantity, y, which is the variable specified in the problem (yi = h, yf = 0).
Let’s not lose sight of our goal. We have just seen that
∫F dy = Δ(½ mv2),

(1-7)

where Δ = “change in,” and the integral form of this equation is required when F is a function of position. We recognize that the integral on the left is the work done on the object by the earth. For a general applied force,
Won ≡ ∫F · dr,

(1-8)

and for a single particle, we define
KE ≡ ½ mv2

(1-9)

as the kinetic energy of the particle. The relation
Won = Δ(KE)

(1-10)

is a very basic result for the motion of a single object, often called the “Work-Energy
Theorem.” In words, the work done on an object by an applied force equals the change in kinetic energy of the object. In fact, this result is nothing more than the integral form of Newton’s Second Law, with a couple of new definitions: work and energy.
Equation (1-10) is a highly useful form of Newton’s law. As you have seen in your mechanics course, for “conservative forces,” such as those due to gravitation and electric charges, the work done in moving from point A to point B does not depend on the path taken. The shape of a roller coaster doesn’t matter in determining the final speed at a given elevation (assuming no friction). The speed of an object orbiting the earth in an elliptical orbit is directly related to its distance from the earth.

B. Extension to Many-Particle Systems
If we extend our discussion of energy to a collision between two point-like particles and assume no external forces, then the left side of Equation (1-10) vanishes and we have,
0 = Δ(KE1) + Δ(KE2) = Δ(KEtot),

(1-11)

where the numerical subscripts label the two particles. This is a statement of “conservation of energy” for the simple case where there are no interaction energies in the system, except briefly during collisions.

Recall that there was one other conservation law associated with Newton’s Second Law, namely the conservation of momentum. Let us assume that there are no external forces on a system of particles and that Fij is the force on object i due to object j. Newton’s
Third Law dictates that forces appear in equal and opposite pairs: Fij = – Fji. Writing
Newton’s Second Law in vector form,
Σ Fij = mi dvi/dt = dpi/dt,

(Fij = force on i due to j)

(1-12)

where the left side is the sum (over j) of all the forces on the object labeled i. If there are
N point objects, then there will be N equations. Adding all of these equations together and noting that the pairwise forces cancel, we get,
0 = Σ dpi/dt = d(Σpi)/dt,

(1-13)

implying that the total momentum p = Σpi must be a constant. Notice that the total momentum is also the center-of-mass momentum pcm = Mvcm where M = Σmi and vcm = Σmivi/M is the center-of-mass (or average) velocity of the particles.
If a single external force F acts on a system of particles (or any object), then Newton’s
Second Law takes on the form,
F = dpcm/dt = Macm.

(1-14)

This is a very general equation, which applies to any system of particles, even those bonded together in a solid. Consider a force applied to the end of a solid board, as depicted here:

[Figure: a board of length L resting on a frictionless table, with a force F applied at one end.]

The dot represents the center of mass of the board, which is situated on a frictionless table. Applying the force to one end of the board produces a complicated motion involving both translation and rotation. Notice that the distance the force’s contact point moves is different from the distance that the center of mass moves. For example, at some specific time later the force has pulled a distance D and the center of mass of the board has moved a distance dcm = D – ½ L:


[Figure: the board at a later time, showing that the force’s contact point has moved a distance D while the center of mass has moved dcm = D – ½ L.]
(Note: the board is still rotating at this instant of time.) The work done by the constant force is F times the distance it acts,
Won = FD.

(1-15)

What does Newton’s Second Law (Eq. 1-14) tell us about this system? Using our little trick again,
F vcm dt = M vcm dvcm ,

(1-16)

but now we have,
F dcm = Δ(KEcm) .

(1-17)

The center-of-mass subscripts are very important. dcm is the distance that the center of mass moved, not the distance the force acted, so the left side of this equation is not equal to the work done on the board! The integral of Newton’s Second Law in this case is not a work-energy equation. For simplicity, we call Eq. (1-17) the “c.m. equation.”

C. Internal Energy
The Work-Energy Theorem for a many-particle system with internal degrees of freedom (e.g., rotation or vibration) is actually a distinct concept from Newton’s Second
Law. It relates work to the change in total energy,
Won = Δ(Total Energy).

(1-18)

For the rotating and translating board considered above, the total energy is the sum of
KEcm and rotational energy about the center of mass. Therefore,
FD = Δ(KEcm) + Δ(KErot).

(1-19)

The work done on the board is converted totally into translational plus rotational energy.


Recall from your mechanics course that rotational energy is ½ Icmω2, where Icm is the moment of inertia of the object and ω is the angular speed in radians per second. The work-energy equation (1-19) plus the cm equation (1-17) allow you to determine the rotational energy of the board when it reaches the orientation in the second drawing.
(Answer: Δ(KErot) = FL/2.)
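You can verify the board result with simple arithmetic; this Python sketch uses illustrative values of F, L, and D (not from the text):

```python
# Won = FD (Work-Energy Theorem) and F*dcm = ΔKEcm (c.m. equation),
# with dcm = D - L/2, together give ΔKErot = FL/2.
F = 10.0   # newtons
L = 2.0    # meters
D = 5.0    # distance the force's contact point moves, in meters

d_cm = D - L / 2
work_on_board = F * D       # total energy gained by the board
KE_cm = F * d_cm            # translational part, from the c.m. equation
KE_rot = work_on_board - KE_cm
assert KE_rot == F * L / 2  # rotational energy = FL/2
```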
In general, the total energy of an object is KEcm + U, where U is the internal energy of the system, which includes rotational and vibrational motions, as well as potential energy associated with the binding of molecules and atoms. The generalized workenergy equation may be written:
Won = Δ(KEcm) + ΔU

(1-20)

U is essentially the energy of the system in the cm frame of reference. Consider the mechanics problem where two blocks collide and stick together (an inelastic collision):
Initially: a block of mass m moving with velocity vi approaches a stationary block of mass m.
Finally: the two blocks stick together and move with vf = vi/2.

Because the total force on the two blocks is zero, their total momentum is constant, implying mvi = 2mvf. Notice that vf = vcm in the lab frame. Also, acm = 0 implies that vcm is constant (= vi/2) throughout, so Δ(KEcm) = 0. Therefore, ΔU = 0 in this collision. In fact, the initial U is not zero for our 2-mass system. We can see this by observing the collision in the center-of-mass frame:
Initially: in the center-of-mass frame, the two blocks approach each other, each with speed vi/2.
Finally: the combined blocks are at rest.

From the initial state, we see that the internal energy U of the 2-mass system equals ¼ mvi2. In the collision, all of this easily identifiable translational energy is converted into sound waves and thermal vibrations in the blocks.
The concept of internal energy is particularly important in the study of many-body systems. In fact, in this course we will rarely deal with the center-of-mass kinetic energy of a system. We will be concerned almost entirely with the internal energy U by observing the system in the cm frame. The statement of energy conservation when
Δ(KEcm) = 0 is simply:
Won = ΔU.
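The two-block collision above is easy to check numerically (the mass and speed below are illustrative):

```python
# Inelastic collision of two equal masses.
m = 1.0     # kg
v_i = 4.0   # m/s

v_f = m * v_i / (2 * m)        # momentum conservation: v_f = v_i/2
KE_initial = 0.5 * m * v_i**2
KE_final = 0.5 * (2 * m) * v_f**2
lost = KE_initial - KE_final   # energy converted to sound and thermal vibrations
assert abs(lost - 0.25 * m * v_i**2) < 1e-12
```

The “lost” kinetic energy is exactly the internal energy ¼ mvi2 identified in the center-of-mass frame.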

D. Potential Energy
In your mechanics course you learned how the work done on an object by a “conservative” force can be treated in terms of potential energy. For example, an object at a height h from the earth’s surface has a gravitational potential energy PE = mgh. One must be careful not to double-count work and energy in the Work Energy Theorem, as I will now demonstrate:
Won = Δ(Total Energy)

(1-21)

= Δ(KE) + Δ(PE)
= Δ(½ mv2) – mgh.
The minus sign is because the object lost mgh potential energy in falling a distance h
(chosen positive). However, we have already seen that the work done on the object by the earth’s gravitational field when the object falls a distance h is
Won = force × distance = mgh.

(1-22)

Therefore, we are led to the (incorrect) conclusion that
Δ(½ mv2) = 2mgh

(1-23)

in contradiction to our earlier result. Where have we gone wrong?
The reason that we have double-counted mgh is that we have not clearly identified our system in applying Eq. (1-21). Remember, the procedure in solving a mechanics problem is:
1) specify the system that you are considering,
2) draw all the external forces on that system, and
3) apply Newton’s Second Law.
If we choose the system as the ball plus the earth, then we realize that there are equal and opposite forces between the ball and the earth. Thus, we realize that there are no external forces in this system, but there is the potential energy between the ball and the earth. So, the left side of the work-energy equation vanishes, and we have,
0 = Δ(½ mv2) – mgh.

(1-24)


If, on the other hand, we choose the system as the ball alone, then there is no potential energy (potential energy requires at least two objects interacting, such as the ball and the earth), and the external force on the system is mg. The work-energy equation becomes,
mgh = Δ(½ mv2),

(1-25)

which, again, is the correct result. A consistent choice of system and external forces is critical to the solution of a mechanics problem. These ideas carry over to thermodynamics problems. For example, in calculating atmospheric pressure at an altitude h, it is usual to include potential energy mgh per particle in the total internal energy
U, implicitly including the earth in the system.

E. Vibrational Energy—Kinetic plus Potential
Vibrations are important to an understanding of the thermal properties of molecules and solids. Often we model a molecule or crystal with balls and springs. The balls represent the atomic cores and the springs represent the binding forces due to the valence electrons.
In your mechanics course you solved the problem of the simple harmonic oscillator:

[Figure: a mass m attached to a wall by a spring with spring constant κ; x measures the position of the mass.]

The equation of motion (Newton’s Second Law) describing this system is
F = m d2u/dt2 = –κu ,

(1-26)

where u(t) = x(t) – L is the displacement of the ball from its rest position, and L is the length of the spring at rest. A solution to this equation is u(t) = A sin ωt,

(1-27)

which, when plugged into Equation (1-26), yields the angular frequency ω = (κ/m)1/2, and frequency f = ω/2π.
The vibrational frequency of a “diatomic molecule” is a little more complicated:

[Figure: two masses m connected by a spring with constant κ.]

To compute the vibrational frequency of this object, we must write equations of motion for balls 1 and 2, involving their displacements u1 and u2:
m d2u1/dt2 = –κ(u1 – u2)

(1-28)

m d2u2/dt2 = –κ(u2 – u1).
These are two coupled differential equations. The solution to this problem is given in Appendix 1. The angular frequency of vibration of the molecule turns out to be
ω = (2κ/m)1/2.
Now imagine three masses arranged in linear order,

[Figure: three masses m in a line, connected by two springs with constant κ.]

This linear triatomic molecule has two compressional “modes of vibration” with two distinct frequencies:

ω = (κ/m)1/2 and ω = (3κ/m)1/2

These vibrations are known as the “normal modes” of the molecule because once the atoms are started in a normal mode, they will continue vibrating in that mode indefinitely.
Check out Appendix 1 to see how normal mode problems are solved by matrix methods.
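The matrix method of Appendix 1 can be sketched in a few lines with NumPy: write the coupled equations as m ü = –K u, so that substituting u ∝ e^(iωt) turns the problem into an eigenvalue equation, (K/m)u = ω2 u. For the linear triatomic molecule (κ and m set to 1 for illustration):

```python
import numpy as np

# Spring matrix K for the chain m - k - m - k - m (u = (u1, u2, u3)).
kappa, m = 1.0, 1.0

K = kappa * np.array([[ 1.0, -1.0,  0.0],
                      [-1.0,  2.0, -1.0],
                      [ 0.0, -1.0,  1.0]])

omega_squared = np.linalg.eigvalsh(K / m)            # sorted ascending
omegas = np.sqrt(np.clip(omega_squared, 0.0, None))  # clip tiny negatives
# Modes: omega = 0 (rigid translation), (kappa/m)^(1/2), and (3*kappa/m)^(1/2)
print(omegas)
```

The zero-frequency mode is the rigid translation of the whole molecule; the other two are the compressional modes quoted above.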
In one dimension, N masses connected by springs have N – 1 normal modes of vibration.
In three dimensions, N atoms have 3N – 6 normal modes of vibration (see Appendix 1). For a crystal, the number of atoms N is usually a very, very large number, so we can say quite accurately that N atoms in a crystal have 3N normal modes. The number of normal modes of a solid is very important to its thermal properties.


In a solid with N = 1022 atoms there are 3 × 1022 normal modes of vibration. The frequencies range from a few kilohertz to about 1012 Hz. We call the lower-frequency vibrations “sound waves,” or “ultrasound” in the kHz to MHz range. The vast majority of modes lie well above MHz frequencies, so thermal energy (which at normal temperatures distributes randomly among all modes) mostly ends up in these high-frequency modes.
Now you can appreciate why atomic vibrations in solids are so important to the study of thermodynamics. The vibrating solid is characterized by specifying the energy present in each of its normal modes.


Exercises
1) Check your understanding of Section D by considering a mass m attached to a spring with spring constant κ. The spring has an unstretched length L, the force on the mass is F = –κu and the potential energy of the spring is PE = ½ κu2, where u = x – L. Starting with the spring stretched to uo and letting the mass go from rest to velocity v at displacement u, write down the work-energy equation for the two cases below, showing that they yield the same relation between u and v:

Mass alone:

Mass plus spring:
2) As an application of the Work-Energy Theorem and the c.m. equation, consider the case of a car accelerating from rest, as illustrated below. Assume that the accelerating force is F, which is the horizontal force that the road applies to the tires
(and vice versa). Assume that the tires do not slip.

[Figure: a car accelerating from rest through a distance d.]

After the car has moved a distance d,
a) What is the velocity of the car?

b) What was the work done on the car?

c) Where did the energy come from that moved the car?

d) How does that energy enter into your equations?


Two problems from a former midterm exam will test your understanding of the concepts discussed in this chapter:
3) Four balls are rigidly connected together with four rods. At which point should you apply a constant force F in order to produce the greatest initial acceleration of the center of mass?

a) point a
b) point b
c) either point gives the same cm acceleration.

4) Two 4-kg balls of putty are attached to a string of length 2 meters. A constant force
F = 3 newtons is applied to the center of the string and the balls move without friction. After the force has pulled a distance of 7 meters, the balls collide and stick.

a) What is the center-of-mass velocity of the balls at the instant that they collide?
b) What is the thermal energy generated in the collision?

CHAPTER 2

Irreversibility and the Second Law of Thermodynamics

A. Thermal Energy
Consider the problem where a small mass (say, an atom) crashes into a solid. The solid is represented by a system of masses connected by springs, and the incident mass sticks:

What we find is that the collision excites not just one normal mode, but a combination of normal modes. Many frequencies are excited, as you would discover if you recorded u(t) for any one of the atoms and then took a Fourier transform to extract the frequency spectrum.
This is a simple model of what happens in an inelastic collision. The
“pure” kinetic energy of a single object is converted into the “complicated” internal energy of the many-particle system. If a mass with initial velocity vi is incident on an object with N – 1 similar masses and sticks to it, then momentum conservation (mvi = (Nm)vf) dictates: vf = vcm = vi/N

(2-1)

The initial energy ½mvi2 equals the final energy ½Nm(vcm)2 + Uvib; therefore, nearly all the energy of the incident particle is converted into vibrational energy: Uvib = ½ mvi2 (1 – 1/N). In words:
Incident kinetic energy → Induced vibrational energy (2-2)
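Equation (2-1) and the vibrational energy Uvib = ½ mvi2(1 – 1/N) are easy to verify (the values below are illustrative, not from the text):

```python
# A mass m at speed v_i sticks to an object made of N-1 identical masses.
# Nearly all the incident kinetic energy becomes vibrational energy U_vib.
m, v_i, N = 1.0, 10.0, 1000

v_cm = v_i / N                        # momentum conservation: m v_i = (N m) v_cm
KE_in = 0.5 * m * v_i**2
KE_cm = 0.5 * (N * m) * v_cm**2       # tiny for large N
U_vib = KE_in - KE_cm
assert abs(U_vib - 0.5 * m * v_i**2 * (1 - 1 / N)) < 1e-9
```

For N = 1000 the center-of-mass kinetic energy retains only 0.1% of the incident energy; the rest goes into vibrations.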

On the microscopic scale, the masses of atoms are extremely small and interatomic forces are large, so the frequencies of the normal modes (ranging up to about (κ/m)1/2) are very high, typically about 1012 Hz. In an inelastic collision the translational kinetic energy of a small object colliding with a large object is almost completely converted into thermal vibrations, which are mostly distributed in the high-frequency modes.
In short, the collision generates “thermal energy.” You may be tempted to call these high frequency vibrations “heat,” but technically the term heat is reserved for the transfer of thermal energy from one body to another. Sometimes we slip and call thermal energy by the name heat because it is a common non-technical usage of the term.

B. Irreversibility of Many-Body Systems
We can now see why an inelastic collision is basically irreversible. The conversion of the kinetic energy of a single particle into the many, many vibrational modes of a solid does not take place in reverse. The computer simulation with 20 masses gives us a feeling for this irreversible process, although with a small number of masses the total energy may eventually be returned to the incident mass if we wait long enough.
It is an experimental fact that in an inelastic collision between two solids one cannot recover all of the pure translational energy from the complex thermal energy generated by the collision, even though that would not be a violation of the First Law of Thermodynamics (energy conservation). The reason is that nature also obeys a Second Law of
Thermodynamics regulating energy flow in systems consisting of many particles.
The Second Law of Thermodynamics is quite consistent with our intuition. A hot object resting on a table does not suddenly cool with the result that the object jumps into the air. This is not because the total thermal energy of the object is small. You will see in a discussion exercise that if all the thermal energy in an object initially at room temperature were converted into center-of-mass energy, the object could indeed jump to quite a large height (many kilometers!). Of course, this would never happen. The irreversibility of energy flow is the Second Law of Thermodynamics in action.
We have touched upon a basic question in thermodynamics: how much usable energy is there in a system? Work can be easily converted to thermal energy by friction, but what fraction of an object’s thermal energy can be converted into work? The Second Law of
Thermodynamics says that it is impossible to convert thermal energy into work with
100% efficiency. Exactly how much work can be extracted from a vibrating solid, or from a system of moving gas molecules, is a major problem in the subject of thermodynamics, which we will treat in the context of heat engines. Indeed, the conversion of work to thermal energy, and thermal energy to work, is the basic issue of thermodynamics.

C. Entropy and the Approach to Equilibrium
Conversion of work into thermal energy, and vice versa, are fundamental processes of thermodynamics. Another fundamental process is simply the transfer of thermal energy


Irreversibility and the Second Law of Thermodynamics Chapter 2

from one system to another, i.e., the process known as heat and designated Q. Start with two systems with initial energies U10 and U20 and bring them into thermal contact:

[Figure: two systems with initial energies U10 and U20 are brought into thermal contact, ending with energies U1f and U2f.]

We know intuitively that thermal energy will flow from one system to the other until an equilibrium condition is reached. The First Law of Thermodynamics only tells us that the total energy stays constant. There must be some other property of the system that tells us how the total energy will be partitioned between the two systems in equilibrium.
That property is the entropy. Entropy is an additive function of the two systems, just like energy. The basic approach of classical thermodynamics is to postulate that the total entropy
Stot = S1 + S2

(2-3)

is a maximum in equilibrium. Considering U1 as the free parameter (U2 = Utot – U1), we have dStot/dU1 = dS1/dU1 + dS2/dU1 = 0.

(2-4)

[Figure: Stot plotted against U1, with its maximum at U1 = U1f.]

For this closed system dU1 = –dU2 by conservation of energy. Therefore, we may write the equilibrium condition as, dS1/dU1 = dS2/dU2 .

(2-5)

The term on the left is a property of system 1 and the term on the right is a property of system 2. Intuitively, we associate “thermal equilibrium” with an equilibration of temperatures, so it is natural to define the temperature in terms of dS/dU. To retain our concept of hot and cold, the most convenient choice is,
1/T ≡ dS/dU, or more precisely, 1/T ≡ (∂S/∂U)N,V

(2-6)

which reminds us that particle number N and volume V are held constant. Therefore, the equilibrium condition is T1 = T2. By this definition, if T1 > T2 energy will flow from system 1 to system 2 in order to maximize Stot. Defining the derivative as the inverse of temperature is consistent with both the maximization of entropy of a closed system and our intuitive concept that thermal energy flows from a high-T object to a low-T object.
The simplest statement of the Second Law of Thermodynamics is that the entropy of a closed system either
a) remains constant (if the system is in equilibrium), or
b) increases (if the system is approaching equilibrium).
In mathematical terms, as time proceeds,
ΔStot ≥ 0

(2-7)

which is an alternative statement to the Second Law. Here is a summary of the basic properties of entropy that will be further developed in this course:
1) Entropy is a property of the system – a “state function” like U, V, N, p, and T, and unlike heat Q and work W that are energies in transit.
2) Entropy for an isolated system is a maximum in equilibrium.
3) Entropy is increased by heat flow Q into a system at temperature T: ΔS = Q/T.
4) Entropy is proportional to the logarithm of the number of accessible microstates:
S = k ln(Ω), with k = Boltzmann constant, defining the temperature scale.
Entropy is associated with the hidden motions in many-particle systems, in contrast to the collective motion of the center-of-mass. Because the entropy of an isolated system always increases or stays the same, nature exhibits irreversibility. A ball resting on a table does not spontaneously convert its thermal energy into center-of-mass energy because that would mean a decrease in entropy. Nor does heat flow spontaneously from cold to hot objects.
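Property 3 above (ΔS = Q/T) makes this irreversibility quantitative. The following sketch (illustrative temperatures, and treating both objects as large enough that their temperatures stay fixed) shows that heat flowing from hot to cold raises the total entropy, while the reverse flow would lower it:

```python
# Entropy bookkeeping for a transfer of heat Q between two large objects.
Q = 100.0                      # joules transferred
T_hot, T_cold = 400.0, 300.0   # kelvin

# Hot-to-cold: the hot object loses Q/T_hot, the cold object gains Q/T_cold.
dS_hot_to_cold = Q / T_cold - Q / T_hot
# Cold-to-hot is the reverse; the Second Law forbids it.
dS_cold_to_hot = Q / T_hot - Q / T_cold

assert dS_hot_to_cold > 0   # allowed: total entropy increases
assert dS_cold_to_hot < 0   # forbidden: total entropy would decrease
```

This is exactly the calculation asked for in Exercise 4 at the end of this chapter.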
An important task of statistical mechanics is to determine the functional form of the entropy for an N-particle system with energy U and volume V,
S = S(U, N, V).

(2-8)

Knowing this function for the particles in question will enable us to compute the equilibrium conditions, to describe phase transitions, and to determine the capacity for doing work. A major aim of this course is to gain a microscopic picture of entropy for some common systems.


D. Entropy Maximization and the Calculus of Several Variables
The Second Law of Thermodynamics says that an isolated many-particle system is in equilibrium when its entropy is maximized. How do we mathematically define this maximization condition? The answer involves the calculus of several variables, which is briefly described below.
First consider the simple function y(x) plotted at the right. A maximum in this function occurs where a small (nonzero) change in x produces zero change in y. In terms of the function’s derivative,
Δy = (dy/dx) Δx = 0.

(2-9)

Because Δx is nonzero, the maximum occurs when the slope of the curve, dy/dx, equals zero. That is, setting dy/dx = 0 for the known function y(x) yields the value x = xm.*
Entropy, however, is generally a function of many variables. For example, if we knew the function S(U,N,V) for a system containing N particles, how would we determine the equilibrium values of U and V? In the case where N is fixed and U and V are variables, the condition for maximum entropy is:
ΔS = (∂S/∂U)V ΔU + (∂S/∂V)U ΔV = 0.

(2-10)

Quantities like (∂S/∂U)V are known as “partial derivatives.” (∂S/∂U)V is simply the derivative of S(U,V) with respect to U, treating V as a constant. Consider the problem pictured at the right. An ideal gas of atoms is contained in a cylindrical volume, V = A × h. The container and gas are thermally isolated from the surroundings. Mechanical equilibrium means that the gas pressure p equals the applied force per unit area: p = F/A. What are the equilibrium values of V and U for a given force (or p)?

[Figure: a gas in a cylinder of cross-sectional area A and height h, with a force F applied to the piston.]

In Exercise 3 you will solve this problem given the functional form S(N,U,V) for the ideal monatomic gas and using the definition of temperature introduced in this chapter.
Amazingly, with Eq. (2-10) you will derive two basic properties of an ideal monatomic gas: its energy U(N,T) and the ideal gas law pV = NkT.
You won’t see many problems with partial derivatives in this book because multivariable functions such as S(U,V) or S(U1,U2) can often be reduced to one independent variable by explicitly stating a constraint such as V = constant or U1 + U2 = constant. In general, however, partial derivatives are a concise way of describing a property of a system under specified conditions; e.g., (∂S/∂U)N,V = 1/T.
*For y(x) = 4x – x2 (like the figure), you can easily show that the maximum occurs at xm = 2.

Exercises
1) Considering the definition of temperature in terms of entropy, 1/T ≡ dS/dU, which of the following diagrams is most reasonable for a many-particle system? State your reason. (Hint: Note that dS/dU is positive in all cases and sketch T(U) for each case.)

[Figure: three candidate sketches (a), (b), (c) of S versus U, each with positive slope, with blank T-versus-U axes below each for your sketches.]

2) We shall see that the entropy of an ideal monatomic gas depends on energy as S = (3/2)Nk ln(U), where N is the number of particles, U is the internal energy, and k is a constant. By maximizing the total entropy S1 + S2 of two gases in thermal contact, determine the ratio of their energies in equilibrium. Remember, you need Stot in terms of a single variable (and Utot = constant). Sketch Stot(U1).

[Figure: two gases in thermal contact, with N1 = 10 particles and energy U1 on the left, N2 = 40 particles and energy U2 on the right; Utot = U1 + U2.]
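A brute-force numerical maximization (a sketch, with k and units dropped and an illustrative Utot) confirms the answer you should obtain analytically, namely equal energy per particle, U1/U2 = N1/N2:

```python
import math

# Maximize Stot(U1) = (3/2)N1 ln(U1) + (3/2)N2 ln(Utot - U1) on a fine grid.
N1, N2 = 10, 40
U_tot = 100.0

def S_tot(U1):
    return 1.5 * N1 * math.log(U1) + 1.5 * N2 * math.log(U_tot - U1)

grid = [i * U_tot / 100_000 for i in range(1, 100_000)]
U1_best = max(grid, key=S_tot)
# Equilibrium shares energy in proportion to particle number.
assert abs(U1_best / (U_tot - U1_best) - N1 / N2) < 1e-3
```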


3) Work out the problem posed in Section D: Derive U(N,T) for an ideal monatomic gas, and the ideal gas law, pV = NkT, assuming that entropy has the form:
S = Nk ln(U3/2 V) + constants.
Helpful hints: First write S as a function of U plus a function of V (plus constants):
S=
Take the partial derivatives, remembering that d(ln x)/dx = 1/x:

(∂S/∂U)V =

(∂S/∂V)U =

Notice that one of the partial derivatives is directly related to temperature, giving
U(N,T):
U(N,T) =
Assume that the container has negligible thermal energy and note that dU of the gas is related to dV by the Work-Energy Theorem. Maximize S to find p(N,V,T):

p(N,V,T) =
This problem illustrates how entropy maximization yields equilibrium conditions.
The equilibrium energy is U(N,T) and the equilibrium volume is V = NkT/p =
NkT(A/F). The entropy of an ideal gas is derived later in this course from microscopic properties.

4) Two objects initially at different temperatures are brought into thermal contact.
Show that heat flow Q from the cold object to the hot object violates the Second
Law:
[Figure: heat Q flows between two objects at temperatures T1 and T2, with T1 < T2.]

ΔS =


CHAPTER 3

Kinetic Theory of the Ideal Gas

A. Common Particles
The gas that you are breathing is composed of a variety of molecules.
Consulting a reference book on the subject, you would find the following facts:

Molecule    Mass/mole    Concentration
N2          28 g         78%
O2          32 g         21%
Ar          40 g         0.93%
CO2         44 g         0.033%
H2          2 g          trace amounts

(numbers are rounded to two significant figures)
One mole of gas contains NA = 6.022 × 1023 molecules. NA is known as Avogadro’s constant and is defined as the number of carbon atoms in 12 g of 12C. So, for example, a nitrogen (14N) molecule has a mass, m = 28 g / 6.022 × 1023 = 4.65 × 10–23 g.


You may recall from your Chemistry course that one mole of gas, no matter what type of molecules, occupies 22.4 liters of volume at standard temperature and pressure (STP):
T = 273 K and p = 1.01 × 10⁵ Pa = 1 atm. This rather remarkable fact follows from the ideal gas law, to be discussed below.
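As a quick check of this number, the molar volume follows directly from the ideal gas law discussed below (a Python sketch; the constant values are the ones tabulated in this chapter):

```python
# Molar volume of an ideal gas at STP: V = nRT/p with n = 1 mole.
R = 8.314      # ideal gas constant, J/(mol*K)
T = 273.15     # standard temperature, K
p = 1.013e5    # standard pressure, Pa (1 atm)

V = R * T / p          # volume in m^3
print(V * 1000)        # in liters: about 22.4 L
```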
In theory we define an ideal gas as a “non-interacting gas of molecules.” This means, for example, that we do not consider the potential energy between molecules. In the view of classical mechanics, the molecules are tiny hard spheres bouncing elastically off each other and off the walls of the container. Real gases have significant interactions between molecules that cause phase transitions to liquids and solids, a topic for later discussion. Ideal gases don’t condense into liquids or solids.

B. Pressure and Kinetic Energy
Pressure is the force per unit area on a surface. The pressure of an ideal gas depends on the density of the gas and the average kinetic energy of the particles. The relation between pressure, density, and translational kinetic energy can be determined by considering a particle bouncing elastically off the walls of a container of volume V = Ad:

[Figure: a molecule with velocity component vx bouncing between the walls of a cylinder of length d and piston area A; v² = vx² + vy² + vz², and KE = ½mv² is the translational kinetic energy.]

The round trip time for this particle is t₀ = 2d/vx. Each time the particle hits the piston, it transfers a momentum 2mvx; therefore, the time-average force on the piston is
F = Δ(mvx)/Δt = 2mvx/t₀ = mvx²/d.
If the container has many atoms, they will have random velocities, so we designate brackets, < >, to signify the average, or "mean," value. By symmetry, all three "mean square components" of velocity are equal:
<vx²> = <vy²> = <vz²> = <v²>/3.
The average force on the piston due to N atoms is F = Nm<vx²>/d, so the pressure is p = F/A = Nm<vx²>/V, where V = Ad is the volume of the container. The average energy of a particle is
<KE> = ½m<v²> = (3/2)m<vx²>;

therefore, pressure, number density (n = N/V), and average kinetic energy are related by the following formula,
p = (2/3) n <KE>.    (3-1)

Notice that KE represents the translational kinetic energy of a particle.
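Equation (3-1) can be inverted to estimate the average translational kinetic energy of an air molecule at STP (a Python sketch; the number density is taken from the 22.4-liter molar volume quoted earlier):

```python
# <KE> = (3/2) p / n, from p = (2/3) n <KE>  (Eq. 3-1)
NA = 6.022e23            # Avogadro's constant
p = 1.013e5              # pressure at STP, Pa
n = NA / 22.4e-3         # number density at STP, molecules per m^3

KE_avg = 1.5 * p / n     # average translational KE per molecule, J
print(KE_avg)            # about 5.7e-21 J
```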

C. Equipartition Theorem
It is important to realize that in this view of classical particles the average translational kinetic energy is the sum of three average values,
<½mv²> = <½mvx²> + <½mvy²> + <½mvz²>,    (3-2)

each of which, by symmetry, must equal the same value. If the particle is a diatomic molecule, such as N2 or O2, there is also a rotational kinetic energy, with an average value given by

<½Iω₁²> + <½Iω₂²>.    (3-3)

There are two terms because there are two possible axes of rotation normal to the molecular bond. (The quantum mechanical nature of molecules dictates that the energy corresponding to a rotational axis along the bond is not significant.)
The surprising consequence of statistical mechanics is that each of the "quadratic terms" in the energy (e.g., <½mvx²> and <½Iω₂²>) has the same thermal-average value. This fact is known as the Equipartition Theorem. Sometimes stated "each quadratic degree of freedom of the system has exactly the same thermal-average energy," the Equipartition
Theorem is the classical basis for defining a temperature in terms of the microscopic motions of the particles. We will derive it later in the course.
We empirically define an absolute temperature T such that each of the quadratic terms has a thermal-average energy given by
<energy per quadratic term> = ½kT,    (3-4)

where k is the Boltzmann constant, 1.381 × 10⁻²³ J/K, and T is the absolute temperature in kelvin. The Boltzmann constant relates microscopic motion to a practical definition of temperature, the Kelvin scale. (The consistency of this definition with 1/T = dS/dU given in Chapter 2 will be shown later.)
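For a sense of scale, the thermal energy per quadratic term at room temperature is tiny on everyday scales (a Python sketch using the constants above):

```python
k = 1.381e-23        # Boltzmann constant, J/K
T = 300.0            # room temperature, K

E_term = 0.5 * k * T                 # energy per quadratic term, J
print(E_term)                        # about 2.1e-21 J
print(E_term / 1.6e-19)              # the same energy in eV: about 0.013 eV
```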
Note on definition of temperature scales: By international convention, the Kelvin scale is an absolute temperature scale (0 K is absolute zero) that takes the triple point of water as exactly
273.16 K. This temperature is 0.01 K above the freezing point of water at atmospheric pressure.
The Celsius scale is defined by: degrees Celsius = T(K) – 273.15. At atmospheric pressure, water freezes at approximately 273 K (0°C) and boils at approximately 373 K (100°C).

Equation (3-4) implies that the monatomic particle has an average thermal energy of (3/2)kT, the diatomic molecule has a thermal energy equal to (5/2)kT, and so on.
Therefore total thermal energies of monatomic and diatomic gases are
U = (3/2)NkT = (3/2)nRT    (monatomic gas)    (3-5)

U = (5/2)NkT = (5/2)nRT    (diatomic gas)    (3-6)

where N is the number of molecules in the gas, n = N/NA is the number of moles, and

R = NAk = 8.314 J/(mol·K)    (3-7)
  = 1.987 calorie/(mol·K)
  = 0.082 liter·atm/(mol·K)

is the ideal gas constant. 1 calorie = 4.184 J is the heat required to raise the temperature of 1 gram of water at 1 atmosphere from 14.5°C to 15.5°C. Note that

1 atm = 1.013 × 10⁵ Pa = 1.013 × 10⁵ N/m²,

and

1 liter·atm = 101.3 joules.
In summary, the internal energy of an ideal gas can often be written,
U = αNkT = αnRT.    (3-8)

The coefficient α can be experimentally determined by observing how much energy (in the form of heat) is required to raise the temperature of the gas by 1 degree at constant volume, i.e., the heat capacity,

CV = (dU/dT)V = αNk = αnR,    (3-9)

for a temperature range in which α is constant. The heat capacity per mole, or molar specific heat, is designated by a lowercase letter,

cv = αR = α · (8.314 J/(K·mol)).    (3-10)
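Equations (3-9) and (3-10) are easy to evaluate; for instance, a Python sketch of the molar specific heats implied by Eqs. (3-5) and (3-6):

```python
R = 8.314                     # ideal gas constant, J/(K*mol)

cv_monatomic = 1.5 * R        # alpha = 3/2: about 12.5 J/(K*mol)
cv_diatomic  = 2.5 * R        # alpha = 5/2: about 20.8 J/(K*mol)
print(cv_monatomic, cv_diatomic)
```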

You might wonder why we did not consider the vibrational energy of the diatomic molecule. Clearly there is a potential energy associated with the molecular bond that is quadratic in displacement,
<½κu²>,    (3-11)

where κ is the spring constant of the bond, and u is the stretch of the molecular bond from its equilibrium value. This perfectly valid contribution to the total thermal energy of the molecule is actually not observed in the heat capacity of common diatomic molecules at room temperature. At elevated temperatures, however, the contribution from molecular vibrations does appear, which for the diatomic molecule increases α in the heat capacity formula to 7/2 (counting both the internal kinetic and potential energy terms). What is going on?
We will see in Chapter 8 that a minimum thermal energy is required to excite the vibrational modes of a molecule. For the molecules N2 and O2, the thermal energy at 300 K is insufficient to get them vibrating. However, for a molecule such as CO2, which has low-frequency torsional modes, the thermal energy at 300 K is sufficient to excite these vibrations. Consequently, in a gas of CO2 molecules, vibrations do contribute to the heat capacity at room temperature.
The lesson is that we must be a bit careful in applying Eqs. (3-8)–(3-10) because α (and thus the heat capacity) is not necessarily constant over wide temperature ranges. For example, for the diatomic H2, cv/R changes from 3/2 to 5/2 as the rotational modes become thermally active, and from 5/2 to 7/2 as the vibrational modes become thermally active:

[Figure: molar specific heat cv versus temperature (log scale from about 10 K to 1000 K) for an ideal diatomic gas, rising in steps from (3/2)R (translation) to (5/2)R (rotation active) to (7/2)R (vibration active).]

D. Equipartition Applied to a Solid
As a natural extension of these ideas, we consider the heat capacity of a solid material.
In Chapter 1 and Appendix 1, we saw what the vibrational modes look like for a collection of masses bonded together by springs, analogous to atomic bonds. Because the vibrational modes have both kinetic and potential energy components, each contributing
½kT, the Equipartition Theorem applied to solids says:
In the classical limit, each normal mode of vibration in a solid has an average thermal energy of kT.
Because there are 3N vibrational modes in a solid containing N atoms, the internal energy and heat capacity at constant volume are,


U = 3NkT    and    CV = 3Nk.    (3-12)

And the molar specific heat is

cv = 3R = 25 J/(K·mol).    (3-13)

That was almost too easy. Does it mean that all solids, no matter what their atomic constituents or bond strengths, have the same heat capacity? Well, almost all:
• The Equipartition Theorem is valid only at sufficiently high temperatures, a condition which may differ from solid to solid. The specific heat of diamond at room temperature, for example, is considerably less than 3R. The reason is related to the one given for the molecular vibrations of H2 gas. We shall examine this effect later in the course.

• This analysis considers only the contribution of the heavy atomic cores. The kinetic energies associated with "free electrons" in a metal do contribute to the specific heat, but their effect is only apparent at very low temperatures.

The specific heat of a solid is often given in units of J/(K·kg). (To convert, multiply the molar specific heat by the number of moles per kilogram.)
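As an illustration of the conversion, here is a Python sketch for aluminum (molar mass about 27 g/mol, a standard tabulated value):

```python
R = 8.314                 # ideal gas constant, J/(K*mol)
molar_mass = 0.027        # kg/mol for aluminum

cv_molar = 3 * R                    # Dulong-Petit value, about 25 J/(K*mol)
cv_per_kg = cv_molar / molar_mass   # about 920 J/(K*kg)
print(cv_per_kg)
```

This is close to the measured room-temperature specific heat of aluminum, near 900 J/(K·kg).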

E. Ideal Gas Law
Having defined temperature empirically, we continue examining the properties of the ideal gas. Plugging the Equipartition result for the translational kinetic energy, namely
<KE> = (3/2)kT, into Equation (3-1), we immediately have

p = nkT    (n = N/V).    (3-14)

The constant α does not appear in this equation, as it does in the total internal energy, Equation (3-8). The reason is that the pressure depends only on the translational kinetic energy, not on rotation and vibration. The average translational kinetic energy is equal to (3/2)kT for any ideal gas (even if CV is changing with T), so the 3/2 in this equation cancels the 2/3 in Equation (3-1) for all ideal gases. The ideal gas law is commonly written in the forms,

pV = NkT,    (3-15)
pV = nRT,

where N is the number of molecules and n = N/NA is the number of moles in the gas. The letter "n" is italicized to distinguish it from n = N/V, the number density.


F. Distribution of Energies in a Gas
We have just seen the need for statistics in our treatment of the ideal gas. In fact, we did not deal with the actual distribution of molecular velocities but simply used the average square velocity to characterize the thermal-average kinetic energy,
<KE> = ½m<v²> = ½m(<vx²> + <vy²> + <vz²>).    (3-16)

For brevity, set KE = E. The average square of velocity doesn’t tell us the actual distributions of velocity or energy. For example, here are several hypothetical distributions that could give the same average energy of a molecule, marked by the dashed lines:
[Figure: three hypothetical probability densities P(E) versus E, each giving the same average energy, marked by dashed lines.]

P(E) is called the probability density. Specifically, P(E)dE is the probability of finding a given molecule within a small energy range, dE. Therefore, P(E) has units of (energy)–1.
The integral over the entire energy distribution (area under the P(E) curve) equals the probability of finding the molecule in the container,
∫ P(E) dE = 1.    (3-17)

Later in the course, statistical mechanics will show us that the kinetic energy distribution of molecules in an ideal gas is described by
P(E) = C E^(1/2) exp(−E/kT)    (ideal gas)    (3-18)

which looks something like the distribution plotted at the far right. The prefactor C is determined by setting ∫ P(E) dE = 1, where the integral runs over all values of E (Chapter 9(C)). When dealing with single particles, a common unit of energy is the electron volt (eV), the amount of energy an electron gains in being accelerated across an electric potential of 1 volt:

1 eV = 1.6 × 10⁻¹⁹ coulomb × 1 volt = 1.6 × 10⁻¹⁹ J.

In eV units, the Boltzmann constant is k = 8.617 × 10⁻⁵ eV/K.
Because the probability density P(E) has units of (energy)–1, in order to calculate a probability you must multiply P(E) by an energy interval. This is not so unreasonable: the probability of finding a molecule with energy exactly 0.01284759839485738 eV is essentially zero, but the probability that the particle has an energy between 0.011 and
0.013 eV is a definite number, given approximately by
Probability = P(E) ΔE = P(0.012) × (0.002).    (3-19)
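This estimate can be checked numerically. The Python sketch below normalizes P(E) = C·E^(1/2) exp(−E/kT) by a crude numerical integral (the grid spacing and 50 kT cutoff are arbitrary choices) and evaluates the probability for the 0.011–0.013 eV window at T = 300 K:

```python
import math

kT = 8.617e-5 * 300          # kT at 300 K in eV, about 0.026 eV

def unnorm(E):
    # un-normalized P(E), proportional to sqrt(E) exp(-E/kT)
    return math.sqrt(E) * math.exp(-E / kT)

# normalize with a simple rectangle-rule integral out to 50 kT
dE = kT / 2000
total = sum(unnorm(i * dE) * dE for i in range(1, 100000))

# probability of finding E between 0.011 and 0.013 eV
prob = unnorm(0.012) * 0.002 / total
print(prob)                  # a few percent
```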

Probability is not a fuzzy quantity. Using the probability density, we can determine many useful measurable quantities of a many-particle system. For example, the average number of molecules in the small range ⌬E around E is simply
N(E) = N P(E) ΔE,    (3-20)

where N is the total number of molecules in the container.
Also, the average value, or mean value, of a quantity X for a given probability distribution is:
<X> = ∫ X P(E) dE.    (3-21)

For example, the average energy of a particle in an ideal monatomic gas equals:
<E> = C ∫ E · E^(1/2) exp(−E/kT) dE.    (3-22)

Later we will compute this integral and show that it yields <E> = (3/2)kT, in accord with the Equipartition Theorem.
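The value of this integral can be anticipated numerically. In the dimensionless variable x = E/kT, the average is ∫x·x^(1/2)e^(−x) dx divided by the normalization ∫x^(1/2)e^(−x) dx; a crude Python sketch of both integrals gives 3/2:

```python
import math

dx = 0.001
xs = [i * dx for i in range(1, 50000)]   # integrate x = E/kT out to 50

num = sum(x ** 1.5 * math.exp(-x) * dx for x in xs)   # integral of x * sqrt(x) e^-x
den = sum(x ** 0.5 * math.exp(-x) * dx for x in xs)   # normalization integral

print(num / den)    # approaches 1.5, i.e. <E> = (3/2) kT
```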
Notice that the probability density for the ideal gas contains the factor exp(–E/kT).
This function is known as the Boltzmann factor and occurs in many problems of statistical mechanics. At this point, I hope that you are asking yourself, “What’s the origin of the Boltzmann factor?” and “Where does the factor of E1/2 come from?” When you complete this course you will know the answers to these key questions and many more.


Exercises
1) In Chapter 2 we defined temperature in terms of the change in entropy per unit energy, 1/T = dS/dU. From kinetic theory we have U = (3/2)NkT for the ideal monatomic gas. Using these relations, determine the change in entropy S2 − S1 for the ideal gas as its energy increases from U1 to U2 at constant volume. What is the functional form of entropy for an ideal monatomic gas? (Hint: Eliminate T from the two equations and integrate, remembering ∫dx/x = ln(x).)

2) a) Assuming that your lung capacity is 2 liters, calculate approximately the total kinetic energy of the gas in your lungs.

b) What is the weight of 2 liters of N2 gas at 300 K in units of a penny (1 gram)?

c) How high would a penny have to be dropped in order to reach a c.m. kinetic energy equal to the total energy of gas molecules in your lungs (neglect air resistance)?

d) Compare the final velocity of the penny to the average velocity of N2 molecules at 300 K.

e) How much heat is required to raise the gas in your lungs 3 K?

3) If you drop a 1 kg block of aluminum from a height of 1 meter and assume that all the center-of-mass kinetic energy of the block is converted into heat as it strikes the floor, what is the rise in temperature of the block? (Al has a molar mass of 27 g.)


4) a) Plot a few points of the function x1/2e–x as a function of x and indicate where the peak and average energies are.

x              0.1    0.25    0.5    1.0    1.5    2.0    3.0
x^(1/2) e^(−x)

[Graph: axes for plotting x^(1/2) e^(−x) versus x = E/kT; vertical scale 0 to 0.4, horizontal scale 0 to 3.]

This is the function describing the distribution of kinetic energies of an ideal gas, (E/kT)^(1/2) e^(−E/kT) at temperature T, as introduced in this chapter and explained in detail in Chapter 9.
b) Estimate the probability that the particle will have an energy E between 0.5 kT and 1 kT. [Hint: Approximate P(E)ΔE by a rectangle of width 0.5 kT and compare its area to the total area under the curve.]

CHAPTER 4

Ideal-Gas Heat Engines

A. The First Law of Thermodynamics
When work is done on a system by its surroundings, conservation of energy takes the following form:
Won = Δ(KEcm) + ΔU.    (4-1)

All thermodynamic systems that we will consider have their center of mass at rest, so Δ(KEcm) = 0, and U is the total energy of the system in the c.m. frame, i.e., the internal energy. However, if thermal energy is transferred to the system from its surroundings, there is another input term on the left. The transfer of thermal energy into the system is known as heat and is designated by Q. The First Law of Thermodynamics states: Internal energy is a state function, work and heat are forms of energy transfer, and total energy is conserved:

Total energy inputs = Δ(total system energy), or
Won + Q = ΔU.    (FLT)    (4-2)

In dealing with engines, it is convenient to define the work done by the system. In this case, the First Law is written,
ΔU = Q − Wby.    (FLT)    (4-3)


In other books you may find the First Law written as ΔU = Q + W, where it is implied that W is the work done on the system. In this course we find it best to simply label W with a subscript to avoid confusion. We do not bother with a subscript for Q, defining Q = heat input to the system.

B. Quasi-static Processes and State Functions
In dealing with work and heat engines, we will consider only quasi-static processes of a homogeneous gas. A “homogeneous” gas is one in which the pressure and temperature are uniform throughout, and “quasi-static” means that the pressure and temperature are well-defined at all times. So, an example of a process that is not quasi-static is one in which the gas piston moves faster than the gas can react. Quasi-static basically means
“slow” compared to the “settling time” of the gas (and its surroundings).
This statement implies that, given sufficient time, the gas will settle into a well-defined condition, known as equilibrium. In this context, a quasi-static process is one in which the gas is always very close to equilibrium with itself and its surroundings.
We envision the gas confined to a container with a movable frictionless piston:

[Figure: gas confined by a movable frictionless piston. State functions: N, V, U, p, T. Process energies: Q, W.]

Recall that work and heat are energy in transit. They are “process energies” that cause a change in the state of the system. The internal energy of the gas, the number of molecules, the volume, the pressure, and the temperature are state functions that are always well-defined when the system is in equilibrium.

C. Isothermal and Adiabatic Processes—Reversibility
The most important construction for characterizing the thermodynamic processes of gases is the pV diagram. A quasi-static process can be drawn as a curve on this diagram.
In proceeding from point A to point B on this curve, the work done by the gas is simply the area under the curve,
Wby = ∫ p dV.    (4-4)

The two most important processes that we will study are the isothermal process and the adiabatic process. In the isothermal process, the gas is always in equilibrium with a thermal reservoir, which can exchange heat Q with the gas. In the adiabatic process, the gas is thermally isolated from its surroundings.

The isothermal process (T = constant) is represented by p = NkT/V.

[Figure: isotherm on a p–V diagram. FLT: Wby = Q because ΔU = αNkΔT = 0.]

The adiabatic process (Q = 0) is represented by p = (constant)/V^γ, with γ = 5/3 for a monatomic gas.

[Figure: adiabat on a p–V diagram. FLT: Wby = −ΔU because Q = 0.]
The origin of the relation pV^γ = constant for an adiabatic process follows from the First Law of Thermodynamics (here, ΔU = −Wby), the Equipartition Theorem, and the ideal gas law. For infinitesimal changes in T and V,

αNk dT = −p dV = −(NkT/V) dV,

which implies

α dT/T = −dV/V
α ∫(dT/T) = −∫(dV/V)
α ln T = −ln V + constants,


yielding

VT^α = constant, or pV^γ = constant    (adiabatic process)

with γ = (α + 1)/α. These functional forms apply only in the temperature range where α and γ are constant.
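This result is easy to check numerically: integrate α dT/T = −dV/V in small steps and verify that pV^γ stays constant (a Python sketch for a monatomic gas, with N·k set to 1 in arbitrary units):

```python
alpha = 1.5                   # monatomic ideal gas
gamma = (alpha + 1) / alpha   # = 5/3
Nk = 1.0                      # N*k in arbitrary units

# start from V = 1, T = 300 and expand adiabatically in small Euler steps
V, T = 1.0, 300.0
start = (Nk * T / V) * V ** gamma      # initial value of p V^gamma

dV = 1e-5
while V < 2.0:
    T += -(T / (alpha * V)) * dV       # alpha dT/T = -dV/V
    V += dV

end = (Nk * T / V) * V ** gamma        # final value of p V^gamma
print(start, end)                      # nearly equal
```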
Performed in quasi-static fashion (slowly), the isothermal and adiabatic processes described above are reversible. In the isothermal process, involving energy exchange with a thermal reservoir at the same temperature as the gas, the gas is able to perfectly transfer heat from the reservoir into work (Q = W), and vice versa, without any change in the internal thermal energy. In the adiabatic process, pushing the piston down causes the gas to heat up. As the piston moves back to the original position, the gas does work on the surroundings at the expense of internal energy, and the gas temperature decreases to its original value. Perfectly reversible.
It is important to note that in order to determine whether a process is reversible or not, we must consider any changes in the surroundings. Consider the following process. Is it reversible?

[Figure: constant-volume process on a p–V diagram: a vertical line from pressure p1 up to p2 at fixed V.]
Obviously, no work was done in this process because there was no volume change.
However, the pressure and temperature have changed by an input of heat. The initial temperature is T1 = p1V/Nk and the final temperature is T2 = p2V/Nk. In the simplest case, this heat came from a single thermal reservoir at temperature T2. The gas initially at T1 was brought into contact with this reservoir, and the gas+reservoir approached equilibrium by the transfer of heat.
The process described in the last paragraph is clearly irreversible. The gas will never cool down and give the energy back to the reservoir. There is a fundamental principle at work here (another statement of the Second Law):
Any process that involves an exchange of heat between two systems at different temperatures is irreversible.

Actually, it is possible to envision a process that looks almost like the one in the graph above but is reversible. Imagine that the straight line is replaced by a lot of little segments of alternating isothermal and adiabatic processes:

[Figure: the same pressure rise from p1 to p2 at fixed overall V, drawn as many small alternating isothermal and adiabatic segments.]
This series of processes is reversible, but there is a cost to pay. If we use n isothermal segments, then we need n thermal reservoirs at successively higher temperatures, Tn.
Plus, we would need to remove and connect them before and after each adiabatic process. [Aside: More practically, one can reversibly transfer energy from a reservoir at temperature T2 to a gas at T1 by using a second gas and piston. The temperature of this secondary gas is adjusted adiabatically to match the temperature of the reservoir or primary gas so that heat is always transferred isothermally. See Carnot cycle below.]
During isothermal and adiabatic processes, some important state functions of the system remain constant. Let’s see what they are.
Isothermal processes are represented by the family of curves p = NkT/V for different reservoir temperatures T:

Isotherms (ideal gas): pV = NkT = constant, U = constant.

[Figure: family of isotherms T1 < T2 < T3 on a p–V diagram.]


If the isothermal process is performed slowly, the gas is arbitrarily close to thermal equilibrium with the reservoir at constant T. The transfer of heat occurs between two systems at nearly the same temperature. (If the temperature were exactly the same, no heat would flow.) A slight expansion of the gas (doing work) is accompanied by a slight cooling (causing heat flow), and this way the gas converts heat from the reservoir to work on the system holding the piston. The isotherms are constant-energy curves for the ideal gas.
Is there a quantity like energy that remains constant during an adiabatic expansion or contraction of the ideal gas? Well, that is the million-dollar question.

Adiabats (ideal gas): pV^γ = constant.

[Figure: family of adiabats S1, S2, S3 on a p–V diagram.]
The answer is yes. There is a state function of the gas—just as important as internal energy—which is constant during an adiabatic process. It is none other than the entropy, designated by S. The adiabats are constant-entropy curves. Let us briefly see how this happens.

D. Entropy of the Ideal Gas—a First Look
Before statistical mechanics, scientists and engine designers realized that there was a state function closely associated with the adiabatic process. Consider the First Law of
Thermodynamics for small energy transfers,

dQ = dU + p dV.    (4-5)

It is clear from this equation that Q itself is not a state function because p ΔV depends on the path taken between two points on the pV diagram. (State functions, such as U, V, p, and T, are well defined for every point on the diagram.) Like work, dQ is not a "differential" of a function, a fact that is emphasized in some texts by writing the inexact differentials as đQ and đW. Using the definition of heat capacity, dU = CV dT, and the ideal gas law,

dQ = CV dT + (NkT/V) dV.


Now here is the creative step: If this equation is divided by T, the right side becomes an exact differential; i.e., it represents a change in a state function S:

dQ/T = CV (dT/T) + Nk (dV/V)
     = d(CV ln(T) + Nk ln(V))        (using d(ln x)/dx = 1/x)
     = dS,

with the definition

S = CV ln(T) + Nk ln(V) + constant.    (ideal gas with fixed CV)    (4-6)

S is the entropy. Using CV = αNk and γ = (α + 1)/α, you can show that S = Nk ln(VT^α) + constant = αNk ln(pV^γ) + constant. These equations show that entropy is constant for an adiabatic process (Q = 0) and that a change in entropy is closely linked to the heat input to the system:

dS = dQ/T,    (4-7)

which is true even if CV is a function of T. The adiabats shown in the figure above are constant-entropy curves. We shall see later that this logarithmic form of entropy for the ideal gas has the requisite properties that we postulated for entropy in Chapter 2. We will discover the microscopic origin of this special property through statistical mechanics.
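The claim that S is a state function can be tested numerically by evaluating ∫dQ/T along two different quasi-static paths between the same end states and comparing with Eq. (4-6). A Python sketch for one mole of a monatomic ideal gas (the particular end states are arbitrary choices):

```python
import math

Nk = 8.314            # one mole: N*k = R, in J/K
Cv = 1.5 * Nk         # monatomic gas

T1, V1 = 300.0, 1.0   # initial state
T2, V2 = 450.0, 3.0   # final state

def dS_isochoric(Ta, Tb, steps=100000):
    # dQ = Cv dT, so dS = Cv dT / T (midpoint rule)
    dT = (Tb - Ta) / steps
    return sum(Cv * dT / (Ta + (i + 0.5) * dT) for i in range(steps))

def dS_isothermal(T, Va, Vb, steps=100000):
    # dQ = p dV = (Nk T / V) dV, so dS = Nk dV / V (midpoint rule)
    dV = (Vb - Va) / steps
    return sum(Nk * dV / (Va + (i + 0.5) * dV) for i in range(steps))

# The heat Q absorbed differs between these paths, but the entropy change does not.
path_A = dS_isochoric(T1, T2) + dS_isothermal(T2, V1, V2)   # heat first, then expand
path_B = dS_isothermal(T1, V1, V2) + dS_isochoric(T1, T2)   # expand first, then heat
exact  = Cv * math.log(T2 / T1) + Nk * math.log(V2 / V1)    # Eq. (4-6)
print(path_A, path_B, exact)     # all three agree
```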

E. Converting Heat into Work
The principal purpose of a heat engine is to convert heat into work. At the outset, you might ask what the practical limitations of this process are. After all, we have just seen that an isothermal process is able to convert heat into work (Q = W), and vice versa, with
100% efficiency! Specifically, for an isothermal process, the work done by an ideal gas is
Wby = ∫[Va to Vb] p dV = NkT ∫[Va to Vb] dV/V = NkT ln(Vb/Va).    (4-8)

[Figure: isotherm (T = constant) on a p–V diagram between Va and Vb. Wby = area under the curve. Isothermal process (ideal gas): Wby = Q.]


From this property of an isothermal process, one might naively expect that a properly designed heat engine could have nearly 100% efficiency. Unfortunately, once we have done work with the expanding piston, we must reset the system so that more work can be done. Exactly how the system is reset determines the overall efficiency of the heat engine. Efficiency equals (work done)/(heat used) over a complete cycle of the engine.
All practical heat engines undergo a cyclic (closed loop) process. In Appendix 2 we consider a specific engine cycle known as the Stirling cycle. It employs the isothermal and isochoric (constant-volume) processes that we have just considered. As an exercise you will calculate the efficiency of such an engine working between two thermal reservoirs, one at 0°C and the other at 100°C (i.e., 273 K and 373 K). A general diagram for a heat engine is shown below. In this model, a source of useful energy is represented by a thermal reservoir at temperature Th and the environment is represented by a thermal reservoir at temperature Tc. The engine works between these two reservoirs extracting useful energy from the hot reservoir, doing work with it, and dumping excess heat into the cold reservoir:
[Diagram: heat engine operating between a thermal reservoir at Th and a thermal reservoir at Tc. Heat Qh flows in from the hot reservoir, work Wby = Qh − Qc is delivered, and heat Qc is dumped to the cold reservoir.]

For reasons that we will see later in the course, the most efficient engine operating between two temperatures is one that uses only reversible processes (isothermal and adiabatic). It is called the Carnot Engine. The Carnot engine is the standard by which we measure all practical engines. No cycle is more efficient than the Carnot cycle.
The Carnot cycle consists of two isothermal and two adiabatic processes:

[Figure: Carnot cycle on a p–V diagram: 1→2 along the Th isotherm, 2→3 down an adiabat, 3→4 along the Tc isotherm, and 4→1 up an adiabat.]


The efficiency of the Carnot cycle can be determined using the relations derived above. The isothermal work by the gas is

W12 = NkTh ln(V2/V1) = Qh    (4-9)
W34 = NkTc ln(V4/V3) = −Qc.

The input heat Qh from the hot reservoir and the output heat Qc to the cold reservoir are defined as positive. The adiabatic processes obey

V1 Th^α = V4 Tc^α    and    V2 Th^α = V3 Tc^α,    (4-10)

which implies that

V1/V2 = V4/V3.    (4-11)

Therefore, from Eqs. (4-9),

Qh/Qc = Th/Tc.    (Carnot cycle) (4-12)

The total work done by the gas in any closed cycle (not just Carnot) is simply the difference between the heat absorbed and the waste heat expelled,

Wby = Qh − Qc,    (in general) (4-13)

because ΔU = 0 for a closed cycle. Therefore, the efficiency of any heat engine is defined by

ε = Wby/(heat input) = (Qh − Qc)/Qh = 1 − Qc/Qh.    (in general) (4-14)

For the Carnot cycle, this result becomes

ε = 1 − Tc/Th.    (Carnot efficiency) (4-15)
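For the two reservoirs used in the Stirling exercise (0°C and 100°C), the Carnot limit evaluates to (a Python sketch):

```python
Th = 373.0    # hot reservoir, K
Tc = 273.0    # cold reservoir, K

eff = 1 - Tc / Th
print(eff)    # about 0.27: at most 27% of Qh can become work
```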

Notice from Eq. (4-12) that Qh/Th = Qc/Tc in the Carnot cycle. Q/T is something special that is conserved in this reversible cycle. Yes, it’s the entropy. The fact that Q/T is related to an intrinsic property of the system was realized long before the microscopic origin of entropy was discovered.


In working with thermodynamic cycles, you will need to determine the work performed by the gas for an adiabatic process. You should be able to derive the required equation by applying the concepts already introduced in this Chapter. The First Law of Thermodynamics tells us
ΔU = −Wby.    (adiabatic process) (4-16)

For an ideal gas, U = αNkT, implying that

Wby = αNk(T1 − T2).    (adiabatic work) (4-17)

Using the ideal gas law, pV = NkT,

Wby = α(p1V1 − p2V2).    (adiabatic work) (4-18)

F. Refrigerators and Heat Pumps
The Carnot engine run backward (i.e., reversing the arrows) is a refrigerator or heat pump. In these devices, we provide work to make heat flow from a cold reservoir to a hot reservoir. This is not a violation of the Second Law of Thermodynamics, which says that heat will not spontaneously flow from cold objects to hot objects.
To appreciate these processes, consider the following questions (circle your answers and we will see how good your intuition is):
1) A heat pump can extract heat from the cold outdoors to warm the inside of your house. a) True, b) False.
2) The amount of work needed to overcome a heat leak Q from the inside of the house to the colder outdoors is
a) less than Q,
b) equal to Q,
c) greater than Q.
The following diagrams show the distinction between a heat engine and a refrigerator
(used to cool your food) or a heat pump (used to warm your house in winter or cool it in summer). For a refrigerator, room temperature is Th, and for a heat pump, room temperature is Th in winter or Tc in summer. And, yes, as improbable as it sounds, a heat pump can extract heat from a cold winter day. All it takes is work, usually provided in the form of electrical energy.


[Figure: Qh and Qc are defined as positive; arrows (and signs in equations) show the direction of flow. Left: Heat Engine, a cycle on the p–V diagram between isotherms Th and Tc with Qh in and Qc out. Right: Refrigerator or Heat Pump, the same cycle run in reverse, with Qc in and Qh out.]

Here is an example of a refrigerator problem: A refrigerator keeps the food cold at 5°C despite a heat leakage of 100 J per second (= 100 W), which is compensated by Qc = 100 J per second. Assuming that it has a Carnot efficiency and that the ambient temperature is 20°C, what electrical power is required to run this device?
We begin the fridge problem by making a simple sketch like the one shown at the right.
The First Law tells us that the work done on the gas over the entire cycle is:
Won = Qh − Qc.    (4-19)

In these cyclic problems, we use this energy-conservation equation to calculate power (energy flow) in joules/sec = watts. For a Carnot cycle,

Qc/Qh = Tc/Th.    (4-20)

[Sketch: refrigerator diagram with Qc = Qleak extracted from the cold interior, work Won supplied, and Qh rejected to the room.]

In order to keep the fridge at a constant Tc = 5°C, we must remove Qc = 100 J per second (offsetting the 100 W leakage). Therefore we write:

Won = Qc (Th/Tc − 1).    (4-21)

Remembering to use Kelvin units, you will find that only 5.4 J of input work per second
(5.4 watts of power) is required to overcome the 100 W leakage. Amazing. Sounds like a violation of some law, but remember we are using the work simply to transfer energy from one reservoir to another. Energy conservation is not violated. The result is correct. A more refined way of showing the energy flow for refrigerator or heat pump is:


[Diagrams. Fridge Problem: heat Qc = Qleak is extracted from the Fridge (5°C), work Won is supplied, and Qh is rejected to the Kitchen (20°C). Heat Pump Problem: heat Qc is extracted from the Outside (0°C), work Won is supplied, and Qh = Qleak is delivered to the House (20°C).]

Notice that the input “cross sections” add up to the output cross section (Won + Qc = Qh). Reverse the arrows and change Won to Wby and you have a heat engine.
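If you want to verify the 5.4-watt figure yourself, here is a minimal sketch (the function and variable names are our own, not the text's):

```python
# A quick numeric check of the fridge result; the function and variable
# names are our own, not from the text.

def carnot_fridge_power(q_c_watts, t_cold_k, t_hot_k):
    """Work input per second for an ideal fridge: W_on = Q_c (T_h/T_c - 1), Eq. (4-21)."""
    return q_c_watts * (t_hot_k / t_cold_k - 1.0)

# Fridge at 5 C (278 K) in a 20 C (293 K) kitchen, offsetting a 100 W leak:
print(round(carnot_fridge_power(100.0, 278.0, 293.0), 1))  # 5.4 (watts)
```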
Now, do you want to reconsider your answer to question 2 above? Let’s say that we need to offset 100 watts of heat leakage through the windows with Qh = 100 J per second from the heat pump in order to keep the interior of the house at 20°C when the outside air is at 0°C. What electrical work must be done to overcome this leakage?
Complete the sketch at the left following the rules: a) the hot reservoir is always at the top and the cold reservoir at the bottom, b) the arrows decide the directions of energy flow, so all quantities are positive, c) depending on the problem, the heat leak could be from the hot reservoir (Qleak = Qh) or to the cold reservoir (Qleak = Qc), d) the form of the conservation equation (Won = Qi – Qj) follows from the flow diagram.
As before, write the conservation equation in terms of the one known heat flow (Qc or
Qh) and use the Carnot ratios:
W = Q(                    )
The answer I get is 6.8 watts. Interesting, eh? Only 6.8 watts of input power are needed to overcome 100 watts of leakage. Did you answer the question correctly?
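One way to check the 6.8-watt answer numerically (a sketch; the names are ours, not the text's):

```python
# Here the known flow is Qh = 100 J/s into the house, so the conservation
# equation and Carnot ratio give W_on = Qh - Qc = Qh (1 - Tc/Th).

def carnot_heat_pump_power(q_h_watts, t_cold_k, t_hot_k):
    return q_h_watts * (1.0 - t_cold_k / t_hot_k)

# House at 20 C (293 K), outside at 0 C (273 K), 100 W leak through the windows:
print(round(carnot_heat_pump_power(100.0, 273.0, 293.0), 1))  # 6.8 (watts)
```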


Exercises
1) Work through each process of the Stirling cycle and determine the heat transferred to the gas Qi, the work done by the gas Wi, and the internal energy change ΔUi. Assume that the gas is 0.1 mole of argon and Th = 373 K, Tc = 273 K and Vb/Va = 2. Fill out the table in units of joules, and check that the First Law of Thermodynamics, Qi – Wi = ΔUi, is obeyed in each process. Determine the efficiency, ε = work/heat-input = (W2 + W4)/(Q1 + Q2).

[p-V diagram: the Stirling cycle 1→2→3→4 between the isotherms Th and Tc, with volumes Va and Vb.]

Process (i) | Qi | Wi | ΔUi
1           |    |    |
2           |    |    |
3           |    |    |
4           |    |    |


2) Nitrogen gas initially at 300 K and atmospheric pressure in a volume of 1 liter is adiabatically expanded to 2 liters, then isothermally returned to 1 liter. How much heat is required to restore the gas to its initial conditions? Suggestion: Draw the p-V diagram and complete the following tables:

point | p | V | T
a     |   |   |
b     |   |   |
c     |   |   |

process | Q | Wby
1       |   |
2       |   |
3       |   |
3) a) Using the FLT (Eq. (4-5)) and the ideal gas law, show that the heat capacity at constant volume is CV = (dU/dT)V and the heat capacity at constant pressure is Cp = CV + nR.
b) For an ideal gas with U = αnRT, show that Cp = γCV, with γ = (α + 1)/α.

c) The temperature of a thermally isolated ideal gas increases by a factor of 1.219 when its volume is decreased by a factor of 2. What are CV and Cp for one mole of this gas (i.e., the specific heats cv and cp)? Describe the gas.


CHAPTER 5

Statistical Processes I: Two-State Systems

A. Macrostates and Microstates
The word “state” is probably the most widely used term in the study of thermodynamics. In fact, there are two quite different ways to describe the state of a system in equilibrium. The first is its macrostate, which is a specification of its large-scale properties, such as U, V, p, T. The second is its microstate, which is a detailed specification of the condition of all of its particles. We will illustrate these two important concepts with two examples: a system of spins, and an ideal gas of particles.
The problem boils down to statistics, like a game of dice. When you roll two dice, each with equal probability of turning up 1 through 6, is it more probable that their sum will be “2” or “7”? Answer: There is only one way of rolling “2” (1+1), but there are 6 ways of rolling “7”
(1+6, 2+5, 3+4, 4+3, 5+2, and 6+1). Therefore, rolling “7” is six times more likely than rolling “2”. In statistics, each possible roll (say 2 + 5) is a microstate, and the collection of all those combinations which yield a sum of “7” is a macrostate.
A simple application of statistics is flipping a coin. In N tosses, what is the probability of getting Nh heads and Nt = N – Nh tails? To answer this problem, we have to know the number of ways one can throw Nh heads


in N tosses, which we denote as Ω(N, Nh). This problem is commonly known as the “n choose m” problem (here “N choose Nh”) and has the solution:

Ω(N, Nh) = N!/(Nh! Nt!) = N!/(Nh! (N − Nh)!)

where N! is defined by the example: 5! = 5 × 4 × 3 × 2 × 1 = 120. This equation represents the Binomial Distribution for two-state systems.
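If you want to check such counts numerically, the factorial formula can be compared against Python's built-in math.comb (the helper name omega is our own):

```python
# A numeric check of the "N choose Nh" formula.
import math

def omega(n, n_h):
    """Number of ways to get n_h heads in n tosses: n!/(n_h!(n-n_h)!)."""
    return math.factorial(n) // (math.factorial(n_h) * math.factorial(n - n_h))

print(omega(5, 2))                     # 10
print(omega(5, 2) == math.comb(5, 2))  # True: the standard library agrees
```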
There are many important applications of the binomial distribution that do not involve gambling. In the Introduction, we considered the behavior of particles in a two-cell box. In this chapter, we will examine a system of spins (pointing up and down) and the random walk (left and right steps). The latter problem is directly related to diffusion of atoms in gases and solids. We begin with the spin problem.

B. Multiple Spins
Magnetism is an important property of materials with countless applications. Just consider the uses in your immediate vicinity: solenoids, power transformers, watch motors, audio headphones and speakers, tape drives, disk drives, etc., etc.
Magnetism has its origin in the spin of an electron, with an associated magnetic moment. If the electron, with magnetic moment μ, is placed in a magnetic field, quantum mechanics tells us that it will be pointed either parallel or anti-parallel to the magnetic field. The lowest energy state is when the moment is pointing along the field (this actually corresponds to the electron spin pointing opposite the field, but we will adhere to the usual convention of using the term “spin up” meaning “moment up”):

[Figure: a moment in a field B, pointing along the field (“spin up”, Energy = −μB) or against it (“spin down”, Energy = +μB).]

Just how electron spins “add up” or “cancel out” in an atom, a molecule or a solid is the subject for advanced courses. Here we simply consider a collection of non-interacting spins, each with moment μ. As an example, imagine the following arrangement of 9 spins, labeled by their site:

[Figure: 9 spins in a magnetic field B at sites labeled 1–9; five moments point up and four point down.]


From now on we will omit the numbers and use the position of the spin to designate its site.
You may wonder why all the moments in the state shown above are not pointing along the magnetic field, which is the lowest energy state of the system. Well, this is how we happened to set up the particular system, which is isolated from other sources of energy.
In this system, two electrons of opposite spin can undergo a mutual spin flip,

but a single spin flip is not allowed because it would require adding or subtracting a magnetic energy of 2μB from the system.
A specific arrangement of moments such as the one shown above is known as a microstate of the 9-spin system: the orientation of each one of the 9 spins is specified. In contrast, the macrostate of the system is a specification of the total magnetic moment of the system:

M = Σ (±)μ    (5-1)

= 5μ – 4μ = 1μ for the situation depicted above. The total magnetic moment is typically what we would measure in an experiment.
Notice that there are many possible microstates for a given macrostate of the system. In particular, for M = 1μ, several other microstates are:

[Figure: other arrangements of the 9 spins, each with 5 up and 4 down.]

The total number of microstates for “5 spins up” and “4 spins down” is

9!/(5! 4!) = 126


In statistical language this combination is called “9 choose 5.” Each of these 126 microstates provides exactly the same total magnetic moment, or macrostate.
Not all values of the macroscopic parameter M have the same number of microstates. For example, there is only one microstate for M = 9μ:

[Figure: all 9 spins pointing up.]

In summary, we designate a macrostate by (N, Nup), where Nup designates the number of “up spins.” The number of down spins is then Ndown = N – Nup. All of the microstates in a given macrostate have the same total magnetic moment and the same total energy. The number of accessible microstates for a given macrostate (N, Nup) is:

Ω(N, Nup) = N!/(Nup! Ndown!) = N!/(Nup! (N − Nup)!)    (5-2)

The macrostate may be specified either by Nup or by the total magnetic moment M, which is proportional to the integer m = Nup – Ndown:

M ≡ mμ = (Nup – Ndown) μ = (2Nup – N) μ,    (5-3)

where m is the “spin excess.” The product mμ is the total moment along the field direction. The total energy of the spin system is U = – M·B = – MB = –mμB. You can easily show that Ω may also be written in terms of m:
Ω(m) = N! / [((N + m)/2)! ((N − m)/2)!]    (5-4)

This statistical distribution is known as the Binomial Distribution because of its 2-state nature (spin up/spin down). Continuing with our analysis of the 9-spin problem, we can make the following plot of the number of microstates for the ten macrostates:


Number of up spins: Nup =  0   1   2   3   4    5    6   7   8   9
Spin excess: m = M/μ =    −9  −7  −5  −3  −1    1    3   5   7   9
# microstates: Ω(m) =      1   9  36  84  126  126  84  36   9   1

(Notice that Nup ranges from 0 to N in steps of 1 and m ranges from −N to +N in steps of 2.)

We can now convert this distribution of microstates into a probability distribution by assuming the fundamental postulate of statistical mechanics, namely that every accessible microstate is equally likely. With this assumption, what is the probability of finding a total magnetic moment of M = 3μ? Write your answer here:
P(m) = P(3) =
Give it a try before reading on. (This calculation corresponds to the limit B → 0. We will consider non-zero fields in a later chapter.)
You probably computed this by dividing the number of microstates in the “m = 3” macrostate by the total number of microstates. As you can verify by adding up the numbers in the 9-spin problem, the total number of microstates for a binomial distribution turns out to be:

Σ(Nup) Ω(N, Nup) = Σ(m) Ω(m) = 2^N    (5-5)

Can you give a simple argument why the total number of microstates (including all the macrostates) of an N-spin problem is 2^N?
You should find that the probability of finding M = 3μ is 0.164. In other words, there is a 16.4% chance of finding exactly 6 spins up (a spin excess of m = 3). Now we can write a formula for P(m) in terms of Ω(m) and N:

P(m) = Ω(m)/2^N    (5-6)

In the discussion section you will be asked to repeat this exercise for a different number of spins. Convince yourself of the validity of the Ω(N, Nup) or Ω(m) functions by counting the possible microstates for each macrostate.


For large numbers of spins, it is very useful to approximate the Binomial distribution by a Gaussian distribution. As detailed in Appendix 3, the probability becomes:

P(m) = (2/πN)^1/2 exp(−m²/2N)

[Plot: the Gaussian P(m) versus m, with peak value (2/πN)^1/2 at m = 0, falling to 0.607(2/πN)^1/2 at m = N^1/2.]
Give it a try for the nine-spin problem. For example, I find Ω(3) = 2^N P(3) = 82.6, which is pretty close to the exact value given in the plot of Ω(m) above. The agreement is even better for larger N. Moreover, for N > 100 or so, your calculator will probably choke on the factorials, requiring you to use the Gaussian form.
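The comparison is easy to reproduce; here is a sketch (names ours) that checks the total count, the exact probability, and the Gaussian estimate for the 9-spin problem:

```python
# Checking the 9-spin numbers quoted above.
import math

N = 9

def omega(m):
    # Eq. (5-4): microstates with spin excess m (m has the same parity as N)
    return math.factorial(N) // (math.factorial((N + m) // 2) * math.factorial((N - m) // 2))

total = sum(omega(m) for m in range(-N, N + 1, 2))
p_exact = omega(3) / 2**N                                    # Eq. (5-6)
p_gauss = math.sqrt(2 / (math.pi * N)) * math.exp(-3**2 / (2 * N))
print(total)                     # 512 = 2**9, Eq. (5-5)
print(round(p_exact, 3))         # 0.164
print(round(2**N * p_gauss, 1))  # 82.6, vs. the exact Omega(3) = 84
```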

C. The Random Walk Problem—Diffusion of Particles
This classic problem is often stated in terms of a drunk staggering away from a lamppost.
He takes M steps of length ℓx, but each step is in a random direction along the sidewalk.
This one-dimensional random walk is mathematically equivalent to the spin problem.
The step size ℓx corresponds to the spin moment μ, and the numbers of right and left steps correspond to Nup and Ndown. The total displacement x = mℓx of the drunk from the lamppost after M steps corresponds to the total magnetic moment M = mμ for N spins. We change N to M here because in problems dealing with particles it is usual to represent the number of particles by N; M = the total number of steps.
The drunk is used as an example to get your attention in the potentially soporific field of statistics. In fact, there are many useful problems involving the mathematics of the random-walk problem, especially when we expand it to three dimensions. In this course, we will consider the diffusion of N molecules in the atmosphere and the diffusion of N electrons and impurity atoms in a semiconductor.
You can probably guess that, after M steps, the mean (i.e., average) displacement away from the lamppost at x = 0 is:

<x> = <Σ si> = 0,    (5-7)

where the step size si = ±ℓx and i ranges from 1 to M in the sum. This result doesn’t mean that the drunk will always end up back at the lamppost after M steps; but it does mean that, if he repeats this random process night after night, on average he will end up equally to the right or to the left of the lamppost. At least the dude isn’t driving.
For a random-walk process with a constant step size, the mean square displacement after M steps is

<x²> = Mℓx² ,    (5-8)

as proven in Appendix 3. As indicated in the last section, for a large number of trials (many drunks, or one drunk many times), each taking many steps, the binomial distribution is well approximated by a Gaussian function:

[Plot: N(x) versus x, a Gaussian of height No centered at x = 0, with standard deviation σd = <x²>^1/2 = M^1/2 ℓx.]
Here is how we interpret this graph: If we observed this behavior for N drunks, recording each of their displacements after M steps, then N(x) equals the number of drunks that ended up at position x.
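A short Monte Carlo run (the parameters below are our own choices) makes the same point: the mean square displacement after M constant-size steps comes out close to Mℓx², as Eq. (5-8) states:

```python
# Monte Carlo sketch of Eq. (5-8): for a constant step size, <x^2> = M * lx^2.
import random

random.seed(1)                 # fixed seed so the run is reproducible
M, lx, trials = 100, 1.0, 10000
mean_sq = 0.0
for _ in range(trials):
    x = sum(random.choice((-lx, lx)) for _ in range(M))   # one M-step walk
    mean_sq += x * x
mean_sq /= trials
print(mean_sq)  # close to M * lx**2 = 100
```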
This is quite similar to the problem of a molecule released at a time t = 0 and at a position (0,0,0) in a room filled with some other type of gas. The molecule travels with an average thermal speed v, scatters randomly from the other molecules, and at time t ends up at position (x,y,z). The thermal speed is given by Equipartition: ½mv² = (3/2)kT.

[Figure: a molecule’s zig-zag path from (0,0,0) to (x,y,z).]


We take ℓ to be the average distance traveled between collisions, or “mean free path.” Because ℓ² = ℓx² + ℓy² + ℓz² and all three directions are equivalent, we see that ℓ² = 3ℓx². The average x-projection of this ballistic path, ℓx = ℓ/3^1/2, is roughly equivalent to the drunk’s step size on the sidewalk, and similarly we assume that each collision randomizes the molecule’s direction.
The mean time between collisions is τ = ℓ/v. After an elapsed time t, the average number of collisions is M = t/τ. As shown in Appendix 3(D), the mean square displacement for random step sizes is <x²> = 2Mℓx², in contrast to Mℓx² for a constant step size (Eq. 5-8). Therefore,

<x²> = 2 (t/τ) ℓx² .    (5-9)

If we release N molecules at time t = 0 and position (0,0,0) and then plot their positions at a time t, we will find a Gaussian distribution for the number of molecules per unit distance,

N(x) = N P(x) = N (2πσd²)^−1/2 exp(−x²/2σd²).    (5-10)

This distribution has a mean square displacement that depends linearly on time,

σd² = <x²> = (2ℓx²/τ) t = (2ℓ²/3τ) t.    (5-11)

It is customary to define a diffusion constant as

D = (ℓ²/3τ) = vℓ/3 ,    (5-12)

using ℓ = vτ for the latter form. The root-mean-square (rms) displacement of the molecules at time t is

xrms(t) = σd(t) = (2Dt)^1/2.    (5-13)

This is the basic result of a diffusion process. The size of the cloud of diffusing particles increases as the square root of the time. The formula for the expanding distribution is,

N(x,t) = [N/(4πDt)^1/2] e^(−x²/4Dt) ,    (5-14)

which is plotted for several times after the molecules are released:


[Plot: N(x,t) versus x (from −10 to 10) at t = 0, 0.3, 1, 3 and 10, showing the Gaussian spreading and flattening as time increases.]
It is interesting and useful to notice that the expanding Gaussian distribution is a solution to the following differential equation:

dN/dt = D d²N/dx²    (5-15)

where N = N(x,t). You will be asked to prove this as an Exercise. This famous equation is known as the diffusion equation.
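One can check Eq. (5-15) numerically with finite differences; this is a sketch in which D and the sample point (x, t) are arbitrary choices of ours:

```python
# A finite-difference check that the spreading Gaussian of Eq. (5-14)
# satisfies dN/dt = D d2N/dx2. The distribution is normalized to one particle.
import math

D = 2.0

def n(x, t):
    # Eq. (5-14) with N = 1
    return math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

x, t, h = 0.7, 1.3, 1e-4
dn_dt = (n(x, t + h) - n(x, t - h)) / (2 * h)                # central difference in t
d2n_dx2 = (n(x + h, t) - 2 * n(x, t) + n(x - h, t)) / h**2   # central difference in x
print(abs(dn_dt - D * d2n_dx2) < 1e-6)  # True: the two sides agree
```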
The rms displacement along x of a particle in time t is (2Dt)^1/2. Diffusion in 3 dimensions yields a mean square displacement <r²> = <x²> + <y²> + <z²> = 6Dt, by symmetry. The rms diffusion radius after time t is

rrms(t) = (6Dt)^1/2 .    (5-16)

Notice that the mean volume occupied by the expanding cloud of diffusing molecules is

Vol ≈ (4π/3) rrms³ ≈ 4(6Dt)^3/2.    (5-17)

If N particles are deposited initially at r ≈ 0, then the average density of particles decreases as

nav(t) ≈ N/Vol ≈ N/[4(6Dt)^3/2]    (5-18)


The average number of particles remaining in a small volume dV at the origin after time t is approximately nav dV, assuming dV << Vol.
The equations (5-16 thru 5-18) apply only when the mean diffusion distance is much larger than the size of the initial region into which the particles were deposited. If the particles were deposited in a Gaussian distribution with standard deviation σo, then the expanding distribution in one dimension has the mean square deviation σd² = σo² + 2Dt, which is also valid at early times. Also, these equations are valid only when σd and rrms are much larger than ℓ.
Sometimes you will want to know the typical time it takes for particles to diffuse a given distance. First find the diffusion constant in terms of the mean free path and thermal velocity, then use equation (5-13) or (5-16) to solve for a typical time t for a particle to arrive at distance xrms(t) or rrms(t). There are a lot of practical problems involving diffusion, such as the migration of impurities in a semiconductor chip, or the diffusion of carriers in a semiconductor. The few exercises given below will give you some experience with these important technological problems.
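As a sketch of such an estimate, using the N2 values from the table in Exercise 7 (v = 500 m/s, mean free path 0.1 μm); the 1 cm target distance is our own choice:

```python
# Typical time for a molecule to diffuse a given distance.
v, mfp = 500.0, 1.0e-7   # thermal speed (m/s) and mean free path (m)
D = v * mfp / 3.0        # Eq. (5-12): about 1.67e-5 m^2/s
r = 0.01                 # target distance: 1 cm
t = r**2 / (6.0 * D)     # invert r_rms = (6 D t)**(1/2), Eq. (5-16)
print(round(t, 2))       # about 1.0 s to diffuse 1 cm
```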

D. Heat Conduction
The second process that allows a system to approach equilibrium is thermal conduction. This process has similarities to particle propagation (or “particle conduction”) in the presence of an external force field, such as for electrical conduction of electrons in a semiconductor or metal. In solids at normal temperatures the principal entities conducting heat are packets of vibrational energy, also known as phonons. Both particle conduction and heat conduction are based on the random-walk process and therefore have similar mathematical descriptions.
Heat flows from a hot region to a cold region if there is a temperature gradient. A temperature gradient in the x direction given by dT/dx will cause a heat current density
(energy transfer per unit area per unit time) designated by Jx. In fact, the rate of energy flow is proportional to the temperature gradient, as given by the heat conduction law:
Jx = −κ dT/dx    (5-19)

where κ is known as the thermal conductivity and depends on the material being considered and even the temperature. Can you see why a positive gradient in T gives a negative Jx? (Answer: Heat flows from high T to low T.)


For problems where the change in temperature across the material is small compared to the absolute temperature, the magnitude of the heat-current density (Eq. (5-19)) becomes:

J = κ ΔT/Δx,    (5-20)

where we drop the sign, knowing which way heat will flow. Let’s calculate the heat leak for a “thermopane” window of area A = 1.5 m² and an air-gap thickness of 1 cm. Assume that there is a temperature difference of 20 K between the outside and inside and that the thermal conductivity of air is κair = 0.03 W/m·K. With these parameters, we find,

J × A = (0.03 W/m·K)(20 K)(1.5 m²)/(0.01 m) = 90 watts.
This is approximately the heat loss we assumed when we calculated the power required to drive an ideal heat pump (Chapter 4).
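For completeness, the same arithmetic as a short script:

```python
# The thermopane estimate above, Eq. (5-20): heat flow = kappa * dT * A / dx.
kappa = 0.03   # W/m.K, thermal conductivity of air
dT = 20.0      # K, temperature difference
A = 1.5        # m^2, window area
dx = 0.01      # m, 1 cm air gap
heat_flow = kappa * dT * A / dx
print(round(heat_flow))  # 90 watts
```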


Exercises
1) Flipping coins has the same binomial description as the random-walk and spin-1/2 systems. a) What is the probability of getting exactly 5 heads in 8 tosses? b) What is the probability of getting exactly 400 heads in 800 tosses? (Hint: Use the Gaussian approximation.)

2) Show that

Ω(m) = N! / [((N + m)/2)! ((N − m)/2)!]

is consistent with Ω(N, Nup) = N! / (Nup! Ndown!) .

3) Compare the exact relation P(m) = Ω(m)/2^N, using the Ω(m) above, with the Gaussian approximation, P(m) = (2/πN)^1/2 exp(−m²/2N), for a few values of m, with N = 40:

m | exact P(m) | approx. P(m)
0 |            |
4 |            |
8 |            |

4) Verify that Eq. (5-14) is a solution to the diffusion equation, Eq. (5-15).


5) A few atoms of Argon (atomic weight = 40) are released from a storage tank into a room full of air at 300 K. Assume that the Ar atoms have an average energy of
(3/2)kT and a mean free path of 0.1 micrometer. Determine the diffusion constant of the Ar gas and the rms displacement of the atoms from their point of origin at 1 second and at 1 hour after their release. Plot a few points of rrms(t) to get a feeling for the t1/2 function.

[Plot axes for rrms (cm), from 0.0 to 2.5, versus t (s), from 0 to 6.]

6) Researchers measuring statistical events often determine the full-width at half-maximum (FWHM) of a Gaussian distribution. Calculate the relationship between the FWHM and σd.

[Figure: the Gaussian e^(−x²/2σd²), height 1 at x = 0, with the FWHM marked at height 0.5.]


7) When light strikes a pure semiconductor such as silicon, electrons are promoted from a nearly filled valence band to a nearly empty conduction band. Conduction electrons produced by the light near the crystal surface diffuse as an ideal gas at 300 K and randomly drop back into the valence band with an average lifetime of τ0 ≈ 1 μs. a) Complete the table below to compare the electron gas with N2 gas in air at 300 K, assuming a collision time (with phonons) of τ ≈ 0.01 ns.
b) Calculate the average depth an electron diffuses from the crystal surface during its lifetime of 1 μs. (Hint: Use the Equipartition Theorem for average velocity.)

[Figure: light strikes a crystal; photoexcited electrons diffuse a depth X below the surface.]

             | v       | τ       | ℓ      | D
N2 molecules | 500 m/s | 0.2 ns  | 0.1 μm | 1.67 × 10⁻⁵ m²/s
electron gas |         | 0.01 ns |        |

A snapshot of a diffusing gas in a semiconductor at T = 2 K is reproduced below. (D.P. Trauernicht and J.P. Wolfe, Physics Department, University of Illinois)


CHAPTER 6

Statistical Processes II: Entropy and the Second Law

A. Meaning of Equilibrium
The fundamental postulate of statistical mechanics is that in equilibrium all accessible microstates of an isolated system are equally likely. A basic problem of thermodynamics is to determine how volume, energy, and particles are distributed in various physical systems.
Consider the example given in Section C of the Introduction. N particles are contained in a box that is divided into two sections, like the picture above with NL particles on the left and NR = N – NL particles on the right. The box is isolated from its surroundings, and the particles move freely between the two sides. What is the most probable value of NL? Statistical mechanics says that to solve this problem, we need to compute the number of microstates ⍀ for each macrostate defined by a specific NL.
The probability of observing a given number NL for a system of particles in equilibrium is proportional to ⍀(NL). One of the most fascinating and important results of statistical mechanics is that for large numbers of particles, the probability distribution becomes extremely sharp, as depicted in the following figure:


[Plot: the number of microstates Ω(NL) versus NL, from 0 to N, sharply peaked at the equilibrium value.]

Although equilibrium corresponds to many possible values of NL, the distribution is so sharp that only those NL very near the peak in the distribution are likely to be observed.
Therefore, we can usually say with great precision that there is only one “equilibrium value” of NL. (Very precise experiments on some systems are able to measure the fluctuations around the mean value, i.e., the width of the distribution.) For large N, equilibrium values of macroscopic parameters, such as gas pressure and total magnetic moment, are extremely well-defined.
In the last chapter, we considered the binomial distribution of electron spins. Let us now consider an ideal gas of N particles, confined to a box of volume V, which in general is not described by a binomial distribution. We eventually need to know how the number of microstates depends on volume V, particle number N, and energy U. In this chapter we will examine the dependence of Ω on volume and particle number. In Chapter 7 we will see how Ω depends on energy.

B. Objects in Multiple Bins
To introduce the counting statistics, we consider a simple system of 4 bins that are occupied by objects (or particles) labeled A and B:
[Figure: 4 bins; one bin holds particle B and another holds particle A.]

We allow the two objects to occupy the same bin or different bins; that is, we allow
“unlimited occupancy.”
You should find a systematic way of determining how many different ways there are of arranging 2 distinguishable particles in 4 bins. Here is some working space:


You will find that there are 16 possibilities. In fact, you should be able to reason that for the case of M bins and N particles, the total number of arrangements (i.e., the number of microstates) is simply

Ω = M^N.    (6-1)    [N distinguishable particles, M unlimited-occupancy bins]

In real life, microscopic particles of a given type are indistinguishable. Every electron is identical* to every other electron. For a given isotope, every sodium atom is exactly like every other sodium atom. Every nitrogen molecule … and so on. Let’s see how our statistical counting would be different if the particles were identical:
[Figure: 4 bins containing two identical particles, both labeled A.]

How many distinct microstates can you find with these two identical particles? Again you may assume unlimited occupancy of the bins. Here is some working space:

* For atoms and molecules we use the terms “identical” and “indistinguishable” interchangeably.

You will find that the number of microstates is reduced to Ω = 10. The general formula for this case is:

Ω = (N + M − 1)! / [(M − 1)! N!]    (6-2)    [N identical particles, M unlimited-occupancy bins]
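Both counts are small enough to verify by brute force; this sketch (ours) enumerates the microstates directly:

```python
# Brute-force check of the counts quoted above: N = 2 particles in
# M = 4 unlimited-occupancy bins.
from itertools import combinations_with_replacement, product
from math import factorial

M, N = 4, 2
distinct = len(list(product(range(M), repeat=N)))                  # ordered: who is in which bin matters
identical = len(list(combinations_with_replacement(range(M), N)))  # only the occupancies matter
print(distinct, identical)  # 16 10
assert distinct == M**N                                            # Eq. (6-1)
assert identical == factorial(N + M - 1) // (factorial(M - 1) * factorial(N))  # Eq. (6-2)
```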

If the number of bins is large and much greater than the number of particles (M >> N), this result simplifies to (see argument in Appendix 5):

Ω ≈ M^N/N!    (6-3)    [N identical particles, M unlimited-occupancy bins, N << M (low density)]
In Appendix 5 we consider also the situation where a bin can hold only one object at a time (single occupancy). A summary of results for Ω follows:

                | Unlimited Occupancy          | Single Occupancy
Distinguishable | M^N                          | M!/(M − N)!
Identical       | (N + M − 1)!/[(M − 1)! N!]   | M!/[(M − N)! N!]

In this course we will be concerned primarily with the low-density limit (N << M). In this limit, the formulas for Ω are:

Ω = M^N       for distinguishable particles    (6-4)
Ω = M^N/N!    for indistinguishable particles

C. Application to a Gas of Particles
Imagine a gas of N particles in a box of volume V. What is the number of microstates?
We ignore the kinetic energy of the particles for the moment in order to determine just the volume dependence of Ω.
Each particle in the gas can be anywhere in the box,

[Figure: a box of volume V containing N particles]


so we imagine the box divided up into equal cells of volume δV. The total number of cells in the box is

M = V/δV.    (6-5)

Assume for this example that there are N distinguishable particles. The number of microstates is:

Ω = M^N = (V/δV)^N .    (6-6)

If the box has a volume of 1 liter (1000 cm³) and the size of a cell is chosen to be 1 cm³, then V/δV = 1000. If there are 10 particles in the box, the total number of microstates is:

Ω = (1000)^10 = 10^30.

Astronomical! The basic fact is that the number of microstates grows extremely rapidly (exponentially) with the number of particles, and also very rapidly with V. Just double V and Ω changes by a factor of 2^N = 2^10 = 1024. (If the particles were indistinguishable, then we would need to divide by a factor of 10! = 3,628,800. Ω becomes 2.76 × 10^23, which is still immense.)
Those numbers are for 10 particles in 1000 cells. Imagine the number of microstates for 10^20 particles in 10^23 cells (still the low-density limit):

M^N = (10^23)^(10^20) = 10^(23×10^20)    (6-7)

which is an inconceivably large number. Even if we take the logarithm of this number, we end up with a huge number:

log10 Ω = 23 × 10^20    (6-8)

Obviously dealing with the logarithm of the number of microstates is a much more manageable task than dealing with Ω itself. Generally we will use the natural logarithm (base e = 2.72). It is the “natural” choice because d(ln x)/dx = 1/x.


An interesting consequence of taking the logarithm is that changing the number of hypothetical cells in the box doesn’t have a large effect on our counting. For example, if we take 100 times as many cells, then log10 Ω = 25 × 10^20, which is only a 9% increase. The properties that we derive from counting microstates do not usually depend significantly on the number of cells chosen. In the next section we will see that the equilibrium condition depends only on dΩ/dV. The δV term (cell size) drops out.

D. Volume Exchange and Entropy
Consider the following basic problem. We have two ideal gas systems that can exchange volumes by moving a partition. What is the most likely final position of the partition (i.e., its equilibrium value)?

[Figure: two chambers separated by a movable partition, with N1 particles in volume V1 and N2 particles in volume V2.]

The numbers of particles N1 and N2 are fixed, but V1 and V2 are allowed to vary such that the total volume Vtot = V1 + V2 is constant. Because V2 = Vtot – V1 we specify the macrostate of this system by V1.
In reality, the microstates of an ideal gas depend on V, N, and U. In this chapter, we consider only how the microstates depend on V and N, leaving the energy dependence of Ω to Chapter 7. The purpose of this simplification is to build some intuition about entropy and equilibrium (i.e., entropy maximization) with simpler math.
Denote Ω1 and Ω2 as the number of microstates in the left and right chamber, respectively. For each microstate in the left volume, there are Ω2 possible microstates in the right volume, so the total number of microstates for the combined system is the product,

Ω = Ω1 · Ω2 .    (6-9)

Taking the natural logarithm of Ω, we have,

ln(Ω) = ln(Ω1) + ln(Ω2) .    (6-10)

How does the total number of microstates depend on the variable V1? To keep the math simple, consider distinguishable particles first:

Ω1 = M1^N1   with   M1 = (V1/δV) = the number of cells in V1
Ω2 = M2^N2   with   M2 = (V2/δV) = the number of cells in V2    (6-11)

Therefore,

Ω = V1^N1 V2^N2 / (δV)^(N1+N2) = (constant) V1^N1 (Vtot – V1)^N2    (6-12)

or,

ln(Ω) = N1 ln(V1) + N2 ln(Vtot – V1) + constant.    (6-13)

The maximum in the number of total microstates is given by

dΩ/dV1 = 0,   or equivalently   d ln(Ω)/dV1 = 0,    (6-14)

because log(Ω) increases monotonically with Ω. Because d(lnV)/dV = 1/V, using the logarithm form gives us the result immediately:

N1/V1 − N2/V2 = 0   or   N1/V1 = N2/V2    (6-15)

This result is just what we should have expected. The most probable state is where the density of particles on the left is equal to the density of particles on the right. Or, to answer our original question, the most probable ratio of volumes is

V1/V2 = N1/N2 .    (6-16)
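The maximization can also be done numerically; this sketch (with N1, N2 and Vtot chosen arbitrarily by us) scans ln Ω and finds the peak where V1/V2 = N1/N2:

```python
# Numeric sketch of the volume-exchange argument: ln(Omega) from Eq. (6-13)
# peaks where the densities match, Eq. (6-16).
import math

N1, N2, Vtot = 3.0, 1.0, 8.0

def ln_omega(v1):
    # Eq. (6-13), dropping the constant term
    return N1 * math.log(v1) + N2 * math.log(Vtot - v1)

# brute-force scan for the maximum on a fine grid
best_val, best_v1 = max((ln_omega(0.001 * i), 0.001 * i) for i in range(1, 8000))
print(round(best_v1, 2))  # 6.0, i.e. V1/V2 = 6/2 = 3 = N1/N2
```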

The same result holds for identical particles, where Ωi = Mi^Ni/Ni! . The added term in (6-12) doesn’t contain V1; therefore, d(lnΩ)/dV1 is unaffected. Notice that finding the most likely macrostate is equivalent to maximizing the sum of logarithms:

ln(Ω) = ln(Ω1) + ln(Ω2),    (6-17)

which is a sum of a property of system 1 and a property of system 2. Recalling our earlier discussion in the Introduction and Chapter 2, these were just the properties we postulated for entropy, S: it is an additive function of the two systems, and the maximum of S with respect to the free variable (here V1) determines the most likely macrostate. Thus, our definition of entropy from the statistical mechanics point of view is:

S = k ln(Ω) ,    (6-18)

where k is a constant (the Boltzmann constant, 1.381 × 10⁻²³ J/K) that defines the absolute temperature scale. Temperature comes into play when we consider exchange of energy between the two systems, as discussed in Chapters 2(C) and 7(B). For our present volume-exchange problem, we again see the additive property of this state function:

S = S1 + S2 .    (6-19)

Physics 213 Elements of Thermal Physics

In statistical problems it is often convenient to drop the Boltzmann constant and define the “dimensionless entropy,” σ = ln(Ω). Its relation to the usual “thermodynamic entropy” is S = kσ, and, of course, for the two-system problem, σ = σ1 + σ2.
Because the number of accessible states Ω depends on volume (and energy) roughly to the Nth power, the magnitude of σ = ln(Ω) is generally close to the number of particles, as illustrated in Section F. This is a useful fact to remember in your dealings with entropy.
A numerical example of volume-exchange, shown below, involves six cells and a movable partition blocking the transfer of three distinguishable particles. Allowing for multiple occupancy of the bins and counting states, you can easily show that σ = σ1 + σ2 = 3.22, 3.47, 3.30, 2.77 and 1.61 for the 5 possible positions of the partition. A table for your results and a plot of the total entropy are given below:

[Figure: a box of six cells of size δV, with particles A and C in volume V1 and particle B in volume V2, separated by a partition that can sit at positions 1 through 5.]

[Table to fill in: for each partition position, the values Ω1, Ω2, Ω = Ω1Ω2, and σ = ln Ω. Example row: Ω1 = 5² = 25, Ω2 = 1, Ω = 25, σ = 3.22.]

[Plot to fill in: σ versus V1/δV from 1 to 5, with a maximum at the most likely value of V1 if the partition were allowed to move freely.]
On the next page we make a similar plot with the conventional entropy S = kσ, showing how S1 and S2 add up to the total entropy S for a fixed V1. For a freely moving partition, the probability of observing a particular value of V1 is proportional to the product Ω1Ω2. The most probable V1 is where Ω1Ω2 and S(V1) are a maximum. For two systems with many cells and many particles (e.g., two real gases), the probability function is sharply peaked, and it is reasonable to call V1 at the peak the “equilibrium value” of V1.


[Figure: S1 = kN1 ln(V1/δV), S2 = kN2 ln((Vtot – V1)/δV), and their sum S = S1 + S2 plotted versus V1 between δV and Vtot – δV; the maximum of S marks the most probable configuration.]

For larger systems (bigger N’s and V’s), Ω and ln(Ω) are much more sharply peaked:

[Figure: Ω1Ω2 (∝ probability of finding V1) plotted versus V1, a sharply peaked function whose peak defines the “equilibrium value” of V1.]

In summary, we have defined the conventional entropy S = kσ from a microscopic viewpoint: entropy is a constant times the logarithm of the number of accessible states. The dependence of entropy on V for an ideal gas of N particles has the form:

S = k ln(Ω) = Nk ln(V/δV)
  = Nk ln(V) – Nk ln(δV)

(6-20)

The value of entropy depends on the choice of cell size δV; however, the derivative of entropy with respect to volume (used above to determine the most probable configuration) does not. That is, dS/dV = Nk/V. The derivative is well defined when the partition is allowed to move in infinitesimal increments; i.e., real life.
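A quick numerical check of this δV-independence — a sketch with arbitrary units and particle number:

```python
from math import log

# Dimensionless entropy of N distinguishable particles in volume V,
# sigma = N ln(V/dV). The value depends on the cell size dV, but
# entropy *differences* (and hence dS/dV) do not.
N = 100

def sigma(V, dV):
    return N * log(V / dV)

d1 = sigma(2.0, 1e-6) - sigma(1.0, 1e-6)   # double V with cell size 1e-6
d2 = sigma(2.0, 1e-9) - sigma(1.0, 1e-9)   # same doubling, smaller cells
print(d1, d2)   # both equal N ln 2
```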


E. Indistinguishable Particles
In the Introduction, we considered a system of N distinguishable particles labeled A, B,
C, D, etc., that are able to move between two cells:

[Figure: seven distinguishable particles, A through G, distributed between two cells.]

At any one time, the number of particles on the left is NL and the number of particles on the right is NR. Because the total number of particles is fixed (NL + NR = N), the macrostate of this system is labeled by:
(N, NL)

(6-21)

We have already solved this problem mathematically. Because each particle can exist in only one of two cells, the resulting distribution is a Binomial Distribution. The number of microstates for the macrostate (N, NL) is written as:
Ω(N, NL) = N! / (NL!(N – NL)!)

(6-22)

To generalize this problem to a real box with N indistinguishable particles, the box (with volume 2V) must be divided up into many cells. As shown in the last section,
Ω(N, NL) ∝ (V^NL/NL!)·(V^NR/NR!) = V^N/(NL! NR!)

(6-23)

with NL + NR = N. It is quite interesting that the functional form 1/NL!NR! for the many-cell problem with indistinguishable particles is the same as that of the binomial distribution for the two-cell problem with distinguishable particles, illustrated in Exercise
1 of the Introduction.
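The binomial count in Eq. (6-22) is easy to explore numerically; a small sketch (N = 20 chosen arbitrarily):

```python
from math import comb

# Omega(N, NL) = N!/(NL!(N - NL)!), Eq. (6-22), for N distinguishable
# particles split between two cells.
N = 20
dist = [comb(N, NL) for NL in range(N + 1)]
print(dist[N // 2])   # the even split NL = N/2 has the most microstates
print(sum(dist))      # summed over all macrostates: 2**N total microstates
```

The even split accounts for 184,756 of the 2²⁰ = 1,048,576 microstates, and the distribution falls off rapidly away from NL = N/2.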

F. Maximum Entropy in Equilibrium
Using this example of particle exchange, we can now quantitatively understand the concepts of equilibrium and irreversibility in terms of entropy maximization, i.e., the
Second Law of Thermodynamics. In the classical picture, particles occupy microscopic cells of volume δV in the box. To simplify the formulas, let’s use microliter units for V and set δV = 1 microliter. Also, for this statistical problem, let’s use the dimensionless form of entropy σ = S/k.


Initially prepare the system by blocking the passage of particles between the two sides
(each with volume V) and then loading all N particles into the left half of the box. From
Eqs. (6-4) and (6-5), the number of accessible states is Ω = V^N/N!, giving the entropy of this initial configuration as:

σ = ln(Ω) = N ln V – ln N!

Now open a passageway between the two compartments. Initially, there is a nonequilibrium distribution. As the particles scatter and diffuse to the right side they will visit more and more microstates of the unconstrained system. The second law of thermodynamics tells us that eventually the system will reach a situation where each microstate of the total system is equally likely to be occupied.
In this case, Ω = (2V)^N/N! and the entropy is,

σ = N ln(2) + N ln V – ln N!

The increase in entropy by opening the passageway is

Δσ = N ln(2) = 0.69 × 10²⁰

for N = 10²⁰ particles.

The key idea is that when a macroscopic constraint is removed (such as a partition in the container), the entropy of the system generally increases dramatically. Replacing the partition here does not return the entropy to its former value. This is the essence of irreversibility in a thermodynamic system. (See also Exercise 4.)
In calculating the entropy and equilibrium conditions of systems with large numbers, a very useful approximation comes into play: ln(N!) ≈ N ln(N) – N, which is known as Stirling’s approximation. This formula is discussed at the end of Appendix 5.
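A quick sketch of how good the approximation is (using lgamma, since ln(N!) = lgamma(N + 1)):

```python
from math import lgamma, log

# Compare ln(N!) with Stirling's approximation N ln N - N.
for N in (10, 100, 10000):
    exact = lgamma(N + 1)           # ln(N!)
    approx = N * log(N) - N
    print(N, exact, approx, (exact - approx) / exact)
```

The relative error falls from roughly 14% at N = 10 to below 10⁻⁴ at N = 10⁴, which is why the approximation is safe for thermodynamic N ~ 10²⁰.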
Finally, recall that entropy has three natural variables: U, N, and V. In this chapter we considered how entropy depends on V and N. In Chapter 2 we saw that thermal equilibrium (maximum S) involves temperatures, defined by 1/T = (∂S/∂U)V,N. Just as thermal equilibrium between two systems implies equal temperatures, mechanical equilibrium between two systems means equal pressures. p1 = p2 must correspond to:

(∂S1/∂V1) = (∂S2/∂V2) ,


where the U’s and N’s are held constant. Because S has units J/K and pressure has units J/m³, the general definition of pressure is: p = T(∂S/∂V)U,N, which is derived at the beginning of Chapter 10. You can test out this definition for the ideal gas by using the volume dependence of entropy that we derived in this chapter, S = k ln(V^N) + constants. Do you recover the ideal gas law?


Exercises
1) Using Stirling’s approximation, compute log10 Ω for 10²⁰ identical particles in 10²³ cells, and compare to Eq. (6-8).

2) A box with 4 bins contains four particles separated by a partition shown as the dark line. The particles on either side of the partition can only occupy bins on that side of the partition. That is, the particles cannot cross the partition. The bins allow multiple occupancies. As shown, the particles are distinguishable.
One possible microstate:

[Figure: four bins holding particles A, B, C to the left of the central partition and D to the right; the partition can sit at positions 1, 2, or 3.]

a) With the partition fixed in the central position, what is the total dimensionless entropy σ of the system?

b) At which fixed position of the partition is the total entropy a maximum?

c) If the constraint of fixed partition is removed and the partition is allowed to move among all three positions, what is the total entropy of the system?

3) Consider N = 1000 identical particles distributed in a box of volume V and assume that the number of cells in the box is much larger than N. By what factor does the total number of microstates Ω increase if the volume of the box is increased by just 1% (a tiny amount)? Assume that the cell size δV does not change.


4) Entropy of Mixing: Many practical problems deal with the mixing of two materials.
It is not surprising that if you mix the contents of two bottles of different gases, or two beakers of different solutions, the total entropy will increase. Let’s see how much.
Start with two separated helium and argon gases, each with N atoms and volume V:

SHe =
SAr =

Calculate the change in entropy if the partition is removed. Be sure to assume that atoms of a given type are identical, and use Stirling’s Approximation, ln N! ≈ N ln N – N.
Sinitial =

Sfinal =

ΔS = entropy of mixing =
(note: many terms cancel)

Now calculate the change in entropy if the two gases were initially the same type
(say He):
ΔS =

What statement involving the Second Law could you say about these two experiments? If all He atoms were not strictly identical (no N! term), would you get the same result?


CHAPTER 7

Energy Exchange

A. Model System for Exchanging Energy
The ideas of the previous two chapters will carry over nicely into the present problem: How do two systems come into thermal equilibrium with each other? In exchanging volume and particles, it was useful to consider discrete quantities of volume δV and discrete changes in particle number δN = 1. The approach is no different with the exchange of energy.
Again to simplify the mathematics, we would like to “quantize” the problem by choosing indivisible packets of energy that are swapped between systems. In fact, matter on the microscopic scale is quantized into discrete energy levels and therefore undergoes discrete changes in energy. However, for most systems the energy levels are not uniformly spaced. Nevertheless, let us imagine that an object in our thermodynamic system has a discrete set of equally spaced energy levels. (Those of you who have played with numerical calculations on a computer should have no trouble imagining this.) Adding or subtracting energy from this system amounts to giving or taking an integer number of discrete packets δE of energy, which for brevity we write as ε. Here is the energy level diagram:


[Figure: an energy-level diagram with equally spaced levels En = nε above E = 0.

Convention in this book:
E = single-particle energy
U = energy of a many-particle system]
Assume that the energy levels continue this way to infinity. Because this is the energy diagram for a single particle, we use the letter E for energy. When we discuss a system of more than one particle, we will revert to the letter U for energy.
The particles in an ideal gas don’t have equally spaced quantum levels like the one above. In fact, a particle in a (1-dimensional) box has energies given by E = Cn2, with n = 1,2,3… The outer electrons in an atom are described by energy levels like that of hydrogen, E = –(13.6 eV)/n2. There is one basic system, however, that does have equally spaced energy levels on the microscopic scale. It is the simple harmonic oscillator:

[Figure: a mass m on a spring of stiffness κ.
E = nε, with n = 0, 1, 2, 3, …; ε = hf (h = Planck’s constant); frequency f = (κ/m)^(1/2)/2π.]

In reality, there is a “zero-point energy” of ½ hf added to this scale that we will ignore.
The harmonic oscillator problem has wide applications in the vibrations of molecules, solids, and electromagnetic waves, as we shall see.
From here on we will refer to a system with equally spaced levels as an “oscillator.”
Consider two oscillators* in thermal contact with each other, meaning that they can exchange energy in units of ε. So, for example, an exchange of one packet of energy may look like:

* The oscillators are “identical,” but they are “distinguishable” by their labeled position (A, B,…), as in the earlier problem with spins.

Energy Exchange Chapter 7

[Figure: energy-level diagrams for oscillators A and B before and after the exchange of one energy packet ε.]

Note that there are a total of 7 energy quanta in this system, U = 7ε. In general we will write U = qε, where q = # quanta = # energy packets. Now you determine all the possible microstates of the two-oscillator system assuming that the total energy is U = EA + EB = 5ε (five energy quanta).

[Diagram to fill in: six pairs of level diagrams for oscillators A and B, one for each microstate with U = 5ε.]
Yes, there are 6 microstates, so Ω = 6. Now work out the microstates for three coupled oscillators with a total energy U = EA + EB + EC = 3ε (three quanta). Here is the diagram for 3 oscillators:

[Figure: level diagrams for oscillators A, B, and C, with EA = nε, EB = iε, EC = jε, where n, i, j are integers.]

I suggest that you group the possibilities according to the energy En = nε in oscillator A:

[Table to fill in: the number of microstates for each value of En = 0, ε, 2ε, 3ε, and the total Ω = ____ for 3 oscillators with three energy quanta (U = 3ε).]

Continue this type of counting and fill out the following graph for 3 oscillators with U = 0, ε, 2ε, 3ε, and 4ε. (Note: Ω(0) = 1.)

[Graph to fill in: Ω (vertical axis, 0 to 20) versus q = U/ε = # quanta (horizontal axis, 0 to 4).]


To repeat, we use the symbol q = U/ε for the total number of energy packets, or quanta, in the system. This problem is like the bin problem we saw earlier, although here N is the # oscillators (or “bins”). In this case we want to know how many ways there are to arrange q identical packets of energy among N oscillators. The answer is:

Ω = (q + N – 1)! / ((N – 1)! q!)

(7-1)

If there are many more packets of energy than oscillators (q >> N), known as the “classical limit,” the number of microstates is approximately given by (see Appendix 5):

Ω ≈ q^(N–1)/(N – 1)! ∝ q^(N–1) ∝ U^(N–1).

(7-2)

In particular, as the energy U of the three-oscillator system increases, Ω approaches the quadratic form, Ω ∝ U². This result applies to a single three-dimensional oscillator (illustrated at the right) with energy E; i.e., Ω ∝ E². Likewise, for a collection of N three-dimensional oscillators with total energy U the number of microstates has an energy dependence Ω ∝ U^(3N–1). Clearly the number of microstates increases very rapidly with both U and N.
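A sketch comparing the exact count (7-1) with the classical-limit form (7-2) for N = 3 oscillators:

```python
from math import comb, factorial

# Exact Omega = (q+N-1)!/((N-1)! q!) versus the q >> N estimate
# q**(N-1)/(N-1)!, here for N = 3 oscillators.
N = 3
for q in (10, 100, 1000):
    exact = comb(q + N - 1, q)
    approx = q**(N - 1) / factorial(N - 1)   # ~ q**2 / 2 for N = 3
    print(q, exact, approx, approx / exact)
```

The ratio climbs from about 0.76 at q = 10 to 0.997 at q = 1000, confirming that the approximation holds only for q >> N.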

Finally, using your work above, plot the probability Pn of observing oscillator A in a state with energy En for various n:

[Graph to fill in: Pn (vertical axis, 0 to 1) versus En = 0, ε, 2ε, 3ε, for 3 oscillators and U = 3ε.]

The lesson in this last exercise is that if you consider an individual system as a part of a larger system, the probability of finding the small system in a state with energy En decreases with increasing En. This is a very basic principle. Can you put in words why it is true? Look at the number of microstates of the total system as a function of En.


B. Thermal Equilibrium and Absolute Temperature
We now consider the energy exchange problem in more general terms. We denote the energy, the number of microstates, and the entropy of two systems in thermal contact as follows:
[Figure: two systems in thermal contact, with energies U1 and U2, numbers of microstates Ω1(U1) and Ω2(U2), and entropies S1 = k ln Ω1 and S2 = k ln Ω2. U1 + U2 = Utot (total energy of the isolated system).]

How is energy partitioned between U1 and U2? The equilibrium (most probable) value of the energy U1 is found by maximizing the total number of microstates of the system.
Equivalently we can maximize the total entropy,
S = S1 + S2 .

(7-3)

The math is similar to that for volume exchange,

dS1/dU1 + dS2/dU1 = 0,   with dU1 = –dU2,

(7-4)

so that dS1/dU1 – dS2/dU2 = 0. The resulting condition for equilibrium is,

dS1/dU1 = dS2/dU2

(7-5)

The left side of this equation is a property of system 1 and the right side is a property of system 2. We know, however, from macroscopic thermodynamics that the basic condition of thermal equilibrium is that the temperatures of the two systems are equal.
Consequently the slope of the entropy curve, dS/dU, must be directly related to our concept of temperature. The absolute temperature is defined by the relation,
1/T = dS/dU

(7-6)


In general, S = S(U, N, V), so T is defined by a partial derivative,

1/T = (∂S/∂U)N,V

(7-7)

meaning that the derivative is taken while holding N and V constant. Now we show that k in S = k ln(Ω) is the Boltzmann constant in the Equipartition Theorem.

C. Equipartition Revisited
Let us see if this definition of temperature is consistent with what we know about a large collection of harmonic oscillators. Recall that the number of microstates for N oscillators with total energy U = qε is a rapidly increasing function of N and U. We consider the limit where the average oscillator has much more energy than the quantum of energy ε, i.e., q >> N. In this case,

Ω = (constant) · U^(N–1)

(7-8)

as discussed with Equation (7-2). Therefore, the entropy for this system is,

S = k ln(Ω) = (N – 1)k ln(U) + const.

(7-9)

Taking the derivative of the entropy with respect to energy, we see that,

dS/dU = (N – 1)k/U

(7-10)

For a large number of oscillators, N – 1 may be replaced by N. The entropy and its derivative are plotted below:

[Figure: S = Nk ln U and its derivative dS/dU = Nk/U, each plotted versus U.]

A graph of kT reveals an interesting result for this limit of U >> Nε (i.e., q >> N),


[Graph: kT = k(dS/dU)^(–1) = U/N plotted versus U, a straight line.]
In this limit, kT is simply the average energy per oscillator, U/N. Therefore, the total energy of N harmonic oscillators is related to T by,
U = NkT.

(7-11)

This is the average energy of N oscillators postulated by the Equipartition Theorem: recall that the energy of the harmonic oscillator has two quadratic terms, each with
½ kT thermal energy. So for N one-dimensional oscillators,
U = N⟨½mv² + ½κu²⟩ = NkT.

(7-12)

The constant k in the statistical definition of temperature, Equation (7-6), is indeed the
Boltzmann constant introduced with the Equipartition Theorem.
[A technical detail: In most cases the difference between the most probable energy U and the average energy <U> is negligible. However, for small N our analysis yields U = (N–1)kT, which is not the Equipartition result. For N = 1, the most probable single-oscillator state actually occurs at E = 0. (See your graph at the end of Section A.) The average energy, however, is greater than zero because states with E > 0 have non-zero probability of being occupied. In an exercise of Chapter 8, you can show that <E> = kT for the single oscillator. Therefore, <U> = NkT for all values of N, assuming that kT >> level spacing ε.]


D. Why Energy Flows from Hot to Cold
The statistical definition of the temperature of a system can be represented graphically as follows:
[Figure: S1 plotted versus U1, with the slope dS1/dU1 = 1/T1 indicated.]
The logarithmic dependence of S = kσ on energy is quite universal for many-particle systems because we have demanded that the entropy 1) is an additive property of a system, and 2) it increases monotonically with the number of accessible states, which rises exponentially with particle number. In precise terms, entropy (like energy) is an extensive quantity: its value doubles when two identical systems are put together. (Intensive quantities like T and p do not change when two identical systems are combined.)
Notice that the number of accessible states, Ω, is not an extensive property because it increases (exponentially) with the number of particles.
The slope of the logarithmic curve is 1/T, which decreases monotonically as the energy of the system increases. This behavior for all types of systems is consistent with our intuition that temperature increases with increasing internal energy.
As in the case of volume exchange, the condition for thermal equilibrium between two systems can be represented graphically by the logarithmic plot:

[Figure: S1(U1), S2(Utot – U1), and the total entropy S1 + S2 plotted versus U1 between 0 and Utot. The peak of S1 + S2 marks the most probable U1; U1* marks a higher initial energy of system 1.]


The probability of observing system 1 with a particular energy U1 is proportional to Ω(U1) = e^σ(U1), which for large systems is a very sharply peaked function. Hence, the most probable value of U1 can be confidently called the equilibrium value.
The equilibrium value of U1 occurs where the slopes of the S1 and S2 curves (with respect to U1 and U2) are equal. If the energy of system 1 were initially higher, say at the energy denoted by U1*, the temperature of system 1 would be higher (smaller slope) than for equilibrium, and vice versa for system 2. To reach equilibrium, energy would flow from system 1 (at higher T) to system 2 (at lower T) until the temperatures are equal. Because we want the convention that energy flows from “high T” to “low T,” we have defined T as the inverse of the slope.
The fact that energy flows in the direction that maximizes entropy, however, is not a convention. It is the Second Law of Thermodynamics, which we now understand in terms of the most probable condition of a many-particle system in equilibrium.
The shape of the logarithmic function makes all this possible. Because the slope of σ increases towards infinity as U approaches the ground-state energy, it is always possible to match the slopes of two systems so that thermal equilibrium is attained.

E. Entropy of the Ideal Gas—Temperature Dependence
The natural variables for entropy are U, V, and N. However, many problems that we will deal with involve a system of interest in good thermal contact with a much larger system at temperature T (i.e., a “thermal reservoir”). For these cases it is useful to describe the entropy of the system in terms of T rather than U. From the definition of absolute temperature, we can determine how the entropy of a gas or solid depends on temperature. Assuming that the volume is fixed, we rearrange (7-6) as

dS = dU/T

(7-13)

Since no work is done,

dU = CV dT

(7-14)

leading to,

dS = CV dT/T

(7-15)

Assuming constant CV and integrating this differential, we find the change in entropy as the temperature is changed from T1 to T2,

S2 – S1 = CV (ln T2 – ln T1)
ΔS = CV ln(T2/T1).   (constant CV)

(7-16)


This relation also works well for solids because their volumes are relatively constant as a function of temperature. For constant-pressure processes in a gas or solid one can substitute CP for CV. For an ideal gas with constant CV we can now include the volume dependence of the entropy, from Chapter 6,
S = CV ln(T) + Nk ln(V) + constants.

(7-17)

This equation can only be used to determine differences in entropy during a thermodynamic process: if both V and T are changing quasi-statically for an ideal monatomic gas (CV = (3/2)Nk),
Sf – Si = (3/2)Nk ln(Tf /Ti) + Nk ln(Vf /Vi).

(7-18)

There are no unknown constants here, and the result is independent of units used for
V. The absolute entropy of the ideal gas is calculated in Chapter 11. Notice that this relation implies S(U,V,N) = Nk ln(U^(3/2)V) + constants*, where U is the average thermal energy U = (3/2)NkT of the ideal monatomic gas. The instantaneous energy of a gas with many particles is extremely close to its average energy (in contrast to widely changing energy of a single particle), as discussed at the beginning of Chapter 6.

* This formula was used in Exercise 3 of Chapter 2. It can be shown that for large N, fluctuations in the energy of the gas have little effect on its entropy.
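Equation (7-18) is easy to exercise numerically. A sketch, in units of Nk, checking that a quasi-static adiabatic expansion (where TV^(2/3) stays constant for a monatomic gas) produces zero entropy change:

```python
from math import log

# Entropy change of an ideal monatomic gas, Eq. (7-18), in units of Nk.
def dS_over_Nk(Ti, Tf, Vi, Vf):
    return 1.5 * log(Tf / Ti) + log(Vf / Vi)

# Quasi-static adiabatic expansion: T V**(2/3) = constant, so the
# temperature and volume terms cancel exactly.
Ti, Vi, Vf = 300.0, 1.0, 10.0
Tf = Ti * (Vi / Vf) ** (2.0 / 3.0)
print(dS_over_Nk(Ti, Tf, Vi, Vf))   # 0 to within rounding
```

By contrast, an isothermal doubling of the volume gives ΔS = Nk ln 2, the same increase found for the free expansion in Chapter 6.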


Exercises
1) Consider a system of 4 oscillators with a total energy of 6ε; that is, 6 quanta shared among 4 oscillators. a) What is the average energy per oscillator? ____________
b) Using the exact formula for Ω, determine the probability distribution for a single oscillator in this system by completing the following table and plotting the result. (Check your numbers with the sums given.) Here En is the energy of the selected oscillator, and UR is the energy of the remaining three oscillators.

[Table to fill in: columns En, UR, ΩR, Pn, and PnEn, for En = 0, ε, 2ε, 3ε, 4ε, 5ε, 6ε. Check sums: ΣΩR = 84 and ΣPn = 1.]

Compute the average energy of an oscillator with the formula <E> = Σ PnEn and mark the average energy with a vertical line.

[Graph to fill in: Pn (vertical axis, 0 to 0.5) versus En/ε from 0 to 6.]


2) Consider a collection of 100 oscillators, each with an average of 10 quanta. a) By what factor would Ω change if the total energy were increased by a factor of 2?
b) By what factor would Ω change if one more oscillator were added to the original system without changing the total energy?

3) An ideal monatomic gas at an initial pressure of p1 = 1 atm and temperature
T1 = 300 K expands adiabatically from 1 liter to 10 liters.
[Figure: p–V diagram showing the adiabatic expansion from state 1 to state 2.]
Consider the relation: S2 – S1 = (3/2)Nk ln(T2/T1) + Nk ln(V2/V1).
a) What is the change in entropy due to the volume change alone, ignoring any effects due to changing internal energy?

b) What is the final temperature?

c) What is the change in entropy due solely to the change in temperature, ignoring the entropy change due to volume change?

d) Considering a) and c), what is the total entropy change in this adiabatic expansion process? Is the result surprising?


4) a) What is the entropy change for 2 moles of ideal diatomic gas cooled from 300 K to 200 K at constant V? b) At constant p?

5) The Einstein model of a solid considers each atom in a harmonic potential well with 3 degrees of freedom; therefore, N = 10²² atoms correspond to 3 × 10²² oscillators. Let f = 1 × 10¹² Hz. The average energy of each oscillator is kT with T = 300 K.
a) Find the average number of quanta q for each oscillator.

b) Determine σ = ln Ω for the solid. (Stirling’s eqn.: ln(N!) ≈ N ln N – N.)

The following graphing exercises are provided for those who wish to gain more insights into entropy maximization and spin temperature:
6) a) Consider a single-particle system with number of accessible states Ω1(U) = C1U in thermal contact with a 4-particle reservoir with number of accessible states ΩR(U) = CR(10 – U)⁴. For simplicity set C1 and CR equal to unity. The total energy of the two systems is 10. Plot the individual entropies σ1 and σR and the total entropy σ(U) = ln[Ω1(U)ΩR(10 – U)] for values of U between 1 and 9.
b) Determine analytically what value of U gives the maximum S(U). Check your answer against the graph.

7) a) Plot Ω(m) = 100 exp(–m²/25) between m = –10 and 10. This Gaussian function mimics the number of accessible states of a multiple-spin system (with m = U/μB), as we have seen in Chapter 5 and will study later in Chapter 10.
b) Plot the entropy, σ(m) = ln Ω(m), of this model system between m = –10 and 10. What is this shape called?
c) Calculate and sketch the temperature of this spin system. (Note: dσ/dU = (1/μB) × dσ/dm.) Can you explain the unusual dependence? Do you think that negative temperatures are physically attainable in the laboratory?


CHAPTER 8

Boltzmann Distribution

A. Concept of a Thermal Reservoir
Consider the following situation: A small system consisting of one oscillator is brought into thermal contact with a larger system (a “thermal reservoir”) consisting of three oscillators:
[Figure: a single oscillator (the small system) with energy En in thermal contact with a reservoir of three oscillators with energy UR = Ei + Ej + Ek. Total energy Utot = En + UR; ΩR = # microstates of reservoir.]

The basic question asked here is, “What is the probability that the small system is found in a state with a particular energy En?” Because each microstate is equally likely, the probability is proportional to the number of ways En can occur, which is simply the number of microstates of the reservoir when it has energy (Utot – En). To compute the probability, we


must divide ΩR(Utot – En) by the total number of microstates of the system, considering all possible values of En,

Pn = ΩR(Utot – En)/Ωtot = ΩR(Utot – En)/Σm ΩR(Utot – Em) ∝ ΩR(Utot – En)

(8-1)

Results for Utot = 3ε are easily determined from your previous exercises in Chapter 7. Your graph of Ω(U) for 3 oscillators becomes ΩR(UR) here. Notice the value of the denominator, Ωtot = ΣΩR = 20. The results are:

En    UR = Utot – En    ΩR(UR)    Pn = ΩR/ΣΩR
0     3ε                10        10/20 = 0.50
ε     2ε                 6         6/20 = 0.30
2ε    ε                  3         3/20 = 0.15
3ε    0                  1         1/20 = 0.05
                ΣΩR = 20

[Graph: Pn versus En, decreasing from 0.5 at En = 0 to 0.05 at En = 3ε.]

Why the decrease? In order for the energy of the small system to be increased by energy ε, that energy must be taken away from the reservoir. The number of microstates of the reservoir decreases rapidly as its energy decreases (as in your plot of Ω(U) for 3 oscillators), so the probability of observing the single oscillator in a state with energy En decreases rapidly with increasing En. If you follow this reasoning, you understand a basic result of statistical mechanics. It will allow you to predict the average energies of oscillators, atoms, molecules, spins, etc., in thermal equilibrium.
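The table above can be reproduced by brute force; a sketch reusing the microstate counter from Chapter 7:

```python
from itertools import product

# One oscillator in contact with a 3-oscillator reservoir, Utot = 3 quanta.
# P(En) is proportional to the number of reservoir microstates that hold
# the remaining Utot - En quanta.
def omega(N, q):
    # count level assignments (n1, ..., nN) with n1 + ... + nN = q
    return sum(1 for lv in product(range(q + 1), repeat=N) if sum(lv) == q)

Utot = 3
weights = [omega(3, Utot - En) for En in range(Utot + 1)]   # [10, 6, 3, 1]
total = sum(weights)                                        # 20
print([w / total for w in weights])                         # [0.5, 0.3, 0.15, 0.05]
```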

B. The Boltzmann Factor
Now let us generalize this problem to any single-particle system with energy levels En in thermal contact with a many-particle system with energy UR. Again, n identifies a single state of the small system, and R stands for thermal reservoir. A thermal reservoir


Boltzmann Distribution Chapter 8

is a system that has sufficiently large heat capacity that its temperature remains virtually unchanged when contacted by a small system of interest. (dσR/dUR ≈ constant.)

[Figure: a small system with discrete levels En in contact with a large reservoir of energy UR.]

The small system need not have equally spaced levels like the oscillator, but we assume that it has discrete energy levels labeled by the index n. This is a reasonable assumption because all confined particles have quantized levels due to the wave nature of matter.
The entropy of the reservoir is written as,
SR = k ln ΩR

(8-2)

The number of microstates of the reservoir ΩR(UR) depends on the energy of the small system because

UR = Utot – En

(8-3)

with Utot = total energy (a constant). As illustrated in the former problem, the probability that the small system is in a state labeled n (with energy En) is,

Pn = ΩR(Utot – En)/Σm ΩR(Utot – Em) ∝ ΩR(Utot – En)

(8-4)

where the sum over states in the denominator is the normalizing factor (= 20 in the above example).
ΩR(UR) varies rapidly with energy UR, so we can more accurately approximate the entropy, SR = k ln ΩR, which varies more slowly:

[Figure: SR(Utot – En) plotted versus En from 0 to Utot.]


Because the small system is likely to extract a very small fraction of the reservoir’s energy in equilibrium, En << Utot, we therefore need only consider the region enclosed by the circle:

[Figure: close-up of SR versus En near En = 0, where SR is approximated by the straight line So – En/T.]

As indicated in the drawing, the entropy of the reservoir near En = 0 can be approximated by a straight line,

SR = So – (dSR/dUR)En + …   (Taylor Series)

SR ≈ So – En/T

(8-5)

where T is the temperature of the reservoir, which is a constant for small En.
Probability is directly proportional to Ω, not S. Entropy and the number of microstates are related by SR = k ln ΩR, or equivalently

ΩR = e^(SR/k).

(8-6)

Combining the above equations, we have,

ΩR = e^(So/k – En/kT) = e^(So/k) e^(–En/kT).

(8-7)

The first factor is a property of the reservoir (its number of microstates when it has all the energy) and is a constant. Thus, the probability that a state labeled n (with energy En) is occupied is simply,

Pn = C e^(–βEn).   (β ≡ 1/kT)

(8-8)


This is the famous Boltzmann distribution. The “constant” C is determined from the normalization condition ΣPn = 1:

Σn Pn = C Σn e^(–βEn) = 1,   so that   C = 1/Σn e^(–βEn) = 1/Z,

where Z = Σn e^(–βEn) (the “sum over states”). A plot of Pn versus En looks a lot like our prior results for small oscillator systems. It basically says that the number of accessible states of a reservoir decreases exponentially with the energy removed, as long as the energy removed is small compared to the total energy of the reservoir.
This is an extremely important result that you will use in a variety of problems. Why is it so important? In addition to telling us the likelihood of finding a small system in a given state (for example, an excited state of an atom or molecule), the Boltzmann factor, exp(–βEn), is used to determine the thermally-averaged values of various properties of the system. We now consider three practical examples.
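As a sketch of how the distribution is used in practice, here is a numerical check (with an arbitrary level spacing and temperature) that for equally spaced levels En = nε the thermal average <E> approaches kT when kT >> ε, as claimed for the oscillator in Chapter 7:

```python
from math import exp

# Boltzmann average <E> = sum(En * exp(-En/kT)) / Z for levels En = n*eps.
def average_energy(eps, kT, nmax=5000):
    boltz = [exp(-n * eps / kT) for n in range(nmax)]
    Z = sum(boltz)                                   # the "sum over states"
    return sum(n * eps * b for n, b in enumerate(boltz)) / Z

print(average_energy(eps=1.0, kT=50.0))   # close to kT = 50
```

At low temperature (kT << ε) the same sum gives an average energy near zero, because the oscillator is almost always in its ground state.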

C. Paramagnetism
In Chapter 5 we counted the microstates associated with a system of magnetic moments in a magnetic field:

Spin energy (moment μ in field B):

Eup = –μB,   Edown = +μB,   ΔE = 2μB

Note that the spins pointing in the direction of the magnetic field (i.e., “up”) have the lower energy. This system corresponds to (spin-½) electrons whose moments have a magnitude given by
μ = μB = 9.2848 × 10^–24 J/T.    (electron)    (8-9)


The unit of magnetic field strength (or flux density) is the tesla (T). An atom or molecule with unpaired electron spins has a magnetic moment in units of μB. A collection of such atoms with little or no interaction between spins is known as a paramagnetic system.
In this section we will determine the total magnetic moment of a paramagnetic system in contact with a thermal reservoir at temperature T. For a system of N spins with Nup moments pointing in the direction of the magnetic field, and Ndown moments pointing opposite the field, the total moment in the direction of the field is,
M = (Nup – Ndown) μ = m μ    (8-10)

and the energy of the spin system is,

U = –MB = –mμB .    (8-11)

If the spins are in contact with a thermal reservoir at temperature T, we can compute the probability that a particular spin will be pointing up or down by using the Boltzmann distribution:

Pup = C e^(–Eup/kT) = C e^(μB/kT)    (8-12)

Pdown = C e^(–Edown/kT) = C e^(–μB/kT) .

The constant C is determined from the condition Pup + Pdown = 1, giving,

C = 1 / (e^(μB/kT) + e^(–μB/kT)) .    (8-13)

Because Nup = NPup and Ndown = NPdown, the total moment of the spin system at temperature T is

M = Nμ(Pup – Pdown) = Nμ (e^(μB/kT) – e^(–μB/kT)) / (e^(μB/kT) + e^(–μB/kT)) .    (8-14)

This combination of exponentials is defined as the hyperbolic tangent,

M = Nμ · tanh(μB/kT) .    (8-15)


Here is a graph of this function:

[Graph: total moment M versus μB/kT. M rises linearly from zero and saturates at Nμ; a cross marks the point μB/kT = 1.]

We see that the total moment "saturates" at Nμ, corresponding to all spins pointing along the field.

How strong must the magnetic field be to significantly "polarize" an electron spin system at 300 K? To find out, we plug in the numbers that make μB/kT = 1 (marked by the cross in the drawing):

B = kT/μ = (1.38 × 10^–23 J/K)(300 K)/(9.27 × 10^–24 J/T) = 447 Tesla.    (8-16)
This field strength is about ten times larger than that produced by the largest laboratory magnet made today, which indicates that the practical region for paramagnetic systems at room temperature is the small linear region near the origin of the graph. We can approximate the hyperbolic tangent in this region by:

tanh x ≈ x   for small x,    (8-17)

which leads directly to the equation,

M ≈ Nμ²B/kT ∝ B/T .    (8-18)

This inverse-temperature dependence of the total moment (known as Curie's Law) is the signature of a paramagnetic system at normal temperatures. By lowering the temperature to a few kelvin, however, it is possible to almost completely polarize a system of electron spins in a laboratory field (M ≈ Nμ). In the low-temperature regime we must revert to the exact expression for M, Equation (8-14).
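The full expression (8-15) and its Curie's-Law limit (8-18) are easy to check numerically. The following sketch (not from the text) uses rounded constants and hypothetical values of N, B, and T:

```python
import math

mu = 9.28e-24    # electron magnetic moment, J/T (rounded from Eq. 8-9)
k = 1.38e-23     # Boltzmann constant, J/K

def total_moment(N, B, T):
    """M = N mu tanh(mu B / kT), Eq. (8-15)."""
    return N * mu * math.tanh(mu * B / (k * T))

N = 1.0e23
# Low temperature: saturation, M -> N mu (all spins aligned with the field)
sat = total_moment(N, B=1.0, T=0.01) / (N * mu)
print(sat)        # very close to 1

# Room temperature: Curie's Law, M ~ N mu^2 B / kT (Eq. 8-18)
M = total_moment(N, B=1.0, T=300.0)
curie = N * mu**2 * 1.0 / (k * 300.0)
print(M / curie)  # close to 1, since tanh x ~ x for small x
```

At B = 1 T and T = 300 K, μB/kT is only about 0.002, so the linear (Curie) approximation is excellent; at T = 0.01 K it fails completely and the moment saturates.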


D. Elasticity in Polymers
Ever wonder how a rubber band works? It's a little different from most solids because it is composed of long-chain polymers with many segments. Let us imagine that a segment can point either parallel or antiparallel to the chain. This simple model of the rubber band considers the segments of the polymer to be randomly oriented in an "up" or "down" position, sort of like the drawing below.

[Figure: a chain of segments, each of length a, hanging in the gravitational field g and stretched to total length L by a weight w = mg; flipping one segment changes the energy by E = 2aw.]

The polymer-plus-weight is mathematically equivalent to the spin problem, with magnetic field replaced by gravitational field.

In the polymer case, a force, w = mg, stretches the polymer to length L. The change in gravitational potential energy when one of the segments flips is:

ΔE = 2aw ,    (8-19)

where a equals the segment length. (This energy is provided by the thermal reservoir.) We note the correspondence μB → aw with the spin system and write for the thermal-average length of the polymer:

<L> = Na · tanh(aw/kT) ≈ Na²w/kT    (8-20)


(Compare to M in the last section.) The length is proportional to the weight and inversely proportional to T in the high-T limit. Obviously there is not a perfect correspondence with the spins because the polymer has a finite length at zero force and the segment picture is oversimplified, but this model gives us useful insights into how the rubber band works. Its elasticity is not due to the stretching of bonds, as in the case of ordinary solids, but it comes from the thermodynamics of the long-chain polymer.
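A numerical sketch of Eq. (8-20), with made-up segment parameters, makes the temperature dependence easy to explore:

```python
import math

k = 1.38e-23  # Boltzmann constant, J/K

def avg_length(N, a, w, T):
    """<L> = N a tanh(a w / kT), Eq. (8-20)."""
    return N * a * math.tanh(a * w / (k * T))

# Hypothetical numbers: 1000 segments of 1 nm under a 1-pN load
N, a, w = 1000, 1.0e-9, 1.0e-12
L250 = avg_length(N, a, w, 250.0)
L350 = avg_length(N, a, w, 350.0)
print(L250, L350)   # the stretched length is smaller at the higher temperature
```

At fixed load the computed length decreases as T rises, and it can never exceed the fully stretched length Na.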
One amazing consequence is that the total length of the weighted polymer is predicted to shrink as the temperature is raised. You can do this experiment yourself with a rubber band, a weight, and an "entropy gun" (such as the one you use to dry your hair). This contraction effect is opposite to the thermal expansion of ordinary solids. The orientations of the segments become more random as the temperature is raised, causing the rubber band to shrink. More generally, many types of polymers tend to curl up as temperature increases.

E. Harmonic Oscillator
An important example of Boltzmann statistics is a simple harmonic oscillator in contact with a thermal reservoir at temperature T. The distribution of states for the simple harmonic oscillator is almost the simplest imaginable:

[Figure: the harmonic oscillator ladder of equally spaced levels En = nε, n = 0, 1, 2, …, with spacing ε.]

In Appendix 6, we show that the sum over states in this case is

Z = Σ e^(–βnε) = 1/(1 – e^(–βε))    (β = 1/kT)    (8-21)

and that the average energy of a harmonic oscillator in contact with a thermal reservoir is:

<E> = ε/(e^(βε) – 1) .    (8-22)

Here is a plot of this function:

[Graph: <E> versus kT. At high temperature the curve approaches the classical result <E> ≈ kT, and it falls rapidly toward zero below kT ≈ ε/2.]

At high temperatures (βε << 1), the exponential can be approximated by the first two terms of a Taylor series expansion, e^(βε) ≈ 1 + βε, and the average energy becomes <E> ≈ β^(–1) = kT. The total vibrational energy of a system of N oscillators is U = N<E> = NkT, as predicted by the Equipartition Theorem. However, at low temperatures the energy decreases more slowly with T. What is happening at low temperatures?
Essentially, when kT < ε the thermal excitations are "frozen out." If kT << ε, the probability that the n = 1 state is occupied is (from Eq. (8-8)),

P1 = e^(–βε)/(1 + e^(–βε) + …) ≈ e^(–βε)    (β = 1/kT)    (8-23)
which decreases rapidly as T decreases. The evolution from quantum to classical statistics is shown schematically for several temperatures:
[Figure: occupation probabilities Pn versus E at three increasing temperatures. For kT < ε essentially only the ground state is occupied; for kT ≈ ε a few levels are occupied; for kT > ε many levels are occupied and the distribution approaches the classical limit.]

The total thermal energy of N diatomic molecules is given by U = N<E>, so the heat capacity is
Cv = dU/dT = N d<E>/dT.

(8-24)

The slope of the <E> graph diminishes at low T; therefore, the contribution to the heat capacity due to vibrations drops to zero as the temperature approaches zero:

[Graph: vibrational heat capacity Cv versus kT, showing the freeze-out of vibrations at low temperature (Cv → 0 below kT ≈ ε/2) and the classical value Nk at high temperature.]

Strongly bonded molecules like N2, O2, and H2 have large vibrational frequencies; therefore, ε = hf >> kT at 300 K and the vibrational contribution to their specific heat is small. On the other hand, CO2 and H2O molecules have low-frequency torsional vibrations that are thermally active at room temperature (ε ≈ kT). For this reason the heat capacity of these polyatomic gases has a significant vibrational contribution at room temperature. Now we know the physical origin of the temperature dependence of the specific heat (cv = αR) for an ideal gas, as discussed in Chapter 3(C). We shall see in the next chapter that the low-frequency normal modes of CO2 play a key role in the global warming of our planet.
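The freeze-out described above is easy to see numerically. This sketch (not from the text; units are chosen so that k = 1, so temperatures are quoted as kT and heat capacities come out in units of k) evaluates Eq. (8-22) and a finite-difference heat capacity:

```python
import math

def avg_energy(eps, kT):
    """<E> = eps / (e^(eps/kT) - 1), Eq. (8-22)."""
    return eps / math.expm1(eps / kT)

eps = 1.0
E_hot = avg_energy(eps, 100.0)   # high T: <E> ~ kT (Equipartition)
E_cold = avg_energy(eps, 0.05)   # low T: <E> ~ eps e^(-eps/kT), frozen out
print(E_hot, E_cold)

def heat_capacity(eps, kT, d=1e-4):
    """Vibrational C = d<E>/dT by central finite difference (units of k)."""
    return (avg_energy(eps, kT + d) - avg_energy(eps, kT - d)) / (2 * d)

print(heat_capacity(eps, 100.0))   # near 1: the classical value C ~ k
print(heat_capacity(eps, 0.05))    # near 0: the vibration is frozen out
```

With kT = 100ε the oscillator is classical (<E> ≈ kT, C ≈ k); with kT = 0.05ε both the energy and the heat capacity are exponentially suppressed, which is exactly the freeze-out in the Cv graph.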
Probing the Boltzmann Factor: e^(–E/kT)

The derivation of the Boltzmann factor in Eq. (8-8) may seem a little magical. If so, you may wish to gain some intuition about how this fundamental relation comes about by considering the following example: A 6-particle reservoir of harmonic oscillators initially has energy equal to 10, assuming ε = 1. The number of available states of the reservoir depends on how much energy U a single oscillator draws from it; i.e., ΩR = (10 – U)^5. Show that dσR/dU = –1/2 for the reservoir at U = 0 and generate two simple graphs using your calculator:

a) Plot ΩR = (10 – U)^5 from U = 0 to 5.

b) Plot σR = ln ΩR over the range U = 0 to 9, and show that near U = 0 a good approximation is σR = ln ΩR(0) – U/2, implying that ΩR(U) = ΩR(0) e^(–U/2).

c) Sketch this function on your original graph … a good approximation?

[Graphs: ΩR = (10 – U)^5 plotted from U = 0 to 5 (vertical scale up to 1 × 10^5), and σR = ln ΩR plotted from U = 0 to 9 (vertical scale up to 12).]


Exercises
1) a) Compute the ratio Pup/Pdown for electron spins at T = 2 K and B = 1 Tesla.

b) At what magnetic field would this ratio equal 10?

c) Compute the total moment for 1023 spins at T = 2 K and B = 1 Tesla.

2) With Equation (8-22), show that in the “classical limit” of high temperatures the energy of N harmonic oscillators is U = NkT.

3) Calculate the average energy for a harmonic oscillator with level spacing ε = 0.5 × 10^–20 J at a temperature of 300 K.

4) A hypothetical molecule has 4 low-lying states spaced by an energy ε = 0.5 × 10^–20 J (levels En = 0, ε, 2ε, 3ε).

a) What is the average population of each level for a collection of 100 molecules at 300 K?

b) What is the average energy of one of these molecules? Compare your answer to that of problem 3.

CHAPTER 9

Distributions of Molecules and Photons

A. Applying the Boltzmann Factor
When we consider a small system in contact with a thermal reservoir at temperature T, the Boltzmann factor tells us the probability of finding the small system in a particular state (labeled n),
Pn = C e^(–En/kT)    (9-1)

where En is the energy of that state and C is the normalization constant determined by setting the sum over all probabilities ΣPn equal to 1. Often we wish to know the probability, P(E)ΔE, that the small system has energy between E and E + ΔE, where P(E) is the probability density
(probability per unit energy). To answer this question, we must find the number of states in the energy range E to E + ⌬E, then multiply by the probability that each state is occupied.
For a 1-dimensional harmonic oscillator (e.g., a diatomic molecule) the distribution of energy levels is very simple, so the probability density is also very simple:


P(E) = C e^(–E/kT)    (1-d oscillator)

[Figure: the 1-d oscillator levels, spaced by ε, alongside the exponentially decaying P(E) versus E.]

In this case, the number of states with energy between E and E + ΔE is simply ΔE/ε, which is independent of E. The normalization constant C is found by setting ∫P(E)dE = 1, which yields C = 1/kT using an integral given in Appendix 4.
In the Einstein model of a solid, interactions between atoms are ignored and each atom is considered to act as a three-dimensional oscillator. In Chapter 7 we found that the number of states associated with independent oscillations along x, y, and z increases as the square of the total energy: Ω(E) ∝ E². Therefore, the probability density for a vibrating atom in a solid has the form:

P(E) = C E² e^(–E/kT)    (3-d oscillator)

The normalization constant in this case is not the same as for the single harmonic oscillator. Doing the integral gives C = 1/[2(kT)³], again referring to Appendix 4.
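Both normalization constants can be verified by direct numerical integration. Here is a sketch (not from the text; stdlib only, with energies measured in units of kT):

```python
import math

def integrate(f, a, b, n=200_000):
    """Simple midpoint-rule integral (good enough for these smooth integrands)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

kT = 1.0
p1 = lambda E: (1.0 / kT) * math.exp(-E / kT)             # 1-d oscillator
p3 = lambda E: E**2 / (2.0 * kT**3) * math.exp(-E / kT)   # 3-d oscillator

print(integrate(p1, 0.0, 50.0))   # ~1: C = 1/kT normalizes the 1-d P(E)
print(integrate(p3, 0.0, 50.0))   # ~1: C = 1/[2(kT)^3] normalizes the 3-d P(E)
```

The upper limit of 50 kT stands in for infinity; the neglected tail is of order e^(–50).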

B. Particle States in a Classical Gas
In order to determine the distribution of energies of particles in an ideal gas, we must again determine how many states there are between E and E + ΔE. In Chapter 6 we considered the microstates associated with a gas of particles in a container of volume V. We divided the volume up into a grid of cells with volume δV, and we assumed that


the classical particle can be localized in a cell at position r. The picture looks something like this (imagine a 3-dimensional grid):

[Figure: a 3-d grid of cells, each of volume δV = dx dy dz, filling the container volume V; a particle occupies the cell at position r.]

To completely determine the particle's state, we must also specify its momentum. Momentum space is divided up the same way that we sectioned real space. Now here is the important point: Because E = p²/2m, a sphere in this momentum space represents a constant-energy surface described by the equation, px² + py² + pz² = p² = 2mE, where p = (2mE)^(1/2) is the radius of the sphere:

[Figure: a sphere of radius p = (2mE)^(1/2) in momentum space (axes px, py, pz); its surface is a constant-energy surface.]

So, how many states are there for particles with energy between E and E + ΔE? The relevant volume in momentum space is a spherical shell of radius p and thickness Δp.

The number of momentum states with energies between E and E + ΔE is proportional to the volume between two constant-energy spheres:

Ω1 ∝ vol. of shell ∝ 4πp²Δp ∝ 4π(2mE)Δp

Using p = (2mE)^(1/2), we relate Δp and ΔE:

Δp = (dp/dE)·ΔE ∝ E^(–1/2)·ΔE

Combining these two equations, we have the desired result,

Ω1 ∝ E^(1/2) ΔE
The “1” stands for “one particle.”

C. Maxwell–Boltzmann Distribution
All the states within the thin shell have nearly the same energy E and nearly the same Boltzmann factor exp(–βE), with β = 1/kT. Therefore, the probability of finding a particle with energy in the range E to E + ΔE is

P(E)ΔE ∝ Ω1 exp(–βE).

With Ω1 ∝ E^(1/2) ΔE, the probability density for a particle in an ideal gas is:

P(E) = C E^(1/2) e^(–βE)    (9-2)


A plot of this function looks like this:

[Graph: P(E) versus E = ½mv². The curve rises as E^(1/2) at small E, peaks near E = ½kT, and decays as e^(–βE) at large E.]

This is the famous Maxwell–Boltzmann Distribution of particle energies in an ideal gas. By setting dP(E)/dE = 0 you will find that the peak in the distribution occurs at the energy Epeak = (1/2)kT. The constant of proportionality is determined by integrating from zero to infinity:

∫ C E^(1/2) e^(–βE) dE = 1

The substitution E = x² and an integral given in Appendix 4 yields C = 2β^(3/2)/π^(1/2). Another integral in Appendix 4 provides us with the average energy of a particle at temperature T:
<E> = ∫E P(E) dE = C ∫E^(3/2) e^(–βE) dE = (3/2) kT,    (9-3)

which is precisely the prediction of the Equipartition Theorem for the ideal monatomic gas. Finally, the average energy of a monatomic gas of N atoms is:
U = N <E> = (3/2) NkT

(9-4)

This derivation of <E> from first principles justifies the Equipartition Theorem (½kT for each quadratic degree of freedom) for the ideal monatomic gas. It is a result of
Boltzmann statistics, which comes from maximizing the entropy of the gas-plus-reservoir.
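The normalization, the peak at kT/2, and the (3/2)kT average can all be confirmed by brute-force numerical integration. A sketch (not from the text; kT is set to 1):

```python
import math

def integrate(f, a, b, n=200_000):
    """Midpoint-rule integral over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

beta = 1.0                                 # work in units where kT = 1
C = 2.0 * beta**1.5 / math.sqrt(math.pi)   # normalization constant, Eq. (9-2)
P = lambda E: C * math.sqrt(E) * math.exp(-beta * E)

norm = integrate(P, 0.0, 60.0)
Eavg = integrate(lambda E: E * P(E), 0.0, 60.0)
print(norm)   # ~1 (normalized)
print(Eavg)   # ~1.5 = (3/2) kT, Eq. (9-3)

# Locate the peak of P(E) by a brute-force scan: it sits at E = kT/2
E_peak = max((i * 1e-4 for i in range(1, 100_000)), key=P)
print(E_peak)  # ~0.5
```

This reproduces the Equipartition result <E> = (3/2)kT from the distribution itself, with no calculus beyond a numerical sum.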

D. Photons
Another important application of Boltzmann statistics is the radiation of electromagnetic energy. How much energy is radiated from a hot object such as a light bulb, your body, or the sun? We begin this problem by considering electromagnetic waves confined to a box (or “cavity”) at temperature T. The waves are naturally generated by the jiggling of electronic charges in the walls of the cavity. An equilibrium is set up between the moving electrons and the electromagnetic waves.


We know from our E&M course that electric fields vanish inside a metal; therefore, the waves have a null very close to the wall boundaries. This means that there are discrete wavelengths determined by the size of the cavity:

[Figure: a standing electromagnetic wave, with fields E and B, between cavity walls a distance L apart.]

The allowed wavelengths of these standing waves are
λm = 2L/m

(9-5)

with m = an integer (the "mode index"). The corresponding frequencies are, f = c/λm

(9-6)

where c is the velocity of light. “m” is the number of half wavelengths fitting in the cavity.
Now here's the new physics: Quantum field theory dictates that only certain energies are allowed for each wave, just like our harmonic oscillator with an energy level spacing given by ε = hf,

En = n ε = n hf    (9-7)

where h = 6.626 × 10^–34 J·s is Planck's constant and n = 0, 1, 2, 3… Strange as it may seem, we can picture the standing wave as having only certain allowed amplitudes corresponding to the quantized energies above.
[Figure: a cavity mode with m = 3 half-wavelengths of the electric field E; its quantized energies En = nε = nhf are pictured as discrete allowed amplitudes for n = 0, 1, 2, 3.]


The quantum of electromagnetic energy is ε = hf. We refer to this packet of electromagnetic energy as a "photon" and say that there are 1, 2, or 3 photons occupying the EM mode shown above. Each mode (with wavelength λm) has a different frequency, so n(f) is designated as the number of photons in a particular mode with frequency f. The energy in that mode is simply n(f) × hf.

E. Thermal Radiation
From our discussion of the harmonic oscillator in Chapter 8, we know that the average energy in a harmonic oscillator mode with frequency f is:

<E> = hf/(e^(βhf) – 1)    (9-8)

which is known as the Planck Distribution. Now we are faced with adding the contributions from all possible modes. Since there are an infinite number of possible frequencies, f = c/λm, the total electromagnetic energy in the cavity is formally obtained by summing over all these modes,

U = Σf <E> = Σf hf/(e^(βhf) – 1)    (9-9)

In practice, the counting of modes is similar to the counting of accessible states for the ideal gas. For a 3-dimensional cavity there are three mode indices, mx, my, and mz.

[Figure: in the space of mode indices (mx, my, mz), a thin shell of radius m = (mx² + my² + mz²)^(1/2); each mode has an average energy <E> = hf/(e^(βhf) – 1).]

The frequency f of a mode with indices mx, my, mz equals mc/2L, where m = (mx² + my² + mz²)^(1/2) is the radius of a constant-frequency sphere. The volume of a shell of radius m and width Δm is 4πm²Δm, and because m = (2L/c)f, we have Δm = (2L/c)Δf. Therefore, the number of modes between f and f + Δf is proportional to f²Δf.


Taking into account both the number of modes between f and f + df and the average energy <E> in a given mode, the electromagnetic energy for all modes with frequencies between f and f + df is,

u(f) df ∝ [f³/(e^(βhf) – 1)] df

where u(f ) is the electromagnetic energy per unit frequency, sketched below,

[Graph: the Planck Radiation Law, u(f) versus photon energy hf, with its peak at hf = 2.8 kT.]

The energy 2.8 kT at the peak is determined by setting du(f)/df = 0. Taking into account the constant factors and integrating ∫u(f)df, the total electromagnetic energy contained in the cavity of volume V = L³ at temperature T is,

U = V · 8π^5 (kT)^4 / [15 (hc)^3]    (9-10)

This is the famous Stefan-Boltzmann Law of Radiation. This important prediction tells how objects at temperature T radiate electromagnetic energy. The total power per unit area radiated by a perfect radiator (a so-called "blackbody") at temperature T is proportional to U and is given by,

JU = σSB T^4,   with   σSB = 5.670 × 10^–8 W/m²K⁴ .    (9-11)

A numerical example of thermal radiation is provided as an Exercise for this chapter.
Real solids are not perfect electromagnetic radiators. This fact is usually taken into account with a single parameter, the emissivity ε. The radiated power per area becomes JU = ε σSB T^4, where ε depends on the material and quality of the surface. In equilibrium the emission from a solid must equal its absorption from the environment. Perfect emitters (ε = 1) are also perfect absorbers; hence, the name "Black Body Radiation" is associated with Eq. (9-11).
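The numerical value of σSB in Eq. (9-11) actually follows from Eq. (9-10). The relation J = (c/4)(U/V) between radiated flux and energy density is standard kinetic theory but is not derived in the text, so treat this sketch as a consistency check:

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

# Eq. (9-10): energy density U/V = 8 pi^5 (kT)^4 / [15 (hc)^3].
# Assuming J = (c/4)(U/V), the Stefan-Boltzmann constant is:
sigma_SB = 2.0 * math.pi**5 * k**4 / (15.0 * h**3 * c**2)
print(sigma_SB)          # ~5.670e-8 W m^-2 K^-4, matching Eq. (9-11)

T = 300.0
print(sigma_SB * T**4)   # ~460 W/m^2 from a blackbody near room temperature
```

The computed constant agrees with the quoted 5.670 × 10^–8 W/m²K⁴ to four figures.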


Remember that at normal temperatures the peak in the radiation curve is in the infrared
(IR). A strong absorber in the IR may appear brightly colored in the visible region due to electronic transitions. This discussion leads us directly to a critical environmental issue.

F. Global Warming
One of the most pressing issues of our time is dealing with the increasing temperature of the earth due to so-called “greenhouse gases.” Politics aside, we now have the tools to see for ourselves what the basic scientific issues are. We begin by calculating the temperature of the earth using the Planck Radiation Law and a few simple assumptions.
We only need a few well-known numbers: the radii of the earth RE and sun RS, the distance R between earth and sun, and the surface temperature of the sun, TS.
Because radiative power per unit area (i.e., flux) from a point source falls off as 1/R2, the sun’s radiative flux at the earth’s surface is (RS/R)2 times the immense flux at the sun’s surface. A calculation of the earth’s surface temperature is quite straightforward:

Surface Temperature of the Earth from the Radiation Law

[Figure: the sun (radius Rs = 7 × 10^8 m, surface temperature Ts = 5800 K) and the earth (surface temperature TE), separated by the distance R = 1.5 × 10^11 m.]

JS = sun's flux at its surface = σSB Ts^4
JR = sun's flux at the earth = σSB Ts^4 × (Rs/R)²
JE = earth's flux at its surface = σSB TE^4

The earth absorbs sunlight over its disk (area πRE²) but radiates over its entire surface (area 4πRE²), so in steady state

JR × πRE² = JE × 4πRE² ,   i.e.,   JR = 4 JE .

Therefore,

σSB Ts^4 × (Rs/R)² = 4 σSB TE^4 ,   giving   TE = (Rs/2R)^(1/2) Ts = 280 K .

From these simple assumptions, we predict the steady-state surface temperature of the earth to be 280 K, or about 45°F. In fact, the average surface temperature of the earth is about 290 K—a comfortable 60°F for life as we know it. Our calculation has left out two important factors:
a) A significant fraction (about 30%) of the radiation from the sun is reflected (or scattered) from the earth's atmosphere. Because T ∝ (flux)^(1/4), this reduction in the sun's flux at the earth's surface causes about an 8% reduction in TE from what we just calculated. The corrected TE is about 250 K, or 0°F. That would not sustain life as we know it.
b) Fortunately, counteracting this effect, the earth’s atmosphere also acts as a sort of blanket, reflecting back some of the radiation that the earth emits. This is known as the “greenhouse effect,” after the clear-glass buildings that keep plants warm even in the winter. A greenhouse works because the frequencies f (and thus photon energies hf) of the radiation from the sun and the earth are quite different:

[Figure: the Planck Radiation Law (not to scale), u(f) versus photon energy hf, for the earth and the sun; the curves peak at 2.8 kTE and 2.8 kTs, respectively.]

Photons from the sun have frequencies (or wavelengths, λ = c/f) mainly in the visible and ultraviolet (UV) range of the electromagnetic spectrum, whereas photons from the earth have frequencies mainly in the infrared (IR). Much of the sun's light is transmitted through the glass roof of a greenhouse, warming the plants and earth inside, but the glass is not so transparent to the infrared radiation emitted from within. Like the greenhouse roof, our atmosphere reflects a portion of the infrared radiation from the earth's surface.
The “greenhouse gases” that are most effective in reflecting infrared radiation back to the earth’s surface are H2O and CO2. As we learned in Chapter 3(C) and 8(E), CO2 has low-frequency vibrations that contribute to its heat capacity even at room temperature.
These torsion, or bending, vibrations (see below) have normal mode frequencies in the infrared, almost exactly at the peak in the earth’s radiation spectrum. The higher the concentration of CO2 in the atmosphere, the larger the backflow of thermal energy to the earth.

[Figure: a CO2 molecule absorbing an infrared photon and climbing its ladder of vibrational levels En = nε.]

The importance of the greenhouse effect on a planet’s climate can be realized by comparing to our nearest neighbors: Mars has a thin atmosphere with few greenhouse gases, and its temperatures range from 70°F in the day to -130°F at night! Venus has lots of
CO2, and its atmosphere averages 800°F! These facts and the following are drawn from a chapter on Global Warming in Professor Segre’s book, cited in the Preface.
As we all know, our planet’s animals take in O2 and generate CO2, and plants turn H2O
+ CO2 into O2. About 80% of the generated CO2 stays in the atmosphere and 20% is absorbed by the ocean. Seashells are CaCO3. An alarming rise of the CO2 concentration in our atmosphere, however, is caused by human activities such as fossil fuel usage.
Studies of ice cores show that CO2 is now about 360 parts per million (ppm) of the atmosphere, compared to 227 ppm in 1750, before the industrial revolution. Scientific studies suggest that a doubling of the CO2 concentration in the atmosphere—possibly occurring before the end of this century—would lead to about a 10°F increase in the earth's surface temperature. The changes in the earth's ecosystem would be immense.
The North Pole is ice floating on water, and melting floating ice does not raise the sea level. Ice melted in the Antarctic (a land base), however, adds to the oceanic water. With melting of the West Antarctic Ice Sheet (WAIS), oceans would rise 15 feet, leading to the disappearance of most ports. For example, LA and much of Florida would be gone.
At the time of this writing, the most comprehensive scientific study on global warming has been conducted by the Intergovernmental Panel on Climate Change. IPCC, 2007:
Summary for Policymakers deals with the physical science issues and can be accessed on the Web. The 2006 movie An Inconvenient Truth by Nobel Laureate Al Gore provides important, graphic details about global warming and should not be missed. In April
2010, the U.S. Environmental Protection Agency (EPA) published an excellent report on climate change at http://epa.gov/climatechange/index.html.


Exercises
1) As discussed in this chapter, the energy distribution for an ideal gas in a box is given by P(E) = C E^(1/2) e^(–E/kT), as illustrated below. It is also possible to confine atoms by creative use of electromagnetic fields. The harmonic oscillator potential, U(r) = ½κr², sketched below acts like a spring to confine the atoms. Such "atom traps" are used by physicists to cool atoms to nanokelvin temperatures and observe their unusual quantum behavior.
[Figure: the box potential with its distribution P(E), and the harmonic oscillator potential U(r) = ½κr² with its distribution P(E).]

a) The energy distribution for a collection of harmonic oscillators (or atoms in a H.O. well) at temperature T is given by P(E) = C E² e^(–E/kT). Using this function and an integral given in Appendix 4, compute the average energy <E> = ∫E P(E) dE of a particle trapped in the harmonic oscillator potential.

b) Show that this result is consistent with the Equipartition Theorem for a particle trapped in the harmonic oscillator potential U(r) = ½κr² = ½κ(x² + y² + z²). Don't forget to count the quadratic terms in the kinetic energy of the particle.

2) a) Calculate the power radiated by a 1-kg sphere of aluminum at 20°C due to electromagnetic radiation. The density of aluminum is 2.7 g/cm³. [Hint: Compute the surface area of the sphere and use the radiation formula.]

b) How rapidly do you think this sphere would cool if it were in outer space? Calculate how much time it would take for the Al sphere to cool from room temperature to freezing (ΔT ≈ 20 K).

CHAPTER 10

Work and Free Energy

A. Heat Flow and Entropy
Given the statistical interpretation of entropy, we will now re-examine the thermodynamic processes of an ideal gas. Does entropy provide any insights into heat engines?
First recall the relationship between entropy and heat flow in a reversible process, ΔS = Q/T, which was derived at the end of Chapter 4(D) for the ideal gas. This relation between entropy and heat actually extends beyond the ideal gas, as we will now show. For a fixed particle number, the differential of S = S(U,V) is quite generally written

dS = (∂S/∂U)V dU + (∂S/∂V)U dV .

This is Eq. (2-10) for infinitesimal changes in S, U, and V. The derivative (∂S/∂U)V equals 1/T by the definition of temperature. The derivative (∂S/∂V)U can be determined by considering a gas of energy U and volume V held by a piston at constant pressure p. For constant p, the First Law gives dU = –pdV when dQ = 0. The equilibrium condition (maximum entropy) is

dS = –(1/T) p dV + (∂S/∂V)U dV = [–p/T + (∂S/∂V)U] dV = 0,


yielding (∂S/∂V)U = p/T. Quite generally, the change of entropy with volume at constant energy is p/T. We now have a general relationship between the differentials of the three state functions U, S, and V:

TdS = dU + pdV ,

which is named the Thermodynamic Identity. Combining this result with the First Law of Thermodynamics, dU = dQ – pdV, we have

dS = dQ/T   for any reversible process.

We first saw this result in Chapter 4D. Remember that S = S(U,V) is a state function, so dS is a small change in that function. Q is a process energy, so dQ is the small amount of thermal energy added to the system, as discussed after Eq. (4-5). For larger changes at constant T, the equation becomes ΔS = Q/T.

B. Ideal Heat Engines
Recall the Carnot Cycle discussed in Chapter 4:

[Figure: the Carnot cycle on a p-V diagram; isothermal branches at Th (absorbing Qh) and Tc (expelling Qc) are connected by adiabats, spanning the volumes Vb, Va, Vc, Vd.]

The area enclosed by this closed cycle is simply the work done by the engine over one cycle:

Wby = ∮ p dV .    (10-1)

There is another useful way of representing this cycle: Plot the gas entropy vs. temperature, remembering that ΔS = Q/T on the isotherms and ΔS = 0 on the adiabats:

[Figure: the same Carnot cycle on an S-T diagram; a rectangle between entropies S1, S2 and temperatures Tc, Th. Heat Qh enters along the Th side, Qc leaves along the Tc side, and the enclosed area equals Wby.]

Note that Qc = TcΔS, Qh = ThΔS, and Qh = Qc + Wby from energy conservation. This graph shows clearly that the entropy extracted from the hot reservoir (process 2) equals the entropy expelled to the cold reservoir (process 4):

Qh/Th = Qc/Tc .    (10-2)

When heat is transferred in the Carnot engine, the gas and thermal reservoirs are ideally always at the same temperature, so the change in the total-system entropy ΔStot = –Qh/Th + Qc/Tc + ΔSgas is zero. (Qc and Qh are defined as positive, and ΔSgas = 0 for a closed cycle.)
By definition, the efficiency of any heat engine is

ε = Wby/Qh = (Qh – Qc)/Qh = 1 – Qc/Qh ,    (10-3)

which, using Eq. (10-2), leads to the efficiency of the Carnot engine,

ε = 1 – Tc/Th .    (Carnot efficiency)    (10-4)

The Second Law says that the entropy of any isolated system cannot decrease, i.e., ΔStot ≥ 0. A reversible process is one in which the entropy of the total system (engine plus reservoirs) does not change: ΔStot = 0. On an adiabatic curve (Q = 0), the total entropy of the system does not change because energy and volume contributions to ΔSgas cancel out: dS = dQ/T = (dU + pdV)/T = 0. For an isothermal branch with heat flow Q, the change in entropy of the gas is equal and opposite to the change in entropy of the reservoir because Tgas = Tres:

ΔSgas + ΔSres = Q/Tgas – Q/Tres = 0    (10-5)


corresponding to a vertical line on the S-T diagram of the gas. The heat flow must be small enough that a negligible temperature gradient is generated at the interface. This is the prime drawback of a Carnot engine: it cannot be cycled quickly and still keep its efficiency. You’ve probably noticed that “speed” and “quality” are inversely related, even in life!
All real engines have ε < εCarnot because ΔStot > 0 for any real engine. (The proof is simple and is left as an Exercise for you.) Three common sources of unwanted entropy generation are 1) friction, which represents the excitation of vibrational degrees of freedom in the parts of the engine, 2) the flow of heat between two parts of the system that are not at the same temperature, and 3) a process not carried out in a quasi-static fashion. As an example of the latter, consider a piston that is allowed to move very rapidly, so that the pressure is not uniform in the gas. An extreme case is "free expansion":

Imagine that the piston is massless, so it takes no work to move it. If the volume changes instantly by a factor of 2, the gas would eventually fill the volume without doing work (ΔU = 0). For free expansion the gas temperature is unchanged and the entropy changes by

ΔS = Nk ln(V2/V1) = Nk ln(2) = 0.69 Nk.    (10-6)

Due to these types of irreversible processes (see also Chapters 2(B), 6(F)), the efficiencies of all real engines are less than that of the Carnot engine.
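The entropy bookkeeping of Eqs. (10-2) through (10-4) can be illustrated with hypothetical reservoir temperatures:

```python
def carnot_efficiency(Th, Tc):
    """Eq. (10-4): eps = 1 - Tc/Th."""
    return 1.0 - Tc / Th

Th, Tc = 600.0, 300.0   # hypothetical reservoir temperatures, K
eps = carnot_efficiency(Th, Tc)
print(eps)   # 0.5

# Entropy bookkeeping over one cycle, Eq. (10-2): Qh/Th = Qc/Tc
Qh = 1000.0        # heat drawn from the hot reservoir, J (hypothetical)
Wby = eps * Qh     # work output, Eq. (10-3)
Qc = Qh - Wby      # heat expelled, from energy conservation
print(Qh / Th, Qc / Tc)   # equal: the total entropy change is zero
```

For these numbers the entropy drawn from the hot reservoir exactly matches the entropy dumped into the cold one, which is the signature of a reversible engine.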

C. Free Energy and Available Work
Let's use the energy in a hot brick to drive a Carnot engine. The brick is initially at a temperature Th, and the environment is at temperature Tc. The Carnot efficiency of the engine will decrease as the brick cools, but initially the efficiency is given by:

ε = 1 – Tc/Th        (10-7)

[Figure: the engine takes heat Qh from the brick at Th, performs work Wby, and rejects heat Qc to the room at Tc.]

This means that for a small heat input dQh (defined positive) from the brick, the Carnot engine performs a small amount of work:

dWby = dQh(1 – Tc/Th)        (10-8)

Work and Free Energy Chapter 10

During this process the energy of the brick changes by dU = –dQh and its entropy changes by dS = –dQh/Th. Therefore the work done is:

dWby = –dU + TcdS        (10-9)

Notice that the temperature of the environment enters into the work. This equation applies to each step of the process. When the brick finally reaches Tc, the total work accomplished is:

Wby = –ΔU + TcΔS = –ΔF,   with F = U – TcS,        (10-10)

where F is the "Helmholtz free energy" of the brick, referenced to the fixed environmental temperature Tc. In this course, we simply use the term "free energy" to represent F. U, S and F are properties of the brick. In essence, for this reversible process, free energy is converted into work:

Wby = available work = –(change in F) = Fi – Ff.        (10-11)

To calculate the total work performed by the engine, one must calculate ΔU and ΔS. For solid matter at normal temperatures, the heat capacity C is relatively constant, implying ΔU = CΔT, but the entropy change depends on temperature, requiring us to perform an integral:

ΔS = ∫dQ/T = C ∫dT/T = C ln(Tf/Ti)        (10-12)

A numerical example is provided in the Exercises at the end of this chapter.
The essential physics of Equation (10-10) is that even for the most efficient engines, some heat is lost. An energy –ΔU is supplied by the energy source, but an amount ΔQc = TcΔS is lost to the environment. The free energy F of the energy source (brick or gasoline) accounts for this loss: ΔF = ΔU – TcΔS. Got it?
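To make Eq. (10-10) concrete, here is a short sketch (not from the text; the function name is ours) that computes the available work for the hot brick of Exercise 3(a) at the end of this chapter:

```python
import math

def available_work(C, T_hot, T_env):
    """Carnot work from a brick of constant heat capacity C (J/K)
    cooling from T_hot to T_env: W_by = -dU + T_env*dS, Eq. (10-10)."""
    dU = C * (T_env - T_hot)              # energy change of the brick
    dS = C * math.log(T_env / T_hot)      # entropy change of the brick
    return -dU + T_env * dS               # = F_i - F_f

# brick of Exercise 3(a): C = 1 kJ/K, cooling from 450 K to 300 K
W = available_work(C=1000.0, T_hot=450.0, T_env=300.0)
print(round(W))  # about 28 kJ of work, out of the 150 kJ of heat released
```

Note that most of the 150 kJ of heat leaves as TcΔS; only the free-energy decrease is available as work.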

Second Law in action.

D. Free Energy Minimum in Equilibrium
What if the brick were initially colder than the environment? Can we extract positive work from this system? Equation (10-8) tells us that the differential work is dWby = (dQh/Th)(Th – Tc), where Th is now the environmental temperature and Tc is the (variable) temperature of the brick. As in the hot-brick case, dQh and (Th – Tc) are positive, providing positive work.
[Figure: the engine runs between the environment at Th = 300 K and the cold brick at Tc; heat Qh flows in from the environment, work Wby is extracted, and heat Qc flows into the brick.]


In general, all it takes is two systems with different temperatures to produce positive work. In the cold brick case, the energy to do work is extracted from the environment.
Indeed, there are legitimate proposals to produce work (e.g., electrical energy) by running engines between our environmental temperatures and colder temperatures deep in the sea; however, the idea is not economically feasible at the present time.
How is the work done by the engine related to the free energy change of the cold brick?
Here we notice that ΔU = Qc and ΔS = Qc/Tc = Qh/Th for the brick; therefore,

Wby = Qh – Qc        (closed cycle)
    = ThΔS – ΔU      (Carnot engine)
    = –ΔF = Fi – Ff  (free energy decrease)

Notice that in both hot-brick and cold-brick cases, free energy of the brick is converted into work; that is, Wby = Fi – Ff . This means that, even though its temperature is rising, the free energy of the cold brick decreases as it approaches equilibrium. We can graphically display this fact with a diagram:
[Figure: the free energy F of the brick plotted versus its initial temperature Ti, from 200 K to 400 K; F has its minimum value Ff at Ti = Tenv = 300 K, and is larger (Fi) for bricks either colder or hotter than the environment.]

This analysis shows that the free energy of the brick is a minimum when it is in equilibrium with its environment at Tenv. In general,

F = U – TenvS.

E. Principle of Minimum Free Energy
F also plays an important role in determining the equilibrium conditions of systems in which work is not involved, and in which the system is always in thermal equilibrium with a reservoir at temperature T (= Tenv in the above equation). Consider a small system with initial energy U = 0 brought into contact with a large thermal reservoir. The small system gains entropy S(U) by extracting energy U from the reservoir. The temperature of the reservoir does not change significantly because U << UR; however, the reservoir does lose entropy when it supplies energy to the small system. The equilibrium value
of U occurs when the total entropy of the system Stot is a maximum. When the small system has extracted energy U, the entropy of the reservoir is SR = SO – (dSR/dUR)U = SO – U/T, where SO is the entropy of the reservoir when U = 0. You've seen this result before: the heat flow Q from the reservoir equals U, so its entropy change is –Q/T.
The total entropy is the sum of the entropies of the reservoir and the small system:

Stot = SR + S ≈ SO – U/T + S

Stot = SO – (U – TS)/T

Stot = SO – F/T

The last equation shows that maximizing the total entropy with respect to an internal variable is equivalent to minimizing the free energy of the small system:

ΔStot = –ΔF/T.
The following pictures illustrate this idea:

[Figure: the total entropy Stot = S – U/T + const. and the free energy F = U – TS, each plotted against an internal variable; the equilibrium value of the variable maximizes Stot and, equivalently, minimizes F.]

The reason that we shift our attention from the total entropy to the free energy is that the free energy is determined by the properties of the system of interest, with the single parameter T reflecting the influence of the thermal reservoir.

F. Equipartition of Energy
Remember the Equipartition Theorem? We postulated that each of the quadratic terms in the system's energy has a thermal-average value given by

⟨energy per quadratic term⟩ = ½ kT,

where k is the Boltzmann constant, 1.381 × 10⁻²³ J/K, and T is the absolute temperature in kelvin. This theorem was stated in Chapter 2 without proof.

We can now justify this statement using our knowledge of statistical mechanics. In equilibrium, a system in contact with a thermal reservoir has a minimum free energy.
Consider a small system in contact with a reservoir at temperature T:

[Figure: a small system with energy U in contact with a thermal reservoir at temperature T.]

From energy conservation alone, U could be any value between zero and the total system energy. Basically, we want to calculate the average amount of energy U that the reservoir is willing to give up to the small system. To answer this, we simply minimize the free energy

F(U) = U – T S(U)        (10-13)

with respect to U. Statistical mechanics has provided us with S(U) = k lnΩ(U) for a variety of systems. In general the number of accessible states Ω(U) increases exponentially with the number of particles.
In Chapter 7(B) we found that Ω(U) = CU^N for N one-dimensional oscillators and Ω(U) = CU^3N for N three-dimensional oscillators, assuming systems with many particles (N >> 1). In Chapter 7(E) we showed that Ω(U) = CU^(3N/2) for an ideal monatomic gas, with the same approximation. For these and other many-particle systems, the state functions look something like this:

[Figure: the curves U and –TS plotted versus U, together with their sum F(U); F has a minimum where dF/dU = 0.]

With Ω(U) = CU^N, the free energy of the one-dimensional (1D) oscillator system is

F(U) = U – kT (N lnU + lnC),        (10-14)


so the derivative with respect to the total energy U of the oscillators is

dF/dU = 1 – NkT/U.        (10-15)

Setting dF/dU = 0 and solving for U leads to the equilibrium energy,

U = NkT,        (10-16)

which is the same result postulated by the Equipartition Theorem, assuming ½ kT of kinetic energy and ½ kT of potential energy per particle.
NkT is the average energy that the reservoir supplies to the N-oscillator system in equilibrium. Fascinating, isn't it, that our result from counting microstates, Ω(U) = CU^N, came from the assumption of equally spaced levels for the oscillator, a fact that is rooted in quantum mechanics.
You can easily apply this method to determine the average energy of N three-dimensional oscillators (the Einstein model of a solid) and the ideal monatomic gas. Here’s a table for your results (again assuming N >>1):

System          Ω(U)        S(U)       dF/dU        Average U
1D oscillator   U^N         Nk lnU     1 – NkT/U    NkT
3D oscillator   U^3N
Monatomic gas   U^(3N/2)

Compare your results to the Equipartition results, Eqns. (3-5) and (3-12).
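The minimization behind Eqs. (10-14)–(10-16) can also be checked numerically. This sketch (with arbitrary values of N and T, chosen by us) scans F(U) on a grid and locates the minimum:

```python
import math

N = 1000          # number of 1D oscillators (arbitrary)
k = 1.381e-23     # Boltzmann constant, J/K
T = 300.0         # reservoir temperature, K

def F(U):
    # F(U) = U - kT*N*lnU, Eq. (10-14); the constant lnC term only
    # shifts F vertically and does not move the minimum, so it is dropped
    return U - k * T * N * math.log(U)

# scan a fine grid of energies around NkT and pick the one minimizing F
grid = [N * k * T * (0.5 + 0.001 * i) for i in range(1001)]
U_eq = min(grid, key=F)

print(round(U_eq / (N * k * T), 3))  # 1.0: the equipartition result U = NkT
```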

G. Paramagnetism—the Free Energy Approach
The spin-½ system is a particularly good example of the interplay between energy and entropy in the principle of free energy minimum. Consider a collection of N spins with magnetic moment μ that are in contact with a thermal reservoir at temperature T.
In Chapter 5 we found that for large N the number of accessible microstates has the Gaussian form, with standard deviation σ = N^(1/2):

Ω(m) = C exp(–m²/2N),        (10-17)

where m = Nup – Ndown is the "spin excess." The corresponding entropy of the spin system with spin excess m is therefore

S(m) = k ln(Ω) = k [ln(C) – m²/2N],        (10-18)

which is simply an inverted parabola, drawn as follows:

[Figure: S(m), an inverted parabola with its maximum at m = 0, plotted over the range m = –N to m = +N.]
Notice that the point of maximum entropy corresponds to maximum disorder in spin orientation. The energy of the spin system in magnetic field B is plotted as follows:
[Figure: U(m) = –mμB, a straight line running from +NμB at m = –N down to –NμB at m = +N.]

In some sense, there is a competition between S and U: the number of accessible spin states is a maximum at m = 0, but the number of reservoir states is increased when it takes energy from the spins. As usual, the total entropy of spins-plus-reservoir is maximized in equilibrium. Equivalently, the free energy is minimized with respect to U (or m),

dF/dm = 0,        (10-19)

where

F(m) = U(m) – TS(m).        (10-20)

Graphically this effect is represented by combining the last two plots, which we do here for two different temperatures. Notice that the entropy parabola is plotted upside down and is multiplied by T:


[Figure: F(m) = U – T1S and U – T2S plotted versus m for two temperatures T1 < T2; each curve has a minimum marking the equilibrium value of m, which shifts toward m = 0 at the higher temperature.]
As the temperature of the reservoir is raised from T1 to T2, the minimum in free energy moves closer to m = 0, which corresponds to completely random spin orientation, or maximum disorder. Increasing T enhances the effect of the spin entropy S(m). The equilibrium value of m, and thus the total magnetic moment M = mμ, is determined by minimizing F(m).
By plugging U(m) and S(m) into Eq. (10-20) and setting dF/dm = 0, you can easily solve for the equilibrium value of m and obtain Curie's law for the total magnetic moment,

M = Nμ²B/kT ∝ B/T,        (10-21)
as derived in Chapter 8 by considering the average magnetic moment of a single spin in the high temperature limit. This example shows the underlying importance of free energy and the minimization principle.
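The graphical minimization above can also be carried out numerically. In this sketch the spin count and the dimensionless field strength μB/kT are made-up values chosen to lie in the high-temperature (Curie) regime:

```python
N = 10000            # number of spins (arbitrary, kept small for a cheap scan)
x = 0.01             # dimensionless field strength mu*B/kT (assumed small)

def F_over_kT(m):
    # F(m)/kT = U/kT - S/k = -m*(mu*B/kT) + m^2/(2N), Eqs. (10-17)-(10-20);
    # the constant k*lnC entropy term is dropped (it does not move the minimum)
    return -m * x + m * m / (2.0 * N)

# minimize over all integer spin excesses m = Nup - Ndown
m_eq = min(range(-N, N + 1), key=F_over_kT)

print(m_eq)  # 100, i.e. m_eq = N*(mu*B/kT), reproducing Curie's law (10-21)
```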


Exercises
1) Let's say that you have "invented" a heat engine with efficiency ε = (1 – Qc/Qh) larger than that of a Carnot engine, εc = (1 – Tc/Th). Before you rush off to the Patent Office, consider the total entropy change in a heat engine, ΔStot = –Qh/Th + Qc/Tc, and show that your engine would decrease the entropy of the universe (ΔStot < 0) and thus violate the Second Law of Thermodynamics. (The Patent Office doesn't have Einstein any more, but they do know about the Second Law.)

2) What is the amount of work that can be extracted from a 75 kg container of boiling water, initially at T = 393 K and allowed to cool to room temperature, 293 K, while driving a Carnot engine? Assume that the specific heat of water is 4184 J/kg K.

3) a) Using free energy, calculate the work done by a Carnot engine as a hot brick cools from 450 K to 300 K. The heat capacity of the brick is C = 1 kJ/K.
b) Calculate the work done by a Carnot engine as a cold brick warms from 150 K to 300 K.

4) In this chapter we introduced the link between free energy and available work using the cycle of a heat engine. Free energy can also be understood by considering just one step in the process: isothermal expansion. Consider a cylinder containing a monatomic ideal gas at constant T doing quasi-static work on an outside body:

[Figure: a cylinder of gas in contact with a thermal reservoir; the piston pushes on an outside body with force = p × area.]

With U = (3/2)NkT and S = Nk ln(nQV/N), show that the work done by the gas in expanding from Vi to Vf equals its reduction in free energy, –ΔF = Fi – Ff:

ΔF = ΔU – TΔS =

Wby = ∫ p dV =   (integrated from Vi to Vf)

The free energy F(V,T) can be thought of as the work required to compress the gas (at constant T) from infinite volume to volume V.
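A quick numerical check of this identity; the particle number, temperature, and volumes below are arbitrary choices, and the variable names are ours:

```python
import math

N  = 6.022e23        # one mole of particles (arbitrary choice)
k  = 1.381e-23       # Boltzmann constant, J/K
T  = 300.0           # temperature, K
Vi, Vf = 1.0, 2.0    # initial and final volumes, m^3 (arbitrary)

# isothermal quasi-static work: W_by = integral of p dV = NkT ln(Vf/Vi)
W_by = N * k * T * math.log(Vf / Vi)

# free energy change: dU = 0 at constant T for an ideal gas, and the
# Sackur-Tetrode entropy S = Nk ln(nQ V/N) + const gives dS = Nk ln(Vf/Vi)
dF = 0.0 - T * (N * k * math.log(Vf / Vi))

print(W_by == -dF)  # True: the work done equals the free energy decrease
```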

CHAPTER 11

Equilibrium between Particles I

A. Free Energy and Chemical Potential
In the present chapter, we consider the equilibrium between systems that can exchange particles. A wide variety of important problems involve particle exchange between systems at temperature T, for example, ionization of atoms, dissociation of molecules, chemical reactions, carriers in semiconductors and metals, and, more broadly, the science and engineering of materials.
We will now introduce the general methods for solving these great problems. As usual, an isolated system is in equilibrium when its total entropy Stot is a maximum. If the isolated system consists of a small system in thermal equilibrium with a reservoir at temperature T, then equilibrium is determined by a minimum in the free energy F of the small system. This point is summarized in the following equations:
Stot = SO – F/T   or   ΔStot = –ΔF/T        (11-1)

where SO (a constant) is the entropy that the reservoir would have if it had all of the energy (highly unlikely).
If two systems in equilibrium with a reservoir at temperature T are allowed to exchange particles, the equilibrium condition is found by setting the derivative of F = F1 + F2 with respect to N1 equal to zero:

dF/dN1 = dF1/dN1 – dF2/dN2 = 0        (dN2 = –dN1)        (11-2)


Pictorially, we have:

[Figure: two subsystems with volumes V1 and V2 exchange particles; the total free energy F = F1 + F2, plotted versus N1, reaches its minimum (dF/dN1 = 0) at the equilibrium value of N1.]

The derivative of free energy with respect to particle number is so important in determining an equilibrium condition that we define a special name and symbol for it:

μi ≡ dFi/dNi = chemical potential of subsystem "i"        (11-3)

For two subsystems exchanging particles (one for one), Equation (11-2) shows that the condition for "chemical equilibrium" is:

μ1 = μ2        (11-4)

The chemical potential of a system equals the change in free energy when one particle is added to the system at constant volume. If the two systems are in equilibrium, exchanging a particle doesn't change the total free energy; i.e., μ1 = μ2. In summary, equilibrium for this simple system is determined by:

Maximum total entropy Stot → Minimum free energy F → Equal chemical potentials μ        (11-5)

Note the similarities to what we have already learned:

Energy exchange   ↔ T1 = T2   ↔ thermal equilibrium
Volume exchange   ↔ p1 = p2   ↔ mechanical equilibrium
Particle exchange ↔ μ1 = μ2   ↔ chemical equilibrium        (11-6)

In all of the systems that we wish to study, at least one of the subsystems is an ideal gas. Therefore, we must now "bite the bullet" and determine the absolute entropy of an ideal gas, so that we can determine its free energy and chemical potential. The final result is μ = kT ln(n/nQ) with n = N/V and nQ given in Eq. (11-11). Applications of this result begin with Section D.


Equilibrium between Particles I Chapter 11

B. Absolute Entropy of an Ideal Gas
In Chapter 7 we derived the form of S(N,V,T) for an ideal gas in contact with a thermal reservoir under the assumption that the particles behave “classically.” In other words, we assumed that a particle can have any position and momentum, to an arbitrary accuracy. Nature on the atomic scale is not like that. Position and momentum are linked.
In the early part of the 20th century, scientists discovered that microscopic particles such as electrons and nuclei behave more like waves than billiard balls. One of the statements of this fact is contained in the Heisenberg Uncertainty Principle. It says that a particle’s position and momentum cannot be perfectly defined at the same time. It was found that the uncertainties in momentum and position are limited by the relation,
δpx δx > ℏ,        (11-7)

where ℏ = h/2π with h = 6.63 × 10⁻³⁴ J·s (Planck's constant). The reason for this uncertainty in momentum and position is that the particles on the atomic scale behave like delocalized waves. The uncertainty principle was an indication of the surprising fact that a particle has a wavelength that is inversely proportional to its momentum:
λ = h/p,        (11-8)

known as the "de Broglie wavelength" of the particle. (Notice the similarity between this equation and the Uncertainty Principle, if p is replaced by δp and λ/2π by δx.)
In a course on quantum mechanics you will see that the square of the wave amplitude is a measure of the probability (per unit volume) of finding the particle at a particular position in space. A particle is viewed as a "wave packet" with poorly defined momentum and position, as the Uncertainty Principle predicts:

[Figure: a wave packet of spatial extent δx; its momentum spread is δpx ≈ ℏ/δx.]
Armed with this knowledge, we can understand that there is a minimum volume into which a particle can be confined, depending on its momentum. The minimum cell size is roughly λ³. This is roughly the cell size δV used for counting microstates in statistical mechanics.
Recall that if a box of volume V is divided up into cells of volume δV, the total number of cells is M = V/δV. The number of spatial microstates (neglecting momentum) for one particle is M, and the number of spatial microstates for N identical particles is:
Ω = M^N / N!


It is very useful at this stage to employ two "number densities":

Density of cells: nc = 1/δV
Density of particles: n = N/V

(not to be confused with the n used elsewhere for the number of moles of gas). With these definitions and the Stirling approximation, ln N! = N lnN – N, valid for large numbers, you can easily find the entropy associated with the positions of the particles,

S = k lnΩ = Nk [ln(nc/n) + 1].

This analysis has two deficiencies: 1) it doesn't account for momentum states, and 2) it uses an arbitrary cell density nc = 1/δV. These two deficiencies can be roughly rectified by assuming that the thermal average energy of a particle is p²/2m = (3/2)kT and taking a cell density equal to 1/λ³ with λ = h/p. These mixed classical/quantum considerations yield a "cell density":

nc ≈ 1/λ³ = (p/h)³ = (3mkT/h²)^(3/2).
This analysis gives us a good idea of the essential physics, and it turns out to be accurate to within a factor of about 3. As derived in more advanced texts, the precise entropy of an ideal monatomic gas turns out to be:
S = Nk [ln(nQ/n) + 5/2],        (11-9a)

where

nQ = (mkT/2πℏ²)^(3/2)        (11-9b)

is known as the "quantum density," which we identify as the density of quantum cells, i.e., the number of thermally accessible cells per unit volume. This equation for the entropy of an ideal gas is known as the Sackur-Tetrode (ST) equation. Considering the V and T dependences (remember, n = N/V), we recover our classical result:

S = Nk ln(V) + (3/2) Nk ln(T) + function(N).        (11-10)

The term Nk ln(nQ/n) in the ST equation, plus S = k lnΩ, suggests that the number of accessible states is roughly (nQ/n)^N, where nQ/n is the total number of quantum cells per particle. When n << nQ, the average distance between particles is much greater than λ and the particles behave more like billiard balls. In summary,

n/nQ << 1   classical regime (low density)
n/nQ ≥ 1    quantum regime (high density).


In the classical regime the probability of any particular cell (or “state”) being occupied is much less than one. In the quantum regime, each state is likely to be occupied by more than one particle, causing interference between wave-like particles. In Appendix 8 the absolute entropy for some common gases is calculated and compared to experimental measurements. The agreement is found to be excellent for monatomic gases. Let’s now consider a numerical example.
For a monatomic gas of Ar at room temperature and pressure (T = 300 K and p = 1 atm):

m = 40 g / (6.022 × 10²³) = 6.64 × 10⁻²⁶ kg
ℏ = h/2π = 1.055 × 10⁻³⁴ J·s
k = 1.381 × 10⁻²³ J/K
T = 300 K

yielding nQ = 2.47 × 10³² meter⁻³. The ideal gas law gives a density of

n = N/V = p/kT = 10⁵/[(1.381 × 10⁻²³)(300)] = 2.45 × 10²⁵ meter⁻³.

Therefore, we find the ratio nQ/n = 1.02 × 10⁷ for Ar gas at room temperature and pressure. The particle density is about one ten-millionth of the quantum density at this temperature. Because n << nQ, we do not have to worry about the wavelike properties of the Ar atoms. The entropy for one mole of Ar gas at 300 K and 1 atm is:

S = NAk [ln(1.02 × 10⁷) + 2.5] = R(16.1 + 2.5) = 18.6 R = 155 J/mol-K

(R = 8.314 J/mol-K).
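These Ar numbers are easy to reproduce; here is a sketch in SI units, using the constants as quoted in the text:

```python
import math

k    = 1.381e-23     # Boltzmann constant, J/K
hbar = 1.055e-34     # reduced Planck constant, J s
NA   = 6.022e23      # Avogadro's number
R    = 8.314         # gas constant, J/mol-K

T = 300.0                      # temperature, K
p = 1.013e5                    # 1 atm in Pa
m = 0.040 / NA                 # mass of one Ar atom, kg

nQ = (m * k * T / (2 * math.pi * hbar**2)) ** 1.5   # quantum density, Eq. (11-9b)
n  = p / (k * T)                                    # particle density, ideal gas law

# Sackur-Tetrode molar entropy, Eq. (11-9a) with N = Avogadro's number
S = R * (math.log(nQ / n) + 2.5)

print(round(S))   # 155 J/mol-K, matching the value in the text
```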
To facilitate calculations with other gas atoms, we can combine constants in Eq. (11-9b) to get:

nQ = 9.88 × 10²⁹ meter⁻³ [(m/mp)(T/300 K)]^(3/2),        (11-11a)

where mp is the mass of a proton, which is essentially the mass of a hydrogen atom, mH. To about 1% accuracy, you may use:

nQ = 10³⁰ meter⁻³ [(m/mH)(T/300 K)]^(3/2),        (11-11b)

which can be remembered because it corresponds to a de Broglie wavelength of about 1 Å for H atoms at T = 300 K (1 Å = 10⁻¹⁰ m). The ratio m/mH is the atomic (or molecular) weight of the particle: m/mH = 40 for Ar and 28 for N2.

Also useful is a formula for the particle density (from the ideal gas law):

n = p/kT = 2.45 × 10²⁵ meter⁻³ (p/1 atm)(300 K/T),        (11-12)

which you can easily verify using 1 atm = 1.013 × 10⁵ Pa and k = 1.381 × 10⁻²³ J/K.

C. Chemical Potential of an Ideal Gas
In Chapters 11 through 13, we will apply the Principle of Minimum Free Energy to a variety of practical problems involving the equilibrium between two or more subsystems.
In each case, at least one of the subsystems is an ideal monatomic gas, whose entropy is:
S = Nk [ln(nQ/n) + 5/2] = Nk [ln(nQV/N) + 5/2],        (11-13)

With U = (3/2)NkT, the free energy F = U – TS of the monatomic gas is

F = (3/2)NkT – NkT [ln(nQV/N) + 5/2],        (11-14)

which simplifies to the compact form:

F = NkT [ln(N/nQV) – 1].        (11-15)

This equation gives the free energy in terms of its natural variables (N, V, T).
The chemical potential of an ideal monatomic gas is obtained by taking the derivative of F = kT(N lnN – N ln(nQV) – N) with respect to N, μ = (∂F/∂N)V,T:

μ = kT ln(n/nQ),        (11-16)

where we have substituted n = N/V as the density of particles. Notice that for the pictorial example in Section A, μ1 = μ2 implies n1 = n2, which is the obvious equilibrium condition: equal densities (and pressures pi = nikT).
For one mole of Ar gas at p = 1 atm and T = 300 K, we have kT = 0.026 eV, nQ = 10³⁰ × (40)^(3/2) m⁻³ and n = 2.45 × 10²⁵ m⁻³, yielding a chemical potential

μ = (0.026 eV) ln(9.8 × 10⁻⁸) = –0.42 eV,

which is plotted as the point in the following graph of μ(T):


[Figure: μ(T) for a gas held at p = 1 atm; μ is negative and decreases with increasing T, passing through –0.42 eV at 300 K.]

The curve in the graph above shows how μ varies as the temperature is changed while holding the pressure constant at 1 atm. Specifically, μ(T) = kT ln(p/nQkT) with p = 1 atm.
Remember that the chemical potential is the change in free energy when one particle is added to the system at constant volume. If two subsystems with the same μ exchange a particle, F remains unchanged (a minimum), implying that the two subsystems are in equilibrium.
Notice that μ is negative for the classical regime, n/nQ << 1. The reason for negative μ is that the TS term dominates F = U – TS, and S increases when a particle is added to the system. In the above example, when one particle is added to the gas, the average energy increases by (3/2)kT = +0.039 eV, but the free energy changes by μ = –0.4 eV. Obviously for dilute gases at ordinary temperatures, the entropy term (–TS) dominates F. At very low temperatures this classical model of a gas fails, because nQ approaches n.
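Both the Ar value above and the He value requested in Exercise 1(a) of this chapter follow from Eq. (11-16); this sketch uses the rule-of-thumb quantum density of Eq. (11-11b):

```python
import math

k_eV = 8.617e-5                  # Boltzmann constant, eV/K
T = 300.0
n = 2.45e25                      # particle density at 1 atm and 300 K, Eq. (11-12)

def nQ(mass_over_mH, T):
    # quantum density, Eq. (11-11b), accurate to about 1%
    return 1e30 * (mass_over_mH * T / 300.0) ** 1.5

def mu(mass_over_mH, T):
    # chemical potential of an ideal monatomic gas, Eq. (11-16)
    return k_eV * T * math.log(n / nQ(mass_over_mH, T))

print(round(mu(40, T), 2))   # -0.42 eV for Ar, as in the text
print(round(mu(4, T), 2))    # -0.33 eV for He, Exercise 1(a)
```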
We will now apply these ideas to some practical problems. Unless stated otherwise, we assume ideal monatomic gases. The extension to diatomic gases is discussed in Appendix 9: for the diatomic gas, nQ in the chemical potential must be replaced by nQZint, where Zint represents the number of states associated with internal motions of a molecule. We will ignore this complication in this course.

D. Law of Atmospheres
If you have ever visited mile-high Denver, you know that the air is a bit thinner than in central Illinois. It is said that baseballs fly farther in Denver Stadium. We all know that atmospheric pressure decreases as we go up in altitude, and now you are in a position to calculate this effect. As a simple representation of this problem, we connect a tube between two equal volumes of monatomic ideal gas, one on the ground and the other at an altitude h. Gas can flow freely through the tube, but the total gas in the tube is far less than that contained in the two volumes. The problem is to determine the relative pressures of the gases in the two boxes. We assume that the temperature is the same at both elevations. Chemical equilibrium implies μ1 = μ2.

[Figure: two equal boxes of gas connected by a tube; the lower box holds N1 particles at the ground and the upper box holds N2 particles at altitude h.]

In this case, the molecules in the upper box each have a potential energy of mgh; therefore,

U1 = (3/2)N1kT   and   U2 = (3/2)N2kT + N2(mgh).

The mgh term carries through to the free energy, and to the chemical potential, μ2 = dF2/dN2 (Eq. (11-16)):

μ1 = kT ln(n1/nQ)        μ2 = kT ln(n2/nQ) + mgh        (11-17)

Setting μ1 = μ2 immediately yields kT ln(n2/n1) = –mgh (notice that the nQ's cancel), implying n2/n1 = e^(–mgh/kT). The ideal gas law, p = (N/V)kT = nkT, tells us that p2/p1 = n2/n1, yielding the final result:

p2/p1 = e^(–h/h0)        (11-18)

where h0 = kT/mg is the characteristic height over which the atmospheric pressure decreases by a factor of 1/e from that at sea level. This formula is known as the Law of Atmospheres because it applies to other planets, too. For nitrogen gas (m = 28 × 1.674 × 10⁻²⁷ kg) at 293 K, you will find h0 = 8.8 km.
The pressure in Denver (at h = 1.6 km) is less than in central Illinois by a factor of exp(–1.6/8.8) = 0.83. Yep, a 17% reduction in gas density would mean a few more homers, unless they moved the fence back.
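The h0 and Denver numbers are easy to reproduce; in this sketch g = 9.8 m/s² is our assumption (the text does not quote a value):

```python
import math

k = 1.381e-23               # Boltzmann constant, J/K
g = 9.8                     # gravitational acceleration, m/s^2 (assumed)
m = 28 * 1.674e-27          # mass of an N2 molecule, kg (as in the text)
T = 293.0                   # temperature, K

h0 = k * T / (m * g)        # characteristic height, Eq. (11-18)
print(round(h0 / 1000, 1))  # 8.8 km

ratio = math.exp(-1.6e3 / h0)   # Denver at h = 1.6 km
print(round(ratio, 2))          # 0.83, a 17% reduction
```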

E. Physical Interpretations of Chemical Potential
There are several ways of viewing chemical potential. In one sense it is a real potential to do work. For an ideal gas (or atoms in solution) at a given temperature, the chemical potential is just kT times the logarithm of the density, μ = kT ln(n), minus a constant.
Consider the following situation:
[Figure: a piston separating a gas at high μ from a gas at low μ.]


A piston moves isothermally against an external force from volume V1 to V2. The work done per particle may be simply viewed as the change in chemical potential,

work per particle = –Δμ = –kT ln(n2/n1) = kT ln(V2/V1).

Not surprisingly, for N particles this is the isothermal work we derived in Chapter 4. It is just another way of looking at the problem.
It is just another way of looking at the problem.
In mechanics, we know that work is associated with a force applied over a distance. If a potential energy is involved, the force is the negative gradient of the potential energy; in one dimension, F = –d(PE)/dx. For an ideal gas there is no potential energy associated with a density gradient, yet there appears to be a net force pushing particles towards lower densities. We can view this situation in terms of an effective force (sometimes called a "diffusive force") equal to the negative gradient of the chemical potential. Imagine particles released from a wall at high density and diffusing to lower density. These particles could be moving in a solution or another type of gas. At one instant of time the distribution of particles might look like this:

[Figure: particles dense near a wall at the left (high μ), thinning out toward the right (low μ) along x.]

After a scattering event, each particle is assumed to scatter randomly from other particles, or elastically from the wall. At a given position x, the chemical potential of this gas has the form μ = kT ln(n), and the local drift motion appears to be driven by an effective force per particle given by

Feff = –dμ/dx = –(kT/n) dn/dx,

where we have used the chain rule for derivatives and d(ln(n))/dn = 1/n. In the example shown, the effective force per particle is in the positive x direction because the chemical potential is decreasing from left to right, implying dn/dx is negative. This relation between diffusive force and gradient in density can be verified by kinetic theory and is left as an Exercise for those interested.


Most importantly, gradients in chemical potential can do chemical work, which can generate electrical power. In the next chapter we will show the use of chemical potential in chemical reactions, such as:

2H2 + O2 ↔ 2H2O + energy
high μ ↔ low μ + energy

which is the basis for the hydrogen fuel cell, which generates a voltage V with the recombining of H2 and O2 gases:

[Figure: a hydrogen fuel cell; H2 and O2 enter through porous electrodes, H2O (steam) leaves, and a voltage V appears across the cell.]
By previously separating out the hydrogen and oxygen gases (at a cost in energy), large differentials in chemical potential are created, which are used to produce electrical work. The hydrogen fuel cell has been used as a power source in the space shuttle and is being developed as an automotive power source. Of course, the input work needed to separate the gases always exceeds the output work from the cell.


Exercises
1) a) Show that the chemical potential of helium gas at T = 300 K and p = 1 atm is μ = –0.33 eV.

b) Compute U, S, and F for 1 mole of He gas at 1 atm and 300 K.

        U          S          F
He

2) a) Using the ST equation and the heat capacity C ≡ ΔQ/ΔT = T dS/dT, show that the molar specific heats, c = C/n, for an ideal monatomic gas are cv = (3/2)R at constant volume and cp = (5/2)R at constant pressure.

b) Using Equation (A8-4) in Appendix 8, show that cv = (5/2)R and cp = (7/2)R for the ideal diatomic gas near 300 K, as discussed in Chapter 5.


3) What is the pressure on the top of Mt. Everest, at an altitude of 27,000 feet? Assume that the average temperature in the atmosphere is 270 K.

4) Using kinetic theory, prove that the effective force per particle in a density gradient is given by:

Feff = –(kT/n) dn/dx

You will find it useful to define a volume V = A·Δx and compute the net force on this layer due to the pressure p on the left and right. Use the ideal gas law, p = nkT. Here is a picture of the situation:

[Figure: a gas layer of thickness Δx along x, with pressure p(x) on its left face and p(x + Δx) on its right face.]

5) Derive the law of atmospheres assuming that for a stationary gas distribution the effective diffusive force Feff = -(kT/n) dn/dh at height h from the planet’s surface cancels out the force of gravity, mg.


CHAPTER 12

Equilibrium between Particles II

A. Ionization of Atoms
The next challenge for our F-minimum principle is the ionization of atoms. For the first time we are dealing with particles of different masses, so each will have its own quantum density. The simplest process we can imagine is atomic hydrogen ionizing into a proton and an electron:
H ↔ p+ + e–        (12-1)

The ionization energy is simply the binding energy of the hydrogen atom, Δ = 13.6 eV. Considering that the average translational kinetic energy of a gas molecule in this room is only (3/2)kT = (3/2)(0.025 eV) = 0.0375 eV, this ionization process is not going to occur often by random atomic collisions. Also, because hydrogen easily binds into H2 molecules at room temperature, we must imagine a high-temperature gas (say, at the sun's surface) in order to get a significant number of free hydrogen atoms. (We will consider dissociation of H2 in an Exercise at the end of this chapter.)
In this problem there are three components (H, p, e), so the simple equilibrium condition for two components (μ1 = μ2) must be generalized. Consider a volume V with NH, Np, Ne numbers of particles at temperature T. Equilibrium corresponds to a minimum in total free energy (like Eq. (2-10) for entropy),

ΔF = (∂F/∂NH) ΔNH + (∂F/∂Np) ΔNp + (∂F/∂Ne) ΔNe = 0,        (12-2)


where we have suppressed the subscripts (V, T, Nj, Nk) on the partial derivatives. From the general definition of the chemical potential, μi = (∂F/∂Ni), and the relation between number changes for the reaction, ΔNH : ΔNp : ΔNe = 1 : –1 : –1, we have:

μH – μp – μe = 0.        (12-3a)

This is a simple result to remember:
␮H = ␮p + ␮e for the reaction

H ↔ p + e.

(12-3b)

The chemical potential of the hydrogen atom incorporates the large electrostatic interaction between electron and proton producing the binding energy Δ = 13.6 eV. Ignoring the electrostatic interaction between well separated particles in this low-density neutral gas, the three chemical potentials are:

μH = kT ln(nH/nQH) − Δ

μp = kT ln(np/nQp)

μe = kT ln(ne/nQe)    (12-4)

Using the equilibrium condition (12-3), we immediately have the final result for the gas densities (ni = Ni/V):

nH/(np ne) = [nQH/(nQp nQe)] e^(Δ/kT) = K(T) ,    (12-5)

where K(T), defined by this equation, is known as the "equilibrium constant" because it does not depend on the particle densities. Of course, np = ne, so the equation can be simplified to the following form:

nH/np^2 = K(T) .    (12-6)

Multiplying by nH and inverting, the fraction of ionized atoms is found to be:

np/nH = 1/√(nH K(T)) .    (12-7)

This interesting result shows that the fraction of ionized hydrogen depends not only on the temperature but also on the density of hydrogen atoms. At a given temperature, ionization becomes more probable at lower hydrogen densities. This is why there are many ions in outer space, despite the low T.
A simple way of understanding this effect is that at low particle densities it is less likely that a free electron and a proton will collide and form a hydrogen atom. The effect is sometimes called “entropy ionization” because two particles (e + p) have more entropy

than one (H). You might have thought it was just the kinetic energy per particle that determined the fraction of ionized atoms. We see that entropy is equally important.
Equilibrium involves a balance between energy and entropy (F = U – TS), as seen in
Chap. 10(G).
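The ionization fraction of Eq. (12-7) can be checked numerically. Here is a minimal Python sketch built from the definition of the quantum density; the trial values nH = 1 × 10^23 m^-3 and T = 6000 K are the sun's-surface numbers used in Exercise 2 at the end of this chapter:

```python
import math

# Physical constants (SI units)
k    = 1.381e-23   # Boltzmann constant, J/K
hbar = 1.055e-34   # reduced Planck constant, J*s
me   = 9.109e-31   # electron mass, kg
mp   = 1.673e-27   # proton mass, kg
eV   = 1.602e-19   # joules per electron volt

def n_Q(m, T):
    """Quantum density (m^-3) of a particle of mass m at temperature T."""
    return (m * k * T / (2.0 * math.pi * hbar**2)) ** 1.5

def ionized_fraction(n_H, T, Delta=13.6 * eV):
    """np/nH from Eq. (12-7), with K(T) from Eq. (12-5).

    The H-atom mass is approximated by the proton mass, so nQH ~ nQp."""
    K = n_Q(mp, T) / (n_Q(mp, T) * n_Q(me, T)) * math.exp(Delta / (k * T))
    return 1.0 / math.sqrt(n_H * K)

# Sun's-surface values from Exercise 2: nH = 1e23 m^-3, T = 6000 K
frac = ionized_fraction(1e23, 6000.0)
print(f"np/nH = {frac:.1e}")   # a small fraction, of order 1e-4
```

Raising nH lowers the fraction as 1/√nH, the "entropy ionization" effect just described.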

B. Chemical Equilibrium in Gases
The equilibrium between chemical species is another important application of the Principle of Minimum Free Energy. As before, we are interested in the relative densities (or concentrations) of the different species in a container of volume V. We consider only equilibrium between gases, leaving the subject of phase transformations in liquids and solids to later discussion. A common example of a gas reaction—introduced in your first chemistry course—is the synthesis of ammonia from nitrogen and hydrogen:
N2 + 3H2 ↔ 2NH3 .    (12-8)

The energy of two NH3 molecules is lower by 2Δ ≈ 0.9 eV than the energy of one N2 molecule plus three H2 molecules. As Professor Zumdahl of UIUC points out in his book, Chemical Principles, ammonia synthesis is a key process in plant life and in the production of fertilizers. Plants require nitrogen to produce amino acids. It is much easier to remove nitrogen atoms from ammonia than from triply-bonded N2 in the air. Nature has sophisticated ways of breaking down N2 and making nitrogen available to plants (e.g., lightning and ammonia-producing bacteria). We won't need to invoke lightning or bacteria here because we are concerned with finding equilibrium concentrations, not reaction rates.
Let us simplify the notation by writing the chemical equation as:

aA + bB ↔ cC ,    (12-9)

where for the ammonia reaction a, b, c equal the coefficients 1, 3, 2 and A, B, C denote the molecular types. We label the particle numbers NA, NB, and NC and particle densities nA, nB, and nC. Conservation of atoms means that a change in one of the numbers determines the changes in both the other numbers: ΔNA : ΔNB : ΔNC :: a : b : −c. Following the procedure in the last section, we find:

aμA + bμB = cμC for the reaction aA + bB ↔ cC .    (12-10)

Assuming that each component acts as a nearly ideal gas,

a kT ln(nA/nQA) + b kT ln(nB/nQB) = c kT ln(nC/nQC) − cΔ .    (12-11)

With a little algebra, we find the basic relation between particle densities:

nC^c/(nA^a nB^b) = [nQC^c/(nQA^a nQB^b)] e^(cΔ/kT) = K(T) ,    (12-12)


where K(T) is the equilibrium constant, neglecting internal motions. In dealing with chemical reactions, it is common to use the "concentration" n = N/V in units of moles per volume (usually mol/L) rather than the "particle density" in particles per volume (usually particles/m^3). Multiplying 1 particle/m^3 by (1 m^3/1000 L) and (1 mol/6.022 × 10^23 particles), we get the useful conversion factor: 1 particle/m^3 = 1.661 × 10^-27 mol/L.
In the mole-per-liter units, Eqs. (11-11) and (11-12) become:

nQ = 1641 mol/L (m/mp)^(3/2) (T/300 K)^(3/2) ,    (12-13)

n = p/kT = 4.06 × 10^-2 mol/L (p/1 atm)(300 K/T) ,    (12-14)

where mp is the mass of a proton. In chemistry, concentration in mol/L is denoted by brackets, nN2 = [N2]. We use the notation nN2, regardless of units.
Using our practical formulas for quantum densities of monatomic gases, you can show that the quantum densities for these gases at T = 300 K are roughly:

gas    m/mp    nQ (per m^3)    nQ (mol/L)
H2      2      2.79 × 10^30    4.64 × 10^3
N2     28      1.46 × 10^32    2.43 × 10^5
NH3    17      6.93 × 10^31    1.15 × 10^5
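The table entries follow directly from Eq. (12-13); here is a quick Python sketch (treating each gas as a structureless particle, as the text does):

```python
import math

k, hbar, mp = 1.381e-23, 1.055e-34, 1.673e-27   # SI units
PER_M3_TO_MOL_PER_L = 1.661e-27                  # conversion factor from the text

def n_Q(m_over_mp, T=300.0):
    """Quantum density in m^-3 for a particle of mass m = m_over_mp * mp."""
    m = m_over_mp * mp
    return (m * k * T / (2.0 * math.pi * hbar**2)) ** 1.5

for gas, m_ratio in [("H2", 2), ("N2", 28), ("NH3", 17)]:
    nq = n_Q(m_ratio)
    print(f"{gas:4s} nQ = {nq:.2e} m^-3 = {nq * PER_M3_TO_MOL_PER_L:.2e} mol/L")
```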

Using these values, we find

nQNH3^2/(nQN2 nQH2^3) = 5.45 × 10^-7 L^2/mol^2 .    (12-15)

With 2Δ/kT = 0.9 eV/0.026 eV = 34.6, we evaluate the exponential factor:

e^(2Δ/kT) = 1.1 × 10^15 .    (12-16)

Multiplying these two factors, we find the equilibrium constant for the ammonia reaction:

K(300 K) = 6 × 10^8 L^2/mol^2  (neglecting internal motions)


Notice that small uncertainties in Δ and T can cause large differences in the predicted K(T), due to the exponential factor. Reducing the temperature from 300 K to 290 K (a 3% change) increases K(T) by a factor of about four! The internal motions of the molecules modify both the energy and the entropy of the gas. The effect of internal motions (rotation and vibration) on K is discussed in Appendix 10.
Let us use the equilibrium constant that we derived to estimate the equilibrium concentrations in a reaction. Start with a gas of NH3 at T = 300 K and p = 1 atm. From Eq. (12-14), one atmosphere of NH3 has the concentration nNH3 = 4.06 × 10^-2 mol/L. Hydrogen and nitrogen are formed in the ratio nH2 = 3nN2 from the chemical equation, so the concentration of N2 is estimated as follows,

nNH3^2/[(nN2)(3nN2)^3] = 6 × 10^8 L^2/mol^2 ,

(nN2)^4 = (4.06 × 10^-2 mol/L)^2/(27 × 6 × 10^8 L^2/mol^2) ,    (12-17)

yielding,

nN2 = 5.65 × 10^-4 mol/L .    (12-18)

Therefore, N2 and H2 occur at roughly 1.4% and 4.2% of the concentration of NH3 under equilibrium conditions. The initial pressure of NH3 is only slightly reduced at
300 K. Obviously, in the laboratory one should use the empirically measured equilibrium constants. More is said about this problem in Chapter 14 (optional reading). In that chapter, we consider cases where pressure, not volume, is held constant. That corresponds to systems in expandable containers (or under ambient pressure conditions) rather than fixed volumes.
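The chain of numbers in Eqs. (12-15) through (12-18) can be strung together in a short Python sketch; small differences from the text's values come from not rounding kT to 0.026 eV at 300 K:

```python
import math

k_eV = 8.617e-5   # Boltzmann constant in eV/K

# Quantum densities at 300 K in mol/L, from the table above
nQ_H2, nQ_N2, nQ_NH3 = 4.64e3, 2.43e5, 1.15e5

def K_ammonia(T):
    """Equilibrium constant of Eq. (12-12) for N2 + 3H2 <-> 2NH3,
    neglecting internal motions.  Each quantum density scales as
    T^(3/2) (Eq. 12-13), so the 300 K table values can be rescaled."""
    s = (T / 300.0) ** 1.5
    prefactor = (nQ_NH3 * s) ** 2 / ((nQ_N2 * s) * (nQ_H2 * s) ** 3)
    return prefactor * math.exp(0.9 / (k_eV * T))   # 2*Delta = 0.9 eV

K300  = K_ammonia(300.0)
n_NH3 = 4.06e-2                               # 1 atm of NH3 at 300 K, mol/L
n_N2  = (n_NH3**2 / (27.0 * K300)) ** 0.25    # Eq. (12-17)
print(f"K(300 K) = {K300:.1e} L^2/mol^2")     # same order as the text's 6e8
print(f"n_N2 = {n_N2:.2e} mol/L")             # close to Eq. (12-18)
print(f"K(290 K)/K(300 K) = {K_ammonia(290.0) / K300:.1f}")
```

The last line shows the temperature sensitivity discussed above: a 10 K drop multiplies K(T) several times over.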

C. Carrier Densities in a Semiconductor
We now consider a very different physical system with our general mathematical tools: mobile charge carriers in a semiconductor. A crystal such as silicon has bands of closely spaced energy levels, separated by an energy gap. At T = 0 the crystal is an insulator because a filled band cannot conduct current. Schematically we draw the energy bands:
[Figure: energy-band diagram. The conduction band (at T = 0, unpopulated with electrons) lies above the valence band (at T = 0, totally filled with electrons), separated by the energy gap Δ; an electron excited across the gap leaves a hole behind in the valence band.]


In a defect-free semiconductor, for every electron promoted to the conduction band, there remains a “hole” in the valence band. A hole acts like a particle, too, because a nearby electron in the valence band can jump into the hole, causing the hole to move.
It is really electrons that carry current, but holes are a way of keeping track of the net motion of electrons in the nearly filled valence band.
Considering our previous use of the Boltzmann factor to populate excited energy levels, you might expect that the number of electrons thermally excited from the valence band to the conduction band is simply Nconduction = Nvalence exp(−Δ/kT). As found in the cases of atomic ionization and chemical reactions, the result is not so simple. Free energy is involved. Incredible as it may seem, the distributions of states near the conduction-band minimum and near the valence-band maximum have essentially the same form as that of an ideal gas. Because the atomic cores are periodically arranged on a crystal lattice, an electron can actually travel across many lattice sites without scattering (hence, the term "nearly free electron"). The effect of the lattice is to modify the mass of the electron, i.e., its acceleration under an applied force. Here we use the symbols me and mh for the "effective masses" of electrons and holes.
Now we minimize the free energy of this 2-component gas (just like Eq. (12-2) with μi = (∂F/∂Ni)):

ΔF = μe ΔNe + μh ΔNh = 0 .    (12-19)

Because an electron and hole in the pure semiconductor are created at the same time, Ne equals Nh and ΔNe = ΔNh, leading to the equilibrium condition:

μe + μh = 0 .    (12-20)

Ignoring interactions at low densities, we treat the electrons and holes as two ideal gases. In terms of the densities ne = Ne/V and nh = Nh/V, the chemical potentials are:
μe = kT ln(ne/nQe) + Δ

μh = kT ln(nh/nQh)    (12-21)

with nQe = 2(me kT/2πħ^2)^(3/2) and nQh = 2(mh kT/2πħ^2)^(3/2) as the respective quantum densities. The prefactor 2 signifies that electrons and holes can have their spin either "up or down." Generally the effective masses of an electron and a hole are not equal. Setting μe = −μh, the equilibrium condition becomes:

ne nh = nQe nQh e^(−Δ/kT) ,    (12-22)


which simplifies with the definition nQ ≡ (nQe nQh)^(1/2) to:

ne nh = nQ^2 e^(−Δ/kT) .    (12-23)

For a defect-free semiconductor, the number of electrons in the conduction band equals the number of holes in the valence band; thus,

ne = nh = ni ,    (12-24)

where ni is known as the "intrinsic carrier density":

ni = nQ e^(−Δ/2kT) .    (12-25)

The carrier masses and band gaps are determined by various experimental methods.
Given the m’s and ⌬, one can calculate the density of free carriers at any temperature.
The quantum densities have roughly the same order of magnitude as the quantum density of an electron in free space. Using Eq. (11-11) with 300 K we have:

nQe = 2 × 10^30 m^-3 (me/mp)^(3/2) = 2.51 × 10^25 m^-3 ,    (12-26)

where mp/me = 1836 is used, and the factor of 2 is from the two spin states.
The following table gives the measured energy gap and quantum density (nQ = (nQe nQh)^(1/2)), and the corresponding electron (hole) density from Eq. (12-25), for three common semiconductors at T = 300 K:

material    Δ (eV)    nQ (m^-3)       ni (m^-3)
Si          1.14      1.72 × 10^25    5.2 × 10^15
Ge          0.67      7.21 × 10^24    ________
GaAs        1.43      2.63 × 10^24    3.0 × 10^12

I have left a blank for you to fill in. It is helpful to recognize that kT = 0.026 eV at
300 K. The intrinsic carrier densities vary rapidly with temperature. At room temperature the calculated densities of thermally excited carriers are much lower than the density of atoms in the crystal. For example, silicon has a density of 5 × 10^28 atoms/m^3. In effect, only 1 in 10^13 silicon atoms contributes an electron to the conduction band at 300 K.
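The ni column of the table is a direct application of Eq. (12-25). A Python sketch using the measured gaps and quantum densities (the Ge entry is left for you to fill in by hand); the results land close to the tabulated values, with small differences from rounding kT:

```python
import math

k_eV = 8.617e-5   # Boltzmann constant, eV/K

def n_i(n_Q, gap_eV, T=300.0):
    """Intrinsic carrier density, Eq. (12-25): ni = nQ exp(-Delta/2kT)."""
    return n_Q * math.exp(-gap_eV / (2.0 * k_eV * T))

# Gaps and quantum densities from the table above
print(f"Si:   ni = {n_i(1.72e25, 1.14):.1e} m^-3")   # table value: 5.2e15
print(f"GaAs: ni = {n_i(2.63e24, 1.43):.1e} m^-3")   # table value: 3.0e12
```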


D. Law of Mass Action: Doped Semiconductors
Most applications of semiconductors require them to conduct current. The conduction and valence bands contain excellent free-particle-like states in which the carriers behave like an ideal gas, but, as we have just seen, the intrinsic density of free carriers is extremely low. In order to raise the carrier density up to useful levels we must add just the right kind of impurity atoms. This process is called doping.
Silicon is a group-IV element; i.e., it has a valence of 4. If we replace one of the silicon atoms with an atom of valence 5, such as phosphorus (P), then it turns out that the extra valence electron is only weakly bound to the impurity atom. At room temperature, this "extra electron" is easily excited into the conduction band, leaving the phosphorus atom "ionized" with a net charge of +e.
Our previous analysis still holds:

ne nh = nQ^2 e^(−Δ/kT) ,    (12-27)

which, with the definition of the intrinsic carrier density ni, simplifies to:

ne nh = ni^2 .    (12-28)

This is called the Law of Mass Action for a semiconductor crystal. It says that if the electron density is increased by doping, the density of holes must decrease, because the intrinsic density ni is fixed by the temperature and the physical parameters of the semiconductor. The effect is striking: A common doping level for Si is about one atom in 10^4, or about 5 × 10^24 phosphorus atoms/m^3. At 300 K those extra phosphorus electrons go directly into the conduction band, causing the hole density to decrease to the value

nh = ni^2/ne = (5.2 × 10^15)^2/(5 × 10^24) = 5.4 × 10^6 /m^3 .    (12-29)

In this case the hole density in the doped crystal is roughly 18 orders of magnitude smaller than the electron density! The situation is reversed if impurity atoms with valence 3 (e.g., boron atoms) are substituted for Si atoms. Then the holes become the
“majority carriers.” Selective impurity doping allows the engineer to fabricate “n” and
“p” layers essential for diodes and transistors.
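The law-of-mass-action arithmetic in Eq. (12-29) is a one-liner; a minimal Python sketch:

```python
n_i = 5.2e15   # intrinsic carrier density of Si at 300 K, m^-3
n_d = 5e24     # donor (phosphorus) density: about 1 in 1e4 Si atoms

n_e = n_d              # heavy doping: essentially all electrons come from donors
n_h = n_i**2 / n_e     # Law of Mass Action, Eq. (12-28)
print(f"n_h = {n_h:.1e} m^-3")        # matches Eq. (12-29)
print(f"n_e/n_h = {n_e / n_h:.0e}")   # roughly 18 orders of magnitude
```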


Exercises
1) At what temperature would a gas of nitrogen at 1 atmosphere pressure reach the quantum density? (Assume for this problem that nitrogen is a monatomic gas.)
Knowing the properties of nitrogen, do you think that this condition (n = nQ) is attainable for an ideal gas of N2?

2) Determine the density of ionized hydrogen atoms on the surface of the sun, assuming that non-ionized hydrogen atoms exist at a density of nH = 1 × 10^23/m^3, and the surface temperature of the sun is 6000 K.

3) Write down the relations between chemical potentials (e.g., μA = μB) for each of the following gas reactions:
2H2O ↔ 2H2 + O2
N2 + 3H2 ↔ 2NH3
H2 ↔ 2H

4) Let us dope the silicon crystal with Nd phosphorus atoms and assume that the temperature is high enough that all are ionized. The crystal must remain neutral, so there is a constraint on the number of electrons and holes:
Ne = Nh + Nd ,    (12-30)

showing the negative electrons on the left and the positive holes and ionized donors on the right. In general, to determine the number of electrons and holes, we must solve two equations in two unknowns:

ne nh = ni^2  and  ne − nh = nd ,

where nd = Nd/V is the density of donors.
a) Use these two equations to solve for ne in terms of Nd and ni.


b) Compute ne for Si when nd = 10^14 m^-3 and 10^17 m^-3. Do your answers make sense for these cases where nd ≪ ni and nd ≫ ni, respectively?

5) a) Taking into account molecular rotations, find the formula for the equilibrium constant K(T) for dissociation of hydrogen molecules: H2 ↔ 2H. Because H2 is a diatomic molecule, nQ is replaced by nQZint in the formula for its chemical potential, as described in Appendix 10. Your answer should be in terms of the quantum densities for H and H2, the binding energy Δ for H2, and Zint.

b) Determine how many H atoms are present in one cubic meter of H2 at T = 600 K and p = 1 atm. Use Δ = 4 eV and Zint ≈ kT/εr, where εr = 0.03 eV is the quantum of rotational energy.

6) We have seen that entropy tends to a maximum simply because that corresponds to the maximum number of microstates. In the sense that “disorder” means a large number of possibilities, large entropy (many possible microstates) means large disorder. Gases are more disordered than solids and liquids; therefore gases have higher entropy.
The entropy of an isolated system either stays the same (in equilibrium) or increases
(approaching equilibrium). So the entropy of the universe (the ultimate isolated system) is always increasing, a situation sometimes referred to as “the heat death of the universe.”
Now here is the puzzle: If the entropy of the universe is always increasing, how can there be increasing order in the world? How did life originate from an inorganic world? How can a (low entropy) flower grow from a disordered array of dirt, chemicals, heat, and light? How can silicon chips be created out of sand? How would you answer this line of questioning? (We will consider these issues in the next chapter.)


Chapter 13: Adsorption of Atoms and Phase Transitions

A. Adsorption of Atoms on a Solid Surface
From the adsorption of pollutant gases by the platinum metal in your car's catalytic converter to the binding of oxygen by hemoglobin in your blood—from materials physics to biophysics—the interactions of gas molecules with solids and biomolecules impact our lives. We shall see how free energy F (or, equivalently, the chemical potential, μ = dF/dN) controls the balance of these important processes.
Consider first the adsorption of atoms on a solid. Imagine a surface with
M binding sites, each of which can hold one particle with a binding energy ⌬. The surface is inside a container of volume V at temperature T.
Volume V, Temperature T
Ng = # atoms in gas
Ns = # atoms on surface
N = total # atoms = Ns + Ng
M = # binding sites


It is useful to work in terms of the gas pressure p = nkT instead of the gas density n = N/V. We define a “quantum pressure” pQ = nQkT. Since n/nQ = p/pQ, the chemical potential of atoms in the gas takes the form,
μg = kT ln(p/pQ) .    (13-1)

Recalling Eq. (11-11) for nQ, a useful form of pQ = (m/2πħ^2)^(3/2)(kT)^(5/2) is:

pQ = 4.04 × 10^4 atm (m/mp)^(3/2) (T/300 K)^(5/2) ,    (13-2)

where m = atomic mass, and mp = mass of proton. pQ represents the hypothetical pressure of an ideal gas at a density nQ. It's about ten million atmospheres for Ar at 300 K.

The Helmholtz free energy of the molecules adsorbed on the surface is,

FS = US − TSS = −NSΔ − kT ln(Ω) , with Ω = M!/[(M − NS)! NS!] ,    (13-3)

representing the number of ways to arrange NS identical particles on M single-occupancy sites (Chapter 6). Taking the logarithm of Ω,

ln Ω = ln M! − ln NS! − ln(M − NS)! .    (13-4)

The derivative of ln Ω requires the simple result d(ln N!)/dN = ln N, which you can intuit or prove as an Exercise in this chapter. We find,

d(ln Ω)/dNS = −ln NS + ln(M − NS) = ln[(M − NS)/NS] ,    (13-5)

leading to the chemical potential of an atom on the surface:

μS = dFS/dNS = −Δ − kT ln[(M − NS)/NS] .    (13-6)

In equilibrium μg = μS, so the result is:

(M − NS)/NS = (pQ/p) e^(−Δ/kT) .    (13-7)


We wish to know the fraction of occupied sites f = NS/M at a given temperature. It is helpful to define a characteristic pressure p0 = pQ e^(−Δ/kT). Equation (13-7) now becomes (1 − f)/f = p0/p, which we can solve for f:

f = p/(p + p0) .    (13-8)

By adding or subtracting gas atoms to the container, we can control the gas pressure p = NgkT/V. The fraction of occupied surface sites ranges from zero to one as p is increased. This result is graphically represented by plotting f vs. p for several values of T.

[Figure: f, the fraction of occupied sites, plotted versus p for three temperatures T1, T2, T3. Each curve rises from 0 toward 1, passing through f = 0.5 at the characteristic pressure p0(T); the value p0(T2) is marked on the p-axis.]
The characteristic pressure for temperature T2 is indicated by the cross. Notice that when p = p0 half of the surface sites are occupied. As the pressure is raised at a fixed temperature a larger fraction of sites will be occupied. A numerical example is provided as an Exercise at the end of the chapter.
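Equation (13-8) is easy to explore numerically. This Python sketch uses hypothetical values Δ = 0.5 eV and m = 40 mp (illustrative choices, not from the text) to show how f sweeps from near 0 to near 1 as p passes through p0:

```python
import math

def occupied_fraction(p_atm, T, Delta_eV, m_over_mp, Z_int=1.0):
    """Fraction of occupied surface sites, Eq. (13-8): f = p/(p + p0),
    with p0 = pQ * Z_int * exp(-Delta/kT) and pQ from Eq. (13-2)."""
    k_eV = 8.617e-5                                     # Boltzmann constant, eV/K
    p_Q = 4.04e4 * m_over_mp**1.5 * (T / 300.0)**2.5    # quantum pressure, atm
    p0 = p_Q * Z_int * math.exp(-Delta_eV / (k_eV * T))
    return p_atm / (p_atm + p0)

# Hypothetical adsorbate: Delta = 0.5 eV, m = 40 mp, at 300 K
for p in (0.01, 0.1, 1.0):   # gas pressure in atm
    f = occupied_fraction(p, 300.0, 0.5, 40)
    print(f"p = {p:5.2f} atm  ->  f = {f:.2f}")
```

The Z_int parameter is included so the same function covers the diatomic case of the next section.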

B. Oxygen in Myoglobin
Important biological processes involve the interaction of biomolecules with atmospheric gases. The protein hemoglobin in your blood is the carrier of oxygen from your lungs to the cells in your body. A similar molecule, myoglobin, handles oxygen in your muscles.
Myoglobin has one binding site for oxygen, with a binding energy Δ ≈ 0.5 eV. If we define M = number of myoglobin molecules, NS = number of oxygen molecules bound to myoglobin, and f = NS/M as the fraction of myoglobin molecules that have bound an O2, then the math is identical to the problem we just solved. Oxygen is a diatomic molecule, so its chemical potential is modified slightly from the atomic case (Eq. (11-16)) due to molecular rotations. pQ is replaced by pQZint, where Zint ≈ 3Zrot ≈ 3(kT/εr) with εr = 0.00036 eV, as discussed in Appendix 9. The chemical potential is,
μg = kT ln(p/(pQ Zint)) ,    (13-9)


leading to p0 = pQ Zint e^(−Δ/kT) in Eq. (13-8). As an exercise you will be asked to estimate the binding energy of oxygen to a myoglobin molecule.

C. Why Gases Condense
The transformations between gases, liquids, and solids govern the natural world around us. From your knowledge of entropy, you might well ask why phase transitions even occur. It is generally true that gases have more accessible states (they occupy more volume) than liquids or solids, so you might expect that the approach to equilibrium always results in a gas. In other words, if entropy is always increasing, why do (lower entropy) liquids and solids even exist? Indeed, how did life originate in an entropy-increasing world?
The simple answer to that question is that entropy need not increase everywhere. There may be regions of lower entropy (a snowflake) and higher entropy (water vapor) as long as the total entropy of an isolated system is increasing. A low-temperature weather front increases its entropy by removing entropy (i.e., heat) from water vapor to condense a water droplet or snowflake.
This simple answer is not totally satisfying because it only says that condensation can happen, not that it will happen (as it does on a regular basis). The better explanation is in terms of free energy, which explicitly contains not only entropy of a small system but also the attractive forces (binding energies) between atoms and molecules. Minimization of free energy for a system of particles in thermal contact with a reservoir (say, the environment) requires that atoms become molecules, and molecules condense into liquids and solids, at sufficiently low temperatures. The inhomogeneity of nature (yes, even life) is driven by the action of free energy, which incorporates energy and entropy.
We have already seen the effect of free-energy minimization in the production of molecules from atoms, the adsorption of atoms onto a solid surface, and the binding of oxygen in biomolecules. It is a natural step further to understand the basics of phase transformations. We begin with an elementary example—equilibrium between a gas and solid—which provides us with a basic concept of thermodynamics: vapor pressure.

D. Vapor Pressure of a Solid
Consider an ideal gas in contact with a simple solid—one that has binding energy but no internal motions (i.e., no entropy). As in the surface-adsorption case, we put N atoms in a container of constant volume and at temperature T. Assume that an atom condensed in the solid loses an energy Δ, which must be absorbed by the gas and eventually the reservoir. We are simplifying this problem by ignoring the fact that an atom on the surface is not completely surrounded by other atoms and is somewhat less bound than an atom in the bulk of the solid. (For liquids, this effect is characterized by a "surface tension" that leads to the instability of small droplets.)


Volume V, Temperature T

Ng = # atoms in gas
Ns = # atoms in solid
N = total # atoms = Ns + Ng

Under these simplifying assumptions, F = Fg + Fs. Setting dF/dNg = 0, and using dNg = –dNs we have the solid-gas equilibrium condition,
μs = μg .    (13-10)

Because the solid has no internal motions, its free energy is simply Fs = −NsΔ. (Perfect order, zero entropy.) The chemical potentials of the solid and gas are:

μs = dFs/dNs = −Δ ,    (13-11)

μg = kT ln(p/pQ) ,

where pQ is the quantum pressure given by Eq. (13-2). The equilibrium condition immediately yields the pressure in the container,

p = pQ e^(−Δ/kT) = pvapor .    (13-12)

As emphasized by the subscript, this is the vapor pressure of the solid at temperature T—the pressure at which the rate of evaporation exactly equals the rate of adsorption. For equilibrium to exist, we do not have the option of making the pressure of the gas any value we choose! If the pressure is initially higher than the vapor pressure, pvapor(T), the solid will grow. If the pressure is initially lower than the vapor pressure, the solid will shrink, possibly to zero. We can illustrate this process by two diagrams. First we draw μ(T) for solid and gas at three different pressures, showing the equilibrium condition (μs = μg) as a dot. Next we plot the resulting "coexistence temperatures" on a p-T phase diagram. The vapor-pressure curve p(T) is also known as a "phase equilibrium curve" or a "coexistence curve."


[Figure: Left, μ versus T: the gas branches μg = −kT ln(pQ/p) (with pQ ∝ T^(5/2)) for three pressures p1 < p2 < p3 cross the solid branch μs = −Δ; each crossing (dot) marks a coexistence temperature. Right, the resulting p-T phase diagram: the coexistence curve p = pQ e^(−Δ/kT) separates the solid region from the gas region, with the points for p1, p2, p3 lying on the curve.]

If initially we place a cube of ice in an empty freezer, the water molecules evaporate
(“sublime”) from its surface, eventually producing a pressure of pvapor(T). A tiny chip of ice, however, may not have enough molecules to support this pressure alone, so it evaporates away completely, corresponding to the region to the right of the coexistence curve. Realize also that the calculated vapor pressure is only a “partial pressure,” contributing to the total atmospheric pressure in the container.
What about the solid materials around us? For example, a silicon crystal in your computer. What vapor pressure does it try to maintain? Will it evaporate away? Fortunately, the binding energy of an atom in a solid is a few electron volts, and at 300 K the thermal energy per atom is only about kT = 0.026 eV, so it would take a huge thermal fluctuation to kick an atom out of the solid. If this unlikely event happened, the ejected atom would probably just diffuse off into the atmosphere with little chance of returning to the solid.


Do we have to worry about the silicon in our computer chips evaporating away? No, because the rate of evaporation of ordinary solids is extremely slow at room temperature.
Also, the vapor pressure for an ordinary solid at 300 K with Δ ≈ 3 eV is extremely small:

pQ e^(−Δ/kT) = (4.04 × 10^4 atm)(28)^(3/2) e^(−3/0.026) = 4.4 × 10^-44 atm .    (13-13)

This vapor pressure is much less than the gas pressure in outer space. In this simple model, even if you put the solid in a vacuum chamber at T = 1000 K, only a few evaporated molecules would be required to sustain equilibrium. The solids around us are not likely to evaporate away.
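A Python sketch of Eqs. (13-2) and (13-12), repeating the estimate of Eq. (13-13); the 300 K result differs slightly from the text's because kT is not rounded to 0.026 eV here:

```python
import math

def vapor_pressure_atm(Delta_eV, m_over_mp, T):
    """Vapor pressure of a simple solid, Eq. (13-12): p = pQ exp(-Delta/kT)."""
    k_eV = 8.617e-5                                     # Boltzmann constant, eV/K
    p_Q = 4.04e4 * m_over_mp**1.5 * (T / 300.0)**2.5    # Eq. (13-2), atm
    return p_Q * math.exp(-Delta_eV / (k_eV * T))

# The estimate of Eq. (13-13): Delta ~ 3 eV, m = 28 mp, room temperature
print(f"p_vapor(300 K)  = {vapor_pressure_atm(3.0, 28, 300.0):.1e} atm")
# The exponential makes the vapor pressure climb enormously with T:
print(f"p_vapor(1000 K) = {vapor_pressure_atm(3.0, 28, 1000.0):.1e} atm")
```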
The moral of this story is that we are surrounded by solid materials that are significantly out of contact with their vapor phases. They were produced either by nature or by man under much more extreme conditions of temperature and density than they are now experiencing. The silicon atoms in your computer or watch were grown into a near perfect crystal from a hot silicon liquid, which in turn came from chemical processing of SiO2—sand! As the hot Si liquid is cooled, free-energy minimization drives Si atoms from the high-entropy liquid to the low-energy solid. On a grander scale, this is how nuclei of atoms formed from elementary particles in the early universe.

E. Solid/Liquid/Gas Phase Transitions
We can extend these ideas to more general cases. A real solid has internal modes of oscillation (vibrations) which give it the non-zero entropy considered in Appendix 10. An atom in a liquid has less binding energy than one in a solid, but the liquid generally has more entropy (it is highly disordered, plus vibrational modes). We can schematically represent these ideas with the following drawings:

[Figure: Two μ(T) sketches. Left, solid-gas equilibrium: the gas branch μg crosses the solid branch μs, which starts at −Δs at T = 0. Right, liquid-solid equilibrium: the liquid branch μL starts at −ΔL, above the solid branch μs at −Δs.]

Now let's put all three branches together. Here are 4 graphs at different pressures, which will help us determine the phase diagram. Notice how the chemical potential of the gas passes through the other two branches. At one (p,T) all three phases coexist. In the graphs, Tc represents a coexistence temperature at the given pressure.


[Figure: Four μ(T) graphs at pressures p1 through p4, each showing the gas branch μg cutting across the liquid branch μL and the solid branch μs, with the coexistence temperature Tc marked; and the resulting p-T phase diagram with solid, liquid, and gas regions separated by coexistence curves.]

In the μ(T) diagrams, we have plotted the three branches of chemical potential at successively higher pressures: p1 through p4. The pressure p2 and corresponding coexistence temperature Tc mark the "triple point" of the system. The actual phase diagram for a given system depends on the shapes of μ(T) for the liquid and gas phases, which in turn depend on the particular "structure" of the liquid or solid.
Recall that μ is the change in free energy of a particular phase when a particle is added to it. Notice that there are crossings of the branches that do not correspond to an equilibrium state, because the free energy of the system is lowered by choosing a lower branch at that temperature. Let's look more closely at the case with a constant pressure p3.
[Figure: Left, μ(p3,T) versus T: the equilibrium chemical potential follows the lowest branch, solid up to Tc1, liquid between Tc1 and Tc2, and gas above Tc2. Right, the p-T phase diagram, with the constant-pressure trajectory at p3 crossing the solid-liquid boundary at Tc1 and the liquid-gas boundary at Tc2.]

The heavy line on the left indicates the equilibrium μ(p3,T) as one increases the temperature at a constant pressure. The system proceeds from pure solid, to solid/liquid coexistence at Tc1, to pure liquid, to liquid/gas coexistence at Tc2, and finally to pure gas.
The trajectory for this process on the p-T phase diagram is shown at the right. This process is achieved by adding heat to the system while keeping the pressure constant.
Let’s examine in detail the liquid/gas part of the transition:

[Figure: A cylinder of liquid and gas under a constant-pressure piston (a weight), warmed by a heater. The system travels from points 0 through 3: on the p-T phase diagram, along a horizontal line that crosses the liquid-gas coexistence curve at Tc2; on a plot of T versus heat input Q, the temperature rises through the liquid (0 to 1), stays flat at Tc2 while liquid converts to gas and the latent heat L is absorbed (1 to 2), then rises again in the pure gas (2 to 3).]

Notice that a lot happens at the coexistence point, Tc2. With the addition of heat, the system is being converted from pure liquid (at point 1) to pure gas (at point 2). The amount of heat required to make this complete conversion is known as the "heat of transformation" or "latent heat" of the liquid, denoted L.
Because the process is conducted at constant pressure, the latent heat is not simply the energy needed to free the atoms from their bonds to the liquid, i.e., the difference in internal energies of the liquid and gas phases. While the liquid/gas conversion proceeds, work is being done on the environment (shown above as the gravitational force of a weight, but usually atmospheric pressure). The First Law of Thermodynamics tells us
Q = heat input = ΔU + pΔV ,    (13-14)


showing that the heat source must provide energy for doing work, as well as for breaking bonds. Because so many processes such as phase transitions and chemical reactions are conducted at constant pressure, it is common to tabulate the state function

H = U + pV ,    (13-15)

known as the “enthalpy” of a substance, usually given per mole of substance. Enthalpy is understandably known as the “heat content” of a material. If Hl and Hg are the enthalpies of the particular liquid and gas at a standard pressure, then ⌬Hlg = Hg – Hl = latent heat of the transition.
⌬Hlg is the energy required to evaporate all the liquid at its boiling point. Here is a table of the latent heats and entropy changes (⌬S = ⌬Hlg /Tc) per mole for some common substances at atmospheric pressure:

Latent Heats of Evaporation

Gas        Boiling Pt. (K)   ΔH (J/mol)   ΔS (J/mol·K)
Helium          4.2               92            22
Nitrogen       77.3             5610            72
Oxygen         90.1             6820            76
H2O             373            44000           119
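The ΔS column can be checked directly from the other two columns. A minimal sketch (the numbers are simply copied from the table above):

```python
# Recompute the entropy of vaporization, ΔS = ΔH_lg / T_b, from the
# boiling-point and latent-heat columns of the table above.
substances = {
    "Helium":   (4.2,    92),      # (boiling point in K, ΔH in J/mol)
    "Nitrogen": (77.3,  5610),
    "Oxygen":   (90.1,  6820),
    "H2O":      (373.0, 44000),
}

for name, (T_b, dH) in substances.items():
    dS = dH / T_b                  # J/(mol·K)
    print(f"{name:8s}  dS = {dS:.0f} J/(mol·K)")
```

The computed values reproduce the ΔS column to within rounding.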

F. Model of Liquid–Gas Condensation
So far we have considered relatively low densities of non-interacting particles, which obey the ideal gas law:

pV = NkT .    (13-16)

Liquids and solids form because there are attractive interactions between atoms or molecules. When the temperature of the gas is reduced, the attractive interactions can cause the gas to condense into a liquid, where the molecules pack together densely in order to lower their potential energy. The effects of interactions between particles are incorporated neatly in the van der Waals equation of state (see the Exercise at the end of this chapter):

(p + N²a/V²)(V − Nb) = NkT .    (13-17)

Here, V is the volume of the container, and Nb is the total space displaced by the N finite-sized particles, each having volume b. The term N²a/V² causes a reduction in pressure p (for a given N and V) due to the attractive potential between particles.
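To see how the two correction terms act, one can compare the van der Waals pressure with the ideal-gas pressure for one mole of gas. The parameters a and b below are approximate per-mole values for CO2, used only for illustration:

```python
# Compare the ideal-gas pressure with the van der Waals pressure,
# p = nRT/(V - nb) - a n^2/V^2, for one mole of CO2 (a, b approximate).
R = 8.314          # J/(mol·K)
a = 0.364          # J·m^3/mol^2, attraction parameter (assumed for CO2)
b = 4.27e-5        # m^3/mol, excluded volume per mole (assumed for CO2)

def p_ideal(n, V, T):
    return n * R * T / V

def p_vdw(n, V, T):
    # van der Waals equation of state solved for p
    return n * R * T / (V - n * b) - a * n * n / (V * V)

n, T = 1.0, 300.0
for V in (24.6e-3, 1.0e-3):        # m^3: dilute, then 25x compressed
    print(f"V = {V:.4g} m^3: ideal {p_ideal(n, V, T):.3e} Pa, "
          f"vdW {p_vdw(n, V, T):.3e} Pa")
```

At the larger volume the two pressures nearly agree; at the smaller volume the attraction term visibly lowers the pressure.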


Adsorption of Atoms and Phase Transitions Chapter 13

It is particularly informative to observe the system at constant temperature. At low temperatures and high densities the corrections to the ideal gas law greatly distort the isothermal curves of the ideal gas:

[Figure: p–V isotherms of a van der Waals gas at temperatures T1 < T2 < T3, compared with the ideal-gas isotherms pV = NkT. The T1 curve is flat at p1 between points 1 and 2, the liquid–gas coexistence region; the volume is controlled by a piston with a variable force.]
Consider the lowest curve, where the system is in contact with a thermal reservoir at temperature T1, and V is controlled by adjusting the force. The system pressure p = force/A (which includes patm) is recorded as V is varied. In the region between points
1 and 2, the gas condenses into a liquid of constant density, and there is a coexistence between liquid and gas. As the volume is decreased from point 1 to point 2, the fraction of liquid increases until, at point 2, only liquid remains in the container. The region below the dashed curve in the figure corresponds to the mixed phase, or coexistence, region of the phase diagram.
This type of experiment allows us to determine the complete “equation of state,” p(V,T), of a particular substance. However, we have no way of directly measuring either the internal energy change ΔU or the entropy change ΔS in isothermal experiments, because we cannot measure the flow of heat between the system and the thermal reservoir.
However, our earlier experiment—adding heat at constant pressure—provided us with the latent heat of the transition as well as the phase boundaries in the p-T plane. Little by little, bit by bit, nature reveals itself.


Exercises
1) Prove that d(lnN!)/dN = ln N, which was used in the problem where atoms were bound to sites on a solid surface.

2) a) Determine the fraction of surface sites occupied with Ar atoms if the site energy is Δ = 0.3, 0.4, and 0.5 eV, assuming a gas pressure of 1 atm and T = 300 K.

b) Show that the crossover from near-zero to near-full occupancy occurs when μ = –Δ. That is, f = ½ when the chemical potential equals the energy of a bound atom. For a particular surface, Δ is fixed and μ is controlled by the pressure.

3) For a body temperature of 310 K and a partial pressure of oxygen in the muscle of 0.5 atm, let us imagine that the fraction of myoglobin sites occupied by oxygen is 50%. Determine the binding energy Δ for myoglobin in this hypothetical example. Be sure to replace pQ with pQZint in the formula for the chemical potential of O2, where Zint ≈ 230 for O2, as given in Appendix 9.

4) A 1 kg block of ice at 273 K melts completely in 6 hours. a) Using the latent heat of fusion for water as 3.335 × 10^5 J/kg, determine the average power provided by the surroundings at 293 K. b) Calculate the total change in entropy for the melting process, neglecting any heat rise in the water.

[Figure: a weight of mass m on a piston at height h maintains a constant pressure; PEgrav = mgh = pV.]

5) Recall the free energy of an ideal gas given by Eq. (11-15). By minimizing free energy, derive the gas law of an interacting gas of particles under the assumptions:
1) each particle takes up a volume b, so the volume available to the gas becomes
V – Nb, and 2) there is an attractive interaction between particles. Assume that the average potential energy felt by a single particle is proportional to the average density of other particles, PEint = –aN/V; therefore, the total potential energy for N particles is –aN2/V. The total potential energy that must be included in F is
PEgrav + PEint. The result is the van der Waals equation which is responsible for liquid-gas condensation!

CHAPTER 14

Processes at Constant Pressure*

A. Gibbs Free Energy
In previous chapters we have minimized Helmholtz free energy at constant volume to determine equilibrium conditions. Frequently, phase transitions and chemical reactions are observed under conditions of constant pressure (e.g., atmospheric pressure). In the case of constant pressure, there is a similar useful function known as the Gibbs free energy.
In the case of fixed volume, a small system is in thermal contact with a reservoir whose entropy depends on the energy it provides to the small system: SR = S0 – U/T, where 1/T = (dSR/dUR)V and dU = –dUR.
Consider now that the small system is in contact with a thermal and pressure reservoir:

[Figure: a small system (entropy S, volume V) sits inside a reservoir (entropy SR, volume VR) that fixes both the pressure p and temperature T.]

*Optional reading in Physics 213.


The entropy of the reservoir depends on both the energy U and volume V of the small system. For simplicity, let’s assume that the reservoir is an ideal gas with pVR = NRkT. Because VR >> V, we assume that p is a constant. Entropy depends on volume as SR ∝ NRk ln VR; therefore, (dSR/dVR)T = NRk/VR = p/T. The entropy of the reservoir is:
SR = S0 – (1/T)U – (p/T)V ,    (14-1)

where S0 is the reservoir’s entropy when U = 0 and V = 0. The total entropy is:
Stot = SR + S = S0 – (U + pV – TS)/T ,  or,  Stot = S0 – G/T ,    (14-2)

where we have defined,

G = U + pV – TS    (14-3)

as the Gibbs free energy. In terms of enthalpy, G = H – TS. In essence, the Gibbs energy takes into account the amount of work required when the small system changes its volume at constant pressure: ΔG = ΔU + pΔV – TΔS.
Equation (14-2) shows that for systems at constant pressure and temperature, a minimum in the Gibbs energy defines a maximum in Stot, and therefore determines the equilibrium conditions. In short,
ΔStot = –ΔG/T .    (14-4)

Let us examine the ideal gases that we considered in Chapter 11, but now under the condition of constant pressure. What is G for an ideal gas? From the above definition of Gibbs free energy, we see that G = F + pV, and F has the form (Eq.(11-15)),
F = NkT ( ln(n/nQ) − 1 ) .    (14-5)

Using pV = NkT and changing densities to pressures, n/nQ = p/pQ, we have
G = F + pV = NkT ( ln(p/pQ) − 1 ) + NkT .    (14-6)

Amazingly, the NkT terms cancel; therefore, the Gibbs free energy of an ideal monatomic gas takes on a wonderfully simple form:

G = NkT ln(p/pQ) = Nμ ,    where pQ = nQkT .    (14-7)


where μ is the chemical potential defined in Chapter 11, and pQ is the quantum pressure of the gas. Notice also that ln(p/pQ) does not depend on the number of molecules N, so that G(N,p,T) = Nμ(p,T). (These simplifications occur also for a diatomic gas, for which U = (5/2)NkT, because the entropy in that case has a 7/2 replacing 5/2. See Appendix 9.)
Chemical potential is the Gibbs free energy per particle: μ = G/N. This fact is consistent with the definition of chemical potential as the increase in Helmholtz free energy when one molecule is added at constant volume: μ = dF/dN. In that case, the addition of a molecule increases the pressure, which is a consequence of dealing with the Helmholtz free energy. Chemists tabulate G because most reactions are performed at atmospheric (or constant) pressure. The relation between G and μ is a little simpler than between F and μ.
Quite generally, a 3-component reaction is given by,

aA + bB ↔ cC .    (14-8)

The total Gibbs free energy for this reaction becomes,
G = NAμA + NBμB + NCμC .    (14-9)

Considering the relations between the dNi, as we did for Eqs. (12-2) and (12-10), the equilibrium condition, dG = 0, implies,

aμA + bμB – cμC = 0 .    (14-10)

For reactions between ideal gases, the chemical potentials are
μi = kT ln(pi/pQi) − Δi ,    (14-11)

where Δi is the energy released (binding energy) when a molecule is formed from the reactants. Commonly Δi is set to zero for elemental molecules such as N2 and H2. For polyatomic gases, pQ must be replaced by pQZint, where Zint accounts for the internal motions of the molecules. (See Appendix 9.)
The equilibrium relations between the chemical potentials are the same as derived in the constant-volume case (Chapter 12), but now the μ’s are written in terms of “partial pressures” pi = nikT, rather than densities ni. In an Exercise at the end of this chapter, you are asked to determine the form of the equilibrium constant when using partial pressures, rather than densities or concentrations (see Eqs. (12-13) – (12-15)).


B. Vapor Pressures of Liquids—General Aspects
Real solids and liquids are usually not so “easy” to model as the previous examples.
However, a little math manipulation using the equilibrium condition (μl = μg) between phases yields a very general property for the vapor pressure pvapor(T) of any liquid or solid, which we abbreviate as p(T). If we expand the differential for one of the phases by using the product rule for differentials,
ΔG = ΔU + Δ(pV) – Δ(TS)
   = ΔU + VΔp + pΔV – SΔT – TΔS ,    (14-12)

and substitute the thermodynamic identity (FLT plus ΔQ = TΔS),

ΔU = TΔS – pΔV ,    (14-13)

we are left with,

ΔG = VΔp – SΔT .    (14-14)

At this point it is useful to define G, V, and S per mole of material, denoted g, v, and s. Eq. (14-14) becomes Δg = vΔp – sΔT. Now consider the coexistence curve p(T). Because μl = μg along the curve, we have Δμl = Δμg and Δgl = Δgg, yielding,

vlΔp – slΔT = vgΔp – sgΔT .    (14-15)

Rearranging terms,

Δp/ΔT = (sg − sl)/(vg − vl) .    (14-16)

With vg – vl = Δv the volume change per mole, and Δs = Δhlg/T the entropy change per mole, the slope of a liquid–gas phase boundary takes on the general form,

dp/dT = Δhlg/(TΔv) ,    (14-17)

where Δhlg ≡ ΔHlg/NA is the latent heat per mole. Because the liquid is much denser than the gas, the volume change per mole is given essentially by the gas volume from the ideal gas law: V/n = RT/p. The above equation, known as the Clausius–Clapeyron equation, is very useful. By measuring how the boiling point depends on pressure (i.e., following along the phase boundary), one can determine the latent heat of a liquid. An example is given in the exercises for this chapter. Alternatively, if one knows the latent heat, one can determine the pressure dependence of the boiling point of the liquid.
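As a numerical sketch of the second use: with Δv ≈ RT/p (liquid volume neglected), Eq. (14-17) gives the local slope of the boiling curve. The latent heat of water, Δhlg ≈ 40.7 kJ/mol, is an assumed input here:

```python
# Estimate dT/dp for boiling water from the Clausius-Clapeyron slope
# dp/dT = Δh_lg p / (R T^2), which follows from (14-17) with Δv = RT/p.
R = 8.314            # J/(mol·K)
dh_lg = 40.7e3       # J/mol, latent heat of water (assumed value)
T_b = 373.0          # K, boiling point at 1 atm
p = 101325.0         # Pa

dp_dT = dh_lg * p / (R * T_b**2)       # Pa/K
shift_per_atm = 101325.0 / dp_dT       # K of boiling-point shift per atm
print(f"dT/dp near 1 atm: about {shift_per_atm:.0f} K per atm")
```

This is a local slope; over a large pressure change one would integrate the equation instead.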
In most cases a substance expands when solid turns into liquid (Δv > 0) and the process requires a positive heat of transformation (ΔH > 0). Therefore, the slope of the coexistence curve dp/dT is positive. Water is unusual in that it contracts upon melting, implying that dp/dT is negative. To illustrate this difference, the phase diagrams for water and carbon dioxide are plotted below, along with a table of critical pressures and temperatures:


[Figure: p–T phase diagrams for H2O and CO2, each showing solid, liquid, and gas regions, the triple point (ptp, Ttp) and the critical point (pcp, Tcp); for H2O the solid–liquid boundary has negative slope.]

          pcp        Tcp       ptp        Ttp
H2O     218 atm     374°C    .006 atm   .0098°C
CO2     72.8 atm     31°C     5.1 atm   –56.6°C

(Values from Zumdahl’s “Chemical Principles.”)

C. Chemical Reactions at Constant Pressure
Consider the following example involving the gas reaction, N2 + 3H2 ↔ 2NH3. What is the energy required to produce NH3 from N2 and H2, under constant pressure? From our discussion of enthalpy, the total thermal energy required to cause this reaction is

ΔH = 2HNH3 – (HN2 + 3HH2).
Chemical tables do not give absolute values of H for a substance. Instead they list the standard enthalpy of formation ΔHf°, which is defined as the energy required to form 1 mole of a compound from its elements, with all substances in their standard states (T = 298 K, p = 1 atm). Thus, ΔHf° = zero for elemental gases (e.g., N2, H2, O2, Ar), liquids (Hg, Br) and solids (Cu, Al). Here is a subset of entries from the Appendix in Zumdahl’s book (s° = molar entropy; and ΔGf° = ΔHf° – TΔs° is the Gibbs energy required to form one mole of the substance):

Substance (state)    ΔHf° (kJ/mol)    s° (J/mol·K)    ΔGf° (kJ/mol)
N2   (gas)                 0              192                0
H2   (gas)                 0              131                0
NH3  (gas)               –46              192.5            –16.5
O2   (gas)                 0              205                0
H2O  (gas)              –242              189             –229
H2O  (liquid)           –286               70             –237


In Chapter 12, we assumed that an energy 2Δ = 0.9 eV (= 1.44 × 10^-19 J) is released when two molecules of NH3 are created from one molecule of N2 and 3 molecules of H2. This value is consistent with the empirical data given in the above table. First realize that the internal energy change is related to the standard enthalpy change by ΔU° = ΔH° – Δ(pV).
Since each ammonia molecule has binding energy Δ, the energy needed to create 1 mole (NA = 6.023 × 10^23 molecules) of NH3 in a “vacuum” is:

ΔU° = –NAΔ = –NA × (1.44 × 10^-19 J)/2 = –43.4 kJ.


However, to create 2 moles of NH3 from 1 mole of N2 and 3 moles of H2, all at 1 atm, there must be a factor of 2 decrease in volume because 4 moles of H2 and N2 gas are replaced by 2 moles of NH3. (Remember that one mole of an ideal gas has the same volume as a mixture of ideal gases adding up to one mole.) Therefore, to produce 2 moles of NH3, we have Δ(pV) = –2RT, and for 1 mole of NH3,

Δ(pV) = –1RT = –(8.314 J/K) × 298 K = –2.5 kJ,

for a total enthalpy of,

ΔHf° = ΔU + Δ(pV) = –43.4 kJ – 2.5 kJ = –45.9 kJ,

which is consistent with the value given in the table. In fact, I obtained the value of 2Δ = 0.9 eV from the empirically measured ΔHf° = –46 kJ/mol given in this table, working the problem in reverse.
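The arithmetic of this example can be laid out in a few lines (the constants are the ones used in the text):

```python
# Reproduce the ΔHf° estimate for NH3 from the binding energy
# 2Δ = 0.9 eV (1.44e-19 J) released per two NH3 molecules formed.
N_A = 6.022e23        # molecules/mol
R = 8.314             # J/(mol·K)
T = 298.0             # K
two_delta = 1.44e-19  # J, energy released per two NH3 molecules

dU = -N_A * two_delta / 2      # J per mole of NH3, the "vacuum" value
d_pV = -R * T                  # J per mole of NH3 (4 moles of gas -> 2)
dH = dU + d_pV
print(f"ΔHf ≈ {dH/1e3:.1f} kJ/mol")   # compare with the tabulated -46
```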


Exercises
1) In chemical reactions, the law of mass action is frequently given in terms of partial pressures of the components, pi = NikT/V = niRT/V, where ni = Ni/NA. For the reaction aA + bB → cC, the equilibrium relation is:

pC^c / (pA^a pB^b) = Kp(T) ,

where Kp denotes the equilibrium constant applicable to partial pressures. In Chapter 11 we estimated the (concentration-based) equilibrium constant for the ammonia reaction to be K(300 K) = 6 × 10^8 L²/mol². Show what the general relation is between K and Kp and evaluate Kp(300 K) for the ammonia reaction.

2) A hypothetical liquid boils at 40°F at atmospheric pressure, and it boils at 45°F when the pressure is raised to 1.1 atm. Use the Clausius–Clapeyron equation (14-17) to determine the latent heat of this liquid.

3) Using the table given in Section C, determine the amount of energy 2Δ (in eV) required to form two molecules of H2O from two molecules of H2 and one molecule of O2. This is twice the binding energy of the water molecule.

4) a) Using the Clausius–Clapeyron equation (14-17), derive an equation for the vapor pressure p(T) of a liquid in terms of the gas constant R and the latent heat per mole L.

b) With a vapor pressure of the form (13-12) derived for a solid–gas phase transition, show the relationship between the latent heat per mole L and the binding energy Δ of a molecule in the liquid, thus connecting the macroscopic and microscopic worlds.


APPENDIX 1

Vibrations in Molecules and Solids—Normal Modes*

In Chapter 1 we introduced the problem of vibrations in a diatomic molecule, simply represented by the following diagram (with a spring replacing the actual binding forces):

[Figure: two masses m connected by a spring of spring constant κ.]

To compute the vibrational frequency of this object, we wrote two coupled differential equations for the displacements u1 and u2 of balls 1 and 2,

m d²u1/dt² = –κ(u1 – u2)
m d²u2/dt² = –κ(u2 – u1).

These equations are turned into two linear equations in two unknowns by assuming solutions of the form u1 = A1 sin ωt and u2 = A2 sin ωt, where A1 and A2 are the vibrational amplitudes of atoms 1 and 2. Rearranging terms a little, we have,

(κ – mω²) A1 – κ A2 = 0
–κ A1 + (κ – mω²) A2 = 0

*Optional reading for Physics 213


These two equations can be written in matrix form:

| κ − mω²     −κ     | | A1 |   | 0 |
|  −κ      κ − mω²   | | A2 | = | 0 |

The two linear equations have a non-zero solution only if the determinant of the matrix equals zero. Dividing by κ and defining X = mω²/κ, we see that the determinant condition has the form:

| (1 − X)     −1    |
|   −1     (1 − X)  | = 0

With the definition of the determinant, given at the end of the appendix, you will find the solutions X = 0 and 2, giving ω = 0 and (2κ/m)^1/2. The solution ω = 0 corresponds to the center-of-mass motion of the molecule.
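The same answer comes out of a numerical diagonalization, which is how one would attack the larger matrices below. A sketch with κ = m = 1 (arbitrary units):

```python
# Diagonalize the dynamical matrix of the diatomic molecule; the
# eigenvalues are the squared frequencies ω² = 0 and 2κ/m.
import numpy as np

kappa, m = 1.0, 1.0
D = (kappa / m) * np.array([[ 1.0, -1.0],
                            [-1.0,  1.0]])

omega_sq = np.linalg.eigvalsh(D)               # ascending eigenvalues of ω²
omega = np.sqrt(np.clip(omega_sq, 0.0, None))  # clip tiny negative roundoff
print(omega)                                   # expect 0 and sqrt(2)
```

Replacing D with the 3×3 matrix for the triatomic molecule (or the 20×20 case mentioned below) requires no new ideas, only a bigger array.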
The adventurous reader may follow this procedure in order to determine the resonant frequencies associated with the triatomic linear molecule:

[Figure: three masses m in a line, connected by two springs of constant κ.]

This exercise is a little more complicated (involving a 3×3 matrix), but the general procedure is extremely important. The definition of a 3×3 determinant is given at the end of this appendix. The three coupled equations of motion are:

m d²u1/dt² = –κ(u1 – u2)
m d²u2/dt² = –κ(u2 – u1) – κ(u2 – u3)    (A1-1)
m d²u3/dt² = –κ(u3 – u2)
What we are doing here is finding the “normal modes” of a system of oscillators, i.e., the natural resonances of the system. You will find that there are two compressional resonant frequencies associated with the triatomic linear molecule. In general, a system of N masses connected by springs and allowed to vibrate in one dimension will have N – 1 normal modes, each with a distinct frequency. N masses in 3 dimensions have 3N degrees of freedom. For a solid, these 3N degrees of freedom equal (3 c.m. translational) + (3 rotational) + (3N–6 vibrational).
Finding the normal-mode frequencies is the first step in completely solving the problem, which would also tell us the ratios of amplitudes A1:A2: .. :AN of the atoms in a particular normal mode. In this course, we will not solve for the amplitudes, but the computer program demonstrated in lecture shows the results for various situations. You

may experiment with this normal-mode simulation, called NORMALMODE, written by Professor Jim Smith at UIUC and loaded onto the computers in the lab room.
Professor Smith’s computer simulation shows the actual motions associated with the normal modes for (up to) 20 masses. The results are quite amazing. The normal modes turn out to be standing waves. See the figure for N = 5 at the end of this section. For
20 atoms, there are 19 different wavelengths for these standing waves, corresponding to the N – 1 = 19 different frequencies. The wavelengths of the vibrational waves are:
λn = 2L/n ,    (A1-2)

where L = Na is roughly the length of the line of atoms and n is an integer ranging from 1 to 19. The displacement of atoms in the nth standing wave is roughly given by:

u(x,t) = A (sin knx)(sin ωnt) ,    (A1-3)

where kn = 2π/λn is the wavenumber, and the ωn are determined by finding the roots of the 20th-order equation (“diagonalizing” the 20×20 matrix).
You may notice that the equation for a standing wave is a little different from that of a traveling wave,

u(x,t) = A sin(knx – ωnt) .    (A1-4)

In fact, a standing wave is the superposition of two traveling waves, one moving in the +x direction, sin(knx – ωnt), and one moving in the –x direction, sin(knx + ωnt), as you can verify by using the trigonometric identity, sin A + sin B = 2 sin((A+B)/2) cos((A–B)/2). The speed of each traveling wave is v = ωn/kn = fnλn.
Definition of 2×2 and 3×3 determinants:

| A B |
| C D | = AD − BC,

| A B C |
| D E F | = AEI + BFG + CDH − AFH − BDI − CEG
| G H I |

Exercises
1) Imagine a spring that stretches 6 cm when a 1 kg mass is hung on it. Its spring constant is κ = ______. Is the “spring constant” between the two hydrogen atoms in a hydrogen molecule larger or smaller than this? Take a guess, then compute κH2 using the measured vibrational frequency of the diatomic molecule H2: f = ω/2π = 6.5 × 10^13 Hz.
2) Calculate the normal mode frequencies of a hypothetical diatomic molecule bonded to a solid:

[Figure: a wall connected by a spring κ to a mass m, connected by a second spring κ to a second mass m.]


[Figure: Normal modes for N = 5 masses at rest spacing a (L = Na). The displacement patterns are standing waves: n = 1, λ1 = 2L; n = 2, λ2 = L; n = 3, λ3 = 2L/3; n = 4, λ4 = L/2.]

APPENDIX 2

The Stirling Cycle

The purpose of a heat engine is to convert heat into work. A straightforward heat engine that finds some practical applications is the Stirling engine. The cycle of the Stirling engine has four steps, which we will take one at a time. Throughout, W = work by the gas.
We start with the gas at room temperature, Tc. In Step 1, we fix the volume of the gas cylinder and immerse it in boiling water. The idea is to give the gas some energy to do work in the next process.
Step 1

[Figure: a gas cylinder at fixed volume Va immersed in boiling water (Th = 373 K) absorbs heat Q1; on the p–V diagram (p = NkT/V), the state moves from the Tc isotherm up to the Th isotherm at constant V = Va.]


This is clearly an irreversible process, because heat does not flow from cold to hot. Because no work is done (pΔV = 0), the first law tells us:

Q1 = ΔU = Cv(Th – Tc) = αNk(Th – Tc) .    (Nk = nR)

In Step 2, the hot gas performs isothermal work by pushing out the piston.
Step 2

[Figure: at constant Th the gas expands from Va to Vb, doing work W2 on the piston and absorbing heat Q2 from the boiling water; on the p–V diagram, the state moves along the Th isotherm from Va to Vb.]

The work done in step 2 by the gas is:

W2 = ∫[Va→Vb] p dV = NkTh ∫[Va→Vb] dV/V = NkTh ln(Vb/Va) .

Because the gas’s temperature is held constant, its internal energy does not change. From the First Law:

ΔU2 = Q2 – W2 = 0.

The heat transferred by the hot reservoir to the gas is

Q2 = W2 = NkTh ln(Vb/Va) .

The total heat transferred from the hot reservoir in steps 1 and 2 is

Qh = Q1 + Q2 = αNk(Th – Tc) + NkTh ln(Vb/Va) .

We have successfully converted thermal energy from the hot reservoir into work. Now we must reset the system to its original condition. In Step 3, we begin re-setting the gas temperature by again fixing the volume and taking the container out of the hot bath into the room:

Step 3

[Figure: the cylinder, held at fixed volume Vb, is moved into room-temperature surroundings (Tc = 293 K) and releases heat Q3; on the p–V diagram, the state drops from the Th isotherm to the Tc isotherm at constant V = Vb.]

This is an irreversible process. No work is done. The heat transferred to the environment is αNk(Th – Tc).
Finally, we arrive at Step 4, which recompresses the gas at low temperature:
Step 4

[Figure: at constant Tc the gas is recompressed from Vb back to Va; work W4 is done on the gas and heat Q4 is released to the room; on the p–V diagram, the state moves along the Tc isotherm from Vb to Va.]

The work required in Step 4 to reset the gas isothermally at Tc is given by

W4 = ∫[Vb→Va] p dV = NkTc ∫[Vb→Va] dV/V = NkTc ln(Va/Vb) .


Note that the work done by the gas is negative because Vb > Va; i.e., we do work on the gas. The heat transfer in step 4 is calculated as in step 2.
The total work done by the gas in the entire cycle is:

Wby = W2 + W4 = Nk(Th – Tc) ln(Vb/Va) .

The efficiency of any engine is

ε ≡ (work done by the system)/(heat extracted from the hot reservoir) = Wby/Qh .

For the Stirling cycle, we have the final result

ε = [Nk(Th − Tc) ln(Vb/Va)] / [αNk(Th − Tc) + NkTh ln(Vb/Va)]
  = [(Th − Tc) ln(Vb/Va)] / [α(Th − Tc) + Th ln(Vb/Va)] .

For a volume ratio Vb/Va = 2, you can show that the efficiency is 14.6%. Notice that as the volume ratio becomes large, the efficiency of the Stirling cycle approaches that of the Carnot cycle, ε = 1 – Tc/Th (= 21% in this case).
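Plugging in the numbers for the quoted case (taking α = 3/2 for a monatomic gas is an assumption here; the text leaves α general):

```python
# Stirling-cycle efficiency for T_h = 373 K, T_c = 293 K, V_b/V_a = 2,
# with α = 3/2 (monatomic gas), compared against the Carnot limit.
from math import log

def stirling_eff(alpha, T_h, T_c, ratio):
    return (T_h - T_c) * log(ratio) / (alpha * (T_h - T_c) + T_h * log(ratio))

T_h, T_c = 373.0, 293.0
eps = stirling_eff(1.5, T_h, T_c, 2.0)
carnot = 1.0 - T_c / T_h
print(f"Stirling: {eps:.1%}, Carnot: {carnot:.1%}")   # 14.6% vs 21.4%
```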


APPENDIX 3

Statistical Tools

A. Statistical Definitions and Concepts
The statistics of many particles involves probabilities. We introduce our discussion of statistics by considering the results of a hypothetical exam:
[Figure: histogram of the number of papers N(s) with score s, for s = 0 to 5; the vertical axis runs from 0 to 9.]


This graph tells the final results of the exam. There are no uncertainties or probabilities involved. There is, however, an experiment that you can do with this distribution that involves probabilities. Take all the papers, shuffle them, and ask, for example, “What is the probability that I will pick out a paper with an exam score of 2?” The exam distribution given above now becomes a probability distribution. The probability of picking out an exam with score s is:
P(s) = N(s) / Σ N(s) = N(s) / N ,    (A3-1)

where N is the total number of exam papers. (N = 20 in the example.) A plot of the probability of choosing a paper with score s is therefore,
[Figure: histogram of the probability P(s) of choosing a paper with score s, for s = 0 to 5; the vertical axis runs from 0 to 0.45.]

Here we are dealing with the actual probability, rather than a probability density 𝒫(s), like 𝒫(E) considered in Chapter 3. The probability of choosing a score s is P(s), where P(s) is a pure number ranging between 0 and 1.
The probabilities summed over all possible values of s add to 1:

Σs P(s) = 1 .    (A3-2)

In the above example, the sum is from s = 0 to 5. The average value of s is:

<s> = Σs s P(s) ,    (A3-3)

which, you can verify, equals 3.60 for the above example. Notice that sometimes a bar over the variable is used instead of < >. I mention this because both designations are used in the literature.


Finally, we need a measure of the width of the distribution. The “deviation” defined by Δs = s – <s> has the average value:

<Δs> = Σs (Δs) P(s) = 0 ,    (A3-4)

because Δs can be positive or negative. To gauge the width of the distribution, we must define the standard deviation, or root-mean-square deviation,

σd = <(Δs)²>^1/2 = [ Σs (Δs)² P(s) ]^1/2 .    (A3-5)

In the example above, verify that σd = 1.07 by summing from s = 0 to s = 5.
Sometimes it is convenient to describe a discrete system with a continuous function. We simply evaluate the function at discrete points.

[Figure: a smooth probability density 𝒫(s) drawn through the discrete probabilities P(s).]

Notice the difference between probability P(s) and probability density 𝒫(s), which is by definition a continuous function. They are related by P(s) = 𝒫(s)Δs, where Δs is the interval between the discrete values of s.
You may be worried about dealing with the statistics of continuous variables because it involves integrals rather than sums. In fact, sums are often more difficult to compute than integrals, unless you have a computer program that happily computes either sums or integrals. Generally, when we give you a problem with integrals, we will tell you how to evaluate the particular integral. Appendix 4 is a table of all the integrals we will need in this course.
Finally, just a few general statements about probabilities. There is no uncertainty about a probability distribution. It is a very definite function (e.g., P(E) = CE1/2exp(–E/kT) for the ideal gas) or result (e.g., the distribution of grades given above). The fuzziness comes about when one uses these distribution functions to predict the occurrence of a given event. For example, if one asks “in a given experiment, how many molecules will I find within a certain range of energies?” or “after 100 tries, how many times will
I pick an exam with a score of 3 from the stack of exams?”, the answers to these questions have an uncertainty.
If you could count the molecules within the range ΔE, the result would be pretty close to N 𝒫(E) ΔE. Likewise, if you “shuffle the pile and choose an exam” 100 times, a histogram of the results will closely resemble the P(s) graph above, multiplied by the number of experiments, 100.
Such is the way of statistics.


B. The Gaussian Distribution
A particularly useful function in statistics is the Gaussian function, which may be written as a function of the continuous variable x:

[Figure: the function e^(−x²), with area π^1/2 under the curve and value 0.607 at x = 1.]

A Gaussian probability density is written:

𝒫(x) = C e^(−x²/2σd²) ,    (A3-6)

[Figure: the Gaussian density, peak value C at x = 0 and value 0.607C at x = σd; the area under the curve is C(2πσd²)^1/2 = 1.]

where σd = (<(Δx)²>)^1/2 is the standard deviation of the distribution, as proven below. A displacement of x = σd from the peak of the distribution corresponds to 𝒫(σd) = C e^(−1/2) = 0.607 C. A probability density must be normalized by integrating over all possible x values:

∫(−∞ to +∞) 𝒫(x) dx = 1.

Consulting the table in Appendix 4, we find,

∫(−∞ to +∞) e^(−ax²) dx = (π/a)^1/2 ,    (A3-7)

which yields the normalization constant

C = (2πσd²)^(−1/2) .    (A3-8)


The probability density is therefore,

𝒫(x) = (2πσd²)^(−1/2) e^(−x²/2σd²) .    (A3-9)

The mean square displacement is given by,

<x²> = ∫ x² 𝒫(x) dx .    (A3-10)

Consulting our integral table again, we use

∫(−∞ to +∞) x² e^(−ax²) dx = (π/4a³)^1/2    (A3-11)

to show

<x²> = σd² .    (A3-12)

The σd in the Gaussian formula is indeed the root mean square (rms) deviation, or standard deviation, as stated earlier.
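Both statements, the normalization and <x²> = σd², are easy to confirm numerically. A crude midpoint integration, with σd = 1.5 chosen arbitrarily:

```python
# Numerically check that the Gaussian density (A3-9) is normalized and
# that its second moment equals σ_d².
from math import exp, pi, sqrt

sigma = 1.5
def P(x):
    return exp(-x * x / (2 * sigma * sigma)) / sqrt(2 * pi * sigma * sigma)

dx = 1e-3
xs = [i * dx for i in range(-15000, 15001)]   # covers ±10 σ
norm = sum(P(x) for x in xs) * dx
second_moment = sum(x * x * P(x) for x in xs) * dx
print(norm, second_moment)     # expect ≈ 1 and ≈ σ² = 2.25
```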

C. Gaussian Approximation to the Binomial Distribution
For the case of N spins at B = 0, we can approximate the binomial distribution with a Gaussian function by setting x → m, where m = Nup – Ndown. The probability density (A3-9) becomes,

𝒫(m) = (2πσd²)^(−1/2) e^(−m²/2σd²) .

However, for a given problem, m is either an odd or even integer depending on whether N is odd or even, i.e., Δm = 2. Thus, in order to evaluate the probability P(m) of observing the spin excess m, we multiply the probability density by the interval Δm = 2 (see Section A):

P(m) = 𝒫(m) · 2    and    σd² = <m²> = N ,

leading to the final result,

P(m) = (2/(2πN)^1/2) e^(−m²/2N) .


D. Standard Deviations Applied to the Random Walk
The standard deviation σd is determined from,

σd² ≡ <x²> = ∫ x² 𝒫(x) dx .

In applying the Gaussian formula to a binomial distribution, we must determine σd in terms of the binomial “step.” For the random walk problem, the net displacement for M equal-length steps is x = Σ si, where the step size si = ±ℓx. Therefore,

<x²> = <(Σ si)(Σ sj)> = <Σ si²> + <Σ si sj> .

The second term vanishes due to the equal number of + and – terms, leaving the first term with exactly M terms, all equaling <si²> = ℓx². So,

σd² = <x²> = Mℓx² .
This analysis explains why statistical fluctuations in a “random walk” are proportional to the square root of the number of “steps,” σd ∝ M^1/2.
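A quick simulation confirms the √M scaling (the step length ℓx = 1 and the trial count are arbitrary choices):

```python
# Monte Carlo check that the rms displacement of an M-step random walk
# with equal steps ±1 grows as √M.
import random
random.seed(1)

def rms_displacement(M, trials=4000):
    total = 0.0
    for _ in range(trials):
        x = sum(1 if random.random() < 0.5 else -1 for _ in range(M))
        total += x * x
    return (total / trials) ** 0.5

for M in (25, 100, 400):
    print(M, rms_displacement(M))   # expect roughly 5, 10, 20
```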
A binomial distribution with large M is well approximated by a Gaussian probability density, Eq. (A3-9), evaluated at discrete points, e.g., x = mℓx (m = integer):

M steps of constant size ℓx, with net displacement x = (Mright – Mleft)ℓx = mℓx:  σd² = Mℓx² ;

or, M steps of mean size ℓx, with net displacement x = Σ si (si variable):  σd² = 2Mℓx² .

The factor of 2 in the latter case comes about because the probability of a step length s (always positive) has the form p(s) = (1/ℓx) exp(–s/ℓx). From the integral table in Appendix 4, we find

∫ p(s) ds = 1 ,
<s> = ∫ s p(s) ds = ℓx , and
<s²> = ∫ s² p(s) ds = 2ℓx² ,

where ℓx is the mean step length. We therefore conclude

<Σ si²> = Σ <si²> = Σ 2ℓx² = 2Mℓx² ,

as stated in Section C of Chapter 5.


APPENDIX 4

Table of Integrals

Here is a table of commonly used integrals involving the Boltzmann, Gaussian, and Planck distributions. a and b are constants, with a, b > 0 and n a positive integer.

∫(0 to ∞) e^(−bx) dx = 1/b
∫(0 to ∞) x² e^(−bx) dx = 2!/b³
∫(0 to ∞) x³ e^(−bx) dx = 3!/b⁴
∫(0 to ∞) xⁿ e^(−bx) dx = n!/b^(n+1)

∫(0 to ∞) e^(−x²) dx = (1/2)√π
∫(0 to ∞) e^(−ax²) dx = (1/2)√(π/a)
∫(0 to ∞) x e^(−ax²) dx = 1/(2a)
∫(0 to ∞) x² e^(−ax²) dx = √π/(4a^(3/2))
∫(0 to ∞) x³ e^(−ax²) dx = 1/(2a²)
∫(0 to ∞) x⁴ e^(−ax²) dx = 3√π/(8a^(5/2))

To find ∫ E^(1/2) e^(−bE) dE, etc., use E = x².

∫(0 to ∞) x³/(e^x − 1) dx = π⁴/15
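Two entries of the table can be spot-checked numerically (b = 2 and the integration cutoffs are arbitrary test choices):

```python
# Numerically verify one Boltzmann-type entry and the Planck integral.
from math import exp, factorial, pi

def integrate(f, lo, hi, n=200000):
    # midpoint rule
    h = (hi - lo) / n
    return h * sum(f(lo + (k + 0.5) * h) for k in range(n))

b = 2.0
boltz = integrate(lambda x: x**3 * exp(-b * x), 0.0, 20.0)
print(boltz, factorial(3) / b**4)          # both ≈ 0.375

planck = integrate(lambda x: x**3 / (exp(x) - 1.0), 1e-9, 40.0)
print(planck, pi**4 / 15)                  # both ≈ 6.494
```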


APPENDIX 5

Exclusion Principle and Identical Particles

Each quantum-mechanical state of an electron can hold only a single electron. Multiple occupancy is forbidden. For example, the 1s orbital of an atom has two possible states for an electron: 1s↑ and 1s↓, where the ↑ and ↓ represent the electron spin. The Pauli Exclusion Principle prohibits more than one electron from occupying each state. This is the basis for the periodic table of atoms, which have electronic configurations 1s²2s²2p⁶… according to the exclusion principle. Single occupancy is the rule for electrons (and for all particles with half-integral spin, known as fermions).
Let us now apply this single-occupancy rule to our prior example with four bins (Chapter 6) and see what happens. Begin with two distinguishable particles, A and B, and figure out the possibilities:



Ω = ______ (distinguishable particles, single occupancy)

Your answer should be 12. The result for the general case of M bins and N particles is:

Ω = M!/(M − N)!     (N distinguishable particles, M single-occupancy bins)

As an example, imagine 4 distinguishable particles, A, B, C, and D, arranged in 10 single-occupancy bins. As we have seen, the total number of possible arrangements is

Ω = M!/(M − N)! = 10!/6! = 5040.

If we now change to 4 identical particles, we have overcounted the number of arrangements. Consider the four specific bins occupied by A, B, C, and D above. What is the total number of ways those bins can be occupied?

A can be in 4 positions:  Axxx  xAxx  xxAx  xxxA
Each of these has 3 possibilities for B:  Bxx  xBx  xxB
Each of these has 2 possibilities for C:  Cx  xC
Leaving only one spot for D:  D

yielding 4! = 24 permutations of ABCD. Therefore the total number of arrangements for identical particles is:

Ω = 5040/24 = 210.
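These counts are easy to verify with Python's combinatoric helpers (a quick check, using the same M = 10, N = 4 as above):

```python
import math

M, N = 10, 4
# N distinguishable particles in M single-occupancy bins: M!/(M - N)!
omega_dist = math.factorial(M) // math.factorial(M - N)
# identical particles: divide out the N! permutations of the labels
omega_ident = omega_dist // math.factorial(N)
# math.comb counts the identical-particle arrangements directly
omega_comb = math.comb(M, N)
```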



In general, changing from distinct to identical particles reduces the number of microstates by a factor of N!:

Ω = M!/[(M − N)! N!]     (N identical particles, M single-occupancy bins)

This result is exact. For large N and M in the dilute gas limit, N << M, this latter result becomes approximately equal to the unlimited-occupancy case for identical particles (Equation (6-3)):

M!/[(M − N)! N!] ≈ M^N/N!

To see that this latter result is reasonable, consider the case M = 100, N = 4:

M!/[(M − N)! N!] = (100 × 99 × 98 × 97)/(4 × 3 × 2 × 1) ≈ M⁴/4!
In using this approximation, we are restricted to the “low density limit,” also known as the “classical limit.” This result should come as no surprise because in the low-density limit (N<<M), there is little chance for particles to attempt to occupy the same state.
In considering the ideal gas at low densities we use the latter form for Ω because it is algebraically simpler.
In dealing with factorials of large numbers, there is a very useful formula known as Stirling's formula, valid for N >> 1,

N! ≈ √(2πN) N^N e^(−N),   i.e.,   ln(N!) = N ln N − N + (1/2) ln(2πN),

which, for large numbers, is well approximated by ln(N!) ≈ N ln N − N.
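Stirling's formula is easy to test numerically. A short Python check (math.lgamma(N + 1) returns ln(N!) without overflow; N = 1000 is an arbitrary choice):

```python
import math

N = 1000
exact = math.lgamma(N + 1)                                    # ln(N!)
full = N * math.log(N) - N + 0.5 * math.log(2 * math.pi * N)  # with (1/2)ln(2*pi*N)
crude = N * math.log(N) - N                                   # large-N approximation
```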




APPENDIX 6

Sum over States and Average Energy

As we have seen in Chapters 8 and 9, the "constant" C multiplying the Boltzmann factor in Eq. (8-8) is not really a constant. The probabilities must all add up to 1, so C depends on both the distribution of states and the temperature. Consider a hypothetical distribution of states at the following two temperatures:

[Figure: bar charts of the probabilities Pn = C1 exp(−βEn) at low T and Pn = C2 exp(−βEn) at higher T, versus the state energies E1, E2, E3, E4, …]


In each case, C must be chosen such that the sum of probabilities over all states equals one: Σn C exp(−βEn) = 1. This means that C = 1/Σn exp(−βEn). It is convenient to define the "sum over states" or "partition function",

Z = Σn e^(−βEn)     (A6-1)

such that,

Pn = e^(−βEn)/Z .     (A6-2)

Z = Z(T) represents the number of states that are likely to be occupied at temperature T, just as Ω(U) represents the number of accessible states for an isolated system with energy U. If the function Z(T) is known, you can determine the average energy of the system by noticing what happens when you take the derivative of Z with respect to β:

dZ/dβ = −Σn En e^(−βEn)

The average energy is defined by

<E> = Σn En Pn = (1/Z) Σn En e^(−βEn) .

Therefore,

<E> = −(1/Z) dZ/dβ .     (β = 1/kT)     (A6-3)

The harmonic oscillator discussed in Chapter 8 has a simple ladder of equally spaced energy levels, En = nε (n = 0, 1, 2, …). In this case the sum over states becomes:

Z = Σn e^(−βEn) = Σn e^(−βnε) = Σn x^n     (A6-4)

where x ≡ e^(−βε).


The sum over states is evaluated using a neat little math trick:

Z = Σn x^n = 1 + x + x² + x³ + … = 1 + x(1 + x + x² + …) = 1 + xZ     (A6-5)

which yields the result,

Z = 1/(1 − x) .     (A6-6)

Using Equations (A6-3) and (A6-6), the average energy of a harmonic oscillator in contact with a thermal reservoir becomes:

<E> = ε/(e^(βε) − 1)     (A6-7)
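Equations (A6-6) and (A6-7) can be checked against a direct (truncated) sum over the ladder. A Python sketch, with arbitrary choices of ε and kT:

```python
import math

eps = 1.0           # quantum of energy, arbitrary units
kT = 0.8            # arbitrary temperature in the same units
beta = 1.0 / kT
x = math.exp(-beta * eps)

# direct sum over the ladder E_n = n*eps, truncated where terms are negligible
Z_sum = sum(x**n for n in range(200))
Z_closed = 1.0 / (1.0 - x)                               # Eq. (A6-6)

E_avg = sum(n * eps * x**n for n in range(200)) / Z_sum  # <E> by direct average
E_closed = eps / (math.exp(beta * eps) - 1.0)            # Eq. (A6-7)
```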




APPENDIX 7

Debye Specific Heat of a Solid*

The statistical counting of phonons in a solid is almost identical to that for photons, given in Chapter 9. The main differences between these two cases are 1) we must replace the velocity of light with the velocity of sound, and 2) there is an upper limit to the vibrational frequencies of a solid, fmax ≈ vsound/a, where a is the atomic spacing in the lattice. In contrast, photon frequencies have no upper limit.
At low temperatures the total vibrational energy of a solid increases as U ∝ T⁴, but when kT becomes comparable to ε = hfmax, the thermal energy assumes the dependence U = 3nRT given by the Equipartition Theorem. The crossover between these regimes occurs at a temperature where kT is roughly equal to hfmax. Consequently, the solid at high temperature has a constant specific heat given by 3R, whereas at low temperature the solid has a specific heat (per mole) of the form,

cv = (1/n) dU/dT ≈ 234 R (T/TD)³     (A7-1)

*Reference material not generally covered in Physics 213


where TD is known as the Debye temperature, characteristic of a particular solid. These ideas are sketched below for a crystal of diamond.

[Figure: molar specific heat Cv of diamond (carbon, atomic weight 12 g/mol) versus temperature from 0 to 1500 K; Cv ∝ T³ at low temperatures, approaching the constant value 3R at high temperatures.]

The equation above shows that the specific heat of a solid achieves about one-half of its high-temperature value (i.e., 234R(T/TD)³ = 3R/2) at approximately (1.5/234)^(1/3) = 19% of the Debye temperature. The Debye temperature depends on the masses of the atoms in the solid and the strength of the atomic bonds. For typical solids, the Debye temperature is in the range 100–500 K, so the specific heat of most solids at room temperature is close to the Equipartition value 3R.
For crystalline diamond, having light carbon atoms and stiff covalent bonds, the maximum phonon frequency is high and the corresponding Debye temperature is measured to be 2230 K. This implies that for diamond, the crossover temperature is about 400 K, as indicated in the graph above.
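The crossover estimate for diamond can be reproduced in a few lines of Python (TD = 2230 K is the measured value quoted above):

```python
R = 8.314            # J/mol-K
T_D = 2230.0         # Debye temperature of diamond, K

def c_v(T):
    # low-temperature Debye form, Eq. (A7-1); valid only for T << T_D
    return 234.0 * R * (T / T_D) ** 3

# temperature at which c_v reaches half of the high-T value 3R
T_half = (1.5 / 234.0) ** (1.0 / 3.0) * T_D    # about 19% of T_D, ~400 K
```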

Exercise
1) How much heat must be removed from a 1-gram crystal of diamond in order to lower its temperature by 10 K when its initial temperature is, a) 1000 K, b) 100 K,
c) 10 K? Give a physical explanation for your results.


APPENDIX 8

Absolute Entropy of an Ideal Gas

In Chapter 11, we showed that a classical gas of indistinguishable particles has an entropy associated with its spatial coordinates,

S = k ln Ω = Nk (ln(nc/n) + 1)

where nc = 1/δV is the density of cells chosen, and n = N/V is the density of particles. When the wave nature of the particles is taken into account, the entropy of the ideal gas is given by the Sackur–Tetrode equation,

S(N,V,T) = Nk (ln(nQ/n) + 5/2) ,     (A8-1)

with

nQ = (mkT/2πℏ²)^(3/2)     (A8-2)

the quantum density.



Practical forms for n and nQ (to about 1% accuracy) are:

n = p/kT = 2.45 × 10²⁵ meter⁻³ (p/1 atm)(300 K/T)

nQ = 10³⁰ meter⁻³ (m/mp · T/300 K)^(3/2)

where m/mp is the molecular weight of a particle, and mp ≈ mH.
Sometimes it is convenient to deal with the entropy per mole of material, i.e., the entropy for NA = 6.022 × 10²³ particles. With Nk = nR from Chapter 5, the molar entropy, s = S/n, of an ideal monatomic gas is,

s = R (ln(nQ/n) + 5/2) ≡ strans     (A8-3)

where R = 8.314 J/mol-K is the gas constant. This entropy applies only to translational degrees of freedom, and so is denoted strans. Diatomic molecules have rotational and vibrational degrees of freedom that also contribute to the total entropy, as we will consider in Appendix 9.
Below we tabulate the calculated quantum density and entropy for two monatomic gases at their boiling temperatures for p = 1 atm. Also listed are the measured entropies under these conditions, as discussed below. We also list the translational entropy, strans (Eq. (A8-3)), for some diatomic and polyatomic gases at 300 K and 1 atmosphere. We wish to compare the calculated entropy strans to the experimentally measured total entropy smeas:

	gas	m/mp	T (K)	nQ (per m³)	strans (J/mol-K)	smeas (J/mol-K)
monatomic:
	Ar	40	87.3	3.93 × 10³¹	129.5	129.8
	Ne	20	27.2	2.42 × 10³⁰	96.5	96.4
diatomic:
	H2	2	300	2.79 × 10³⁰	118	131
	N2	28	300	1.46 × 10³²	150	192
	O2	32	300	1.79 × 10³²	152	205
polyatomic:
	H2O	18	300	7.55 × 10³¹	145	189
	NH3	17	300	6.93 × 10³¹	144	193


The excellent agreement between experiment and theory for the two monatomic gases confirms the validity of statistical mechanics and quantum mechanics. The discrepancy between strans and smeas for diatomic and polyatomic molecules implies that they gain entropy from degrees of freedom that we have not accounted for, i.e., their "internal" motions. The agreement between experiment and theory for the monatomic gases is pretty amazing. Think about it. The measurement of absolute entropy requires the experimenter to cool the particles down to near-zero Kelvin, where they form a solid. Small increments of heat ΔQ are added to the system and the temperature T is recorded at each step. The measured increments in entropy, ΔS = ΔQ/T, are added up as the solid heats up, turns into a liquid, warms further, and vaporizes into gas. The values of smeas in the tables above represent the sum, S = Σ(ΔQ/T) per mole, over this tortuous path involving two phase transitions. Yet, the end result is accurately predicted by the statistical properties of an ideal gas of quantum particles!
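The Ar entry in the table can be reproduced from Eqs. (A8-2) and (A8-3). A Python sketch using rounded physical constants:

```python
import math

k = 1.381e-23       # Boltzmann constant, J/K
hbar = 1.055e-34    # reduced Planck constant, J*s
m_p = 1.661e-27     # atomic mass unit (~ mass of hydrogen), kg
R = 8.314           # gas constant, J/mol-K

T = 87.3            # boiling point of Ar at 1 atm, K
p = 1.013e5         # 1 atm in Pa
m = 40 * m_p        # argon, m/mp = 40

n = p / (k * T)                                      # particle density
n_Q = (m * k * T / (2 * math.pi * hbar**2)) ** 1.5   # quantum density, Eq. (A8-2)
s_trans = R * (math.log(n_Q / n) + 2.5)              # Eq. (A8-3), J/mol-K
```

The result lands within rounding of the tabulated values nQ = 3.93 × 10³¹ m⁻³ and strans = 129.5 J/mol-K.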
For diatomic molecules at room temperature, the internal motions are largely associated with rotations. In Appendix 9, we discuss the entropy associated with translation plus rotation of diatomic molecules such as N2 and H2, which has the form:

S = Strans + Srot = Nk (ln(nQ Zr/n) + 7/2)     (A8-4)

where Zr ≈ kT/εr and εr is a quantum of rotational energy. Zr roughly equals the average number of rotational energy levels that are occupied per particle.

Exercise
1) a) Compute the absolute molar entropy of a gas of N2 at 300 K, assuming no internal motions of the molecules, and compare your answer to the value given in the table in this appendix. b) Now compute the absolute entropy at 300 K by including internal rotations of the molecules. Use Equation (A8-4) and assume that the quantum of rotation is εr = 0.00026 eV. Is it closer to the measured value in the table? A 10% discrepancy is reasonable considering the approximations made.




APPENDIX 9

Entropy and Diatomic Molecules*

In this section we will consider the entropy of ideal diatomic gases. In Chapter 11 we stated that the entropy of a monatomic gas with particle density n = N/V is

S = k ln Ω = Nk (ln(nQ/n) + 5/2) ,

where nQ = (mkT/2πℏ²)^(3/2). The monatomic gas has only translational degrees of freedom, so this formula is also denoted Strans.
In addition to translating, diatomic molecules can rotate and vibrate. Quantum mechanics dictates that these motions have discrete energy levels. The entropy associated with these motions involves a sum over these quantum states. Entropy is an additive property, so we simply add these contributions to the translational entropy: S = Strans + Srot + Svib.
First consider vibrations of a diatomic molecule with mass 2m and spring constant κ. The quantum-mechanical harmonic oscillator has equally spaced energy levels, En = nεv = nhf, with f = (2κ/m)^(1/2)/2π the frequency of vibration. In Chapter 5, we defined q = U/εv as the number of energy quanta in the system. The number of accessible states for a system of N oscillators is,

Ω = (q + N − 1)!/[(N − 1)! q!] ≈ q^N/N! .     (A9-1)

*Reference material generally not covered in Physics 213.


In the last step we have assumed that N and q are large numbers and that q >> N, the "classical limit." Taking the logarithm and using the Stirling approximation, ln(N!) ≈ N ln N − N,

Svib ≈ Nk (ln(q/N) + 1) = Nk (ln(U/Nεv) + 1)     (A9-2)

Svib ≈ Nk (ln(kT/εv) + 1)     when kT > εv .

In the last step, we used the result U ≈ NkT from Chapter 8, as also predicted by the Equipartition theorem (Chapter 3).
[Detail: The ratio (kT/εv) in the above equation is actually an approximation for the exact sum over states for the harmonic oscillator, Zvib = 1/(1 − e^(−εv/kT)), which does not assume kT > εv. This "sum over states" was derived in Appendix 6. For most diatomic molecules at 300 K, kT < εv; therefore, Zvib ≈ 1 and <E>/kT < 1, implying Svib is small compared to other contributions to the entropy.]
Rotational energy levels are determined by the quantization of angular momentum. It turns out that the number of states with energy less than E equals N(E) = E/εr. (For more details, see the discussion by Schroeder, referenced in the Preface.) This result is similar to the result for vibrations, N(E) = E/εv. The rotational entropy for diatomic molecules has a form similar to the vibrational entropy,
Srot = Nk (ln(Zr) + 1)     (A9-3)

where Zr ≈ kT/εr when kT >> εr.
Rotational energy levels are generally more closely spaced than those of vibration. For example, for hydrogen (H2), εv ≈ 0.27 eV and εr ≈ 0.015 eV, compared to kT = 0.026 eV at 300 K. For molecules like H2 and N2 only rotations (not vibrations) are thermally active at 300 K, and the total entropy has the form

S = Strans + Srot = Nk (ln(nQ Zr/n) + 7/2)     (A9-4)

with Zr ≈ kT/εr when kT >> εr. See Kittel and Kroemer (ref. Preface), p. 83, for a more exact treatment.



Finally, there may be spin degrees of freedom. As discussed in Chapter 5, electrons have spin = ½, which gives rise to two energy levels, E = ±μB ("spin-up" and "spin-down"), in a magnetic field B. For N spins the total number of states, Ω, equals 2^N; therefore,

Sspin-1/2 = k ln Ω = Nk ln(2).

In general,

Sspin = Nk ln Zspin.     (A9-5)

For B = 0, Zspin equals the number of spin levels, 2S + 1, where S is the total spin quantum number. To determine the spin S of an atom or molecule, one must consider the number of unpaired electrons. H2 and N2 have no unpaired electrons, so Sspin = Nk ln(1) = 0.
For O2, the ground molecular level has two unpaired electrons with S = 1, providing three magnetic sublevels, ms = −1, 0, +1, that lead to Sspin = Nk ln(3).
Adding the terms S = Strans + Srot + Svib + Sspin involves multiplying the arguments of the logarithms, i.e., replacing nQ with nQZint, or pQ with pQZint, where Zint = ZvibZrotZspin. Zint represents the contribution of internal motions to the number of thermally accessible states for a molecule. This procedure also applies to F, G, and the chemical potential μ. Specifically,

μ = kT ln(p/(pQ Zint)) = kT ln(n/(nQ Zint))     (A9-6)

with n = N/V. The following table gives rough values of ε's and Z's for common diatomic molecules at 300 K, where kT = 0.026 eV:

	molecule	m/mp	εr (eV)	Zrot ≈ kT/εr	Zvib	Zint = ZvibZrotZspin
	H2	2	.015	2	≈ 1	≈ 2
	N2	28	.00026	100	≈ 1	≈ 100
	O2	32	.00036	72	≈ 1.07	≈ 230

We can now see if internal motions account for the difference between smeas and strans that we tabulated in Appendix 8. Adding the contributions from rotation and spin using the values of Zint in the table yields the following results (recall the molar entropy s = S/n; the internal contribution, listed as srot, is R(ln(Zint) + 1)):

	gas	Zint	strans	srot	strans + srot	smeas
	H2	2	118	14	132	131
	N2	100	150	47	197	192
	O2	230	152	54	206	205

[all entropies in units of J/mol-K]


The measured values of entropy are taken from the Appendix in Zumdahl’s text Chemical Principles.
The estimates of Strans + Srot given here should not be taken too seriously. The assumption Zr ≈ kT/εr tends to overestimate this contribution to the entropy, and the approximation kT/εr >> 1 is not well met for H2. Considering the approximations, agreement within a few percent is quite acceptable. Mainly, however, this table shows that the internal motions of the molecules contribute significantly to the total entropy of a molecular gas.
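The N2 row of the table can be reproduced from Eqs. (A8-3) and (A8-4) with the tabulated densities; a quick Python check:

```python
import math

R = 8.314           # J/mol-K
# N2 at 300 K and 1 atm, using the tabulated densities
n = 2.45e25         # particle density, m^-3
n_Q = 1.46e32       # quantum density, m^-3
Z_r = 100           # kT/eps_r with eps_r = 0.00026 eV and kT = 0.026 eV

s_trans = R * (math.log(n_Q / n) + 2.5)          # Eq. (A8-3), per mole
s_total = R * (math.log(n_Q * Z_r / n) + 3.5)    # Eq. (A8-4), per mole
s_rot = s_total - s_trans                        # = R*(ln(Z_r) + 1)
```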
Internal motions also affect the equilibrium constants in chemical reactions. Basically, one must replace nQ with nQZint for each of the molecules, where Zint represents the number of states due to internal motions. For the reaction aA + bB ↔ cC, this produces another factor,

(Zint,C)^c / [(Zint,A)^a (Zint,B)^b]

multiplying the right side of Eq. (12-12), and included in k(T).
Internal motions affect the heat capacities of ideal gases. Specifically, Cv = ΔQ/ΔT = TΔS/ΔT. When rotations are active, the temperature-dependent part of S is (5/2)Nk ln(T), because nQ has a T^(3/2) dependence and Zrot ∝ T. Therefore,

Cv = T dS/dT = (5/2)Nk ,

as predicted by the Equipartition Theorem for diatomic molecules.


APPENDIX 10

Vapor Pressure of a Vibrating Solid*

In Chapter 13 we derived the vapor pressure of a simple solid—one with no internal motions or entropy. Real solids have internal motions and therefore significant entropy. It is interesting to see how the entropy of a vibrating solid affects the vapor pressure. Do you think that allowing vibrations in the solid will increase or decrease the vapor pressure? Take a guess: I ___  D ___. Let's do the problem. It requires us to know the entropy associated with oscillations.
Assume that the solid has a "typical" vibration energy εv and that kT > εv. We considered the harmonic oscillator in Chapters 7 and 8. The energy of Ns harmonic oscillators able to vibrate in 3 dimensions is Uvib = 3NskT. In Appendix 9 we show that the entropy is,

Svib ≈ 3Nsk (ln(kT/εv) + 1)     (A10-1)

As considered in Chapter 13, an atom in the solid is bound by Δ; therefore, U = −NsΔ + 3NskT. The Helmholtz free energy, F = U − TS, for the solid is:

Fs = −NsΔ + 3NskT − 3NskT (ln(kT/εv) + 1)
   = −NsΔ − 3NskT ln(kT/εv) .     (A10-2)

*Reference material generally not covered in Physics 213.


We differentiate Fs to get the chemical potential:

μs = dFs/dNs = −Δ − 3kT ln(kT/εv) .     (A10-3)

Setting μs equal to the chemical potential of the gas, μg = kT ln(p/pQ), we find the vapor pressure of the harmonic solid with binding energy Δ,

pvapor = pQ (εv/kT)³ e^(−Δ/kT) .     (A10-4)

Vibrations have produced an extra factor, not present in Eq. (13-12). For typical solids at 300 K, the factor (εv/kT)³ ranges from about 0.1 to 0.3. Therefore, vibrations in the solid cause a decrease in the vapor pressure. How can that be? Doesn't the added motion in the solid increase the possibility of knocking an atom into the gas, thereby increasing the vapor pressure?
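The size of the effect is easy to see numerically. A Python sketch, where the cubed ratio follows from setting μs = μg as above, and the values of εv and Δ are illustrative assumptions rather than data from the text:

```python
import math

kT = 0.026          # eV at 300 K
eps_v = 0.015       # illustrative vibrational quantum, eV (assumed value)
Delta = 0.5         # illustrative binding energy, eV (assumed value)

boltz = math.exp(-Delta / kT)
p_rigid = boltz                     # p/pQ for the rigid solid
factor = (eps_v / kT) ** 3          # prefactor produced by vibrations
p_vib = factor * boltz              # p/pQ with vibrations
```

Since the prefactor is less than one, the vibrating solid has the lower vapor pressure.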
The answer is in the influence of entropy. By allowing vibrations, we have created more states in the solid. More states in the solid mean more probability of finding atoms in the solid, and thus lower vapor pressure. This effect is illustrated graphically by plotting the chemical potential for several pressures, and plotting the corresponding phase diagram (as in Chapter 13).

[Figure: μ versus T for the gas at pressures p1 < p2 < p3, with μg = −kT ln(pQ/p) and pQ ∝ T^(5/2), compared to μs = −Δ (no vibrations) and μs = −Δ − 3kT ln(kT/εv) (with vibrations); and the corresponding p-T phase diagram, p = pQ e^(−Δ/kT), showing how vibrations shift the boundary between the solid and gas regions.]

Solutions to Exercises

Introduction
1) 1, 10, 45, 120, 210, 252, 210, 120, 45, 10, 1
2) W = ∫ p dV = NkT ∫ (1/V) dV = NkT ln(V2/V1).

Chapter 1
1) Using Won = Δ(KEcm) + ΔU, where U is the PE when the spring is included in the system:
Mass alone: ∫ F du = mvf²/2 + 0
Mass plus spring: 0 = mvf²/2 + κ(u² − uo²)/2
Because ∫ F du = −∫ κu du = −κ(u² − uo²)/2, the equations are equivalent.
2) a) vcm = (2Fd/m)^(1/2) from the c.m. equation, not the Work-Energy equation.
b) Considering the "point" of contact between tire and road, can you see that the force applied by the earth acts through zero distance? The earth isn't moving, and a force acting through zero distance does no work. If the earth were providing the energy for the car to move (by doing work), we would not need fossil fuel.
c) Internal chemical energy U converted to c.m. energy.
d) Work-Energy equation: Won = Δ(KEcm) + ΔU = 0 implies ΔU = mvcm²/2.


3) Believe Equation (1-14). In one of the two cases, a constant force acts through a greater distance and imparts rotational motion, but the center-of-mass acceleration…
4) a) c.m. Eq.: Fdcm = mvcm²/2 = 18 J, vcm = 2.12 cm/s.
b) Work-Energy Eq.: FD = mvcm²/2 + ΔU = 21 J, ΔU = 21 J − 18 J = 3 J.

Chapter 2
1) Consider how T behaves as energy is added to the system. Only one dependence is reasonable.

[Figure: three sketches of T versus U, labeled (a), (b), and (c).]
2) Stot = (3/2)k (10 ln(U1) + 40 ln(U2)), with U2 = Utot − U1
dStot/dU1 = (3/2)k (10/U1 − 40/U2) = 0 in equilibrium
U2 = 4U1 in equilibrium (maximum entropy)

[Figure: S1, S2, and Stot versus U1 from 0 to Utot; Stot is maximized at the equilibrium value of U1.]

3) S = (3/2)Nk ln(U) + Nk ln(V) + constants
(∂S/∂U)V = (3/2)Nk/U; (∂S/∂V)U = Nk/V
(∂S/∂U)V = 1/T; 1/T = (3/2)Nk/U gives U = (3/2)NkT for the ideal monatomic gas.
ΔS = (∂S/∂U)V dU + (∂S/∂V)U dV with dU = −p dV
ΔS = (1/T) dU + (Nk/V) dV = (−(1/T)p + (Nk/V)) dV.


As in Eq. (2-9), for non-zero dV, the coefficient must vanish in equilibrium (ΔS = 0):
(1/T)p = Nk/V reduces to pV = NkT, the ideal gas law.
4) S = S1 + S2. For the process shown, ΔS = dU1/T1 + dU2/T2 = −Q/T1 + Q/T2
ΔS = Q(1/T2 − 1/T1) is negative for T1 < T2, implying decreasing entropy, in violation of the Second Law of Thermodynamics.

Chapter 3
1) dS/dU = 1/T = (3/2)Nk/U, so dS = (3/2)Nk dU/U. Integrating:
S2 − S1 = (3/2)Nk (ln(U2) − ln(U1))
S = (3/2)Nk ln(U) + constant for an ideal monatomic gas
2) a) Ideal gas law: pV = NkT = (10⁵ Pa)(2 × 10⁻³ m³) = 200 J ≈ 2 liter-atm
Diatomic gas: U = (5/2)NkT = 500 J. Assume pure N2 below:
b) n = pV/RT = (1 atm)(2 liter)/(.082 liter-atm/mol-K)(300 K) = .0813 mol
For N2 gas, mgas = (.0813 mol)(28 g/mol) = 2.3 g = 2.3 "pennies"
c) mgh = 500 J, h = (500 J)/(10⁻³ kg)(10 m/s²) = 50,000 meters
d) penny: (1/2)mvf² = 500 J, vf = (1000 J/.001 kg)^(1/2) = 1000 m/s
gas: (1/2)m<v²> = (3/2)kT per molecule (translational energy only)
average speed: v = (3kT/m)^(1/2) = 517 m/s, using m = (.028 kg/mol)/(6.022 × 10²³ molecules/mol) = 4.65 × 10⁻²⁶ kg and kT = (1.38 × 10⁻²³ J/K)(300 K) = 4.14 × 10⁻²¹ J. The purpose of this exercise is to give a sense for how fast the molecules are moving.
e) ΔU = Cv ΔT = (5/2)Nk (3 K) = 5 J
3) mgh = CV ΔT. CV = 3nR = 3(8.314 J/mol-K)(1 mol/.012 kg)(1 kg) = 2079 J/K.
ΔT = 10 J/(2079 J/K) = .005 K. Not much.
4)
	E/kT	.1	.25	.5	1.0	1.5	2.0	3.0
	(E/kT)^(1/2) e^(−E/kT)	.286	.389	.428	.368	.273	.191	.086

[Figure: Maxwell-Boltzmann distribution (E/kT)^(1/2) e^(−E/kT) versus E/kT for an ideal gas of particles, with the peak at 0.5 kT and the average at 1.5 kT.]

The area in the rectangle is about ¼ the total area under the curve. Therefore the probability is in the range 20–30%. In Chapter 9 we will see how to compute the probability more accurately with integrals.

Chapter 4
1) This process is described in Appendix 2. Here ΔT = 100 K.
αnR = (3/2)(.1 mol)(8.314 J/mol-K) = 1.247 J/K, ln(Vb/Va) = .693

	Stage	Q	W	ΔU
	1	124.7 J	0	124.7 J
	2	215.0 J	215.0 J	0
	3	−124.7 J	0	−124.7 J
	4	−156.7 J	−156.7 J	0

ε = 17% between boiling and freezing water
2) For a diatomic gas, α = 5/2 and γ = (α + 1)/α = 7/5.
U = αnRT = αpV, nR = pV/T = .00333 liter-atm/K

[Figure: p-V diagram of the three-process cycle a → b → c → a, with the First Law ΔU = Q − W.]

Process 1 (Q = 0): VaTa^α = VbTb^α → Tb = Ta(Va/Vb)^(1/α) = 227 K
paVa^γ = pbVb^γ → pb = pa(Va/Vb)^γ = .379 atm
Wby = −ΔU = α(paVa − pbVb) = .605 liter-atm
Process 2 (ΔU = 0, Tc = Tb): pbVb/Tb = pcVc/Tc → pc = pb(Vb/Vc) = .758 atm
Wby = nRTb ln(Vc/Vb) = −.524 liter-atm = Q
Process 3 (Wby = 0): Q = ΔU = αnR(Ta − Tc) = .608 liter-atm

3) a) dQ = dU + p dV. For non-interacting (ideal) gases, U = U(N,T).
At constant V, dV = 0. Therefore, CV = (dQ/dT)V = dU/dT.
For an ideal gas, V = nRT/p, so Cp = dU/dT + p(dV/dT) = CV + nR, even for temperature-dependent CV(T).
b) CV = αnR and Cp = CV + nR = αnR + nR = (α + 1)nR = (α + 1)CV/α.
c) Diatomic gas in the high-T limit where rotations and vibrations are active.

Chapter 5
1) a) Ω(8,5) = 8!/(5!3!) = 8 × 7 = 56
P(8,5) = Ω(8,5)/2⁸ = 56/256 = .219, or about 22%.
b) P(400) = (2/πN)^(1/2) exp(−0/2N) = (2/πN)^(1/2) = .028, or about 3%.
This is a distribution with a width (<m²>)^(1/2) = N^(1/2) = 28.3.
2) m = 2Nup − N, so Nup = (N + m)/2 and Ndown = N − Nup = (N − m)/2.
5) For Ar with mass m = .040 kg/6.022 × 10²³ = 6.64 × 10⁻²⁶ kg,
v = (3kT/m)^(1/2) = (3 × 1.38 × 10⁻²³ × 300/6.64 × 10⁻²⁶)^(1/2) = 432 m/s
ℓ = 0.1 μm = 10⁻⁷ m
D = vℓ/3 = (432 m/s)(10⁻⁷ m)/3 = 1.44 × 10⁻⁵ m²/s
Diffusion distance in 1 sec = (6Dt)^(1/2) = (6 × 1.44 × 10⁻⁵ × 1)^(1/2) = .94 cm
Diffusion distance in 3600 sec = .94 cm × (3600)^(1/2) = 56 cm
[Figure: rrms (cm) versus t (s), growing as t^(1/2) from 0 to about 2.3 cm at t = 6 s.]


6) exp(−x1²/2σd²) = ½ → x1²/2σd² = ln 2 → x1 = (2 ln 2)^(1/2) σd = 1.177 σd
FWHM = 2x1 = 2.355 σd

[Figure: the Gaussian e^(−x²/2σd²), equal to 1 at x = 0 and to 0.5 at x = x1.]

7) a) For electrons with mass m = 9.11 × 10⁻³¹ kg,
v = (3kT/m)^(1/2) = (3 × 1.38 × 10⁻²³ × 300/9.11 × 10⁻³¹)^(1/2) = 1.16 × 10⁵ m/s
ℓ = vτ = (1.16 × 10⁵)(10⁻¹¹) = 1.16 μm
D = vℓ/3 = (1.16 × 10⁵ m/s)(1.16 × 10⁻⁶ m)/3 = .045 m²/s
b) Diffusion distance in lifetime = (2Dτ0)^(1/2) = (2 × .045 × 10⁻⁶)^(1/2) = .3 mm

		v	τ	ℓ	D
	N2 molecules	500 m/s	0.2 ns	0.1 μm	1.67 × 10⁻⁵ m²/s
	electron gas	1.16 × 10⁵ m/s	0.01 ns	1.16 μm	.045 m²/s

For comparison, in 1 μs an N2 molecule would diffuse approximately (2Dτ0)^(1/2) = (2 × 1.67 × 10⁻⁵ × 10⁻⁶)^(1/2) = 5.8 μm, or 50 times less distance than the electron, even though the N2 molecule collides 20 times more often. The reason for the quicker diffusion of the electron is that it is much lighter than the molecule.
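The comparison above can be reproduced with a few lines of Python (D = vℓ/3 and rms distance (2Dt)^(1/2), as in the solutions):

```python
import math

def diffusion_constant(v, mfp):
    # D = v*l/3 for speed v and mean free path l
    return v * mfp / 3.0

def rms_distance(D, t):
    # rms displacement (2Dt)^(1/2) after time t
    return math.sqrt(2.0 * D * t)

D_N2 = diffusion_constant(500.0, 1.0e-7)       # N2: v = 500 m/s, l = 0.1 um
D_el = diffusion_constant(1.16e5, 1.16e-6)     # electrons, values from the table

t = 1.0e-6                                     # one microsecond
x_N2 = rms_distance(D_N2, t)                   # a few microns
x_el = rms_distance(D_el, t)                   # a few tenths of a millimeter
```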

Chapter 6
1) log(M^N/N!) = N log M − N log N + N = N(log(M/N) + 1)
= 10²⁰(log(10³) + 1) = 4 × 10²⁰
As expected, less than 23 × 10²⁰ for distinguishable particles.
2) a) Counting states or using the formula: Ω = Ω1Ω2 = 2³ × 2¹ = 16, σ = ln(16) = 2.77.
b) We expect position 3 because the number of particles per cell is the same on both sides. (See the calculation in part c for justification.)
c) Ω = 1³ × 3¹ + 2³ × 2¹ + 3³ × 1¹ = 3 + 16 + 27 = 46; σ = ln(46) = 3.83
If no partition, σ = 4 ln 4 = 5.55. Removing constraints increases S.
3) Number of cells M = V/δV; Ω = M^N/N!
Ωf/Ωi = (Vf/Vi)^N = (1.01)^1000 ≈ 21,000 for Vf = 1.01Vi
The number of microstates increases very rapidly with V, even for just 1000 particles.
4) Ω = V^N/N!; therefore, SHe = k(N ln V − ln N!) = SAr.
Sinitial = 2k(N ln V − ln N!) and Sfinal = 2k(N ln(2V) − ln N!). ln(2V) = ln V + ln 2; therefore, ΔS = 2Nk ln 2 = entropy of mixing.
For identical gases: Sinitial = 2k(N ln V − ln N!) and Sfinal = k(2N ln(2V) − ln(2N)!)
With ln(2N)! = 2N ln 2N − 2N, we have Sfinal = k(2N ln(2V) − 2N ln(2N) + 2N)
Sfinal = 2k(N(ln V + ln 2) − N(ln N + ln 2) + N) = 2k(N ln V − N ln N + N) = Sinitial
Adding and removing a partition does not change the gas (or S) in this case. Because entropy does not change, the process is reversible, consistent with the Second Law.
Without the N! term in the latter case, Sinitial = 2k(N ln V) and Sfinal = k(2N ln(2V)), leading to the unphysical result ΔS = 2Nk ln 2. All atoms of the same type are identical to one another, a fact that was not appreciated in J.W. Gibbs' time, leading some scientists to reject his (now famous) theory of statistical mechanics. (Gibbs Paradox)

Chapter 7
1)

[Figure: bar chart of the probabilities Pn for n = 0 to 6 (energy in units of ε): .33, .250, .179, .121, .071, .037, .012.]

2) a) By a factor of about 2⁹⁹ = 6.3 × 10²⁹.
b) A factor of 11.

3) Units: p1 = 1 atm = 10⁵ Pa, V1 = 1 liter = 10⁻³ m³ = V2/10
Ideal gas law: Nk = p1V1/T1 = 100/300 = 1/3 J/K.
a) ΔSV = Nk ln(V2/V1) = (1/3)(ln 10) = .767 J/K.
b) VT^α = constant with α = 3/2, so T2 = T1(V1/V2)^(2/3) = 64.6 K.
c) ΔST = CV ln(T2/T1) = −.767 J/K with CV = (3/2)Nk.
d) ΔS = Nk ln(V2/V1) + CV ln(T2/T1) = 0
Entropy change is zero in an adiabatic process (Q = 0).
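Parts b)–d) can be verified numerically; a short Python check that the two entropy terms cancel along the adiabat:

```python
import math

Nk = 1.0 / 3.0          # J/K, from p1V1/T1 in the solution
V1, V2 = 1.0, 10.0      # liters
T1 = 300.0              # K
alpha = 1.5             # monatomic ideal gas

T2 = T1 * (V1 / V2) ** (1.0 / alpha)     # adiabat: V*T^alpha = constant
dS_V = Nk * math.log(V2 / V1)            # entropy gain from the volume increase
dS_T = alpha * Nk * math.log(T2 / T1)    # entropy loss from cooling, C_V = (3/2)Nk
dS = dS_V + dS_T                         # vanishes for Q = 0
```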



4) a) ΔS = ncv ln(Tf/Ti) = (5/2)nR ln(2/3) = 2.5 (2)(8.314)(−.405) = −16.96 J/K
b) ΔS = ncp ln(Tf/Ti) = (7/2)nR ln(2/3) = 3.5 (2)(8.314)(−.405) = −23.57 J/K
5) a) kT is the average energy per oscillator and ε = hf is the energy of a quantum, so kT/ε = 6.25 quanta/oscillator, for a total of q = 3N × 6.25 = 1.88 × 10²³ quanta in the solid.
b) From Ω = q^(3N−1)/(3N−1)!, ignore the 1's and use Stirling's relation:
σ = ln Ω = 3N(ln(q/3N) + 1) = 8.5 × 10²². Notice that the amount of entropy for a vibrating solid at room temperature is roughly equal to the number of oscillators (3N) times the logarithm of the number of quanta per oscillator (ln 6.25).

Chapter 8
1) a) μB/kT = (9.27 × 10⁻²⁴ J/T)(1 T)/(1.38 × 10⁻²³ J/K)(2 K) = 0.336
Pup/Pdown = e^(2μB/kT) = e^0.672 = 1.96. There are nearly twice as many spins pointing up as down under these conditions. Nup/Ndown = Pup/Pdown = 1.96.
b) 2μB/kT = ln(10) = 2.3, B = (2.3/2) kT/μ = 3.42 Tesla.
c) M = Nμ²B/kT = 0.336 Nμ = 0.31 J/T. Also, Nμ tanh(μB/kT) = 0.300 J/T.
2) e^(βε) ≈ 1 + βε → <E> = ε/(e^(βε) − 1) ≈ β⁻¹ = kT, so U ≈ N<E> = NkT
3) <En> = ε/(e^(ε/kT) − 1), with kT = 1.38 × 10⁻²³ × 300 = .414 × 10⁻²⁰ J
ε/kT = .5/.414 = 1.21
<En> = ε/(3.345 − 1) = (.426)ε = .213 × 10⁻²⁰ J
4) a) kT = 1.38 × 10⁻²³ × 300 = .414 × 10⁻²⁰ J
ε/kT = .5/.414 = 1.21; define a shorthand, X = e^(−ε/kT) = .299
Pn = e^(−En/kT)/Σ e^(−En/kT)
Σ e^(−En/kT) = 1 + X + X² + X³ = 1.415

	Probability: P0 = 1/1.415 = .707	Population: N0 = 100 P0 = 70.7
	Probability: P1 = .299/1.415 = .211	Population: N1 = 100 P1 = 21.1
	Probability: P2 = .0893/1.415 = .063	Population: N2 = 100 P2 = 6.31
	Probability: P3 = .0267/1.415 = .019	Population: N3 = 100 P3 = 1.9

Check that the sum of the N's equals 100: 70.7 + 21.1 + 6.31 + 1.9 ≈ 99.9.
b) <En> = Σ Pn En = .707 × 0 + .211 × ε + .063 × 2ε + .019 × 3ε = .394ε
This result is a little lower than the harmonic-oscillator case (.426ε), in which there is some population of levels at energies higher than 3ε.

[Figure: bar chart of Pn versus En for the four levels 0, ε, 2ε, 3ε, with the average <En> marked.]
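The four-level populations above can be checked directly; a Python sketch using the same ε and kT:

```python
import math

kT = 0.414e-20      # J at 300 K
eps = 0.5e-20       # level spacing, J
X = math.exp(-eps / kT)          # about .299

# four levels E_n = n*eps, n = 0..3 (the ladder is truncated at 3*eps)
Z = sum(X**n for n in range(4))
P = [X**n / Z for n in range(4)]
E_avg = sum(n * P[n] for n in range(4))   # <En> in units of eps
```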


Chapter 9
1) a) First determine C from the normalization condition:
1 = ∫ P(E) dE = C ∫ E² e^(−E/kT) dE = C (kT)³ 2!, giving C = 1/2(kT)³
Average energy:
<E> = ∫ E P(E) dE = C ∫ E³ e^(−E/kT) dE = C (kT)⁴ 3! = 3 kT
b) The energy of a particle is (1/2)m(vx² + vy² + vz²) + (1/2)κ(x² + y² + z²).
Equipartition says that each quadratic term gets (1/2)kT, totaling 3 kT.
2) a) The volume of the sphere is V = M/ρ = 1 kg/(2.7 g/cm³) = 3.7 × 10⁻⁴ m³ = 4πR³/3. The surface area is A = 4πR² = 2.49 × 10⁻² m², so the total energy radiated per second is
JU × A = (5.670 × 10⁻⁸ W/m²K⁴)(293 K)⁴ (2.49 × 10⁻² m²) = 10.4 W
b) At 300 K, the specific heat of aluminum is a constant at 3R = 3 × (8.314 J/mol-K) = 24.9 J/mol-K, and the number of moles of Al in the 1 kg sphere is n = (1 mole/0.027 kg)(1 kg) = 37 moles, so the heat capacity near 300 K is Cv = 3nR = 922 J/K. Assuming an average temperature of 283 K in the above equation, the energy loss of the sphere per second is,
ΔU/Δt = Cv ΔT/Δt = 9.1 watts
Therefore we find the time to cool 20 K is,
Δt = Cv (ΔT)/(9.1 W) = (922 J/K)(20 K)/(9.1 W) ≈ 2000 seconds = 33 minutes.

Chapter 10
1) ε > εc implies Qc/Qh < Tc/Th and thus Qc/Tc < Qh/Th, which, in turn, implies that ΔStot = −Qh/Th + Qc/Tc < 0. Entropy decreases, against the law.
An example of a real engine is one in which there is a heat leak directly between the hot and the cold reservoir. The total entropy change due to this heat leak is ΔS = −Qleak/Th + Qleak/Tc = Qleak(1/Tc − 1/Th) = εcQleak/Tc. It is straightforward to show that the work done by the real engine equals the work done by a Carnot engine minus the heat loss due to friction and leakage, i.e., Wby = εcQhtot − TcΔS.
2) Without much thought, you might blindly plug into the formulas and obtain the following results:
Wby = ε Qh for a Carnot cycle (the most efficient engine).
ε = 1 – Tc/Th = 1 – 293/393 = 0.254
Qh = cvM ΔT = 4184 × 75 × 100 = 31.4 megajoules
Wby = 0.254 × 3.14 × 10⁷ = 7.97 × 10⁶ J = 8.0 megajoules (Wrong!)
The ε and W computed above are incorrect because the temperature difference between the hot and cold reservoirs varies as the water cools. This isn’t such a trivial problem, is it?
Try again, realizing that Th and the efficiency change with time. Set Th = T:


Physics 213 Elements of Thermal Physics

Wby = ∫ ε dQ = – ∫ (1 – 293/T) cvM dT, integrating from 393 K to 293 K
= – cvM ( ∫ dT – ∫ (293 K/T) dT )
= – cvM ( –100 K – (293 K) ln(293/393) )
= – cvM ( –100 K + 86 K ) = 4.38 × 10⁶ J = 4.38 megajoules
Do you see why the output is only about half the previous (erroneous) result? The average ΔT (and therefore Qh) is roughly half that used in the incorrect calculation.
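The naive and corrected estimates can be compared numerically; cvM = 4184 J/kg-K × 75 kg is taken from the problem data quoted above:

```python
import math

# Problem 2 comparison: hot water (Th = 393 K) driving an engine as it cools
# to the 293 K environment.  cvM = 4184 J/kg-K * 75 kg is taken from the data.
cvM = 4184 * 75
Tc, Th = 293.0, 393.0

# Naive (wrong): fixed Carnot efficiency times the total heat.
W_wrong = (1 - Tc / Th) * cvM * (Th - Tc)

# Correct: integrate dW = (1 - Tc/T) cvM dT as T falls from Th to Tc.
W_right = cvM * ((Th - Tc) - Tc * math.log(Th / Tc))

print(round(W_wrong / 1e6, 2), round(W_right / 1e6, 2))   # 7.98 4.38
```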
The next problem shows a simpler approach with free energy.
3) With Tenv = 300 K and C = 1 kJ/K:
Wby = –ΔF = –ΔU + TenvΔS = –CΔT + TenvC ln(Tf/Ti)
a) Wby = 150 J – 121.64 J = 28.36 J
b) Wby = –150 J + 207.94 J = 57.94 J
4)

ΔF = ΔU − TΔS = (3/2)NkΔT − NkT Δ(ln(nQV/N))
With ΔT = 0: Δ(ln(nQV/N)) = Δ(ln V) = ln(Vf/Vi), so
ΔF = −NkT ln(Vf/Vi)
Wby = ∫ p dV = ∫ (NkT/V) dV = NkT ln(Vf/Vi) = −ΔF (integrating from Vi to Vf)

Chapter 11
1) a) n = 2.45 × 10²⁵ m⁻³; nQ = 9.88 × 10²⁹ m⁻³ × (4)^(3/2) = 7.9 × 10³⁰ m⁻³
μ = kT ln(n/nQ) = (0.026 eV)(–12.7) = –0.33 eV.
b) U = (3/2)nRT = (3/2)(8.314 J/K)(300 K) = 3741 J (taking n = 1 mol)
S = nR(ln(nQ/n) + 5/2) = (8.314)(ln(3.22 × 10⁵) + 2.5) = 126 J/K
F = U – TS = 3741 J – (300 K)(126 J/K) = –34.1 kJ
2) a) At constant volume, nQV = (mkT/2πħ²)^(3/2) V ∝ T^(3/2). s = S/n = R ln(nQ/n) + const, so s(T) = R ln(T^(3/2)) + const = (3/2)R ln(T) + const, and cV = T(ds/dT) = T(3/2)R/T = (3/2)R.
At constant pressure, V = NkT/p and nQV/N ∝ T^(5/2), so s(T) = (5/2)R ln(T) + const and cp = T(ds/dT) = (5/2)R.
b) At constant volume, (nQV/N)(kT/εr) ∝ T^(5/2): s(T) = (5/2)R ln(T) + const, cV = (5/2)R.
At constant pressure, (nQV/N)(kT/εr) ∝ T^(7/2): s(T) = (7/2)R ln(T) + const, cp = (7/2)R.
3) ho = kT/mg = (1.381 × 10⁻²³)(270)/((28 × 1.674 × 10⁻²⁷)(9.8)) = 8117 meters
p = (1 atm) exp(–10⁴/8117) = 0.29 atm. You need a breathing apparatus.
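A sketch of the scale-height arithmetic (a check, not part of the original solution):

```python
import math

# Scale height of N2 at 270 K and the pressure 10 km up.
k = 1.381e-23               # Boltzmann constant, J/K
m = 28 * 1.674e-27          # N2 mass, kg
h0 = k * 270 / (m * 9.8)    # scale height, m
p = math.exp(-1e4 / h0)     # pressure at 10^4 m, in atm

print(round(h0), round(p, 2))   # 8117 0.29
```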


4) Flayer = A(p(x) – p(x + Δx)) = –A (dp/dx) Δx = –AkT (dn/dx) Δx
Feffective = Flayer/N = Flayer/(nAΔx) = –(kT/n)(dn/dx) = –dμ/dx. Q.E.D.
(Here p = nkT at constant T, and μ = kT ln(n/nQ), so dμ/dx = (kT/n) dn/dx.)

5) Feff – mg = 0: –(kT/n) dn/dh = mg, so dn/dh = –(mg/kT)n = –n/ho, with ho = kT/mg.
Pressure p = nkT, so dp/dh = –p/ho, with the solution p = po exp(–h/ho).

Chapter 12
1) nQ = 10³⁰ (28)^(3/2) (T/300)^(3/2) m⁻³ and n = 2.45 × 10²⁵ (300/T) m⁻³, so
nQ/n = 6.0 × 10⁶ (T/300)^(5/2) = 1 when T = 0.6 K. N2 is not an ideal gas at T = 0.6 K; it liquefies at 77 K, so the ideal gas equations do not apply.
2) Use nQ = 10³⁰ m⁻³ (m/mp)^(3/2) (T/300 K)^(3/2) with me/mp = 1/1836.
Notice that nQH = nQp to good approximation, neglecting spin.
3) 2μH2O = 2μH2 + μO2
μN2 + 3μH2 = 2μNH3
μH2 = 2μH

4) a) ne² – ne·nd – ni² = 0 → ne = nd/2 + (nd²/4 + ni²)^(1/2)
b) ni = 5.2 × 10¹⁵ m⁻³ at 300 K and nd = 10¹⁴ m⁻³, so ne = 5.25 × 10¹⁵ m⁻³.
When nd << ni, ne ≈ ni: impurities have little effect.
If nd = 10¹⁷ m⁻³, then ne = 1.0027 × 10¹⁷ m⁻³.
When nd >> ni, ne ≈ nd: impurities have a big effect.
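The quadratic-formula solution can be checked for both doping levels; the helper name below is mine, not the text's:

```python
import math

# Electron density from n_e^2 - n_e*n_d - n_i^2 = 0 (quadratic formula);
# n_electron is a hypothetical helper name, not from the text.
def n_electron(nd, ni):
    return (nd + math.sqrt(nd**2 + 4 * ni**2)) / 2

ni = 5.2e15                      # intrinsic density at 300 K, m^-3
print(n_electron(1e14, ni))      # ≈ 5.25e15 (nd << ni: little effect)
print(n_electron(1e17, ni))      # ≈ 1.0027e17 (nd >> ni: big effect)
```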
5) a) n1²/n2 = [nQ1²/(nQ2 Zint)] e^(–Δ/kT) = K(T)
b) n1 = √{(1.225 × 10²⁵)(10³⁰)(0.03/0.052) e^(–76.9)} = 5.3 × 10¹⁰ m⁻³

Chapter 13
1) Because N! = 1 · 2 · 3 ⋯ N, we have lnN! = ln1 + ln2 + ⋯ + lnN. Taking dN = 1, we see that d(lnN!)/dN = lnN! – ln(N–1)! = ln(N!/(N–1)!) = lnN.
2) a) pQ = nQkT = (9.88 × 10²⁹)(40)^(3/2)(1.38 × 10⁻²³)(300) = 1.03 × 10¹² Pa
= 1.02 × 10⁷ atm
e^(–Δ/kT) = 9.75 × 10⁻⁶, 2.08 × 10⁻⁷, 4.45 × 10⁻²
f = 1/(1 + po) = 0.01, 0.321, 0.957
b) po = 1 implies f = Ns/M = ½, so the left side of Eq. (13-7) equals 1, and p/pQ = exp(–Δ/kT). The chemical potential of the gas is μ = kT ln(p/pQ) = –Δ. In this case, μ = (0.026 eV)(–16.1) = –0.42 eV.
3) pQ = (4 × 10⁴ atm)(32)^(3/2)(310/300)^(5/2) = 7.86 × 10⁶ atm
pQZint = (7.86 × 10⁶ atm) × 211 = 1.7 × 10⁹ atm
f = ½ means p = po = 0.2 atm
kT = (8.617 × 10⁻⁵ eV/K)(310) = 0.0267 eV
po = pQZint e^(–Δ/kT) = 0.2 atm
Δ = kT ln(pQZint/po) = 0.61 eV
4) a) Q = 3.335 × 10⁵ J/kg × 1 kg = 3.34 × 10⁵ J, t = 6 × 3600 s = 21,600 s
Power = Q/t = 3.34 × 10⁵ J / 21,600 s = 15 watts.
b) ΔStot = ΔSH2O + ΔSroom
= 3.34 × 10⁵ J × (1/273 K – 1/293 K) = 83.5 J/K
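A sketch of both parts, assuming the latent heat of fusion 3.335 × 10⁵ J/kg used above:

```python
# 1 kg of ice melting over 6 hours in a 293 K room.
L = 3.335e5                         # latent heat of fusion, J/kg
Q = L * 1.0                         # heat absorbed, J
t = 6 * 3600                        # 6 hours in seconds

power = Q / t                       # ≈ 15 W
dS = Q * (1 / 273 - 1 / 293)        # ice gains Q/273 K, room loses Q/293 K

print(round(power, 1), round(dS, 1))   # 15.4 83.4
```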

5) F = U – TS = NkT(ln(N/(nQ(V–Nb))) – 1) + pV – aN²/V
dF/dV = –NkT/(V–Nb) + p + aN²/V² = 0 →
(p + aN²/V²)(V – Nb) = NkT    (van der Waals equation)

Chapter 14
1) pQC^c / (pQA^a pQB^b) = [nQC^c / (nQA^a nQB^b)] (kT)^(c–a–b), because pQ = nQkT.
For the ammonia reaction, c – a – b = 2 – 1 – 3 = –2:
Kp = (kT)⁻² K (p in Pascal units)
Kp = (RT)⁻² K (p in atmosphere units)
RT = 0.0821 L-atm/mol-K × 300 K = 24.6 L-atm/mol, so (RT)² = 607 (L-atm/mol)²
Kp = 6 × 10⁸ L²/mol² / 607 (L-atm/mol)² = 1 × 10⁶ atm⁻²
2) First we must take care of the units. A 5°F change corresponds to
5 × (5/9) = 2.8°C (or K), and the average temperature of 42°F equals 279 K. One atmosphere is 10⁵ Pascal, so Δp = 10⁴ Pa. The latent heat is
L = T (Δp/ΔT)(RT/p)
= (279 K)² (10⁴ Pa/2.8 K)(8.314 J/mol-K)/(10⁵ Pa)
= 23 kJ/mol.
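The unit bookkeeping is easy to get wrong, so here is the same computation with everything in SI (a check, not part of the original solution):

```python
# Latent heat from the vapor-pressure slope, L = T (dp/dT)(RT/p), all SI.
R = 8.314                 # J/mol-K
T = 279.0                 # average temperature (42 F), K
dp = 1e4                  # 0.1 atm pressure change, Pa
dT = 5 * 5 / 9            # a 5 F temperature change, K
p = 1e5                   # 1 atm, Pa

L = T * (dp / dT) * (R * T / p)
print(round(L / 1000, 1))   # 23.3 (the text rounds to 23 kJ/mol)
```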
3) 2H2 + O2 ↔ 2H2O: 2 moles of H2O from 3 moles of reactants implies Δ(pV) = 1 RT per 2 moles, or Δ(pV) = 0.5 RT = 1.25 kJ for 1 mole of H2O.
ΔUo = ΔHo – Δ(pV) = –242 kJ/mol – 1.25 kJ/mol = –243 kJ/mol ≈ –2.5 eV per molecule
4) a) dp/dT = L/(TΔVm) = Lp/RT², so dp/p = (L/R) dT/T²
Integrating: ln(p) = –(L/R)(1/T) + constant
Solution: p = (const.) exp(–L/RT)
b) L/RT = Δ/kT and nR = Nk
Result: nL = NΔ = the total binding energy of the N molecules

Appendix 1
1) κ = mg/Δx = 163 N/m. κH2 = mω²/2 = (1.67 × 10⁻²⁷ kg)(2π × 6.5 × 10¹³ s⁻¹)²/2 ≈ 140 N/m. I chose this spring to give you a feeling for κH2.
2) ω = ((3 ± √5)κ/2m)^(1/2) = (0.38 κ/m)^(1/2) and (2.62 κ/m)^(1/2).
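The two mode frequencies follow from the characteristic equation λ² − 3λ + 1 = 0 in units of κ/m (inferring that equation from the quoted roots is my assumption):

```python
import math

# The squared mode frequencies (in units of kappa/m) solve lam^2 - 3 lam + 1 = 0.
lam_minus = (3 - math.sqrt(5)) / 2
lam_plus = (3 + math.sqrt(5)) / 2

print(round(lam_minus, 2), round(lam_plus, 2))   # 0.38 2.62
```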


Appendix 7
1) CV = n cv and n = 1 g/(12 g/mol) = 0.083 mol.
a) Considering the plot of cv given in the text, 1000 K is in the high-temperature limit where cv = 3R (see Chapter 5 for the derivation).
CV = 0.083 × 3 × 8.314 J/mol-K = 2.08 J/K
Q = CV ΔT = 2.08 J/K × 10 K = 20.8 J
b) From the graph this is roughly in the low-temperature (quantum) limit,
CV = n 234 R (T/TD)³ = (0.083)(234)(8.314)(T/2230)³ = 1.45 × 10⁻⁸ T³ J/K
Q = ∫ CV dT = 1.45 × 10⁻⁸ ∫ T³ dT = (1.45 × 10⁻⁸)[T₂⁴ – T₁⁴]/4
Q = 3.63 × 10⁻⁹ [100⁴ – 90⁴] = 0.125 J
c) Also in the low-temperature limit,
Q = 3.63 × 10⁻⁹ [10⁴ – 0⁴] = 3.63 × 10⁻⁵ J, which is about 570,000 times less heat than is needed to make the same 10 K change at
1000 K. Most of the vibrational modes are in their ground level, and a temperature rise cannot promote them to the next quantum level.
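A sketch of parts (b) and (c), using the text's rounded n = 0.083 mol and TD = 2230 K:

```python
# Low-temperature (Debye) heat for 0.083 mol of carbon, T_D = 2230 K:
# C_V = n * 234 * R * (T/T_D)^3, so Q = n*234*R/T_D^3 * (T2^4 - T1^4)/4.
R = 8.314
n = 0.083
TD = 2230.0

def Q(T1, T2):
    return n * 234 * R / TD**3 * (T2**4 - T1**4) / 4

print(round(Q(90, 100), 3))   # 0.125 J
print(Q(0, 10))               # ≈ 3.64e-05 J
```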

Appendix 8
1) a) n = 2.45 × 10²⁵ m⁻³, nQ/n = 5.98 × 10⁶ at 300 K.
strans = R(ln(nQ/n) + 5/2) = 8.314 × (15.6 + 2.5) = 18.1R = 150.5 J/mol-K.
b) kT/εr = 0.026/0.00026 = 100. srot = R(ln(kT/εr) + 1) = 5.6R = 46.6 J/mol-K.
strans + srot = 150.5 + 46.6 = 197 J/mol-K.
Very close to smeas = 192 J/mol-K.
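A quick check of the two entropy contributions, taking nQ/n = 5.98 × 10⁶ and kT/εr = 100 as given:

```python
import math

# Molar entropy of the diatomic gas: s = s_trans + s_rot.
R = 8.314
s_trans = R * (math.log(5.98e6) + 2.5)   # ≈ 150.5 J/mol-K
s_rot = R * (math.log(100) + 1)          # ≈ 46.6 J/mol-K

print(round(s_trans + s_rot))   # 197
```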


Index

A
Accessible states
Absolute ................................................................................. 125, 127, 191
Harmonic oscillator ............................................................... 105, 186, 195
Ideal gas ............................................................. 67, 85, 105, 118, 126, 148
Adiabatic process ..............................................................ix, 32–37, 40, 85, 207
Adsorption ........................................................................... xviii, 145, 148–149
Ammonia reaction ........................................................................................137
Average value ...................................................................... 22–23, 28, 174–175
Avogadro constant .................................................................................ix, xi, 21

B
Binomial distribution ....................................ix, xx, 46, 48–51, 60, 68, 177–178
Boltzmann constant....................................... ix, xi, 23, 79–80, 91–92, 117, 185
Boltzmann distribution ............................................................. xviii, 91–92, 98
Boltzmann factor ........................................ ix, 27–28, 88, 91, 99, 102, 140, 185
Bricks doing work.................................................................................114–116

C
c.m. equation .............................................................................................. 5, 10
Carnot cycle
Efficiency ............................................................. 38–39, 41, 172, 209–210
Carrier densities ...........................................................................................141
Center of mass .................................................................................. 4–5, 10, 31
Chemical potential .......................................................................................124
Diatomic gas .......................................................................... 129, 133, 159
Interpretation.........................................................................................130
Liquid .............................................................................................151–152
Monatomic gas.......................................................... ix, 128–129, 133, 143
Solid ....................................................................... 145, 151–152, 160, 200
Classical thermodynamics ..................................................................xiv, 15, 68


Physics 213 Index

Clausius–Clapeyron equation ..............................................................160, 163
Coexistence curve .................................................................................150, 160
Curie’s law ...................................................................................... 93, 121–122

D
Debye solid ...........................................................................................189–190
Diatomic molecules ........................................................................................24
Abundances ..............................................................................................21
Entropy .................................................................. 192–193, 195–196, 198
Thermal energy ................................................................................. 25, 96
Diffusion
Differential equation ...............................................................................53
Electrons in silicon ..................................................................................57
Diffusive force ......................................................................................131–132

E
Einstein solid ..........................................................................................86, 119
Energy ..............................................................................................................1
Bands ......................................................................................................139
Center of mass ...........................................................................................6
Internal................................................................................................. 6, 24
Kinetic ........................................................................................................2
Potential .....................................................................................................7
Rotational...................................................................................................6
Vibrational .................................................................................................8
Energy levels ................................................................................................. xiv
1D electron in a box ................................................................................74
H–atom ....................................................................................................74
Harmonic oscillator ........................................... 74, 99, 186, 195–196, 208
Enthalpy ...........................................................................ix, 154, 158, 161, 163
Entropy
And heat .......................................................... x, xvii, 36, 39, 111, 144, 155
Conventional ....................................................................................... x, 79
Diatomic gas ....................................................................................85, 129
Molar................................................................................ 29, 161, 192–193
Monatomic gas........ 18, 29, 83, 85, 103, 119, 126–129, 158, 192, 195, 202
Vibrational ............................................................... 14, 114, 151, 192, 196
Volume exchange ..........................................................xvii, 64, 78, 81, 124
Entropy ionization .......................................................................................136
Entropy maximization ........................................................................ 16, 64, 68
Equilibrium ....................................................................................................32
Chemical ................................................................................................137
F–minimum ...................................................................................116, 124
Statistical ..................................................................................................59

Equilibrium constant....................................................................................136
H–ionization ..........................................................................................136
H2–dissociation ......................................................................................143
With internal motions ...................................................................138, 199
Equilibrium value ...........................................................................................60
Equipartition Theorem ..................................................... 23, 80, 103, 117

F
First Law of Thermodynamics (FLT)............................................................31
Free energy (F) .....................................................................................115–117
Ideal gas .................................................................................................128
Paramagnet ............................................................................................119
Surface states..........................................................................................146
Free expansion ..............................................................................................114
Fuel cell ........................................................................................................132
Fundamental postulate of statistical mechanics................................ xvi, 49, 59

G
Gaussian distribution ................................................... 50, 52–54, 57, 176–177
Gibbs free energy ......................................................................ix, xix, 157–159
Gibbs paradox.................................................................................................71

H
Harmonic oscillator ........................................................................... 74–77, 95
Average energy...............................................................................186–187
Sum over states .............................................................. 185–187, 195–196
Heat ................................................................................................................31
And engines........................................................................................31–44
And entropy ..............................................................xvii, 37, 111–115, 154
Heat capacity ..................................................................................................24
Constant pressure ....................................................................................44
Harmonic oscillator .................................................................................86
Ideal gas ...................................................................................................25
Solid .........................................................................................................25
Heat conduction .............................................................................................54
Heat current (J) ........................................................................................54–55
Heat pump .......................................................................................... 40, 42, 55

I
Ideal gas law............................................................................................26, 122
Integrals (table).............................................................................................179
Irreversibility .................................................................. xvi–xvii, 13–16, 68–69
Isothermal process....................................................................................33–37

L
Latent heat ........................................................... 153–156, 160, 163, 211–212
Law of atmospheres .............................................................................129–130
Law of mass action ...............................................................................142, 163
Liquid–gas condensation......................................................................154–156
Logarithms ..................................................................................................... xx

M
Magnetic moment .............................................................. 46–50, 92, 119–121
Magnetism ......................................................................................................46
Maxwell–Boltzmann distribution.........................................................102–103
Mean free path .......................................................................................52, 178
Microstates ........................................xvi, xx, 45–47, 59–65, 68–69, 75–79, 181
Moles .................................................................................................. 21, 24, 26
Momentum conservation ........................................................................... 4, 13
Myoglobin .................................................................................... 147–148, 156

N
Newton’s Second Law .......................................................................... 2–5, 7–8
Normal modes ...................................................................... 9, 13–14, 165–167

O
Occupancy rules .....................................................................................62, 181

P
Paramagnetism .......................................................................................91, 119
Phase diagrams .....................................................................................150–155
Solid–gas ................................................................................................150
Solid–liquid ............................................................................................151
Solid–liquid–gas .............................................................................152, 161
Photons .................................................................................................103, 105
Planck Radiation Law ..................................................................................106
Planck’s constant ............................................................................ 74, 104, 125
Polymer ....................................................................................................94–95
Pressure ..........................................................................................................22
Probability ........................................................................xvi, 27–29, 45, 49–50
Probability density ..................................................... 27–28, 99–102, 174–178
Process energies .............................................................................................32


Q
q–formula for harmonic oscillator ...................................................................77
Quadratic terms......................................................................................23, 117
Quantum density .................................................. 126–127, 141, 143, 191–192
Quantum mechanics........................................... xiv, 46, 64, 119, 125, 193, 195
Quantum pressure ........................................................................ 146, 149, 159
Quasi–static processes ....................................................................................32

R
Random walk ..........................................................................................50, 178
Refrigerator ..............................................................................................40–42
Reversible process .......................................................... 34, 111–115, 206–207

S
Second Law of Thermodynamics ...................... xvi, 14, 16, 40, 68–69, 82, 122
Specific Heat ..........................................................ix, 24, 26, 97, 122, 189–190
State functions .......................................................................... 32, 36, 112, 118
Statistical mechanics........................xiv, 23, 27–28, 49, 59, 71, 79, 88, 118, 125
Statistical tools..............................................................................................173
Stefan–Boltzmann Law ................................................................................106
Stirling cycle ................................................................................... 43, 169–172
Stirling’s approximation ................................................................... 69–71, 183
Sum over states ................................................................................... 74–77, 95

T
Temperature
Definition ..................................................................xvii, 15, 18, 23, 78–82
Scales ........................................................................................................23
Thermal conductivity ...............................................................................54–55
Thermal equilibrium .......................................15, 73, 78, 81–82, 116, 123–124
Thermal Radiation ...............................................................................105–106
Thermal reservoir ................. 33–34, 36, 38, 87–88, 95, 99, 116–119, 123, 157
Thermodynamic identity .....................................................................112, 160

V
Van der Waals equation ........................................................................154, 156
Vapor pressure ...................................................... 148–151, 160, 163, 199–200


W
Work .............................................................................................................3–5
Adiabatic ................................................................................ 32–34, 36–40
Engines ................................................................ 31–32, 38, 112, 114–116
Isothermal .................................................................... 32, 36–39, 131, 170
