One thing from Thermodynamics that blew me away is that you can derive a perfectly reasonable and intuitive definition of Temperature that makes no reference whatsoever to kinetic energy, colliding particles, jiggling -- any microscopic dynamics at all.
So... let's do it!
Thermo works with macroscopic systems: forgetting all knowledge of the fine-scale workings of the system & only focusing on characteristics we can observe macroscopically.
We call these systems "simple." (homogeneous, isotropic, uncharged, unaffected by electromag./grav. fields)
In these "simple" systems, we can characterize the state of the system *completely* by simply describing:
- the energy U
- the volume V
- the amounts of each of its chemical components (usually mole numbers) N₁, N₂, etc.
If a system can be completely described/characterized in this way, we say the system is in an "equilibrium state."
This definition is circular: Thermodynamics deals with equilibrium states, but equilibrium states are states whose properties are consistent with thermodynamics!
These state variables are so special we give them a name: "extensive" -- they depend on the "extent" of the system:
If you take two identical systems with the same state variables and combine them, the state variables describing the combined system are *twice* the original ones.
The basic problem of Thermo is this:
- Given 2 or more simple systems, form a single composite system.
- Keep this composite system closed (no change in total U, V, N)
- Determine the equilibrium state if you remove internal constraints (allow them to exchange U, V, N)
If you're me, in an intro Thermo class, you might think... okay... that's obviously impossible.
We'll have to come up with a new theory for every composite system. Think of all the different systems with energy, volume, & different numbers/types of particles!
Cool class, thx!
But, as it turns out, all you need is a nice little function called the "entropy".
Now, don't run, I know.
This isn't the scary, confusing entropy you're used to from Stat. Mech.*
This is Thermo entropy, and it's so much gentler and more subtle.
*(it's the same one, but shh)
In Thermo the entropy is defined like this:
The entropy is a function S of the extensive parameters: S(U, V, N₁, N₂, ...) with the following property ⟶ the values obtained by the extensive parameters in the absence of internal constraints are those that maximize the entropy.
Entropy is more powerful w/ a few additional (reasonable) properties we force the function to have:
(1) The entropy of a composite system is the sum of the entropies of the constituents
(2) It's continuous and differentiable
(3) It& #39;s monotonically increasing as a function of U
(There's one other property, which we won't actually need here, but will be important when we define the temperature: the entropy vanishes in the state for which (dU/dS)_(V, N) = 0. This is often called the Third Law of Thermodynamics, very important in the history of the field)
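As a quick numerical sanity check on these properties, here's a toy entropy function -- a Sackur-Tetrode-like ideal-gas form with all physical constants set to 1, chosen purely for illustration (an assumption, not something derived in this thread) -- that satisfies them:

```python
import math

def S(U, V, N):
    """Toy ideal-gas entropy (Sackur-Tetrode form, constants set to 1).
    An assumed example function, not something derived in this thread."""
    return N * (math.log((V / N) * (U / N) ** 1.5) + 2.5)

# (1) Additive over subsystems: the composite entropy is just a sum
S_composite = S(10.0, 2.0, 1.0) + S(4.0, 1.0, 1.0)

# (3) Monotonically increasing in U (with V, N held fixed)
energies = [1.0, 2.0, 5.0, 10.0, 50.0]
assert all(S(a, 1.0, 1.0) < S(b, 1.0, 1.0) for a, b in zip(energies, energies[1:]))

# Bonus: extensivity -- doubling U, V, and N doubles S
assert abs(S(20.0, 4.0, 2.0) - 2 * S(10.0, 2.0, 1.0)) < 1e-9
```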
Now, I'll point something out, which may make this whole enterprise feel more familiar.
If we can define the entropy S in terms of U, V, and N, with S monotonically increasing in U, can't we flip things around and define U in terms of S, V, and N (with U monotonically *increasing* in S)?
Yup!
In that case, the definitions above would give us a different extremum principle:
- The values obtained by the extensive variables in the absence of internal constraints are those which *minimize* the energy U for the given value of the total entropy S!
Sound familiar?
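To put a number on that inversion: flipping the same kind of toy ideal-gas entropy around (constants set to 1, purely an assumed example) gives an explicit U(S, V, N), and since the inverse of an increasing function is increasing, U rises monotonically with S:

```python
import math

def U(S, V, N):
    """Toy energy representation U(S, V, N), obtained by inverting an
    ideal-gas-like entropy S = N*(ln((V/N)*(U/N)**1.5) + 2.5), with all
    physical constants set to 1 -- an assumed example, not a general result."""
    return N * math.exp((S / N - 2.5 - math.log(V / N)) / 1.5)

# U should rise monotonically as S rises (V, N held fixed)
entropies = [1.0, 2.0, 3.0, 4.0, 5.0]
energies = [U(s, 1.0, 1.0) for s in entropies]
assert all(a < b for a, b in zip(energies, energies[1:]))
```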
Now, we're going to pull a trick.
When I first saw this trick I thought it was annoying. A fancy, symbol-based Three-card Monte.
But just trust me.
Let's stick with the energy representation for now -- the system is described by:
U(S, V, N₁, N₂, ...)
Let's say we change the energy by a little (dU).
How can it change?
Well, the entropy can change, as can the volume, and the number of particles of each type -- all independently.
In other words:
dU = (dU/dS) dS + (dU/dV) dV + (dU/dN₁) dN₁ + (dU/dN₂) dN₂ + ...
Let's give each of these (partial) derivatives a name:
- (dU/dS) will be called T or "the temperature"
- (dU/dV) will be called -P or "the negative of the pressure"
- (dU/dNᵢ) will be called μᵢ or "the chemical potential"
So:
dU = T dS - P dV + μ₁ dN₁ + μ₂ dN₂ + ...
These could be any symbols or any names, we just *happen* to be giving them these names for no particular reason at all
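Since these are just partial derivatives of U, we can estimate all three numerically from any concrete U(S, V, N) -- here a toy inverted ideal-gas form with constants set to 1 (an assumed example, not a general result):

```python
import math

def U(S, V, N):
    """Toy energy representation U(S, V, N), inverted from an ideal-gas-like
    entropy (all physical constants set to 1) -- an assumed example."""
    return N * math.exp((S / N - 2.5 - math.log(V / N)) / 1.5)

def partial(f, args, i, h=1e-6):
    """Central-difference estimate of df/d(args[i]), holding the rest fixed."""
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

state = (5.0, 2.0, 1.0)       # a sample (S, V, N)
T = partial(U, state, 0)      # temperature:          (dU/dS)
P = -partial(U, state, 1)     # pressure:            -(dU/dV)
mu = partial(U, state, 2)     # chemical potential:   (dU/dN)
print(T, P, mu)
```

For this particular toy model the numbers come out obeying the familiar ideal-gas relations U = (3/2) N T and P V = N T, which is a decent hint that the names aren't accidental.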
All this theorizing is nice, but let's get practical, eh?
Let's construct a closed system:
- Two chambers separated by a wall that is fixed, and impermeable to each subsystem's particles (no exchange of V or N between them), but which *does* allow for exchange of energy.
Because the whole composite system is closed (no exchange of U, V, or N with the outside), we know that the total energy is constant!
So we can define a conservation law for the energies U₁ and U₂ of chambers 1 and 2:
U₁ + U₂ = const ⟶ dU₁ + dU₂ = 0 ⟶ dU₁ = - dU₂
Our definitions above told us that the values our extensive variables take on in the absence of internal constraints are those that maximize the entropy, so we're looking for the state where dS = 0 (the maximum happens when the function stops increasing).
So, since entropy is additive over the subsystems, let's look at the composite entropy:
S = S₁(U₁, V₁, N₁) + S₂(U₂, V₂, N₂).
Because V and N can't change, the change in entropy will only be due to U!
dS = (dS₁/dU₁)dU₁ + (dS₂/dU₂) dU₂
But hey! Those partial derivatives look familiar...
Since (dU/dS) was defined to be equal to T:
(dS/dU) = 1/T !
So, our equation above becomes:
dS = (1/T₁) dU₁ + (1/T₂) dU₂
From our conservation law up above, we can substitute -dU₁ for dU₂ to get:
dS = (1/T₁) dU₁ - (1/T₂) dU₁ = (1/T₁ - 1/T₂) dU₁
But remember, we're looking for where dS = 0!
If dS = (1/T₁ - 1/T₂) dU₁, the only way dS can be zero is if:
- dU₁ = 0 (no energy transfer between subsystems -- BORING)
or
- (1/T₁ - 1/T₂) = 0 ⟶ T₁ = T₂
The systems will exchange energy until the temperatures are equal!
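Here's a numerical sketch of exactly this setup, using the same kind of toy ideal-gas entropy as before (constants set to 1; an assumed example, not the only choice). We scan over every way of splitting a fixed total energy between the two chambers and keep the split that maximizes the composite entropy:

```python
import math

def S(U, V, N):
    # Toy ideal-gas entropy (constants set to 1) -- an assumed example form
    return N * (math.log((V / N) * (U / N) ** 1.5) + 2.5)

U_total = 12.0
V1, N1 = 1.0, 1.0   # chamber 1
V2, N2 = 1.0, 2.0   # chamber 2: twice the particles

# Scan every way of splitting the fixed total energy between the chambers
# and keep the split that maximizes the composite entropy S1 + S2.
splits = [U_total * i / 10000 for i in range(1, 10000)]
_, u1_star = max((S(u1, V1, N1) + S(U_total - u1, V2, N2), u1) for u1 in splits)

# In this toy model T = (2/3) U / N, so compare the chamber temperatures:
T1 = 2 * u1_star / (3 * N1)
T2 = 2 * (U_total - u1_star) / (3 * N2)
print(T1, T2)  # the maximum lands (to grid precision) where T1 == T2
```

The entropy maximum lands at U₁/N₁ = U₂/N₂, which for this toy model is precisely the condition T₁ = T₂ -- the brute-force scan rediscovers the result we just derived.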
Not only that, but let's look at what happens *before* the temperatures are equal: let's say T₁ > T₂.
Well, we know the composite system's extensive variables change to maximize the entropy, so, if we're not there yet, 𝚫S > 0
Well, if
𝚫S ≃ (1/T₁ - 1/T₂) 𝚫U₁
then if T₁ > T₂, the only way to have 𝚫S > 0 will be if 𝚫U₁ < 0!
If the temperature in chamber 1 starts out larger than chamber 2, the way the entropy will be maximized will be for chamber 1 to *lose energy* to chamber 2!
With only a few simple, reasonable definitions for this made-up function, the entropy, and made-up names for partial derivatives, we get a definition of temperature that, when applied, behaves *exactly* like the temperature we're used to!
Systems at higher temperature put in contact with systems at lower temperature lose energy to the latter until the temperatures are equal!
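And the direction-of-flow claim can be checked numerically too: start chamber 1 hotter, move a small parcel of energy each way, and see which direction raises the entropy (same toy ideal-gas entropy as above; an assumed example):

```python
import math

def S(U, V, N):
    # Toy ideal-gas entropy (constants set to 1) -- an assumed example form
    return N * (math.log((V / N) * (U / N) ** 1.5) + 2.5)

# Chamber 1 hotter than chamber 2 (T = (2/3) U / N in this toy model)
U1, U2 = 9.0, 3.0            # T1 = 6.0, T2 = 2.0
dU = 0.01                    # a small parcel of energy

# Chamber 1 *loses* energy to chamber 2: entropy goes up
gain = S(U1 - dU, 1, 1) + S(U2 + dU, 1, 1) - (S(U1, 1, 1) + S(U2, 1, 1))
# Chamber 1 *gains* energy from chamber 2: entropy goes down
loss = S(U1 + dU, 1, 1) + S(U2 - dU, 1, 1) - (S(U1, 1, 1) + S(U2, 1, 1))
print(gain > 0, loss < 0)  # True True
```

Entropy maximization only lets the hot chamber lose energy, exactly as the 𝚫S ≃ (1/T₁ - 1/T₂) 𝚫U₁ argument above predicts.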