Connecting the microcanonical with the (grand) canonical ensemble

  • #1
Philip Koeck
I'm trying to sort out how the microcanonical picture is connected to the canonical and the grand canonical.

If I consider a Helium gas, not necessarily with low density, in an isolated container (fixed energy and particle number) I can use the microcanonical ensemble to arrive at the BE-distribution, and, in the low occupancy limit, the Gibbs distribution. The temperature and chemical potential arise from the Lagrange multipliers that account for constant energy and particle number.

Now, according to some authors, I can also select a small number of atoms and treat them as system and the rest of the gas as heat bath.
This means that there is no physical barrier between the system and the heat bath.
This system can exchange energy and particles with the heat bath and this leads to temperature and chemical potential.
If I work out the expected number of atoms in each energy level I again get the BE-distribution.

There is nothing special about the atoms in the system, so I could choose any small subset of the atoms in the whole gas as system.
Therefore whatever I find for the system should also be true for the heat bath.

Does this sound right so far?

Then I'll go a step further and decide that the system should consist of a single atom.
Is this still a normal and sensible thing to do?
The first problem I face now is that it doesn't make sense to allow for particle exchange with the heat bath, I believe, (since the system is supposed to contain a single atom). So, where does the chemical potential come from now?
 
  • #2
There's a lot to discuss here.

Philip Koeck said:
I can use the microcanonical ensemble to arrive at the BE-distribution, and, in the low occupancy limit, the Gibbs distribution. The temperature and chemical potential arise from the Lagrange multipliers that account for constant energy and particle number.
In the microcanonical ensemble, you usually derive the entropy (for example, the Sackur-Tetrode equation for ideal gases) and then calculate the temperature and chemical potential using thermodynamic relations. Specifically, ##\frac{1}{T} = \frac{\partial S}{\partial E}## and ##\mu = -T\frac{\partial S}{\partial N}##. The method you're describing, with Lagrange multipliers, is used for the canonical ensemble. Unless you're talking about some fancy method that I haven't seen before?
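Just to make those two relations concrete (an illustrative sketch, not part of the original post): starting from the Sackur-Tetrode entropy, sympy reproduces equipartition from ##\frac{1}{T} = \frac{\partial S}{\partial E}## and the ideal-gas chemical potential from ##\mu = -T\frac{\partial S}{\partial N}##.

```python
import sympy as sp

# Symbols, all positive so sympy can simplify the logarithms cleanly
N, V, E, T, m, h, kB = sp.symbols('N V E T m h k_B', positive=True)

# Sackur-Tetrode entropy of a monatomic ideal gas
S = N * kB * (sp.log(V / N * (4*sp.pi*m*E / (3*N*h**2))**sp.Rational(3, 2))
              + sp.Rational(5, 2))

# 1/T = dS/dE gives 3*N*kB/(2*E), i.e. equipartition E = (3/2) N kB T
invT = sp.diff(S, E)
E_of_T = sp.solve(sp.Eq(invT, 1 / T), E)[0]
print(E_of_T)  # equals (3/2)*N*kB*T

# mu = -T dS/dN: the 5/2 terms cancel, leaving -kB*T*ln(...)
mu = sp.simplify(-T * sp.diff(S, N))
print(mu)
```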

Philip Koeck said:
Now, according to some authors, I can also select a small number of atoms and treat them as system and the rest of the gas as heat bath.
Philip Koeck said:
There is nothing special about the atoms in the system, so I could choose any small subset of the atoms in the whole gas as system.
Therefore whatever I find for the system should also be true for the heat bath.

Does this sound right so far?
This is an approximation that is valid for low interaction strengths, non-degeneracy, and no entanglement. Here's why:

1. Interaction strength: In a strongly interacting gas, the atom's Hamiltonian will depend on parameters that are outside the single-atom system, and so you need to take the whole gas into account. In a weakly interacting gas, I believe you can get away with a mean-field solution, which will have its own Maxwellian statistics.

2. Quantum degeneracy: If a gas is deeply degenerate, then you cannot distinguish one molecule from its neighbors, so it makes no sense to define a one-molecule system. Even if you could, you will still run into the same problem as with interactions, where one molecule will be correlated to the rest of the gas and a one-molecule system will not contain enough information.

3. Entanglement: Once again, the issue is correlations. When the atoms are correlated to the whole gas, you cannot take a subset of the gas as its own system.

You can resolve all these issues at once by using a Gibbsian picture of statistical thermodynamics, in which the system is the entire gas and the "heat bath" is a reservoir of infinitely many virtual copies of the gas. It's very similar to the way the Born rule works in the Copenhagen interpretation. With this correction, you can tackle strongly correlated systems.

There is a great discussion of this issue in Schrodinger's book "Statistical Thermodynamics" (ISBN-13: 978-0486661018), which is concise and cheap. If these kinds of questions keep you up at night, then I highly recommend reading it. Keep in mind that the book is very old, and it doesn't discuss the microcanonical ensemble at all.
 
  • Like
Likes DrClaude and Philip Koeck
  • #3
Twigg said:
In the microcanonical ensemble, you usually derive the entropy (for example, the Sackur-Tetrode equation for ideal gases) and then calculate the temperature and chemical potential using thermodynamic relations. Specifically, ##\frac{1}{T} = \frac{\partial S}{\partial E}## and ##\mu = -T\frac{\partial S}{\partial N}##. The method you're describing, with Lagrange multipliers, is used for the canonical ensemble. Unless you're talking about some fancy method that I haven't seen before?

I've written the following text about this: https://www.researchgate.net/publication/330675047_An_introduction_to_the_statistical_physics_of_distinguishable_and_indistinguishable_particles_in_an_ideal_gas
(I'm grateful for comments.)

My derivation might be a bit unusual, but I'm pretty sure it's microcanonical. I consider only an isolated system with fixed energy and usually also fixed number of particles.

The first result I get is the distribution of particles among energy levels with unspecified Lagrange multipliers. Then I use the connection between entropy and W and thermodynamic relations to give the multipliers physical meaning.
In the low occupancy limit, for an ideal gas, I can derive expressions for entropy (Sackur-Tetrode) and chemical potential and also the Maxwell-Boltzmann distributions.

Please ignore the part on distinguishable particles if you don't like that sort of thing.

Thanks for pointing out Schrödinger's book. I'm very motivated to read it.
 
  • #4
Twigg said:
This is an approximation that is valid for low interaction strengths, non-degeneracy, and no entanglement. Here's why:

1. Interaction strength: In a strongly interacting gas, the atom's Hamiltonian will depend on parameters that are outside the single-atom system, and so you need to take the whole gas into account. In a weakly interacting gas, I believe you can get away with a mean-field solution, which will have its own Maxwellian statistics.

2. Quantum degeneracy: If a gas is deeply degenerate, then you cannot distinguish one molecule from its neighbors, so it makes no sense to define a one-molecule system. Even if you could, you will still run into the same problem as with interactions, where one molecule will be correlated to the rest of the gas and a one-molecule system will not contain enough information.

3. Entanglement: Once again, the issue is correlations. When the atoms are correlated to the whole gas, you cannot take a subset of the gas as its own system.
Just to make sure: Are you talking about regarding a small subset of atoms in a gas as system and the rest as heat bath, or do these comments only apply to regarding a single atom as the system?

I'm also wondering whether these conditions actually mean that you can only apply this model to an (almost) ideal gas.
In that case I would say that the occupancy is so low that the gas is described by Boltzmann statistics no matter whether the particles are bosons or fermions.
 
  • #5
Sorry, I haven't had time to make a thorough reply. Here's a quick one.

I feel like I haven't been very clear and I want to clear a few things up.

When I say you can't take a subsystem consisting of a single atom in a quantum degenerate gas, what I mean is that you can't take a subsystem that consists of one particular atom. In other words, you can't follow one atom around like you can in a Maxwellian gas, because this is a quantum gas and the atoms are indistinguishable. Similar logic applies to other kinds of correlated gases (interacting, entangled, etc).

However, if what you're saying is you want to define a system that is a box of a certain volume that has an average occupancy number of 1 atom, that is totally allowed with a grand canonical ensemble. This is the approach I see used 99.9% of the time in research papers. That is why the math that's in your article works and is the standard for dealing with non-interacting quantum gases. (However, most of what you derived is actually the grand canonical ensemble. The only time I saw a microcanonical was when you derived Sackur-Tetrode.)

There is a quantum-compatible version of the microcanonical ensemble, it's just hard and I've never seen it taught in a class before (and I've never seen it used before, except as a novelty). There is a treatment of it in Landau and Lifshitz, Volume 5, Chapter 1. Sections 1-5 build up the classical microcanonical from Liouville's theorem, and sections 6-8 build up the quantum microcanonical from the density matrix.
 
  • Like
Likes Philip Koeck
  • #6
Twigg said:
When I say you can't take a subsystem consisting of a single atom in a quantum degenerate gas, what I mean is that you can't take a subsystem that consists of one particular atom. In other words, you can't follow one atom around like you can in a Maxwellian gas, because this is a quantum gas and the atoms are indistinguishable. Similar logic applies to other kinds of correlated gases (interacting, entangled, etc).

However, if what you're saying is you want to define a system that is a box of a certain volume that has an average occupancy number of 1 atom, that is totally allowed with a grand canonical ensemble. This is the approach I see used 99.9% of the time in research papers.
I was actually thinking of indistinguishable particles and what you say makes complete sense for them. It seems that many textbooks fail to discuss this. They make it sound like the system is actually 1 atom.
 
  • #7
Twigg said:
(However, most of what you derived is actually the grand canonical ensemble. The only time I saw a microcanonical was when you derived Sackur-Tetrode.)
Now I'm wondering if I've understood what microcanonical etc. means.
How do you see that my derivations are grand canonical?
 
  • #8
Your derivations are grand canonical because N is not fixed. You allow the occupancy of a single state to be a variable, and that's not possible in the microcanonical because N is fixed. This is a good thing! It's the right way to tackle indistinguishable atoms. This means you can break a degenerate gas down into small subsystems with varying occupancy, and these subsystems can be treated as being in equilibrium with the surrounding gas.
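As an aside (an illustrative sketch, with arbitrary energy units and chemical potentials chosen for the example): the grand canonical mean occupancy ##\langle n \rangle = 1/(e^{(E-\mu)/k_B T} - 1)## for bosons reduces to the Boltzmann factor at low occupancy and deviates strongly near degeneracy.

```python
import numpy as np

kT = 1.0  # energy units of k_B*T

def n_BE(E, mu):
    """Mean Bose-Einstein occupancy of a level at energy E (grand canonical)."""
    return 1.0 / (np.exp((E - mu) / kT) - 1.0)

def n_boltz(E, mu):
    """Maxwell-Boltzmann (Gibbs) limit of the same occupancy."""
    return np.exp(-(E - mu) / kT)

levels = np.linspace(0.5, 5.0, 10)

# Low occupancy (mu far below the lowest level): the two agree closely
print(np.max(np.abs(n_BE(levels, -5.0) - n_boltz(levels, -5.0))))

# Near degeneracy (mu just below the lowest level): they differ strongly
print(n_BE(0.5, 0.4), n_boltz(0.5, 0.4))
```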

If what you were doing was microcanonical, you would need to already know the Hamiltonian for your gas, solve for its energy spectrum (for a non-interacting gas, that's typically the particle-in-a-box eigenstates), and integrate the density of states over all phase space to get the total number of microstates, and thus the entropy, like you did when you derived Sackur-Tetrode and like what's presented with a little more generality in Landau and Lifshitz.

To get even more into the weeds, even Landau's procedure is only valid for unentangled states. If you had a gas at T=0 where some atoms' energy was entangled with other atoms', then you can forget about getting the entropy from a density of states calculation, since the atoms are in states that cannot be resolved into product states of the energy eigenstates. In general, the only way you can get the entropy of an arbitrary quantum system is to take its density matrix and calculate the von Neumann entropy, but this kind of defeats the point of statistical mechanics because it requires complete knowledge of the system's microscopic state from the start.
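For a concrete toy example of that last point (a hypothetical two-level sketch, not from the post): the von Neumann entropy ##S = -\mathrm{Tr}(\rho \ln \rho)## is computed from the eigenvalues of the density matrix.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), from the eigenvalues of the density matrix."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]  # convention: 0 * ln(0) = 0
    return -np.sum(p * np.log(p))

# A pure state |0><0| carries zero entropy
pure = np.array([[1.0, 0.0],
                 [0.0, 0.0]])

# A maximally mixed qubit carries S = ln(2)
mixed = np.eye(2) / 2

print(von_neumann_entropy(pure))   # zero
print(von_neumann_entropy(mixed))  # ~0.693
```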

Just to reiterate, in most of the systems that I see studied, you can solve pretty much everything that's solvable with just the grand canonical ensemble. Even for weakly interacting gases, you can transform them into non-interacting gases of quasi-particles with the mean-field approximation and Bogoliubov transformation. For an example, check out the Gross-Pitaevskii equation for weakly-interacting Bose gases which works by transforming the many-body state of the interacting gas into a Hartree-Fock state in a new basis (transforms from atom states to quasiparticle states). The math is very heavy but it's worthwhile.
 
  • #9
Twigg said:
Your derivations are grand canonical because N is not fixed. You allow the occupancy of a single state to be a variable, and that's not possible in the microcanonical because N is fixed. This is a good thing! It's the right way to tackle indistinguishable atoms. This means you can break a degenerate gas down into small subsystems with varying occupancy, and these subsystems can be treated as being in equilibrium with the surrounding gas.
If what you were doing was microcanonical, you would need to already know the Hamiltonian for your gas, solve for its energy spectrum (for a non-interacting gas, that's typically the particle-in-a-box eigenstates), and integrate the density of states over all phase space to get the total number of microstates, and thus the entropy, like you did when you derived Sackur-Tetrode and like what's presented with a little more generality in Landau and Lifshitz.
I'm still a bit puzzled.
I do assume that N is fixed and that's how I get the multiplier α, which then gives me μ.
If I don't assume constant N then I don't get α and μ (μ is zero?).

Maybe I should point out something that might be confusing: In my text the energy levels and states belong to single particles, not to the system.
The system has a fixed energy E (and a fixed N).
So I'm treating a situation where each particle moves in a constant external potential and all the other particles only act as a mean field contributing to this potential or not at all.
For an ideal gas I'm treating every molecule as an independent particle in a box.
So, yes, I do know the Hamiltonian to begin with.
 
  • #10
A very illuminating treatment of the relation between the different ensembles can be found in Sommerfeld, Lectures on Theoretical physics, Vol. 5 (Darwin-Fowler method).
 
  • Like
Likes Philip Koeck
  • #11
Sorry for my delay in getting back to this thread. I have a major deadline tomorrow and am cramming. I'll be back here once I get a chance to take my time and be thorough! :)
 
  • Like
Likes Philip Koeck
  • #12
OK, sorry for the delay. This response is unfortunately going to be quicker than I'd like, but oh well.

In hindsight, I believe that the arguments presented in your review are purely canonical/grand-canonical. I'll explain why below:

Super short version: You exclusively use the canonical / grand canonical ensemble in your article. The microcanonical ensemble does not have a partition function with a simple closed form; you have to work in phase space (or a quantum analog) to use it. However, it doesn't really matter, because the canonical ensemble is an asymptotically excellent approximation to the microcanonical ensemble as ##N \rightarrow \infty##.

Long version:
Following Section 2.2 of "Statistical Mechanics" by Pathria and Beale, let's consider a classical gas of N particles with phase-space distribution function ##\rho (q,p)## where ##q## is the generalized coordinate with 3N components (3 for each particle) and where ##p## is the generalized momentum with 3N components. At equilibrium, we know that ##\frac{\partial \rho}{\partial t} = 0##, so by Liouville's theorem $$[\rho,H] = \sum_{i=1}^{3N} \left( \frac{\partial \rho}{\partial q_i} \frac{\partial H}{\partial p_i} - \frac{\partial \rho}{\partial p_i} \frac{\partial H}{\partial q_i} \right)= 0$$

One subset of solutions to this diff eq is when $$\rho(q,p) = const.$$ This implies that any function ##f(q,p)## has an expectation value $$\langle f \rangle = \frac{1}{\omega} \int_\omega f(q,p) d\omega$$ where ##\omega## represents the total phase space volume.

Another set of solutions to the above diff eq is when $$\rho(q,p) = \rho(H(q,p)),$$ in other words when the distribution function only depends on the Hamiltonian (energy) and not on ##q## or ##p## independently.

This second solution (##\rho(q,p) = \rho(H(q,p))##) corresponds to the canonical ensemble. After all, in the canonical ensemble, ##\rho(q,p) \propto \exp(-H(q,p) / k_B T)##. Also, clearly the second solution encompasses the first solution since you could always say that a constant distribution ##(\rho(q,p) = const.)## is a special case of ##\rho(q,p) = \rho(H(q,p))## for which ##\rho(H(q,p)) = const.## is the most trivial function of ##H(q,p)##. This first solution, with a constant phase-space distribution, actually corresponds to the microcanonical ensemble.

It may sound crazy that the microcanonical ensemble has ##\rho(q,p) = const.## because it's supposed to agree with the canonical ensemble ##\rho(q,p) \propto \exp(-H(q,p) / k_B T)##. How could they agree if the probabilities aren't the same? The answer is because in the microcanonical ensemble, the energy E is fixed and thus the phase space distribution is given by $$\rho(q,p) = \begin{cases} \frac{1}{\omega} & \text{for } H(q,p) = E \\ 0 & \text{otherwise} \end{cases}$$ where ##\omega## is the phase space volume of the surface ##H(q,p) = E## (it's just the normalization constant). So within the restricted phase space, the Hamiltonian is constant, and thus the microcanonical ensemble is both a solution of the first kind and a solution of the second kind. Contrast this to the case of the canonical ensemble, where the energy E is allowed to fluctuate since the system is closed (in contact with a reservoir) not isolated.

In your derivations, you always fall back on the partition function. For example, you write a single-particle partition function $$ Z_1 = \int_0^{\infty} e^{-\beta E(k)} g(k) dk $$ where g(k) is density of states and E(k) is the single-particle energy. The corresponding many-particle partition function is $$Z_N = Z_1^N$$. This clearly disagrees with the microcanonical ensemble because in the microcanonical ensemble, the total, multi-particle energy is fixed and so there are constraints on the limits of integration.
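As a sanity check of that single-particle partition function (an illustrative sketch; it assumes the standard particle-in-a-box density of states ##g(k) = V k^2 / 2\pi^2## for one spin state and ##E(k) = \hbar^2 k^2 / 2m##), the integral evaluates to the familiar ##Z_1 = V (m k_B T / 2\pi\hbar^2)^{3/2} = V/\lambda_{dB}^3##:

```python
import sympy as sp

V, m, hbar, beta, k = sp.symbols('V m hbar beta k', positive=True)

# Particle-in-a-box density of states in k (one spin state)
g = V * k**2 / (2 * sp.pi**2)
# Single-particle kinetic energy
E = hbar**2 * k**2 / (2 * m)

Z1 = sp.integrate(g * sp.exp(-beta * E), (k, 0, sp.oo))

# Compare with V / lambda_dB^3 = V * (m / (2*pi*beta*hbar^2))^(3/2)
target = V * (m / (2 * sp.pi * beta * hbar**2))**sp.Rational(3, 2)
print(sp.simplify(Z1 / target))  # 1
```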

How do you resolve the fact that the microcanonical ensemble and the canonical ensemble give the same results? Well, for ##N \rightarrow \infty##, the phase space volume that the system occupies converges asymptotically to the volume of the constant-energy hypersurface defined by ##H(q,p) = E##. (You will see this approximation used in many textbook derivations of Sackur-Tetrode.)

Here's a more intuitive way to think about it. A system X that is a gas in thermal equilibrium with a heat bath and a system Y that is a gas in a perfectly insulated container do not follow equivalent statistics. System X's total energy will have thermal fluctuations, and system Y's will not. However, as the number of particles in both gases increases, system X's total energy fluctuations will become smaller and smaller relative to the average total energy. Specifically, these fluctuations can be derived from the heat capacity in the canonical ensemble, and you get $$\frac{\Delta E_{rms}}{\langle E \rangle} = \frac{\sqrt{k_B T^2 C_v}}{\langle E \rangle} \sim \frac{1}{\sqrt{N}}$$ (See here for a derivation.) Thus, as ##N \rightarrow \infty##, system X (a gas in thermal contact with a reservoir a.k.a. canonical ensemble) looks asymptotically equivalent to system Y (a perfectly insulated gas a.k.a. the microcanonical ensemble).
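The ##1/\sqrt{N}## scaling is easy to check numerically (an illustrative Monte Carlo sketch: for a classical monatomic ideal gas in the canonical ensemble, the total kinetic energy of 3N quadratic degrees of freedom is Gamma-distributed with shape ##3N/2## and scale ##k_B T##, so ##\Delta E_{rms}/\langle E \rangle = \sqrt{2/3N}## exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0

def relative_fluctuation(N, samples=200_000):
    """Monte Carlo estimate of dE_rms / <E> for an N-atom classical ideal
    gas in the canonical ensemble.  The total kinetic energy of 3N quadratic
    degrees of freedom is Gamma-distributed: E ~ Gamma(shape=3N/2, scale=kT)."""
    E = rng.gamma(shape=1.5 * N, scale=kT, size=samples)
    return E.std() / E.mean()

for N in (10, 100, 1000):
    # Monte Carlo estimate vs the exact sqrt(2/(3N))
    print(N, relative_fluctuation(N), np.sqrt(2 / (3 * N)))
```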
 
  • Like
Likes Philip Koeck and vanhees71
  • #13
Just a quick check:
I was following the definition of assemblies given in Blundell's book, which I quote below.
It sounds much simpler than what I read here at PF and also in https://www2.ph.ed.ac.uk/~mevans/sp/sp2.pdf.
Is Blundell oversimplifying?

From Blundell's book:
We are using probability to describe thermal systems and our approach
is to imagine repeating an experiment to measure a property of a system
again and again because we cannot control the microscopic properties
(as described by the system’s microstates). In an attempt to formalize
this, Josiah Willard Gibbs in 1878 introduced a concept known as an
ensemble. This is an idealization in which one considers making a large
number of mental ‘photocopies’ of the system, each one of which represents
a possible state the system could be in. There are three main
ensembles that tend to be used in thermal physics:
(1) The microcanonical ensemble: an ensemble of systems that
each have the same fixed energy.
(2) The canonical ensemble: an ensemble of systems, each of which
can exchange its energy with a large reservoir of heat. As we shall
see, this fixes (and defines) the temperature of the system.
(3) The grand canonical ensemble: an ensemble of systems, each
of which can exchange both energy and particles with a large reservoir.
(This fixes the system’s temperature and a quantity known
as the system’s chemical potential. We will not consider this again
until Chapter 22 and it can be ignored for the present.)
 
  • Like
Likes Twigg
  • #14
This discussion is consistent with what I presented above. For the microcanonical, fixed energy defines a 6N-1 dimensional hypersurface in phase space. I'm not unnecessarily introducing high-level math here: You actually have to integrate out the 6N-1 dimensional hypersurface to get the number of microstates (and thus the entropy). Trust me, I avoid high level math like the plague o0) This annoying integration is why you seldom see the microcanonical ensemble outside of the lecture hall. (Edit: corrected 3N-1 to 6N-1, since phase space has 3+3 degrees of freedom per particle.)

The valuable part of what I showed above was that as ##N \rightarrow \infty##, the canonical asymptotically converges to the microcanonical, since the spread in energy of the canonical goes as ##\frac{\delta E}{E} \propto \frac{1}{\sqrt{N}}##. You can actually prove this without doing any math on phase space, using just the derivation of energy fluctuations in the canonical ensemble, which I linked in my last post.

The "mental photocopies" approach is exactly what Schrodinger advocates in his book. I just liked Schrodinger's in-depth discussion of the issue and how it improves over the Maxwellian picture.

Sorry for quick reply, in a rush! Hope this post is helpful
 
Last edited:
  • Like
Likes vanhees71 and Philip Koeck
  • #15
I wonder if it would be possible to use PF to organize an informal reading course.
The idea would be to choose a good book on statistical physics or maybe some online material and go through it chapter by chapter. Every participant could then ask questions until everything is crystal clear.
 
  • Love
  • Like
Likes Twigg and vanhees71
  • #16
Twigg said:
In your derivations, you always fall back on the partition function. For example, you write a single-particle partition function $$ Z_1 = \int_0^{\infty} e^{-\beta E(k)} g(k) dk $$ where g(k) is density of states and E(k) is the single-particle energy. The corresponding many-particle partition function is $$Z_N = Z_1^N$$. This clearly disagrees with the microcanonical ensemble because in the microcanonical ensemble, the total, multi-particle energy is fixed and so there are constraints on the limits of integration.
I don't quite see a problem there.
The gi in my text are states available to individual particles.
These states can be empty, the way I see it.
Then in section 6 I introduce g(k) dk as the density of states for an ideal gas.
This is just the density of states available to an individual particle in a box taken straight from a textbook.
Again the states can be empty.

When I introduce the single particle partition function I only do that as a mathematical trick and I never introduce the many-particle partition function.
So, in a sense, the integral I introduce (maybe I shouldn't call it a partition function) has nothing to do with the particles. It is purely defined by the empty container.

I have to emphasize that I never consider many-particle states in my text. I just think of individual particles that can distribute themselves in a fixed landscape of energy levels and states.
I realize that this doesn't agree with modern quantum mechanics.

There is one major problem that I see, though, and I've pointed it out in another thread:
Although I assume that the total energy of all the particles together is fixed and finite, the distribution functions I get allow for an individual particle to have an energy that is higher than the total energy and even infinite. Strange!
 
Last edited:
  • Like
Likes vanhees71
  • #17
Philip Koeck said:
I don't quite see a problem there.
The gi in my text are states available to individual particles.
These states can be empty, the way I see it.
Then in section 6 I introduce g(k) dk as the density of states for an ideal gas.
This is just the density of states available to an individual particle in a box taken straight from a textbook.
Again the states can be empty.

When I introduce the single particle partition function I only do that as a mathematical trick and I never introduce the many-particle partition function.
So, in a sense, the integral I introduce (maybe I shouldn't call it a partition function) has nothing to do with the particles. It is purely defined by the empty container.

I have to emphasize that I never consider many-particle states in my text. I just think of individual particles that can distribute themselves in a fixed landscape of energy levels and states.
I realize that this doesn't agree with modern quantum mechanics.

There is one major problem that I see, though, and I've pointed it out in another thread:
Although I assume that the total energy of all the particles together is fixed and finite, the distribution functions I get allow for an individual particle to have an energy that is higher than the total energy and even infinite. Strange!
Here's my own reply:
I've just seen that I express the total energy as a weighted sum of Gibbs factors and that's how I get the partition function.
The source of the problem you point out seems to be that according to the results of my derivation (for example the Gibbs distribution at low occupancy) individual particles can have energies above the total energy.
 
Last edited:
  • Like
Likes Twigg
  • #18
Philip Koeck said:
The source of the problem you point out seems to be that according to the results of my derivation (for example the Gibbs distribution at low occupancy) individual particles can have energies above the total energy.
You could have energies above or below the total energy in the canonical. In the microcanonical, there is only one allowed value for the total energy. Does that help?
 
  • Like
Likes Philip Koeck
  • #19
Philip Koeck said:
Here's my own reply:
I've just seen that I express the total energy as a weighted sum of Gibbs factors and that's how I get the partition function.
The source of the problem you point out seems to be that according to the results of my derivation (for example the Gibbs distribution at low occupancy) individual particles can have energies above the total energy.
In the canonical ensemble only the average energy is fixed. So there is in principle no upper limit. Of course there must be a lower limit, because otherwise there's no canonical ensemble definable at all.

Physically the canonical ensemble represents an open system, which exchanges energy (but not particles) with some (much) larger system ("heat bath"). That's why in the Boltzmann distribution of e.g., a non-relativistic gas of non-interacting particles,
$$f(\vec{p})=g \exp \left (-\frac{\vec{p}^2}{2m k_{\text{B}} T} \right),$$
there's no upper limit of a particle's energy.

What's fixed in the canonical ensemble is not the energy (range) but the temperature, which is all that characterizes the "heat bath".
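A quick numerical illustration of this point (a sketch, with ##m = k_B T = 1## in arbitrary units): sampling momenta from the distribution above, the mean single-particle energy is ##\frac{3}{2}k_B T##, but individual particles routinely exceed it many times over.

```python
import numpy as np

rng = np.random.default_rng(1)
m, kT = 1.0, 1.0  # arbitrary units for the sketch
N = 100_000

# f(p) ~ exp(-p^2 / (2*m*kT)): each momentum component is Gaussian
p = rng.normal(scale=np.sqrt(m * kT), size=(N, 3))
E = (p**2).sum(axis=1) / (2 * m)

print(E.mean())            # close to (3/2) kT
print(E.max() / E.mean())  # some particles far exceed the average
```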
 
  • Like
Likes Twigg
  • #20
Philip Koeck said:
I'm trying to sort out how the microcanonical picture is connected to the canonical and the grand canonical.

If I consider a Helium gas, not necessarily with low density, in an isolated container (fixed energy and particle number) I can use the microcanonical ensemble to arrive at the BE-distribution, and, in the low occupancy limit, the Gibbs distribution. The temperature and chemical potential arise from the Lagrange multipliers that account for constant energy and particle number.

Now, according to some authors, I can also select a small number of atoms and treat them as system and the rest of the gas as heat bath.
This means that there is no physical barrier between the system and the heat bath.
This system can exchange energy and particles with the heat bath and this leads to temperature and chemical potential.
If I work out the expected number of atoms in each energy level I again get the BE-distribution.

There is nothing special about the atoms in the system, so I could choose any small subset of the atoms in the whole gas as system.
Therefore whatever I find for the system should also be true for the heat bath.

Does this sound right so far?

Then I'll go a step further and decide that the system should consist of a single atom.
Is this still a normal and sensible thing to do?
The first problem I face now is that it doesn't make sense to allow for particle exchange with the heat bath, I believe, (since the system is supposed to contain a single atom). So, where does the chemical potential come from now?
I've thought about something similar... Imagine a single atom in a balloon, where the balloon is in an absolute vacuum. Give the atom some kind of heat energy. So does the balloon have a uniform expansion, or is it all over the place?
 
  • #21
valenumr said:
I've thought about something similar... Imagine a single atom in a balloon, where the balloon is in an absolute vacuum. Give the atom some kind of heat energy. So does the balloon have a uniform expansion, or is it all over the place?
I don't understand what you are asking. Can you explain?
 
  • #22
Philip Koeck said:
I don't understand what you are asking. Can you explain?
I mean, imagine a gas consisting of one atom. Can you measure the air pressure of its container? The pressure is a macro observable, I guess. How many atoms do you need for it to become observable?
 
  • Like
Likes Philip Koeck
  • #23
valenumr said:
I mean, imagine a gas consisting of one atom. Can you measure the air pressure of its container? The pressure is a macro observable, I guess. How many atoms do you need for it to become observable?
There's certainly a gray zone where a collection of atoms becomes a gas.
I'm sure that's also a field of study within physics, but I don't think it's statistical physics.
It's certainly an interesting question.

Just to clarify: When I said that an individual atom or a small subvolume of a gas that contains one atom on average could be treated as a system I meant that there is still a gas with very many atoms that this subvolume belongs to. Otherwise it wouldn't make sense to use statistical physics, I would say.
 
  • Like
Likes vanhees71
  • #24
vanhees71 said:
In the canonical ensemble only the average energy is fixed. So there is in principle no upper limit. Of course there must be a lower limit, because otherwise there's no canonical ensemble definable at all.

Physically the canonical ensemble represents an open system, which exchanges energy (but not particles) with some (much) larger system ("heat bath"). That's why in the Boltzmann distribution of e.g., a non-relativistic gas of non-interacting particles,
$$f(\vec{p})=g \exp \left (-\frac{\vec{p}^2}{2m k_{\text{B}} T} \right),$$
there's no upper limit of a particle's energy.

What's fixed in the canonical ensemble is not the energy (range) but the temperature, which is all that characterizes the "heat bath".
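The quoted Maxwell distribution can be sanity-checked numerically: it says each Cartesian momentum component is Gaussian with variance ##m k_B T##, so the mean kinetic energy should come out at ##\frac{3}{2} k_B T## by equipartition. A minimal sketch in Python (illustrative units chosen so that ##m = k_B T = 1##):

```python
import numpy as np

rng = np.random.default_rng(0)
m, kT = 1.0, 1.0      # illustrative units: m = k_B T = 1
N = 200_000

# f(p) ∝ exp(-p^2 / (2 m k_B T)) means each Cartesian momentum
# component is Gaussian with variance m k_B T.
p = rng.normal(0.0, np.sqrt(m * kT), size=(N, 3))

mean_kinetic = (p**2).sum(axis=1).mean() / (2 * m)
print(mean_kinetic)   # close to 1.5 * kT, as equipartition predicts
```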
I've had a look at the initial pages of Hill's textbook.
Essentially he uses the same mathematical methods that I use in my text, but he discusses an assembly of closed systems that together form an insulated "supersystem" whereas I discuss particles in an insulated system.
So in Hill's treatment the energies belong to systems whereas in mine they belong to individual particles.
Maybe Hill could be a good book to read, if there is interest in a "reading course".
Or is there a better choice?
 
  • #25
Which textbook do you mean, precisely? What you describe seems a bit vague. What is an "insulated supersystem"?

If you want to describe a closed many-body system in thermal equilibrium, it's of course described by the microcanonical ensemble, i.e., an ensemble where the (relevant) additive conserved quantities are fixed.
 
  • #26
vanhees71 said:
Which textbook do you mean, precisely? What you describe seems a bit vague. What is an "insulated supersystem"?

If you want to describe a closed many-body system in thermal equilibrium, it's of course described by the microcanonical ensemble, i.e., an ensemble where the (relevant) additive conserved quantities are fixed.
An Introduction to Statistical Thermodynamics by Terrell L. Hill.

"Supersystem" is just the word he uses for an assembly of closed systems (real or imagined) that are in thermal contact with each other, but don't exchange particles, combined into a single insulated system.
I think it's just his way of introducing Gibbs' approach.

Somebody on PF recommended Hill's book and it seems readable to me.

I'm open to other suggestions, of course, maybe Sommerfeld?
 
  • #27
I'd not use Sommerfeld's vol. 5. Though it's a good book, particularly the part on kinetic theory, there's too much thermodynamics at the beginning for my taste ;-)).

I'd start right away with the statistical approach. So Hill is a good choice, though I'm not too familiar with it.

Another nice introductory book is the Statistical Mechanics volume of the Berkeley Physics Course (the "little Reif").
 
  • #28
Sorry, I've been behind on this thread. Just got finished with my qualifying exam and taking a breather!

Philip Koeck said:
When I introduce the single particle partition function I only do that as a mathematical trick and I never introduce the many-particle partition function.
I think there's one thing I forgot to respond to in this post (#16). The main mathematical detail that makes your article's approach firmly canonical is the use of the single-particle partition function. Let me illustrate.

Suppose you had a N=2 particle ensemble, for the sake of argument.

If this ensemble is in thermal contact with a heat bath (i.e., canonical), then you can write a single-particle partition function with a definite temperature ##Z_1 = \int \exp[-\beta H(q,p)] dq dp##, and then square it to get the 2-particle partition function ##Z_2 = Z_1^2##.

Now if the ensemble is isolated, then it's more of a pain. You know the total energy of the ensemble ##E_{tot}## is fixed. So, the overall partition function would look like this: $$\begin{align*} Z_2 &= \int_{H_2(q_2,p_2) = E_{tot} - H_1(q_1,p_1)} \left( \exp[-\beta H_2(q_2,p_2)] \int \exp[-\beta H_1(q_1,p_1)] dq_1 dp_1 \right) dq_2 dp_2 \\ &= \exp[-\beta E_{tot}] \int_{H_2(q_2,p_2) = E_{tot} - H_1(q_1,p_1)} \left( \int dq_1 dp_1 \right) dq_2 dp_2 \\ &\neq Z_1^2\end{align*}$$
The important thing here is that the limits of integration are now coupled by the fact that the total energy is fixed. Another way to put this is that the energy of the one particle is correlated with the energy of the other. As a result, the partition function has to be solved by integrating the volume of the constant-energy surface in phase space.
I hope this makes it clear how your article's math is canonical.
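The coupling of the integration limits can be made concrete with a toy model (my own sketch, not from the article): draw two energies independently with a Boltzmann weight for the canonical case, and fix their sum for the isolated case. In the isolated case one energy determines the other, so the two are perfectly anticorrelated:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
E_tot = 2.0

# Canonical toy model: two independent energies, each drawn with
# Boltzmann weight exp(-E) (i.e. beta = 1).
E1_c = rng.exponential(1.0, n)
E2_c = rng.exponential(1.0, n)

# Microcanonical toy model: the total energy is fixed, so the second
# particle's energy is fully determined by the first.
E1_m = rng.uniform(0.0, E_tot, n)
E2_m = E_tot - E1_m

print(np.corrcoef(E1_c, E2_c)[0, 1])   # close to 0: independent
print(np.corrcoef(E1_m, E2_m)[0, 1])   # -1: perfectly anticorrelated
```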

The other question I wasn't sure about for a while was why your math turned out canonical even though you used a Lagrange multiplier to constrain the total internal energy of the system. I believe the answer is that using Stirling's approximation relaxes the constraint on the total energy. If you somehow, by some serious math acrobatics, were able to complete the optimization problem without using Stirling's, then you would end up with a true microcanonical ensemble. However, using Stirling's approximation, you get a microcanonical ensemble in the limit as ##N \rightarrow \infty##, which is equivalently a canonical ensemble as shown in previous posts.
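The large-##N## statement can also be illustrated numerically with a toy assumption of my own: ##N## classical particles whose energies are distributed uniformly on the simplex ##\sum_i E_i = E_{tot}## (a crude stand-in for the microcanonical constraint). The marginal distribution of one particle's energy then approaches an exponential, i.e. a Boltzmann factor, as ##N \rightarrow \infty##; for an exponential the ratio var/mean² equals 1:

```python
import numpy as np

rng = np.random.default_rng(2)
samples = 20_000
ratios = {}

for N in (2, 10, 500):
    E_tot = float(N)   # keep the mean energy per particle at 1
    # The uniform distribution on the simplex sum(E_i) = E_tot is a
    # scaled flat Dirichlet; keep the first particle's energy.
    E1 = rng.dirichlet(np.ones(N), samples)[:, 0] * E_tot
    # The exact ratio is (N-1)/(N+1), which tends to 1 as N grows,
    # i.e. the marginal tends to an exponential (Boltzmann) form.
    ratios[N] = E1.var() / E1.mean()**2
    print(N, ratios[N])
```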

Philip Koeck said:
I wonder if it would be possible to use PF to organize an informal reading course.
The idea would be to choose a good book on statistical physics or maybe some online material and go through it chapter by chapter. Every participant could then ask questions until everything is crystal clear.
I like that idea! I'd certainly be willing to try!
 
  • #29
Twigg said:
Philip Koeck said:
I wonder if it would be possible to use PF to organize an informal reading course.
The idea would be to choose a good book on statistical physics or maybe some online material and go through it chapter by chapter. Every participant could then ask questions until everything is crystal clear.

I like that idea! I'd certainly be willing to try!
Also answering vanHees!

Maybe one could start with chapter 3 in Reif and then loop back to the first 2 chapters in case that's necessary.
How do we do this? Just start a thread and hope for interest?
 
  • #30
Twigg said:
I think there's one thing I forgot to respond to in this post (#16). The main mathematical detail that makes your article's approach firmly canonical is the use of the single-particle partition function. Let me illustrate.

Suppose you had a N=2 particle ensemble, for the sake of argument.

If this ensemble is in thermal contact with a heat bath (i.e., canonical), then you can write a single-particle partition function with a definite temperature ##Z_1 = \int \exp[-\beta H(q,p)] dq dp##, and then square it to get the 2-particle partition function ##Z_2 = Z_1^2##.

Now if the ensemble is isolated, then it's more of a pain. You know the total energy of the ensemble ##E_{tot}## is fixed. So, the overall partition function would look like this: $$\begin{align*} Z_2 &= \int_{H_2(q_2,p_2) = E_{tot} - H_1(q_1,p_1)} \left( \exp[-\beta H_2(q_2,p_2)] \int \exp[-\beta H_1(q_1,p_1)] dq_1 dp_1 \right) dq_2 dp_2 \\ &= \exp[-\beta E_{tot}] \int_{H_2(q_2,p_2) = E_{tot} - H_1(q_1,p_1)} \left( \int dq_1 dp_1 \right) dq_2 dp_2 \\ &\neq Z_1^2\end{align*}$$
The important thing here is that the limits of integration are now coupled by the fact that the total energy is fixed. Another way to put this is that the energy of the one particle is correlated with the energy of the other. As a result, the partition function has to be solved by integrating the volume of the constant-energy surface in phase space.
I hope this makes it clear how your article's math is canonical.

The other question I wasn't sure about for a while was why your math turned out canonical even though you used a Lagrange multiplier to constrain the total internal energy of the system. I believe the answer is that using Stirling's approximation relaxes the constraint on the total energy. If you somehow, by some serious math acrobatics, were able to complete the optimization problem without using Stirling's, then you would end up with a true microcanonical ensemble. However, using Stirling's approximation, you get a microcanonical ensemble in the limit as ##N \rightarrow \infty##, which is equivalently a canonical ensemble as shown in previous posts.
I certainly use Stirling's formula a lot, so all my results can only be valid for large N. I agree completely.

One big problem occurs in section 6 when I try to specify what the chemical potential and the density of states actually are for an ideal gas. That's where I switch to an integral with an infinite upper limit although the total energy is finite. I guess that's related to what you point out.
I really only use an infinite upper limit so that I can evaluate the integrals for total energy and particle number.
Until section 6 I only consider a finite set of discrete energy levels, which ought to mean that I can keep the highest energy level below the total energy simply by definition.
I simply shouldn't try to apply the theory to anything real! :)
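The damage done by replacing the finite upper limit with infinity can be quantified for the usual ideal-gas-type integral ##\int_0^{E_{max}} \sqrt{E}\, e^{-E/k_B T}\, dE## (a sketch of my own; the closed form follows from substituting ##E = u^2## and integrating by parts):

```python
import math

kT = 1.0
# The infinite-limit integral equals Gamma(3/2) (kT)^(3/2).
Z_full = math.sqrt(math.pi) / 2 * kT**1.5

def Z_truncated(E_max):
    """Integral of sqrt(E) exp(-E/kT) from 0 to E_max, in closed form."""
    x = E_max / kT
    return kT**1.5 * (math.sqrt(math.pi) / 2 * math.erf(math.sqrt(x))
                      - math.sqrt(x) * math.exp(-x))

for E_max in (5.0, 10.0, 20.0):
    # Relative error introduced by cutting the integral off at E_max.
    print(E_max, 1 - Z_truncated(E_max) / Z_full)
```

Once the cutoff is a few tens of ##k_B T## the truncation error is negligible, which is presumably why the switch to an infinite upper limit in section 6 is harmless in practice.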
 
  • #31
Philip Koeck said:
Also answering vanHees!

Maybe one could start with chapter 3 in Reif and then loop back to the first 2 chapters in case that's necessary.
How do we do this? Just start a thread and hope for interest?
Which Reif? The Berkeley physics course one or the more advanced standalone textbook? I'm fine with both.
 
  • #32
Philip Koeck said:
I certainly use Stirling's formula a lot, so all my results can only be valid for large N. I agree completely.

One big problem occurs in section 6 when I try to specify what the chemical potential and the density of states actually are for an ideal gas. That's where I switch to an integral with an infinite upper limit although the total energy is finite. I guess that's related to what you point out.
I really only use an infinite upper limit so that I can evaluate the integrals for total energy and particle number.
Until section 6 I only consider a finite set of discrete energy levels, which ought to mean that I can keep the highest energy level below the total energy simply by definition.
I simply shouldn't try to apply the theory to anything real! :)
That's one of the subtler points of stat. mech. The "thermodynamic limit" is not as simple as it looks!
 
  • #33
vanhees71 said:
Which Reif? The Berkeley physics course one or the more advanced standalone textbook? I'm fine with both.
I would say the Berkeley one.
 
  • #34
That's a good choice. The other book is very detailed and sometimes you can get lost in these details, though it's a very good source if you want to study the issues in more depth from different points of view.
 
  • #35
vanhees71 said:
That's a good choice. The other book is very detailed and sometimes you can get lost in these details, though it's a very good source if you want to study the issues in more depth from different points of view.
How do we do this? Should I just start a thread called "Reading Reif together" or something like that, or should this come from an advisor?
 
