Gibbs paradox: an urban legend in statistical physics

In summary, the conversation concerns the supposed paradox in classical statistical physics surrounding the mixing of distinguishable particles. One side argues that there is no real paradox, citing various references and papers: the correction factor of ##1/N!## in the entropy calculation is a logical necessity, imposed not in order to obtain an extensive entropy but by combinatorial logic when counting the states of systems that exchange particles. This is different from Gibbs' definition of entropy in terms of phase-space volume. The other side argues that within classical mechanics there is no logical reason to include this factor; in quantum theory it is a result of indistinguishability, and this correction persists in the classical limit. On the first view it is instead a natural result of Boltzmann's definition of entropy in terms of the probability of a macrostate.
  • #71
autoUFC said:
I am not sure what you are saying is not justified within a strictly classical theory.

Is the idea that classical particles may be indistinguishable (or impermutable, as I prefer)?
If so, I agree: indistinguishable particles (in the quantum sense) are not consistent with classical mechanics.

If you intend to say that the inclusion of the ##1/N!## is not justified in classical mechanics, then you are wrong. This term is demanded by the definition of entropy as ##S=k\ln W##, with ##W## being the number of accessible states for a system with two partitions that can exchange identical classical particles.
Classical particles are distinguishable, because you can track each individual particle from its initial position in phase space.

So the inclusion of ##1/N!## must be justified from another model for matter, and of course since 1926 we know it's quantum mechanics and the indistinguishability of identical particles. The argument with the phase-space trajectories is obsolete because of the uncertainty relation, i.e., within a system of many identical particles you can't follow any individual particle. In the formalism this is implemented in the Bose or Fermi condition on the many-body Hilbert space, according to which only those vectors are allowed which are symmetric (bosons) or antisymmetric (fermions) under arbitrary permutations of particles, i.e., a permutation of a particle pair doesn't change the state. That justifies the inclusion of the said factor ##1/N!## for an ##N##-body system of identical particles.
 
  • Like
Likes hutchphd
  • #72
vanhees71 said:
Classical particles are distinguishable, because you can track each individual particle from its initial position in phase space.

So the inclusion of ##1/N!## must be justified from another model for matter, and of course since 1926 we know it's quantum mechanics and the indistinguishability of identical particles. The argument with the phase-space trajectories is obsolete because of the uncertainty relation, i.e., within a system of many identical particles you can't follow any individual particle. In the formalism this is implemented in the Bose or Fermi condition on the many-body Hilbert space, according to which only those vectors are allowed which are symmetric (bosons) or antisymmetric (fermions) under arbitrary permutations of particles, i.e., a permutation of a particle pair doesn't change the state. That justifies the inclusion of the said factor ##1/N!## for an ##N##-body system of identical particles.

No. You are wrong. The inclusion of ##1/N!## is not only justified under the classical model, it is demanded.
When you count the number of states of an isolated system with energy ##E##, composed of two subsystems in thermal contact with energies ##E_1## and ##E_2##, you can consider that the number of states is the product of the numbers of states of the two subsystems:
##W(E_1)=W_1(E_1) \times W_2(E_2)##
Note that ##W(E_1)## is the number of accessible states of the whole system given the partition of the energy, with ##E_1## in subsystem 1; it is therefore a function of ##E_1##.

When you consider that the two subsystems can also exchange classical particles, with ##N=N_1+N_2##, you have to include the number of possible permutations of classical particles between the two subsystems:
##W(N_1)=W_1(N_1) \times W_2(N_2) \times [N!/(N_1!\,N_2!)]##
or
##W(N_1)/N!= [W_1(N_1)/N_1!] \times [W_2(N_2)/N_2!]##

Therefore, when you consider exchange of classical particles, the ##1/N!## needs to be included.

I agree that quantum particles do justify the inclusion of this term. However, to deny that this term is also necessary under the classical model is simply to deny combinatorial logic.

Note that the nice factorization leads to an extensive entropy, meaning that extensivity follows from combinatorial logic. You do not include the term to obtain extensivity; you include it due to logic and obtain extensivity as a result.
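As a sanity check, here is a brute-force enumeration in a toy model (my own construction, not from any reference: subsystem ##i## offers ##v_i## distinguishable one-particle cells, so ##W_i(N_i)=v_i^{N_i}##; the numbers are arbitrary). It reproduces ##W_1 \times W_2 \times [N!/(N_1!\,N_2!)]## exactly:

Python:
from itertools import product
from math import comb

# Toy check of W(N1) = W1(N1) * W2(N2) * N!/(N1! N2!) for labeled particles.
# Subsystem i has v_i one-particle cells, so W_i(n) = v_i**n internal states.
v1, v2, N = 2, 3, 4            # cells per subsystem, total labeled particles

# Each microstate assigns every labeled particle to one of the v1+v2 cells
# (cells 0..v1-1 belong to subsystem 1).
counts = {}
for state in product(range(v1 + v2), repeat=N):
    n1 = sum(1 for cell in state if cell < v1)   # particles in subsystem 1
    counts[n1] = counts.get(n1, 0) + 1

for n1, brute in sorted(counts.items()):
    formula = v1**n1 * v2**(N - n1) * comb(N, n1)
    print(n1, brute, formula, brute == formula)  # always True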
 
  • #73
autoUFC said:
No. You are wrong. The inclusion of ##1/N!## is not only justified under the classical model, it is demanded.
It's really difficult to discuss this matter if you are not careful. I say the inclusion of ##1/N!## is NOT justified within the classical particle paradigm. The only way to come to the conclusion that you must include such a factor is the demand that the statistically defined entropy (applied to an ideal gas) must be extensive, as the phenomenological entropy is, and to avoid the Gibbs paradox. There's no reason other than this "empirical" one within the classical particle picture.

The reason to include the factor from a modern point of view is quantum theory and the impossibility of tracking individual particles in a system of identical particles due to the Heisenberg uncertainty relation.

The simplest derivation is, of course, to start with quantum statistics and count quantum states of the many-body system (most easily in the "grand canonical approach"), where the indistinguishability of fermions and bosons is worked in from the very beginning and no difficulties with the additional factor needed in the classical derivation show up. It then turns out that classical Boltzmann statistics, including the said factor, is a good approximation if the occupation probability for all single-particle states is small, i.e., if ##\exp(\beta(E-\mu)) \gg 1##.
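For illustration, here is a small numerical sketch of that limit (my own toy values, units with ##k=1## and ##\beta=1/T##), comparing the Fermi-Dirac, Bose-Einstein, and corrected-Boltzmann mean occupation numbers:

Python:
import numpy as np

# Mean occupation per single-particle state of energy E (sketch, k = 1).
beta, mu = 1.0, -5.0            # chosen so exp(beta*(E - mu)) >> 1 for E >= 0
E = np.linspace(0.0, 5.0, 6)

x = np.exp(beta * (E - mu))
fermi_dirac = 1.0 / (x + 1.0)
bose_einstein = 1.0 / (x - 1.0)
boltzmann = 1.0 / x             # classical (corrected Boltzmann) limit

for row in zip(E, fermi_dirac, bose_einstein, boltzmann):
    print("E=%.1f  FD=%.3e  BE=%.3e  MB=%.3e" % row)
# All three agree up to relative corrections of order 1/x when x >> 1.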
 
  • Like
Likes hutchphd
  • #74
vanhees71 said:
It's really difficult to discuss this matter if you are not careful. I say the inclusion of ##1/N!## is NOT justified within the classical particle paradigm.

Are you trolling me?

vanhees71 said:
The only way to come to the conclusion that you must include such a factor is the demand that the statistically defined entropy (applied to an ideal gas) must be extensive, as the phenomenological entropy is, and to avoid the Gibbs paradox. There's no reason other than this "empirical" one within the classical particle picture.

I already explained why you are wrong. I guess you are really just trolling...
vanhees71 said:
The reason to include the factor from a modern point of view is quantum theory and the impossibility of tracking individual particles in a system of identical particles due to the Heisenberg uncertainty relation.

The simplest derivation is, of course, to start with quantum statistics and count quantum states of the many-body system (most easily in the "grand canonical approach"), where the indistinguishability of fermions and bosons is worked in from the very beginning and no difficulties with the additional factor needed in the classical derivation show up. It then turns out that classical Boltzmann statistics, including the said factor, is a good approximation if the occupation probability for all single-particle states is small, i.e., if ##\exp(\beta(E-\mu)) \gg 1##.
Would be nice to get some meaningful thoughts, though...
There are several interesting topics of discussion regarding this question. For instance:

The fact that the permutation term ##[N!/(N_1!\,N_2!)]## is needed to account for all accessible states of systems that exchange classical particles. I have mentioned this a few times, but vanhees71 has yet to comment on it.
Or the fact that this simple combinatorial logic has been missing from textbooks on statistical mechanics for more than a century.
Or the fact that including the ##1/N!## term in the entropy of the classical particle model was never a paradox, a fact that some people simply refuse to accept. I am not sure if the reason for this is a difficulty in giving up preconceived ideas, or maybe just trolling fun.
 
  • Skeptical
Likes weirdoguy
  • #75
I'd like to discuss 3 things that I find very surprising in connection with entropy and the mixing paradox.

The first is in connection with the mixing of para- and ortho-helium.
I would say that the difference between these two does not affect the distribution of kinetic energies in a gas.
A gas of para-helium and a gas of ortho-helium should have (almost?) exactly the same distribution of kinetic energies at the same temperature, I would think.
In other words the difference between para and ortho-He is thermodynamically irrelevant.
How is it then that the entropy change can depend on whether you mix two volumes containing the two pure forms in each or whether you mix two volumes already containing mixtures?

The second is in connection with the knowledge of the experimenter that's been mentioned:
Entropy increase is loss of information and you cannot lose information you don't have.
If I give two students the same experiment: Both get a volume that's divided into two halves by a partition wall. In both cases one half volume contains para- and the other ortho-helium.
I tell one student about this, whereas the other student doesn't even know that two forms exist.
If the two students could do an experiment to measure the entropy change due to mixing, would the first one find an entropy increase and the second one not?

Now the third: It's been pointed out that classical systems should be described as limiting cases of quantum systems, so you only really need BE and FD statistics and the correct Boltzmann statistics emerge in the low occupancy limit.
What about if you have a very obviously classical system to start with? Imagine, for example, a large volume with perfectly elastic walls in zero gravity and in this volume billions of tiny, but macroscopic, perfectly elastic and frictionless metal spheres. (Alternatively you can think of clusters or colloids or macromolecules.) These spheres can even be identical and they still would be distinguishable simply because they are trackable with some optical system.
Wouldn't it make sense to describe this system directly in classical terms rather than as a limiting case of quantum statistics?
 
  • Like
Likes vanhees71
  • #76
Philip,
Concerning your first point: ortho- and para-hydrogen can in principle be separated easily, e.g. using their different magnetic moments, and this can be done reversibly. So you could set up a thermodynamic cycle and measure the entropy difference of mixing the two gases.
Concerning your second question: this is the point made in the paper by Jaynes which has been cited repeatedly in this thread. According to him, the entropy of a macrostate depends on how many macroscopic parameters we use to define it. If a student has no information on the difference between ortho- and para-hydrogen, he is discussing different macrostates than a student who has this information and can measure it.
Concerning your third question: I would say that we never describe molecules or anything larger completely in terms of quantum mechanics. The Born-Oppenheimer separation, the separation of nuclear spin, rovibrational states, etc. lead to a description of molecules which is also incompatible with pure Bose or Fermi statistics.
 
  • Like
Likes vanhees71
  • #77
Philip Koeck said:
I'd like to discuss 3 things that I find very surprising in connection with entropy and the mixing paradox.

The first is in connection with the mixing of para- and ortho-helium.
I would say that the difference between these two does not affect the distribution of kinetic energies in a gas.
A gas of para-helium and a gas of ortho-helium should have (almost?) exactly the same distribution of kinetic energies at the same temperature, I would think.
In other words the difference between para and ortho-He is thermodynamically irrelevant.
How is it then that the entropy change can depend on whether you mix two volumes containing the two pure forms in each or whether you mix two volumes already containing mixtures?

The second is in connection with the knowledge of the experimenter that's been mentioned:
Entropy increase is loss of information and you cannot lose information you don't have.
If I give two students the same experiment: Both get a volume that's divided into two halves by a partition wall. In both cases one half volume contains para- and the other ortho-helium.
I tell one student about this, whereas the other student doesn't even know that two forms exist.
If the two students could do an experiment to measure the entropy change due to mixing, would the first one find an entropy increase and the second one not?

Now the third: It's been pointed out that classical systems should be described as limiting cases of quantum systems, so you only really need BE and FD statistics and the correct Boltzmann statistics emerge in the low occupancy limit.
What about if you have a very obviously classical system to start with? Imagine, for example, a large volume with perfectly elastic walls in zero gravity and in this volume billions of tiny, but macroscopic, perfectly elastic and frictionless metal spheres. (Alternatively you can think of clusters or colloids or macromolecules.) These spheres can even be identical and they still would be distinguishable simply because they are trackable with some optical system.
Wouldn't it make sense to describe this system directly in classical terms rather than as a limiting case of quantum statistics?
ad 1) Concerning ortho- and para-He: these are non-identical (composite) particles and as such show mixing entropy if first separated in the two parts of a volume and then allowed to diffuse through each other. They differ in spin (0 vs. 1).

ad 2) I agree with this. In order to measure mixing entropy you of course need the information about the initial and the final state. As I already said, I couldn't find any real-world experiment in the literature demonstrating mixing entropy, though in nearly every theory textbook the mixing entropy is explained and the Gibbs paradox discussed, with the conclusion that you need QUANTUM statistics as the really fundamental argument to introduce the correct counting via the indistinguishability of identical particles.

ad 3) I think there are no "classical systems", only quantum systems, though many-body systems very often behave classically due to decoherence and the fact that a coarse-grained description in terms of "macroscopic observables" is sufficient; these observables then behave classically to overwhelming accuracy.

Of course there are exceptions, usually at low temperatures. There you have collective "quantum behavior" of macroscopic observables (BECs, superconductivity, superfluidity, the specific heat of diamond even at room temperature, ...).
 
  • Like
Likes Philip Koeck
  • #78
Philip Koeck said:
What about if you have a very obviously classical system to start with? Imagine, for example, a large volume with perfectly elastic walls in zero gravity and in this volume billions of tiny, but macroscopic, perfectly elastic and frictionless metal spheres. (Alternatively you can think of clusters or colloids or macromolecules.) These spheres can even be identical and they still would be distinguishable simply because they are trackable with some optical system.
Wouldn't it make sense to describe this system directly in classical terms rather than as a limiting case of quantum statistics?

There is something called Edwards entropy, the entropy of grains packed in a volume. Different from the traditional entropy, it depends on the volume and the number of grains, but not on energy. As you mention, this is a very obviously classical system to start with. People working on this problem say that the entropy is the logarithm of the number of accessible states divided by ##N!##.
See for instance
Asenjo-Andrews, D. (2014). Direct computation of the packing entropy of granular materials (Doctoral thesis). https://doi.org/10.17863/CAM.16298
Accessible at
https://www.repository.cam.ac.uk/handle/1810/245871.

There, in section 6.2.7, you may read:

"When we plot ##S^∗## as a function of ##N## (Figure 6.16), we note (again) that its ##N-##dependence is not very linear, in other words, this form of ##S^∗## is also not extensive. This should come as no surprise because also in equilibrium statistical mechanics, the partition function of a system of N distinct objects is not extensive. We must follow Gibbs’ original argument about the equilibrium between macroscopically identical phases of classical particles and this implies that we should subtract ln ##N!## from ##S^∗##. We note that there is much confusion in the literature on this topic, although papers by van Kampen [93] and by Jaynes [39] clarify the issue. Indeed, if we plot ##S^∗ − \ln N!## versus ##N## we obtain an almost embarrassingly straight line that, moreover, goes through the origin. Previous studies on the entropy of jammed systems, such as the simulations of Lubachevsky et al. [55] presented in Chapter 2, ignored the ##N!## term. "

The author could be a little clearer. For instance, the sentence "the partition function of a system of N distinct objects is not extensive" gives the impression that there is some paradox in this problem. In fact, for permutable objects the entropy is ##S=k\ln(W/N!)## and the free energy is ##F=-kT \ln(Z/N!)##. So, in the same way that ##\ln W## for a system of identical classical objects is not extensive, ##\ln Z## is also not extensive.

Note that Boltzmann's principle was that ##S=k \ln W##. However, this ##W## should be the number of accessible states for an isolated system composed of subsystems that exchange particles (or grains). In my previous posts I explained why Boltzmann's principle leads to the inclusion of the ##1/N!## term by combinatorial logic.

A curious thing is that Lubachevsky did not include the ##1/N!##. I would say that he was misled by what one reads in most textbooks, which wrongly suggest that there is no reason to include this term in the classical model.
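The quoted "embarrassingly straight line" is easy to reproduce in a toy version (not the granular-packing data; I simply take ##W=V^N## for ##N## distinguishable objects at fixed density, ##V=vN##, with an arbitrary per-object volume ##v##):

Python:
import numpy as np
from math import lgamma

# Toy extensivity check: S* = ln W with W = V**N and V = v*N (fixed density).
v = 2.0
N = np.arange(10, 2001, 10)
S_star = N * np.log(v * N)                              # without the 1/N!
S_corr = S_star - np.array([lgamma(n + 1) for n in N])  # ln(W/N!)

# S*/N keeps growing with N (not extensive); S_corr/N levels off at
# ln(v) + 1 by Stirling, i.e. S_corr is an (almost) straight line in N.
print("S*/N     at N=100 and N=2000:", S_star[9] / 100, S_star[-1] / 2000)
print("S_corr/N at N=100 and N=2000:", S_corr[9] / 100, S_corr[-1] / 2000)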
 
  • #79
vanhees71 said:
in nearly every theory textbook the mixing entropy is explained and the Gibbs paradox discussed, with the conclusion that you need QUANTUM statistics as the really fundamental argument to introduce the correct counting via the indistinguishability of identical particles.

That is a sad truth: nearly no theory textbook presents the most precise explanation for the inclusion of the ##1/N!## term.

vanhees71 said:
ad 3) I think there are no "classical systems", only quantum systems, though many-body systems very often behave classically due to decoherence and the fact that a coarse-grained description in terms of "macroscopic observables" is sufficient; these observables then behave classically to overwhelming accuracy.
There are classical and quantum models of real systems. I do not think that a quantum model would be any good for describing a pack full of grains, or a galaxy of stars.

By the way, vanhees71, can you please tell me: do you think that the permutation term ##[N!/(N_1!\,N_2!)]## should be included when counting the number of states of an isolated system divided into two parts that can exchange classical particles? I would like to know your answer to that.
 
  • #80
The correct answer is that you have to take out the ##N!##; this nobody doubts. We have only discussed whether this is justified within a strictly classical model (my opinion is: no), or whether you need to argue either with the extensivity condition for entropy (which was how Boltzmann, Gibbs et al. argued at the time), i.e., by a phenomenological argument (i.e., you introduce another phenomenological principle into classical statistical physics), or with quantum mechanics and the indistinguishability of identical particles, which is part of the foundations of quantum theory.
 
  • Like
Likes hutchphd
  • #81
vanhees71 said:
The correct answer is that you have to take out the ##N!##; this nobody doubts. We have only discussed whether this is justified within a strictly classical model (my opinion is: no), or whether you need to argue either with the extensivity condition for entropy (which was how Boltzmann, Gibbs et al. argued at the time), i.e., by a phenomenological argument (i.e., you introduce another phenomenological principle into classical statistical physics), or with quantum mechanics and the indistinguishability of identical particles, which is part of the foundations of quantum theory.
It is not a question of opinion. I presented a logical reason for the inclusion of this term under the classical model. I would imagine that to state that there is no justification under classical physics for the ##1/N!## you should refute my argument.
(I say my argument, but van Kampen attributes the explanation to Ehrenfest.)

This term comes from counting the possible permutations of classical particles between systems that can exchange particles. One does not need to appeal to quantum mechanics. Van Kampen writes:

In statistical mechanics this dependence is obtained by inserting a factor ##1/N!## in the partition function. Quantum mechanically this factor enters automatically and in many textbooks that is the way in which it is justified. My point is that this is irrelevant: even in classical statistical mechanics it can be derived by logic — rather than by the somewhat mystical arguments of Gibbs and Planck. Specifically I take exception to such statements as: "It is not possible to understand classically why we must divide by N! to obtain the correct counting of states", and: "Classical statistics thus leads to a contradiction with experience even in the range in which quantum effects in the proper sense can be completely neglected".

These two quotes that van Kampen criticizes are from Huang (1963) and Münster (1969).
 
  • #82
autoUFC said:
My point is that this is irrelevant: even in classical statistical mechanics it can be derived by logic
Thanks for this discussion, it has been enlightening.
I disagree with this characterization. Within classical mechanics the ##1/N!## is a rubric pasted on ex post facto to make the statistical theory work out. Statistical mechanics should not require a redefinition of how we count items!
Until this discussion I hadn't fully appreciated how directly this pointed to the need for something that looks a lot like Quantum Mechanics.
 
  • Like
Likes vanhees71 and DrClaude
  • #83
hutchphd said:
Thanks for this discussion, it has been enlightening.
I disagree with this characterization. Within classical mechanics the ##1/N!## is a rubric pasted on ex post facto to make the statistical theory work out. Statistical mechanics should not require a redefinition of how we count items!
Until this discussion I hadn't fully appreciated how directly this pointed to the need for something that looks a lot like Quantum Mechanics.
Well...
I believe that to disagree you should refute my argument. Can you tell me the following:
How many states are available to an isolated system that is composed of two subsystems, 1 and 2, that exchange classical particles? In your answer you should consider that subsystem 1 has ##W_1(E_1,N_1,V_1)## accessible states and subsystem 2 has ##W_2(E_2,N_2,V_2)## accessible states.

My answer is: the number of states ##W## available to the isolated system composed of the two subsystems of classical particles is
##W=W_1 \times W_2 \times [N!/(N_1!\,N_2!)]##

Is my answer wrong? If it is not wrong, can you see the factors ##1/N_1!## and ##1/N_2!##? These factors are NOT a rubric pasted on ex post facto to make the statistical theory work out. In fact, I would say that not including them would be an illogical redefinition of how we count items!

I really feel I am being trolled, as people keep disagreeing with me but without addressing my arguments.
 
  • Skeptical
Likes weirdoguy
  • #84
With all due respect, I will answer as I choose. Several people (smarter than I) have clearly told you that your arithmetic is fine but there is no classical reason to count that way, other than that it gives you the obviously desired answer. That does not make it "logical".

You are not being trolled. Perhaps you are not trying hard enough to understand?
 
  • Like
Likes vanhees71, DrClaude and weirdoguy
  • #85
hutchphd said:
With all due respect, I will answer as I choose. Several people (smarter than I) have clearly told you that your arithmetic is fine but there is no classical reason to count that way, other than that it gives you the obviously desired answer. That does not make it "logical".

You are not being trolled. Perhaps you are not trying hard enough to understand?
Perhaps you are not trying hard enough to understand. You say there is no classical reason to count this way. What other way of counting exists? As far as I know, counting is neither classical nor quantum. I do not count this way to obtain a desired answer; what do you mean by that anyway?

If you were to count in your way, what would your result be? Can you please tell me?
 
  • #86
Stephen Tashi said:
I don't know which thread participants have read Jaynes' paper http://bayes.wustl.edu/etj/articles/gibbs.paradox.pdf , but the characterization of that paper as "Gibbs made a mistake" is incorrect. The paper says that Gibbs explained the situation correctly, but in an obscure way.

I have read Jaynes again. He says that Gibbs probably presented the correct explanation in an early work, only that he phrased his thoughts in a confusing way. As Jaynes writes about Gibbs' text:
"The decipherment of this into plain English required much effort, sustained only by faith in Gibbs; but eventually there was the reward of knowing how Champollion felt when he realized that he had mastered the Rosetta stone."

But Jaynes places the blame on those who followed:
"In particular, Gibbs failed to point out that an "integration constant" was not an arbitrary constant, but an arbitrary function. But this has, as we shall see, nontrivial physical consequences. What is remarkable is not that Gibbs should have failed to stress a fine mathematical point in almost the last words he wrote; but that for 80 years thereafter all textbook writers (except possibly Pauli) failed to see it."

I have to say, however, that Jaynes is also quite unclear. One does not find in Jaynes the term ##[N! /( N_1! N_2!)]## that I see as the key to the problem. Van Kampen is a bit better, but he also does not stress the mathematical point clearly: the binomial coefficient appears in an unnumbered equation between eqs. (9) and (10). In my opinion, the best explanation is in the work by Swendsen. Also, the work by Hjalmar, regarding the buckyballs, has the explanation with this binomial coefficient.
 
  • #87
Philip Koeck said:
This factor 1/N! is only correct for low occupancy, I believe. One has to assume that there is hardly ever more than one particle in the same state or phase space cell. Do you agree?
What happens if occupancy is not low?

In classical systems two particles cannot be in the same state, as the state is determined by a point in a continuous phase space. If you consider phase cells, you can choose them so small that no two particles will ever occupy the same cell.
There is also Maxwell-Boltzmann statistics:
https://en.m.wikipedia.org/wiki/Maxwell–Boltzmann_statistics
As I see it, this would be a statistics for quantum-like particles (states are assumed to be discrete, with possibly multiple occupancy) that are permutable (if two change places you get a different state). As you correctly state, in the limit where occupancy is low, MB statistics agrees with the quantum statistics.
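That low-occupancy agreement can be illustrated directly at the level of state counting (toy numbers of my own, a single cell of ##G## states): fermions give ##\binom{G}{N}##, bosons give ##\binom{G+N-1}{N}##, and corrected Boltzmann counting gives ##G^N/N!##:

Python:
from math import comb, lgamma, log

# Logarithms of the number of ways to put N particles into G states.
def ln_fd(G, N): return log(comb(G, N))             # fermions
def ln_be(G, N): return log(comb(G + N - 1, N))     # bosons
def ln_mb(G, N): return N * log(G) - lgamma(N + 1)  # ln(G**N / N!)

N = 50
for G in (100, 1000, 100000):
    print(G, round(ln_fd(G, N), 2), round(ln_be(G, N), 2), round(ln_mb(G, N), 2))
# As the occupancy N/G -> 0 the three counts converge, as stated above.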
 
  • Like
Likes Philip Koeck
  • #88
vanhees71 said:
I must admit, though looking for such experiments in Google scholar, I couldn't find any. I guess it's very hard to realize such a "Gibbs-paradox setup" in the real world and measuring the entropy change.
Pauli in his lecture notes on thermodynamics describes such an experiment which does not use semipermeable membranes but temperature changes, to freeze out the components.

You could also think of using, e.g. specific antibodies attached to a "fishing rod" i.e. a carrier. By changing the temperature or buffer conditions, you can reversibly bind or remove specific molecules. The occupation of the binding sites will depend on the concentration of the molecules which changes upon mixing. So binding to the fishing rods will occur at lower temperatures in the mixture as compared to the unmixed solutions.
 
  • Like
Likes vanhees71
  • #89
DrDu said:
You could also think of using, e.g. specific antibodies attached to a "fishing rod" i.e. a carrier. By changing the temperature or buffer conditions, you can reversibly bind or remove specific molecules. The occupation of the binding sites will depend on the concentration of the molecules which changes upon mixing. So binding to the fishing rods will occur at lower temperatures in the mixture as compared to the unmixed solutions.
I assume that you know this to be the basis for an entire class of "lateral flow assays" (see ELISA), and these can provide very high sensitivity optically. I am not sure whether these could be made easily reversible (but I'm certain this reflects only my personal lack of knowledge). Interesting thoughts.
 
  • #90
vanhees71 said:
We have only discussed whether this is justified within a strictly classical model (my opinion is: no), or whether you need to argue either with the extensivity condition for entropy (which was how Boltzmann, Gibbs et al. argued at the time), i.e., by a phenomenological argument (i.e., you introduce another phenomenological principle into classical statistical physics)

If I am understanding Jaynes' argument correctly, he is arguing that you can justify the ##1 / N!## factor in classical physics using a phenomenological argument. At the bottom of p. 2 of his paper, he says:

...in the phenomenological theory Clausius defined entropy by the integral of ##dQ / T## over a reversible path; but in that path the size of a system was not varied, therefore the dependence of entropy on size was not defined.

He then references Pauli (1973) for a correct phenomenological analysis of extensivity, which would then apply to both the classical and quantum cases. He discusses this analysis in more detail in Section 7 of his paper.
 
  • #91
autoUFC said:
I really feel I am being trolled, as people keep disagreeing with me but without addressing my arguments.

I see no indication that anyone in this thread is trolling.

autoUFC said:
You say there is no classical reason to count this way. What other way of counting exists?

A way that takes into account what happens if ##N## changes. If you only consider processes in which ##N## is constant, which is all you have considered in your posts, you cannot say anything about the extensivity of entropy. To even address that question at all, you need to consider processes in which ##N## changes. That is the key point Jaynes makes in the passage from his paper that I quoted in my previous post.

autoUFC said:
One does not find in Jaynes the term ##[N! /( N_1! N_2!)]## that I see as the key to the problem.

Jaynes in Section 7 of his paper is discussing a general treatment of extensivity (how entropy varies with ##N##), not the particular case of two types of distinguishable particles that you are considering. His general analysis applies to your case and for that case it will give the term you are interested in.
 
  • Informative
Likes hutchphd
  • #92
PeterDonis said:
If I am understanding Jaynes' argument correctly, he is arguing that you can justify the ##1 / N!## factor in classical physics using a phenomenological argument. At the bottom of p. 2 of his paper, he says:
He then references Pauli (1973) for a correct phenomenological analysis of extensivity, which would then apply to both the classical and quantum cases. He discusses this analysis in more detail in Section 7 of his paper.
I checked Pauli vol. III. There is only the treatment of ideal-gas mixtures within phenomenological thermodynamics, and there you start from the correct extensive entropy. After the usual derivation of the entropy change when letting two different gases diffuse into each other,
$$\bar{S}-S=R [n_1 \ln(V/V_1)+n_2 \ln(V/V_2)] > 0,$$
he states:

"The increase in entropy, ##\bar{S}-S##, is independent of the nature of the two gases. They must simply be different. If both gases are the same, then the change in entropy is zero; that is
$$\bar{S}-S=0.$$
We see, therefore, that there is no continuous transition between two gases. The increase in entropy is always finite, even if the gases are only infinitesimally different. However, if the two gases are the same, then the change in entropy is zero. Therefore, it is not allowed to let the difference between two gases gradually vanish. (This is important in quantum theory.)"

The issue with the counting in statistical mechanics is not discussed, but I think the statement is very clear that there is mixing entropy for different gases and (almost trivially) none for identical gases. I still don't see where in all this there is a justification within classical statistics for the factor ##1/N!## in the counting of "complexions", other than the phenomenological input that the entropy must be extensive. From a microscopic point of view there is no justification other than the indistinguishability due to quantum theory. I think you need quantum theory to justify the factor ##1/N!## in the counting of complexions, and you also need quantum theory to get a well-defined entropy, because you need ##h=2 \pi \hbar## as the phase-space-volume measure to count phase-space cells. Then you also don't get the horrible expressions for the classical entropy with dimensionful arguments of logarithms. That's all I'm saying.
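Just to make Pauli's quoted formula concrete, a minimal numerical sketch (the example values are mine):

Python:
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def mixing_entropy(n1, V1, n2, V2):
    # Pauli's formula quoted above; it applies only to two *different*
    # gases. For identical gases Delta S = 0, a separate discrete case,
    # not the limit of this expression.
    V = V1 + V2
    return R * (n1 * np.log(V / V1) + n2 * np.log(V / V2))

# One mole of each gas in two equal half-volumes:
print(mixing_entropy(1.0, 1.0, 1.0, 1.0))  # ~11.53 J/K
print(2 * R * np.log(2))                   # the same: the familiar N k ln 2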
 
  • #93
vanhees71 said:
I checked Pauli vol. III. There is only the treatment of ideal-gas mixtures within phenomenological thermodynamics, and there you start from the correct extensive entropy.

Yes, Jaynes explicitly says that Pauli did not prove that entropy must be extensive, he just assumed it and showed what the phenomenology would have to look like if it was true.

vanhees71 said:
there is no continuous transition between two gases. The increase in entropy is always finite, even if the gases are only infinitesimally different

What does "infinitesimally different" mean? There is no continuous parameter that takes one gas into the other. You just have two discrete cases: one type of gas particle, or two types of gas particle mixed together.

Jaynes' treatment seems to me to correctly address this as well. Let me try to summarize his argument:

(1) In the case where we have just one type of gas particle, removing the barrier between the two halves of the container does not change the macrostate at all. We still have one type of gas particle, spread through the entire container. So macroscopically, the process of removing the barrier is easily reversible: just reinsert the barrier. It is true that particles that were confined to the left half of the container before are now allowed to be in the right half, and vice versa, and reinserting the barrier does not put all of the particles that were confined to each half back where they were originally; the precise set of particles that are confined to each half will be different after the barrier is reinserted, as compared to before it was removed. But since all of the particles are of the same type, we have no way of distinguishing the state before the barrier was removed from the state after the barrier was reinserted, so there is no change in entropy.

(2) In the case where we have two types of gas particle, removing the barrier does change the macrostate; now we have to allow for particles of both types being in both halves of the container, instead of each type being confined to just one half. This is reflected in the fact that the process of removing the barrier is now not reversible: we can't just reinsert the barrier and get back the original macrostate. To get back the original macrostate, we would have to pick out all the particles that were in the "wrong" half of the container and move them back to where they were before the barrier was removed. The mixing entropy ##N k \ln 2## is a measure of how much information is required to perform that operation, which will require some external source of energy and will end up increasing the entropy of that external source by at least that much (for example by forcing heat to flow from a hot reservoir to a cold reservoir and decreasing the temperature difference between them).

There is no continuum between these two alternatives; they are discrete. Alternative 2 obtains if there is any way of distinguishing the two types of gas particle available to us. It doesn't depend on any notion of "how different" they are.

vanhees71 said:
I still don't see where in all this there is a justification within classical statistics for the factor ##1 / N!## in the counting of "complexions"

Jaynes, in section 7 of his paper, shows how this factor arises, in appropriate cases (as he notes, entropy is not always perfectly extensive), in a proper analysis that includes the effects of changing ##N##. As he comments earlier in his paper (and as I have referenced him in previous posts), if your analysis only considers the case of constant ##N##, you can't say anything about how entropy varies as ##N## changes. To address the question of extensivity of entropy at all, you have to analyze processes where ##N## changes.

vanhees71 said:
you need ##h = 2 \pi \hbar## as the phase-space-volume measure to count phase-space cells. Then you also don't get the horrible expressions for the classical entropy with dimensionful arguments of logarithms.

Jaynes addresses this as well: he notes that, in the phenomenological analysis, a factor arises which has dimensions of action, but there is no explanation for where it comes from. I agree you need quantum mechanics to explain where this factor comes from.
 
  • Like
Likes dextercioby, hutchphd and vanhees71
  • #94
PeterDonis said:
What does "infinitesimally different" mean? There is no continuous parameter that takes one gas into the other. You just have two discrete cases: one type of gas particle, or two types of gas particle mixed together.
Don't ask me; that's what Pauli wrote. Of course there's no continuous parameter, but that's also not explainable within classical mechanics. There's no consistent classical model of matter. It's not by chance that quantum theory was discovered because of the inconsistencies of classical statistical physics. It all started with Planck's black-body radiation law: classical statistics of the e.m. field leads to the Rayleigh-Jeans catastrophe, so there was no way out other than Planck's "act of desperation". Other examples are the specific heats at low temperature, the impossibility of deriving Nernst's 3rd law from classical statistics, and last but not least the Gibbs paradox discussed here.
PeterDonis said:
Jaynes' treatment seems to me to correctly address this as well. Let me try to summarize his argument:
Sure, I fully agree with Jaynes' treatment, and of course there's no paradox if you use the information-theoretical approach and the indistinguishability of identical particles from quantum theory.

I just don't think that there is any classical justification for the crucial factor ##1/N!##, because classical mechanics tells you that you can follow any individual particle's trajectory in phase space. That this is practically impossible is not an argument for this factor, because you have to count different microstates compatible with the macrostate under consideration.

It's anyway a useless discussion, because today we know the solution of all these quibbles: It's QT!
 
  • Like
Likes hutchphd
  • #95
vanhees71 said:
classical mechanics tells you that you can follow any individual particle's trajectory in phase space. That this is practically impossible is not an argument for this factor, because you have to count different microstates compatible with the macrostate under consideration.

This would imply that in classical mechanics there can never be any such thing as a gas with only one type of particle, or even a gas with just two types: a gas containing ##N## particles would have to have ##N## types of particles, since each individual particle must be its own type: each individual particle is in principle distinguishable from every other one, and microstates must be counted accordingly.

If this is the case, then there can't be any Gibbs paradox in classical mechanics because you can't even formulate the scenario on which it is based.
 
  • Like
Likes dextercioby, hutchphd and vanhees71
  • #96
I have a question related to this topic. Since the canonical ensemble corresponds to an open system, and the micro-canonical ensemble corresponds to a closed system, is it true that the idea of density matrix is the appropriate way to describe canonical ensembles and not micro-canonical ensembles?
 
  • #97
PeterDonis said:
If this is the case, then there can't be any Gibbs paradox in classical mechanics because you can't even formulate the scenario on which it is based.
Not sure about that.
If entropy is only a function of macroscopic variables then you don't even take into account which particle is which. So it shouldn't matter whether they are distinguishable or not.
Even for distinguishable particles you would conclude that mixing two identical gases will not increase entropy, wouldn't you?
 
  • Like
Likes vanhees71
  • #98
PeterDonis said:
This would imply that in classical mechanics there can never be any such thing as a gas with only one type of particle, or even a gas with just two types: a gas containing ##N## particles would have to have ##N## types of particles, since each individual particle must be its own type: each individual particle is in principle distinguishable from every other one, and microstates must be counted accordingly.

If this is the case, then there can't be any Gibbs paradox in classical mechanics because you can't even formulate the scenario on which it is based.
Well, yes. That's an extreme formulation, but I'd say it's correct. It shows only once more that classical mechanics is not the correct description of matter, because it contradicts the observations. You need quantum theory!
 
  • #99
love_42 said:
I have a question related to this topic. Since the canonical ensemble corresponds to an open system, and the micro-canonical ensemble corresponds to a closed system, is it true that the idea of density matrix is the appropriate way to describe canonical ensembles and not micro-canonical ensembles?
All quantum states are described by statistical operators, aka a density matrix.

The difference between the standard ensembles is, under which constraints you maximize the Shannon-Jaynes-von Neumann entropy to find the equilibrium statistical operator:

Microcanonical: You have a closed system and know only the values of the conserved quantities (the 10 from spacetime symmetry: energy, momentum, angular momentum, and center-of-mass velocity; and maybe some conserved charges like baryon number, strangeness, electric charge).

Canonical: The considered system is part of a larger system but you can exchange (heat) energy between the systems. The energy of the subsystem is known only on average.

Grand-Canonical: The considered system is part of a larger system but you can exchange (heat) energy and particles between the systems. The energy and conserved charges of the subsystem are known only on average.
 
  • Like
Likes dextercioby
  • #100
Philip Koeck said:
Not sure about that.
If entropy is only a function of macroscopic variables then you don't even take into account which particle is which. So it shouldn't matter whether they are distinguishable or not.
Even for distinguishable particles you would conclude that mixing two identical gases will not increase entropy, wouldn't you?
True, but this is the phenomenological approach: you don't use statistical physics to derive the phenomenological thermodynamic quantities to begin with, so no problem with counting occurs. That's the strength of phenomenological thermodynamics: you take a few simple axioms based on observations and describe very many phenomena very well. That's why the physicists of the 18th and 19th centuries came so far with thermodynamics that they thought there were only a "few clouds" on the horizon, and that these would be overcome with more accurate observations and better axioms.

Together with the statistical approach a la Boltzmann, Maxwell, Gibbs, and later Planck and Einstein, one had to learn that what was really needed was a "revolutionary act of desperation" and quantum theory had to be developed.
 
  • Like
Likes Philip Koeck
  • #101
Philip Koeck said:
If entropy is only a function of macroscopic variables then you don't even take into account which particle is which. So it shouldn't matter whether they are distinguishable or not.

This is basically the point of Jaynes' phenomenological discussion in his paper. At the phenomenological level, it is of course correct.

However, in the post I was responding to, @vanhees71 was not talking about the phenomenological level. He was talking about a statistical treatment that looks at and counts microstates. At that level, no, entropy is not only a function of macroscopic variables; you have to know the microstates and their distribution.
 
  • Like
Likes vanhees71 and Philip Koeck
  • #102
Of course you have to know the microstates but not necessarily the distribution.

The entropy is a functional of the distribution, and this view makes it possible to use entropy in its information-theoretical foundation as a way to deduce the probabilities (or probability distribution for continuous observables) given the available information by the maximum entropy principle. It's a way to deduce a probability distribution of "least prejudice" based on the available information, i.e., it maximizes entropy as the measure of the "missing information" given a probability distribution under the constraint to be compatible with your knowledge about the state.

For the special case of equilibrium the only knowledge is about the conserved quantities of the system, and the corresponding constraints can be described also in different ways leading to the microcanonical, canonical, and grand canonical ensembles, which are all phenomenologically equivalent (only!) in the "thermodynamic limit", i.e., for really macroscopic systems with very many particles.
 
  • Like
Likes Philip Koeck
  • #103
I'm trying to understand the different levels of macro and micro that seem to be involved here.

Just checking whether people agree with my thinking.

I'll take the microcanonical derivation of BE or FD-statistics for example.
In my text on ResearchGate (mentioned earlier) I use W for the number of different ways a particular distribution of particles among energy levels can be realized.
In equilibrium this number W is maximized under the constraint of constant energy and possibly constant number of particles.
I believe this should mean that for a given system (with given energy levels and single particle states on each level) the distribution among energy levels with the largest W depends only on the total energy and number of particles.
So at equilibrium entropy is actually completely defined by macroscopic variables.
(Here I assume that there is a unique relation between S and W, such as S = k lnW.)

On the other hand there are many other distributions that give the same total energy.
They all have a smaller W and they correspond to non-equilibrium states of the system.
Is statistical entropy also defined as k lnW for these distributions?

Then there is a level that's even more "micro": I could also count how many particles are in each single particle state belonging to each energy level, not just how many there are on each level.
Is this ever used in a definition of entropy?

(Cautiously put in brackets: For distinguishable particles I could even ask which particle is in which single particle state.)
 
  • #104
I think the logical argument is rather to start with counting for a general non-equilibrium state and then derive the equilibrium case from the maximum-entropy principle under the constraints of constant energy and particle number.

For an ideal gas of fermions the counting is like this:

To be able to count you first have to put the particles in a finite box, but to have proper momentum operators you should impose periodic rather than rigid boundary conditions. Then the number of single-particle states in a phase-space cell turns out to be (counting the number of momentum eigenstates in a momentum volume ##\mathrm{d}^3 p## around ##\vec{p}_j##)
$$G_j=g \frac{\mathrm{d}^3 x \mathrm{d}^3p}{(2 \pi \hbar)^3},$$
where ##g=(2s+1)## is the degeneracy due to spin.

Now consider the number of possibilities to put ##N_j## particles in phase-space cell ##j##. For fermions ##N_j \leq G_j## since you can put only one particle into each one-particle state. Thus the number of possibilities is the same as drawing ##N_j## balls out of an urn containing ##G_j## labelled balls without repetition, i.e.,
$$\Gamma_j=\binom{G_j}{N_j}=\frac{G_j!}{N_j!(G_j-N_j)!}.$$
The entropy is given by
$$S=k \sum_j \ln \Gamma_j \simeq k \sum_j [G_j \ln G_j -N_j \ln N_j-(G_j-N_j)\ln(G_j-N_j)].$$
Defining
$$f_j=\frac{N_j}{G_j}, \quad N_j =G_j f_j,$$
you get
$$S=-k \sum G_j [f_j \ln f_j+(1-f_j) \ln(1-f_j)] = -k \int_{\mathbb{R}^6} \frac{\mathrm{d}^3 x \mathrm{d}^3 p}{(2 \pi \hbar)^3} g [f \ln f+(1-f) \ln(1-f)].$$
To find the equilibrium distribution it's simplest to use the grand-canonical ensemble and maximize the entropy under the constraints of given average energy and particle number, using ##E(\vec{p})=\vec{p}^2/(2m)## (or ##E(\vec{p})=c \sqrt{m^2 c^2+\vec{p}^2}## for relativistic gases):
$$U=\int_{\mathbb{R}^6} \frac{\mathrm{d}^3 \vec{x} \mathrm{d}^3 \vec{p}}{(2 \pi \hbar)^3} g E(\vec{p}) f(\vec{x},\vec{p}),$$
$$N=\int_{\mathbb{R}^6} \frac{\mathrm{d}^3 \vec{x} \mathrm{d}^3 \vec{p}}{(2 \pi \hbar)^3} g f(\vec{x},\vec{p}).$$
With the Lagrange multipliers ##\lambda_1## and ##\lambda_2## you get
$$\delta S-\lambda_1 \delta U -\lambda_2 \delta N=-k \int_{\mathbb{R}^6} \frac{\mathrm{d}^3 \vec{x} \mathrm{d}^3 \vec{p}}{(2 \pi \hbar)^3} g\, \delta f \left[\ln f - \ln (1-f) + \lambda_1 E(\vec{p}) + \lambda_2 \right] \stackrel{!}{=}0.$$
Since this must hold for all variations ##\delta f## we find
$$\ln[f/(1-f)]=-\lambda_1 E(\vec{p})-\lambda_2$$
or
$$\frac{f}{1-f}=\exp(-\lambda_1 E-\lambda_2) \; \Rightarrow\; f=\frac{1}{\exp(\lambda_1 E(\vec{p})+\lambda_2)+1}.$$
The usual analysis of the resulting thermodynamics gives ##\lambda_1=1/(k T)## and ##\lambda_2=-\mu/(k T)##, where ##\mu## is the chemical potential and ##T## the (absolute) temperature of the gas,
$$f=\frac{1}{\exp[(E(\vec{p})-\mu)/(k T)]+1},$$
which is the Fermi-Dirac distribution of an ideal gas, as it should be.
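As a numerical cross-check of the variational step (a sketch with ##k=1## and arbitrary values of the multipliers), one can maximize the per-cell integrand on a grid and compare with the Fermi-Dirac form:

Python:
import numpy as np

# For each energy E, maximize over f in (0,1) the per-cell integrand
#   h(f) = -[f ln f + (1-f) ln(1-f)] - (lam1*E + lam2)*f
# and compare with f* = 1/(exp(lam1*E + lam2) + 1)   (k = 1 units).
lam1, lam2 = 1.0, -2.0
f = np.linspace(1e-6, 1 - 1e-6, 200001)

for E in (0.0, 1.0, 2.0, 4.0):
    h = -(f * np.log(f) + (1 - f) * np.log(1 - f)) - (lam1 * E + lam2) * f
    f_grid = f[np.argmax(h)]
    f_fd = 1.0 / (np.exp(lam1 * E + lam2) + 1.0)
    print("E=%.1f  grid maximum at f=%.6f  FD formula f=%.6f" % (E, f_grid, f_fd))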
 
  • Like
Likes dextercioby, etotheipi and Philip Koeck
  • #105
PeterDonis said:
Yes, Jaynes explicitly says that Pauli did not prove that entropy must be extensive, he just assumed it and showed what the phenomenology would have to look like if it was true.
Jaynes, in section 7 of his paper, shows how this factor arises, in appropriate cases (as he notes, entropy is not always perfectly extensive), in a proper analysis that includes the effects of changing ##N##. As he comments earlier in his paper (and as I have referenced him in previous posts), if your analysis only considers the case of constant ##N##, you can't say anything about how entropy varies as ##N## changes. To address the question of extensivity of entropy at all, you have to analyze processes where ##N## changes.

I believe my argument addresses the case where N changes, as it deals with two systems that exchange particles.

Considering a simpler problem may help illustrate my point. There are two bags: one has ##V_1## pockets and the other has ##V_2## pockets. There are ##N## balls, each with a different number written on it, so that they are distinguishable. Someone knows at every moment in which pocket each ball is. You play a game where at each step you choose one of the balls at random and move it to one of the pockets, also chosen at random. After a large number of steps of the game, what is the most probable number ##N_1## of balls in the bag with ##V_1## pockets?

I guess most people would agree that simple intuition suggests ##N_1=N V_1/(V_1+V_2)##. But how can one show that mathematically?

There will be ##\Omega_1=V_1^{N_1}## ways to put ##N_1## balls in one bag and ##\Omega_2=V_2^{N_2}## ways to put ##N_2## balls in the other bag.

If you are one of those who believe that there is no logical reason to divide by the factorial, since the balls are distinguishable, I guess you believe that equilibrium maximizes ##\Omega_1(N_1) \times \Omega_2(N_2)##.
If you apply the logarithm and differentiate with respect to ##N_1##, considering that ##N_1+N_2=N##, you get ##\ln(V_1)=\ln(V_2)## as the equilibrium condition. Since the numbers ##V_1## and ##V_2## are arbitrary, that would suggest that there is no equilibrium. One may believe that this is a paradox. One may insist that the only possible explanation for the existence of an equilibrium is that the balls are in fact indistinguishable, so that a permutation does not count as a different state. In fact, there is no paradox, only bad math.

Since the balls are distinguishable, the number of states for the system composed of the two bags is
##\Omega_1(N_1) \times \Omega_2(N_2) \times [N!/(N_1!\,N_2!)]##.
Note that the term ##[N!/(N_1!\,N_2!)]## is not included in order to obtain a consistent equilibrium. It is included to correctly count all possible states of the system. I am still waiting for those who claim that there is no logical reason to include this term to explain how else they would count states...

Including the necessary ##[N!/(N_1!\,N_2!)]##, applying the logarithm, and differentiating with respect to ##N_1##, you get ##\ln(V_1/N_1)=\ln(V_2/N_2)##. Of course, you have to remember that ##N_1+N_2=N##, and consider that both ##N_1## and ##N_2## are large enough to use Stirling's approximation.

My conclusion: when one uses proper combinatorial logic, one sees that, for a system of distinguishable elements, equilibrium happens when
##\partial\ln[\Omega_1(N_1)/N_1!]/\partial N_1=\partial\ln[\Omega_2(N_2)/N_2!]/\partial N_2##.

Most people would agree that the definition of entropy is such that equilibrium corresponds to
##\partial S_1(N_1)/\partial N_1=\partial S_2(N_2)/\partial N_2##, and that leads to
##S_1=k \ln[\Omega_1(N_1)/N_1!]## and ##S_2=k \ln[\Omega_2(N_2)/N_2!]##.

Note also that this argument is consistent with ##S= k \ln(W)##, only that ##S## is the entropy of the system composed of the two bags, and ##W## is proportional to the number of accessible states of this composite system.
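For the record, the bag game described above is easy to simulate (a quick Monte-Carlo sketch; the values of ##V_1##, ##V_2## and ##N## are arbitrary). The histogram of ##N_1## peaks exactly where the weighting ##\Omega_1 \Omega_2 [N!/(N_1!\,N_2!)]## predicts:

Python:
import random
from math import comb

# Two bags with V1 and V2 pockets, N labeled balls; each step moves a
# random ball to a random pocket. Histogram the number N1 in bag 1.
random.seed(1)
V1, V2, N, steps = 3, 7, 20, 200000
pocket = [random.randrange(V1 + V2) for _ in range(N)]  # pocket of each ball

hist = [0] * (N + 1)
for _ in range(steps):
    pocket[random.randrange(N)] = random.randrange(V1 + V2)
    hist[sum(1 for p in pocket if p < V1)] += 1

print("simulated most probable N1:", max(range(N + 1), key=hist.__getitem__))
print("N*V1/(V1+V2) =", N * V1 / (V1 + V2))
# The stationary weights are Omega1*Omega2*[N!/(N1!N2!)]:
exact = [V1**n * V2**(N - n) * comb(N, n) for n in range(N + 1)]
print("analytic most probable N1:", max(range(N + 1), key=exact.__getitem__))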

This point is stressed by Swendsen, and I quote:

"Although Boltzmann never addressed Gibbs’ Paradox directly, his approach to statistical mechanics provides a solid basis for its resolution. Boltzmann defined the entropy in terms of the probability of the macroscopic state of a composite system. Although the traditional definition of the entropy is often attributed to Boltzmann, this attribution is not correct. The equation on Boltzmann’s tombstone, ##S = k \log W##, which is sometimes called in evidence, was never written by Boltzmann and does not refer to the logarithm of a volume in phase space. The equation was first written down by Max Planck, who correctly attributed the ideas behind it to Boltzmann. Planck also stated explicitly that the symbol “W” stands for the German word “Wahrscheinlichkeit” (which means probability) and refers to the probability of a macroscopic state. The dependence of Boltzmann’s entropy on the number of particles requires the calculation of the probability of the number of distinguishable particles in the each subsystem of a composite system. The calculation of this probability requires the inclusion of the binomial coefficent, ##N!/(N_1!N_2!)##, where ##N_1## and ##N_2## are the numbers of particles in each subsystem and ##N = N_1 + N_2##. This binomial coefficient is the origin of the missing factor of ##1/N!## in the traditional definition, and leads to an expression for the entropy that is extensive."

From: Robert H. Swendsen, "Gibbs’ Paradox and the Definition of Entropy", Entropy 2008, 10, 15-18.
 
  • Like
Likes Philip Koeck and dextercioby
