Can indistinguishable particles obey Boltzmann statistics?

In summary, textbooks claim that particles following Boltzmann statistics must be indistinguishable in order to ensure an extensive entropy. However, a combinatorial derivation shows that both distinguishable and indistinguishable particles can follow Boltzmann statistics, and that the Bose-Einstein distribution is obtained only for indistinguishable particles. Conversely, although textbooks argue that indistinguishable particles must be treated with quantum statistics, classical treatments of them are sometimes used successfully, such as the Drude model for conduction electrons. The confusion may arise from differing interpretations of the terms "distinguishable" and "indistinguishable" when applied to identical particles.
  • #71
Philip Koeck said:
I've tried to fill in the gaps of this derivation (see appended pdf) and I don't get the same result. Am I making a mistake?
It looks like something went wrong here:
[attached image: the step in question from Philip Koeck's derivation]

After factoring out the ##g^{n}## the expression should read as
$$\frac{g^{n}\left(1+n/g\right)^{g}}{n^{n}}$$
then use the approximation
$$\left(1+\frac{n}{g}\right)^{g}\approx e^{n}$$
for ##g \gg n##.
 

  • #72
NFuller said:
It looks like something went wrong here:
After factoring out the ##g^{n}## the expression should read as
$$\frac{g^{n}\left(1+n/g\right)^{g}}{n^{n}}$$
I really don't see a mistake there, I'm afraid. I'm not actually factoring out ##g^{n}##. What happens to the ##n## in the exponent ##n+g## in your result?
 
  • #73
I think this is how it is justified:
$$\frac{(n+g)^{n+g}}{n^{n}g^{g}}=\frac{(n+g)^{n}(n+g)^{g}}{n^{n}g^{g}}\approx\frac{g^{n}(n+g)^{g}}{n^{n}g^{g}}=\frac{g^{n}g^{g}\left(1+n/g\right)^{g}}{n^{n}g^{g}}$$
$$=\frac{g^{n}\left(1+n/g\right)^{g}}{n^{n}}\approx\frac{g^{n}e^{n}}{n^{n}}\approx\frac{g^{n}}{n!}$$
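A quick numerical check of this chain of approximations, with arbitrary example values satisfying ##g \gg n \gg 1## (logarithms are used to avoid overflow):

```python
import math

# Compare ln[(n+g)^(n+g) / (n^n g^g)] with ln[g^n / n!]
n, g = 50, 10_000  # example values with g >> n >> 1

lhs = (n + g) * math.log(n + g) - n * math.log(n) - g * math.log(g)
rhs = n * math.log(g) - math.lgamma(n + 1)  # lgamma(n+1) = ln(n!)

print(lhs, rhs)           # ~315.0 vs ~312.0
print((lhs - rhs) / lhs)  # relative difference of about 1%
```

Most of the remaining ~1% gap comes from the crude Stirling form ##n! \approx n^{n}e^{-n}## used in the last step; it shrinks as n and g grow.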
 
  • #74
NFuller said:
I think this is how it is justified:
$$\frac{(n+g)^{n+g}}{n^{n}g^{g}}=\frac{(n+g)^{n}(n+g)^{g}}{n^{n}g^{g}}\approx\frac{g^{n}(n+g)^{g}}{n^{n}g^{g}}=\frac{g^{n}g^{g}\left(1+n/g\right)^{g}}{n^{n}g^{g}}$$
$$=\frac{g^{n}\left(1+n/g\right)^{g}}{n^{n}}\approx\frac{g^{n}e^{n}}{n^{n}}\approx\frac{g^{n}}{n!}$$
Thanks, that must be it. I didn't see the additional approximation.
 
  • #75
NFuller said:
I'm sorry but I don't think I understand what you are asking. Can you rephrase this?

I'll try. But we face the fundamental problem that the meaning of a physical system "selecting" a certain state hasn't been defined. (E.g. are we talking about the state that is "selected" if we select a random time in [0,T] to measure the state of a system in equilibrium?)

Two competing definitions of micro-state have cropped up. In definition 1) a micro-state is only described by the occupancy numbers. In definition 2) the description also includes the labels of which particles are occupying the energy levels.

So a [classical] view is that a given physical system can be described by two probability distributions: f1 describes the probability of the system "selecting" the states of definition 1), and f2 describes the probability of selecting the states of definition 2).

We are going to model the system in equilibrium either by assuming f1 to be a maximum entropy distribution subject to some constraints or we are going to model the system by assuming f2 is a maximum entropy distribution subject to the same constraints.

How do we choose between using f1 versus f2? Is the choice made on a purely empirical basis - to match data from experiments? Or is there some collection of assumptions and definitions that can deduce which distribution we choose?

Speculating about the deductive way - the appropriate choice may be dictated by how we define "equilibrium".
I think the closest I can give to a "consensus definition" is the one given in Kardar's statistical physics book. He defines a microstate of an N-particle gas as a point in the 6N-dimensional phase space spanned by the positions and momenta of all the particles.

The literal interpretation of that definition would distinguish particle 1 from particle 2 via the position of its data in the 6N-dimensional vector. So that definition agrees with definition 2). It also agrees with the definition used in the Wikipedia article https://en.wikipedia.org/wiki/Maxwell–Boltzmann_statistics.

To introduce the whatever-we-shall-call-it concept of definition 1), the Wikipedia article speaks of "degeneracies" of microstates.
 
  • #76
Stephen Tashi said:
Two competing definitions of micro-state have cropped up. In definition 1) a micro-state is only described by the occupancy numbers. In definition 2) the description also includes the labels of which particles are occupying the energy levels.
Stephen Tashi said:
How do we choose between using f1 versus f2? Is the choice made on a purely empirical basis - to match data from experiments? Or is there some collection of assumptions and definitions that can deduce which distribution we choose?
I think I understand your confusion. The choice of which description of a microstate to use depends on the type of statistical ensemble being employed. For example, your definition 2 says to include the labels of which particles occupy which energy levels, but what if all the particles have the same energy? Then we must use the microcanonical ensemble and definition 1, and f1 is used. If all the particles are at the same temperature but may have different energies, then the canonical ensemble is used, which follows from definition 2.
 
  • #77
NFuller said:
The choice of which description of microstate to use depends on the type of statistical ensemble being employed. For example, your definition 2 says to include the labels of which particles occupy which energy levels, but what if all the particles have the same energy?
I understand a situation where the totality of the particles has a constant energy.

Then we must use the microcanonical ensemble and definition 1 and f1 is used.
I understand that's the standard procedure. I don't understand the justification for "must". Is it empirical or deductive? Even if it's only tradition, there must be some empirical reason why the tradition is followed.

If all the particles are at the same temperature but may have different energy, then the canonical ensemble is used, which follows from definition 2.

I understand that's standard procedure, but again, I don't see the justification for it. We can't justify it by saying that the procedure is justified by the definition and the definition justifies the procedure.
 
  • #78
NFuller said:
I think this is how it is justified:
$$\frac{(n+g)^{n+g}}{n^{n}g^{g}}=\frac{(n+g)^{n}(n+g)^{g}}{n^{n}g^{g}}\approx\frac{g^{n}(n+g)^{g}}{n^{n}g^{g}}=\frac{g^{n}g^{g}\left(1+n/g\right)^{g}}{n^{n}g^{g}}$$
$$=\frac{g^{n}\left(1+n/g\right)^{g}}{n^{n}}\approx\frac{g^{n}e^{n}}{n^{n}}\approx\frac{g^{n}}{n!}$$
I have a sort of a summary of my view on things now:
I've appended some derivations that show most of what I've come up with.
In short: The Boltzmann distribution follows from "traditional" Boltzmann counting for distinguishable particles.
A Boltzmann-like distribution, but without the factor N, follows from "correct" Boltzmann counting, which is a limiting case of Bose-Einstein counting for indistinguishable particles when ##g \gg n \gg 1## for every energy level.
I don't see that this necessarily makes particles distinguishable. Low occupancy is not the same as distinguishability, in my opinion.
In both cases I assume S = k ln W when I determine the Lagrange multipliers and for deriving an expression for S at the end.
Obviously if I allow for S = k ln W + f(N) the results change.
In both cases I get an extensive expression for S, so there's no indication of a paradox, again assuming S = k ln W.
Two things worry me: there is no factor N in the Boltzmann distribution from "correct" counting, and S for distinguishable particles is missing the "pV term".
Any comments?
 

Attachments

  • Boltzmann and correct Boltzmann.pdf
  • #79
Philip Koeck said:
I don't see that this necessarily makes particles distinguishable. Low occupancy is not the same as distinguishability, in my opinion.
It may be helpful to look back at posts 59 and 60. There, a simple example was given showing how to count the states of two identical particles. The Bose-Einstein counting is the exact counting, but if ##g \gg n \gg 1## then this can be approximated by "correct Boltzmann counting". This is not making the particles distinguishable; it is only a mathematical approximation.
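To make that concrete, here is a small check (my own illustration; the (n, g) pairs are arbitrary examples) that the exact Bose-Einstein count approaches ##g^{n}/n!##:

```python
from math import comb, factorial

# Exact Bose-Einstein count of ways to put n indistinguishable particles
# into g states, versus "correct Boltzmann counting" g^n / n!
for n, g in [(2, 10), (5, 1_000), (50, 1_000_000)]:
    exact = comb(n + g - 1, n)    # combinations with repetition
    approx = g**n / factorial(n)
    print(n, g, approx / exact)   # ratio approaches 1 when g >> n >> 1
```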
Philip Koeck said:
Two things worry me: there is no factor N in the Boltzmann distribution from "correct" counting, and S for distinguishable particles is missing the "pV term".
This is really the whole point. The correct counting lacks the factor N and gives the correct thermodynamic relations. The incorrect counting has the factor N, which cancels out with another factor N later, so you end up missing the pressure term.
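To make the cancellation explicit, here is a sketch (my own, assuming a monatomic ideal gas with single-particle partition function ##z = V/\lambda^{3}##, where ##\lambda = h/\sqrt{2\pi mkT}##). The two countings give

$$S_{\text{correct}}=Nk\left[\ln\frac{V}{N\lambda^{3}}+\frac{5}{2}\right]\qquad\text{vs.}\qquad S_{\text{incorrect}}=Nk\left[\ln\frac{V}{\lambda^{3}}+\frac{3}{2}\right]$$

The first (Sackur-Tetrode) is extensive; the second exceeds it by ##k\ln N!\approx Nk\ln N-Nk## and is missing the ##+1 = pV/NkT## inside the bracket, which is the "pressure term" in question.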
 
  • #80
Stephen Tashi said:
I understand a situation where the totality of the particles has a constant energy.
It's not just that the total energy is constant; each particle also has a constant average energy. This is sufficient because equilibrium statistical mechanics is a time independent construction of the particle behavior.
Stephen Tashi said:
I understand that's the standard procedure. I don't understand the justification for "must". Is it empirical or deductive? Even if it's only tradition, there must be some empirical reason why the tradition is followed.
The ansatz of the microcanonical ensemble is that all the particles lie on the surface of a ##3N##-dimensional sphere in momentum space, i.e. they all have the same energy. Thus if there is a system where all the particles have an average energy ##E##, then the ansatz is satisfied, and the microcanonical ensemble is valid.
 
  • #81
NFuller said:
It's not just that the total energy is constant; each particle also has a constant average energy.
How could there be a non-constant average? I can see how each particle could have the same expected value of energy. Mathematical expectations (and averages) are taken with respect to some variable. So to define what it means for a particle to have an average energy, we need to know what physical variable we are averaging over. Is the average taken with respect to time in some long time interval?

This is sufficient because equilibrium statistical mechanics is a time independent construction of the particle behavior.

Hearing time mentioned makes me hopeful. Are we getting closer to answering my question about what it means for a system to "select" a microstate? After all, if we are computing probabilities that the system "selects" a microstate, we need to know what that means physically to "select". I suggested that we pick a random time from a uniform distribution in some long time interval [0,T] and observe the microstate of the system at the selected time. Nobody has supported or opposed that definition of "selecting".

The ansatz of the microcanonical ensemble is that all the particles lie on the surface of a ##3N##-dimensional sphere in momentum space, i.e. they all have the same energy.

I don't know what the word "ansatz" means in this context. Going by the Wikipedia article https://en.wikipedia.org/wiki/Microcanonical_ensemble , the microcanonical ensemble is used to represent a system of particles that has a time-invariant value of energy. Is the only way to represent such a system to represent each individual particle as having the same time-invariant value of energy?

Thus if there is a system where all the particles have an average energy ##E##, then the ansatz is satisfied, and the microcanonical ensemble is valid.

What's valid is that a system where all particles have the same average energy may satisfy the definition of a microcanonical ensemble.

But this doesn't answer the question of why, in a microcanonical ensemble, a particular definition of "microstate" is appropriate for defining events with equal probability. The definition of "microcanonical ensemble" is made without defining a "microstate".

My understanding so far:
By definition, in the "microcanonical ensemble", each particle has the same average energy ##E##, where the average is taken with respect to time, say over some long time interval. The system of particles has an average energy ##E_S##, where the average is taken with respect to time. Since both ##E## and ##E_S## are averages taken with respect to time, they are constants with respect to time.

I think the definition of "microcanonical ensemble" also says that the total energy ##E(t)## of the system at time ##t## is constant with respect to time. Assuming that is a requirement, then it must be that ##E(t) = E_S##. This still leaves open the possibility that the energy of an individual particle can vary with time.

Do I have the right picture?
 
  • #82
NFuller said:
It may be helpful to look back at posts 59 and 60. There, a simple example was given showing how to count the states of two identical particles. The Bose-Einstein counting is the exact counting, but if ##g \gg n \gg 1## then this can be approximated by "correct Boltzmann counting". This is not making the particles distinguishable; it is only a mathematical approximation.
I completely agree.

NFuller said:
This is really the whole point. The correct counting lacks the factor N and gives the correct thermodynamic relations. The incorrect counting has the factor N, which cancels out with another factor N later, so you end up missing the pressure term.
Here we might be at the core of my problem.
As I see it, the Boltzmann distribution for classical, distinguishable particles, such as xenon atoms, is this (written without the index i):
$$n = N g\, e^{a} e^{-bu}$$
Here n is the number of particles in a particular energy level with energy u, g is the number of states in that level, N is the total number of particles, a is the chemical potential/kT, and b is 1/kT.
If T is constant, I would say g does not depend on N. I don't think the chemical potential depends on N either, does it?
On the other hand n must be proportional to N.
That's why the factor N has to be there, I think (unless the chemical potential changes with N).

In this context, I think I've noticed that the cause of the non-extensive entropy expression for classical, distinguishable particles, based on "incorrect" Boltzmann counting, is actually that the density of states, g(u)du, is made proportional to V. I've seen two ways of coming up with this expression for the density of states (in Beiser and in Blundell), and I don't quite buy either of them.

I agree that for quantum mechanical particles the situation can be different since g depends on N and/or V, for example for photons in a box and maybe for a hydrogen gas.
 
  • #83
Stephen Tashi said:
How could there be a non-constant average? I can see how each particle could have the same expected value of energy. Mathematical expectations (and averages) are taken with respect to some variable. So to define what it means for a particle to have an average energy, we need to know what physical variable we are averaging over. Is the average taken with respect to time in some long time interval?
What I meant to say was each particle has the same average energy in the microcanonical ensemble.
Stephen Tashi said:
Hearing time mentioned makes me hopeful. Are we getting closer to answering my question about what it means for a system to "select" a microstate? After all, if we are computing probabilities that the system "selects" a microstate, we need to know what that means physically to "select". I suggested that we pick a random time from a uniform distribution in some long time interval [0,T] and observe the microstate of the system at the selected time. Nobody has supported or opposed that definition of "selecting".
This definition is reasonable.
Stephen Tashi said:
But this doesn't answer the question of why, in a microcanonical ensemble, a particular definition of "microstate" is appropriate for defining events with equal probability. The definition of "microcanonical ensemble" is made without defining a "microstate".
As I mentioned before, a microstate is a point in a ##6N##-dimensional phase space, i.e. a set of ##N## positions and ##N## momenta. In the microcanonical ensemble, these points are restricted to lie on the surface of a ##3N##-dimensional sphere, which is sufficient to constrain one of the thermodynamic variables, i.e. the energy. I don't know of any other way to describe this.
 
  • #84
Philip Koeck said:
Here we might be at the core of my problem.
As I see it, the Boltzmann distribution for classical, distinguishable particles, such as xenon atoms, is this (written without the index i):
$$n = N g\, e^{a} e^{-bu}$$
Here n is the number of particles in a particular energy level with energy u, g is the number of states in that level, N is the total number of particles, a is the chemical potential/kT, and b is 1/kT.
It looks like you are starting to derive the Grand Canonical Ensemble because you have introduced the chemical potential ##\mu##. In that case ##N##, as you have defined it, does not exist, because in this ensemble the particle number is not fixed. What you have is not exactly the grand canonical ensemble; it looks like there is a factor ##n## missing in one of the exponentials. I think it may be easier to start with constructing either the microcanonical or canonical ensemble, where ##N## is a fixed value, and convince yourself of the counting that way.
 
  • #85
NFuller said:
It looks like you are starting to derive the Grand Canonical Ensemble because you have introduced the chemical potential ##\mu##. In that case ##N##, as you have defined it, does not exist, because in this ensemble the particle number is not fixed. What you have is not exactly the grand canonical ensemble; it looks like there is a factor ##n## missing in one of the exponentials. I think it may be easier to start with constructing either the microcanonical or canonical ensemble, where ##N## is a fixed value, and convince yourself of the counting that way.
Not at all. I assume constant U and constant N. The constant N constraint leads to the Lagrange multiplier alpha, which turns out to be ##-\mu/kT##. So the chemical potential occurs in the distribution because of the constraint of constant N. I've appended the text once more.
You're not really answering my question, whether the factor N in the Boltzmann distribution makes sense or not.
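For reference, here is the step being described, as I understand it (a standard sketch, assuming ##W=\prod_i g_i^{n_i}/n_i!## and Stirling's approximation):

$$\frac{\partial}{\partial n_{i}}\left[\ln W-\alpha\left(\sum_{j}n_{j}-N\right)-\beta\left(\sum_{j}n_{j}u_{j}-U\right)\right]=\ln\frac{g_{i}}{n_{i}}-\alpha-\beta u_{i}=0\;\;\Rightarrow\;\; n_{i}=g_{i}\,e^{-\alpha}e^{-\beta u_{i}}$$

Comparison with thermodynamics then identifies ##\beta=1/kT## and ##\alpha=-\mu/kT##, so the chemical potential enters through the constant-N constraint alone, with no grand-canonical assumption.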
 

Attachments

  • Boltzmann and correct Boltzmann.pdf
  • #86
NFuller said:
As I mentioned before, a microstate is a point in a ##6N##-dimensional phase space, i.e. a set of ##N## positions and ##N## momenta. In the microcanonical ensemble, these points are restricted to lie on the surface of a ##3N##-dimensional sphere, which is sufficient to constrain one of the thermodynamic variables, i.e. the energy. I don't know of any other way to describe this.

My understanding of that:

One point on the ##6N## dimensional sphere represents the state of the system (at a given time), and the changing state of the system is visualized by a "moving point" on the surface of the sphere. By definition of this ##6N## dimensional point, subsets of its components represent data for individual particles, so by definition of such a point, each individual particle is "distinguished".

However, to justify computing a probability distribution based on the above model in a particular way requires more assumptions than merely using the above as a definition. The basic concept must be the (imperfect) notion that "Each point on the sphere has the same probability of being where the system is" - meaning (the equally imperfect concept) that "The system spends the same fraction of time (in some long time interval [0,T]) at each point on the sphere".

Naturally the notions of probability "at" a point must be replaced by a probability density. And the notion of the fraction of time a system spends "at" a point only makes literal sense if the system stops dead in its tracks for some finite interval of time.

The calculations based on defining discrete microstates and doing combinatorics on them are unjustified unless we establish facts beyond the mere definition of the microcanonical ensemble. These facts are:

1) The probability density for the system being at a point on the ##6N## dimensional sphere is a uniform distribution over the surface of the sphere.

2) The energy levels of the discrete microstates are defined in such a way that assuming a uniform distribution over the microstates approximates a uniform probability density over the surface of the sphere - and the correct answer (to a given computation) about the uniform probability density can be found by taking the limit of the calculation performed on the discrete microstates as the number of microstates approaches infinity.

I'll conjecture fact 1) can be established by defining "equilibrium" to mean exactly the situation described in fact 1). Instead of such legal trickery, there are probably experimental ways to test whether a system that is in equilibrium (using the empirical notion of that word) satisfies fact 1).

I'll conjecture that fact 2) is never established in typical expositions of thermodynamics! The mathematical aspects of it look imposing. They involve ergodic processes and limits of sequences of functions. (Maybe there's no mathematical way to make the classical model actually work!)
 
  • #87
Philip Koeck said:
Not at all. I assume constant U and constant N. The constant N constraint leads to the Lagrange multiplier alpha, which turns out to be ##-\mu/kT##. So the chemical potential occurs in the distribution because of the constraint of constant N. I've appended the text once more.
You're not really answering my question, whether the factor N in the Boltzmann distribution makes sense or not.
It looks like when using the correct Boltzmann counting, without the factor ##N##, the number of particles in the ##i##th state is a function of the temperature only. This may be reasonable in the thermodynamic limit, but I'm not sure. Correct me if I'm wrong, but it looks like you are holding ##N##, ##T##, and ##U## constant in the derivation. What is bothering me is that this is not one of the five standard ensemble types in use, so I am wondering if this approach is meaningful when describing a thermodynamic state.
 
  • #88
Stephen Tashi said:
However, to justify computing a probability distribution based on the above model in a particular way requires more assumptions than merely using the above as a definition. The basic concept must be the (imperfect) notion that "Each point on the sphere has the same probability of being where the system is" - meaning (the equally imperfect concept) that "The system spends the same fraction of time (in some long time interval [0,T]) at each point on the sphere".
This is generally justified a priori by stating that there is no directional preference in the momentum, so the points are uniformly distributed on the sphere.
Stephen Tashi said:
(Maybe there's no mathematical way to make the classical model actually work!)
You may be right. I think I have heard of people trying to prove the a priori arguments given and they always fail miserably.
 
  • #89
NFuller said:
It looks like when using the correct Boltzmann counting, without the factor ##N##, the number of particles in the ##i##th state is a function of the temperature only. This may be reasonable in the thermodynamic limit, but I'm not sure. Correct me if I'm wrong, but it looks like you are holding ##N##, ##T##, and ##U## constant in the derivation. What is bothering me is that this is not one of the five standard ensemble types in use, so I am wondering if this approach is meaningful when describing a thermodynamic state.
If I restrict myself to ideal gases then U depends only on T and N, so T is automatically constant, I agree.
Also notice that I say nothing about T until I interpret the Lagrange multipliers, so T is not really part of the model I use in the derivations.
I was assuming my derivations were microcanonical due to the constant U, but I don't really know.
 
  • #90
Let's look at a very specific case:
The Boltzmann distribution for "correct Boltzmann counting", which is an approximation to Bose-Einstein, is this (written without index):
$$n = g\, e^{a} e^{-bu}$$
Here n is the number of particles in a particular energy level with energy u, g is the number of states in that level, N is the total number of particles, a is the chemical potential/kT, and b is 1/kT.
Now assume we have two identical containers with equal V. In each there is an ideal gas at very low pressure at temperature T.
Container A contains twice as many atoms as container B, so obviously the pressure and the internal energy are twice as high in A.
Let's divide the range of kinetic energies into discrete energy levels for the sake of the model.
I would say that for a given energy level u, the number of atoms in that level, n, should be twice as big for container A as for container B. Do you agree?
If so, which factor in the above distribution function accounts for this?
 
  • #91
Philip Koeck said:
If so, which factor in the above distribution function accounts for this?
The chemical potential of an ideal gas is a function of ##N##, ##V##, and ##T## or simply ##P## and ##T##.
$$\mu=kT\ln\left(\frac{\lambda^{3}N}{V}\right)=kT\ln\left(\frac{\lambda^{3}P}{kT}\right)$$
where ##\lambda=h/\sqrt{2\pi mkT}##
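A small numerical illustration (my own sketch; the gas, temperature, and volume are arbitrary assumed values, not taken from the thread): since ##e^{\mu/kT} = \lambda^{3}N/V##, the occupancy ##n = g\,e^{\mu/kT}e^{-u/kT}## is directly proportional to ##N##, which answers the question about the two containers.

```python
import math

# Show that n = g e^(mu/kT) e^(-u/kT) is proportional to N once
# mu = kT ln(lambda^3 N / V) is inserted.
k = 1.380649e-23                   # Boltzmann constant (J/K)
h = 6.62607015e-34                 # Planck constant (J s)
m = 131.293 * 1.66053906660e-27    # mass of a xenon atom (kg)
T = 300.0                          # temperature (K)
V = 1.0e-3                         # volume (m^3), i.e. one litre

lam = h / math.sqrt(2 * math.pi * m * k * T)   # thermal wavelength

def occupancy(N, u, g=1.0):
    """n = g exp(mu/kT) exp(-u/kT) with mu = kT ln(lambda^3 N / V)."""
    return g * (lam**3 * N / V) * math.exp(-u / (k * T))

N = 1.0e20
u = 1.5 * k * T                    # a sample level energy
print(occupancy(2 * N, u) / occupancy(N, u))   # -> 2.0, so n scales with N
```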
 
  • #92
NFuller said:
The chemical potential of an ideal gas is a function of ##N##, ##V##, and ##T## or simply ##P## and ##T##.
$$\mu=kT\ln\left(\frac{\lambda^{3}N}{V}\right)=kT\ln\left(\frac{\lambda^{3}P}{kT}\right)$$
where ##\lambda=h/\sqrt{2\pi mkT}##
Thanks for the help! It seems that this expression is derived based on "correct Boltzmann counting" (at least where I found it in Blundell's book) and it nicely puts the factor N back into the Boltzmann distribution.
 
  • #93
I've not read the entire thread to the end, but there seems to be a lot of confusion due to the didactical mistake, perpetuated for almost 200 years, of treating classical statistics first. The problem with classical statistics is that there is no way to properly define phase-space distribution functions and the entropy. Boltzmann was ingenious enough to plug in the additional factor ##1/N!## to remedy the Gibbs paradox with a handwaving argument about the indistinguishability of classical particles, but for a true understanding of both the correct entropy expression and phase-space distribution functions you necessarily need quantum mechanics, which introduces the notion of a natural "action" or "phase-space-volume measure" in terms of Planck's constant ##h=2 \pi \hbar##.

Here it is of utmost help for the understanding to derive the Fermi-Dirac, Bose-Einstein, and Boltzmann statistics in the original way by counting the combinatorics to distribute particles over quantum (!) states. It's very clearly written in Landau and Lifshitz volume 5. I've stolen this from them in my transport-theory manuscript:

https://th.physik.uni-frankfurt.de/~hees/publ/kolkata.pdf

The derivation can be found for the Boltzmann statistics in Sec. 1.2.2 (using of course the necessary minimum of the quantum definition of the phase-space volume) and the quantum statistics cases in Sec. 1.8.

Of course, it's also a good idea to derive classical statistical mechanics from the very beginning, starting from the Liouville equation for phase-space distributions and deriving the Boltzmann transport equation by cutting the BBGKY hierarchy at the lowest non-trivial order. That makes it completely clear why Boltzmann's H-theorem is valid and thus why equilibrium is the state of maximum entropy under the constraints due to the additive conservation laws.
 
  • #94
vanhees71 said:
I've not read the entire thread to the end, but there seems to be a lot of confusion due to the didactical mistake, perpetuated for almost 200 years, of treating classical statistics first. The problem with classical statistics is that there is no way to properly define phase-space distribution functions and the entropy. Boltzmann was ingenious enough to plug in the additional factor ##1/N!## to remedy the Gibbs paradox with a handwaving argument about the indistinguishability of classical particles, but for a true understanding of both the correct entropy expression and phase-space distribution functions you necessarily need quantum mechanics, which introduces the notion of a natural "action" or "phase-space-volume measure" in terms of Planck's constant ##h=2 \pi \hbar##.

Here it is of utmost help for the understanding to derive the Fermi-Dirac, Bose-Einstein, and Boltzmann statistics in the original way by counting the combinatorics to distribute particles over quantum (!) states. It's very clearly written in Landau and Lifshitz volume 5. I've stolen this from them in my transport-theory manuscript:
The problem I have with this view is that both BE and FD statistics are based on indistinguishability (I think; correct me if I'm wrong). Clearly we can make the approximation of low occupancy and arrive at the Boltzmann statistics (as worked out for BE earlier in the thread by NFuller), but in my mind that doesn't remove the assumption of indistinguishability. Is there really no place for a purely classical description of an ideal gas of very large particles (C60 molecules, colloids, a heavy noble gas)? The way I see things, these particles are definitely distinguishable, either because they can be tracked with a microscope or because they are actually slightly different, as in the case of colloids. The de Broglie wavelength of these particles would also be tiny if they are fast enough (on average), so I see no reason to use quantum mechanics. In summary, I see two reasons not to treat them as a limiting case of quantum statistics: they are distinguishable, and they are much too heavy and fast.
 
  • #95
Of course, Bose-Einstein and Fermi-Dirac statistics are based on indistinguishability, and this is one of the simplest examples of the fact that classical physics is not entirely correct on the microscopic level of matter. It cannot be described in any classical way, but that is not a problem; it's a feature! At the same time you cure the problems of classical statistical physics by interpreting it as an approximation of quantum statistics, and you understand why macroscopic matter behaves to such high accuracy classically in almost all circumstances of our everyday lives!

You also can't establish classical statistics properly without quantum theory, since you have no natural measure for phase-space volumes within classical physics. Boltzmann was pretty much aware of this problem too. Nowadays it's easy to derive the correct natural measure as ##h^{f}=(2 \pi \hbar)^{f}##, where ##f## is the number of degrees of freedom in configuration space. Phase space consists of configuration-space as well as canonical-momenta degrees of freedom, but each conjugate pair contributes one factor of ##h##, and this leads indeed to the correct dimension, because ##qp## for any pair of configuration and canonical-momentum observables has the dimension of an action.
 
  • #96
vanhees71 said:
Of course, Bose-Einstein and Fermi-Dirac statistics are based on indistinguishability, and this is one of the simplest examples of the fact that classical physics is not entirely correct on the microscopic level of matter. It cannot be described in any classical way, but that is not a problem; it's a feature! At the same time you cure the problems of classical statistical physics by interpreting it as an approximation of quantum statistics, and you understand why macroscopic matter behaves to such high accuracy classically in almost all circumstances of our everyday lives!

You also can't establish classical statistics properly without quantum theory, since you have no natural measure for phase-space volumes within classical physics. Boltzmann was pretty much aware of this problem too. Nowadays it's easy to derive the correct natural measure as ##h^{f}=(2 \pi \hbar)^{f}##, where ##f## is the number of degrees of freedom in configuration space. Phase space consists of configuration-space as well as canonical-momenta degrees of freedom, but each conjugate pair contributes one factor of ##h##, and this leads indeed to the correct dimension, because ##qp## for any pair of configuration and canonical-momentum observables has the dimension of an action.
I'm not sure that you need an absolute measure for a volume in phase space. To derive the Maxwell-Boltzmann distribution for an ideal gas, for example, it's sufficient to state that the number of states with energies between u and u+du is proportional to the volume of a spherical shell in momentum space (and to the real-space volume). There's no need to come up with a phase-space volume unit involving Planck's constant, which, I agree, is a rather strange thing to do for an entirely classical system (if such a system exists).
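For reference, the spherical-shell statement written out (standard nonrelativistic kinematics; the undetermined proportionality constant is exactly where a phase-space unit such as ##h^{3}## would otherwise enter):

$$g(u)\,du\;\propto\;V\,4\pi p^{2}\,dp\;=\;2\pi V\,(2m)^{3/2}\sqrt{u}\,du,\qquad u=\frac{p^{2}}{2m}$$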
 
  • #97
Without an absolute measure of phase space you have to introduce an arbitrary one, because otherwise you cannot define entropy properly. There must be no dimensionful quantities in logarithms!
 
  • #98
Let me quote Callen, Thermodynamics and an Introduction to Thermostatistics, 2nd ed., sec. 16-9:
Callen said:
[...] the partition function becomes
$$
z = \frac{1}{h^3} \int e^{-\beta \mathcal{H}} dx \, dy \, dz \, dp_x \, dp_y \, dp_z \quad \quad (16.68)
$$
Except for the appearance of the classically inexplicable prefactor (##1/h^3##), this representation of the partition sum (per mode) is fully classical. It was in this form that statistical mechanics was devised by Josiah Willard Gibbs in a series of papers in the Journal of the Connecticut Academy between 1875 and 1878. Gibbs' postulate of equation 16.68 (with the introduction of the quantity ##h##, for which there was no a priori classical justification) must stand as one of the most inspired insights in the history of physics. To Gibbs, the numerical value of ##h## was simply to be determined by comparison with empirical thermophysical data.
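Evaluating (16.68) for a free particle with ##\mathcal{H}=p^{2}/2m## in a volume ##V## (a standard calculation, included here to connect with the ##\lambda## quoted in post #91):

$$z=\frac{1}{h^{3}}\int e^{-\beta p^{2}/2m}\,d^{3}x\,d^{3}p=\frac{V}{h^{3}}\left(\frac{2\pi m}{\beta}\right)^{3/2}=\frac{V}{\lambda^{3}},\qquad\lambda=\frac{h}{\sqrt{2\pi mkT}}$$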
 
  • #99
vanhees71 said:
Without an absolute measure of phase space you have to introduce an arbitrary one, because otherwise you cannot define entropy properly. There must be no dimensionful quantities in logarithms!
You are assuming that W in the expression S = k ln W stands for a volume in phase space. What if we just regard W as a whole number, the number of ways that a system can realize a certain distribution of particles among energy levels? Obviously for a classical gas "energy level" actually refers to a small range of energies.
 
  • #100
To introduce entropy in the usual hand-waving way you have to count microstates compatible with a given macrostate. In classical physics it's suggestive to use the phase-space volume as the measure of states because of Liouville's theorem: the phase-space volume is conserved along the Hamiltonian flow of the system, and that's how Gibbs et al. came to this correct assumption. To "count" you need a natural measure of phase space, i.e., a natural scale for phase-space volumes (of the dimension of the appropriate power of action), and there is no such natural scale in classical physics.

A more convincing argument for me is the information-theoretical approach to statistical physics. There it's clear that the Shannon-Jaynes (von Neumann) entropy is always relative to what is considered "complete possible information" and a corresponding reference probability distribution, which in the case of classical physics again is equipartition over the available phase-space volume. Then the same dilemma with the missing natural scale for phase-space volumes arises as with the naive approach to entropy.

What you suggest is, of course, a correct approach, using the microcanonical ensemble, but that doesn't help with the dilemma, since again you need to count the available microstates in terms of phase-space volumes.
 
  • #101
A little spin-off from this thread: a state for 1 particle is given by a small volume of size ##h^3## in phase space. If two particles occupied the same volume in 1-particle phase space, that would mean, in classical terms, that they are at the same spatial coordinates and moving with the same momentum vector at a given time. In other words they would be inside each other. For classical particles (C60 molecules etc.) I would say that's not possible. That seems to indicate that FD statistics is the obvious choice for describing classical particles. Most textbooks, however, introduce classical systems as having no limit on the number of particles per state. Do you agree with my thinking?
 
  • #102
Hm, that's a contradictio in adjecto, because classical particles only make sense in a realm where the Bose or Fermi nature is irrelevant. Both the Bose and the Fermi statistics have the Boltzmann statistics as their low-occupation-number limit (including the ##N!## factor "repairing" the Gibbs paradox). The low-occupation-number constraint makes the indistinguishability of the particles irrelevant, since there is on average less than 1 particle in a single-particle phase-space cell of size ##h^3##.
 
  • #103
vanhees71 said:
Hm, that's a contradictio in adjecto, because classical particles only make sense in a realm where the Bose or Fermi nature is irrelevant. Both the Bose and the Fermi statistics have the Boltzmann statistics as their low-occupation-number limit (including the ##N!## factor "repairing" the Gibbs paradox). The low-occupation-number constraint makes the indistinguishability of the particles irrelevant, since there is on average less than 1 particle in a single-particle phase-space cell of size ##h^3##.
Assuming we could create a system of classical particles like C60 molecules at high occupancy, would it follow FD or BE statistics? Or is this not even a sensible question?
 
  • #104
It depends on how high the occupancy is. As a whole, C60 is a boson. So if not too close together they behave as bosons. Also the carbon atoms are bosons (if you have the usual ##^{12}\text{C}## isotope), but of course on the level of the fundamental constituents you have fermions. I guess, however, that to get this fermionic nature into action you'd have to pack the buckyballs so close together that you destroy them ;-).
 
  • #105
vanhees71 said:
It depends on how high the occupancy is. As a whole, C60 is a boson. So if not too close together they behave as bosons. Also the carbon atoms are bosons (if you have the usual ##^{12}\text{C}## isotope), but of course on the level of the fundamental constituents you have fermions. I guess, however, that to get this fermionic nature into action you'd have to pack the buckyballs so close together that you destroy them ;-).
Are you beginning to see the problem? If C60 truly behaved like a boson you would be able to put any number of particles into the same state (or "point" in phase space). I find that really hard to imagine. I think they'll simply and very classically be in each other's way, even considering the effects of uncertainty. To me it seems that quantum statistics simply doesn't apply to systems that are "too classical".
 
