Are Boltzmann's statistics compatible with a deterministic universe?

In summary, Boltzmann's statistics are compatible with a deterministic universe. His model is based on probability theory, yet modern computer simulations of an ideal gas, which implement only the deterministic and reversible laws of motion, still reproduce the second-law dispersion. Boltzmann believed that nature tends to go from less probable to more probable states, but this is only true for macrostates that are far from the most probable state. According to Boltzmann, probability theory does not determine whether nature is deterministic or not. Whether probability is intrinsic to quantum mechanics is still under debate.
  • #36
Bill Dreiss said:
From Ludwig Boltzmann, Lectures on Gas Theory (Dover Books on Physics), p. 59:

“It is only when one reverses the velocities at time t1 that he obtains a motion for which H must increase during the time interval t1 – t0, and even then H would probably decrease again after that…” [my italics]

I’m using the same logic as Boltzmann. Do you see a flaw in our reasoning?
I do not like the word "probably."
 
  • #37
PeroK said:
It takes a special type of mind, I guess, to believe something like that.
The essence of science, when you encounter something hard to understand, is to think.
 
  • #38
Physicist248 said:
I do not like the word "probably."
I don't either, since the process he's describing is deterministic.
 
  • #39
Bill Dreiss said:
I don't either, since the process he's describing is deterministic.
There is a fundamental difference between deterministic and predictable. The critical role is played by the initial conditions, which cannot be known to infinite precision. Take the three-body gravitational problem, which, despite being deterministic, exhibits chaotic behaviour (a toy numerical illustration follows the link below).

https://en.wikipedia.org/wiki/Three-body_problem
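To illustrate, here is a minimal sketch of that point, using the one-dimensional logistic map as a stand-in for the three-body problem (purely for brevity; all numbers are illustrative): a completely deterministic rule still defeats prediction, because any finite error in the initial conditions grows exponentially.

Python:
# Toy model: the chaotic logistic map x -> 4x(1-x), standing in for the
# three-body problem. Deterministic rule, illustrative parameter values.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x1, x2 = 0.3, 0.3 + 1e-12      # two initial conditions differing by 1e-12
for n in range(60):
    x1, x2 = logistic(x1), logistic(x2)
print(abs(x1 - x2))            # O(1) after ~60 steps: the forecasts disagree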
 
  • Likes: vanhees71
  • #40
Physicist248 said:
I do not like the word "probably."
As near as I can tell, Boltzmann's insight was first proposed by William Thomson (Lord Kelvin) in the paper “The Kinetic Theory of the Dissipation of Energy” (1874), which can be found in The Kinetic Theory of Gases by Stephen G. Brush (see the highlighted text in the attachment). It's possible that this is where Boltzmann got the idea, since he was in frequent communication with the English physicists.
 

Attachments

  • Thompson p.355 in Brush.pdf
  • #42
Bill Dreiss said:
For an example, see https://phet.colorado.edu/sims/html/gas-properties/latest/gas-properties_en.html. The molecules are identical circles in 2-dimensional space. The initial conditions usually have the molecules bunched in a small region of the box. They then spontaneously disperse, increasing entropy. The simulation is strictly deterministic and probability is not involved. Since the algorithm instantiates only Newton's three deterministic laws of motion, the second-law dispersion can only be a consequence of these laws.

Which particular law is behind the dispersion? The third law, which relates to collisions, is clearly reversible. The second law, which relates to force, is not relevant, since the ideal gas model assumes that no external forces or forces between the molecules exist. This leaves the first law, the law of inertia. The action of inertia is clearly visible as the source of the dispersion described by the second law of thermodynamics.

If the simulation is reversed, the molecules will retrace their paths back to the initial conditions, temporarily decreasing entropy. However, if the simulation continues to run, the molecules will once again disperse, increasing entropy. The direction of increasing spontaneous dispersion is the direction of time.
OK, so let me put it this way. Suppose that we put gas molecules in a container. We start with all the molecules on the left having high kinetic energy, and all the molecules on the right having low kinetic energy. We run the simulation, and the kinetic energy will even out. Then we rerun the simulation with all the velocity signs reversed. The kinetic energy will still even out. Is that what you are saying?
 
  • #43
Physicist248 said:
OK, so let me put it this way. Suppose that we put gas molecules in a container. We start with all the molecules on the left having high kinetic energy, and all the molecules on the right having low kinetic energy. We run the simulation, and the kinetic energy will even out. Then we rerun the simulation with all the velocity signs reversed. The kinetic energy will still even out. Is that what you are saying?
The simulation I linked to displays the molecular distribution in space. The energy distribution is a separate problem. Everything I've said so far has been related to the spatial distribution, which is generally what we observe in real life. I'm not saying that you rerun the simulation from the beginning, but that after a time t1 you instantaneously reverse the directions of the molecules (don't ask me how, but this is the scenario described in the literature). The molecules will then retrace their paths until they reach the positions that they started in at time t0, arriving after a total elapsed time of 2(t1 - t0). From that point the molecules will once again disperse.
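For concreteness, here is a minimal sketch of such a run. This is not the PhET source code; the model is non-interacting molecules in a 2-D box with specular wall reflections, and all parameter values are illustrative.

Python:
import numpy as np

rng = np.random.default_rng(0)
N, L, dt, steps = 200, 1.0, 1e-3, 2000

# Initial condition: molecules bunched in a small corner region of the box.
pos = rng.uniform(0.0, 0.1, size=(N, 2))
vel = rng.normal(0.0, 1.0, size=(N, 2))

def step(pos, vel):
    """One time step: free flight (inertia) plus specular wall reflections."""
    pos = pos + vel * dt
    over, under = pos > L, pos < 0.0
    pos = np.where(over, 2.0 * L - pos, pos)
    pos = np.where(under, -pos, pos)
    vel = np.where(over | under, -vel, vel)
    return pos, vel

def left_fraction(pos):
    """Fraction of molecules in the left half of the box (a crude macrostate)."""
    return float(np.mean(pos[:, 0] < L / 2))

for _ in range(steps):      # t0 -> t1: the cluster disperses
    pos, vel = step(pos, vel)
print("left fraction at t1:", left_fraction(pos))            # near 0.5

vel = -vel                  # instantaneous velocity reversal at t1
for _ in range(steps):      # total elapsed time 2*(t1 - t0): paths retraced
    pos, vel = step(pos, vel)
print("left fraction after reversal:", left_fraction(pos))   # back near 1.0
# Up to floating-point error the molecules return to the bunched state;
# run further and they disperse again.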
 
  • #44
Bill Dreiss said:
The simulation I linked to displays the molecular distribution in space. The energy distribution is a separate problem. Everything I've said so far has been related to the spatial distribution, which is generally what we observe in real life. I'm not saying that you rerun the simulation from the beginning, but that after a time t1 you instantaneously reverse the directions of the molecules (don't ask me how, but this is the scenario described in the literature). The molecules will then retrace their paths until they reach the positions that they started in at time t0, arriving after a total elapsed time of 2(t1 - t0). From that point the molecules will once again disperse.
Energy distribution is THE problem.
 
  • #45
Physicist248 said:
Is it true that, if the initial conditions are randomly selected, then in most cases the entropy will be increasing (but in some cases it will not)?
No. If you select the initial conditions entirely at random, it is overwhelmingly likely that you will select initial conditions that already have very high entropy, corresponding to thermodynamic equilibrium, and that at a macroscopic level, you will observe the system to be in such an equilibrium and not to change observably with time.

The reason we observe the second law of thermodynamics in our actual universe is that the initial conditions of our universe were not chosen randomly from all possible "states of universes". Our universe's initial conditions were such that the initial entropy was very low. That is what leaves "room" for entropy in our universe to increase and to continue increasing for a long time.
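A quick numerical check of the first point, using the number of molecules in the left half of a box as the macrostate (the parameter values are illustrative):

Python:
import numpy as np

rng = np.random.default_rng(0)
N, trials = 1000, 100_000

# "Selected entirely at random": each molecule independently lands in the
# left or right half of the box with probability 1/2.
n_left = rng.binomial(N, 0.5, size=trials)

# Almost every randomly chosen state is within 5% of the 50/50 macrostate:
print(np.mean(np.abs(n_left - N / 2) <= 0.05 * N))   # ~0.998

# whereas an all-in-one-half, low-entropy start has probability 2^-N:
print(2.0 ** -N)                                     # ~9.3e-302 for N = 1000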
 
  • Likes: jbergman, vanhees71 and PeroK
  • #47
Physicist248 said:
Energy distribution is THE problem.
You're right. Boltzmann focussed on the velocity/energy distribution and made no mention of the spatial distribution. However, most of the textbooks I've reviewed have presented the example of the number of molecules in one half of a box of gas as a teaching aid. Since this example relies on the binomial distribution, it is easy to understand and illustrates many of the mathematical features of the more complicated energy distribution. Furthermore, in section 6 of his Lectures on Gas Theory, Boltzmann uses the isomorphic lottery example as the first step in his derivation of the velocity distribution.
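To spell out the textbook half-box example: the probability of finding exactly ##n## of ##N## molecules in the left half is
$$P(n) = \binom{N}{n}\,2^{-N},$$
and with ##W = \binom{N}{n}## microstates per macrostate, Boltzmann's ##S = k_B \ln W## is largest at the equal split ##n = N/2##, which is why the 50/50 macrostate overwhelmingly dominates for large ##N##.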
 
  • #48
Correction. Boltzmann uses the isomorphic lottery example as the first step in his derivation of the entropy of the most probable macrostate. He references Maxwell on the velocity distribution.
 
  • #50
I haven't followed this thread in detail, and I hope I don't make an argument that has already been made.

The important point here is that gravitation is a long-ranged force. Arguing within Newtonian gravity, the interaction potential goes as ##1/r## only, and that makes it special for the dynamics. Dynamically, the canonical Maxwell-Boltzmann equilibrium phase-space distribution ##f \propto \exp(-\beta H)## follows from Boltzmann's equation. Arguing within classical mechanics, the Boltzmann equation is derived from the ##N##-body Liouville equation for the multi-particle phase-space distribution by reducing it to the one-particle phase-space distribution function and treating the collisions such that they are described by cross sections under the assumption of short-ranged forces; i.e., in the dilute-gas limit the particles are mostly moving freely, and only when they come close together do they scatter off each other. The mean free path (or mean free time) is then much larger than the extent (or duration) of a collision. Another important ingredient is the molecular-chaos assumption: you substitute the two-body phase-space distribution function occurring in the collision term by the product of the corresponding one-body phase-space distribution functions, which neglects correlations between the particles. At this moment you also introduce a direction of time (defined as the "causal direction of time"), and this leads to the H-theorem, i.e., that entropy never decreases and that the equilibrium distribution functions are completely determined by the values of the additive conserved quantities (energy, momentum, angular momentum, ...) together with maximum entropy under the corresponding constraints, leading to the Maxwell-Boltzmann distribution as the equilibrium distribution.
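Schematically, for the one-particle phase-space distribution ##f(t, \vec{x}, \vec{p})## the Boltzmann equation reads
$$\partial_t f + \frac{\vec{p}}{m} \cdot \nabla_{\vec{x}} f + \vec{F} \cdot \nabla_{\vec{p}} f = C[f],$$
where molecular chaos replaces the two-body distribution in the collision term ##C[f]## by the product ##f(1) f(2)##. The H-theorem then states that
$$H(t) = \int \mathrm{d}^3 x \, \mathrm{d}^3 p \, f \ln f$$
never increases, ##\mathrm{d}H/\mathrm{d}t \le 0##, i.e., the entropy ##S = -k_{\text{B}} H## never decreases, with equality exactly in Maxwell-Boltzmann equilibrium.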

Now, in the case of long-ranged interactions you have to be more careful. E.g., if you naively put a Coulomb interaction for charged particles into the collision term to describe a plasma, the collision integrals diverge. In the electromagnetic case this can be repaired by lumping the long-ranged part of the force into the left-hand side of the Boltzmann equation: one considers the particle under investigation and splits off the part of the interactions that can be described as the motion of this particle in a mean electromagnetic field due to the charge-current distribution of all the other particles. The rest of the collision term is then of short-ranged nature due to screening; i.e., in a plasma, around a positive ion there will be a cloud of electrons due to the attractive Coulomb force, effectively "screening" the positive charge as seen by the other charged particles. This leads to an effective Yukawa potential for the electrostatic interaction, which is short-ranged, and the collision term becomes well-defined.
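Explicitly, the screened potential around a charge ##q## takes the familiar Yukawa form
$$V(r) = \frac{q}{4 \pi \epsilon_0} \, \frac{e^{-r/\lambda_{\text{D}}}}{r}, \qquad \lambda_{\text{D}} = \sqrt{\frac{\epsilon_0 k_{\text{B}} T}{n e^2}},$$
with ##n## the electron density, so the interaction is effectively cut off beyond the Debye length ##\lambda_{\text{D}}## and the collision integrals converge.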

Now (Newtonian) gravity is similar, i.e., you also have a long-ranged ##1/r## potential, but here the problem is that it cannot be screened, since it is always attractive and not also repulsive as for like-sign electric charges. That means the correlations due to these long-ranged interactions cannot be neglected, and that's why density fluctuations in a gravitationally interacting gas tend not to relax but to grow: due to the long-ranged gravitational interaction, a region of higher density attracts more matter, making it even denser. That's why gravitationally bound systems like stars are formed, and that's why the distribution is not the homogeneous one expected from a naive Boltzmann picture of a (dilute) gas or (Debye-screened) plasma.

This of course also holds in the relativistic case, as needed to describe neutron stars or structure formation of the universe, where the initial fluctuations shortly after the big bang are the seeds for the formation of galaxies, galaxy clusters and all that.

It's also clear that for a star to remain stable there must be some pressure acting against gravity to get a stable compact object. That's provided by thermonuclear reactions, as in our Sun, or by the Pauli blocking of fermions, as in a neutron star or white dwarf. If this pressure can no longer counterbalance the gravitational attraction, the object collapses, as in a nova or a supernova.
 