Statistical ensemble in phase space

  • #1
cianfa72
TL;DR Summary
On the meaning of statistical ensemble in phase space and its uses
Hi,
I have a question about the concept of an ensemble in statistical physics.

Take a conservative system in a given macrostate (e.g. with a given energy): there will be a number of microstates in phase space compatible with the given macrostate.

If I understand it correctly, the microstates compatible with a given macrostate are all equiprobable. Therefore the probability of finding the system in an infinitesimal volume of phase space is just the density of compatible microstates inside that chosen infinitesimal volume (i.e. the number of compatible microstates there divided by the total number of compatible microstates).

What is an ensemble? It should be a large set of such compatible microstates inside a given volume of phase space, thought of as a collection of "virtual systems". We can then imagine letting these virtual systems evolve according to the laws of physics and following how the ensemble moves through phase space.

Does this make sense? Thanks.
 
  • #2
Any comment? Thanks.
 
  • #3
cianfa72 said:
Any comment?
What sources have you looked at to try to find the answer to your question?
 
  • #4
I've looked at Landau's classic book on statistical physics.
 
  • #5
So my understanding of an ensemble is correct? Thanks.
 
  • #6
cianfa72 said:
I've seen the classic Landau's book of statistical physics.
Can you give a specific reference or quote from that book that you think describes what a statistical ensemble is?
 
  • #7
cianfa72 said:
So my understanding of an ensemble is correct?
Is it the same as what Landau's book says?
 
  • #8
I think so; however, I'd like to get an answer from experts.
 
  • #9
cianfa72 said:
I think so; however, I'd like to get an answer from experts.
Then you first need to give a specific quote, as I have asked, for us to compare with what you said in your OP. It should be easy for you to find one if you are familiar with the book. If you're not familiar enough with the book to find a quote, then you need to fix that first.

cianfa72 said:
I'd like to get an answer from experts.
The fact that you would like an answer does not mean you can expect me to spoon feed it to you. IMO you will gain a better understanding if you do the work yourself to find a good reference and compare it to what you said. Once you have done that and can post the comparison here, then we can give some feedback.
 
  • #10
PeterDonis said:
Can you give a specific reference or quote from that book that you think describes what a statistical ensemble is?
Paragraph 3 in

L. D. Landau and E. M. Lifshitz, Course of Theoretical Physics, Volume 5, Pergamon Press (1980).

[Attached image: ensemble-ll-vol5.png, a scan of the quoted passage]
 
  • #11
vanhees71 said:
Paragraph 3
Paragraph 3 of what? Chapter? Section? Page?

A quote would be helpful.
 
  • #12
Look again at my post #10. I inserted a quote.
 
  • #13
vanhees71 said:
Look again at my posting #10. I inserted a quote.
Where does the word "ensemble" appear in this quote? I don't see it.
 
  • #16
cianfa72 said:
I think so
@vanhees71 has now given a quote (post #14) from the book you referenced, which includes the key footnote defining the term "statistical ensemble". Please compare with what you wrote in the OP of this thread.
 
  • #17
cianfa72 said:
Take a conservative system in a given macrostate (e.g. with a given energy)
Note that this constraint (plus a constraint that the total number of particles is fixed) means that you are specifically dealing with the microcanonical ensemble. There are other types of ensembles corresponding to different sets of constraints. Gibbs's classic book Elementary Principles in Statistical Mechanics was, AFAIK, the first to explicitly define the different types of ensembles.
 
  • #18
PeterDonis said:
Note that this constraint (plus a constraint that the total number of particles is fixed) means that you are specifically dealing with the microcanonical ensemble
Ok, now we can assume that all the microstates compatible with the given macrostate are equiprobable. Therefore the number/density of compatible microstates inside an infinitesimal volume of phase space is proportional to the distribution function (i.e. the probability of finding the system inside that infinitesimal volume).

Now my doubt concerns the distribution chosen for the "virtual systems" in the ensemble. As far as I understand, they are chosen in such a way that their density at each point they occupy in phase space is proportional to the distribution function.
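To make concrete what I mean by that density, here is a purely illustrative toy sketch in Python: a 1D "phase space" with a made-up Gaussian standing in for the distribution function ##\rho##, sampled by rejection, just to show what "density of virtual systems proportional to ##\rho##" means operationally.

```python
# Purely illustrative sketch of "virtual systems distributed with density
# proportional to the distribution function": a 1D toy "phase space" with a
# made-up Gaussian standing in for rho, sampled by rejection.
import numpy as np

rng = np.random.default_rng(0)

def rho(x):
    """Made-up, unnormalized distribution function on [-5, 5]."""
    return np.exp(-0.5 * x**2)

def sample_virtual_systems(n):
    samples = []
    while len(samples) < n:
        x = rng.uniform(-5.0, 5.0)
        if rng.uniform() < rho(x):   # accept with probability proportional to rho
            samples.append(x)
    return np.array(samples)

members = sample_virtual_systems(50_000)
# A histogram of the accepted points reproduces the shape of rho, i.e. the
# local density of ensemble members is proportional to the distribution function.
counts, edges = np.histogram(members, bins=50, range=(-5.0, 5.0))
```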
 
  • #19
cianfa72 said:
now we can assume that all the microstates compatible with the given macrostate are equiprobable
cianfa72 said:
my doubt concerns the distribution chosen for the "virtual systems" in the ensemble. As far as I understand, they are chosen in such a way that their density at each point they occupy in phase space is proportional to the distribution function
The statement in the second quote above is correct, per the definition of the distribution function ##\rho## given in the text just prior to equation 1.4.

I'm not sure that is consistent with the first quote above.
 
  • #20
PeterDonis said:
I'm not sure that is consistent with the first quote above.
Why not? For what reason might some compatible microstates have a different probability of occurring?

Note that I'm not saying that the distribution ##\rho## is constant over all points of the phase space volume occupied by the system.
 
  • #21
cianfa72 said:
Why not? For what reason might some compatible microstates have a different probability of occurring?
Because the distribution ##\rho## might not be uniform over all of the microstates (or more precisely over all of the points in phase space, which is what "microstates" actually means in this context). And if it's not uniform, then the subsystem (not system--see below) will have different probabilities of being at different points of phase space.

cianfa72 said:
Note that I'm not saying that the distribution ##\rho## is constant over all points of the phase space volume occupied by the system.
The distribution ##\rho## is for a subsystem, not for the whole system.
 
  • #22
cianfa72 said:
Ok, now we can assume that all the microstates compatible with the given macrostate are equiprobable. Therefore the number/density of compatible microstates inside an infinitesimal volume of phase space is proportional to the distribution function (i.e. the probability of finding the system inside that infinitesimal volume).
Yes, and this assumption can be justified from an information-theoretical point of view. This is the one important thing missing in Landau+Lifshitz vol. 5, which otherwise is one of the best books on the subject.

The idea is that entropy is in fact a measure of the missing information, given a probability distribution. To "guess" a probability distribution for a given situation, you look for the distribution that encodes precisely what is known about the system you want to describe statistically, i.e., you must find the distribution which makes the entropy maximal, given the known facts about the system.

The simplest distributions are the stationary ones, which don't change in time. Using the maximum-entropy principle together with the dynamics of classical or quantum theory, it turns out that such distributions can depend only on the conserved quantities of the system, with the energy the most important among them.

If you now consider a closed system, i.e., a system which cannot exchange anything with "the environment", then you know from Noether's theorem that total energy, momentum, and angular momentum are conserved, and all you can assume as given are the corresponding values for your system. The maximum-entropy principle then indeed leads to the microcanonical ensemble: each microstate compatible with the values of these additive conserved quantities is equally probable, and the entropy is given by the logarithm of the number of such microstates (up to a factor defining the unit of temperature, i.e., ##k_{\text{B}}## in the SI).
cianfa72 said:
Now my doubt concerns the distribution chosen for the "virtual systems" in the ensemble. As far as I understand, they are chosen in such a way that their density at each point they occupy in phase space is proportional to the distribution function.
The microcanonical ensemble consists of systems, each prepared in such a way that the additive conserved quantities take the given values; each compatible microstate is then equally probable to be found when choosing a system randomly out of this ensemble.
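As a concrete toy illustration of "entropy is the logarithm of the number of microstates": for ##N## two-state spins with ##n## of them excited (the numbers below are arbitrary, purely for illustration), the compatible microstates are counted by a binomial coefficient, and each of them is equally probable in the microcanonical ensemble.

```python
# Toy counting sketch: N two-state spins with n excited, so the number of
# microstates compatible with the fixed total energy is C(N, n).
from math import comb, log

k_B = 1.380649e-23        # Boltzmann constant in J/K (SI)

N, n = 100, 30            # arbitrary illustrative numbers
Omega = comb(N, n)        # number of equally probable microstates
S = k_B * log(Omega)      # S = k_B * ln(Omega)

print(f"Omega = {Omega}, S = {S:.3e} J/K")
```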
 
  • #23
PeterDonis said:
Because the distribution ##\rho## might not be uniform over all of the microstates (or more precisely over all of the points in phase space, which is what "microstates" actually means in this context). And if it's not uniform, then the subsystem (not system--see below) will have different probabilities of being at different points of phase space. The distribution ##\rho## is for a subsystem, not for the whole system.
Not necessarily. The microcanonical ensemble describes a closed system.

The argument that in classical mechanics the occupation density of phase-space volume elements is the right measure for the probabilities of a closed system is that the volume of a phase-space element is unchanged under time evolution (Liouville's theorem). Of course, knowing the exact ##N##-body phase-space distribution function at the initial time implies knowing the complete state of the system, which in practice is impossible. That's why you have to "coarse grain" the description, keeping only the "relevant macroscopic degrees of freedom" for the situation at hand.
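Here is a minimal numerical sketch of that volume statement, using a toy model made up purely for illustration (a single 1D harmonic oscillator with ##H=p^2/2+x^2/2##, integrated with a symplectic leapfrog step); it is not anything taken from Landau & Lifshitz.

```python
# Toy demonstration of Liouville's theorem: the area of a small phase-space
# parallelogram is preserved under the Hamiltonian flow of H = p^2/2 + x^2/2.
import numpy as np

def leapfrog(x, p, dt, n_steps):
    """Leapfrog (velocity Verlet) update for H = p^2/2 + x^2/2."""
    for _ in range(n_steps):
        p -= 0.5 * dt * x      # half kick: dp/dt = -dH/dx = -x
        x += dt * p            # drift:     dx/dt =  dH/dp =  p
        p -= 0.5 * dt * x      # half kick
    return x, p

def area(pts):
    """Area of the parallelogram spanned by pts[1]-pts[0] and pts[2]-pts[0]."""
    a, b = pts[1] - pts[0], pts[2] - pts[0]
    return abs(a[0] * b[1] - a[1] * b[0])

# Three nearby phase-space points (x, p) spanning a tiny parallelogram.
pts = np.array([[1.0, 0.0], [1.0 + 1e-3, 0.0], [1.0, 1e-3]])
print("initial area:", area(pts))
for _ in range(5):
    pts = np.array([leapfrog(x, p, dt=0.01, n_steps=200) for x, p in pts])
    print("area after further evolution:", area(pts))
# The printed area stays constant (up to rounding): phase-space volume is conserved.
```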

The simplest case is equilibrium, which I've described for the microcanonical case in #22. That is in fact the description with minimal knowledge, i.e., the system has "forgotten" everything about its initial state.

The next simplest case used in practice is to truncate the practically infinitely many equations of motion for the ##n##-particle phase-space distribution functions (##n \leq N##, with ##N## the total number of particles in your closed system) at the one-body level. This is Boltzmann's idea, known as the "molecular-chaos assumption" (or in German the "Stoßzahlansatz"). The equation of motion for the one-particle distribution depends on the two-particle distribution, and the molecular-chaos assumption simply assumes that two particles are uncorrelated, i.e., that the two-particle distribution function can be approximated by the product of two one-particle distribution functions. At this point you in fact "coarse grain", in the sense that you neglect some information about the system, namely two-particle correlations. This then leads to the Boltzmann equation and to the H-theorem, i.e., that the macroscopic entropy increases with time, defining the "thermodynamic arrow of time", which is, however, by construction identical with the fundamental "causal arrow of time".
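Schematically, in a common textbook notation (not a quote from any particular source), the molecular-chaos truncation replaces the two-particle distribution by a product of one-particle distributions,
$$ f_2(t,\vec{x}_1,\vec{p}_1;\vec{x}_2,\vec{p}_2) \approx f_1(t,\vec{x}_1,\vec{p}_1)\, f_1(t,\vec{x}_2,\vec{p}_2), $$
so that the equation of motion for ##f_1## closes into a Boltzmann-type transport equation,
$$ \left(\partial_t + \frac{\vec{p}}{m}\cdot\nabla_{\vec{x}} + \vec{F}\cdot\nabla_{\vec{p}}\right) f_1(t,\vec{x},\vec{p}) = C[f_1,f_1], $$
with a collision term ##C## that is bilinear in ##f_1##.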

The quantum description is similar and in some sense less complicated, because there the single-particle phase-space volume has a natural measure, i.e., ##h^3=(2 \pi \hbar)^3##. You can then describe the system in the second-quantization (QFT) formalism in terms of Green's functions. For the general off-equilibrium case this leads to the so-called Schwinger-Keldysh (SK) real-time contour formalism. Again, the exact equations of motion for the many-body SK Green's functions form a coupled set of (practically) infinitely many equations, which you can never solve. Here too the idea is to truncate this "BBGKY hierarchy" (named after Born, Bogoliubov, Green, Kirkwood, and Yvon). At the one-particle level you only keep the one-body Green's function, i.e., the two-point function (and maybe also some one-point function, i.e., some mean field, as, e.g., for a Bose-Einstein condensate or the magnetization of a ferromagnet, ...).

You can derive such approximations from a variational principle, known as the ##\Phi##-functional formalism (going back to Luttinger, Ward, Baym, and Kadanoff in condensed-matter physics and to Cornwall, Jackiw, and Tomboulis). In diagrammatic language the ##\Phi## functional is the sum over all closed two-particle-irreducible diagrams, i.e., diagrams with lines symbolizing exact, interacting two-point propagators, which don't get disconnected when cutting any pair of lines. The exact self-energy is then given by opening each diagram, i.e., omitting one line, and the exact Green's function is given by the Dyson equation with these self-energies.

Of course this too cannot be solved for non-trivial interacting theories, and you have to truncate the ##\Phi## functional in some way to a finite number of diagrams or to an infinite subset of diagrams which can be "resummed" somehow. Usually this is achieved by expanding formally in powers of some parameter (coupling constants or ##\hbar##, the latter based on the assumption that a macroscopic system is well described by classical physics with "small" quantum corrections).

This then leads to the so-called Kadanoff-Baym equations for the two-point (one-particle) Green's functions. The next step towards a classical transport equation is a so-called "Wigner transformation", which is a Fourier transformation w.r.t. ##t_1-t_2## and ##\vec{x}_1-\vec{x}_2##, where ##t_1,t_2## and ##\vec{x}_1,\vec{x}_2## are the arguments of the two-point Green's function. You then get the Green's function in terms of the corresponding Fourier-transformed function, which is now considered as a function of the "momentum" together with ##\tau=(t_1+t_2)/2## and ##\vec{X}=(\vec{x}_1+\vec{x}_2)/2##. One of these Wigner-transformed functions, ##G^{<}##, is then something very similar to a phase-space distribution function, but it is not yet really one, because it is real but not positive definite; still, integrating it over ##\vec{p}## gives the one-particle density and integrating it over ##\vec{X}## gives the momentum distribution.
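In one common convention (signs and factor placements differ between texts), the Wigner transform mentioned above reads, with ##t_{\text{rel}}=t_1-t_2##, ##\vec{x}_{\text{rel}}=\vec{x}_1-\vec{x}_2##, ##\tau=(t_1+t_2)/2##, and ##\vec{X}=(\vec{x}_1+\vec{x}_2)/2##,
$$ \tilde{G}^{<}(\tau,\vec{X};\omega,\vec{p}) = \int \mathrm{d} t_{\text{rel}}\, \mathrm{d}^3 x_{\text{rel}}\, e^{\mathrm{i}(\omega t_{\text{rel}} - \vec{p}\cdot\vec{x}_{\text{rel}})}\, G^{<}\!\left(\tau+\tfrac{t_{\text{rel}}}{2},\vec{X}+\tfrac{\vec{x}_{\text{rel}}}{2};\ \tau-\tfrac{t_{\text{rel}}}{2},\vec{X}-\tfrac{\vec{x}_{\text{rel}}}{2}\right). $$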

To get to a phase-space distribution function and a semiclassical quantum-transport equation you do a gradient expansion, which formally is also an ##\hbar##-expansion. This indeed leads to a Boltzmann-like quantum transport equation.

You find all this in Landau-Lifshitz vol. X, which is a very good book on kinetic theory (both classical and quantum).
 
  • #24
vanhees71 said:
The microcanonical ensemble consists of systems, each prepared in such a way that the additive conserved quantities take the given values; each compatible microstate is then equally probable to be found when choosing a system randomly out of this ensemble
Yes, however, given a finite assigned volume of phase space for the ensemble, the ensemble's virtual systems are actually distributed inside that volume with a density proportional to the phase-space distribution function ##\rho## at time 0. Right?
 
  • #25
That's the general off-equilibrium case, right. In the long-time limit this is expected to approach a microcanonical ensemble due to the ergodic hypothesis.
 
  • #26
vanhees71 said:
That's the general off-equilibrium case, right.
So the virtual systems in the ensemble always start with a phase-space density proportional to the distribution function ##\rho##. I believe this distribution function is given, since it is basically the ratio of the number of compatible microstates in an infinitesimal volume of phase space to the total number of compatible microstates (w.r.t. the given macrostate).

vanhees71 said:
In the long-time limit this is expected to approach a microcanonical ensemble due to the ergodic hypothesis.
Why? Is the ensemble we started with at time 0 not a microcanonical ensemble?
 
  • #27
With a general, non-uniform distribution function you start with an arbitrary non-equilibrium state.
 
  • #28
vanhees71 said:
The microcanonical ensemble describes a closed system.
Yes, and as Landau & Lifshitz explain, in this particular case they are considering one subsystem of the whole system as a closed system, because they are looking at a period of time that is short compared to the characteristic interaction time between subsystems.
 
  • #29
vanhees71 said:
The microcanonical ensemble consists of systems, each prepared in such a way that the additive conserved quantities take the given values; each compatible microstate is then equally probable to be found when choosing a system randomly out of this ensemble.
Reading this again, I think it means the virtual systems in the microcanonical ensemble are uniformly distributed only on the constant-energy hypersurface in phase space. That is, if the system has N degrees of freedom, so that its phase space is 2N-dimensional, then the virtual systems in the microcanonical ensemble are uniformly distributed over the (2N-1)-dimensional hypersurface of constant energy.
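Here is a minimal sketch of what that looks like in practice, using a toy model I'm assuming purely for illustration: N free particles in a box, so the constant-energy hypersurface is a hypersphere in momentum space while the positions are unconstrained.

```python
# Toy sketch of sampling microstates uniformly on the constant-energy
# hypersurface: N free particles of mass m in a box of side L, so the
# constraint sum_i p_i^2 / (2m) = E puts the momenta on a
# (3N-1)-dimensional sphere; positions are unconstrained.
import numpy as np

rng = np.random.default_rng(0)
N, m, E, L = 100, 1.0, 50.0, 1.0   # arbitrary illustrative values

def sample_microstate():
    # A Gaussian vector rescaled to fixed length is uniform on the sphere.
    p = rng.normal(size=3 * N)
    p *= np.sqrt(2.0 * m * E) / np.linalg.norm(p)
    x = rng.uniform(0.0, L, size=3 * N)      # positions uniform in the box
    return x, p

x, p = sample_microstate()
print("kinetic energy:", np.sum(p**2) / (2.0 * m))   # equals E by construction
```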
 
  • #30
Exactly! The ensemble is defined by systems that fulfill the given constraints. For an equilibrium ensemble of a closed system these constraints are the values of the conserved additive quantities, particularly the energy.
 

FAQ: Statistical ensemble in phase space

What is a statistical ensemble in phase space?

A statistical ensemble in phase space is a large collection of virtual copies of a system, each representing a possible state that the system could be in, given its constraints. These virtual copies are used to model the statistical properties of the system, such as average values and fluctuations of observables, by sampling over the ensemble.

What are the different types of statistical ensembles?

The three main types of statistical ensembles are the microcanonical, canonical, and grand canonical ensembles. The microcanonical ensemble represents an isolated system with fixed energy, volume, and number of particles. The canonical ensemble represents a system in thermal equilibrium with a heat reservoir, allowing energy exchange but with fixed volume and number of particles. The grand canonical ensemble allows both energy and particle exchange with a reservoir, thus having fixed temperature, volume, and chemical potential.

How is the phase space defined in the context of statistical ensembles?

In the context of statistical ensembles, phase space is a multidimensional space in which each point represents a possible state of the system. For a system with N particles, the phase space has 6N dimensions, with each particle contributing three position coordinates and three momentum coordinates. The state of the entire system at any given time is represented by a single point in this phase space.

Why are statistical ensembles important in statistical mechanics?

Statistical ensembles are crucial in statistical mechanics because they provide a framework to connect microscopic properties of individual particles with macroscopic observables of the system. By averaging over a large number of possible states, ensembles allow the calculation of thermodynamic quantities like temperature, pressure, and entropy, which are otherwise difficult to obtain from first principles.

How do you compute the properties of a system using a statistical ensemble?

To compute the properties of a system using a statistical ensemble, one typically calculates the ensemble average of a physical quantity. This involves integrating the quantity over all possible states in the phase space, weighted by the probability distribution of the ensemble. For example, in the canonical ensemble, the probability of a state is given by the Boltzmann factor exp(-E/kT), where E is the energy of the state, k is the Boltzmann constant, and T is the temperature.
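A minimal numerical sketch of such an ensemble average (a toy model chosen only for illustration: a single classical harmonic oscillator, with the phase-space integral approximated by a sum over a finite grid; by equipartition the result should come out close to kT):

```python
# Toy sketch of a canonical ensemble average for H(x, p) = p^2/2 + x^2/2
# (unit mass and stiffness), with the phase-space integral replaced by a
# sum over a finite grid of phase-space points.
import numpy as np

kT = 1.0
x = np.linspace(-8.0, 8.0, 801)
p = np.linspace(-8.0, 8.0, 801)
X, P = np.meshgrid(x, p)

E = 0.5 * P**2 + 0.5 * X**2        # energy of each grid point (microstate)
w = np.exp(-E / kT)                # Boltzmann factor exp(-E/kT)
Z = w.sum()                        # discretized, unnormalized partition function

E_avg = (E * w).sum() / Z
print(E_avg)                       # close to kT = 1.0, as equipartition predicts
```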
