QM is a Feynman path ensemble - is QFT a Feynman field ensemble?

In summary: in QM, the Feynman path integral sums over an ensemble of trajectories, which after Wick rotation is closely related to Brownian motion; in classical mechanics, a single action-extremizing trajectory is used instead. QFT can likewise be formulated as a Feynman ensemble of field configurations, with Feynman diagrams arising as bookkeeping for the perturbative expansion of that integral. There is a close relationship between the statistical mechanics (ensembles) of classical fields and the path integral in quantum field theory. Soliton models are an example of an intermediate approach between classical and quantum mechanics.
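Schematically, the correspondence reads (a rough sketch; signs, measures, and normalizations depend on conventions):
$$Z_{\text{QM}}=\int\mathcal{D}x(t)\,e^{iS[x]/\hbar}\ \xrightarrow{\ t\to -i\tau\ }\ \int\mathcal{D}x(\tau)\,e^{-S_E[x]/\hbar}\quad\text{(Wiener measure, Brownian motion)},$$
$$Z_{\text{QFT}}=\int\mathcal{D}\varphi\,e^{iS[\varphi]/\hbar}\ \xrightarrow{\ t\to -i\tau\ }\ \int\mathcal{D}\varphi\,e^{-S_E[\varphi]/\hbar}\quad\text{(classical statistical field ensemble, with }S_E/\hbar\text{ in the role of }\beta H\text{).}$$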
  • #71
A. Neumaier said:
This is true as a mathematical fact but has no physical relevance.

Indeed, the maximum entropy principle was not even formulated before 1957 - more than half a century after Gibbs had established the grand canonical ensemble and used it with great success. Its derivation is completely independent of any entropy considerations.

If you are slightly out of equilibrium and know the temperature and chemical potential (which is the typical experimental situation), the dynamics does not bring you to the corresponding maximum entropy state but to the corresponding state of least grand potential.

Since real systems are never exactly in equilibrium, the natural extremal principle is the latter and not the former.
It depends on the situation. If you have a system coupled to a heat bath such that everything is kept at a given temperature and chemical potential, you end up at the minimum of the grand-canonical potential.

For a closed system the thermodynamical potential is ##S(U,N,V)##, and ##S## becomes maximal in equilibrium.

In the thermodynamic limit the ensembles (microcanonical, canonical, grand-canonical) are equivalent, because the fluctuations of the non-fixed quantities are very small.

This does not invalidate the derivation of the gc stat. op. from the maximum-entropy principle, which is used in any textbook on statistical physics.
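For reference, that textbook derivation in sketch form: one maximizes the von Neumann entropy ##S=-k_B\,\mathrm{Tr}(\hat\rho\ln\hat\rho)## subject to the constraints ##\mathrm{Tr}\,\hat\rho=1##, ##\mathrm{Tr}(\hat\rho\hat H)=U##, and ##\mathrm{Tr}(\hat\rho\hat N)=\bar N##. Introducing Lagrange multipliers ##\beta## and ##\beta\mu## for the last two constraints yields
$$\hat\rho=\frac{1}{Z}\,e^{-\beta(\hat H-\mu\hat N)},\qquad Z=\mathrm{Tr}\,e^{-\beta(\hat H-\mu\hat N)},$$
i.e. the grand-canonical statistical operator.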
 
  • #72
vanhees71 said:
This does not invalidate the derivation of the gc stat. op. from the maximum-entropy principle, which is used in any textbook on statistical physics.
One does not need a derivation - one can assume the form of the operator on the basis of simplicity and tractability. Gibbs didn't have the principle; it is a comparatively modern addition.
vanhees71 said:
For a closed system the thermodynamical potential is ##S(U,N,V)##, and ##S## becomes maximal in equilibrium.
Yes, but this is not the typical experimental situation.
 
  • #73
Well, after knowing the result you can always say you could have guessed it somehow, but physics is not about guesses but about derivation from general principles; and that this specific type of statistical operator turns out to be the correct equilibrium limit is not a lucky guess but follows from the dynamics of the system and the symmetries underlying its foundations. I don't think that one can simply guess this; it took centuries from Bernoulli, Boltzmann, Gibbs et al. to the modern information-theoretical understanding (Shannon, Szilard, Landauer, etc.).
 
  • #74
vanhees71 said:
that this specific type of statistical operator turns out to be the correct equilibrium limit is not a lucky guess but follows from the dynamics of the system and the symmetries underlying its foundations.
... which is independent of the maximum entropy principle.
vanhees71 said:
Well, after knowing the result you can always say you could have guessed it somehow, but physics is not about guesses but about derivation from general principles,
Gibbs found it from general principles, not from the (at his time unknown) maximum entropy principle.
 
  • #75
I don't know what you don't like about the maximum entropy principle. As a very general modern principle based on information theory, it helped at least me to understand statistical physics (not only thermal equilibrium). How would you derive the grand-canonical statistical operator?
 
  • #76
vanhees71 said:
How would you derive the grand-canonical statistical operator?
See Section 8.2 and Definition 9.1 of my book
Everything of interest, including an intuitive understanding, follows canonically, as shown in Chapter 9.
vanhees71 said:
I don't know what you don't like about the maximum entropy principle.
A lot. It may give completely nonsensical results, and is reliably applicable only when you can already anticipate the correct result. (Indeed, Jaynes found it after the result had been known for over 60 years.)

For details see Section 10.7 of the above book.
 
  • #77
In Chpt. 9 you just give the state ex cathedra. There's no derivation whatsoever. The MEM is much more convincing. In its elementary form you can use it for an undergraduate's first encounter with statistical mechanics (see, e.g., the Berkeley Physics Course volume on the subject, the so-called "little Reif").

Concerning your criticism in 10.7, it's of course clear that you have to apply the principle to the full information. E.g., if you treat ##x## as continuous but in reality it is meant to be a discrete random variable, you have to describe it as such, and not as continuous, in the maximum entropy principle. Also, if all moments of a distribution function are given, it is completely determined, and you don't need the maximum entropy principle anymore to determine it.
 
  • #78
vanhees71 said:
In Chpt. 9 you just give the state ex cathedra.
Namely by an argument of simplicity. Whereas you give the maximum entropy principle ex cathedra, although the latter is known to give spurious results in many cases. This is much worse!
vanhees71 said:
you have to apply the principle to the full information.
I specified in Section 10.7 several cases of assumed full information where the result is meaningless.

The principle is reliably applicable only when you can already anticipate the correct result. (Indeed, Jaynes found it after the result had been known for over 60 years.)
 
  • #79
I find the MEM very plausible. Of course it must fail when your "given information" is incomplete or contradictory. E.g., you can take a 1D particle in quantum mechanics and give the mean position and momentum and the standard deviations of position and momentum. If the latter contradict the Heisenberg uncertainty relation, you don't get a statistical operator from the MEM, which must be so, because there cannot be any quantum state with these properties.

It's also clear that the MEM depends on the choice of the prior. In statistical mechanics you also have to find the correct measure first, before you can apply the MEM. E.g., for a classical many-body system the a-priori measure is the phase-space volume, and to get a sensible canonical or grand-canonical ensemble you need a Hamiltonian bounded from below, because otherwise the canonical or grand-canonical distributions following from the MEM don't make sense.
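To make the recipe concrete, here is a minimal numerical sketch for a made-up discrete spectrum (the energy values and the prescribed mean are purely illustrative): with ##\langle H\rangle## as the only constraint, solving for the Lagrange multiplier reproduces the canonical form ##p_i\propto e^{-\beta E_i}##.

```python
import numpy as np
from scipy.optimize import brentq

# Made-up toy spectrum and prescribed mean energy (illustration only).
E = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
U_target = 1.2

def mean_energy(beta):
    """Mean energy of the maximum-entropy distribution p_i ~ exp(-beta*E_i)."""
    w = np.exp(-beta * (E - E.min()))  # shift the spectrum for numerical stability
    p = w / w.sum()
    return p @ E

# With <H> = U as the only constraint, the MEM reduces to a 1D root-finding
# problem for the Lagrange multiplier beta.
beta = brentq(lambda b: mean_energy(b) - U_target, -10.0, 10.0)
w = np.exp(-beta * (E - E.min()))
p = w / w.sum()

print(f"beta = {beta:.4f}")
print("canonical MEM distribution:", np.round(p, 4))
```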

I know that the physicists dealing with statistical physics are divided into two camps, one camp liking the MEM, the other not. There seems to be no way to convince either camp of the advantages or disadvantages of the method.
 
  • #80
vanhees71 said:
I find the MEM very plausible. Of course it must fail when your "given information" is incomplete or contradictory.
Suppose you have measured a sample of 1000 values of the internal energy of a system in a canonical ensemble. This is the complete information you can get from experiment. (No amount of experimentation can give you more than a sample.) You cannot apply the MEM.

But you can compute from the measurement the mean and variance of the sample and pretend that this is the complete information. Now you can apply the MEM and obtain not the canonical ensemble but one with density operator ##e^{-\lambda_1 H-\lambda_2 H^2}## (i.e. ##\exp## of a quadratic in ##H##), with a small but nonzero quadratic term, since the sample mean and variance do not agree with the mean and variance of the exact distribution. Failure of the MEM, in spite of consistent information. Of course it is incomplete information, but what would be complete information?
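(A minimal numerical sketch of this effect, with a made-up discrete spectrum: matching the sample mean and sample second moment by the MEM yields ##e^{-\lambda_1 H-\lambda_2 H^2}## with a small but nonzero ##\lambda_2##, hence not the canonical ensemble.)

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)
E = np.arange(6, dtype=float)                      # made-up toy spectrum
p_true = np.exp(-0.8 * E)
p_true /= p_true.sum()                             # exact canonical ensemble, beta = 0.8

sample = rng.choice(E, size=1000, p=p_true)        # the "1000 measured values"
m1, m2 = sample.mean(), (sample**2).mean()         # sample moments

def moment_gap(lam):
    """First two moments of p_i ~ exp(-lam[0]*E_i - lam[1]*E_i^2), minus the sample moments."""
    w = np.exp(-lam[0] * E - lam[1] * E**2)
    p = w / w.sum()
    return [p @ E - m1, p @ E**2 - m2]

lam1, lam2 = fsolve(moment_gap, x0=[0.8, 0.0])
print(f"lambda_1 = {lam1:.4f}, lambda_2 = {lam2:.5f}")  # lambda_2 is small but nonzero
```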

You need to pretend that the mean alone is the complete information (although in fact you know that you have much more detailed information). Then, and only then, do you get the correct distribution from the ex cathedra MEM. Thus you essentially need to assume the result to find the result.

Compare with my ex cathedra Definition 9.1. It always produces the correct distribution, even in complicated situations (see Section 10.1), since extensivity has a well-specified physical meaning.
 
  • #81
vanhees71 said:
for a classical many-body system the a-priori measure is the phase-space volume
If you don't know the exact number of particles, the MEM gets the right result only if you use the correct weights; otherwise you get the wrong entropy of mixing.

Again you need to know which formula works before you can trust the MEM.
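For reference, the standard ideal-gas formulas (##\lambda## is the thermal de Broglie wavelength): without the ##1/N!## weight the phase-space measure gives
$$S=Nk_B\left[\ln\frac{V}{\lambda^3}+\frac{3}{2}\right],$$
which is not extensive; joining two equal volumes of the same gas then produces a spurious mixing entropy ##\Delta S=2Nk_B\ln 2##. With the ##1/N!## weight one obtains the Sackur-Tetrode form
$$S=Nk_B\left[\ln\frac{V}{N\lambda^3}+\frac{5}{2}\right],$$
which is extensive and gives ##\Delta S=0## for identical gases.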
 
  • #82
A. Neumaier said:
Suppose you have measured a sample of 1000 values of the internal energy of a system in a canonical ensemble. This is the complete information you can get from experiment. (No amount of experimentation can give you more than a sample.) You cannot apply the MEM.

But you can compute from the measurement the mean and variance of the sample and pretend that this is the complete information. Now you can apply the MEM and obtain not the canonical ensemble but one with density operator ##e^{-\lambda_1 H-\lambda_2 H^2}## (i.e. ##\exp## of a quadratic in ##H##), with a small but nonzero quadratic term, since the sample mean and variance do not agree with the mean and variance of the exact distribution. Failure of the MEM, in spite of consistent information. Of course it is incomplete information, but what would be complete information?

You need to pretend that the mean alone is the complete information (although in fact you know that you have much more detailed information). Then, and only then, do you get the correct distribution from the ex cathedra MEM. Thus you essentially need to assume the result to find the result.

Compare with my ex cathedra Definition 9.1. It always produces the correct distribution, even in complicated situations (see Section 10.1), since extensivity has a well-specified physical meaning.
I don't know what you mean by saying that I cannot apply the MEM. Of course I can apply the MEM: I take the average energy and determine the temperature by identifying this value with the expectation value of the energy, which is the internal energy by definition.

You also answer yourself the question of how to choose the "relevant observation" in this specific standard case of equilibrium thermodynamics: you determine the MEM distribution by giving the expectation values of the (relevant) extensive variables, because what you have in mind to calculate is the thermodynamic limit. As I said, MEM doesn't tell you what the correct relevant variables are. Those you have to determine otherwise, from the specific case you want to investigate.

I don't understand why your Definition 9.1 should always be the "correct distribution". It's the correct distribution for thermodynamics, but you just give the result without any justification from the underlying physics.
 
  • #83
A. Neumaier said:
If you don't know the exact number of particles, the MEM gets the right result only if you use the correct weights; otherwise you get the wrong entropy of mixing.

Again you need to know which formula works before you can trust the MEM.
Sure, if you don't take into account the indistinguishability of particles known from quantum theory, you get the Gibbs paradox of classical thermodynamics. But this is not a problem of the MEM, which doesn't provide you with the correct counting of the "available states" but gives only the least-prejudiced statistical description given the information you feed it. As I said, if the information is the wrong one for the problem under consideration, the MEM doesn't guarantee to correct the mistake automatically.
 
  • #84
vanhees71 said:
I don't know what you mean by saying that I cannot apply the MEM. Of course I can apply the MEM: I take the average energy and determine the temperature by identifying this value with the expectation value of the energy, which is the internal energy by definition.
Then you don't use complete information. You arbitrarily select, from the complete information that you have, the mean energy, and only the mean energy, as "complete information", and discard everything else. You could just as well arbitrarily select only the variance, and get a completely different ensemble, one that yields wrong predictions.
vanhees71 said:
As I said, MEM doesn't tell you what the correct relevant variables are.
Earlier you said that one has to use the complete information.

In fact one never needs complete information, but exactly the information that produces the correct density matrix; and to know what that is, you need to know what the density operator is, up to some choice of parameters.
vanhees71 said:
As I said, MEM doesn't tell you what the correct relevant variables are.
... while my Definition 9.1 tells you that. Thus it is superior to the MEM.
vanhees71 said:
I don't understand why your Definition 9.1 should always be the "correct distribution". It's the correct distribution for thermodynamics
So you understand why it gives the right distribution: extensivity of my entropy operator requires that the contributions to the entropy be extensive, too, and this determines the relevant operators at a given description level.
 
  • #85
atyy said:
Yes, in QFT the path integral is over field configurations.
It's an "integral", but in the case of fermionic fields, can the Grassmann "integral" be viewed as a kind of sum?
 
  • #86
Demystifier said:
It's an "integral", but in the case of fermionic fields, can the Grassmann "integral" be viewed as a kind of sum?
In that case, it seems not; at least not in the usual way. I'm not sure whether the mathematicians have some way of making it into some sort of sum.
 
  • #87
Demystifier said:
It's an "integral", but in the case of fermionic fields, can the Grassmann "integral" be viewed as a kind of sum?
Yes, it's a sum, but with many minus signs. More like determinants.
 
  • #88
Demystifier said:
It's an "integral", but in the case of fermionic fields, can the Grassmann "integral" be viewed as a kind of sum?
In general it is an infinite sum. Only for fermionic fields on a lattice is it a finite sum.
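To illustrate what that finite sum looks like, here is a minimal sketch (the matrix is made up; it uses the standard identity ##\int\prod_i d\bar\theta_i\,d\theta_i\,e^{-\bar\theta A\theta}=\det A##): on an ##n##-site lattice the fermionic Gaussian integral reduces to exactly the finite signed sum that defines the determinant.

```python
import numpy as np
from itertools import permutations

def berezin_gaussian(A):
    """det A written out as the finite signed sum over permutations,
    i.e. the sum that the Berezin integral of exp(-theta_bar A theta)
    reduces to for fermions on an n-site lattice."""
    n = len(A)
    total = 0.0
    for sigma in permutations(range(n)):
        # sign of the permutation via its inversion count
        inversions = sum(si > sj for i, si in enumerate(sigma) for sj in sigma[i + 1:])
        term = (-1.0) ** inversions
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

A = np.array([[2.0, 1.0, 0.5],
              [0.3, 1.5, 0.2],
              [0.1, 0.4, 1.1]])
print(berezin_gaussian(A))  # finite signed sum: 2.815
print(np.linalg.det(A))     # agrees with det A
```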
 
  • #89
WernerQH said:
Yes, it's a sum, but with many minus signs. More like determinants.
A. Neumaier said:
In general it is an infinite sum. Only for fermionic fields on a lattice is it a finite sum.
I meant that it's not a sum as in a Riemann or Lebesgue integral - or can the sums appearing in Grassmann integration be considered generalizations of those more common integrals?

Also, the bosonic Feynman path integral is analogous to those for classical random processes. As far as I know, there are no classical random processes that are like Grassmann integrals. So that's another way in which, I think, Grassmann integrals don't allow the same "picture" (fluctuations of classical configurations) as bosonic fields.
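For reference, the Berezin rules for a single Grassmann variable make this explicit: integration is defined purely algebraically, not as a limit of weighted sums over values of ##\theta##,
$$\theta^2=0,\qquad\int d\theta\,1=0,\qquad\int d\theta\,\theta=1,$$
so the "integral" acts like a derivative, and there is no underlying measure space as in the Riemann or Lebesgue case.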
 
  • #90
A. Neumaier said:
In general it is an infinite sum. Only for fermionic fields on a lattice is it a finite sum.
The point is that, for a fixed spacetime point, a Grassmann integral is not a sum. And if you meant that taking contributions from all spacetime points gives a sum over all spacetime points: that's actually not a sum but a product.
 
