9 Reasons Quantum Mechanics is Incomplete - Comments

In summary, Dirac believed that QM is just a natural outgrowth of classical physics where some of its assumptions are relaxed. However, because QM is still a physical theory, it is incomplete.
  • #106
DarMM said:
It's possible that the global algebra of QED doesn't have pure states due to massless particles:
Buchholz, D., and Doplicher, S., Exotic infrared representations of interacting systems, Ann. Inst. H. Poincare, 40, 175-184, (1984).

But QED is not yet known to exist?
 
  • #107
atyy said:
But QED is not yet known to exist?
Just to be clear what do you mean by "exist"? Depending on exactly what you mean no 4D QFT is known to exist.
 
  • Like
Likes dextercioby
  • #108
DarMM said:
Just to be clear what do you mean by "exist"? Depending on exactly what you mean no 4D QFT is known to exist.

Exist in the mathematical sense, ie. in the same sense in which no 4D QFT is known to exist. So maybe these rigorous considerations (about pure states existing) don't affect MWI at all.
 
  • #109
atyy said:
Exist in the mathematical sense, ie. in the same sense in which no 4D QFT is known to exist. So maybe these rigorous considerations (about pure states existing) don't affect MWI at all.
There are a few things to say here.

The result only depends on there being a massless sector present in any sense, not specifically QED. There really are photons so ultimately there will be a massless sector.

The only way out of this would be if continuum QFT were simply wrong, i.e. there is by necessity an ultraviolet cutoff.

This seems unlikely in light of Balaban, Magnen, Rivasseau and Sénéor's results that there is a nontrivial continuum limit for gauge theories. The only question concerns the infrared limit and the remaining Haag-Kastler axioms.
 
  • Like
Likes atyy and dextercioby
  • #110
DarMM said:
The only way out of this would be if continuum QFT were simply wrong, i.e. there is by necessity an ultraviolet cutoff.

Could string theory save the day?

DarMM said:
This seems unlikely in light of Balaban, Magnen, Rivasseau and Sénéor's results that there is a nontrivial continuum limit for gauge theories.

Does this mean that it has been proved that Yang Mills on a compact space exists, but has not yet been constructed (eg. Jaffe and Witten write "So even having a detailed mathematical construction of Yang–Mills theory on a compact space would represent a major breakthrough." https://www.claymath.org/sites/default/files/yangmills.pdf)?
 
  • #111
atyy said:
Could string theory save the day?
I'm not aware of work on what kind of C*-algebra string theory involves. If it turns out that it uses Type I algebras globally, and that you need it to give a well-defined limit to QFTs (something I doubt), then you could have pure states.

atyy said:
Does this mean that it has been proved that Yang Mills on a compact space exists, but has not yet been constructed (eg. Jaffe and Witten write "So even having a detailed mathematical construction of Yang–Mills theory on a compact space would represent a major breakthrough." https://www.claymath.org/sites/default/files/yangmills.pdf)?
It's known that there is a continuum limit of the action and that it is not trivial, so there is a well-defined continuum theory. Wilson loops exist, but they have yet to be proven to be unique, so there might be multiple continuum Yang-Mills theories, which would be a genuinely shocking result to me. You also need to show that these Wilson loops, when expanded perturbatively, actually match typical physicists' calculations.

There are some theories where this has been shown not to be the case, like the 3D Gross-Neveu model, which is non-renormalizable with a Landau pole in the usual Wilson RG treatment (or any typical perturbative treatment about the free theory), but which we now know has a well-defined non-trivial continuum limit.
 
  • Like
Likes atyy
  • #112
Avodyne said:
Yes. But the starting point is counting branches in simpler situations. See also Carroll & Sebens, who elaborate basically the same idea, with slightly different language:

https://arxiv.org/abs/1405.7907
Both Vaidman and Adrian Kent have critiques of Carroll & Sebens's work. Vaidman (http://philsci-archive.pitt.edu/14389/) says nothing like Carroll's approach could ever work:
Vaidman said:
Finally, in section 5 we strengthen Kent’s [2015] objection to the self-location uncertainty account of Sebens and Carroll[forthcoming]. We show that Sebens and Carroll’s proof cannot be modified to avoid Kent’s objection.

As for Vaidman's work itself, what can I say. I've read and worked my way through every proof of the Born rule in MWI, and all fall short. Once again he assumes a branching structure that you need the Born rule to derive, in a form more exact than the Born rule via decoherence is actually going to give you.
 
  • #113
I am sorry I have been away from the discussion for the week. Busy week with tests. But I did find time to sit down and work out the measurement "problem" and realized I have been in error. There is no need to invoke thermodynamics to describe the measurement process within quantum mechanics. Begin with the system to be measured, say a spin whose z component you wish to measure. In that basis the initially prepared sharp mode is, say:
[tex] \psi_i = a\mid \uparrow\rangle + b\mid\downarrow\rangle[/tex]
You wish to use a second qubit to make the measurement, so you prepare it in an initial mode ##\mid 0\rangle## and select an orthogonal mode ##\mid 1\rangle##: finding the meter in ##\mid 1\rangle## registers spin up, while finding it still in ##\mid 0\rangle## indicates spin down.

The initial composite system is then in the sharp mode:
##\Psi_i = a|0,\uparrow\rangle + b|0,\downarrow\rangle##
One then applies the unitary evolution:
[tex] U=\mid 0,\downarrow\rangle\langle 0,\downarrow \mid +\mid 1,\downarrow\rangle\langle 1,\downarrow \mid +\mid 0,\uparrow\rangle\langle 1,\uparrow \mid +\mid 1,\uparrow\rangle\langle 0,\uparrow \mid[/tex]
You may recognize this as a controlled-NOT, flipping the meter bit only if the spin of the observed system is up.

Applying this unitary evolution to our initial composite mode yields:
[tex] \Psi_f = U\Psi_i = a\mid 1,\uparrow \rangle + b\mid 0,\downarrow\rangle[/tex]
The system and meter are entangled, which is to say correlated. But of course a meter must be correlated with the system it measures or it isn't recording anything.

But being entangled, we cannot rightly speak of the system alone after measurement, except through what we can say once we trace over the meter system. This will of course result in a non-zero entropy for the system. This is not due to non-unitary evolution, but rather to the necessary degree of entanglement between system and epi-system which constitutes the measurement process. In point of fact, tracing over the meter yields a system in a "classical probability distribution" over the two eigenstates, with exactly the probabilities dictated by QM.
[tex]Tr_{M}(\mid\Psi_f\rangle\langle\Psi_f\mid) = |a|^2 \mid \uparrow\rangle \langle \uparrow\mid + |b|^2\mid\downarrow\rangle\langle\downarrow\mid[/tex]

Also, if you trace over the system, the meter will likewise be in the same classical probability distribution over its register states. So any description of either alone will be as dictated by a quantum measurement having occurred.

This may not yet be enough to satisfy some? Well, you can use 10,000,000 meter qubits; the unitary operators effecting their measurements commute. You can repeat this process as many times as you like, correlating the system with the macroscopic object one gets by aligning all of these qubits. Trace over even one, to account for it having popped out to lunch and gotten eaten by an entropy dump, and the whole system reverts to a classical binary probability distribution between the two outcomes, with exactly the two probabilities predicted by QM.

Now if that is not a quantum mechanical description of the measurement process, I don't know what could satisfy anyone. Once we're talking classical probability distributions then, as I've said before, you're drawing lottery tickets, and no one wonders why the ticket in their hand has suddenly become worthless when they see the results.
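The toy model above is easy to check numerically. Here is a minimal sketch (the amplitudes and the basis ordering ##\mid m, s\rangle## are my own illustrative choices, nothing is fixed by the argument):

```python
import numpy as np

# Basis ordering |meter, spin>: |0,dn>, |0,up>, |1,dn>, |1,up> -> indices 0..3
a, b = 0.6, 0.8  # illustrative amplitudes with |a|^2 + |b|^2 = 1

up, dn = np.array([0.0, 1.0]), np.array([1.0, 0.0])  # spin modes
m0, m1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # meter modes

# Initial composite mode: Psi_i = a|0,up> + b|0,dn>
psi_i = a * np.kron(m0, up) + b * np.kron(m0, dn)

# The controlled-NOT: flip the meter iff the spin is up
U = (np.outer(np.kron(m0, dn), np.kron(m0, dn))
     + np.outer(np.kron(m1, dn), np.kron(m1, dn))
     + np.outer(np.kron(m0, up), np.kron(m1, up))
     + np.outer(np.kron(m1, up), np.kron(m0, up)))

psi_f = U @ psi_i  # = a|1,up> + b|0,dn>

# Trace the meter out of |Psi_f><Psi_f|
rho = np.outer(psi_f, psi_f)
rho_spin = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

print(rho_spin)  # diag(|b|^2, |a|^2) in the (dn, up) ordering
```

The off-diagonal terms vanish exactly, leaving the classical mixture ##|a|^2\mid\uparrow\rangle\langle\uparrow\mid + |b|^2\mid\downarrow\rangle\langle\downarrow\mid## claimed above.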

Personally I don't agree with separately labeling some probabilities "classical" and others "quantum"; probabilities are limiting frequencies of outcomes of experiments. Fundamentally they're all quantum, since under our current models everything is quantum mechanical.

A final comment. So-called "mixed states" are specifically necessary in the descriptions of quantum systems because those systems do upon occasion interact with their environment and thereby become entangled. I'm fond of asserting (in slight hyperbole) that "the entropy of the universe is zero." There is no disputing my assertion (as there is also no proving it per se), since we can only observe parts of the universe, which are typically quite entangled with other parts of it, including the junk that fell into all those black holes out there and all those photons flying away from us in irretrievable fashion. Entropy is not additive; it is subadditive. There is no shame in describing the non-unitarity of a system's description's evolution as due to having to trace out those entangled parts of the epi-system. You are just observing the 2nd law in action. I assert all entropy is the entropy of entanglement of a system with the rest of the universe. I have yet to see any reason to stop using that as a working definition.
 
  • Like
Likes lowlize, dextercioby and stevendaryl
  • #114
jambaugh said:
I assert all entropy is the entropy of entanglement of a system with the rest of the universe. I have yet to see any reason to stop using that as a working definition.

People are talking about different notions of entropy in thermodynamics and in quantum mechanics. In thermodynamics, entropy is the log of the number of microstates that correspond to the same macrostate. In quantum information theory, the entropy (Von Neumann entropy) is given by ##S = -\mathrm{Tr}(\rho \log \rho)##, where ##\rho## is the density matrix. I can understand the claim that nonzero Von Neumann entropy is always due to tracing out environmental degrees of freedom, so if we included the environment, the entropy would be zero. But is your claim also supposed to apply to thermodynamic entropy?
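For concreteness, ##S## is computed from the eigenvalues of ##\rho##; a small sketch with two illustrative qubit states:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho) in nats, from the eigenvalues of rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]          # 0 log 0 = 0 by convention
    return float(-np.sum(p * np.log(p)))

pure = np.array([[1.0, 0.0],
                 [0.0, 0.0]])   # a pure state: S = 0
mixed = np.eye(2) / 2           # maximally mixed qubit: S = log 2 ~ 0.693

print(von_neumann_entropy(pure), von_neumann_entropy(mixed))
```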
 
  • #115
stevendaryl said:
I can understand the claim that nonzero Von Neumann entropy is always due to tracing out environmental degrees of freedom, so if we included the environment, the entropy would be zero. But is your claim also supposed to apply to thermodynamic entropy?

Yes, I was thinking of Von Neumann entropy specifically. However, that equals the thermodynamic entropy for systems *in equilibrium* with their environment, or at some level in a steady state, and is, I think, the only reasonable definition of entropy for non-equilibrium systems for which we can justify a specific density operator description. In point of fact, I am not aware of anywhere that the Von Neumann entropy of a system has deviated from its thermodynamic entropy when both could be defined.

I have to constantly remind myself that entropy is not a physical observable (and neither are entanglement and superposition). It is a quantifier of how non-specific our statements about a physical system are. But we must always also remember that, as physicists, any statement we assert about a system carries with it the presumption that a specific class of physical processes of measurement and constraint has occurred.

This is why I'm so gung-ho about operational meaning. Entropy is about what we know, wave functions and density operators are about what we know, but what we know is about what physical interactions we've had with a physical system a la measurement and constrained dynamics. This to me seems to resolve all paradoxes and is the most (epistemologically) fundamental level of physical description. Ontology is then always tentative and contingent.
 
  • Like
Likes lowlize and Mentz114
  • #116
jambaugh said:
Yes, I was thinking of Von Neumann entropy specifically. However, that equals the thermodynamic entropy for systems *in equilibrium* with their environment, or at some level in a steady state, and is, I think, the only reasonable definition of entropy for non-equilibrium systems for which we can justify a specific density operator description. In point of fact, I am not aware of anywhere that the Von Neumann entropy of a system has deviated from its thermodynamic entropy when both could be defined.

I just don't see them as connected, at all, but maybe I'm missing the connection. Thermodynamically, if we allow a hot gas to exchange energy with a cold gas so that the first gets cooler and the second gets warmer, entropy goes up. I don't see how that connects to Von Neumann entropy.
 
  • #117
stevendaryl said:
I just don't see them as connected, at all, but maybe I'm missing the connection. Thermodynamically, if we allow a hot gas to exchange energy with a cold gas so that the first gets cooler and the second gets warmer, entropy goes up. I don't see how that connects to Von Neumann entropy.

Describe the hot gas with a density operator. Describe the cold gas with a density operator. Assume the two systems are unentangled and describe the composite as a product density operator. Take the Von Neumann entropy of these and they will equal the thermodynamic entropies. Allow the composite system to evolve under some unitary isotropic dynamic.

The two systems will entangle and their expected energies per degree of freedom should equalize over time. The total entropy will not change, but the sum of the two partial entropies will go up. Then, when you decohere the entanglement* by allowing the system to interact with the outside world, the resulting quantum description will have the same entropies for the two gas volumes as when still entangled, but the joint entropy will again be their sum, as their joint density operator will again be a tensor product of a pair of density operators.
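This can be illustrated with the smallest possible "gases", one qubit each; the populations and the partial-swap coupling below are illustrative stand-ins for real gas dynamics:

```python
import numpy as np

def S(rho):
    """Von Neumann entropy in nats."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

# Uncorrelated "hot" (nearly maximally mixed) and "cold" thermal qubits
rho_hot = np.diag([0.55, 0.45])
rho_cold = np.diag([0.95, 0.05])
rho = np.kron(rho_hot, rho_cold)

# Entangling joint evolution: partial swap, U = cos(t) I - i sin(t) SWAP
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
t = 0.6
U = np.cos(t) * np.eye(4) - 1j * np.sin(t) * SWAP
rho_t = U @ rho @ U.conj().T

# Marginals via partial trace
r = rho_t.reshape(2, 2, 2, 2)
rho_A = r.trace(axis1=1, axis2=3)  # trace out the cold qubit
rho_B = r.trace(axis1=0, axis2=2)  # trace out the hot qubit

print(S(rho), S(rho_t))       # total entropy unchanged under U
print(S(rho_A) + S(rho_B))    # sum of partial entropies has gone up
```

One can also check that the hot marginal has become more polarized and the cold one less so, i.e. the expected energies move toward each other as described.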

Pick up a statistical mechanics textbook and look at the derivation of the partition function and the definitions of (reciprocal) temperature and chemical potential. They emerge as the Lagrange multipliers in the constrained optimization problem where you maximize the Von Neumann entropy subject to the constraints that various expectation values be specified. The expected-energy constraint defines temperature, and the constraint of a given expected particle number gives the chemical potential (over temperature). Unity of probability as a constraint gives you the partition function itself.
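The simplest output of that optimization is the Gibbs state, which satisfies ##S = \beta\langle E\rangle + \ln Z## identically; this is quick to verify for a single qubit (##\beta## and the level splitting below are arbitrary illustrative values):

```python
import numpy as np

beta, E1 = 2.0, 0.7              # illustrative inverse temperature and gap
levels = np.array([0.0, E1])

w = np.exp(-beta * levels)
Z = w.sum()                      # partition function
p = w / Z                        # Gibbs populations, rho = diag(p)

S = -np.sum(p * np.log(p))       # Von Neumann entropy of the Gibbs state
E_avg = np.sum(p * levels)       # expected energy

print(S, beta * E_avg + np.log(Z))  # the two agree: S = beta<E> + ln Z
```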

*(By "decohere the entanglement" I mean: recognize that the composite system, once randomly interacting with the outside world, has entangled with it in such an unpredictable way that it is not possible to assert with a straight face that you could actually make the measurements necessary to write down a specific density operator for the whole kit and caboodle. One must simply sigh and trace over the environment, describing the system with less certainty and thus with higher entropy. This is the source of the non-unitarity.)
 
  • #118
jambaugh said:
Describe the hot gas with a density operator. Describe the cold gas with a density operator. Assume the two systems are unentangled and describe the composite as a product density operator. Take the Von Neumann entropy of these and they will equal the thermodynamic entropies. Allow the composite system to evolve under some unitary isotropic dynamic.

The two systems will entangle and their expected energies per degree of freedom should equalize over time. The total entropy will not change, but the sum of the two partial entropies will go up. Then, when you decohere the entanglement* by allowing the system to interact with the outside world, the resulting quantum description will have the same entropies for the two gas volumes as when still entangled, but the joint entropy will again be their sum, as their joint density operator will again be a tensor product of a pair of density operators.

Pick up a statistical mechanics textbook and look at the derivation of the partition function and the definitions of (reciprocal) temperature and chemical potential. They emerge as the Lagrange multipliers in the constrained optimization problem where you maximize the Von Neumann entropy subject to the constraints that various expectation values be specified. The expected-energy constraint defines temperature, and the constraint of a given expected particle number gives the chemical potential (over temperature). Unity of probability as a constraint gives you the partition function itself.

*(By "decohere the entanglement" I mean: recognize that the composite system, once randomly interacting with the outside world, has entangled with it in such an unpredictable way that it is not possible to assert with a straight face that you could actually make the measurements necessary to write down a specific density operator for the whole kit and caboodle. One must simply sigh and trace over the environment, describing the system with less certainty and thus with higher entropy. This is the source of the non-unitarity.)

Hmm. I will think about it. But in the meantime, do you think that thinking of all entropy as Von Neumann entropy sheds any light on the issue of why the universe was in a low-entropy state right after the Big Bang?
 
  • #119
stevendaryl said:
Hmm. I will think about it. But in the meantime, do you think that thinking of all entropy as Von Neumann entropy sheds any light on the issue of why the universe was in a low-entropy state right after the Big Bang?
Perhaps because it had not yet had the time to decohere, hence had zero entropy?
 
