When Quantum Mechanics is thrashed by non-physicists #1

In summary: The same state can be described using different finite-dimensional vector spaces, each corresponding to a different frame of reference. So the same state can be said to exist in different ways, and the different interpretations of the state might be considered "correct" or "incorrect" depending on your perspective. I haven't read the paper, so I can't say much more about it.
  • #141
vanhees71 said:
I don't see a problem in this since I don't think that our contemporary physical models are complete in any sense. :)
That's a healthy attitude. But isn't it essentially the same as saying:
- OK, some kind of hidden "variables" (not necessarily deterministic and perhaps not even expressible by mathematics) are likely to exist, even if at the moment I cannot say anything specific about them.
 
  • #142
Demystifier said:
But if there are two possible choices, and Bohmian theory requires picking only one, don't you see that as a problem? The whole idea of Bohmian mechanics is not only to agree with observations (for that purpose standard QM is also fine), but also to offer a reasonable ontological picture of the world. The two different choices, however, propose two different ontological pictures of the world, which is a problem: the existence of competing pictures implies that neither picture is sufficiently reasonable by itself.
Different ontological pictures are, IMHO, not a problem at all. First, it is also important that we know what we don't know. So if there are two different ontological pictures, it means we don't know which is the true one. Such is life. We cannot know everything.
On the other hand, if all the ontological pictures share some interesting property, this at least increases the plausibility that the shared property is correct. In other words, there may be many different ontological possibilities, but they all present a similar nice picture.

In particular, why should I care which of the frames is the preferred one? Whatever it is, the resulting picture looks similar. The more interesting question is whether this picture looks better than that of a four-dimensional spacetime or not.
 
  • #143
atyy said:
Hmm, why not say that the purpose of BM is to disagree with QM at some level? Then it doesn't matter how ugly it is, since experiment will pick.
The point of considering interpretations is, essentially, that they ultimately tend to disagree. Interpretations usually have problems, and how are those problems solved? By a modification of the theory, of course. After that, the "interpretation" is no longer an interpretation but a different theory.
There are, for example, interpretations using the "hydrodynamic variables" (which would better be named probability flow variables). These interpretations have a problem - the Wallstrom objection. A variant of this problem is that they have infinities: near the zeros of the wave function, the velocity of the flow becomes infinite.
These two problems can be solved by regularization: http://arxiv.org/abs/1101.5774 - but the regularized theory is already a different one.

Another example: the theory proposed in http://arxiv.org/abs/gr-qc/0205035 started as an interpretation of GR, with harmonic coordinates as preferred coordinates. This interpretation had a problem - an additional equation, so the whole set of equations was no longer derived from a Lagrange formalism. The problem was solved by adding a term which yields the harmonic condition as an equation of motion. Problem solved - but, action equals reaction, the original Einstein equations now obtained a modification too. Thus, the theory was no longer GR in harmonic coordinates, but something different.

A third one. So we have the equations, and now we add an ether interpretation. The key is the ether density defined by [itex]\rho=g^{00}\sqrt{-g}[/itex]. Nicely, if the density is positive, the time coordinate is time-like. Unfortunately, the equations themselves do not care: solutions are imaginable where the density is initially positive everywhere but somewhere becomes negative.

What to do? The ether interpretation can be preserved by modifying the theory. Regions where the ether density becomes negative are considered physically invalid; the places where this happens are regarded as places where the ether tears into parts, and the continuous theory is no longer applicable. The theory modified in this way is already different from the theory without the modification: some seemingly completely innocent solutions of the theory without the ether interpretation are rejected as invalid. In particular, this excludes all solutions with closed causal loops.

So, yes, the ultimate purpose of considering interpretations is the search for different theories. They are starting points, important for finding directions for modifications - the directions which solve the problems of the particular interpretation.
 
  • Like
Likes atyy and vanhees71
  • #144
Demystifier said:
It's important to clarify the meaning of certain terms we use, and I don't think that this kind of existence would be called "ontological" by philosophers of physics. In this context, by ontological existence one would mean individual preparations, not the equivalence class of similar preparations.

On the other hand, QM in its statistical ensemble form says nothing about individual preparations. In this sense QM in the statistical ensemble form is not complete, because individual preparations obviously exist (ontologically), and yet the theory says (syntactically) nothing about them.

This proves that QM in the statistical ensemble form is not ontologically complete. A possible way to stop worrying about that is to assume that QM in the statistical ensemble form is at least syntactically complete, i.e. that a syntax (a formal mathematical theory) correctly describing individual preparations does not exist. But such an assumption is not very well grounded, especially given known counter-example candidates such as Bohmian mechanics, many-worlds, and objective-collapse theories.
If quantum theory does not, in fact, predict the result of individual measurements, but only their statistical mean, then why should one expect a syntax describing individual preparations?
 
  • Like
Likes vanhees71
  • #145
TrickyDicky said:
If quantum theory does not, in fact, predict the result of individual measurements, but only their statistical mean, then why should one expect a syntax describing individual preparations?

In the ensemble interpretation, there is a classical/quantum cut. One may like to call it something else, but as most proponents of the ensemble interpretation agree, there is no meaning to the "wave function of the universe" in that interpretation. If there is no "wave function of the universe", one has to say which part of the universe the wave function applies to, which is the classical/quantum cut in all but name.

So the measurement problem can be phrased in different forms, such as the question about individual systems, or removing the classical quantum cut.
 
  • #146
atyy said:
In the ensemble interpretation, there is a classical/quantum cut. One may like to call it something else, but as most proponents of the ensemble interpretation agree, there is no meaning to the "wave function of the universe" in that interpretation. If there is no "wave function of the universe", one has to say which part of the universe the wave function applies to, which is the classical/quantum cut in all but name.

So the measurement problem can be phrased in different forms, such as the question about individual systems, or removing the classical quantum cut.
My question was in reference to Demystifier's comment about syntactical completeness; I don't immediately see how your post answers it, but I will address what you write anyway.
The reasoning about the wave function of the universe you use would be valid if, like MWI, the ensemble interpretation considered the wave function ontologically real, but it doesn't.
 
Last edited:
  • #147
TrickyDicky said:
If quantum theory does not, in fact, predict the result of individual measurements, but only their statistical mean, then why should one expect a syntax describing individual preparations?
Would that, then, not make the ensemble interpretation just a "shut up and calculate" interpretation in disguise?
 
Last edited:
  • #148
TrickyDicky said:
My question was in reference to Demystifier's comment about syntactical completeness; I don't immediately see how your post answers it, but I will address what you write anyway.
The reasoning about the wave function of the universe you use would be valid if, like MWI, the ensemble interpretation considered the wave function ontologically real, but it doesn't.

Copenhagen does not consider the wave function real on the same level as measurement outcomes. That is the point of the classical/quantum cut. Any minimal interpretation which does not consider the wave function of the universe to be meaningful has a classical/quantum cut.
 
Last edited:
  • #149
bohm2 said:
Would that, then, not make the ensemble interpretation just a "shut up and calculate" interpretation in disguise?
I think the ensemble interpretation is the genuine "shut up and calculate" interpretation, but done in an honest and elegant way ;-)
 
  • #150
atyy said:
Copenhagen does not consider the wave function real on the same level as measurement outcomes. That is the point of the classical/quantum cut.
Yes, that's right.

Any minimal interpretation which does not consider the wave function of the universe to
be meaningful has a classical/quantum cut.
This doesn't follow. First, the ensemble interpretation not only lacks an ontology for the wave function, it also lacks the ontology for classical reality that the Copenhagen interpretation has.
There is no objective classical world in the ensemble interpretation, so no classical/quantum cut. Remember that classical physics is an approximation; if it works so well in the macro world, it is because it is a good approximation at that scale, so reality is not classical.
 
Last edited:
  • #151
TrickyDicky said:
This doesn't follow. First, the ensemble interpretation not only lacks an ontology for the wave function, it also lacks the ontology for classical reality that the Copenhagen interpretation has.
There is no objective classical world in the ensemble interpretation, so no classical/quantum cut. Remember that classical physics is an approximation; if it works so well in the macro world, it is because it is a good approximation at that scale, so reality is not classical.

But does common sense reality exist in the ensemble interpretation? In the ensemble interpretation, does nature exist after all physicists have died? Does nature have a law-like description, at least approximately?
 
Last edited:
  • #152
atyy said:
But does common sense reality exist in the ensemble interpretation? In the ensemble interpretation, does nature exist after all physicists have died? Does nature have a law-like description, at least approximately?
I'd say yes, it exists; since the interpretation is observer-independent, nature doesn't care about physicists. But it is agnostic about the specific ontology beyond quantum statistical mechanics.
 
  • #153
TrickyDicky said:
I'd say yes, it exists; since the interpretation is observer-independent, nature doesn't care about physicists. But it is agnostic about the specific ontology beyond quantum statistical mechanics.

Then there is still a classical/quantum cut. One shouldn't take the "classical" too seriously in that term, it can be substituted by "common sense reality". So the wave function still does not cover the whole universe, and one has to choose which part of common sense reality is assigned a wave function.
 
  • #154
atyy said:
Then there is still a classical/quantum cut. One shouldn't take the "classical" too seriously in that term, it can be substituted by "common sense reality". So the wave function still does not cover the whole universe, and one has to choose which part of common sense reality is assigned a wave function.
I don't think the wave function is assigned to any part, as it is purely epistemic: just an instrument to obtain statistical predictions to compare with nature (that's why I say it is compatible with the objective existence of nature). And the interpretation is agnostic with respect to hidden variables, so it clearly admits that the wave function may not be all there is.
 
  • #155
TrickyDicky said:
I don't think the wave function is assigned to any part, as it is purely epistemic: just an instrument to obtain statistical predictions to compare with nature (that's why I say it is compatible with the objective existence of nature). And the interpretation is agnostic with respect to hidden variables, so it clearly admits that the wave function may not be all there is.
Do you view the wave function as representing our knowledge of some underlying reality?
 
  • #156
TrickyDicky said:
I don't think the wave function is assigned to any part, as it is purely epistemic: just an instrument to obtain statistical predictions to compare with nature (that's why I say it is compatible with the objective existence of nature). And the interpretation is agnostic with respect to hidden variables, so it clearly admits that the wave function may not be all there is.

Well, let's say there's a cat in a box. The Schroedinger's cat scenario is the assignment of a wave function to the cat, which is a part of commonsense reality. Or suppose you have a superconducting chunk in the lab: we assign the chunk a wave function, and since the chunk is part of commonsense reality, we are assigning a wave function to part of it.
 
  • #157
bohm2 said:
Would that, then, not make the ensemble interpretation just a "shut up and calculate" interpretation in disguise?

It's not in disguise - it's explicit.

For example, if somehow you proved BM correct, that would not disprove the ensemble interpretation. And that is the precise reason it doesn't require collapse - it's totally compatible with interpretations like BM that explicitly do not have collapse.

There are many variants of shut up and calculate - most having to do with different takes on probability. You can interpret probability via the Kolmogorov axioms and leave probability abstract. You can use a frequentist take and get something like the ensemble interpretation. You can use a Bayesian take and get something like Copenhagen (most versions - some take the quantum state as very real) or Quantum Bayesianism - not that I can see much of a difference between the two, except that Quantum Bayesianism states its interpretation explicitly.

I also want to emphasise, regarding this issue, that there seems to be a bit of confusion about the Bayesian and frequentist views promulgated in Jaynes' otherwise excellent book on probability. There is no difference between those interpretations mathematically - as there cannot be, since both are equivalent to the Kolmogorov axioms. But they can lead to different ways of viewing the same problems, which sometimes can give different answers:
http://stats.stackexchange.com/ques...frequentist-approach-giving-different-answers

I want to be clear about this from the outset because there have been threads where wild claims about the two approaches are made and it is claimed the frequentists are incorrect - I think Jaynes makes that claim. It's balderdash.

Thanks
Bill
 
Last edited:
  • Like
Likes bohm2
  • #158
Well, to me BM is ugly, because it introduces trajectories which, in the end, are not observable, right? So what are they good for?
 
  • #159
TrickyDicky said:
If quantum theory does not, in fact, predict the result of individual measurements, but only their statistical mean, then why should one expect a syntax describing individual preparations?
Because quantum theory may not be the final theory of everything.
 
  • #160
vanhees71 said:
Well, to me BM is ugly, because it introduces trajectories which, in the end, are not observable, right? So what are they good for?
The wave function is also not observable, yet it is very useful. From the practical point of view, numerical calculations with particle trajectories are sometimes simpler than more conventional numerical methods of solving the Schrodinger equation.

More generally, since BM is ugly to you, in most cases it is probably not very useful to you. But it is beautiful and intuitive to me, which makes it helpful as a thinking tool. For instance, it seems that I was the first on this forum to understand the meaning of the main paper we discussed in this thread, and the Bohmian way of thinking helped me a lot to gain this understanding (even though I did not mention it in my first explanation of the paper, because I adjusted my explanation to the majority, who are not fluent in the Bohmian way of thinking). A more famous example is Bell, who discovered his celebrated theorem with the help of the Bohmian way of thinking.

I am not saying that any of these makes the use of BM necessary, but like many other tools, it may be useful if you know how to use it.
 
Last edited:
  • Like
Likes TrickyDicky
  • #161
vanhees71 said:
Well, to me BM is ugly, because it introduces trajectories which, in the end, are not observable, right? So what are they good for?
Sorry, but the trajectories of BM are the classical trajectories of the "classical part" of Copenhagen, thus, are very well observable. They are good for having a unified picture of the "quantum" and the "classical" domain of Copenhagen.
 
  • #162
bhobba said:
... about the Bayesian and frequentist views promulgated in Jaynes' otherwise excellent book on probability. There is no difference between those interpretations mathematically - as there cannot be, since both are equivalent to the Kolmogorov axioms.
...
I want to be clear about this from the outset because there have been threads where wild claims about the two approaches are made and it is claimed the frequentists are incorrect - I think Jaynes makes that claim. It's balderdash.

What I remember in this direction from Jaynes (long ago, and from my own attempt to understand, so without any warranty - don't blame Jaynes for my errors) is something along the following lines: the frequentists have no concept for assigning probabilities to theories - a theory can be true or not; it cannot be true with probability 0.743. But, of course, they have to do science, and that means they have to use outcomes with some probabilities to decide between theories.

Since these are not frequentist probabilities, what they have done is to develop an independent discipline, "stochastics". What they use in this domain is simply intuition, because, unlike the Bayesians, they have no nice axiomatic foundation for it. Sometimes the intuition works fine, sometimes it errs, and in the latter case Bayesian probability and this intuitive "stochastics" give different answers. But in such cases it would, of course, be wrong to blame the frequentist approach, because this approach, taken alone, simply tells us nothing.
 
  • #163
Ilja said:
The frequentists have no concept for assigning probabilities to theories - a theory can be true or not; it cannot be true with probability 0.743. But, of course, they have to do science, and that means they have to use outcomes with some probabilities to decide between theories.

That's not correct.

The modern frequentist view, as found in standard textbooks like Feller, is based on assigning an abstract thing called probability, obeying the Kolmogorov axioms, to events. It is meaningless until one applies the strong law of large numbers; then, and only then, does the frequentist view emerge. Since, via the Cox axioms, the Bayesian view is equivalent to the Kolmogorov axioms, there can obviously be no difference mathematically. The only difference is how you view a problem.
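The way the frequentist reading emerges from the abstract axioms via the strong law of large numbers can be sketched in a short simulation (a Python illustration, not part of the original post; the probability 0.3 and the sample sizes are arbitrary choices):

```python
import random

# Abstractly, "P(heads) = 0.3" is just a number obeying the Kolmogorov axioms.
# The frequentist reading emerges via the strong law of large numbers:
# the relative frequency of an event converges almost surely to its probability.
random.seed(0)
p = 0.3
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < p for _ in range(n))
    print(n, heads / n)  # relative frequency approaches p as n grows
```

Nothing in the abstract assignment itself refers to frequencies; the frequency interpretation only appears once the limit theorem is invoked.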

Thanks
Bill
 
  • #164
Ilja said:
Sorry, but the trajectories of BM are the classical trajectories of the "classical part" of Copenhagen, thus, are very well observable. They are good for having a unified picture of the "quantum" and the "classical" domain of Copenhagen.
I thought the Bohm trajectories are not the classical ones, because there's the pilot wave concept, and the whole theory becomes non-local. I have to reread about Bohmian mechanics, I guess.
 
  • #165
vanhees71 said:
I thought, the Bohm trajectories are not the classical ones, because there's the pilot wave concept, and the whole theory becomes non-local. I have to reread about Bohmian mechanics, I guess.
What Ilja meant is the following: Even though Bohmian trajectories of individual microscopic particles are not directly observable, a large collection of such trajectories may constitute a macroscopic trajectory of a macroscopic body, which obeys approximately classical non-local laws and is observable.
 
  • #166
bhobba said:
The modern frequentist view, as found in standard textbooks like Feller, is based on assigning an abstract thing called probability, obeying the Kolmogorov axioms, to events. It is meaningless until one applies the strong law of large numbers; then, and only then, does the frequentist view emerge. Since, via the Cox axioms, the Bayesian view is equivalent to the Kolmogorov axioms, there can obviously be no difference mathematically. The only difference is how you view a problem.
There can be a difference.

The point is that, first, an essential part of the objective Bayesian approach is the justification of prior probabilities - the probabilities you have to assign if you have no information at all. The Kolmogorovian axioms simply tell us nothing about such prior probabilities. The basic axiom here is that if you have no information which distinguishes two situations, you should assign them the same probabilities. Nothing in Kolmogorovian probability theory gives such a rule.

Then, there is the problem of theory choice based on the statistics of experiments, which is inherently non-frequentist, because theories have no frequencies. Orthodox, non-Bayesian statistics is doing something in this domain, because it has to. But what it is doing is nothing that could be derived from the Kolmogorovian axioms.
 
  • #167
Ilja said:
The Kolmogorovian axioms simply tell us nothing about such prior probabilities.

The Kolmogorov axioms define probability abstractly. Bayesian probability (as defined by the Cox axioms) is logically equivalent to the Kolmogorov axioms, except it's not abstract - it represents a degree of confidence.

There is nothing stopping assigning abstract prior probability.

Thanks
Bill
 
Last edited:
  • #168
bhobba said:
The Kolmogorov axioms define probability abstractly. Bayesian probability (as defined by the Cox axioms) is logically equivalent to the Kolmogorov axioms, except it's not abstract - it represents a degree of confidence.

But in Jaynes' variant there is more than only the axioms which define probability. There are also rules for the choice of prior probabilities.

If we have no information which distinguishes the six possible outcomes of throwing a die, we have to assign them equal probability, that is, 1/6. This is a fundamental rule which is different from the Kolmogorovian axioms, and is also not part of some subjectivist variants of Bayesian probability theory (de Finetti), but it is an essential and important part of Jaynes' concept of probability as defined by the available information.

With Kolmogorov or de Finetti you can assign whatever prior probability you want. Following Jaynes, you do not have this freedom - the same information means the same probability.
 
  • Like
Likes microsansfil
  • #169
Ilja said:
But in Jaynes' variant there is more than only the axioms which define probability. There are also rules for the choice of prior probabilities.

If we have no information which distinguishes the six possible outcomes of throwing a die, we have to assign them equal probability, that is, 1/6. This is a fundamental rule which is different from the Kolmogorovian axioms, and is also not part of some subjectivist variants of Bayesian probability theory (de Finetti), but it is an essential and important part of Jaynes' concept of probability as defined by the available information.

With Kolmogorov or de Finetti you can assign whatever prior probability you want. Following Jaynes, you do not have this freedom - the same information means the same probability.

I haven't read Jaynes, but I don't see how the choice 1/6 is essential to a Bayesian account of probability. The choice of 1/6 is the "maximal entropy" choice where the entropy of a probability distribution is defined by: [itex]S = \sum_j P_j log(\frac{1}{P_j})[/itex], where [itex]P_j[/itex] is the (unknown) probability of outcome number [itex]j[/itex]. The purely subjective Bayesian approach doesn't require such a choice. However, to the extent that the entropy measures your lack of knowledge, maximal entropy priors better reflect your lack of knowledge.
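The maximal-entropy property stevendaryl mentions is easy to check numerically: among distributions over six outcomes, the uniform one maximizes the Shannon entropy defined above. A small Python sketch (the biased distribution is an arbitrary example chosen for comparison):

```python
import math

def shannon_entropy(p):
    """Shannon entropy S = sum_j p_j * log(1/p_j), with the convention 0*log(1/0) = 0."""
    return sum(pj * math.log(1 / pj) for pj in p if pj > 0)

uniform = [1 / 6] * 6                      # the "no information" assignment for a die
biased = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]    # any other assignment has lower entropy

print(shannon_entropy(uniform))   # equals log(6), the maximum over six outcomes
print(shannon_entropy(biased))
```

Any deviation from uniformity lowers the entropy, which is why the maximal-entropy rule singles out 1/6 for each face.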

The beauty of Bayesian probability is that, given enough data, we converge to the same posterior probabilities even if we start with different prior probabilities. To me, that's an important feature.
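The convergence of posteriors from different priors can be sketched with a conjugate Beta-Bernoulli example (a Python illustration under assumed settings: a coin with bias 0.7 and two invented priors):

```python
import random

random.seed(1)
true_p = 0.7
flips = [random.random() < true_p for _ in range(5000)]
heads = sum(flips)
tails = len(flips) - heads

# Two very different Beta(a, b) priors; after h heads and t tails the
# posterior is Beta(a + h, b + t), with mean (a + h) / (a + b + h + t).
priors = {"near-uniform": (1, 1), "biased toward tails": (2, 50)}
posterior_means = {
    name: (a + heads) / (a + b + heads + tails)
    for name, (a, b) in priors.items()
}
print(posterior_means)  # both posterior means end up close to true_p
```

With enough data, the likelihood swamps the prior, so both posterior means land near the true bias even though the priors started far apart.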
 
  • Like
Likes vanhees71
  • #170
stevendaryl said:
I haven't read Jaynes, but I don't see how the choice 1/6 is essential to a Bayesian account of probability.

I think Ilja was referring specifically to Jaynes. Jaynes considered the prior to be objective, i.e. in any situation there is not a free subjective choice of prior. So there are subjective (de Finetti) and objective (Jaynes) Bayesians. Of course, most practical people use something like semi-empirical priors and a mixture of frequentism (practical, but incoherent at some point) and Bayesianism (coherent, but impractical).

stevendaryl said:
The purely subjective Bayesian approach doesn't require such a choice. However, to the extent that the entropy measures your lack of knowledge, maximal entropy priors better reflect your lack of knowledge.

I think Jaynes here advocated the Shannon entropy, but it isn't clear why one of the Renyi entropies shouldn't be preferred.
 
Last edited:
  • Like
Likes vanhees71
  • #171
Why do you think that Bayesianism is impractical? AFAIU there is no problem for Bayesians to obtain the results of frequentists if there are frequencies to be observed.
 
  • #172
Ilja said:
Why do you think that Bayesianism is impractical? AFAIU there is no problem for Bayesians to obtain the results of frequentists if there are frequencies to be observed.

I think Bayesianism is impractical, because to remain coherent and have the data lead one to the correct conclusion (in the Bayesian sense), the prior must be nonzero over all possibilities including the true possibility. So as long as we can state all possibilities, then Bayesianism is practical. But what happens if I am looking for a quantum theory of gravity? I don't know all possibilities, so I can't write my prior. At this point I am forced to be incoherent, and rely on genius or guesswork.
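The coherence problem described here, that a possibility assigned prior probability zero can never be recovered by updating, is easy to demonstrate (a Python sketch with invented hypotheses, not part of the original post):

```python
import random

# Three candidate biases for a coin; the data are generated by p = 0.7,
# but the prior assigns that hypothesis probability zero, as if it were
# not among the possibilities we could state in advance.
hypotheses = {"p=0.3": 0.3, "p=0.5": 0.5, "p=0.7": 0.7}
posterior = {"p=0.3": 0.5, "p=0.5": 0.5, "p=0.7": 0.0}

random.seed(2)
for _ in range(1000):
    heads = random.random() < 0.7
    for name, p in hypotheses.items():
        posterior[name] *= p if heads else (1 - p)
    total = sum(posterior.values())
    posterior = {k: v / total for k, v in posterior.items()}

print(posterior)  # the true hypothesis keeps posterior 0, no matter how much data arrives
```

All the posterior mass flows to the best of the nonzero-prior hypotheses; the truth, excluded at the start, can never be reached by Bayesian updating alone.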
 
  • #173
atyy said:
But what happens if I am looking for a quantum theory of gravity? I don't know all possibilities, so I can't write my prior. At this point I am forced to be incoherent, and rely on genius or guesswork.
Of course, but in this case frequentism does not help you at all. It does not work on theories, because theories have no frequencies.

And from a pragmatic point of view there is no problem at all - all the theories you have to consider are those that are known. The very point of Bayesianism is, anyway, that you don't have to know everything, but have to use plausible reasoning based on the information you have.
 
  • #174
Ilja said:
Of course, but in this case frequentism does not help you at all. It does not work on theories, because theories have no frequencies.

I think there is a sense in which Popperian falsifiability can be seen as a way to manage the complexity of a full-blown Bayesian analysis. If there are a number of possible theories, you just pick one, work out the consequences, and compare with experiment. If it's contradicted by experiment, you discard that theory and pick a different one. So you're only reasoning about one theory at a time.
 
  • #175
Ilja said:
Of course, but in this case frequentism does not help you at all. It does not work on theories, because theories have no frequencies.

And from a pragmatic point of view there is no problem at all - all the theories you have to consider are those that are known. The very point of Bayesianism is, anyway, that you don't have to know everything, but have to use plausible reasoning based on the information you have.

Yes. I guess what I should say is that the Bayesian dream of never breaking coherence is impractical.
 
