The quantum state cannot be interpreted statistically?

In summary, the Pusey, Barrett, Rudolph (PBR) preprint of Nov 11th discusses the differing views on the interpretation of quantum states and argues that a statistical interpretation is inconsistent with the predictions of quantum theory. The authors suggest that testing these predictions could reveal whether distinct quantum states correspond to physically distinct states of reality. The preprint has attracted considerable interest and discussion in the scientific community.
  • #106
bohm2 said:
So if one takes that pure epistemic/instrumentalist stance it seems to me one is almost forced to treat QT as "a science of meter readings". That view seems unattractive to me.
I think the problem here is that the possibilities are being too narrowly constrained. You seem to be making a choice between imagining that there is some mathematical object, call it "properties", that underlies some "true theory" that nature actually follows, versus the opposite choice that the only reality is what the meter reads, and all physics should do is predict observations. I don't think either of those models is what physics has ever been, nor what it ever should be. So let me propose a third option.

What's wrong with saying that physics is the art of taking objective measurements and braiding them into a consistent mathematical picture that gives us significant understanding of, and power over, those objective measurements? Isn't that just exactly what physics has always been, so why should we want it to be something different going forward? I see nothing unattractive about it, the mathematical structures we create come just from where they demonstrably come from, our brains, and they work to do just exactly what they work to do-- convey a sense of understanding, beauty, symmetry, and reason to the universe around us. That's what they do, it doesn't make any difference if we imagine there is some "true theory" that we don't yet know underlying it all, I have no idea where that fantasy even comes from!

Some say that they would find it disappointing if there were no true theory like that, no mathematical structure of properties that really does describe everything that happens. I can't agree-- I would find it extremely disappointing to live in such an unimaginative universe as that! We certainly would never want to actually discover such a theory, in which our own minds have mastered everything that happens. We might as well be dead! No more life to the discovery process, no more surprises about anything that nature does, no mystery or wonder beyond the amazement that we actually figured it all out. Even if we did all that, we'd still have at least one mystery to ponder: the paradox of how our brains managed to figure out how we think. Can a thought know where it comes from? Isn't the origin of a thought another thought that needs an origin?

So on the contrary, I would never characterize physics as the attempt to figure out the mathematical structure that determines the true properties of everything. Instead, I would characterize it as the process of inventing properties to answer questions and resolve mysteries, fully aware that this process will only serve to replace more superficial mysteries with more profound ones. And that was in fact the purpose all along-- since when has physics been about eliminating mystery? I don't find this view either disappointing, or supportive of the concept of the existence of a unique mathematical structure that determines the true properties of a system. I can hardly even imagine a theory that could give unambiguous meaning to every word in that sentence!
 
  • #107
Ken G said:
... My objection was always with the whole concept of hidden-variable theories. I believe they represent a form of pipe dream-- something physics should have figured out by now! Hidden variables are nothing but the variables of the next theory that we haven't figured out yet; there's nothing ontological about them.

If you add local before hidden-variable theories, the pipe dream is dead.

If you add non-local before hidden-variable theories, I know at least one guy in this thread that will have something to say about this... :biggrin:
 
  • #108
The issue isn't local vs. nonlocal, it is in the whole idea of what a hidden variables theory is. It's an oxymoron-- if the variables are hidden, it's not a theory, and if they aren't hidden, well, then they aren't hidden! The whole language is basically a kind of pretense that the theory is trying to be something different from what it actually is. In other words, I have no objection at all to trying to unearth additional variables underneath quantum mechanics, perhaps following the template of deBroglie-Bohm-- doing that would just be good physics. What I object to is the pretense that the resulting theory will be something other than a physics theory, and would not simply have its own new version of "hidden variables" underlying it. Framed this way, a belief in "hidden variables theories" is simply the belief that physics is an ongoing process of replacing more superficial theories with more profound ones.
 
  • #109
Ken G said:
And that was in fact the purpose all along-- since when has physics been about eliminating mystery?

I’m trying real hard to comprehend what you are saying, but with all due respect – it doesn’t make sense.

Are you for real saying that one of the goals of physics is to keep us ignorant about how the world works? To preserve the mysteries??

Geeze dude, I smell a rat...
 
  • #110
DevilsAvocado said:
I’m trying real hard to comprehend what you are saying, but with all due respect – it doesn’t make sense.

Are you for real saying that one of the goals of physics is to keep us ignorant about how the world works? To preserve the mysteries??
Where did I say our goal is to remain ignorant? Talk about the fallacy of the excluded middle-- you are saying that if we don't believe there is a mathematical structure that completely describes everything that happens, then it must be because our goal is to remain ignorant of such a structure. Ah, no. What I am saying is that the process of explaining mysteries is just that: a process of explaining mysteries. No claim needs to be made about what other mysteries might crop up in the process, and I'd say the history of physics is really pretty clear on this point, not that we seem to be getting the message.
 
  • #111
Ken G said:
The issue isn't local vs. nonlocal, it is in the whole idea of what a hidden variables theory is.

Wrong. This is exactly what it is, and it is well supported in theory and in all experiments performed thus far. When the EPR-Bell loopholes are all finally closed, Local Realism is forever dead. This will be an empirical fact.

New successful theories do not change empirical facts, and Newton’s apple will not suspend itself in mid-air just because of a new more precise theory.

That’s just nuts.
 
  • #112
Ken G said:
My objection was always with the whole concept of hidden-variable theories. I believe they represent a form of pipe dream-- something physics should have figured out by now!
I think the last comment is a bit unfair, because how do you figure it out if not by proving theorems like this?

Ken G said:
Hidden variables are nothing but the variables of the next theory that we haven't figured out yet; there's nothing ontological about them.
Yes, this is something that's been bugging me about these "ontic models" as Matt Leifer is calling them. There's a set [itex]\Lambda[/itex] whose members are called ontic states. Given a [itex]\lambda\in\Lambda[/itex], and a measurement procedure M, the theory assigns a probability P(k|λ,M) to each possible result k. This probability is not assumed to be either 0 or 1. There's nothing inherently "ontic" about this. If we say that a model is called "ontic" if and only if each [itex]\lambda\in\Lambda[/itex] represents all the properties of the system (in a sense that's left undefined), then we don't have any way of knowing if a given theory really is ontic. And if we simply define all models that make probability assignments of the type discussed above to be "ontic models", then nothing can tell us if λ really represents properties.
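To make that framework concrete, here is a minimal sketch in code of what such a model amounts to (my own illustration, with invented names-- nothing here is from the PBR paper or from Leifer):

[code]
# A minimal sketch (mine) of the "ontic model" framework described above:
# a set Lambda of ontic states, and for each measurement M a map
#   lambda -> {result k: P(k | lambda, M)}.
from typing import Callable, Dict, Hashable

OnticModel = Callable[[Hashable, str], Dict[str, float]]

def example_model(lam: Hashable, M: str) -> Dict[str, float]:
    """Toy model: Lambda = {0, 1}, one measurement "Z" with results up/down.
    Note the probabilities need not be 0 or 1 -- nothing here forces that."""
    if M == "Z":
        p_up = 0.9 if lam == 0 else 0.2
        return {"up": p_up, "down": 1.0 - p_up}
    raise ValueError(f"unknown measurement {M}")

def epistemic_prediction(mu: Dict[Hashable, float], model: OnticModel, M: str) -> Dict[str, float]:
    """An epistemic state is a distribution mu over Lambda; predictions come
    from averaging P(k | lambda, M) over mu."""
    out: Dict[str, float] = {}
    for lam, weight in mu.items():
        for k, p in model(lam, M).items():
            out[k] = out.get(k, 0.0) + weight * p
    return out

print(epistemic_prediction({0: 0.5, 1: 0.5}, example_model, "Z"))
# -> roughly {'up': 0.55, 'down': 0.45}
[/code]

The point of the sketch is just that the definition itself never mentions "properties": any map of this shape qualifies.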

Ken G said:
Physics just makes theories, and they work very well, but none of that has anything do with the existence or non-existence of a "perfect theory" of a mathematical structure that completely describes the properties of a system. There is absolutely no reason to ever assume that such a structure exists, and any proof that starts there has entered into a kind of fantasy realm (and claimed it was a "mild assumption" to boot!).
I don't think their assumption is quite that extreme, but I agree that it's not "mild". We can imagine a less-than-perfect theory where the members of [itex]\Lambda[/itex] can be thought of as approximate representations of the system's properties. (The meaning of that is still left undefined.) Suppose the epistemic states of this theory (its probability distributions over ontic states) give us exactly the same probability assignments as QM. The theorem is telling us (assuming that its proof is correct) that none of the probability distributions in such a theory overlap.

This is hardly worthy of a title like "the quantum state cannot be interpreted statistically", but at least it's a somewhat interesting result, because it tells us something we didn't know before about theories that can reproduce the predictions of QM.
 
  • #113
DevilsAvocado said:
Wrong. This is exactly what it is, and it is well supported in theory and all performed experiments performed this far. When the EPR-Bell loopholes are all finally closed, Local Realism is forever dead.
I never said that wasn't true, and I have no idea why you think I did. What I actually said is that this issue is completely irrelevant to the question of what a quantum mechanical state is. We have absolutely no reason to expect that quantum systems (i.e., states in the theory of quantum mechanics) have "hidden properties" at all-- so I don't care if such imaginary properties are local or nonlocal. We constantly apply many types of unhidden local and nonlocal properties, like charges and action at a distance, and quite successfully; there's no problem at all if we apply them appropriately-- unless we want those pictures to be "the truth", which is just silly. Do we imagine that "hidden properties" of Newtonian gravity turn it into general relativity? Did we debate endlessly on whether Newtonian gravity was a theory that could be consistent with hidden variables that describe why inertial mass is the same as gravitational mass? Maybe they did once, but quickly gave up on the uselessness of the endeavor. Instead, they just came up with the next theory, guided by whatever worked, which is what physics does.

Yes, we know that local hidden properties can't completely reproduce quantum mechanics, wonderful. That provides guidance for the next theory, and how to borrow from the success of QM in such a theory. It is fine to want guidance for the next theory, but people seem to want quantum mechanics to be a description of some part of the "ultimate theory" that is the mathematical structure that describes all the properties of a system. There is zero evidence that it is that, and we should never have expected it to be. Instead, what we should expect it to do is the same things that every physics theory in the history of the discipline has ever done: supply us with a useful picture for making fairly precise calculations and entering into pictorial modes of thought that offer us a sense of understanding. Hidden variables are simply not part of that theory, so wondering what types of hidden variable theories could make all the same predictions as quantum mechanics, including untested predictions, is nothing but an exercise in guiding the next generation of useful observations that could give rise to better theories. It doesn't tell you what a quantum state is, only one thing can do that: the theory of quantum mechanics.

In this light, what the PBR theorem is really saying is, "if you want to replace QM with a hidden variables theory that you can sell as a part of the ultimate theory of ontological truth, don't try to do it using a generalization of the state vector that involves it being epistemic rather than ontic." Fine, thanks for the guidance, it's very relevant for those looking for a theory they can sell that implausible way. It doesn't tell us anything about the theory of quantum mechanics, however, because its very first assumption has nothing demonstrably to do with quantum mechanics.
 
  • #114
Ken G said:
I think the problem here is that the possibilities are being too narrowly constrained. You seem to be making a choice between imagining that there is some mathematical object, call it "properties", that underlies some "true theory" that nature actually follows, versus the opposite choice that the only reality is what the meter reads, and all physics should do is predict observations. I don't think either of those models is what physics has ever been, nor what it ever should be. So let me propose a third option.

What's wrong with saying that physics is the art of taking objective measurements and braiding them into a consistent mathematical picture that gives us significant understanding of, and power over, those objective measurements?

I've always had trouble understanding this third option. For instance, I tried reading the Fuchs paper (we discussed this on the philosophy board) and I just could not understand it. I only seem to be able to understand the two options. Maybe I'm mistaken but I fear there is no difference between the purely epistemic/instrumentalist stance and the third option you favour.

I know some "Bohmians" treat the wave function as some type of nomological (law of nature)/abstract entity (e.g. Goldstein, Durr, etc.) but there are problems with this approach as mentioned by Valentini. I also understand the Bohrian view, I think, but I can't seem to grasp that third option. I mean, what exactly are those objective measurements about? What do those mathematical objects in QM (e.g. wave function) refer to in that third option?

Edit: So there's no confusion, I'm not a "naive" realist. And I'm pretty supportive of this position, I think:

"the propositions of physics are equations, equations that contain numbers, terms that refer without describing, many other mathematical symbols, and nothing else; and that these equations, being what they are, can only tell us about the abstract or mathematically characterizable structure of matter or the physical world without telling us anything else about the nature of the thing that exemplifies the structure. Even in the case of spacetime, as opposed to matter or force—to the doubtful extent that these three things can be separated—it's unclear whether we have any knowledge of its intrinsic nature beyond its abstract or mathematically representable structure."

Thus, in physics, the propositions are invariably mathematical expressions that are totally devoid of direct pictoriality. Physicists believe that physics has to 'free itself' from 'intuitive pictures' and give up the hope of 'visualizing the world'. Steven Weinberg traces the realistic significance of physics to its mathematical formulations: 'we have all been making abstract mathematical models of the universe to which at least the physicists give a higher degree of reality than they accord the ordinary world of sensations' (the so-called 'Galilean Style').
 
  • #115
Fredrik said:
I think the last comment is a bit unfair, because how do you figure it out if not by proving theorems like this?
The theorem only "proves" that one type of thing is a pipe dream by assuming an even larger pipe dream. No theorem is any better than its postulates, and in this case, we have a postulate that there exists a physics theory that is unlike any physics theory ever seen. So the theorem only means something to people who believe in that postulate. That may be a lot of people, in which case the theorem does have significance for them, but many of the bloggers are basically saying "the theorem has no significance for me because I didn't expect epistemic states to work like that anyway." I'm saying it has no significance for me because I don't even expect physics theories of any kind to be the objects that they are assumed to be in that paper, some kind of "mini version" of an ultimate mathematical description of life, the universe, and everything.
Fredrik said:
Yes, this is something that's been bugging me about these "ontic models" as Matt Leifer is calling them. There's a set [itex]\Lambda[/itex] whose members are called ontic states. Given a [itex]\lambda\in\Lambda[/itex], and a measurement procedure M, the theory assigns a probability P(k|λ,M) to each possible result k. This probability is not assumed to be either 0 or 1. There's nothing inherently "ontic" about this. If we say that a model is called "ontic" if and only if each [itex]\lambda\in\Lambda[/itex] represents all the properties of the system (in a sense that's left undefined), then we don't have any way of knowing if a given theory really is ontic. And if we simply define all models that make probability assignments of the type discussed above to be "ontic models", then nothing can tell us if λ really represents properties.
Yes, that bothers me too. I really don't see what an "ontic model" is, it sounds like something that no physics model has ever been. Can someone give me an example, anywhere in physics, of an ontic model, and tell me why it is not an epistemic model? To me an "epistemic model" is a model about what we know, rather than about what is actually there. I'm very curious what physics theory talks about what is really there, rather than what we know about that system.

Fredrik said:
I don't think their assumption is quite that extreme, but I agree that it's not "mild". We can imagine a less-than-perfect theory where the members of [itex]\Lambda[/itex] can be thought of as approximate representations of the system's properties. (The meaning of that is still left undefined.) Suppose the epistemic states of this theory (its probability distributions over ontic states) give us exactly the same probability assignments as QM. The theorem is telling us (assuming that its proof is correct) that none of the probability distributions in such a theory overlap.
The key question is, how much of this proof requires that there be these things called "properties" that can adjudicate the meaning of an ontic state and an epistemic state? It seems to me that the properties are crucial-- the theorem essentially assumes that there is such a thing as ontic states, and only then does it ask if quantum states refer to ontic states. I feel that if one is to think of a quantum state as an epistemic state, one is not thinking of it as a probability distribution of ontic states, one is rejecting the whole concept of an ontic state. If you embrace the ontic state, then you are doing deBroglie-Bohm or some such hidden variable theory, you are not doing an epistemic interpretation at all. To me, a real full-blown epistemic interpretation is saying that our knowledge of a system is not some idle "fly on the wall" to the behavior of that system, it is part of the defining quality of what we mean by that "system" and its "behavior" in the first place. I thus see no reason to adopt epistemic interpretations if ontic states exist at all!

Fredrik said:
This is hardly worthy of a title like "the quantum state cannot be interpreted statistically", but at least it's a somewhat interesting result, because it tells us something we didn't know before about theories that can reproduce the predictions of QM.
Yes, the theorem does connect some interesting ramifications with some questionable postulates, I will agree there. The value is only in the observations it could motivate, in that they might help us find outcomes where quantum mechanics is wrong-- we already have quantum mechanics, we don't need any other theory to get the same answers that quantum mechanics does.
 
  • #116
bohm2 said:
Maybe I'm mistaken but I fear there is no difference between the purely epistemic/instrumentalist stance and the third option you favour.
There is a big difference, if by "instrumentalist stance" you basically mean "shut up and calculate." To me, a purely instrumentalist stance is a kind of radical empiricism, that says reality is what dials read. I am saying, reality is a way of thinking about our environment, it is a combination of the dial readings and how we synthesize them into a rational whole. It is what we make sense of. I think Bohr said it best: physics is about what we can say about nature. The "saying" part is really crucial, and that is where I differ from pure instrumentalism, because it is not true that all we can say about nature is facts and figures.

bohm2 said:
I mean, what exactly are those objective measurements about? What do those mathematical objects in QM (e.g. wave function) refer to in that third option?
They are about, and refer to, whatever we make of them being about, and referring to. That's it, that's what we get: what we can make of it, what we can say about it. It doesn't need to be some approximate version of a "true theory", there is no need for any such concept, and no such concept ever appears anywhere in physics, so I'm mystified why so many people seem to imagine that physics requires it in order to work. We should ask ourselves: for an approximate theory to work, why must there be an exact one?
 
  • #117
Ken G said:
Can someone give me an example, anywhere in physics, of an ontic model, and tell me why it is not an epistemic model?
The first part is easy. The classical theory of a single particle in Galilean spacetime, moving under the influence of a force. The phase space of this theory meets the requirements I mentioned in my previous post: Denote the phase space by [itex]\Lambda[/itex]. Given a [itex]\lambda\in\Lambda[/itex] and a measuring procedure M, the theory assigns a probability P(k|λ,M) to each possible result k.

The second part is harder, or maybe I just feel that way because I don't understand these things well enough yet. (I have two answers. The first one is right here. The other is what I'm saying in response to the last quote below). I think it's obvious enough that it makes sense to think of phase space points as complete sets of properties. I don't think a proof or even a definition is required*. If you want a reason to think of them that way, then consider the fact that if you know one point on the curve that describes the particle's motion, you can use the force (:wink:) to find all the others. So if you know a point, you know everything.

*) I don't think it's necessarily crazy to leave some things undefined. As you know it isn't possible to define everything, but more importantly, there are some things that we simply can't avoid treating as more fundamental than other things. For example, the concept of "measuring devices" is more fundamental than any theories of physics, and the concept of natural numbers is more fundamental than even the formal language used to define the set theories that we use to give the term "natural number" a set theoretic definition. It seems reasonable to me to take "property" to be one of those things that we consider so fundamental that we don't need to define it.
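Going back to the phase-space example, here is what that ontic model looks like in code (a toy sketch of my own, with invented names): the ontic state is a point λ = (x, p), and every idealized sharp measurement has an outcome determined by λ, so P(k|λ,M) is always 0 or 1.

[code]
# Toy sketch (mine): a classical ontic model where Lambda is phase space
# and every probability assignment is trivial (0 or 1).
from dataclasses import dataclass

@dataclass(frozen=True)
class PhasePoint:
    """An ontic state lambda = (x, p)."""
    x: float
    p: float

def measure(lam: PhasePoint, M: str) -> dict:
    """Idealized sharp measurements: the outcome is fixed by lambda."""
    if M == "x > 0?":
        yes = lam.x > 0.0
    elif M == "p > 0?":
        yes = lam.p > 0.0
    else:
        raise ValueError(M)
    return {"yes": 1.0 if yes else 0.0, "no": 0.0 if yes else 1.0}

lam = PhasePoint(x=-0.3, p=1.2)
print(measure(lam, "x > 0?"))   # {'yes': 0.0, 'no': 1.0}
print(measure(lam, "p > 0?"))   # {'yes': 1.0, 'no': 0.0}
[/code]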

Ken G said:
To me an "epistemic model" is a model about what we know, rather than about what is actually there.
Right, but in this context, it's what we know about the ontic states. Like it or not, that seems to be how these guys are defining it.

Ken G said:
The key question is, how much of this proof requires that there be these things called "properties" that can adjudicate the meaning of an ontic state and an epistemic state?
This is something that I find confusing. I'm tempted to say "none of it". Suppose that we consider all models that, for each measuring device and each member of some set [itex]\Lambda[/itex], assign a probability P(k|λ,M) to each result k to be "ontic". We have no way of knowing if the ontic states really represent properties, but that also means that nothing will go seriously wrong if we just pretend that they do.

I think that this is what the HS article does, because their first example of an ontic model (they may have used the term "hidden-variable theory" instead) simply defines [itex]\Lambda[/itex] to be the set of Hilbert subspaces of the Hilbert space of QM.
 
  • #118
A few more thoughts... (some of this has already been suggested by Ken G)

If we define an "ontic model" as a theory that involves a set [itex]\Lambda[/itex] and assigns probability P(k|λ,M) to measurement result k, given [itex]\lambda\in\Lambda[/itex] and a measuring procedure M, then QM is already an ontic model.

It's a ψ-complete (and therefore a ψ-ontic) ontic model. So if we really want to ask whether probabilities in QM are a result of our ignorance of ontic states, then we have to consider some other ontic model. We are now asking if there's another ontic model such that
  • The ontic states in QM (the pure states, the state vectors) correspond to the epistemic states of this alternative ontic model.
  • This alternative ontic model makes the same probability assignments as QM.
  • Some of the probability distributions are overlapping.
Suppose that we could somehow verify that there is an ontic model with these properties. Would that result be at all interesting?

I would say "yes", if and only if the P(k|λ,M) of the alternative ontic model are all 0 or 1. If the alternative model also assigns non-trivial probabilities, then why should we care about the result? Now someone is just going to ask "Are these probabilities the result of our ignorance of ontic states?"

From this point of view, it's a bit odd that ontic theories are allowed to make non-trivial probability assignments.
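For what it's worth, here is the observation above, that QM itself already satisfies the "ontic model" definition, in code form (a sketch of mine, not from any of the papers): take [itex]\Lambda[/itex] to be the set of state vectors themselves, so λ = ψ, and let the Born rule supply P(k|ψ,M). The probabilities are manifestly non-trivial.

[code]
# Sketch (mine): QM viewed as an ontic model with Lambda = {state vectors},
# i.e. lambda = psi, and P(k | psi, M) given by the Born rule.
import numpy as np

def born_probabilities(psi: np.ndarray, M: np.ndarray) -> np.ndarray:
    """psi: normalized state vector. M: matrix whose rows are the orthonormal
    measurement basis bras <k|. Returns P(k | psi, M) = |<k|psi>|^2."""
    amplitudes = M @ psi
    return np.abs(amplitudes) ** 2

psi = np.array([1.0, 1.0]) / np.sqrt(2)   # |+> = (|0> + |1>)/sqrt(2)
Z_basis = np.eye(2)                        # measure in the {|0>, |1>} basis
print(born_probabilities(psi, Z_basis))    # -> [0.5 0.5], neither 0 nor 1
[/code]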
 
  • #119
One more thing... (Edit: OK, two more things...)

If we make the requirement that each ontic state must represent all the properties of the system, and leave the term "properties" undefined, then the PBR result can't be considered a theorem. (Because theorems are based on assumptions about terms that have definitions in set theory). I still think it makes sense to leave a term like "property" undefined in a general discussion, but it makes no sense to make such terms part of a definition of a term that's involved in a theorem.

In other words, if PBR defines the "knowledge of the system" view as "There's a ψ-epistemic ontic model that can reproduce the predictions of QM", the definition of the term "ontic model" in that statement can't include the concept of "property", unless it's defined. The only definition that I would consider appropriate is the probability-1 definition, but since neither HS nor Leifer is using it, I don't think we should. The only possibility appears to be to leave out any mention of "properties" from the definition. That would mean that there's no technical difference between an "ontic model" and just a "model".

Do these guys distinguish between the terms "model" and "theory"? I don't think they do. Here's a distinction I would like to make: Both are required to assign the probabilities P(k|λ,M), but for a "model", we don't require that it's possible to identify preparation procedures with probability distributions of ontic states. In other words, a theory must make testable predictions, and a model doesn't. (This is just a suggestion. I don't think there's an "official" definition, and I don't know if this concept of "model" is useful).
 
  • #120
There's something not quite right with this paper:
1 - If you are talking about physical properties, they must be talking about a single individual system (e.g. one electron, one photon, or one atom).
2 - QM does not make predictions about individual events, so they seem to be mixing concepts.
3 - If the outcome of each individual measurement is uniquely determined by the complete physical properties of the electron, photon, etc., then that outcome is certain and cannot be "statistical", in which case the measuring device cannot and does not give probabilities. The statement "the probabilities for different outcomes is only determined by the complete physical state of the two systems at the time of the measurement" is the source of all their problems IMHO (see the last paragraph in the left-hand column of page 2).
4 - An ontic but incomplete QM state is not very different from an epistemic state with hidden ontic properties. Both will result in "statistical" predictions, since incomplete specification of the QM state results in lack of certainty (cf. "uniquely determines"). The only way to distinguish the two is to make a prediction about a single event and compare it with an experiment in which only a single event happens. Good luck with that.
 
  • #121
Fredrik said:
I think it's obvious enough that it makes sense to think of phase space points as complete sets of properties. I don't think a proof or even a definition is required*. If you want a reason to think of them that way, then consider the fact that if you know one point on the curve that describes the particle's motion, you can use the force (:wink:) to find all the other. So if you know a point, you know everything.
But you don't know a point, you only know that the particle is in some box. In other words, if I replace classical mechanics the way it is normally described (a theory of impossible precision) with a theory that only talks about intervals, rather than points, do I not have an epistemic version? And here's the real kicker: how is such a theory not completely equivalent to classical mechanics? And what's more, isn't the second theory the one we actually test, not the first one? If the second version is the only version of classical mechanics that ever gets tested, then I claim the second version is the actual theory of classical mechanics, and the first one is just a kind of make-believe version that we only use out of a kind of laziness to talk about the theory that we have actually checked. I like laziness as much as the next guy, but we should at least recognize it. (If we had, we would never have concluded that quantum mechanics was "unclassical"; we would have called it what it really is: "super-classical." It includes classical physics, and adds more complexity at smaller scales inside the boxes that classical physics never tested.)
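To make the "boxes" version concrete, here is a toy sketch (purely illustrative, with invented names): propagate a rectangular region of phase space under free motion, and only ever make predictions about the box, never about a point inside it.

[code]
# Toy sketch (mine): "epistemic" classical mechanics that talks only about
# boxes in phase space, never about exact points.
from dataclasses import dataclass

@dataclass
class Box:
    """A rectangle [x_lo, x_hi] x [p_lo, p_hi] in phase space."""
    x_lo: float
    x_hi: float
    p_lo: float
    p_hi: float

def free_evolve(b: Box, t: float, m: float = 1.0) -> Box:
    """Free motion x -> x + (p/m) t maps a box into the returned box;
    predictions are made only at this interval level of precision."""
    return Box(b.x_lo + b.p_lo / m * t, b.x_hi + b.p_hi / m * t, b.p_lo, b.p_hi)

b = Box(0.0, 0.1, 1.0, 1.1)   # all we ever actually verified experimentally
print(free_evolve(b, t=2.0))  # -> roughly Box(x_lo=2.0, x_hi=2.3, p_lo=1.0, p_hi=1.1)
[/code]

Every experimental test of classical mechanics is, at bottom, a test of statements like the last line.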
Fredrik said:
As you know it isn't possible to define everything, but more importantly, there are some things that we simply can't avoid treating as more fundamental than other things. For example, the concept of "measuring devices" is more fundamental than any theories of physics, and the concept of natural numbers is more fundamental than even the formal language used to define the set theories that we use to give the term "natural number" a set theoretic definition.
I'm with you up to here.
Fredrik said:
It seems reasonable to me to take "property" to be one of those things that we consider so fundamental that we don't need to define it.
The problem is not with using "properties" as conceptual devices; we do that all the time-- physics would be impotent without that ability. The issue is what it means when we invoke a conceptual device and call it a property. Does it mean that if we knew all the properties, we'd understand the system completely? That's the part I balk at: I see zero evidence of that, and I find it a complete departure from anything that physics has ever been in the past. I think the more we know about something, the deeper the mysteries about it become-- we never understand it completely, we understand what we didn't understand before and now don't understand something new. So much for properties!

So I ask the same question-- for an approximate theory to work well, why does this require that there be an exact theory underlying it? I think that is a bogus proposition, yet it seems to be the very first assumption of PBR. The crucial assumption is not that the concept of a property might be useful, it is that systems really have properties that determine outcomes. If we strip out that part of the proof, what does it prove now?
Fredrik said:
Right, but in this context, it's what we know about the ontic states. Like it or not, that seems to be how these guys are defining it.
Yes, and that is exactly what I think limits the generality of their proof. Let's go back to classical mechanics, and my point that it was never really a theory about points in phase space, it was always a theory about boxes in phase space (since that was all that was ever tested about it). If we had been more careful, and framed classical mechanics that way, then we might have had someone say "of course there really are ontic points inside those boxes, we only use boxes because of our epistemic limits in gathering information about those ontic points."

Indeed, that's what many people did say. Then along comes the hydrogen atom, and oops, those boxes are not boxes of ontic states at all. Why does this always seem to come as a surprise? The whole point of an epistemic treatment is to not pretend we know something we don't know-- like that epistemics is just a lack of information about ontics! If there was ever a lesson of quantum mechanics, it is that epistemics is something potentially much more general than just lack of information about ontics.
Fredrik said:
This is something that I find confusing. I'm tempted to say "none of it". Suppose that we consider all models that, for each measuring device and each member of some set [itex]\Lambda[/itex], assign a probability P(k|λ,M) to each result k to be "ontic". We have no way of knowing if the ontic states really represent properties, but that also means that nothing will go seriously wrong if we just pretend that they do.
It seems to me the key assumption is that the ontics decide what happens to the system, and the epistemics are just lack of information about the ontics. Could we not prove things about any theory that could be consistent with classical mechanics by making the same assumption, that inside any epistemic "box" in phase space there are ontic points that determine outcomes such as when a hydrogen atom recombines? But quantum mechanics does not respect the ontic points of what people imagined classical mechanics was (but never demonstrated by experiment that it was), yet quantum mechanics does reproduce every experimental prediction that classical mechanics works for. Quantum mechanics is a mathematical structure "at least as good as classical mechanics."

Now, granted, quantum mechanics also makes different predictions at small scales. But that's my point-- I think the real value of the PBR theorem is that it might help us to figure out experiments to test quantum mechanics that quantum mechanics might not get right. If it does that, then it will be a truly valuable theorem. But I don't think it tells us anything about quantum mechanics, any more than proving theorems about ontic points inside boxes in phase space tells us anything about classical mechanics. Classical mechanics never was a theory about ontic points in phase space, it was always, demonstrably, a theory about epistemic boxes in phase space. This is also true of quantum mechanics, with different epistemics. Ultimately, I claim that all theories are built of epistemic primitives, and it is only a kind of laziness that allows us to imagine that any physics theory is ontic.
Fredrik said:
I think that this is what the HS article does, because their first example of an ontic model (they may have used the term "hidden-variable theory" instead) simply defines [itex]\Lambda[/itex] to be the set of Hilbert subspaces of the Hilbert space of QM.
Expressing quantum mechanics in terms of Hilbert spaces is certainly a useful way to go, just as expressing classical mechanics in terms of points in phase space was. If that is what we mean by quantum mechanics (and that is indeed how it gets defined in the textbooks), then it is definitively ontic, as you point out later. But does this mean that it has to be an ontic theory to work as well as it does? I say no, it should be easy to replace the Hilbert space with a more epistemic version that restricts the theory to what has actually been verified by experiment. Such a theory would be completely equivalent in terms of its experimental justification, but would be much more "honest" (and less lazy but also less parsimonious), because it would not pretend to be an ontic theory when only its epistemic character has actually been tested. It would serve just as well, in every way except parsimony, as the theory we call "quantum mechanics". But we like parsimony, so we use the ontic theory, and that's fine-- as long as we recognize that in choosing parsimony over demonstrability, we have entered into a kind of pretense that we know more than we actually do. Look where that got us in Descartes' era!
 
  • #122
Fredrik said:
From this point of view, it's a bit odd that ontic theories are allowed to make non-trivial probability assignments.
Yes, I'm a bit unclear on this issue as well. If a "property" can result in nothing but a statistical tendency, what you call a nontrivial probability, then what does it mean to have a property? I just don't see why quantum mechanics needs this element at all; quantum mechanics is about taking preparations and using them to calculate probabilities, and there just isn't any step that looks like "now convert the state into properties." The state itself is ontic in the "lazy" (yet official) version of quantum mechanics, but the state is all you need to make predictions. If you simply define the predictions as the properties, how can predictions that come from state vectors be something that leads to the state vectors? Quantum mechanics is a theory about state vectors and operators, not about properties, so why even mention them at all when trying to understand quantum mechanics?

Fredrik said:
If we make the requirement that each ontic state must represent all the properties of the system, and leave the term "properties" undefined, then the PBR result can't be considered a theorem.
Yes, not defining properties is bothersome, and I feel it raises the spectre of circularity. If one says "you know what I mean by a property" and moves on, there is a danger that what they mean by a property is whatever it is that makes quantum mechanics work in experiments. Then when we note that state vectors are how quantum mechanics makes predictions, and we have assumed the predictions are right (to test what other theories are equivalent) and that what made the predictions right is the properties, then we have assumed that the means of making the predictions connects to the properties. Isn't that what is being claimed to be proven?
 
  • #123
Demystifier said:
I believe I have found a flaw in the paper.

In short, they try to show that there is no lambda satisfying certain properties. The problem is that the CRUCIAL property they assume is not even stated as being one of the properties, probably because they thought that property was "obvious". And that "obvious" property is today known as non-contextuality. Indeed, today it is well known that QM is NOT non-contextual. But a long time ago, it was not known. A long time ago, von Neumann found a "proof" that hidden variables (i.e., lambda) were impossible, but later it was realized that he tacitly assumed non-contextuality, so today it is known that his theorem only shows that non-contextual hidden variables are impossible. It seems that essentially the same mistake made long ago by von Neumann is now being repeated by those guys here.

Let me explain what makes me arrive at that conclusion. They first talk about ONE system and try to prove that there is no adequate lambda for such a system. But to prove that, they actually consider the case of TWO such systems. Initially this is not a problem, because initially the two systems are independent (see Fig. 1). But at the measurement, the two systems are brought together (Fig. 1), so the assumption of independence is no longer justified. Indeed, the states in Eq. (1) are ENTANGLED states, which correspond to systems that are not independent. Even though the systems were independent before the measurement, they became dependent in a measurement. The properties of the system change under measurement, which, by definition, is contextuality. And yet, the authors seem to tacitly (but erroneously) assume that the two systems should remain independent even at the measurement. In a contextual theory, the lambda at the measurement is NOT merely the collection of lambda_1 and lambda_2 before the measurement, which the authors don't seem to realize.
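For readers without the preprint at hand: as I recall it, the measurement basis of Eq. (1), for the two-qubit case built from the preparations |0> and |+>, is (up to labeling conventions)

[tex]
\begin{aligned}
|\xi_1\rangle &= \tfrac{1}{\sqrt{2}}\big(|0\rangle|1\rangle + |1\rangle|0\rangle\big),\\
|\xi_2\rangle &= \tfrac{1}{\sqrt{2}}\big(|0\rangle|-\rangle + |1\rangle|+\rangle\big),\\
|\xi_3\rangle &= \tfrac{1}{\sqrt{2}}\big(|+\rangle|1\rangle + |-\rangle|0\rangle\big),\\
|\xi_4\rangle &= \tfrac{1}{\sqrt{2}}\big(|+\rangle|-\rangle + |-\rangle|+\rangle\big),
\end{aligned}
[/tex]

where each [itex]|\xi_i\rangle[/itex] is orthogonal to exactly one of the four product preparations |0>|0>, |0>|+>, |+>|0>, |+>|+>. All four basis vectors are entangled, which is the point at issue here.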

I had a brief exchange of e-mails with the authors of that paper. After that, now I am even more convinced that I am right and they are wrong. Here are some crucial parts of that exchange, so that you can draw a conclusion by yourself:

> Prof. Barrett:
> Briefly, the vectors in Eq.(1) are entangled, yes but they don't represent
> the state of the system. They are the Hilbert space vectors which
> correspond to the four possible outcomes of the measurement.

Me (H.N.):
But in my view, the actual outcome of the measurement (i.e., one of those
in Eq. (1) ) DOES represent the state of the system.
Not the state before the measurement, but the state immediately after the
measurement. At the measurement the wave function "collapses",
either through a true von Neumann collapse, or through an effective
collapse as in the many-world interpretation or Bohmian interpretation.

.
.
.

> Prof. Barrett:
> The assumption is that the probabilities for the different outcomes of
> this procedure depend only on the physical properties of the systems at a
> time just before the procedure begins (along with the physical properties
> of the measuring device).

Me (H.N.):
Yes, I fully understand that if you take that assumption, you get
the conclusion you get. (In fact, that conclusion is not even
entirely new. For example, the Kochen-Specker theorem proves something
very similar.) But it is precisely that assumption that I don't
find justified. Any measurement involves an interaction, and any measurement
takes some time (during which decoherence occurs), so I don't think it is
justified to assume that the measurement does not
affect the probabilities for the different outcomes.
 
  • #124
In short, to make their results meaningful, the title of their paper should be changed to
"The quantum state cannot be interpreted non-contextually statistically"

But that is definitely not new!
 
  • #125
Demystifier said:
I had a brief exchange of e-mails with the authors of that paper. After that, now I am even more convinced that I am right and they are wrong. Here are some crucial parts of that exchange, so that you can draw a conclusion by yourself:

Thanks for sharing!

Demystifier said:
Me (H.N.):
But in my view, the actual outcome of the measurement (i.e., one of those
in Eq. (1) ) DOES represent the state of the system.
Not the state before the measurement, but the state immediately after the
measurement. At the measurement the wave function "collapses",
either through a true von Neumann collapse, or through an effective
collapse as in the many-world interpretation or Bohmian interpretation.

.
.
.

> Prof. Barrett:
> The assumption is that the probabilities for the different outcomes of
> this procedure depend only on the physical properties of the systems at a
> time just before the procedure begins (along with the physical properties
> of the measuring device).

[bolding mine]

I could be wrong, but to me, it looks like you are talking about different things?

Prof. Barrett talks about "probabilities for the different outcomes" and you about "the actual outcome of the measurement". This could never represent the same thing, could it?

How could a definite measurement represent a superposition or entanglement? When the measurement is completed, these things are "gone"... aren’t they?
 
  • #126
Demystifier said:
In short, to make their results meaningful, the title of their paper should be changed to
"The quantum state cannot be interpreted non-contextually statistically"

But that is definitely not new!


Would that be compatible with Matt Leifer's conclusions?
Conclusions
The PBR theorem rules out psi-epistemic models within the standard Bell framework for ontological models. The remaining options are to adopt psi-ontology, remain psi-epistemic and abandon realism, or remain psi-epistemic and abandon the Bell framework. [...]
 
  • #127
Ken G said:
But you don't know a point, you only know that the particle is in some box.
It sounds like you're just saying that a preparation procedure doesn't uniquely identify an ontic state. So it corresponds to a probability distribution of states. This means that to get the best predictions, with the best estimates of the margins of error, we should use the epistemic state defined by the preparation procedure to assign probabilities to measurement results.
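In symbols, this is the usual averaging rule of the ontological-models framework: a preparation procedure defines a distribution μ over [itex]\Lambda[/itex], and the predictions are

[tex]
\Pr(k \mid M) \;=\; \int_\Lambda P(k \mid \lambda, M)\, \mu(\lambda)\, d\lambda .
[/tex]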

Ken G said:
The issue is what does it mean when we invoke a conceptual device and call it a property. Does it mean that if we knew all the properties, we'd understand the system completely?
It occurred to me after I went to bed that one can interpret the definition of an "ontic model" as saying that to know "all the properties" is to have the information that determines the probabilities of all possible measurement results for all possible preparation procedures.

Ken G said:
So I ask the same question-- for an approximate theory to work well, why does this require that there be an exact theory underlying it?
I doubt that it's possible that there's no exact description of reality. I would expect the universe to be more chaotic if that was the case, too chaotic for us to exist. But I too have a problem with the idea that the ultimate description of reality is an ontic model. They are just too convenient. However, I don't think any of the articles we have discussed are assuming that the relevant ontic model is exactly right.

Ken G said:
The whole point of an epistemic treatment is to not pretend we know something we don't know-- like that epistemics is just a lack of information about ontics! If there was ever a lesson of quantum mechanics, it is that epistemics is something potentially much more general than just lack of information about ontics.
Agreed.

Ken G said:
Quantum mechanics is a mathematical structure "at least as good as classical mechanics."
This is more clear in the algebraic and quantum logic approaches to QM. They show that QM can be thought of as a generalization of probability theory that includes classical probability theory as a special case.
 
  • #128
DevilsAvocado said:
I could be wrong, but to me, it looks like you are talking about different things?
The three dots between the two texts indicate that one is not a response to the other; they correspond to independent pieces of a dialog.
 
  • #129
Fredrik said:
It sounds like you're just saying that a preparation procedure doesn't uniquely identify a ontic state. So it corresponds to a probability distribution of states. This means that to get the best predictions with the best estimates of the margins of error, we should use the epistemic state defined by the preparation procedure to assign probabilities to measurement results.
Yes, and we should be aware that inside that "margin of error" might be something quite a bit more than just error and uncertainty-- there could be a whole new theory living down there, which we never dreamed of except for a few troubling questions about the theory we already had-- as was true with classical mechanics giving rise to quantum mechanics. That's why I don't understand why we should care what hidden variables theories could make all the same predictions as quantum mechanics-- what we actually want are hidden variables theories that make different predictions, we just want them to predict the same things in the arena that has been tested. That's why I think the real value of the PBR theorem will only be realized if it motivates experiments to look for cracks in quantum mechanical predictions. After all, isn't the wave function a "hidden variable" underlying classical mechanics?
Fredrik said:
It occurred to me after I went to bed that one can interpret the definition of an "ontic model" as saying that to know "all the properties" is to have the information that determines the probabilities of all possible measurement results for all possible preparation procedures.
Yes, it does seem to have some connection with a concept of "complete information." They seem to be saying, let's assume that such "complete information" is possible, and then ask if the wave function is one of the things that would appear as a primitive element of that complete information, something that is one of the ontic properties rather than something that emerges from the ontic properties but is not itself ontic. I'm not surprised that if there are such ontic properties, the wave function is one of them, but I just don't see why assuming there are ontic properties tells us something fundamental about quantum mechanics-- because quantum mechanics doesn't require that there be ontic properties, any more than classical mechanics did (remember, classical mechanics is still widely used, even now that we know it is not based on ontic properties at all). Theories are built top-down, not bottom-up, and they only penetrate so far. We only know everything from our theory on up, but never anything below our theory. Why does being a "realist" require ignoring everything that physics has ever demonstrably been and done, and pretending it was all about something below what physics has ever been or done?
Fredrik said:
I doubt that it's possible that there's no exact description of reality. I would expect the universe to be more chaotic if that was the case, too chaotic for us to exist. But I too have a problem with the idea that the ultimate description of reality is an ontic model. They are just too convenient.
What is the difference between an exact description and an ontic model? And aren't we the children of chaos? I have the opposite view-- I think that any universe that has an exact description is sterile and uninteresting, much like a mathematical system that cannot support a Gödel statement.

Fredrik said:
However, I don't think any of the articles we have discussed are assuming that the relevant ontic model is exactly right.
They don't assume the theory is exactly right, but they do assume that the outcomes are determined by ontic properties. They seem to start from the same perspective that you are stating-- that there has to be something, call it properties, that determines what happens (perhaps only statistically; what that means is unclear), and that can be expressed as a mathematical structure. That seems to be a key assumption-- the structure is what determines what happens. If the mathematical structure is only approximate, how can it determine what happens? It must be exact to claim that outcomes can be traced back to properties, even if only statistically exact, mustn't it?
 
  • #130
What is bothering me is that epistemic view where all scientific theories are, and will forever be, forbidden to make ontological claims... That's not science but a reversed dogmatism: we are sure that we will never know for sure... Note the paradox...
In fact, there are all sorts of scientific theories, some well founded and others a lot less so... Some of their results could be considered scientific facts and others not... We can discuss forever the theory of everything that explains the epiphany of the universe, but no one can seriously deny our actual knowledge of the structure of the atom, for example... Even if we don't understand how something, such as a particle, could be both a wave and a point-like object...
There is a serious misunderstanding of what science is: an experience of knowledge between a group of subjects and a structure of objects... How some of us conclude that we don't even study the objects but are only constructing theories (Hell, about what?) is a mystery to me... They urge science to give them a full understanding of the universe before giving it the right to make ontological claims... Which is not reasonable...
When something looks like an orange, smells like an orange, tastes like an orange, and has the DNA of an orange... It is an orange...
 
  • #131
Demystifier said:
The three dots between the two texts indicate that one is not a response to the other; they correspond to independent pieces of a dialog.

Sure, but despite this little 'dot technicality', you two seem to be talking about completely different things. And it doesn't get any better when you finish up by changing your initial standpoint:
the actual outcome of the measurement [...] *DOES represent* the state of the system

To:
Any measurement involves an interaction, and any measurement takes some time (during which decoherence occurs), so I don't think it is justified to assume that the measurement does not *affect the probabilities* for the different outcomes.

In this situation, the claim that Prof. Barrett repeated "the von Neumann mistake" doesn't fully convince me.

(Shouldn’t a professor be aware of Bohm’s theory and Bell’s work? Sounds strange...)
 
  • #132
Ken G said:
What is the difference between an exact description and an ontic model?
An ontic model can make predictions that disagree with experiments. This would make it wrong (even if it's a good theory). An exact description* can't be wrong, but it's also not required to make any predictions. This would disqualify it from being called a theory.

*) Note that this was just a term I made up for that post.
 
  • #133
Fredrik said:
An ontic model can make predictions that disagree with experiments. This would make it wrong (even if it's a good theory). An exact description* can't be wrong, but it's also not required to make any predictions. This would disqualify it from being called a theory.

*) Note that this was just a term I made up for that post.

In the Jaynes paper (http://bayes.wustl.edu/etj/articles/prob.in.qm.pdf), cited by the above article, this is made quite clear in the section titled "How would QM be different" (p. 9).

For example, if we expand ψ in the energy representation, [itex]\psi(x,t) = \sum_n a_n(t)\,u_n(x)[/itex], the physical situation cannot be described merely as

"the system may be in state [itex]u_1(x)[/itex] with probability [itex]p_1 = |a_1|^2[/itex]; or it may be in state [itex]u_2(x)[/itex] with probability [itex]p_2 = |a_2|^2[/itex], and we do not know which of these is the true state".

...

[Bohr] would never say (as some of his unperceptive disciples did) that [itex]|a_n|^2[/itex] is the probability that an atom is "in" the n'th state, which would be an unjustified ontological statement; rather, he would say that [itex]|a_n|^2[/itex] is the probability that, if we measure its energy, we shall find the value corresponding to the n'th state.

...

But notice that there is nothing conceptually disturbing in the statement that a vibrating bell is in a linear combination of two vibration modes with a definite relative phase; we just interpret the (mode amplitudes)^2 as energies, not probabilities. So it is the way we look at quantum theory, trying to interpret [itex]|\psi|^2[/itex] directly as a probability density, that is causing the difficulty.

That contrast shows the difference in how ψ gets interpreted (ontic vs. epistemic): the statement that the atom is "in" the n'th state is the ontic interpretation, and Bohr's statement, that [itex]|a_n|^2[/itex] is the probability of finding the corresponding value if we measure, is the epistemic interpretation.
 
  • #134
Ken G brought up some interesting issues about why we should care about hidden-variable theories which merely reproduce the predictions of QM. I certainly concur that such a theory, lacking any new empirical content, would lack scientific value as a new theory. That does not mean it would be without value, though. Even proving that such a theory is in principle possible, even lacking a specific theory, would be of some value, much as the no-go theorems themselves have a degree of value. Certainly finding cracks in the predictions of QM is the ultimate game changer, but there are whole classes of possibilities which extend the empirical content and/or cohesiveness between QM and GR and do not involve invalidating any predictions QM is presently capable of. These would certainly extend the scientific value, with or without explicit cracks in the predictions of QM as presently formulated, well beyond simple equivalency. Now, about the issues the PBR article raises wrt this.

Traditionally it has been taught that the statistical nature of QM is fundamentally different from the randomness associated with classical physics. Whereas in the latter case randomness is simply a product of trading detailed information about positions and momenta for mean values, in the QM case no underlying ontic states have ever been found, even in principle, from which the empirical content of QM could be derived, much less ones that would wed QM and GR, provide equivalent empirical content combined with new empirical content, or expose empirically invalid content in the present formulation. It was this fundamentally different character of randomness associated with QM, distinct from classical randomness, that the PBR article took aim at. About those realists who interpret quantum randomness in a quasi-classical sense in their personal interpretations, the PBR article makes no explicit statement one way or the other. In effect, when the PBR article states: "The quantum state cannot be interpreted statistically", it is equivalent to a claim stating: Quantum randomness is not as fundamentally different from classical randomness as traditionally claimed. The PBR definition of "statistical" then both justifies and leaves untouched the definition of "statistical" as used by at least some realists in the field.

It boils down to a distinction between a causally independent versus a causally dependent concept of randomness. It is unfortunate that the terminology for this prototypical distinction is given as quantum versus classical randomness. The unfortunate terminology makes it possible to misrepresent an author's claim that quantum statistics has some limited classical characteristics, by a strawman argument that supplants the author's use of the term "quantum statistics" with the brand of randomness that the prototypical term traditionally implies academically. Francis Bacon, anyone? The PBR article, in effect, is not refuting a statistical interpretation of QM in general; it is merely attempting to refute the prototypical characterization of "statistical" that is traditionally implied by the term "quantum statistics", while using and denying that term purely within the context of that traditional (quantum) interpretation.

Consider a prototypical classical variable with a statistical character, such as temperature. Temperature is in fact a contextual variable. Given only a temperature, it is fundamentally impossible to derive a complete specification of the ontic positions and momenta resulting in that temperature. In fact the number of possible ontic states grows exponentially with the number of degrees of freedom the system possesses. Yet from a complete ontic specification of a classical system it is quite trivial to derive the temperature. QM observables limit what is measurable even more. Suppose that, rather than measuring temperature on a scale, we were limited to temperature measurements which could only determine whether the temperature was above or below some threshold, set in some ill-defined way by the choice of measuring instrument. We would then be stuck with temperature observables that imply temperature has an indeterminate value before measurement but a binary value of either |0> or |+> after measurement.
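To make the many-to-one character of temperature concrete, here is a minimal Python sketch (my own toy illustration with made-up numbers, not anything from the thread): distinct ontic microstates yield the identical temperature, and a thresholded instrument discards even more information.

```python
import numpy as np

rng = np.random.default_rng(0)

def temperature(v, k_B=1.0, m=1.0):
    """Kinetic temperature of a 1D ideal gas: m <v^2> = k_B T."""
    return m * np.mean(v**2) / k_B

# Two entirely different ontic microstates (molecular velocities)...
state_a = rng.normal(0.0, 1.0, size=10_000)
state_b = rng.permutation(state_a) * rng.choice([-1.0, 1.0], size=10_000)

# ...map to exactly the same epistemic state variable, since neither
# reordering nor sign flips change the mean of the squared velocities.
print(temperature(state_a), temperature(state_b))  # identical, ~1.0

# A thresholded "instrument" discards even more: only above/below
# survives, a binary outcome standing in for an exponentially large
# set of compatible microstates.
def threshold_measurement(v, T_threshold=1.0):
    return int(temperature(v) > T_threshold)
```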

Of course, in the classical regime we can make a coordinate choice that for most practical purposes provides an independent framework for tracking the positions and momenta of the presumed ontic elements, with relativity to correct the incongruence in the more general case. Hence showing classical consistency between the presumed ontic states (such as positions and momenta) before and after measurements is trivial. Yet, given the malleability of spacetime as defined by GR, it is not unreasonable to presume that, IF nature has a fundamental ontic structure, then the very definitions of position and momentum are dynamically generated in a manner not unlike temperature. How then do you define the position and momentum of a thing whose dynamics define the very observables used to characterize it? It would entail that on a fine scale positions and momenta fluctuate in the same way the temperature of a gas fluctuates on a sufficiently fine scale. Hence, on a fine scale, the position of an ontic element at rest with respect to an observer could in principle fluctuate wrt that observer, as a result of the dynamic variances in an ensemble of ontic elements in the vicinity of that element.

This is not a claim or a model meant to explain anything. It is merely an analogy, an attempt to broaden the conceptual possibilities of what an observable can represent, and to get past the perception that an ontic model entails a linear or reversibly derivable relation between observables and ontic substructures. As noted, the classical observable [temperature] is likewise insufficient to establish the ontic state that gives rise to it. The debate on the ontic nature of observables is much starker in GR and in phenomena like virtual particles, interaction-free measurements, etc., than it is in the QM formalism. For instance, it is often denied that the vacuum represents a "nothing" by spelling out all the virtual entities zipping in and out of existence within that "nothing"; sort of like affirming the very thing being denied in the same breath. The claim that ψ has an ontic substructure is tantamount to the claim that the vacuum has an ontic substructure independent of the virtual particles in it. The irreversible failings of classical physics entail giving up the idea that any distinct observable is associated with any distinct singular ontic element, in a manner analogous to the way the temperature of a medium is not associated with any particular ontic element in that medium. Space, time, positions, and momenta included.

Perhaps there is no ontic substructure of this form to be described, even in principle. But to deny the possibility simply on the grounds that observables obviously don't possess classical Newtonian relationships with any presumed ontic elements undermines a whole world of scientific possibilities, which may or may not include the kind of empirical updates Ken G wrote about. The ultimate scientific value will nonetheless depend explicitly on the empirical value provided, not the philosophical value. The PBR article at the very least raises the bar on the types of contextual constructs that those working on foundational issues can attempt to work with.
 
  • #135
Nice post my_wan, but I have to say that my take on this detail is the opposite of yours:
my_wan said:
In effect, when the PBR article states: "The quantum state cannot be interpreted statistically", it is equivalent to a claim stating: Quantum randomness is not as fundamentally different from classical randomness as traditionally claimed.
If state vectors had corresponded to (overlapping) probability distributions of ontic states of a theory in which the ontic states determine all measurement results (rather than just their probabilities), then quantum probabilities would have been exactly the same as classical probabilities.

If state vectors had corresponded to (overlapping) probability distributions of ontic states of a theory in which the ontic states determine the probabilities of all measurement results, then quantum probabilities would have been similar to, but not exactly the same as, classical probabilities.

The theorem rules out both of these options, since they are just different kinds of ψ-epistemic ontic models. So I would say that this leaves us with the possibility that quantum probabilities are very different from classical probabilities.
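For concreteness, here is a minimal structural sketch in Python of the two kinds of model described above (my own toy construction with arbitrary numbers; it is deliberately not claimed to reproduce QM, which is exactly what the theorem forbids): two preparations correspond to overlapping distributions over an ontic state λ, and λ either fixes the outcome or only fixes its probability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ontic state space: lambda is a real number in [0, 1]. The two
# preparations correspond to OVERLAPPING epistemic distributions.
def sample_lambda(preparation):
    if preparation == "psi_0":
        return rng.uniform(0.0, 0.6)  # support [0.0, 0.6]
    return rng.uniform(0.4, 1.0)      # support [0.4, 1.0]; overlap [0.4, 0.6]

# Kind 1: lambda determines the OUTCOME (deterministic response).
def outcome_deterministic(lam):
    return int(lam > 0.5)

# Kind 2: lambda only determines the PROBABILITIES of the outcomes.
def outcome_probabilistic(lam):
    return int(rng.random() < lam)  # P(outcome = 1 | lambda) = lambda

lam = sample_lambda("psi_0")
print(lam, outcome_deterministic(lam), outcome_probabilistic(lam))
```

Both variants count as ψ-epistemic because a λ drawn from the overlap region is compatible with either preparation; that shared structure is what the theorem targets.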
 
  • #136
Fredrik said:
If state vectors had corresponded to (overlapping) probability distributions of ontic states of a theory in which the ontic states determine all measurement results (rather than just their probabilities), then quantum probabilities would have been exactly the same as classical probabilities.

If state vectors had corresponded to (overlapping) probability distributions of ontic states of a theory in which the ontic states determine the probabilities of all measurement results, then quantum probabilities would have been similar to, but not exactly the same as, classical probabilities.
Here I equivocated, not due to a perceived variance in the symmetries associated with classical versus quantum probabilities, but due to a more general uncertainty about the type and nature of whatever presumed ontic variables might be associated with, or responsible for, ψ. Obviously they don't have precisely the same character as classical ontic variables, else quantum physics would be classical physics. Nor did the PBR article make such a claim, else rather than attempting to prove a theorem they would simply have defined such variables and derived QM from them. This is of course an issue independent of the question of whether the probabilities themselves are fundamentally different in the quantum and classical regimes, something I am still searching for good criticisms of wrt the PBR article, my prejudices notwithstanding. Although I think the article makes a valid point, I think the strength or meaning of that point is more limited than I would like it to be, or than many people will likely try to make it out to be.

Fredrik said:
The theorem rules out both of these options, since they are just different kinds of ψ-epistemic ontic models. So I would say that this leaves us with the possibility that quantum probabilities are very different from classical probabilities.
Let's look at the notion of a ψ-epistemic ontic model in the context of the PBR article. In a prior post DevilsAvocado summed it up this way (note the qualifier: standard Bell framework):
DevilsAvocado said:
epistemic state = state of knowledge
ontic state = state of reality

  1. ψ-epistemic: Wavefunctions are epistemic and there is some underlying ontic state.
  2. ψ-epistemic: Wavefunctions are epistemic, but there is no deeper underlying reality.
  3. ψ-ontic: Wavefunctions are ontic.
Conclusions
The PBR theorem rules out psi-epistemic models within the standard Bell framework for ontological models. The remaining options are to adopt psi-ontology, remain psi-epistemic and abandon realism, or remain psi-epistemic and abandon the Bell framework.[...]​

Reading Matt Leifer's blog, from which the above was pulled, would be useful in the context to come.

Now ask yourself whether temperature is a classical epistemic or ontic variable. Though it is the product of presumably ontic entities, it is a variable that depends neither on the state of any particular ontic entity nor on a single state of those entities as a whole. It is an epistemic state variable, in spite of having a very real existence. In this sense I would say it qualifies as "epistemic ontic", since it is an epistemic variable whose existence is contingent on an underlying ontic group state. Momentum is another epistemic variable, since the self-referential momentum of any ontic entity (lacking internal dynamics) is precisely zero. That is the whole motivation behind relativity.

Ironically, viewed in this way, by expecting QM to conform to the ontic character of classical physics, we are using a prototypically epistemic variable [momentum] as a foundational variable to which the presumed ontic construct must conform, rather than the other way around, as is typical of epistemic variables. Epistemic variables exist only in contextual relations between ontic variables. The foundational question is whether nature is defined by these epistemic variables all the way down, or whether the buck stops at some set of ontic entities somewhere down the hierarchy.

Now look at a quote from the PBR article:
If the quantum state is a physical property of the system (the first view), then either [itex]\lambda[/itex] is identical with [itex]|\phi_0>[/itex] or [itex]|\phi_1>[/itex], or [itex]\lambda[/itex] consists of [itex]|\phi_0>[/itex] or [itex]|\phi_1>[/itex], supplemented with values for additional variables not described by quantum theory. Either way, *the quantum state is uniquely determined by [itex]\lambda[/itex]*.
Bolding added. Keep in mind in the following text that it said that the quantum state is uniquely determined by [itex]\lambda[/itex], and not necessarily that [itex]\lambda[/itex] is uniquely determined by the quantum state.

In effect the bolded part explicitly allows the possibility that ψ constitutes an epistemic variable, in the sense that temperature and momentum are epistemic variables, whereas the theorem pertains only to the character of [itex]\lambda[/itex]. If ψ and [itex]\lambda[/itex] were interchangeable in terms of what the theorem pertains to, there would have been no need to leave open the possibility that ψ may or may not be "supplemented with values for additional variables not described by quantum theory". Hence, wrt option 1 as posted by DevilsAvocado, "ψ-epistemic: Wavefunctions are epistemic and there is some underlying ontic state", the PBR article is moot. This particular form of ψ-epistemic, i.e. ψ-epistemic ontic, is in fact allowed but not required by the article's theorem.

So what specifically does the article's theorem take aim at? This I previously attempted to reframe as a causally independent versus a causally dependent concept of randomness, whereas the traditional prototypical language, which the article simply accepted as the de facto meaning without comment, in spite of many realists being at odds with it, uses the terms quantum versus classical randomness. Hence the title claim "cannot be interpreted statistically" means statistically in the quantum prototype sense, not the classical prototype sense. More meaningfully, the statistical character of QM cannot be interpreted as a causally independent form of randomness, just as classical randomness is not a causally independent form of randomness.

This is why I previously said that the theorem is much more limited in scope than some will try to make it out to be. This is also (more or less) why Matt Leifer, an epistemicist, does not have any real issues with the article, and even stated (out of context): "I regard this as the most important result in quantum foundations in the past couple of years". In context Leifer was quite careful not to overstate what the theorem actually entails. Ruling out "ψ-epistemic ontic" models of the kind described above, as opposed to the purely ψ-epistemic models defined by option 2 in DevilsAvocado's post, is not among the claims the theorem has sufficient scope to make.
 
  • #137
DevilsAvocado said:
Would that be compatible with Matt Leifer's conclusions?
Thank you very much for that link. It has been very useful, and now I believe I understand the content of the PBR theorem much better. Here is my summary and conclusion, which I have written there:

In simple terms, the PBR theorem claims the following:
If the true reality “lambda” is known (whatever it is), then from this knowledge one can calculate the wave function.

However, it does not imply that the wave function itself is real. Let me use a classical analogy. Here "lambda" is the position of a point-particle. The analogue of the wave function is a box, say one of the four boxes drawn in one of Matt's nice pictures. From the position of the particle you know exactly which one of the boxes contains the particle. And yet, this does not imply that the box is real. The box can be a purely imagined thing, useful as an epistemic tool to characterize the region in which the particle is located. It is something attributed to a single particle (not to a statistical ensemble), but it is still only an epistemic tool.
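A minimal sketch of this analogy in Python (my own toy code; the four boxes and the unit-length intervals are assumptions made purely for illustration): the box label is uniquely determined by lambda, as the PBR condition requires, yet the box remains only a coarse, epistemic description of lambda.

```python
# Toy version of the analogy: lambda = the particle's position in [0, 4),
# and the "wave function" analogue is the box containing the particle.
def box_of(lam: float) -> int:
    """Uniquely determined by lambda (the PBR condition), but many
    lambdas share one box, so the box does not determine lambda."""
    assert 0.0 <= lam < 4.0
    return int(lam)  # box 0 covers [0,1), box 1 covers [1,2), ...

print(box_of(2.7))  # -> 2: knowing lambda fixes the box...
print(box_of(2.1))  # -> 2: ...but knowing the box does not fix lambda.
```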
 
  • #138
Demystifier said:
Thank you very much for that link. It has been very useful, and now I believe I understand the content of the PBR theorem much better. Here is my summary and conclusion, which I have written there:

In simple terms, the PBR theorem claims the following:
If the true reality “lambda” is known (whatever it is), then from this knowledge one can calculate the wave function.

However, it does not imply that the wave function itself is real. Let me use a classical analogy. Here "lambda" is the position of a point-particle. The analogue of the wave function is a box, say one of the four boxes drawn in one of Matt's nice pictures. From the position of the particle you know exactly which one of the boxes contains the particle. And yet, this does not imply that the box is real. The box can be a purely imagined thing, useful as an epistemic tool to characterize the region in which the particle is located. It is something attributed to a single particle (not to a statistical ensemble), but it is still only an epistemic tool.

Thanks very much DM, this makes sense.

Do you understand why they are 'focusing' on zero probabilities?

"Finally, the argument so far uses the fact that quantum probabilities are sometimes exactly zero."

And in the first example (FIG 1) they are measuring NOT values:

[image: FIG 1 from the PBR paper]


I had this "nutty guess" that they found a way to show that zero probability doesn't mean "nothing" (in terms of probabilities), but something in terms of an actual measurement resulting in 0...?? not(1)

Or is this just completely nuts... :blushing:


P.S. Credit for the link goes to bohm2.
 
  • #139
DevilsAvocado said:
Do you understand why they are 'focusing' on zero probabilities?
Yes. When the probability of something is 0 (or 1), you know WITH CERTAINTY that the system does not (or does) have a certain property. But then you can ascribe this to a SINGLE system; you can say that this one single system does not (or does) have that property. You don't need a statistical ensemble of many systems to make this claim meaningful. In this sense, you can show that what you are talking about is something about a single system, not merely about a statistical ensemble. That is what their theorem claims for the quantum state.
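To see the zero-probability structure explicitly, here is a small numpy check (my own sketch, using the two-qubit entangled measurement basis given in the PBR paper for the states |0> and |+>): each of the four outcomes has probability exactly zero for exactly one of the four preparations, which is what licenses the "not ..." labels in their FIG 1.

```python
import numpy as np

# Single-qubit states
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

kron = np.kron  # tensor product of the two independently prepared systems

# The four preparations: |0>|0>, |0>|+>, |+>|0>, |+>|+>
preps = {"00": kron(ket0, ket0), "0+": kron(ket0, plus),
         "+0": kron(plus, ket0), "++": kron(plus, plus)}

# The entangled measurement basis from the PBR paper
xi = [(kron(ket0, ket1) + kron(ket1, ket0)) / np.sqrt(2),
      (kron(ket0, minus) + kron(ket1, plus)) / np.sqrt(2),
      (kron(plus, ket1) + kron(minus, ket0)) / np.sqrt(2),
      (kron(plus, minus) + kron(minus, plus)) / np.sqrt(2)]

# Born-rule probabilities |<xi_k|psi>|^2: outcome k has probability
# exactly 0 for the k'th preparation ("not 00", "not 0+", ...).
for name, psi in preps.items():
    probs = [abs(np.dot(x, psi)) ** 2 for x in xi]
    print(name, np.round(probs, 3))
```

Running this prints one exact zero per row, e.g. probabilities (0, 1/4, 1/4, 1/2) for the |0>|0> preparation, so each outcome certifies, for a single run, which preparation did NOT occur.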
 
  • #140
my_wan said:
Let's look at the notion of a ψ-epistemic ontic model in the context of the PBR article. In a prior post DevilsAvocado summed it up this way (note the qualifier: standard Bell framework):
As you said, this is from Matt Leifer's blog. PBR doesn't seem to acknowledge option 2 at all. So I would describe their conclusion as "option 1 contradicts QM, and therefore contradicts experiments".

my_wan said:
Now look at a quote from the PBR article:

Bolding added. Keep in mind in the following text that it said that the quantum state is uniquely determined by [itex]\lambda[/itex], and not necessarily that [itex]\lambda[/itex] is uniquely determined by the quantum state.

In effect the bolded part explicitly allows the possibility that ψ constitutes an epistemic variable, in the sense that temperature and momentum are epistemic variables, whereas the theorem pertains only to the character of [itex]\lambda[/itex].
The bolded part only says that a ψ-ontic ontic model (i.e. one that's not ψ-epistemic) need not be ψ-complete. (See the HS article for the terminology, but note that they used the term "hidden-variable theory" instead of "ontic model".) The statement "the quantum state is uniquely determined by [itex]\lambda[/itex]" applies to ψ-ontic ontic models. The term "ψ-epistemic" is defined by the requirement that ψ is not uniquely determined by λ.

my_wan said:
Hence, wrt option 1 as posted by DevilsAvocado, "ψ-epistemic: Wavefunctions are epistemic and there is some underlying ontic state", the PBR article is moot. This particular form of ψ-epistemic, i.e. ψ-epistemic ontic, is in fact allowed but not required by the article's theorem.
I would say that option 1 is what they're ruling out. What you describe as "this particular form of ψ-epistemic" is (if I understand you correctly) what HS call "ψ-supplemented". The ψ-supplemented ontic models are by definition not ψ-epistemic.
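A schematic rendering of the HS taxonomy in Python (my own illustration, following only the definitions just stated; the class and function names are made up): in any ψ-ontic model, whether ψ-complete or ψ-supplemented, ψ can be read off from λ, while in a ψ-epistemic model no such function exists.

```python
from dataclasses import dataclass

# Schematic HS-style taxonomy (names made up for illustration).

@dataclass
class PsiComplete:      # lambda IS psi and nothing more
    psi: str

@dataclass
class PsiSupplemented:  # lambda = (psi, extra hidden variables)
    psi: str
    omega: float

def psi_of(lam):
    """In any psi-ontic model, psi is a function of lambda."""
    return lam.psi

print(psi_of(PsiComplete(psi="|+>")))                  # -> "|+>"
print(psi_of(PsiSupplemented(psi="|0>", omega=0.42)))  # -> "|0>"

# For a psi-epistemic model no such function exists: distinct psi's
# (e.g. |0> and |+>) can be compatible with one and the same lambda,
# so psi_of would be ill-defined on the overlap region.
```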
 
