The quantum state cannot be interpreted statistically?

In summary, the Pusey, Barrett, Rudolph (PBR) preprint of Nov 11th discusses the differing views on the interpretation of quantum states and argues that the statistical interpretation is inconsistent with the predictions of quantum theory. The authors suggest that testing these predictions could reveal whether distinct quantum states correspond to physically distinct states of reality. This preprint has attracted interest and discussion in the scientific community.
  • #36
OK, new summary. Simplified.
They are comparing two different schools of thought:
  1. A state vector represents the properties of the system.
  2. A state vector represents the statistical properties of an ensemble of identically prepared systems, and does not also represent the properties of a single system.
Their argument against the second view goes roughly like this:

Suppose that there's a theory that's at least as good as QM, in which a mathematical object λ represents all the properties of the system. Suppose that view 2 above is the correct one. Then λ doesn't determine the probabilities of all possible results of measurements. Yada-yada-yada. Contradiction! Therefore view 2 is false.​
I say that
  • The entire article rests on the validity of the statement in brown, which says that view 2 somehow implies that "all the properties" are insufficient to determine the probabilities. (If that's true, then why would anyone call them "all the properties"?)
  • The brown statement is a non sequitur (a conclusion that doesn't follow from the premises).
  • The only argument the article offers in support of the brown claim doesn't support the brown claim at all.
Am I wrong about something?
 
  • #37
Fredrik said:
OK, new summary. Simplified.
They are comparing two different schools of thought:
  1. A state vector represents the properties of the system.
  2. A state vector represents the statistical properties of an ensemble of identically prepared systems, and does not also represent the properties of a single system.
Their argument against the second view goes roughly like this:

Suppose that there's a theory that's at least as good as QM, in which a mathematical object λ represents all the properties of the system. Suppose that view 2 above is the correct one. Then λ doesn't determine the probabilities of all possible results of measurements. Yada-yada-yada. Contradiction! Therefore view 2 is false.​
I say that
  • The entire article rests on the validity of the statement in brown, which says that view 2 somehow implies that "all the properties" are insufficient to determine the probabilities. (If that's true, then why would anyone call them "all the properties"?)
  • The brown statement is a non sequitur (a conclusion that doesn't follow from the premises).
  • The only argument the article offers in support of the brown claim doesn't support the brown claim at all.
Am I wrong about something?

I am probably missing something, but isn't the statement in brown what the difference between the two schools of thought is?
 
  • #38
martinbn said:
I am probably missing something, but isn't the statement in brown what the difference between the two schools of thought is?
That's what the authors of the article are saying. To me it seems like a completely unrelated assumption. Maybe I'm missing something.

I would say that the difference is that the second school of thought asserts that a complete specification of the preparation procedure determines the probabilities, but is insufficient to determine the properties (if it makes sense to talk about properties at all).
 
  • #39
They are comparing two different schools of thought:

  1. A state vector represents the properties of the system.
  2. A state vector represents the statistical properties of an ensemble of identically prepared systems, and does not also represent the properties of a single system.

Isn't the bolded part the problem or am I missing something?
 
  • #40
so they are going for the realist position, is that correct ?
 
  • #41
bohm2 said:
Isn't the bolded part the difference or am I missing something?
That's definitely the difference. :smile: So no, you're not missing anything. But since the article claims that this difference changes the truth value of the statement
"The properties determine the probabilities."​
from true to false, the story doesn't end with that observation.
 
  • #42
Fredrik said:
Am I wrong about something?

I don’t know because I haven’t read the full paper yet (isn’t this just typical :smile:), but is this really about ensembles (and the Ensemble interpretation)? Isn’t it about "state-as-probability" vs. "state-as-physical"?

I’ve cheated, and consumed the 'condensed version' by David Wallace (thanks inflector) and it looks convincing to me:
http://blogs.discovermagazine.com/c...lace-on-the-physicality-of-the-quantum-state/

Why the quantum state isn’t (straightforwardly) probabilistic
...
Consider, for instance, a very simple interference experiment. We split a laser beam into two beams (Beam 1 and Beam 2, say) with a half-silvered mirror. We bring the beams back together at another such mirror and allow them to interfere. The resultant light ends up being split between (say) Output Path A and Output Path B, and we see how much light ends up at each. It’s well known that we can tune the two beams to get any result we like – all the light at A, all of it at B, or anything in between. It’s also well known that if we block one of the beams, we always get the same result – half the light at A, half the light at B. And finally, it’s well known that these results persist even if we turn the laser so far down that only one photon passes through at a time.

According to quantum mechanics, we should represent the state of each photon, as it passes through the system, as a superposition of “photon in Beam 1” and “photon in Beam 2”. According to the “state as physical” view, this is just a strange kind of non-local state that a photon can be in. But on the “state as probability” view, it seems to be shorthand for “the photon is either in beam 1 or beam 2, with equal probability of each”. And that can’t be correct. For if the photon is in beam 1 (and so, according to quantum physics, described by a non-superposition state, or at least not by a superposition of beam states) we know we get result A half the time, result B half the time. And if the photon is in beam 2, we also know that we get result A half the time, result B half the time. So whichever beam it’s in, we should get result A half the time and result B half the time. And of course, we don’t. So, just by elementary reasoning – I haven’t even had to talk about probabilities – we seem to rule out the “state-as-probability” rule.

Indeed, we seem to be able to see, pretty directly, that something goes down each beam. If I insert an appropriate phase factor into one of the beams – either one of the beams – I can change things from “every photon ends up at A” to “every photon ends up at B”. In other words, things happening to either beam affect physical outcomes. It’s hard at best to see how to make sense of this unless both beams are being probed by physical “stuff” on every run of the experiment. That seems pretty definitively to support the idea that the superposition is somehow physical.
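
To make Wallace's argument concrete, here is a quick numerical sketch (my own illustration, not from his post): the 2x2 beamsplitter matrix and the relative phase phi are just the standard textbook choices. The superposition sweeps from "all at A" to "all at B" as the phase changes, while the naive "it's really in one beam or the other" reading is stuck at 50/50.

[code]
import numpy as np

# Beam states: |1> = (1,0), |2> = (0,1); outputs A, B after a 50/50 beamsplitter.
BS = np.array([[1, 1],
               [1, -1]]) / np.sqrt(2)   # standard 50/50 beamsplitter matrix

def output_probs(state):
    out = BS @ state
    return np.abs(out) ** 2             # probabilities at outputs A and B

for phi in [0.0, np.pi / 2, np.pi]:
    # "State as physical": the photon is in a superposition of both beams.
    psi = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
    p_super = output_probs(psi)

    # "State as probability" (naive reading): it's really in beam 1 OR beam 2,
    # 50/50, so average the two definite-beam predictions.
    p_mix = 0.5 * output_probs(np.array([1.0, 0.0])) \
          + 0.5 * output_probs(np.array([0.0, 1.0]))

    print(f"phi = {phi:.2f}  superposition -> A,B = {np.round(p_super, 3)}"
          f"   mixture -> A,B = {np.round(p_mix, 3)}")

# The superposition goes from "all at A" (phi = 0) to "all at B" (phi = pi),
# while the mixture stays at 50/50 regardless of phi -- which is Wallace's point.
[/code]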
 
  • #43
DevilsAvocado said:
I don’t know because I haven’t read the full paper yet (isn’t this just typical :smile:), but is this really about ensembles (and the Ensemble interpretation)? Isn’t it about "state-as-probability" vs. "state-as-physical"?
That's the same thing.

"state-as-probability" = "ensemble interpretation" = "statistical interpretation" = "Copenhagen interpretation" (although some people will insist that the CI belongs on the "state-as-physical" side).

The stuff I'm talking about is covered on the first one and a half pages, so you don't have to read the whole thing. I haven't, and I'm not going to unless someone can convince me that I'm wrong.

Wallace said:
But on the “state as probability” view, it seems to be shorthand for “the photon is either in beam 1 or beam 2, with equal probability of each”.
Maybe it seems that way, but this is not implied by my definition of the second "school of thought" above.

This is however a point that different statistical/ensemble interpretations disagree about. Ballentine's 1970 article "The statistical interpretation of quantum mechanics" explicitly made the assumption that all particles have well-defined positions, even when their wavefunctions are spread out. That assumption is notably absent from Ballentine's recent textbook, so maybe even he has abandoned that view.
 
  • #44
I'll try again..
Fredrik said:
The entire article rests on the validity on the statement in brown, which says that view 2 somehow implies that "all the properties" are insufficient to determine the probabilities. (If that's true, then why would anyone call them "all the properties"?)

The way I read it, what they mean by "all the properties" is some set of hidden variables or similar that are sufficient to determine the outcome of any measurement. The "real" state is represented by lambda, and the quantum state is just a classical statistical distribution over the various "lambda states". It's not a classical analogy, it is classical. Although whatever goes into putting the system into a particular lambda state is not necessarily deterministic or local or whatever; only point is that QM tells us that certain processes will allow us to prepare states with certain distributions.

So knowing lambda doesn't tell you how you got there. A coin's 'real' states could be 'heads' or 'tails' but measuring 'heads' doesn't tell you if you got it there by putting it in heads (process 1) or a coin-toss (process 2). All you know from QM is that process 1 will always cause you to measure 'heads' and process 2 results in either 'heads' or 'tails' with some associated probabilities.
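
Here is that coin picture as a toy simulation (my own sketch, nothing from the paper): two preparation procedures give different distributions over the 'real' states, the measurement responds only to the real state, and knowing the real state alone does not tell you which procedure produced it.

[code]
import random

# Toy 'real' states (the lambdas) and two preparation procedures.
LAMBDAS = ["heads", "tails"]

def prepare(process):
    if process == 1:            # process 1: always put the coin down heads
        return "heads"
    elif process == 2:          # process 2: toss the coin
        return random.choice(LAMBDAS)

def measure(lam):
    # The measuring device responds only to the real state lambda,
    # not to the history of how lambda was produced.
    return lam

random.seed(0)
for process in (1, 2):
    outcomes = [measure(prepare(process)) for _ in range(10000)]
    freq_heads = outcomes.count("heads") / len(outcomes)
    print(f"process {process}: fraction 'heads' = {freq_heads:.2f}")

# Both processes can leave the coin in lambda = 'heads'; finding 'heads'
# doesn't tell you which process was used. That information lives in the
# probability distribution over lambdas, not in lambda itself.
[/code]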

By extension the main result here is that for two identical systems prepared in isolation from each other, the result predicted by quantum mechanics for a joint measurement cannot be enforced merely by knowing lambda1 and lambda2, since it doesn't tell you how you got it there, which has importance for what you measure.

But if lambda is actually the wave-function (or can tell you it), then obviously there's no problem.

I didn't really think it was that complicated? Maybe I'm the one under-thinking it.
 
  • #45
Fredrik said:
That's the same thing.

"state-as-probability" = "ensemble interpretation" = "statistical interpretation" = "Copenhagen interpretation" (although some people will insist that the CI belongs on the "state-as-physical" side).

The stuff I'm talking about is covered on the first one and a half pages, so you don't have to read the whole thing. I haven't, and I'm not going to unless someone can convince me that I'm wrong.

I’ll do that tomorrow. It’s 3:32 AM here so my brain is in an upside-down-superposition...

Fredrik said:
Maybe it seems that way, but this is not implied by my definition of the second "school of thought" above.

This is however a point that different statistical/ensemble interpretations disagree about. Ballentine's 1970 article "The statistical interpretation of quantum mechanics" explicitly made the assumption that all particles have well-defined positions, even when their wavefunctions are spread out. That assumption is notably absent from Ballentine's recent textbook, so maybe even he has abandoned that view.

Okay thanks. I have to reconnect tomorrow, I’m really... :zzz:
 
  • #46
What I gathered from reading the article at work:
1. they're assigning a definite state to the system after preparation
2. QM would then NOT be appropriate for describing the system - it is not in a pure state.
3. and the first experiment they show us gives a prediction different from QM's, yet they're using it to refute the statistical interpretation of QM - but I don't get that. They're practically saying QM is wrong.
4. on page 4, they use the conclusion as an assumption (a premise) in their argument.
 
  • #47
Fredrik said:
I'm not sure I understood what you consider the key issue. Is it the existence (vs. non-existence) of that theory in which a mathematical object λ represents all the properties of the system? That's an interesting issue, but (as you know) it's not what the article is about.
Right, I'm saying that to me, that's the real issue here. So I don't find the conclusions in the article to be particularly important, because they require making assumptions that I doubt are reliable. It seems to me that people who make those assumptions have already chosen a specific approach to interpreting quantum mechanics, so whether or not the ensemble interpretation is consistent with that specific approach is only interesting to people inclined to choose both the ensemble interpretation and that specific approach (and I don't count myself in either of those groups). But we can still analyze whether the paper reaches valid conclusions that people in both those groups should worry about.
Fredrik said:
Right. That's the part of the argument that I summarized as "If properties do not determine probabilities, then we're screwed."
But we aren't screwed in that case; we're just fine. If someone writes an article tomorrow that proves that quantum mechanics is not consistent with the attitude that properties determine probabilities, does quantum mechanics suddenly not work to predict our experiments? Nothing that we use quantum mechanics for requires that properties determine probabilities; instead what we need is for state vectors to determine probabilities, because that's how quantum mechanics works. Properties are completely irrelevant to doing physics; they are purely philosophical, and somewhat naive philosophy at that. That's my primary objection-- the fixation on the importance of "properties" is a very specific interpretation choice, but physics only requires that "properties" be a useful organizational principle; it never requires that we take this concept seriously, and certainly doesn't need us to make any mathematical proofs based on the notion. I doubt that systems actually have properties at all; that's just how we like to think about them.

The whole issue reminds me of Hume's lucid critique of taking the cause and effect relationship too seriously. He makes the point that even young children quickly develop a useful concept of cause and effect, but even the greatest philosophers cannot demonstrate any logical relationship there that you could use to prove anything, it is nothing but a practical correlation that we use to make actionable predictions. I think the concept of a "property" is just exactly like that too. So if someone hands me a physics proof that starts with "assume that the cause and effect relationship is a deterministic connection whereby some element of the cause leads, not by experience but by logical necessity, to some element of the effect", and goes on to say that interpretation X of theory Y can't be right, it is no kind of knock on interpretation X. Indeed, it makes me see interpretation X in a better light, that it failed to pass a test that it probably should fail!

Fredrik said:
In fact, I consider "properties do not determine probabilities" to be an absurd statement on its own.
It's not absurd if the whole concept of properties is already viewed as absurd. I agree it would be absurd to believe in properties that do not determine probabilities, for what would be the point of believing in properties like that? But the rational alternative is to view the whole "property" concept as an effective notion we create to make progress, like all the other effective notions we make in physics, which we should certainly have learned by now not to take so seriously as to prove things based on them as axioms. Or put differently, when we use them as assumptions and prove things, we should do it from the point of view of showing why we shouldn't have assumed that thing in the first place-- it forces us to imagine we are dictating to nature.
Fredrik said:
I have always been thinking that the statistical view (ensemble interpretation) and "properties determine probabilities" are both true. It has never even occurred to me to consider that a complete specification of all the system's properties would be insufficient to determine the probabilities. Where did they get the idea that the statistical view implies that properties are insufficient to determine probabilities?
This is an important question, and demands closer scrutiny. They seem to be saying they have proven that your position is internally inconsistent-- you cannot maintain both that a state vector is only a claim on the properties of an ensemble, not a claim on the properties of an individual system, and that properties of individual systems determine the probabilities for that system. I'm not sure exactly what they think the statistical interpretation is, but the one you expound sounds like a standard version, so they must feel that they have proven it to be internally inconsistent.

Fredrik said:
What it says is that a complete specification of the preparation procedure determines the probabilities, but is insufficient to determine the properties (if it makes sense to talk about properties at all).
That's my point too, because quantum mechanics (and physics) only involves a connection between a preparation procedure and probabilities. That's it, that's all the physics that's in there. There aren't any "properties" in the physics, that's some kind of added philosophical baggage that can be used to prove things but doesn't convince me it belongs there at all, so why should we care what can be proven from it?
 
  • #48
Ken G said:
That's my point too, because quantum mechanics (and physics) only involves a connection between a preparation procedure and probabilities. That's it, that's all the physics that's in there. There aren't any "properties" in the physics, that's some kind of added philosophical baggage that can be used to prove things but doesn't convince me it belongs there at all, so why should we care what can be proven from it?

I finally read the paper and I'm still lost. It seems to me that in the quote below the authors are conceding that if one takes the perspective you are suggesting (e.g. Fuchsian) then their conclusions don't hold. If that's true, then what does their theory suggest?

For these reasons and others, many will continue to hold that the quantum state is not a real object. We have shown that this is only possible if one or more of the assumptions above is dropped. More radical approaches (e.g. Fuchs) are careful to avoid associating quantum systems with any physical properties at all.


Their assumptions:

1. If a quantum system is prepared in isolation from the rest of the universe, such that quantum theory assigns a pure state, then after preparation, the system has a well defined set of physical properties.

2. It is possible to prepare multiple systems such that their physical properties are uncorrelated.

3. Measuring devices respond solely to the physical properties of the systems they measure.
 
  • #49
bohm2 said:
Their assumptions:

1. If a quantum system is prepared in isolation from the rest of the universe, such that quantum theory assigns a pure state, then after preparation, the system has a well defined set of physical properties.

I am assuming that by a 'well defined set of physical properties' they mean that the system is in a definite state? That is the impression I'm getting from what they wrote under Figure 1, that the system is in either |0> or |1>.

You CANNOT use QM to predict the outcomes, clearly. QM doesn't deal with definite states. In the first experiment, they produce results different from what QM predicts.

And just looking at some of what they wrote, where do they get |-> from?
 
  • #50
I have to go to bed, so my answers to the stuff aimed at me will have to wait until tomorrow. Alxm's post gave me something to think about. It looks like I have misunderstood at least one important thing, so I will have to think everything through again.

StevieTNZ said:
I am assuming that by a 'well defined set of physical properties' they mean that the system is in a definite state? That is the impression I'm getting from what they wrote under Figure 1, that the system is in either |0> or |1>.
No, either |0> or |+>. The latter is a superposition of |0> and |1>. |0> and |1> are the eigenstates of some operator, like a spin component operator. But they have one preparation device that always leaves the system in state |0> and another that always leaves the system in state |+>.
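
For anyone following the notation: |+> here is, as far as I can tell, the usual [itex](|0\rangle + |1\rangle)/\sqrt{2}[/itex], so the two preparations are non-orthogonal, with |<0|+>|² = 1/2. Below is a quick numerical check of the kind of joint measurement the paper uses; the four-outcome entangled basis is my reconstruction of the simplest case, so treat it as an illustration and check the paper for the exact states. The point is that each of the four preparation pairs assigns probability zero to one of the four outcomes.

[code]
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ketP = (ket0 + ket1) / np.sqrt(2)          # |+>
ketM = (ket0 - ket1) / np.sqrt(2)          # |->

# My reconstruction of the entangled measurement basis in the paper's simplest
# (two-system) argument -- an illustration, not a transcription of the paper.
xi = [
    (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2),
    (np.kron(ket0, ketM) + np.kron(ket1, ketP)) / np.sqrt(2),
    (np.kron(ketP, ket1) + np.kron(ketM, ket0)) / np.sqrt(2),
    (np.kron(ketP, ketM) + np.kron(ketM, ketP)) / np.sqrt(2),
]

# The four preparation pairs: each system independently prepared as |0> or |+>.
preps = {"|0>|0>": np.kron(ket0, ket0), "|0>|+>": np.kron(ket0, ketP),
         "|+>|0>": np.kron(ketP, ket0), "|+>|+>": np.kron(ketP, ketP)}

for name, psi in preps.items():
    probs = [abs(np.vdot(x, psi)) ** 2 for x in xi]
    print(name, np.round(probs, 3))

# Each preparation pair gives probability 0 for one of the four outcomes --
# the "this outcome never happens" predictions the paper's argument relies on.
[/code]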
 
  • #51
Man, I'm an idiot! Where did I get |1> for |+>?!
 
  • #52
[my bolding]
alxm said:
... By extension the main result here is that for two identical systems prepared in isolation from each other, the result predicted by quantum mechanics for a joint measurement cannot be enforced merely by knowing lambda1 and lambda2, since it doesn't tell you how you got it there, which has importance for what you measure.

But if lambda is actually the wave-function (or can tell you it), then obviously there's no problem.


Isn’t this exactly what David Wallace describes in his simple interference example (https://www.physicsforums.com/showthread.php?p=3623347#post3623347)?
 
  • #53
Two comments/questions:

1. Although I have seen various people claim an equivalence between "the statistical interpretation" and what they are calling view 2, I don't understand this claim. This looks to be similar to what Fredrik is saying. Don't physical properties include the probability distributions of all possible probes of the system?

2. Related to 1., it seems like I can understand their paper as giving me a particular experimental method to (more fully) determine [itex] \lambda [/itex] using additional experiments on composite systems.

For example, in the paragraph beginning "The simple argument is ...", sentence 3 is particularly interesting. Can we not argue that [itex] q [/itex] is zero since their later experiment can determine which of the two preparations was used? In other words, aren't they proving that we can always determine the "preparation method"? This is partially predicated on my confusion in 1. about why the "preparation method" story is equivalent to the statistical interpretation.
 
  • #54
DevilsAvocado said:
[my bolding]
Isn’t this exactly what David Wallace describes in his simple interference example (https://www.physicsforums.com/showthread.php?p=3623347#post3623347)?

Well, the conclusion is the same. But it seems to me that he's more describing the ordinary double-slit experiment.

One key difference between that and what's being described in the paper, is that the states of the double-slit/half-silvered mirror paths aren't created independently of each other. It's quite a bit less weird to have "spooky action at a distance" between a single state "split" in two, than between two states prepared in isolation that never had any interaction. That's what seems to be the main novelty here.
 
  • #55
alxm said:
The way I read it, what they mean by "all the properties" is some set of hidden variables or similar that are sufficient to determine the outcome of any measurement. The "real" state is represented by lambda, and the quantum state is just a classical statistical distribution over the various "lambda states". It's not a classical analogy, it is classical. Although whatever goes into putting the system into a particular lambda state is not necessarily deterministic or local or whatever; only point is that QM tells us that certain processes will allow us to prepare states with certain distributions.

So knowing lambda doesn't tell you how you got there.
Thanks for posting this. This is a very nice explanation. I've been thinking that they probably meant something other than this, since they weren't very explicit about it. Now I'm thinking that this must have been what they meant.

I thought that they were leaving it undefined what it means for λ to represent all the properties of the system. Now I think that they are using a definition of "property" similar to this one:
A property of the system is a pair (D,d) (where D denotes a measuring device and d denotes one of its possible results) such that the theory predicts that if we perform a measurement with the device D, the result will certainly be d.​
To say that λ represents all the properties is to say that the super-awesome classical theory that λ is a part of can predict the result of every possible measurement.

I will do some more thinking and post a new summary when I have something.
 
  • #56
The ensemble interpretation says that QM works for ensembles but does not work for individual systems.
This paper under discussion says that indeed the ensemble interpretation leads to a contradiction if QM is applicable to individual systems (the thought experiment in Fig. 1). So what?

Or in terms of properties: the quantum state is determined by properties of the ensemble, which include properties of the individual systems and emergent properties. Then certain properties of individual systems can correspond to different quantum states, but that does not mean there is any ambiguity in the correspondence between the quantum state and the properties of the ensemble.
 
  • #57
So quantum mechanics is non-commutative probability. The basic problem we have with these probabilities is interpreting them; early work of von Neumann was directed at showing that non-commuting probabilities don't arise as probability distributions over some classical theory.

The strongest result in this regard is the Kochen-Specker theorem which says that if there is a real deterministic theory underneath QM with matter in some state λ, then that theory can only model quantum mechanics if it allows contextuality (which basically implies non-locality in a relativistic theory). Basically QM can only be the statistical mechanics of some underlying "true" classical theory if that theory has faster-than-light signalling.

However this new paper appears to be pushing even further, saying that even contextual theories don't work and QM can't be seen as the statistical mechanics of any deterministic theory. Whether it actually does this remains to be seen.
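
For a concrete feel for what "contextuality" means here, the sketch below checks the Mermin-Peres magic square, a standard compact illustration of the Kochen-Specker theorem (it is not from the PBR paper): nine two-qubit observables whose row and column products cannot be reproduced by any context-free assignment of pre-existing ±1 values.

[code]
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(a, b):
    return np.kron(a, b)

# Mermin-Peres magic square of two-qubit observables (each squares to identity;
# the three in every row and every column commute).
square = [
    [op(I2, Z), op(Z, I2), op(Z, Z)],
    [op(X, I2), op(I2, X), op(X, X)],
    [op(X, Z),  op(Z, X),  op(Y, Y)],
]

def triple_product_sign(ops):
    prod = ops[0] @ ops[1] @ ops[2]
    if np.allclose(prod, np.eye(4)):
        return +1
    if np.allclose(prod, -np.eye(4)):
        return -1
    raise ValueError("product is not +/- identity")

rows = [triple_product_sign(square[r]) for r in range(3)]
cols = [triple_product_sign([square[r][c] for r in range(3)]) for c in range(3)]
print("row products:   ", rows)   # all +1
print("column products:", cols)   # +1, +1, -1

# Assigning a pre-existing value +/-1 to each of the nine observables would force
# the product of all nine values to be +1 (by rows) and -1 (by columns) at the
# same time -- impossible unless the assigned value is allowed to depend on the
# context (which row or column it is measured with).
[/code]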
 
  • #58
alxm said:
Well, the conclusion is the same. But it seems to me that he's more describing the ordinary double-slit experiment.

One key difference between that and what's being described in the paper, is that the states of the double-slit/half-silvered mirror paths aren't created independently of each other. It's quite a bit less weird to have "spooky action at a distance" between a single state "split" in two, than between two states prepared in isolation that never had any interaction. That's what seems to be the main novelty here.

Okay, thanks!


P.S. Although I’m sure that DrC can convince you that there’s absolutely nothing 'weird' about entanglement between objects that never had any interaction... :wink:
 
  • #59
zonde said:
The ensemble interpretation says that QM works for ensembles but does not work for individual systems.
This paper under discussion says that indeed the ensemble interpretation leads to a contradiction if QM is applicable to individual systems (the thought experiment in Fig. 1). So what?

So what? You are also claiming "that there is difference between statistical "sum" of 1000 experiments with single photon and single experiment with 1000 photons".

So how can I take you seriously?
 
  • #60
Fredrik said:
This is wrong, and it's also a very different claim from the one made by this article. A state vector is certainly an accurate representation of the properties of an ensemble of identically prepared systems. It's conceivable that it's also an accurate representation of the properties of a single system. The article claims to be proving that it's wrong to say that it's not a representation of the properties of a single system.

This is even more wrong. Also, if you want to discuss these things, please keep them to the other thread where you brought this up.

As the first postulate of QM states clearly, the pure quantum state describes the state of a physical system, not of an ensemble.

The so-called «statistical interpretation» is wrong, as both this paper and the link given by me before show. The paper is also right when it points out that the «statistical interpretation» was introduced to eliminate the collapse of the quantum state. But this collapse is a real process, which is described by the von Neumann postulate in QM, and by dynamical equations in more general formulations beyond QM.

I remark again that the paper is right: the quantum pure state is not «akin to a probability distribution in statistical mechanics», as some ill-informed guys still believe.

As any decent textbook in QM explains, ensembles in quantum theory are introduced by impure states not by pure states.
 
  • #61
Fredrik said:
That's the same thing.

"state-as-probability" = "ensemble interpretation" = "statistical interpretation" = "Copenhagen interpretation" (although some people will insist that the CI belongs on the "state-as-physical" side).

Those equality signs are misguided.
 
  • #62
zonde said:
The ensemble interpretation says that QM works for ensembles but does not work for individual systems.

But that's not true. QM makes some predictions about individual systems. For example, in an experiment that produces correlated electron-positron pairs with total spin 0, QM predicts with certainty that for any axis A, if you measure spin-up for the electron relative to axis A, then you will measure spin-down for the positron relative to axis A.

The existence of such definite predictions about a single experiment is what makes QM not a purely ensemble theory.
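
A quick check of that definite single-pair prediction (my own sketch; the singlet state and the spin projectors are just the standard textbook ones): for the total-spin-0 pair, the probability of getting spin-up for both particles along the same axis is zero for every choice of axis.

[code]
import numpy as np

def spin_up_projector(theta, phi):
    """Projector onto spin-up along the axis (theta, phi) for one spin-1/2."""
    up = np.array([np.cos(theta / 2),
                   np.exp(1j * phi) * np.sin(theta / 2)], dtype=complex)
    return np.outer(up, up.conj())

# Singlet (total spin 0) state of the electron-positron pair: (|01> - |10>)/sqrt(2)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
singlet = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

rng = np.random.default_rng(0)
for _ in range(5):
    theta, phi = rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi)
    P_up = spin_up_projector(theta, phi)
    # Probability that BOTH particles give spin-up along the same axis A:
    both_up = np.kron(P_up, P_up)
    p = np.real(np.vdot(singlet, both_up @ singlet))
    print(f"axis (theta={theta:.2f}, phi={phi:.2f}):  P(up, up) = {p:.2e}")

# Every line prints ~0: if the electron is up along A, the positron is certainly
# down along A -- a definite prediction about a single pair, not just an ensemble.
[/code]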
 
  • #63
bohm2 said:
Their assumptions:

1. If a quantum system is prepared in isolation from the rest of the universe, such that quantum theory assigns a pure state, then after preparation, the system has a well defined set of physical properties.

2. It is possible to prepare multiple systems such that their physical properties are uncorrelated.

3. Measuring devices respond solely to the physical properties of the systems they measure.

They are definitely not making assumption 1. If we let ψ be the quantum state, and λ the unknown physical state, they are saying that in the statistical view, ψ does not uniquely determine λ. A pure state does not mean that the physical properties are uniquely determined. The quantum state ψ only gives a probability distribution over physical states λ; it doesn't uniquely determine them.
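
That reading can be drawn as a picture: in the statistical (ψ-epistemic) view the paper is attacking, each quantum state corresponds to a probability distribution over λ, and two distinct states such as |0> and |+> are allowed to have overlapping distributions, so a λ in the overlap does not pin down which state was prepared. Here is a toy sketch of that picture (the particular numbers are made up purely for illustration):

[code]
import numpy as np

# Toy lambda space: ten possible 'real' states.
lambdas = np.arange(10)

# Hypothetical epistemic distributions assigned to two preparations
# (made-up numbers; the point is only that they overlap on lambda = 4, 5).
mu_psi0 = np.array([.2, .2, .2, .2, .1, .1, 0., 0., 0., 0.])
mu_psiP = np.array([0., 0., 0., 0., .1, .1, .2, .2, .2, .2])

overlap = np.minimum(mu_psi0, mu_psiP).sum()
print("overlap probability q =", overlap)   # 0.2 in this toy example

# If q > 0, a fraction q of the |0>-preparations and of the |+>-preparations
# end up with a lambda compatible with BOTH quantum states.  The paper's
# argument is that such a lambda cannot reproduce the "probability zero"
# predictions of the joint measurement, so q must be 0 -- i.e. lambda fixes
# the quantum state.
[/code]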
 
  • #64
New summary. I have a better idea what they meant now.

Definition: A property of the system is a pair (D,d) (where D denotes a measuring device and d denotes one of its possible results) such that the theory predicts that if we perform a measurement with the device D, the result will certainly be d.​
Note that this is a theory-independent definition in the sense that it explains what the word "property" means in every theory.
Assumption: There's a theory that's at least as good as QM, in which a set [itex]\lambda=\{(D_i,d_i)|i\in I\}[/itex] contains all the properties of the system.​
By calling this a "theory", we are implicitly assuming that it's possible to obtain useful information about the value of λ. (If it's not, then the "theory" isn't falsifiable in any sense of the word, and shouldn't be called a theory). So we are implicitly assuming that we can at least determine a probability distribution of values of λ.

By saying that this theory is at least as good as QM, we are implicitly assuming that the set [itex]\{D_i|i\in I\}[/itex] contains all the measuring devices that QM makes predictions about.

I will call this theory the super-awesome classical theory (SACT). It has to be considered a classical theory, because it assigns no probabilities other than "certainty" to results of measurements on pure states. (A system is said to be in a pure state if the value of λ is known, and is said to be in a mixed state if a probability measure on the set of values of λ is known. The simplest kind of mixed state is a system such that all but a finite number of values of λ can be ruled out with certainty, and the remaining values are all associated with a number in [0,1] to be interpreted as the probability that the system is in the pure state λ).
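
Just to make my own reading of the above concrete, here is a toy rendering in code (all the device names and values are invented for illustration): λ is a lookup table from measuring devices to certain outcomes, a pure state is a known λ, and a mixed state is a probability distribution over several possible λ's.

[code]
# Toy rendering of the definitions above (purely illustrative).

# A "property" is a pair (device, certain outcome); lambda collects all of them.
lambda_a = {"D_position": "x1", "D_spin_z": "up",   "D_color": "red"}
lambda_b = {"D_position": "x1", "D_spin_z": "down", "D_color": "red"}

def predict(lam, device):
    # In the super-awesome classical theory, a pure state (a known lambda)
    # predicts every measurement outcome with certainty.
    return lam[device]

# A mixed state: all lambdas ruled out except these two, with probabilities.
mixed_state = [(0.5, lambda_a), (0.5, lambda_b)]

def probability(mixed, device, outcome):
    # Probabilities only appear through ignorance of which lambda we have.
    return sum(p for p, lam in mixed if lam[device] == outcome)

print(predict(lambda_a, "D_spin_z"))                 # 'up'  -- certain
print(probability(mixed_state, "D_spin_z", "up"))    # 0.5   -- ignorance
print(probability(mixed_state, "D_position", "x1"))  # 1.0   -- still a property
[/code]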

OK, that concludes my comments about the stuff I believe I understand. The stuff below this line are comments about things I don't understand, so don't expect them to make as much sense as the stuff above.

__________________________________________________


I still can't make sense of which two ideas they are comparing. If the above is what they meant when they said that λ corresponds to a complete list of properties of the system, then they appear to be comparing the following two ideas:
  1. A state vector corresponds to a subset of the set λ defined by the SACT.
  2. A state vector corresponds to a mixed state in the SACT.
But how are we to make sense of 1? If we only know a proper subset of λ, then aren't we still talking about a mixed state? Should we assume that the subset corresponding to the state vector contains the property that determines the result of the specific measurement we're going to make? Should we assume that it doesn't?

The fact that we're even talking about mixed states suggests that what they really want to compare are the following two ideas:
  1. The probabilities in QM have nothing to do with ignorance about properties of the system.
  2. The probabilities in QM are a result of our ignorance about the properties of the system.
But I have never thought of either of these as contradicting the statistical view. :confused:

What they actually end up comparing is of course the following two ideas:
  1. The state vector is always determined by λ.
  2. The state vector is not always determined by λ.
If a state vector corresponds to a mixed state (option 2 in the first list in this post), then this option 2 is just saying that a pure state isn't always determined by the mixed state it's a part of. These two clearly follow from the items on the first list in this post, but it's not clear to me how they are connected to more interesting statements like the ones on the second list or the ones on my original list:
  1. A state vector represents the properties of the system.
  2. A state vector represents the properties of an ensemble of identically prepared systems, and does not also represent the properties of a single system.
(I deleted the word "statistical" because I think it's more likely to confuse than to clarify).
I need to get something to eat and watch Fringe. Maybe Walter Bishop can inspire me to figure this out. I'll be back in a couple of hours.
 
  • #65
Let's clarify a few things that I feel need to be said at this point:
1) No one who holds the ensemble interpretation, and understands the first thing about quantum mechanics, believes that a pure state in quantum mechanics represents a classical probability distribution! They all know that quantum mechanics uses probability amplitudes, not probability distributions. Nothing in this new paper is aimed at defeating that blatant straw man. Instead, the ensemble interpretation is the claim that the all-important correlations that quantum mechanics relies on, which don't appear in any classical probability distribution, are nevertheless correlations that only have meaning for predicting the behavior of many trials. To me, the key flavor of the ensemble interpretation is a sense of incompleteness-- quantum mechanics is not a complete treatment of "what really happens" to a single system; it emerges as a description of many trials and that is its only connection with reality.
2) If you assume that a system has properties that completely determine the outcome of an experiment before it happens, then you are claiming that a hidden variables theory exists. It is far from "radical" to deny that possibility!

In short, I interpret their conclusion as "if individual systems have properties, then quantum states must refer to them." That really doesn't shock me, and I don't think it invalidates the ensemble interpretation, but then I view the ensemble interpretation as a rejection of the concept that individual systems have properties that determine the probability amplitudes (not properties that determine the outcomes).
 
  • #66
I hope this wasn't linked already and I look like the idiot that I know I am but here is an interesting blog discussing this issue:

To understand the new result, the first question we should ask is, what exactly do PBR mean by a quantum state being “statistically interpretable”? Strangely, PBR spend barely a paragraph justifying their answer to this central question—but it’s easy enough to explain what their answer is. Basically, PBR call something “statistical” if two people, who live in the same universe but have different information, could rationally disagree about it. (They put it differently, but I’m pretty sure that’s what they mean.) As for what “rational” means, all we’ll need to know is that a rational person can never assign a probability of 0 to something that will actually happen.

...So, will this theorem finally end the century-old debate about the “reality” of quantum states—proving, with mathematical certitude, that the “ontic” camp was right and the “epistemic” camp was wrong? To ask this question is to answer it.

I expect that PBR’s philosophical opponents are already hard at work on a rebuttal paper: “The quantum state can too be interpreted statistically”, or even “The quantum state must be interpreted statistically.”

I expect the rebuttal to say that, yes, obviously two people can’t rationally assign different pure states to the same physical system—but only a fool would’ve ever thought otherwise, and that’s not what anyone ever meant by calling quantum states “statistical”, and anyway it’s beside the point, since pure states are just a degenerate special case of the more fundamental mixed states.

The quantum state cannot be interpreted as something other than a quantum state

http://www.scottaaronson.com/blog/?p=822
 
  • #67
I don't think it matters to them that pure states are idealizations. It is true that we only ever have substates, which we only treat as pure by treating all entanglements as entirely decohered by the preparation. So one might imagine that the ensemble approach is needed because we do not decohere all the entanglements when we prepare the system. But that's just the kind of eventuality that this paper is arguing against-- it is saying that even if the pure state is not the complete mathematical description of the properties, it is still something that constrains the physical reality of the individual system. If you imagine that there really are deterministic properties there, I don't see how you could have thought that a quantum mechanical state doesn't constrain the physical state of those properties, so to me, the ensemble interpretation always required denial of the concept of hidden variables.

I realize a lot of people hold to an ensemble interpretation while holding out hope for a more complete theory that unearths those deterministic hidden variables (like Einstein did), but I don't understand why those people don't just go with deBroglie-Bohm. If you want properties that determine outcomes, that's the way to do it. But it's kind of the opposite of the ensemble interpretation-- the ensemble interpretation says that the state is being over-interpreted if you think it specifies the reality of a given system, and deBroglie-Bohm says the state is being under-interpreted if you say that-- in that view the reality of the individual system is the state plus more, not something disjoint from the state that requires the state to only refer to many trials. That's why I'm not surprised that embracing properties forces one to adopt the state as a constraint on the physical reality of the system.

On the other hand, I think another problem here is there may not be agreement on just what the claims of the "ensemble interpretation" really are. Does someone who holds that interpretation want to explain just what it is that they are holding as true?
 
  • #68
Ken G said:
... but I don't understand why those people don't just go with deBroglie-Bohm.

My guess is that if you are a hardcore Ensemble'ist you don’t like non-locality, which comes with dBB...

Ken G said:
Does someone who holds that interpretation want to explain just what it is that they are holding as true?

[I’m not sure this paper is aimed at EI, but what the heck...]

I’m in on this one too, because I don’t understand what they are talking about (especially zonde’s "version"). I love Einstein, he’s my hero and probably one of the brightest souls who ever lived, but I think he went into a dead end when trying to 'refute' QM. Bell finally proved him wrong. This is what he says about EI:
Albert Einstein said:
The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems.

It doesn’t make sense? QM can’t say anything useful about one single electron in the Double-slit experiment? Is this really true??

If we assume that the first electron fired in this single electron Double-slit experiment is the one in the top/left corner in picture a:

[Image: Double-slit experiment results (Tanamura), frames a-e showing the gradual single-electron buildup of the interference pattern]


Now according to EI, does this first single electron in the corner exist when it’s all alone? And could QM say anything about that single electron?

If not, in which one of the following frames b-e will the single electron in the corner start to exist and become describable by QM? And why that particular frame?

It doesn’t make sense, does it??

I’m not a fan of David Mermin’s "Shut up and calculate" approach, but I think he’s closer to the truth than Einstein:
David Mermin said:
For the notion that probabilistic theories must be about ensembles implicitly assumes that probability is about ignorance. (The 'hidden variables' are whatever it is that we are ignorant of.) But in a non-deterministic world probability has nothing to do with incomplete knowledge, and ought not to require an ensemble of systems for its interpretation.
...
The second motivation for an ensemble interpretation is the intuition that because quantum mechanics is inherently probabilistic, it only needs to make sense as a theory of ensembles. Whether or not probabilities can be given a sensible meaning for individual systems, this motivation is not compelling. For a theory ought to be able to describe as well as predict the behavior of the world. The fact that physics cannot make deterministic predictions about individual systems does not excuse us from pursuing the goal of being able to describe them as they currently are.
 
  • #69
bohm2 said:
I hope this wasn't linked already and I look like the idiot that I know I am but here is an interesting blog discussing this issue:



The quantum state cannot be interpreted as something other than a quantum state

http://www.scottaaronson.com/blog/?p=822
This blog post looks pretty good. I have only skimmed it, but I will return to it for a closer look later. The most useful detail on that page appeared in the comments. Two of the commenters (Lubos Motl and Matt Leifer) posted a link to this article about hidden-variable theories. It explains the basic terminology and some previous results. I have started to read it, and it looks pretty good. I will read at least a few more pages before I return to the article that this thread is about.

Some of you might find it entertaining to read the blog post by Lubos Motl (the angriest man in physics) about the topic. It will not help you understand anything, but it's mildly amusing to see how aggressively he attacks everything. It has a calming effect on me actually. I'm thinking about how I expressed some irritation earlier, and I'm thinking "I hope I don't sound like that". :smile:
 
  • #70
Fredrik said:
New summary. I have a better idea what they meant now.

Definition: A property of the system is a pair (D,d) (where D denotes a measuring device and d denotes one of its possible results) such that the theory predicts that if we perform a measurement with the device D, the result will certainly be d.​
Note that this is a theory-independent definition in the sense that it explains what the word "property" means in every theory.
Assumption: There's a theory that's at least as good as QM, in which a set [itex]\lambda=\{(D_i,d_i)|i\in I\}[/itex] contains all the properties of the system.​
By calling this a "theory", we are implicitly assuming that it's possible to obtain useful information about the value of λ. (If it's not, then the "theory" isn't falsifiable in any sense of the word, and shouldn't be called a theory). So we are implicitly assuming that we can at least determine a probability distribution of values of λ.

By saying that this theory is at least as good as QM, we are implicitly assuming that the set [itex]\{D_i|i\in I\}[/itex] contains all the measuring devices that QM makes predictions about.

I will call this theory the super-awesome classical theory (SACT). It has to be considered a classical theory, because it assigns no probabilities other than "certainty" to results of measurements on pure states. (A system is said to be in a pure state if the value of λ is known, and is said to be in a mixed state if a probability measure on the set of values of λ is known. The simplest kind of mixed state is a system such that all but a finite number of values of λ can be ruled out with certainty, and the remaining values are all associated with a number in [0,1] to be interpreted as the probability that the system is in the pure state λ).

OK, that concludes my comments about the stuff I believe I understand. The stuff below this line are comments about things I don't understand, so don't expect them to make as much sense as the stuff above.

__________________________________________________


Very nice Fredrik! Keep up the good work and tell us what the heck this is all about!

Now I shall read the paper on a netbook, horizontally entangled with Walternate... :smile:
 
