Copenhagen: Restriction on knowledge or restriction on ontology?

  • #211
A. Neumaier said:
This is the case for every closed system, in particular for the universe as a whole. Of course I don't claim conservation in open systems.

Only the quantum state as interpreted today, which is what I had already asserted. But in 1927, the notion of state was different!
I don't care for outdated history. Somehow you always distract from my demand to actually give an interpretation by opening irrelevant historical and long-settled discussions!
 
  • #212
zonde said:
Any non-local treatment requires you to pick a preferred reference frame. It is not supposed to work in every reference frame the way it works in the preferred reference frame.
Why should this be required? It requires you to pick reference frames but nothing nonlocal forces any preferred frame upon you.
  • A. Peres and D.R. Terno, Quantum information and special relativity, Int. J. Quantum Information 1 (2003), 225--235.
  • A. Peres and D.R. Terno, Quantum information and relativity theory, Rev. Mod. Phys. 76(2004), 93--123.
 
  • Like
Likes DarMM and rubi
  • #213
DarMM said:
The latter is what @rubi is talking about. It's how observables are formulated.
@rubi defined functions ##A_\alpha## and ##B_\beta## in his post #193 and they are something different than what you are talking about.
 
  • #214
vanhees71 said:
Somehow you always distract from my demand to actually give an interpretation by opening irrelevant historical and long-settled discussions!
Note that this thread is about the Copenhagen interpretation, which originated in 1927, and not about my thermal interpretation, for which there is a separate thread!
 
  • #215
Demystifier said:
Beables are not observables. The concept of beables was introduced by John Bell precisely to emphasize that they are different from observables. See e.g. the paper in my signature for more details.
Well, I'd like to restrict the discussion to physics. We don't need to obscure these apparently still debated issues with even more philosophy. Physics is about what's observable in nature, and theoretical physics is about how far we can describe what's observable in mathematical terms. It's not about questions of ontology and other metaphysical issues.
 
  • #216
A. Neumaier said:
Note that this thread is about the Copenhagen interpretation, which originated in 1927, and not about my thermal interpretation, for which there is a separate thread!
Well, I didn't get this. I thought it was forked from the thermal-interpretation thread and also deals with it. Then I'll unwatch this thread from now on.
 
  • #217
zonde said:
@rubi defined functions ##A_\alpha## and ##B_\beta## in his post #193 and they are something different than what you are talking about.
I'm pretty sure he is talking about Random Variables/Observables as he says they're functions on the sample space:
rubi said:
No, it has nothing to do with hidden variables. Every Kolmogorovian probability theory needs a probability space ##\Lambda## and observables modeled by functions on this space. If you relax this requirement, you're not dealing with classical probability theory or classical stochastic processes anymore. Whether this space represents a space of physical hidden variables or not is irrelevant.
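To make the Kolmogorov picture concrete, here is a minimal Python sketch (my own illustration, not code from the thread) of a finite probability space ##\Lambda## with observables modeled as functions on it, in the sense rubi describes:
```python
import itertools

# A finite Kolmogorov probability space: Lambda is the sample space,
# P assigns a probability to each point (the probabilities sum to 1).
Lambda = list(itertools.product([-1, 1], repeat=2))  # four points, e.g. two spin-like outcomes
P = {lam: 0.25 for lam in Lambda}                    # uniform measure

# Observables are ordinary functions Lambda -> {-1, +1}.
def A(lam):
    return lam[0]

def B(lam):
    return lam[1]

# Expectation values are just sums over the sample space.
def expect(f):
    return sum(P[lam] * f(lam) for lam in Lambda)

print(expect(A), expect(B), expect(lambda lam: A(lam) * B(lam)))  # 0.0 0.0 0.0
```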
 
  • Like
Likes rubi
  • #218
vanhees71 said:
Well, I didn't get this. I thought it was forked from the thermal-interpretation thread and also deals with it. Then I'll unwatch this thread from now on.
It was started completely independently of it; see post #1.
 
  • #219
zonde said:
A probability space does not contain a state space.
"State space" is just the name I assigned to the set ##\Lambda##. Of course, this set becomes a probability space if one assigns probabilities. There's nothing wrong with that.

zonde said:
Any non-local treatment requires you to pick a preferred reference frame. It is not supposed to work in every reference frame the way it works in the preferred reference frame.
Even if there was a preferred frame, one could still arrange for the situation I described. Given the (hypothetical) preferred frame, one can have Alice and Bob perform their measurements simultaneously with respect to this frame.

zonde said:
@rubi defined functions ##A_\alpha## and ##B_\beta## in his post #193 and they are something different than what you are talking about.
No, DarMM's post is accurate and explains correctly how I used the terms.
 
  • #220
A. Neumaier said:
Why should this be required? It requires you to pick reference frames but nothing nonlocal forces any preferred frame upon you.
  • A. Peres and D.R. Terno, Quantum information and special relativity, Int. J. Quantum Information 1 (2003), 225--235.
  • A. Peres and D.R. Terno, Quantum information and relativity theory, Rev. Mod. Phys. 76(2004), 93--123.
Sorry, I can't check what your references are talking about, but it is rather obvious that changing reference frames (and simultaneity conventions) breaks any superluminal causality.
 
  • #221
DarMM said:
I'm pretty sure he is talking about Random Variables/Observables as he says they're functions on the sample space:
Did you actually read post #193?
rubi said:
The situation is really simple: There is no state space ##\Lambda## such that ##A_\alpha: \Lambda\rightarrow\{-1,1\}## and ##B_\beta: \Lambda\rightarrow\{-1,1\}## are functions on this state space for every ##\alpha## and ##\beta## such that the correlations ##\left<A_\alpha B_\beta\right>## match the predictions of quantum mechanics. That's just a mathematical fact and there is just no way to circumvent it. It also has nothing to do with macroscopic or microscopic observables.

The only way to model the system on a single state space is to give up the idea that all ##A_\alpha## and ##B_\beta## are functions on the state space for all ##\alpha## and ##\beta##. One has to admit that this is a truly novel situation, which never occurs in classical theories.
He talks about state space and functions from state space to outcomes.
 
  • #222
zonde said:
Did you actually read post #193?
Well it seems I did:
rubi said:
No, DarMM's post is accurate and explains correctly how I used the terms.
 
  • Like
Likes rubi
  • #223
zonde said:
it is rather obvious that changing reference frames (and simultaneity conventions) breaks any superluminal causality.
Yes, but there has never been a claim of superluminal causality (which doesn't exist, precisely because it is not Lorentz invariant), only that long-distance entanglement experiments demonstrate superluminal correlations.
 
  • #224
N88 said:
I do not give up on physical realism -- a mind-and-theory independent viewpoint according to which an external reality exists, independent of observation...

But physics cannot help regarding physical realism, because physics can reveal no real world beyond what is observed. Such questions have to be addressed to philosophy.
 
  • #225
DarMM said:
I made this point in post #146 where @stevendaryl replied with a card analogy. Maybe take a look at that since you seem to be explaining things in a much cleaner fashion than I was.
Sorry, I somehow forgot to respond to your post.

The difference is the following: stevendaryl's probability space carries only four functions: ##A, B, \alpha, \beta##. Alice's (Bob's) particle only ever has one property, namely ##A## (##B##), for instance "up" (= ##1##). This value is correlated with the measurement angles (##\alpha, \beta##).

But the question we're trying to answer is: Does the particle have the properties ##A_\alpha## for all ##\alpha## simultaneously and are these properties influenced in a non-local fashion? In order for this to be true, the probability space would have to carry infinitely many functions (##A_\alpha## for all ##\alpha## and analogously for ##B_\beta##), rather than four. Now the theorem says that this is never possible if the correlations should match the quantum predictions, independent of any (non-)locality assumptions. So it is not possible that Alice's particle has both the properties "spin along the ##x## axis" and "spin along the ##y## axis" simultaneously and thus, trivially, no non-local mechanism could influence those non-existing properties.

In stevendaryl's model, the particle has the property "spin along the ##x## axis" if and only if ##\alpha## has the corresponding value. In this situation, the particle just doesn't have the property "spin along the ##y## axis", because the particle has precisely one property, namely ##A##. The measurement angle ##\alpha## determines which property the particle has. Of course, this is very odd compared to classical mechanics, where particles have properties independent of the measurement settings. For example, a particle always has angular momentum along all axes simultaneously. As I said, we're dealing with a truly novel situation and we should recognize this.
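To spell out the single-sample-space obstruction in the simplest case (this is just the standard CHSH argument, added as a reminder): suppose all ##A_\alpha## and ##B_\beta## were ##\pm 1##-valued functions on one space ##\Lambda## with a probability measure. Then for any settings ##\alpha, \alpha', \beta, \beta'## and every ##\lambda \in \Lambda##,
$$A_\alpha(\lambda)\bigl(B_\beta(\lambda)+B_{\beta'}(\lambda)\bigr)+A_{\alpha'}(\lambda)\bigl(B_\beta(\lambda)-B_{\beta'}(\lambda)\bigr)=\pm 2,$$
because one bracket vanishes and the other equals ##\pm 2##. Averaging over ##\Lambda## gives
$$\bigl|\left<A_\alpha B_\beta\right>+\left<A_\alpha B_{\beta'}\right>+\left<A_{\alpha'} B_\beta\right>-\left<A_{\alpha'} B_{\beta'}\right>\bigr|\le 2,$$
while the quantum correlations ##\left<A_\alpha B_\beta\right>=-\cos(\beta-\alpha)## reach ##2\sqrt{2}## for suitable settings. No locality assumption enters anywhere; only the existence of all four functions on one space is used.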
 
  • Like
Likes mattt and DarMM
  • #226
stevendaryl said:
I still don't get it. You can certainly concoct a classical model with the same probabilities as, say, the EPR experiment. Bell's theorem just tells us that you can't do it using local interactions.

For example:
  1. Alice and Bob each submit their detector settings, ##\alpha## and ##\beta##, respectively.
  2. With probabilities ##\frac{1}{2} \sin^2(\frac{\theta}{2}), \frac{1}{2} \sin^2(\frac{\theta}{2}), \frac{1}{2} \cos^2(\frac{\theta}{2}), \frac{1}{2} \cos^2(\frac{\theta}{2})##, we select one of the pairs ##(up, up), (down, down), (down, up), (up, down)##, respectively (where ##\theta = \beta - \alpha##)
  3. Then Alice's result is the first element of the pair, and Bob's result is the second element of the pair.
That is implementable using classical stochastic processes. There is nothing quantum about it. And it has the same statistics as the quantum EPR experiment for anti-correlated spin-1/2 particles.
Remember that a single sample space is assumed in the derivation of Bell's theorem. See https://doi.org/10.1209/0295-5075/87/60007
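For concreteness, here is a minimal Python sketch (my own illustration, not code from the thread) of the nonlocal sampling recipe stevendaryl describes: the source sees both settings and draws outcome pairs with the singlet probabilities, and the empirical correlator comes out near ##-\cos(\beta-\alpha)##:
```python
import numpy as np

rng = np.random.default_rng(0)

def correlator(alpha, beta, n=200_000):
    """Nonlocal classical model: the source knows both settings and
    samples outcome pairs with the singlet probabilities."""
    theta = beta - alpha
    p_same = 0.5 * np.sin(theta / 2) ** 2   # P(up,up) = P(down,down)
    p_diff = 0.5 * np.cos(theta / 2) ** 2   # P(down,up) = P(up,down)
    pairs = np.array([(1, 1), (-1, -1), (-1, 1), (1, -1)])
    idx = rng.choice(4, size=n, p=[p_same, p_same, p_diff, p_diff])
    a, b = pairs[idx, 0], pairs[idx, 1]
    return np.mean(a * b)                   # empirical E(alpha, beta)

alpha, beta = 0.0, np.pi / 3
print(correlator(alpha, beta))              # close to -cos(pi/3) = -0.5
```
The point stands either way: such a model exists, but only because the sampling step uses both ##\alpha## and ##\beta## at once, i.e. it is nonlocal.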
 
  • Like
Likes DarMM
  • #227
lodbrok said:
Remember that a single sample space is assumed in the derivation of Bell's theorem. See https://doi.org/10.1209/0295-5075/87/60007

Bell's theorem shows the impossibility of a local hidden-variables model that reproduces EPR. It has nothing to say about nonlocal models. That's the point. If you don't assume locality, then it's trivial to come up with a model with the same predictions as EPR.
 
  • #228
stevendaryl said:
Bell's theorem shows the impossibility of a local hidden-variables model that reproduces EPR. It has nothing to say about nonlocal models. That's the point. If you don't assume locality, then it's trivial to come up with a model with the same predictions as EPR.
Yeah, so basically you assume a single sample space of local variables. So you can either drop locality (Bohmian Mechanics) or drop the single sample space (Copenhagen views).
 
  • Informative
Likes Demystifier
  • #229
rubi said:
In stevendaryl's model, the particle has the property "spin along the ##x## axis" if and only if ##\alpha## has the corresponding value. In this situation, the particle just doesn't have the property "spin along the ##y## axis", because the particle has precisely one property, namely ##A##. The measurement angle ##\alpha## determines which property the particle has. Of course, this is very odd compared to classical mechanics, where particles have properties independent of the measurement settings. For example, a particle always has angular momentum along all axes simultaneously. As I said, we're dealing with a truly novel situation and we should recognize this.
I like your way of phrasing it more than mine!

I basically tried to phrase the absence of particles having properties independent of the measuring device by saying how you couldn't "extract" the measuring device from any single sample space you attempt to construct (in posts #178 and #179). So the device is "embedded" in the results, linking into what's called in Quantum Foundations "Participatory Realism": the observing system determines which quantities are realized.

Of course your example of how an infinite number of functions are necessary just to describe a spin system nicely brings in Hardy's baggage theorem.
 
  • Like
Likes rubi
  • #230
DarMM said:
So you can either drop locality (Bohmian Mechanics) or drop the single sample space (Copenhagen views).
The question of this thread is this: If we drop the single sample space, do we drop knowledge or ontology?
 
  • #231
stevendaryl said:
It has nothing to say about nonlocal models.
But a Bell-like argument can be used to say something about non-local models too. In particular, it forbids non-local models in which particles have a spin property along more than one axis. In any case, one must come up with a four-variable type model (see post #225), even if one gives up locality. This is an instance of the Kochen-Specker theorem. If you want to reproduce quantum statistics with classical probability theory, you will get a contextual model, even if you make it non-local.

DarMM said:
Of course your example of how an infinite number of functions are necessary just to describe a spin system nicely brings in Hardy's baggage theorem.
Well, a model with infinitely many functions does not necessarily contain excess baggage. For example, in classical mechanics, you have infinitely many angular momentum observables ##L_{\vec n}## along all possible axes ##\vec n##. However, it is enough to know ##L_x##, ##L_y## and ##L_z##, because you can compute ##L_{\vec n} = \vec n \cdot \vec L##, so the infinitely many functions are actually related. In principle, it would be desirable to have a one-to-one mapping ##\hat S_{\alpha} \rightarrow A_\alpha## of quantum operators into classical random variables, but Hardy says that we should then expect ontological excess baggage.
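As a trivial illustration of the classical "no excess baggage" situation (my own sketch, with made-up numbers), the whole infinite family ##L_{\vec n}## is generated by just three stored components:
```python
import numpy as np

L = np.array([0.3, -1.2, 2.0])   # classical angular momentum vector (L_x, L_y, L_z)

def L_along(n):
    """Angular momentum along an arbitrary axis n: L_n = n . L / |n|,
    so the infinite family {L_n} is fixed by only three numbers."""
    n = np.asarray(n, dtype=float)
    return (n / np.linalg.norm(n)) @ L

print(L_along([0, 0, 1]))        # recovers L_z = 2.0
print(L_along([1, 1, 0]))        # L along the diagonal of the x-y plane
```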
 
  • Like
Likes mattt and DarMM
  • #232
rubi said:
Well, a model with infinitely many functions does not necessarily contain excess baggage. For example, in classical mechanics, you have infinitely many angular momentum observables ##L_{\vec n}## along all possible axes ##\vec n##. However, it is enough to know ##L_x##, ##L_y## and ##L_z##, because you can compute ##L_{\vec n} = \vec n \cdot \vec L##, so the infinitely many functions are actually related. In principle, it would be desirable to have a one-to-one mapping ##\hat S_{\alpha} \rightarrow A_\alpha## of quantum operators into classical random variables, but Hardy says that we should then expect ontological excess baggage.
My understanding was that in this case the ##A_{\alpha}## would need to be unrelated as such. If I'm missing something, please tell me.
 
  • Like
Likes rubi
  • #233
rubi said:
But a Bell-like argument can be used to say something about non-local models too. In particular, it forbids non-local models in which particles have a spin property along more than one axis. In any case, one must come up with a four-variable type model (see post #225), even if one gives up locality. This is an instance of the Kochen-Specker theorem. If you want to reproduce quantum statistics with classical probability theory, you will get a contextual model, even if you make it non-local.
Yeah, I think it's important to realize what a nonlocal model actually entails in order to see why people reject it. Obviously one factor is the "Relativity only looks like it is true" aspect, but also, as mentioned above, such models need to be contextual and require infinitely many degrees of freedom where QM has only a finite number (Hardy's baggage theorem).

For just two particles, and looking purely at discrete quantities like spin, there are an infinite number of contextual degrees of freedom, all communicating nonlocally.
 
  • Like
Likes rubi
  • #234
DarMM said:
My understanding was that in this case the ##A_{\alpha}## would need to be unrelated as such. If I'm missing something, please tell me.
No, you're absolutely correct. I only wanted to point out that having infinitely many observables doesn't necessarily qualify as having excess baggage. The baggage shows up nevertheless if you want these infinitely many observables to reproduce quantum predictions.
 
  • Like
Likes DarMM
  • #235
Demystifier said:
The question of this thread is this: If we drop the single sample space, do we drop knowledge or ontology?
I'd say we end up with a very odd and unintuitive ontology, the notion of participatory realism, where you can't obtain a detached "third person" account of a system. The best mathematical description of the world is a probability theory for subject-object outcomes that's non-representational (it doesn't say what things are actually like).

From there you can either be like Bohr ("Well, that's just the way things are") or QBism ("What must the world be like to force this?").

Bub and Healey are sort of in the middle: that's basically the best a mathematical description can do, but perhaps we can obtain some idea in the future of why.
 
  • #236
DarMM said:
I'd say we end up with a very odd and unintuitive ontology
If the point of interpretations is not to make measurable predictions (because we already have the unambiguous quantum formalism for that), then what is the point of an interpretation that is not intuitive?
 
  • Informative
Likes DarMM
  • #237
Demystifier said:
If the point of interpretations is not to make measurable predictions (because we already have the unambiguous quantum formalism for that), then what is the point of an interpretation that is not intuitive?
I would say the point of interpretations is to make quantum mechanics comprehensible and reasonable. We're looking for a satisfactory explanation. The inner workings of the world needn't be intuitive. Relativity isn't intuitive either, yet we find it very explanatory. If the laws of nature aren't intuitive, we need to adjust our intuition, not the laws of nature.
 
  • Like
Likes mattt, DarMM and martinbn
  • #238
Demystifier said:
If the point of interpretations is not to make measurable predictions (because we already have the unambiguous quantum formalism for that), then what is the point of an interpretation that is not intuitive?
Well have a look at the kinds of views left to us:
  1. Every system contains an infinite number of contextual degrees of freedom which include nonlocal degrees of freedom that contribute to the dynamics. Bohmian Mechanics and other nonlocal theories
  2. There is a continuous infinity of worlds, only one of which can be perceived. Everything you see about you is just a particular "slice" of the giant universal wavefunction that is correlated with the sensory apparatus of this version of you. Ultimately even space and time are an illusion, there is only the giant complex wavefunction and nothing else. Many Worlds
  3. The multiple potential futures communicate with the past to realize the present. Transactional Interpretation
  4. There is no dynamics. The history of the world is just that which solves a 4D constraint that does not permit a picture of a 3D world evolving in time in general. For microscopic objects in an experiment this constraint is solved against the presence of classical detector objects. For classical objects it is solved against the presence of other classical objects and so on. Thus the world isn't decomposable, i.e. things don't reduce to their parts because the parts have the whole and other objects on the scale of the whole as a constraint for their properties. Relational Block World
  5. QM isn't really true. The initial state of the universe was just such that we are determined to perform experiments that accidentally give statistics that make it look like it is true. Superdeterminism à la 't Hooft.
  6. The world is just such a way that mathematics only goes so deep. The best you can do is a probabilistic account with the observer embedded in the description to some degree. Beyond that the world becomes non-mathematical. Copenhagen, QBism
Anything anybody comes up with is going to fall into at least one of these categories. Currently nobody really has a view that combines them, but you could have multiple retrocausal worlds for example.

None of these categories makes me, at least, go "Wow, that's really intuitive and obvious!". As Matt Leifer has said, there will be no return to the classical world. Options two and five have a large issue in that they haven't managed to replicate experimental data yet, the missing proof of the Born rule being the reason for both.

I can easily see how somebody might favor option six over the others.

N.B.: I think there is a clear element of psychology here regarding preferences. Note how only in the Many Worlds view does the world remain non-contextual, reductive and mechanical. No surprise it is a favorite with programming and rationalist communities online.
 
  • Like
  • Informative
Likes mattt, dextercioby, julcab12 and 1 other person
  • #239
rubi said:
But a Bell-like argument can be used to say something about non-local models too. In particular, it forbids non-local models in which particles have a spin property along more than one axis.

Right. Those types of microscopic properties can't have values in a realistic model.
 
  • #240
Demystifier said:
The title of this thread is motivated by frequent arguments I had with other members here, especially @DarMM and @vanhees71 .

The so called "Copenhagen" interpretation of QM, known also as "standard" or "orthodox" interpretation, which is really a wide class of related but different interpretations, is often formulated as a statement that some things cannot be known. For instance, one cannot know both position and momentum of the particle at the same time. But on other hand, it is also not rare that one formulates such an interpretation as a statement that some things don't exist. For instance, position and momentum of the particle don't exist at the same time.

Which of those two formulations better describes the spirit of Copenhagen/standard/orthodox interpretations? To be sure, adherents of such interpretations often say that those restrictions refer to knowledge, without saying explicitly that those restrictions refer also to existence (ontology).

This is an interesting question that puts the finger on a difference in which paradigm one uses to describe natural science, the ontological one or the epistemological one. I think they are in a way complementary and do not really contradict each other, and neither is necessarily more primary.

I was thinking about this last week during a walk: what is more primary, the scientific method or the result of the scientific method?

The result is often a "model", implying some kind of resulting "ontology", with some kind of implicit assumptions on how this ontological structure responds to perturbation. This "model", once corroborated by the scientific method, simply provides us with predictive power and makes us more fit to survive.

The method, OTOH, is the process that verifies or questions the structure. And this process is necessarily a physical interaction. And one can argue that the method itself puts limitations on what is possible to infer. In particular, it limits the degree of certainty we can have.

I then came to the conclusion that the epistemological process of "physical inquiry" and the ontological structure are related like a compressed encoding of the same computational process that is making the physical inquiries.

I.e., the ontological structure is the result of something like abduction to the best predictive model, and it's necessarily truncated to fit the available capacity. So the physical inquiry singles out the ontology for a given observer (i.e., we sort of renormalize the ontology to the observer's complexity). And vice versa, the ontology for a given observer ENCODES its expectations about the behaviour you would expect from a hypothetical inquiry.

My hunch from the various writings I've read from founders like Bohr and Heisenberg is that they put most focus on the epistemological point. But if Demystifier hints that there is a duality here, I totally agree. It's just that I think the ontology is process-dependent ;) This is maybe where we disagree, but I agree that neither of them is more fundamental! It's a chicken-and-egg situation that is IMHO best understood from an evolutionary perspective, where you stop asking which comes first.

/Fredrik
 
  • Informative
Likes Demystifier
  • #241
DarMM said:
Well have a look at the kinds of views left to us:
  1. Every object contains an infinite number of contextual degrees of freedom that interact with the same infinite set of degrees of freedom of other objects nonlocally. Bohmian Mechanics and other nonlocal theories
  2. There is a continuous infinity of worlds, only one of which can be perceived. Everything you see about you is just a particular "slice" of the giant universal wavefunction that is correlated with the sensory apparatus of this version of you. Ultimately even space and time are an illusion, there is only the giant complex wavefunction and nothing else. Many Worlds
  3. The multiple potential futures communicate with the past to realize the present. Transactional Interpretation
  4. There is no dynamics. The history of the world is just that which solves a 4D constraint that does not permit a picture of a 3D world evolving in time in general. For microscopic objects in an experiment this constraint is solved against the presence of classical detector objects. For classical objects it is solved against the presence of other classical objects and so on. Thus the world isn't decomposable, i.e. things don't reduce to their parts because the parts have the whole and other objects on the scale of the whole as a constraint for their properties. Relational Block World
  5. QM isn't really true. The initial state of the universe was just such that we are determined to perform experiments that accidentally give statistics that make it look like it is true. Superdeterminism à la 't Hooft.
  6. The world is just such a way that mathematics only goes so deep. The best you can do is a probabilistic account with the observer embedded in the description to some degree. Beyond that the world becomes non-mathematical. Copenhagen, QBism
Anything anybody comes up with is going to fall into at least one of these categories. Currently nobody really has a view that combines them, but you could have multiple retrocausal worlds for example.
Where within your classification would you place the thermal interpretation?
 
  • #242
A. Neumaier said:
Where within your classification would you place the thermal interpretation?
Category 1. Though I should rephrase it possibly.

A given object has an infinite set of properties, e.g. ##\langle S_x \rangle##, ##\langle S_x^{2} \rangle##, etc., so you have Hardy's theorem manifesting. Those properties are contextual. And there are nonlocal properties as well.

The only thing is that ##A \otimes B## for a two-photon system isn't really a property of two individual photons in the TI. I'll try to rework it, as maybe "interacts" isn't the correct phrasing for the TI.
 
  • Informative
Likes Demystifier
  • #243
There I've edited it to say "infinite number of contextual degrees of freedom which include nonlocal degrees of freedom that contribute to the dynamics".

I think this allows for the distinction between Bohmian Mechanics and the TI. In Bohmian mechanics you'll have two photons interacting nonlocally, whereas in the TI you have a single object, which physicists colloquially call a "two-photon system", that happens to possess nonlocal dynamical properties.

So it removes the decomposability bias that "interact with other objects" implies.
 
  • #244
A short article from Griffiths:
https://arxiv.org/abs/1901.07050
It discusses many of the points that have come up in this thread, e.g. the CHSH inequalities coming from the assumption that the correlators ##E(a,b)## are marginals on a single sample space, and also counterfactual definiteness.

Section 5.2 relates to the point I mentioned in #115: the force of counterfactual indefiniteness isn't really affected by the fact that only one history occurs.

In case this provokes the single sample space discussion again, remember the point is that there isn't a single sample space for the quantum observables, not that you can't get a sample space for the experimental outcomes (although even that single sample space will be "odd", with the observer embedded, due to the lack of a single sample space for the observables).
 
  • #245
And another paper, by Quintino et al., shows a relation between nonlocality and incompatibility, as mentioned earlier in the thread:
https://arxiv.org/abs/1902.05841
Basically, if Alice's measurements and Bob's measurements are locally incompatible enough, you can have correlations that would require a hidden-variable theory to be nonlocal.

This follows a long literature on the subject, such as results showing that any set of local, dichotomic, incompatible POVMs can give Bell-violating statistics:
https://arxiv.org/abs/0905.2998
There's also a nice paper here showing the links between incompatibility and contextuality. It has some references for how contextuality is a resource for quantum computers:
https://arxiv.org/abs/1805.02032
Unsurprisingly, you have a direct link between when Alice's and Bob's observables would each alone require a contextual hidden-variable model and when the Alice-Bob correlations would require a nonlocal model. There's a sort of trade-off between contextuality and nonlocality in any given situation:
https://arxiv.org/abs/1507.08480
https://arxiv.org/abs/1603.08254
https://arxiv.org/abs/1307.6710
Note the third paper again points out how these results arise from assuming a common sample space.

A summary paper of the work by Cabello on the exclusivity principle has a nice demonstration that contextuality and nonlocality cases have the same graphs of incompatible outcomes, i.e. they can be the same thing embedded differently in spacetime:
https://arxiv.org/abs/1801.06347
And finally, to tie back to the observer/agent view of Copenhagen and QBism, the paper above and this paper:
https://arxiv.org/abs/1901.11412
get a good amount of QM out of generalized Bayesian reasoning.
 
  • Like
Likes dextercioby