Copenhagen: Restriction on knowledge or restriction on ontology?

In summary: if they're genuinely random then they can't be observed, so they must exist in some sense outside of observation.
  • #71
DarMM said:
That's perfect, but in order for it to be a sample space what's the measure?
You mean probability measure? It is the product of two probabilities: one is the probability of each outcome within the subset according to QM; the other is the probability of a particular combination of measurement settings, which is again a product of two probabilities determined by the RNG at each end (usually it would be 0.5*0.5 = 0.25).
 
  • #72
zonde said:
You mean probability measure? It is the product of two probabilities: one is the probability of each outcome within the subset according to QM; the other is the probability of a particular combination of measurement settings, which is again a product of two probabilities determined by the RNG at each end (usually it would be 0.5*0.5 = 0.25).
This violates ##\mu\left(\Omega\right) = 1## though. Tsirelson proved that these sets of pairs cannot be combined into a single sample space. I just don't think what you are doing is possible.
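The impossibility DarMM invokes can be made quantitative. A minimal numerical sketch (the singlet correlation ##E(a,b) = -\cos(a-b)## and the CHSH-optimal angle choices are assumptions for illustration, not taken from the thread): any single sample space covering all four setting pairs would force the CHSH combination into ##[-2, 2]##, while the quantum value reaches ##2\sqrt{2}##.

```python
import numpy as np

# Singlet-state correlation for analyzer angles a and b (standard QM result)
def E(a, b):
    return -np.cos(a - b)

# Assumed CHSH-optimal settings for the two sides
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

# CHSH combination: a joint distribution over all four settings
# (a single sample space) would force |S| <= 2
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2.828..., i.e. 2*sqrt(2), above the single-sample-space bound
```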
 
  • #73
stevendaryl said:
My feeling is that if it doesn't mean that, then it's badly named. This paper gives a definition:
https://arxiv.org/pdf/1605.04889.pdf
That seems in keeping with what I said: If it's counterfactually definite, then it means that the outcomes of measurements are functions of the variables describing the state of the system (plus measuring device) at the time the measurement is performed. That is not the case with a stochastic theory. In a stochastic theory, the outcome of a measurement is not determined (so it's not a function of any particular collection of variables).
That paper then goes on to discuss counterfactually definite Kolmogorov stochastic processes though. That definition is compatible with stochastic theories, and as far as I can see the authors seem to think so as well.

Take their next paragraph:
This definition means that the outcomes of measurements must be described by functions of a set of independent variables
So a measurement of any quantity say ##A## must be a function of independent variables ##\omega_i##, i.e. ##A\left(\omega_i \right)##.

However this is not in any way inconsistent with the ##\omega_i## being drawn from a space ##\Omega## with measure ##\mu\left(\Omega\right)## thus constituting a Kolmogorov probability theory.

The lack of counterfactual definiteness in QM is much deeper than that. For then one has random variables ##A## and ##B## that cannot even be considered as random variables over the same set of independent variables ##\omega_i##. It is this "deeper" (for lack of a better word) form of randomness that permits violation of the CHSH inequalities without violating locality.

The form of counterfactual definiteness in their paper is the standard one (found in Peres as well, as they say) and is equivalent to pure states being point-mass measures. Again, this is something QM lacks.
 
  • #74
stevendaryl said:
Counterfactual definiteness means that there is a definite answer to counterfactual questions: "What would have happened if I had done X rather than Y?" What do you mean by "counterfactual definiteness"?
So counterfactual definiteness means that if I can measure ##A## or ##B##, then if I measure ##A##, say, and obtain an outcome ##a##, there was also an outcome for ##B##, i.e. there is an outcome ##(a,b)##.

Thus the probability for a given value of ##A## is a marginal of the overall probability distribution:
$$p(a) = \int{p(a,b)db}$$
However this does not require the outcomes to be deterministic and thus does not exclude stochastic processes.

It means: even if Nature is fundamentally random there is an outcome for all variables.

It has the implication that if I have a system with ##N## observables and I measure ##M < N## of them, then the distributions for those ##M## is a marginal of the distribution for all ##N##.

Quantum Mechanics in these views rejects this. There isn't an outcome for all variables, and thus in many cases one can escape marginal constraints, which is what allows the CHSH inequality violations.

So I could have a stochastic theory where I measure spin in the z-direction and nature randomly generates a spin vector ##(S_x, S_y, S_z)##. I obtain ##S_z##, but all the others were present and had a value as well. The difference in these views of QM is that only ##S_z## is generated.

Both Peres and Nielsen & Chuang prove Bell's theorem via counterfactual definiteness, and they don't exclude stochastic theories with counterfactual definiteness from this proof. I don't agree with:
stevendaryl said:
That doesn't give any insight into quantum mechanics, because classical stochastic theories lack counterfactual definiteness, as well.

So the major thing is that in these views QM escapes nonlocality because if I measure ##(A_1,B_1)## the world only randomly generates ##(a_1,b_1)## and not ##(a_1,a_2,b_1,b_2)##. Thus QM describes a specific type of random world, not just a generic stochastic one from classical probability theory. I genuinely don't understand how this is an illusion of explanation, especially considering how this lack of marginalization can be put to use in Quantum Information theory.
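The marginal constraint described in this post can be checked by brute force. A sketch (a hypothetical enumeration, assuming ##\pm 1##-valued outcomes): if every run has a definite tuple ##(a_1, a_2, b_1, b_2)##, the CHSH combination equals ##\pm 2## for each tuple, so any probability mixture over one sample space obeys ##|S| \le 2##.

```python
from itertools import product

# Enumerate every counterfactually definite assignment of the four outcomes
values = [a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2
          for a1, a2, b1, b2 in product([-1, 1], repeat=4)]

print(set(values))  # {2, -2}: each definite assignment gives S = +/-2,
                    # so any probability mixture satisfies |S| <= 2
```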
 
  • #75
stevendaryl said:
The strangeness of quantum mechanics is not about the nondeterminism, but about the certainty. If Alice gets spin-up for a measurement of the z-component of spin for her particle, then it is certain that Bob will get spin-down for a measurement of the z-component of spin for his particle. It's not the lack of definiteness that is interesting, it is the certainty.

The certainty is no more interesting than the lack of definiteness.

The two things play off each other to become interesting.
What makes the certainty interesting is the lack of definiteness.
What makes the lack of definiteness interesting is the certainty.
Both are after the same thing.

We can reduce to:
How can there be correlation where the properties do not already exist?
Good question.
What's interesting is entanglement.
 
  • #76
DarMM said:
This violates ##\mu\left(\Omega\right) = 1## though. Tsirelson proved that these sets of pairs cannot be combined into a single sample space. I just don't think what you are doing is possible.
Any Bell inequality test that aims to close the communication loophole has to combine all four experiments into one single experiment. Experiments like that have been performed a number of times. Here is one: https://arxiv.org/abs/quant-ph/9810080

So you are not making much sense to me.
 
  • #77
zonde said:
Any Bell inequality test that aims to close the communication loophole has to combine all four experiments into one single experiment. Experiments like that have been performed a number of times. Here is one: https://arxiv.org/abs/quant-ph/9810080

So you are not making much sense to me.
Of course they combine them into one experiment, but not one sample space.

Let me try this. Yes your set of pairs is the set of all outcomes of the experiment. However it's not possible to put a probability measure on that set such that it becomes a sample space that replicates the statistics.

I think you're confusing "outcome" in the experimental sense with "outcome" in the probability theory sense.

The pairs are all outcomes of the experiment but they're not outcomes of a common probability space.

What I'm saying is very standard and a basic aspect of treatments of entanglement. Known since the works of Landau and Tsirelson. For example R.F. Streater "Lost Causes in and Beyond Physics" p.85:
The paradox is understood by the remark that there is no overall sample space in Quantum Mechanics

Seriously just try listing the value of the measure for each of your pairs and you'll see what I mean.
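Taking up the suggestion to "list the value of the measure": one can ask a linear-programming solver for a nonnegative, normalized measure on the 16 tuples that reproduces the four singlet correlations. A sketch assuming scipy is available and CHSH-optimal angles (both assumptions, for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Try to assign one probability to each of the 16 candidate outcomes
# (a1, a2, b1, b2) so the four measured QM correlations all come out right.
tuples = [(a1, a2, b1, b2)
          for a1 in (-1, 1) for a2 in (-1, 1)
          for b1 in (-1, 1) for b2 in (-1, 1)]

alice = [0.0, np.pi / 2]            # assumed CHSH-optimal settings
bob = [np.pi / 4, 3 * np.pi / 4]

A_eq, b_eq = [], []
for i in range(2):
    for j in range(2):
        A_eq.append([t[i] * t[2 + j] for t in tuples])
        b_eq.append(-np.cos(alice[i] - bob[j]))   # singlet correlation
A_eq.append([1.0] * 16)             # the measure must sum to 1
b_eq.append(1.0)

res = linprog(np.zeros(16), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 16)
print(res.success)                  # False: no such single measure exists
```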
 
  • #78
DarMM said:
Both Peres and Nielsen & Chuang prove Bell's theorem via counterfactual definiteness, and they don't exclude stochastic theories with counterfactual definiteness from this proof.

Well, I don't see any point in reproving Bell's theorem. It was clear enough originally.

So the major thing is that in these views QM escapes nonlocality because if I measure ##(A_1,B_1)## the world only randomly generates ##(a_1,b_1)## and not ##(a_1,a_2,b_1,b_2)##. Thus QM describes a specific type of random world, not just a generic stochastic one from classical probability theory. I genuinely don't understand how this is an illusion of explanation, especially considering how this lack of marginalization can be put to use in Quantum Information theory.

Well, I'm on the opposite end of this. I don't think that QM escapes nonlocality, and I think that at best, thoughts along these lines are just useful as classification; they can explain exactly what kind of nonlocality QM involves, and redefine that to be locality. As I said in a previous post, QM IS nonlocal, in the sense that facts about what will happen in one region of spacetime (Bob measuring the z-component of spin of his particle) can be predicted with certainty using distant information (the result of Alice's measurement), but cannot be predicted using only local information. Probabilities in quantum mechanics don't "factor" into local probabilities. That's an essential nonlocality of QM, and finding terminology that can classify that as actually local is just playing word games, it seems to me.

The real issue for quantum mechanics is that it doesn't give a way to assign probabilities until you pick an observable, or pick a measurement to perform. But, speaking realistically, whether something is a measurement and what exactly is being measured is dependent on your experimental setup. As I have said before, a measurement is simply an interaction---presumably described by quantum mechanics, since the measuring device is just so many electrons, protons and neutrons interacting through electroweak and nuclear forces. What makes something a "measurement" is that the interaction causes a microscopic quantity such as the z-component of an electron's spin to be amplified so that its value has macroscopic consequences. So this business about "different sample spaces" is, to me, a red herring when it comes to foundational issues of quantum mechanics. We don't really have a choice of different bases. What we really have is that some observables are macroscopic. Those macroscopic observables are approximately commuting, so they can be assigned values simultaneously. Then the whole framework of quantum mechanics boils down to a mathematical way of describing a stochastic theory of those macroscopic variables.

Stated this way, it's pretty blatant that we're treating some quantum-mechanical variables, the macroscopic ones, as more special than others.
 
  • #79
stevendaryl said:
Well, I don't see any point in reproving Bell's theorem. It was clear enough originally.
Nielsen & Chuang is a textbook. I don't think there is any harm in coming up with other proofs in expositions. It's very common in physics and maths to take different views on the same thing. Unless you think things should always be proved in their original form in textbooks in order to retain historical context.

stevendaryl said:
As I said in a previous post, QM IS nonlocal, in the sense that facts about what will happen in one region of spacetime (Bob measuring the z-component of spin of his particle) can be predicted with certainty using distant information (the result of Alice's measurement), but cannot be predicted using only local information. Probabilities in quantum mechanics don't "factor" into local probabilities. That's an essential nonlocality of QM, and finding terminology that can classify that as actually local is just playing word games, it seems to me.
I don't see how that could be true, since nonfactorizability can occur in classical probability theories that don't have product measures.

stevendaryl said:
Then the whole framework of quantum mechanics boils down to a mathematical way of describing a stochastic theory of those macroscopic variables.

Stated this way, it's pretty blatant that we're treating some quantum-mechanical variables, the macroscopic ones, as more special than others.
I don't disagree with any of this. I am only concentrating on the locality issue in these interpretations in my responses to you. Certainly these views have a very odd approach to physics, utterly different from classical mechanics, whereby the observer cannot be removed and their devices and outcomes are treated very differently from the objects which they are studying. If this discussion about locality and sample spaces reaches a conclusion then I will say more about these views and the measurement problem. However it's a separate issue.

stevendaryl said:
So this business about "different sample spaces" is, to me, a red herring when it comes to foundational issues of quantum mechanics.
I still think this is too strong. You are using the lack of a solution given to the measurement problem to dismiss any insight from the different probabilistic structure of the theory. The fact of different sample spaces has many implications in Quantum Information, it's not just nothing or a red herring because it doesn't provide a solution to the measurement problem.

Let me put it this way. It could theoretically be the case that what is going on in the Bell experiments is that only a subset of values is generated, and that this explains the apparent nonlocality. However this would not mean there isn't an issue with explaining how these values are generated (i.e. the measurement problem).

Something can be a solution to one problem without being a solution to all problems.

stevendaryl said:
So this business about "different sample spaces" is, to me, a red herring when it comes to foundational issues of quantum mechanics. We don't really have a choice of different bases. What we really have is that some observables are macroscopic. Those macroscopic observables are approximately commuting, so they be assigned values simultaneously.
They cannot in total. In a purely quantum treatment of the measurement interaction the macro-observables of the device ##A_i## and (for simplicity) a single observable of the microscopic system ##S_z## can be assigned simultaneous values. The Boolean algebra of random variables cannot be enlarged beyond this to include ##S_x##, even if ##S_x## is taken to be a macroscopic outcome. That's what the multiple sample spaces mean.

##S_x## and ##S_z## cannot be taken to have simultaneous values even when viewed as macroscopic consequences of the measurement interaction. That's a mathematical fact of the theory, encoded by their representation as Kolmogorov probabilities; there isn't a single Gelfand representation of the kind one usually uses to map C*-algebra elements to a Kolmogorov theory.
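The algebraic fact behind this can be checked directly on the spin-1/2 matrices; a minimal numerical sketch (units with ##\hbar = 1## assumed):

```python
import numpy as np

# Spin-1/2 observables (hbar = 1): they do not commute, so they have no
# common eigenbasis, hence no single sample space of joint sharp values.
Sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])
Sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])

commutator = Sx @ Sz - Sz @ Sx
print(np.allclose(commutator, 0))   # False

# A state sharp in Sz (the "up" eigenstate) has maximal spread in Sx:
up = np.array([1.0, 0.0])
var_Sx = up @ (Sx @ Sx) @ up - (up @ Sx @ up) ** 2
print(var_Sx)                       # 0.25
```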
 
  • #80
DarMM said:
I don't see how that could be true, since nonfactorizability can occur in classical probability theories that don't have product measures.

Well, I would be interested in an example. Not a mathematical example, but a physical example---a situation where there is such nonlocality in the probabilities.

I can make up an example: Suppose we have two coins. As far as anybody can tell, each is a fair coin---a 50/50 chance of resulting in "heads" or "tails". But for some unexplained reason, it's always the case that the ##n^{th}## flip of one coin produces the opposite result of the ##n^{th}## flip of the other, no matter how far apart the coins are.

Such a pair of coins could not be used to communicate FTL. But I think that people would assume that either there is something nonlocal going on, or that the coins are secretly preprogrammed to give specific results on each flip. Either hidden variables or nonlocality.
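The "secretly preprogrammed" branch of this dichotomy is easy to realize classically; a toy sketch (the shared-sequence mechanism and the names are illustrative):

```python
import random

# Classical "hidden variable" realization: both coins carry a shared random
# sequence fixed in advance at a common source.
def run_trials(n, seed=0):
    rng = random.Random(seed)
    shared = [rng.choice("HT") for _ in range(n)]          # set at the source
    coin_a = shared
    coin_b = ["T" if s == "H" else "H" for s in shared]    # always opposite
    return coin_a, coin_b

a, b = run_trials(1000)
print(all(x != y for x, y in zip(a, b)))   # True: perfect anticorrelation,
                                           # yet each coin alone looks fair
```

No signal passes between the coins at flip time, which is why such a pair could not be used to communicate.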

I don't disagree with any of this. I am only concentrating on the locality issue in these interpretations in my responses to you.

Well, it seems to me that the various ways of saying that quantum mechanics is local is just a matter of shunting the issues that are of interest elsewhere---onto the measurement problem, or the single outcome problem.

Certainly these views have a very odd approach to physics, utterly different from classical mechanics, whereby the observer cannot be removed and their devices and outcomes are treated very differently from the objects which they are studying. If this discussion about locality and sample spaces reaches a conclusion then I will say more about these views and the measurement problem. However it's a separate issue.

I don't think they are separate.

I still think this is too strong. You are using the lack of a solution given to the measurement problem to dismiss any insight from the different probabilistic structure of the theory. The fact of different sample spaces has many implications in Quantum Information, it's not just nothing or a red herring because it doesn't provide a solution to the measurement problem.

It's of no foundational interest. It might be interesting mathematics.

They cannot in total. In a purely quantum treatment of the measurement interaction the macro-observables of the device ##A_i## and (for simplicity) a single observable of the microscopic system ##S_z## can be assigned simultaneous values. The Boolean algebra of random variables cannot be enlarged beyond this to include ##S_x##, even if ##S_x## is taken to be a macroscopic outcome. That's what the multiple sample spaces mean.

What I'm saying is that in the spirit of Copenhagen we can forget microscopic variables except as computational tools for computing the probabilities for macroscopic variables. That's not very satisfying for someone who wants to understand what's going on, but the quantum formalism just doesn't give any more information than that.

##S_x## and ##S_z## cannot be taken to have simultaneous values

And, from the point of view of Copenhagen, who cares?
 
  • #81
stevendaryl said:
The real issue for quantum mechanics is that it doesn't give a way to assign probabilities until you pick an observable, or pick a measurement to perform.

That's no issue. The probabilities depend on the measurement performed. It's simply how nature is. How nature is can never be an issue. And whatever mathematics is calibrated to how nature is, is a mathematics without issue.

stevendaryl said:
But, speaking realistically, whether something is a measurement and what exactly is being measured is dependent on your experimental setup.

Correct.

stevendaryl said:
As I have said before, a measurement is simply an interaction---presumably described by quantum mechanics

No, that's just a way of thinking of it. I think measurement is a deeper circumstance than you described. But 'interaction' is a reliable way of achieving measurement.

stevendaryl said:
What makes something a "measurement" is that the interaction causes a microscopic quantity such as the z-component of an electron's spin to be amplified so that its value has macroscopic consequences.

You are saying that measurement is decoherence. Well, it is not certain that decoherence is the issue, as opposed to decoherence merely co-occurring with what actually is the issue.

It is a good rule of thumb though.
 
  • #82
stevendaryl said:
I don't think they are separate.
Okay I appreciate that. However there are many interpretations where their solutions are not directly related. For example the Relational Blockworld or Many Worlds. Since there is no theorem directly proving they are related or that an interpretation is forced to solve them in the same manner it's still valid not to think so. Of course that's not to say your intuition is wrong, but it's not obviously true. The closest to evidence that they are related is perhaps the Colbeck-Renner theorem, at least in my reading of it.

stevendaryl said:
It's of no foundational interest. It might be interesting mathematics.
Considering that multiple experts in Quantum Information and Quantum Foundations think otherwise, can you explain why? Or at least, in light of their opinions, why are you so confident to assert flat out that it certainly is of no foundational interest?

Do you think Contextuality, which is simply another phrasing of multiple sample spaces, is of any foundational interest?

Contextuality, Counterfactual definiteness, multiple sample spaces are all basically the same thing. I just can't see why they're irrelevant or only of mathematical interest.

stevendaryl said:
And, from the point of view of Copenhagen, who cares?
I don't understand the content of this aside from it being contrarian. Asher Peres, Christopher Fuchs, Jeffrey Bub and many others consider this of direct import. Just read Bub's "Why Bohr was (mostly) right" or his "Two quantum dogmas". Bohr himself considered this (complementarity) to be of fundamental importance. So it certainly seems like Copenhagen-type views care. So what exactly do you mean by this?
 
  • #83
DarMM said:
Okay I appreciate that. However there are many interpretations where their solutions are not directly related. For example the Relational Blockworld or Many Worlds.

I don't consider either of those to be 100% successful. In any case, the locality issue does not seem very pressing without resolving more fundamental questions.

Considering that multiple experts in Quantum Information and Quantum Foundations think otherwise, can you explain why?

Well, I would turn that around and ask why anyone would consider them of foundational interest. There is a difference between mathematical developments about quantum mechanics---for instance, quantum logic, quantum probability, the theory of Hilbert spaces, etc.---and the sort of foundational questions that I'm interested in, which is understanding what is going on in the real world. The mathematics of QM is very elegant, but I really don't think it sheds much light on what's actually happening. If it's necessary to separate "measurement" from other kinds of interactions, then it can't be the full story. The universe existed for billions of years before there were creatures capable of performing measurements. Physics presumably governed the universe during that time.

Do you think Contextuality, which is simply another phrasing of multiple sample spaces, is of any foundational interest?

No.

Contextuality, Counterfactual definiteness, multiple sample spaces are all basically the same thing. I just can't see why they're irrelevant or only of mathematical interest.

Yes, they're basically all the same thing.

I don't understand the content of this aside from it being contrarian.

I'm saying that if you view quantum mechanics as a stochastic theory of macroscopic observables, then it's not necessary to interpret microscopic variables as "observables". Microscopic "observables" are operators. There is no reason for operators to commute.
 
  • #84
stevendaryl said:
If it's necessary to separate "measurement" from other kinds of interactions, then it can't be the full story. The universe existed for billions of years before there were creatures capable of performing measurements. Physics presumably governed the universe during that time.
I think Copenhagenists would agree with you on this. They're not saying QM is the whole story and one of the main things they claim is that there isn't a solution to the measurement problem within QM. It is an interpretation of QM after all. I think this gets confusing due to what different interpretations aim at. Some try to say what the world is like, others try to say just what QM is like.

As an analogy, imagine if we had come across statistical mechanics by modelling the macrostate alone. Somebody might come along and realize that the macrostate was an epistemic quantity and tell you that statistical mechanics has an epistemic core. They would be correct about this despite not formulating an underlying theory.

I think these views are along similar lines.

stevendaryl said:
No.
stevendaryl said:
Well, I would turn that around and ask why anyone would consider them of foundational interest.
That's a bold claim and genuinely in disagreement with the majority of experts. In fact in most interpretations it's a fairly major element. Not only that but QM can be reconstructed from contextual considerations (One of my favorites is https://arxiv.org/abs/1801.06347, but there are others).

I think beyond this point I would have to advocate for these interpretations and say why contextuality is important. Which I won't do since I'm not convinced of these views myself and thus will leave it there. Perhaps some day in another thread you can explain more in depth why you disagree with this majority view. It would be interesting to hear.
 
  • #85
Once again, my claim is that the empirical content of quantum mechanics is summarized by:
  1. Associated with a system (no matter how complex) is a wave function (or a density matrix) that evolves according to Schrodinger's equation.
  2. At any time the probability of being in a certain macroscopic configuration is the square of the amplitude corresponding to that configuration.
That's certainly unsatisfactory in many ways, but I don't think that more sophisticated treatments help much. In particular, talking about measurements and eigenvalues and probability spaces seems to me to not add much. The only measurement we need to consider is the observation of which macroscopic configuration we are in, and that's the only probability space we need to consider.

I'm speaking at a foundational level. Of course, for practical purposes we don't want to have to consider the physics of huge systems involving astronomical numbers of particles. We want to consider systems involving one or two particles. But what I believe to be true is that the rule of thumb that we adopt when working with microscopic systems is equivalent to the above theory. We talk about "measuring a microscopic quantity" because we want to abstract away from the details of the hugely complicated measuring device. But what's really going on in a measurement is that the microscopic variable is affecting the measuring device, and what we're really measuring is some macroscopic property of that device.
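The two rules in this post can be sketched in toy form (the four-component amplitude vector over "macroscopic configurations" is purely illustrative):

```python
import numpy as np

# Toy version of rules 1 and 2: a vector of amplitudes over coarse-grained
# macroscopic configurations (the four components are illustrative)
psi = np.array([3 + 4j, 0, 1j, 2])
psi = psi / np.linalg.norm(psi)        # normalized state

probs = np.abs(psi) ** 2               # rule 2: p(config) = |amplitude|^2
print(probs.sum())                     # 1.0 (up to float rounding)
```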
 
  • #86
stevendaryl said:
No.
One final thing about this, since I'm so surprised by it. You know Bell's theorem follows from Contextuality. See Leifer's paper p.89:
https://arxiv.org/abs/1409.1570
Note that most ##\psi##-ontic interpretations are contextual as well (MWI and Bohmian Mechanics) so even there it matters.
 
  • #87
DarMM said:
That's a bold claim and genuinely in disagreement with the majority of experts.

Yes, it's a little presumptuous, but I just don't believe that the experts have made any substantial progress in answering the most fundamental questions about quantum mechanics.

In fact in most interpretations it's a fairly major element. Not only that but QM can be reconstructed from contextual considerations (One of my favorites is https://arxiv.org/abs/1801.06347, but there are others).

I think that there are intriguing hints in the formalism of quantum mechanics about what might be going on. The fact that quantum mechanics has rules for combining amplitudes that are directly analogous to the rules for combining probabilities in stochastic random walks is interesting. But at best, they are hints.

I think beyond this point I would have to advocate for these interpretations and say why contextuality is important.

As I have said on a number of occasions, the empirical content of quantum mechanics can be summarized as a stochastic theory of macroscopic configurations. There is no need for multiple probability spaces; the only probability space you need is the probability space for those macroscopic configurations.
 
  • #88
DarMM said:
Note that most ##\psi##-ontic interpretations are contextual as well (MWI and Bohmian Mechanics) so even there it matters.

To the extent that contextuality is relevant in Bohmian mechanics, it's derivable. It's not an input.
 
  • #89
stevendaryl said:
To the extent that contextuality is relevant in Bohmian mechanics, it's derivable. It's not an input.
I'm aware, but the point is that Kochen-Specker contextuality is logically prior to non-classical correlations. So saying contextuality has no foundational content, but CHSH violation does, is confusing to me.

stevendaryl said:
There is no need for multiple probability spaces; the only probability space you need is the probability space for those macroscopic configurations.
That's just a mathematical fact of QM. There's no injective Gelfand representation for the algebra of observables, so there simply are multiple sample spaces. I don't really understand mathematically how this probability space for macroscopic configurations avoids multiple sample spaces.

Regardless, I think I understand your view: even if you show that counterfactual indefiniteness (etc.) allows one to retain locality, you haven't really achieved anything by being silent on the measurement problem. I'm not convinced by this, considering the weight of contradictory evidence, but arguing it would involve "advocacy" as it were.
 
  • #90
DarMM said:
I don't really understand mathematically how this probability space for macroscopic configurations avoids multiple sample spaces.
The point is that after the whole statistics is collected, one knows the empirical distribution of all measurement results exactly; thus this defines a probability measure. In the limit of an infinite number of measurements you get a limiting measure for all measured variables. This measure, and only this, is sufficient to determine the outcomes - no matter how many sample spaces are used in the microscopic description.

Because macroscopic outcomes are definite (by definition of outcomes) they should be (under the standard interpretations) eigenproperties of the macroscopic state. This is @stevendaryl's argument.
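A toy version of this trial-indexed empirical measure, assuming for illustration 50/50 pointer readings and random setting choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Each trial records (chosen setting, pointer outcome), so the sample space
# is the space of trials: every recorded variable lives in one empirical
# measure, even though S_z and S_x share no common microscopic sample space.
settings = rng.choice(["z", "x"], size=n)
outcomes = rng.choice([1, -1], size=n)   # toy 50/50 pointer readings

measure = {(s, o): np.mean((settings == s) & (outcomes == o))
           for s in ("z", "x") for o in (1, -1)}

print(sum(measure.values()))   # 1 (up to rounding): one normalized measure
```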
 
  • #91
DarMM said:
That's just a mathematical fact of QM. There's no injective Gelfand representation for the algebra of observables

My point is that the word "observable" is misapplied. Microscopic observables are not observables. The fact that the operators ##x## and ##\frac{\partial}{\partial x}## don't have simultaneous eigenstates is not a fact about quantum mechanics, it's a fact about functions which was true before quantum mechanics was ever invented.
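That pre-quantum fact about functions can be verified symbolically; a sketch assuming sympy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

# The commutator [d/dx, x] acting on an arbitrary function f(x):
# d/dx (x f) - x (d/dx f) = f, independently of quantum mechanics.
commutator = sp.diff(x * f, x) - x * sp.diff(f, x)
print(sp.simplify(commutator))   # f(x)
```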
 
  • #92
stevendaryl said:
Microscopic observables are not observables.
This rejects all traditional interpretations and brings you close to the thermal interpretation, where operators are q-observables only, and observations are inaccurate values of ##\langle A\rangle##.
 
  • #93
stevendaryl said:
My point is that the word "observable" is misapplied. Microscopic observables are not observables. The fact that the operators ##x## and ##\frac{\partial}{\partial x}## don't have simultaneous eigenstates is not a fact about quantum mechanics, it's a fact about functions which was true before quantum mechanics was ever invented.

Actually, ##x## doesn't have any eigenstates at all, but you know what I mean.
 
  • #94
stevendaryl said:
My point is that the word "observable" is misapplied. Microscopic observables are not observables. The fact that the operators ##x## and ##\frac{\partial}{\partial x}## don't have simultaneous eigenstates is not a fact about quantum mechanics, it's a fact about functions which was true before quantum mechanics was ever invented.
Well, certainly the mathematical properties of those operators existed prior to QM, but I'm not sure that means they have no physical content with regard to QM. I mean, it was a fact that there were non-trivial adjoint bundles prior to Yang-Mills theories, but that doesn't mean these topological sectors are of no physical import in Yang-Mills theories. Otherwise we'd be close to saying the mathematics of a theory has no physical content.

Or it's like saying the Riemann tensor already had its decomposition into Weyl and Ricci curvature prior to GR. Sure, but the decomposition still means something physically via GR, where such tensors have a physical meaning.

It directly tells you that there is no state in which those two observables both have sharp values beyond a certain limit.
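The limit in question can be made explicit: from the operator algebra alone one gets the Robertson relation, which for ##x## and ##p## gives the usual bound:

```latex
\Delta A\,\Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B]\rangle\bigr|,
\qquad
[x,p] = i\hbar \;\Longrightarrow\; \Delta x\,\Delta p \ge \frac{\hbar}{2}.
```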
 
Last edited:
  • Like
Likes dextercioby
  • #95
A. Neumaier said:
The point is that after the whole statistics is collected, one knows the empirical distribution of all measurement results exactly; thus this defines a probability measure. In the limit of an infinite number of measurements you get a limiting measure for all measured variables. This measure, and only this, is sufficient to determine the outcomes - no matter how many multiple sample spaces are used in the microscopic description
Over what space though? Over the space of all trials, i.e. ##\omega_i## with ##i## a trial index?

I think I get what you mean, but regardless there won't be a common ##S_z## and ##S_x## sample space.
 
  • #96
Do either of you know of a reference for this macroscopic configuration probability measure idea?
 
  • #97
DarMM said:
Over what space though? Over the space of all trials, i.e. ##\omega_i## with ##i## a trial index?
Yes.
DarMM said:
I think I get what you mean, but regardless there won't be a common ##S_z## and ##S_x## sample space.
But there is a common ##S_z'## and ##S_x'## sample space, where ##S_z'## and ##S_x'## refer to the macroscopic pointer variables actually read when taking a measurement. @stevendaryl's claim is that only these need to be explained by an interpretation of QM, since these are what is actually read, whereas the connection between them and the measured system is (in principle) a matter of theory.
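As a toy illustration of that reading (my own sketch, not from the thread: the spin state, the angle ##\theta##, and the ##\pm 1## pointer readings are all assumptions made for the example), one can take the sample points to be (setting, pointer reading) pairs, with the setting part of the sample point rather than a condition on it:

```python
import math
import random

def sample_trial(theta=math.pi / 3):
    """One trial from a single sample space of (setting, reading) pairs.

    The spin-1/2 state is taken at angle theta from the z-axis; the
    pointer reading is drawn from the Born probabilities for the
    randomly chosen setting.
    """
    setting = random.choice(["Sz'", "Sx'"])
    if setting == "Sz'":
        p_up = math.cos(theta / 2) ** 2   # Born rule: P(Sz = +1)
    else:
        p_up = (1 + math.sin(theta)) / 2  # Born rule: P(Sx = +1)
    reading = +1 if random.random() < p_up else -1
    return (setting, reading)

# Empirical distribution over many trials, in the spirit of the
# "measure over all trials" discussed above:
random.seed(0)
trials = [sample_trial() for _ in range(20000)]
```

The point being illustrated is only that the macroscopic readings fit in one classical probability space once the setting is included in the sample point; by itself this says nothing about whether that renders contextuality irrelevant.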
 
  • #98
DarMM said:
Do either of you know of a reference for this macroscopic configuration probability measure idea?
It seems to be @stevendaryl's original idea, which he tried to communicate here on PF.
 
  • Like
Likes DarMM
  • #99
A. Neumaier said:
It seems to be @stevendaryl's original idea, which he tried to communicate here on PF.
Thank you. I'll need to think about it a bit, as I'm not so sure it is correct. Despite ##S_z'## and ##S_x'## being macroscopic quantities, I'm not so sure they really do have a common sample space in a way that meaningfully contradicts, or renders irrelevant, the contextuality-based observation that they don't. Especially in light of some discussions by Jeffrey Bub, who considered exactly this in his papers on the interpretation of QM.

@stevendaryl has had a lot of original ideas on this thread:
  1. The foundational irrelevancy of Contextuality
  2. The physical irrelevancy of the operators representing observables not commuting
  3. The irrelevancy of different sample spaces due to the quantities from each being amplified up to the macroscopic realm
Each of these ideas alone runs counter to most thinking in quantum foundations, quantum probability and quantum information. So I'll have to stop there to absorb and respond to them due to their novelty.
 
  • #100
DarMM said:
Thank you. I'll need to think about it a bit, as I'm not so sure it is correct. Despite ##S_z'## and ##S_x'## being macroscopic quantities, I'm not so sure they really do have a common sample space in a way that meaningfully contradicts, or renders irrelevant, the contextuality-based observation that they don't. Especially in light of some discussions by Jeffrey Bub, who considered exactly this in his papers on the interpretation of QM.

@stevendaryl has had a lot of original ideas on this thread:
  1. The foundational irrelevancy of Contextuality
  2. The physical irrelevancy of the operators representing observables not commuting
  3. The irrelevancy of different sample spaces due to the quantities from each being amplified up to the macroscopic realm
Each of these ideas alone runs counter to most thinking in quantum foundations, quantum probability and quantum information. So I'll have to stop there to absorb and respond to them due to their novelty.
I believe that 1. and 2. are a consequence of 3., and my previous remarks apply to 3.
 
  • #101
A. Neumaier said:
I believe that 1. and 2. are a consequence of 3., and my previous remarks apply to 3.
Yes that is correct, they're all ultimately the same thing in a sense.
 
  • #102
stevendaryl said:
There is no need for multiple probability spaces; the only probability space you need is the probability space for those macroscopic configurations.
I've tried to work this out, but I'm not seeing it. Can you give an example with some mathematical details, even just sketched rather than worked out in full?

Like what exactly are the macroscopic degrees of freedom, what sample space do they use and how do you still end up with CHSH violations despite the single sample space?
 
  • #103
DarMM said:
I still think this is too strong. You are using the lack of a solution to the measurement problem to dismiss any insight from the different probabilistic structure of the theory. The fact of different sample spaces has many implications in quantum information; it's not nothing, or a red herring, just because it doesn't provide a solution to the measurement problem.

stevendaryl said:
Well, it seems to me that the various ways of saying that quantum mechanics is local are just a matter of shunting the issues that are of interest elsewhere: onto the measurement problem, or the single-outcome problem.

Is this related?
Eric G. Cavalcanti
https://arxiv.org/abs/1602.07404
 
  • Like
Likes DarMM
  • #104
DarMM said:
I've tried to work this out, but I'm not seeing it. Can you give an example with some mathematical details, even sketched not necessarily in full detail.

Like what exactly are the macroscopic degrees of freedom, what sample space do they use and how do you still end up with CHSH violations despite the single sample space?
Have you seen Eberhard's proof of the Bell inequalities? It might be what you are asking for. It is contained in this paper: https://journals.aps.org/pra/abstract/10.1103/PhysRevA.47.R747. The paper is behind a paywall, but I have posted the Bell-inequality part of the proof here: https://www.physicsforums.com/threa...y-on-probability-concept.944672/#post-5977632
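For the numerical side of the question above, here is a minimal sketch of where the CHSH violation comes from, assuming only the singlet-state correlation ##E(a,b) = -\cos(a-b)## (the specific analyzer angles are chosen for illustration; they are the standard violation-maximizing settings):

```python
import math

def E(a, b):
    """Singlet-state correlation for analyzer angles a, b (radians)."""
    return -math.cos(a - b)

# Settings that maximize the quantum CHSH value (the Tsirelson bound):
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)  # 2.828..., i.e. 2*sqrt(2) > 2, the classical CHSH bound
```

Any single-sample-space account has to reproduce this value of ##S##, which is what makes the question nontrivial.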
 
  • #105
vanhees71 said:
What's local are the interactions
Interactions between what? Between field operators, obviously. But according to the statistical ensemble interpretation, the field operator is a tool to analyze the ensembles of systems, not the individual systems. Hence the local interactions are interactions between the ensembles of systems, not between the individual systems.

So what, then, are the interactions between the individual systems? The statistical ensemble interpretation of QFT does not say. But Bell's theorem tells us that if interactions between individual systems exist at all, then those interactions are nonlocal.
 