Quantum Mechanics without Measurement

In summary: "quantum mechanics without measurement" refers to theoretical frameworks that describe the behavior of quantum systems without giving measurement or observation a special role. Standard quantum mechanics says that particles can exist in multiple states simultaneously, known as superposition, until they are measured or observed. This idea challenges traditional concepts of causality and determinism, and has led to groundbreaking theories and applications in fields such as quantum computing and cryptography. The concept of measurement, however, remains a central and controversial aspect of quantum mechanics, with ongoing debate and research surrounding its implications and limitations.
  • #71
atyy said:
Regardless of whether CH is local, I think it is nonrealistic because there are multiple incompatible frameworks, and you can choose any one of these frameworks to describe "reality".

I have to be honest and admit that I don't understand CH well enough to judge if this is the case or not. However, if CH is nonrealistic, then Griffiths has paid the "high price" that he rejects in his book, and this, to me, makes the story even more inconsistent...

But if we assume that CH is nonrealistic, could you explain – step by step – what happens in an EPR-Bell experiment, according to CH and multiple incompatible frameworks?

atyy said:
To me the question is whether CH is nonlocal and nonrealistic, or local and nonrealistic.

If CH is nonlocal and nonrealistic... Griffiths has paid the "high price" twice, and then maybe we are beyond inconsistent storytelling...

atyy said:
Regarding "classical logic": would it be it would be more accurate to say, like Devils Avocado's comment above, that the usual rules of probability to classical reality are not applied?

To avoid any confusion, maybe I should explain what I mean by "classical probability" (in this allegory):

  • Take a coin, and let it spin at very high speed on both vertical and horizontal axes.

  • Initial conditions are completely unknown and the outcome is regarded as 100% random.

  • Send the coin toward a metal plate with a vertical and a horizontal slit forming a +.

  • The coin always goes through either the vertical or the horizontal slit, with a 50/50 chance.

  • Now we introduce a second coin, with exactly the same properties, and send both coins in opposite directions towards two space-like separated metal plates, each with a vertical/horizontal slit +.

  • When we check the outcome, the two coins are always correlated, i.e. if they have gone through the same orientation they show the same face, and if they have gone through the opposite orientation they show the opposite face.

  • We conclude that "something magical" happened at the source when we created the spin of the two coins, which makes them act randomly but correlated.

  • We also conclude that there is no "spooky action at a distance" going on (the source is the explanation), and that these coins are real; it's just that with current technology we can't inspect all their properties.
This is "classical probability". However, now we change the setup:

  • We modify the metal plates to tilt randomly between 0° = + and 45° = X, and repeat the experiment.

  • To our surprise, it turns out that when the metal plates have the same tilting, we get exactly the same results as in the previous setup. But when the metal plates have different tilting, we get a random correlation of 50% heads or tails, and there is no explanation whatsoever of how the two space-like separated coins 'knew' they were going through different orientations; the "common source explanation" can't save us this time.

  • Now an extensive debate starts – whether the coins are real or not, or if there is some non-local influence on the coins – which is still ongoing...
This would be "non-classical probability".
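
For what it's worth, here is a minimal Monte Carlo sketch of the statistics in this allegory (my own illustration, assuming photon-style cos² correlations between the two "coins"). Note that the match probability is taken directly from the quantum prediction; no local mechanism for it is offered, which is exactly the puzzle:

[code]
import random
import math

def run_trials(n=100_000):
    # Assumed correlation rule: P(same outcome) = cos^2(tilt_A - tilt_B).
    # This reproduces the statistics described above; it is NOT a local
    # hidden-variable model, and that is the whole point.
    same_angle = [0, 0]  # [trials, matches] when the plates share a tilt
    diff_angle = [0, 0]  # [trials, matches] when the tilts differ
    for _ in range(n):
        a = random.choice([0.0, 45.0])  # Alice's plate tilt (degrees)
        b = random.choice([0.0, 45.0])  # Bob's plate tilt (degrees)
        match = random.random() < math.cos(math.radians(a - b)) ** 2
        bucket = same_angle if a == b else diff_angle
        bucket[0] += 1
        bucket[1] += match
    print("same tilt: match rate = %.3f" % (same_angle[1] / same_angle[0]))
    print("diff tilt: match rate = %.3f" % (diff_angle[1] / diff_angle[0]))

run_trials()  # prints roughly 1.000 and 0.500
[/code]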
 
  • #72
Demystifier said:
If Bell was nominated for the Nobel Prize, it was because he made a new measurable prediction, which was tested by an actual experiment. I don't think that it was the case with Griffiths.

Ehh... it was meant more like a 'joke'... sorry, my silly humor again... :blushing:

Demystifier said:
Speaking of nominations for the Nobel Prize, is there an official site where one can see who was nominated and when?

I don't think so; the committee is very secretive, and nominations are kept secret for 50 years.
http://www.nobelprize.org/nomination/physics/

But some (old) data are available in the nomination database (not Physics though??):
http://www.nobelprize.org/nomination/archive/

But there is always 'talk', and I take it for granted that Jeremy Bernstein has somehow gotten the correct information.
(page 13)
http://arxiv.org/abs/1007.0769
 
  • #73
DevilsAvocado said:
And the "high price" is to abandon either locality or realism [...]
That's not what he says in your quote. He says if we want to construct a hidden variables theory, Bell tells us that we have to embrace either non-locality or backwards causation. His "solution" is simple: like Bohr, he doesn't want to construct a hidden variables theory in the first place. So what he rejects is EPR realism. Calling his theory realistic may be sensible from another point of view but this is certainly not EPR realism which is what Bell's theorem is about.

/edit: I also wrote a statement about locality here, but actually, I think this should be discussed in its own thread.
 
Last edited:
  • #74
I think a lot of confusion arises because there isn't much clarity about the terms realism and locality.

Do we not just consider CH to have the same types of locality and realism as MWI?

Locality is preserved, though splitting is global and instantaneous.

Realism is preserved in that all observers in the same framework have the same reality.

These concepts are compatible with those which apply to other interpretations too, since they are not concerned with splitting, worlds or frameworks, though in those interpretations it is not possible for both to be preserved.

If we follow these, I don't see how the Bell inequality can apply, because there is no hidden variable or information transfer.

Is it not true that in order to calculate the Bell inequality in this context, we would have to incorporate quantities outside of the universe?

I don't see how there is a modification to the rules of logic here, simply a clarification that in order to generate inference by combining statements, they must pertain to the same universe.

None of this undermines the significance of Bell's work, but its applicability was to information transfer via hidden variables, which concerns neither the MWI nor CH.
 
Last edited:
  • #75
kith said:
That's not what he says in your quote. He says if we want to construct a hidden variables theory, Bell tells us that we have to embrace either non-locality or backwards causation.

And this shows that Griffiths has not gotten the complete picture, since there are other non-realist options besides backwards causation. Shouldn't a professor, claiming to have a new solution to this problem, be better informed?

kith said:
His "solution" is simple: like Bohr, he doesn't want to construct a hidden variables theory in the first place. So what he rejects is EPR realism. Calling his theory realistic may be sensible from another point of view but this is certainly not EPR realism which is what Bell's theorem is about.

Most of us don't care what Griffiths wants; we're more interested in what he can prove (which seems to be nothing, thus far). Introducing something as "almost real" and then naming this new invention "consistent" would generally be considered a joke.

I don't know how many times I have asked this question:
Could you please explain – step by step – what happens in an EPR-Bell experiment, according to CH and the new "Almost-realism"?

(Even if Griffiths doesn't acknowledge EPR realism, I sure hope he accepts experimental outcomes...)
 
  • #76
craigi said:
Is it not true that in order to calculate the Bell inequality in this context, we would have to incorporate quantities outside of the universe?

I don't see how there is a modification to the rules of logic here, simply a clarification that in order to generate inference by combining statements, they must pertain to the same universe.

I could be wrong, but my firm belief is that if we incorporate "stuff" outside this universe to solve scientific problems inside this universe, we might as well move to the Vatican and finish our theses inside those walls.

It would probably even be possible to prove the existence of the flying Centaur, if we just had the option to throw any unpleasant data in the "I-Don't-Like-Bin" and toss it out of this universe.

But I could be wrong, of course...

[Note: strong irony warning]
 
  • #77
DevilsAvocado said:
And this shows that Griffiths has not gotten the complete picture, since there are other non-realist options besides backwards causation. Shouldn't a professor, claiming to have a new solution to this problem, be better informed?
I'm a bit puzzled by your fixation on this. Why exactly do you think that Griffiths thinks something about Bell's theorem needs to be "solved"? In everything I have read from him, Griffiths says that it doesn't make sense to search for hidden variable theories because Bell's theorem tells us that they are ugly. This is simply the mainstream view. I don't know what he says about the definition of the terms "locality" and "realism", but this is just a semantic sidenote and really not the core issue of this thread.

What Griffiths wants to solve (and what caused stevendaryl to open this thread) is the problem that textbooks assign a special role to the concept of measurement and make it seem like QM can't be used to describe the measurement process.
 
  • #78
DevilsAvocado said:
And this shows that Griffiths has not gotten the complete picture, since there are other non-realist options besides backwards causation. Shouldn't a professor, claiming to have a new solution to this problem, be better informed?



Most of us don't care what Griffiths wants; we're more interested in what he can prove (which seems to be nothing, thus far). Introducing something as "almost real" and then naming this new invention "consistent" would generally be considered a joke.

I don't know how many times I have asked this question:
Could you please explain – step by step – what happens in an EPR-Bell experiment, according to CH and the new "Almost-realism"?

(Even if Griffiths doesn't acknowledge EPR realism, I sure hope he accepts experimental outcomes...)

I'm not sure what it is about Griffiths' interpretation that's bugging you so much, but none of the interpretations prove any new physics. That is not their purpose. Their goal is epistemological rather than ontological. Some, including myself, believe that an interpretation could hint at something of ontological value, but this hasn't happened yet.

Of course Griffiths understands the EPR experiments very well. He is one of the leading experts in the field of QM and by no means denies the results of the experiments, which are not in the slightest inconsistent with his interpretation.
 
Last edited:
  • #79
DevilsAvocado said:
I could be wrong, but my firm belief is that if we incorporate "stuff" outside this universe to solve scientific problems inside this universe, we might as well move to the Vatican and finish our theses inside those walls.

It would probably even be possible to prove the existence of the flying Centaur, if we just had the option to throw any unpleasant data in the "I-Don't-Like-Bin" and toss it out of this universe.

But I could be wrong, of course...

[Note: strong irony warning]

That's the point: we don't incorporate stuff outside of this universe, and that is where part of the Bell inequality calculation lies, under the CH interpretation. I can understand a reactionary attitude to this terminology; I don't like it either, because it sounds like something from science fiction or perhaps, as you suggest, theology. You can just consider it stuff that does not happen.

All of the interpretations throw out stuff they don't like in favour of stuff that they do, but none of these things are tangible physical things; they are purely concepts that we use to try to make sense of QM.
 
Last edited:
  • #80
kith said:
In everything I have read from him, Griffiths says that it doesn't make sense to search for hidden variable theories because Bell's theorem tells us that they are ugly. This is simply the mainstream view.

Agreed, a lot of things don't make sense. Regarding ugly HV, I think that is something you have to confront Demystifier, or maybe atyy, with; personally I'm agnostic.

kith said:
I don't know what he says about the definition of the terms "locality" and "realism", but this is just a semantic sidenote and really not the core issue of this thread.

Okay, "semantic sidenote" is fine by me, with the reservation that if an interpretation can't handle Bell's theorem it's basically dead, and if I'm not mistaken, that's also what stevendaryl said last time he posted.
 
  • #81
craigi said:
I'm not sure what it is about Griffiths' interpretation that's bugging you so much, but none of the interpretations prove any new physics. That is not their purpose. Their goal is epistemological rather than ontological. Some, including myself, believe that an interpretation could hint at something of ontological value, but this hasn't happened yet.

But what I think the Devil's Avocado wants (and I couldn't find it by googling) is a demonstration of how CH works by applying it to the EPR problem. What are the possible sets of consistent histories, and what would be an example of an inconsistent set?

It's a little complicated to see how to apply the technical definition, because the notion of "consistency" involves time evolution of projection operators. But once you involve macroscopic objects like measuring devices, we don't have a comprehensible expression for the time evolution (because it involves an ungodly number of particles).

Let me just think out loud:

My guess would be that a (simplified, approximate) history would have 6 elements:
  1. Alice's detector orientation. ([itex]\theta_A[/itex])
  2. Bob's detector orientation. ([itex]\theta_B[/itex])
  3. A spin state for Alice's particle immediately before detection. ([itex]\sigma_A[/itex])
  4. A spin state for Bob's particle immediately before detection. ([itex]\sigma_B[/itex])
  5. Alice's result (spin up or spin down) ([itex]R_A[/itex])
  6. Bob's result (spin up or spin down) ([itex]R_B[/itex])

So a history is a vector of six elements:
[itex]\langle \theta_A, \theta_B, \sigma_A, \sigma_B, R_A, R_B \rangle[/itex]

To apply Griffiths' approach, we need to first figure out which collections of 6-tuples are consistent. What I think is true is that any macroscopic state information is consistent, in Griffiths' sense (although it might have probability zero). So the rules for consistent histories, whatever they are, should only affect the unobservable state information (the particle spins).
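
For reference, the standard consistency criterion from the CH literature (as I recall it, so treat this as a sketch rather than Griffiths' exact formulation): a history [itex]\alpha[/itex] with projectors [itex]P_{\alpha_1}, \dots, P_{\alpha_n}[/itex] at times [itex]t_1 < \dots < t_n[/itex] is represented by the chain operator [itex]C_\alpha = P_{\alpha_n}(t_n) \cdots P_{\alpha_1}(t_1)[/itex] (Heisenberg-picture projectors), and a family of histories is consistent when the decoherence functional [itex]D(\alpha,\beta) = \mathrm{Tr}[C_\alpha\, \rho\, C_\beta^\dagger][/itex] satisfies [itex]\mathrm{Re}\, D(\alpha,\beta) = 0[/itex] for all pairs [itex]\alpha \neq \beta[/itex]. Probabilities are then assigned as [itex]\Pr(\alpha) = D(\alpha,\alpha)[/itex].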
 
  • #82
stevendaryl said:
But what I think the Devil's Avocado wants (and I couldn't find it by googling) is a demonstration of how CH works by applying it to the EPR problem. What are the possible sets of consistent histories, and what would be an example of an inconsistent set?

Thanks a lot Steven, finally! :thumbs:

I'll study your explanation and get back.
 
  • #83
I have now read Griffiths' paper and I am not sure what to think of it.

Firstly, my previous notion of one history being the "right" one isn't what he has in mind (he explicitly acknowledges different, mutually exclusive histories to be equally valid in the middle of section VI). So the catch phrase "Many worlds without the many worlds" doesn't seem appropriate to me.

Now what does he do? In section V, he uses a toy model to analyze the measurement process. This analysis seems conceptually not very different from what Ballentine or an MWI person would do.

In section VI, he introduces his families of histories to explore which assumptions about properties before performing a measurement can be combined consistently. A history is a succession of statements about the system, while a family of histories is a set of possible histories. Although within one family the realized outcome of an experiment may be compatible with only one history, different views about the possible intermediate states, corresponding to different families, are possible. As mentioned above, he thinks that all of these families / points of view about intermediate states should be considered equally valid or "real". Therefore, CH seems more like a meta-interpretation to me.

Now what I don't understand is the relevance of the existence of more than one family of histories to the measurement problem. For example, his analysis of the measurement process takes place before he even introduces them.
 
  • #84
DevilsAvocado said:
Thanks a lot Steven, finally! :thumbs:

I'll study your explanation and get back.

I haven't explained anything. I was trying to publicly work out what the CH description of EPR might look like. I'm not finished, because I'm stuck on figuring out which collections of histories are "consistent" in Griffiths' sense.
 
  • #85
kith said:
Now what I don't understand is the relevance of the existence of more than one family of histories to the measurement problem. For example, his analysis of the measurement process takes place before he even introduces them.

The way I understand it is that we choose to use a family of histories in which macroscopic objects (e.g., measuring devices) have definite macroscopic states. But one could instead choose a different family of histories, where macroscopic objects are in macroscopic superpositions. The latter family would be pretty much useless for our purposes, but would be perfectly fine as far as the Rules of Quantum Mechanics (and the CH interpretation) are concerned. So CH makes it a matter of usefulness that we treat measuring devices specially--it's a choice on our part, rather than being forced on us by the physics. So to me it seems very much like Copenhagen, except that the "wave function collapse caused by measurement" is no longer considered a physical effect, but is instead an artifact of what we choose to analyze.

I think that in some ways, CH is like Copenhagen, and in other ways, it's like MWI, although there are two completely different notions of "alternatives" considered at the same time. Within a particular family of histories, there are alternative histories. So that's one notion of alternative, and it's the one that people normally think of when they think of many worlds. But there is a second kind of alternative, which is the choice of which family to look at.
 
  • #86
Here's a first note that maybe could help you get further:

stevendaryl said:
My guess would be that a (simplified, approximate) history would have 6 elements:
  1. Alice's detector orientation. ([itex]\theta_A[/itex])
  2. Bob's detector orientation. ([itex]\theta_B[/itex])
  3. A spin state for Alice's particle immediately before detection. ([itex]\sigma_A[/itex])
  4. A spin state for Bob's particle immediately before detection. ([itex]\sigma_B[/itex])
  5. Alice's result (spin up or spin down) ([itex]R_A[/itex])
  6. Bob's result (spin up or spin down) ([itex]R_B[/itex])

So a history is a vector of six elements:
[itex]\langle \theta_A, \theta_B, \sigma_A, \sigma_B, R_A, R_B \rangle[/itex]

If you have definite spin in 3 & 4, everything I know tells me that the only way to handle 5 & 6 is by non-locality, since what settles the level of correlations in 5 & 6 is the relative angle between 1 & 2.
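
To make that concrete, the standard quantum prediction for the spin-singlet state (a textbook fact, not something specific to CH) is that the probability of equal results is [itex]P(R_A = R_B) = \sin^2\!\left(\frac{\theta_A - \theta_B}{2}\right)[/itex], which indeed depends only on the relative angle between the two detector orientations.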
 
  • #87
stevendaryl said:
I haven't explained anything. I was trying to publicly work out what the CH description of EPR might look like. I'm not finished, because I'm stuck on figuring out which collections of histories are "consistent" in Griffiths' sense.

It's okay, your post is definitely progress compared to what we (including myself) have produced in this thread lately. :wink:
 
Last edited:
  • #88
kith said:
That's not what he says in your quote. He says if we want to construct a hidden variables theory, Bell tells us that we have to embrace either non-locality or backwards causation. His "solution" is simple: like Bohr, he doesn't want to construct a hidden variables theory in the first place. So what he rejects is EPR realism. Calling his theory realistic may be sensible from another point of view but this is certainly not EPR realism which is what Bell's theorem is about.

/edit: I also wrote a statement about locality here but actually, I think this should be discussed in an own thread.

But is it true that not having hidden variables is enough to make quantum mechanics local? Gisin http://arxiv.org/abs/0901.4255 (Eq 2) argues that the wave function itself can be the "hidden variable", but a nonlocal one. Laloe http://arxiv.org/abs/quant-ph/0209123 (p50) says it is still unsettled whether quantum mechanics is itself local.
 
  • #89
stevendaryl said:
The way I understand it is that we choose to use a family of histories in which macroscopic objects (e.g., measuring devices) have definite macroscopic states. But one could instead choose a different family of histories, where macroscopic objects are in macroscopic superpositions.
Let me check if I get you right: In order to describe measurements, we use a family with an observable whose eigenstates are product states of system+apparatus. It would be equally valid to use another family with an observable which is incompatible with the first one. Such an observable could have entangled states of system+apparatus as eigenstates. In the second family, a measurement wouldn't yield a definite state but a state with different probabilities for macroscopic superpositions. Do you agree with this so far?
 
  • #90
atyy said:
But is it true that not having hidden variables is enough to make quantum mechanics local? Gisin http://arxiv.org/abs/0901.4255 (Eq 2) argues that the wave function itself can be the "hidden variable", but a nonlocal one. Laloe http://arxiv.org/abs/quant-ph/0209123 (p50) says it is still unsettled whether quantum mechanics is itself local.
I don't really have an informed opinion on this. QM without simultaneous hidden variables still allows for different ontologies and I think it depends mostly on them whether we say it is local or not.
 
  • #91
kith said:
Let me check if I get you right: In order to describe measurements, we use a family with an observable whose eigenstates are product states of system+apparatus. It would be equally valid to use another family with an observable which is incompatible with the first one. Such an observable could have entangled states of system+apparatus as eigenstates. In the second family, a measurement wouldn't yield a definite state but a state with different probabilities for macroscopic superpositions. Do you agree with this so far?

I think that's correct. As I said in another post, reasoning about macroscopic objects using the apparatus of quantum mechanics is very difficult, because you can't really write down a wave function for the object. So there is a certain amount of handwaving involved, and it's never clear (to me, anyway) whether whatever conclusions we draw are artifacts of the handwaving or are real implications of QM.
 
  • #92
stevendaryl said:
The latter family would be pretty much useless for our purposes, but would be perfectly fine as far as the Rules of Quantum Mechanics (and the CH interpretation) are concerned. So CH makes it a matter of usefulness that we treat measuring devices specially--it's a choice on our part, rather than being forced on us by the physics.
Isn't the connection to physics that although we can easily predict what happens using the second family, we cannot build the corresponding measurement devices, because the fundamental interactions between the device and the system will decohere the macroscopic superposition eigenstates very quickly? Or put differently: we will always have the ambiguity of multiple histories from this family, because we never end up in eigenstates.
 
  • #93
kith said:
Isn't the connection to physics that although we can easily predict what happens using the second family, we cannot build the corresponding measurement devices, because the fundamental interactions between the device and the system will decohere the macroscopic superposition eigenstates very quickly? Or put differently: we will always have the ambiguity of multiple histories from this family, because we never end up in eigenstates.

I'm on shaky ground here, but that sounds right. And philosophically, I find it to be an improvement over Copenhagen, in that, as I said, the assumption that measuring devices always have definite macroscopic states is a practical, subjective choice, rather than there being something magical about the measurement process. In the end, you probably get the same quantitative predictions either way, so maybe it's a matter of taste.
 
  • #94
stevendaryl said:
I haven't explained anything. I was trying to publicly work out what the CH description of EPR might look like. I'm not finished, because I'm stuck on figuring out which collections of histories are "consistent" in Griffiths' sense.

Try chapter 12 here:
http://www.siue.edu/~evailat/

I can't vouch for this but it does seem to cover it.

I'm sure Griffiths must have published his own treatment of the problem, though.
 
  • #95
Jilang said:
It would appear that if you can live with negative probabilities, there should be no problem. This is the only concession to realism that is really necessary. Rather than meaningless, perhaps it would be better to think of the amplitude as being imaginary, so the probability is negative. Of course we measure that as zero, hence the violation of the inequality.
http://drchinese.com/David/Bell_Theorem_Negative_Probabilities.htm

I once worked out for myself a way to "explain" EPR results using negative probabilities. I may have already posted about it, but it's short enough that I can reproduce it here.

Let's simplify the problem of EPR by considering only 3 possible axes for spin measurements:

[itex]\hat{a}[/itex] = the x-direction
[itex]\hat{b}[/itex] = 120 degrees counterclockwise from the x-direction, in the x-y plane.
[itex]\hat{c}[/itex] = 120 degrees clockwise from the x-direction, in the x-y plane.

We have two experimenters, Alice and Bob. Repeatedly we generate a twin pair, and have Alice measure the spin of one along one of the axes, and have Bob measure the spin of the other along one of the axes.

Let [itex]i[/itex] range over [itex]\{ \hat{a}, \hat{b}, \hat{c} \}[/itex].
Let [itex]X[/itex] range over { Alice, Bob }
Let [itex]P_X(i)[/itex] be the probability that experimenter [itex]X[/itex] measures spin-up along direction [itex]i[/itex].
Let [itex]P(i, j)[/itex] be the probability that Alice measures spin-up along axis [itex]i[/itex] and Bob measures spin-up along axis [itex]j[/itex]. The predictions of QM are:

  1. [itex]P_X(i) = 1/2[/itex]
  2. [itex]P(i,j) = 3/8[/itex] if [itex]i \neq j[/itex]
  3. [itex]P(i, i) = 0[/itex]

One approach for a hidden-variables explanation would be this:
  • Associated with each twin-pair is a hidden variable [itex]\lambda[/itex] which can take on 8 possible values: [itex]\lambda_{\{\}}, \lambda_{\{a\}}, \lambda_{\{b\}}, \lambda_{\{c\}}, \lambda_{\{a, b\}}, \lambda_{\{a, c\}}, \lambda_{\{b, c\}}, \lambda_{\{a, b, c\}}[/itex]
  • The probability of getting [itex]\lambda_x[/itex] is [itex]p_x[/itex] (where [itex]x[/itex] ranges over all subsets of [itex]\{ a, b, c \}[/itex].)
  • If the variable has value [itex]\lambda_x[/itex], then Alice will get spin-up along any of the directions in the set [itex]x[/itex], and will get spin-down along any other direction.
  • If the variable has value [itex]\lambda_x[/itex], then Bob will get spin-down along any of the directions in the set [itex]x[/itex], and will get spin-up along any other direction (the opposite of Alice).

So if you assume symmetry among the three axes, then it's easy to work out what the probabilities must be to reproduce the predictions of QM. They turn out to be:

[itex]p_{\{\}} = p_{\{a, b, c\}} = -1/16[/itex]
[itex]p_{\{a\}} = p_{\{b\}} = p_{\{c\}} = p_{\{a, b\}} = p_{\{a, c\}} = p_{\{b, c\}} = 3/16[/itex]

So the probability that Alice gets spin-up along direction [itex]\hat{a}[/itex] is:

[itex]p_{\{a\}} + p_{\{a, b\}} + p_{\{a, c\}} + p_{\{a, b, c\}} = 3/16 + 3/16 +3/16 - 1/16 = 1/2[/itex]

The probability that Alice gets spin-up along direction [itex]\hat{a}[/itex] and Bob gets spin-up along direction [itex]\hat{b}[/itex] is:

[itex]p_{\{a\}} + p_{\{a, c\}} = 3/16 + 3/16 = 3/8[/itex]

So if we knew what a negative probability meant, then this would be a local hidden-variables model that reproduces the EPR results.
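
A quick sanity check of that assignment (my own sketch in Python, simply enumerating the eight subsets) confirms that it reproduces all three QM predictions:

[code]
from itertools import combinations

axes = ('a', 'b', 'c')
subsets = [frozenset(s) for r in range(4) for s in combinations(axes, r)]

# p_x = -1/16 for the empty and full sets, 3/16 for the six others
p = {x: (-1/16 if len(x) in (0, 3) else 3/16) for x in subsets}

def P_alice_up(i):
    # Alice gets spin-up along i exactly when i is in the subset x
    return sum(p[x] for x in subsets if i in x)

def P_joint_up(i, j):
    # Alice up along i (i in x); Bob is the opposite, so Bob up
    # along j means j is NOT in x
    return sum(p[x] for x in subsets if i in x and j not in x)

assert abs(sum(p.values()) - 1) < 1e-12        # the p's sum to 1
assert abs(P_alice_up('a') - 1/2) < 1e-12      # prediction 1: P_X(i) = 1/2
assert abs(P_joint_up('a', 'b') - 3/8) < 1e-12 # prediction 2: P(i,j) = 3/8
assert P_joint_up('a', 'a') == 0               # prediction 3: P(i,i) = 0
print("All QM predictions reproduced, at the cost of two negative p's.")
[/code]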
 
  • #96
I'm not sure this is related to the negative probabilities above, but I thought I'd mention it. There is a standard object in quantum mechanics, called the Wigner function, which is considered the closest thing to a joint probability distribution over canonical variables like position and momentum. As with a classical probability distribution, integrating over momentum gives the position distribution, and integrating over position gives the momentum distribution. For a free particle or harmonic oscillator, the Wigner function evolves like a classical probability distribution. In general the Wigner function has negative parts, which prevents it from being interpreted as a classical probability distribution, but when it is entirely positive, such as for a Gaussian wavefunction, I believe it is ok to assign trajectories to quantum particles.
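
For reference (a textbook definition, not specific to this discussion), for a pure state [itex]\psi[/itex] the Wigner function is [itex]W(x,p) = \frac{1}{\pi\hbar}\int_{-\infty}^{\infty} \psi^*(x+y)\,\psi(x-y)\,e^{2ipy/\hbar}\,dy[/itex], and its marginals [itex]\int W\,dp = |\psi(x)|^2[/itex] and [itex]\int W\,dx = |\tilde{\psi}(p)|^2[/itex] are the usual quantum probability densities.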
 
  • #97
kith said:
I don't really have an informed opinion on this. QM without simultaneous hidden variables still allows for different ontologies and I think it depends mostly on them whether we say it is local or not.

Yes. For example, many-worlds evades the Bell theorem because the Bell theorem assumes that each measurement has only one outcome, but in many-worlds all outcomes appear. Incidentally, Wallace seems to say the state vector in many-worlds is nonlocal. At any rate, it seems clear in many-worlds why the Bell theorem is evaded. The question is whether in CH the requirement of consistency is enough to evade the Bell theorem, or whether something more is required. What exactly is the means by which CH evades the Bell theorem, if it does?
 
  • #99
http://arxiv.org/abs/1201.0255
Quantum Counterfactuals and Locality
Robert B. Griffiths
Found. Phys. 42 (2012) pp. 674-684

"Stapp asserts that the validity of a certain counterfactual statement, SR in Sec. 4 below, referring to the properties of a particular particle, depends upon the choice of which measurement is made on a different particle at a spatially distant location. ... It will be argued that, on the contrary, the possibility of deriving the counterfactual SR depends on the point of view or perspective that is adopted—specifically on the framework as that term is employed in CQT—when analyzing the quantum system, and this dependence makes it impossible to construct a sound argument for nonlocality, contrary to Stapp’s claim."

"Our major disagreement is over the conclusions which can be drawn from these analyses. Stapp believes that because he has identified a framework which properly corresponds to his earlier argument for nonlocal influences, and in this framework the ability to deduce SR is linked to which measurement is carried out on particle a, this demonstrates a nonlocal influence on particle b. I disagree, because there exist alternative frameworks in which there is no such link between measurement choices on a and the derivation of SR for b."

So CH is nonlocal in some frameworks?
 
  • #100
http://arxiv.org/abs/0908.2914
Quantum Locality
Robert B. Griffiths
(Submitted on 20 Aug 2009 (v1), last revised 13 Dec 2010 (this version, v2))
Foundations of Physics, Vol. 41, pp. 705-733 (2011)

"It is argued that while quantum mechanics contains nonlocal or entangled states, the instantaneous or nonlocal influences sometimes thought to be present due to violations of Bell inequalities in fact arise from mistaken attempts to apply classical concepts and introduce probabilities in a manner inconsistent with the Hilbert space structure of standard quantum mechanics. Instead, Einstein locality is a valid quantum principle: objective properties of individual quantum systems do not change when something is done to another noninteracting system. There is no reason to suspect any conflict between quantum theory and special relativity."

"Many errors contain a grain of truth, and this is true of the mysterious nonlocal quantum influences. Quantum mechanics does deal with states which are nonlocal in a way that lacks any precise classical counterpart."

"The analysis in this paper implies that claims that quantum theory violates “local realism” are misleading."

!
 
Last edited:
  • #101
craigi said:
I'm sure Griffiths must have published his own treatment of the problem, though.

He has:
"Correlations in separated quantum systems: a consistent history analysis of the EPR problem," Am. J. Phys. 55 (1987).

It's also in his book, Consistent Quantum Theory, which I have a copy of; see Chapters 23 and 24.

It not only explains his interpretation, but is also a good resource on the interpretive issues of QM in general.

Thanks
Bill
 
  • #102
bhobba said:
He has:
"Correlations in separated quantum systems: a consistent history analysis of the EPR problem," Am. J. Phys. 55 (1987).

It's also in his book, Consistent Quantum Theory, which I have a copy of; see Chapters 23 and 24.

It not only explains his interpretation, but is also a good resource on the interpretive issues of QM in general.

Thanks
Bill

Given that consistent histories can be used to describe how a particle interacts with a measuring apparatus, and that the randomness of the outcomes A1 and A2 can arise during the measurement process (with no joint probability distribution), does Griffiths anywhere have a local, non-realistic (non-counterfactual-definite) explanation/model for violations of Bell inequalities?
 
Last edited:
  • #103
morrobay said:
Does Griffiths anywhere have a local, non-realistic (non-counterfactual-definite) explanation/model for violations of Bell inequalities?

He only discusses his interpretation.

As I have said he believes his interpretation is realistic, but if it really is that is an open issue.

I like CH, but it's not my favorite, because I find it a bit more complex than I think necessary, with frameworks and whatnot. I simply assume that after decoherence the improper mixed state is a proper one - easy as far as I am concerned, without this baggage of frameworks, histories, blah, blah, blah.

I am the wrong person to ask whether an interpretation is non-counterfactual etc. Terms like that, to me, are philosophical verbosity. I can't even remember without looking it up exactly what it means.

My view is much simpler. QM is basically the most reasonable general probability model for physical systems that allows continuous transformations, or equivalently entanglement. It's entanglement with the environment and measurement apparatus that leads to observations - properties exist because of that, and systems don't actually have properties apart from that. So, just prior to observation, outcomes are actualized via decoherence - but before that - blah. Is that counterfactually definite - maybe, maybe not - I will let others judge. As I said, I am not into that sort of thing.

Thanks
Bill
 
Last edited:
  • #104
atyy said:
"The analysis in this paper implies that claims that quantum theory violates “local realism” are misleading."

!

wow... just wow... if this is not refuting QM & Bell's theorem, then what is??

If he can't provide anything more than his own words of unverified ideas, without any form of mathematical/logical formulation, it looks like my first claim about a "preposterous word-salad" is justified indeed...

He's up against a whole world of professional and rigorous experiments, working flawlessly every time... what on Earth will this man say when Anton Zeilinger and Alain Aspect receive the Nobel Prize in Physics – the whole world is wrong and he is right, even if he can't prove it?

Gosh
 
  • #105
stevendaryl said:
So if we knew what a negative probability meant, then this would be a local hidden-variables model that reproduces the EPR results.

I think that negative probabilities mean that it is simply impossible to mimic this feature by any classical means. LHV requires 'something' to be 'there' all the time, definitely. If the probability of 'something' being 'there' is negative, it means it's not 'there', i.e. it's not definite.

DrC has a useful page that effectively proves the mathematical impossibility of LHV – it just doesn't work! (i.e. unless 'someone' wants to refute mathematics as well...)

http://www.drchinese.com/David/Bell_Theorem_Easy_Math.htm
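
One way to see the impossibility concretely, using stevendaryl's three axes from post #95 (my own back-of-the-envelope argument, so check it): for any definite assignment [itex]\lambda_x[/itex], at most one of the three events "Alice up along [itex]\hat{a}[/itex] and Bob up along [itex]\hat{b}[/itex]", "Alice up along [itex]\hat{b}[/itex] and Bob up along [itex]\hat{c}[/itex]", "Alice up along [itex]\hat{c}[/itex] and Bob up along [itex]\hat{a}[/itex]" can occur. So any mixture with nonnegative probabilities gives [itex]P(a,b) + P(b,c) + P(c,a) \leq 1[/itex], while QM demands [itex]3 \times 3/8 = 9/8 > 1[/itex]. Hence no nonnegative assignment of the [itex]p_x[/itex] can work.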
 
