Local realism ruled out? (was: Photon entanglement and )

In summary, the conversation discussed the possibility of starting a new thread on a physics forum to discuss evidence for a specific perspective. The topic of the thread was related to the Bell theorem and its potential flaws on both theoretical and experimental levels. The original poster mentioned that their previous posts on this topic had been criticized, but their factual basis had not been challenged until recently. They also noted that the measurement problem in quantum mechanics is a well-known issue and cited a paper that they believed supports the idea that local realism has not been ruled out by existing experiments. The other participant in the conversation disagreed and stated that the paper did not rule out local realism and provided additional quotes from experts in the field. Ultimately, the conversation concluded with both parties holding differing views.
  • #246
ThomasT said:
If an experiment is designed to produce entanglement, then that entails (via the execution of that design) a statistical dependency between the data sets, A and B.

No, by making such a statement, you are assuming that the experiment will be successful, and you are assuming that the QM definition of entanglement is correct. That is precisely what these experiments were designed to measure. If they had failed to produce entanglement, or if the QM predictions had been incorrect, then that would have been reflected in the experimental results (i.e. no statistical dependence would have been observed between A and B).

ThomasT said:
Coincidence counting is about matching the separate data streams wrt some criterion or criteria, and then counting the coincidences.

Yes ... that is how the statistical dependence or independence of the A and B sets is determined .. aren't we saying the same thing here?

ThomasT said:
The correlation is between the angular difference |a-b| (or Theta, where a and b are the settings of the analyzers at A and B), and the rate of coincidental detection.

Ok, I agree with that too ...

ThomasT said:
To [STRIKE]get[/STRIKE] observe the QM-predicted cos²Theta angular dependency, the experimental design has to involve and the execution has to [STRIKE]produce[/STRIKE] reveal a statistical dependency between the separately accumulated data sets.

If you allow the change I made above, then I agree .. to say "get" and "produce" in the context above implies that the data is somehow being "cooked", and I don't agree with that. The QM prediction is either right or wrong, the experiment tests it. The experiment can either succeed or fail .. if it fails (i.e. no violation is observed), then EITHER it was a poor experiment OR it was a good experiment and QM is wrong. If the experiment succeeds .. then it either supports the QM prediction, at least up to the ability of the experiment to test it, or there is some flaw in the experiment which leaves the result ambiguous (i.e. these loopholes we have been discussing elsewhere in the thread).

My point here is that the possibility of failure is inherent in these experimental designs, so in my view they are in no way biasing the set of possible results by their construction, as you seem to be saying. I still don't understand why you are making that claim.

ThomasT said:
There is no correlation between the supposedly entangled photons -- except for two settings, and at these settings, Theta = 0 and Theta = pi/2, an LHV formulation can also show perfect correlation and anticorrelation, respectively. For all other values of Theta there's absolutely no correlation between A and B.

Ok, this just seems flat wrong. What do you mean there is "absolutely no correlation between A and B"? Do you think the results of the experiments are wrong? Do you think the predictions of Q.M. are wrong? Because they definitely measure/predict correlations at all values for the relative angle between the two detectors, with the possible exception of 45 degrees, where the results should appear random.

In any case, if correlations were not possible at all measurement angles, then there would be no way to formulate the Bell inequalities for these systems. Perhaps we are using different definitions of the term "correlation"?
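To be concrete about the QM predictions I have in mind, here is a minimal sketch in Python (assuming the usual (|HH> + |VV>)/sqrt(2) polarization state and ideal analyzers; the function name is mine, for illustration only):

[code]
import math

def qm_stats(theta_deg):
    # Textbook QM predictions for polarization-entangled pairs in the
    # (|HH> + |VV>)/sqrt(2) state, with the analyzers offset by theta.
    t = math.radians(theta_deg)
    p_pp = 0.5 * math.cos(t) ** 2     # both photons transmitted
    p_pm = 0.5 * math.sin(t) ** 2     # A transmitted, B absorbed
    p_cond = p_pp / (p_pp + p_pm)     # P(B=+ | A=+) = cos^2(theta)
    return p_pp, p_cond

for theta in (0, 22.5, 45, 67.5, 90):
    p_joint, p_cond = qm_stats(theta)
    print(f"theta={theta:5.1f}  P(++)={p_joint:.3f}  P(B=+|A=+)={p_cond:.3f}")
[/code]

On these numbers the outcome at B given a detection at A is deterministic at 0 and 90 degrees, completely random only at 45 degrees, and biased (i.e. correlated) everywhere in between.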

ThomasT said:
I'll continue with this reply when I get time.

I look forward to reading it ...
 
  • #247
zonde said:
In Mermin’s model the 5/9 comes from a match rate of 1 for the same settings and 2/6 for different settings. With the respective probabilities (1/3 for the same settings, 2/3 for different settings) you have:
1 × 1/3 + 2/6 × 2/3 = 5/9
This is Mermin’s model.


Mermin's model proposes a value that exceeds 0.33.
Marmet's sample, with every second R discarded, is more complicated.
For the same settings: match - 1/2, blank - 1/2
For different settings: match - 1/12, mismatch - 4/12, blank - 7/12
So the probability is actually 0.2 ( 1/(1+4) ), i.e. below the required value.
In that way Marmet's model does not reproduce all the required probabilities, but in any case Mermin's inequality is shown not to hold under realistic experimental conditions.

Let's just talk about the different settings, which means a correlation rate of .33 or higher for the full universe. Marmet's model actually produces a HIGHER - not lower - value for the biased sample. I believe he simply made a mistake and got confused. At any rate, his model is NOT symmetric (as you noted in an earlier post) and cannot be made so. It does not reproduce Malus AND it would be easily detectable via experiment.
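For anyone who wants to check these fractions, a brute-force enumeration of Mermin's model is short. This is a sketch in Python, assuming the three instruction sets and the nine setting pairs are all equally likely:

[code]
from itertools import product

sets = ["RRG", "RGR", "GRR"]   # Mermin's instruction sets with two of one color
same = diff = same_match = diff_match = 0
for s, a, b in product(sets, range(3), range(3)):
    match = (s[a] == s[b])
    if a == b:
        same += 1
        same_match += match
    else:
        diff += 1
        diff_match += match

print(f"same settings: {same_match}/{same}")                 # 9/9  -> rate 1
print(f"diff settings: {diff_match}/{diff}")                 # 6/18 -> rate 2/6
print(f"overall: {same_match + diff_match}/{same + diff}")   # 15/27 = 5/9
[/code]

This reproduces zonde's 1 × 1/3 + 2/6 × 2/3 = 5/9 for the full (unsampled) universe.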
 
  • #248
SpectraCat said:
Right, and my problem (well, one of them) with the Marmet paper is that the phrase I quoted previously was being used to describe the Aspect experiments .. Marmet was claiming that coincidence measurements would be observed 5/9 of the time there (which is wrong), and only using "the denominator is an odd integer" to justify his reasoning.
Strange, I understand this sentence differently:
"However, one finds then that the relevant feature b is not satisfied now, because statistically [using any of Mermin's instruction sets except RRR and GGG], lights will flash the same color 5/9 (0.5555) of the time, instead of 0.50 that should be obtained [as predicted by QM and observed in experiment]."
I can agree that his argument about "the denominator is an odd integer" is sloppy. But then this is much better explained in the brief description of Mermin's article that I gave in my first link.


SpectraCat said:
I don't agree that discarding every second R (or G) is any more realistic than assuming 100% efficiency. It implies a very strict ordering of events that has no basis in reality as far as I can tell. If you want to say that the detector misses half of the time when it is supposed to blink red, I can live with that, but the ordering should be random in my view.
Ordering can be random, that is no problem. It can statistically miss half of the R detections.

SpectraCat said:
I also don't understand how you got matching only half the time under your setup when the detector settings are the same. You seem to be neglecting the times when both detectors blink green, which are not attenuated in your model, and so the matching rate should be greater than 0.5. A similar comment also pertains to your model when the settings are different.
Let me explain.
The proposed model considers only the instruction sets RRG, RGR, GRR.
So for the same settings, before considering misses, we have 2/3 RR, 1/3 GG.
Now as we consider misses we have 1/6 R-, 1/6 -R, 1/6 --, 1/6 RR and 1/3 GG,
and so there are 1/6 + 1/3 = 1/2 successful joint detections and 3/6 = 1/2 misses.
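To make these fractions easy to verify, here is an exact-arithmetic check in Python (a sketch, assuming a G always registers and an R registers with probability 1/2, independently on each side):

[code]
from fractions import Fraction
from itertools import product

sets = ["RRG", "RGR", "GRR"]

def p_detect(color):
    # G always registers; an R registers with probability 1/2
    return Fraction(1) if color == "G" else Fraction(1, 2)

def tally(same_settings):
    pairs = [(a, b) for a in range(3) for b in range(3)
             if (a == b) == same_settings]
    w = Fraction(1, len(sets) * len(pairs))      # uniform weight per case
    match = mismatch = blank = Fraction(0)
    for s, (a, b) in product(sets, pairs):
        both = p_detect(s[a]) * p_detect(s[b])   # both sides register
        blank += w * (1 - both)
        if s[a] == s[b]:
            match += w * both
        else:
            mismatch += w * both
    return match, mismatch, blank

print("same settings:", tally(True))    # match 1/2, mismatch 0, blank 1/2
print("diff settings:", tally(False))   # match 1/12, mismatch 4/12, blank 7/12
m, mm, _ = tally(False)
print("match rate among detected pairs:", m / (m + mm))   # 1/5 = 0.2
[/code]

So the 1/2 for the same settings and the 0.2 for different settings both come out of the same miss rule.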

SpectraCat said:
Finally, if we can assume random ordering of the detector "failures", then we are free to throw out all of the times when one of the lights doesn't blink. But of course now you are going to tell me that is just the free sampling assumption .. and it is. My point is that without a realistic physical model to understand why there should be (or even could be) a bias for the "missed" detection events, it seems most reasonable to assume they are random.
The realistic model that says why there should be unfair sampling is the Bell inequalities themselves.
That's how I see this. :smile:

SpectraCat said:
Another point is that the randomness of missed detections could be tested in principle, by deliberately blocking one of the beams in a random fashion. If there is a bias is the "missed" events due to detector design, then there should be a measurable difference between sets of results where the beam is never blocked, and those where it is blocked randomly.
That can be modeled mathematically to see if there should be any difference.
Another, somewhat harder, part is that you would have to work out the QM prediction for that modified setup.
 
  • #249
zonde said:
The realistic model that says why there should be unfair sampling is the Bell inequalities themselves.
That's how I see this. :smile:

If the model hypothesis is consistent, then it can be accepted. The problem is that NONE of the model hypotheses are ever consistent. The physical explanation CANNOT be true. That is what I discovered about the De Raedt simulation, and so now I know how to apply the same test to any LR model hypothesis.

Even ignoring this issue, the model is wrong. If you look at a dataset that is not hand picked, you will see this.
 
  • #250
DrChinese said:
Let's just talk about the different settings, which means a correlation rate of .33 or higher for the full universe. Marmet's model actually produces a HIGHER - not lower - value for the biased sample. I believe he simply made a mistake and got confused. At any rate, his model is NOT symmetric (as you noted in an earlier post) and cannot be made so. It does not reproduce Malus AND it would be easily detectable via experiment.
Where is my mistake in this then :confused::
"For different settings: match - 1/12, mismatch - 4/12, blank - 7/12
So the probability is actually 0.2 ( 1/(1+4) ) i.e. below required value."

But yes, this model is not very useful apart from revealing the shortcomings of Mermin's argument.
 
  • #251
zonde said:
1. But yes, this model is not very useful apart from revealing the shortcomings of Mermin's argument.

2. Where is my mistake in this then :confused::
"For different settings: match - 1/12, mismatch - 4/12, blank - 7/12
So the probability is actually 0.2 ( 1/(1+4) ) i.e. below required value."

1. Mermin's argument is essentially identical to Bell's. There is definitely nothing wrong with it.

2. I doubt we are applying the rules the same. Here is my sample run:

a. Must use RRG, RGR or GRR instruction sets.
b. For any observer (Alice or Bob), strike every other R.
c. Any run with at least one struck R is ignored.

Here is my run (and I always have Alice looking at the first, Bob at the second, no one at the third - the third is there only to prove realism is in place):

01 RRG Match
02 GRR Bob strikes
03 RRG Alice strikes
04 RGR
05 RRG Alice and Bob strike
06 GRR
07 GRR Bob strikes
08 RRG Match
09 RGR Alice strikes
10 GRR Bob strikes
11 RGR
12 RGR Alice strikes

2 matches, 3 nonmatches, so the 40% match rate is greater than 33%. You can also pick a sample that is as low as 20%, but in general I don't think it can be 25%.

Of course, the issue is that R is suppressed while G is not, a condition which does not occur in real life, so this whole exercise is moot.
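For completeness, here is a short Python script that applies rules a-c to the twelve runs above (a sketch; the alternating strike counters are my reading of "strike every other R"):

[code]
runs = ["RRG", "GRR", "RRG", "RGR", "RRG", "GRR",
        "GRR", "RRG", "RGR", "GRR", "RGR", "RGR"]

r_count = {"alice": 0, "bob": 0}       # per-observer R counters

def survives(who, color):
    # Every other R is struck; a G always survives.
    if color == "R":
        r_count[who] += 1
        return r_count[who] % 2 == 1   # 1st, 3rd, 5th ... R is kept
    return True

matches = nonmatches = 0
for s in runs:
    a_ok = survives("alice", s[0])     # Alice reads the first slot
    b_ok = survives("bob", s[1])       # Bob reads the second slot
    if a_ok and b_ok:                  # any run with a struck R is ignored
        if s[0] == s[1]:
            matches += 1
        else:
            nonmatches += 1

print(matches, "matches,", nonmatches, "nonmatches")   # 2 matches, 3 nonmatches
[/code]

This reproduces the 2 matches and 3 nonmatches (a 40% match rate) tallied above.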
 
  • #252


Demystifier said:
So basically, you believe that QM is wrong. Am I right?

Demystifier,

Let me first repeat my rationale and then formulate my conclusion and a short answer to your question, because without this rationale the answer may be misleading.

Some time ago I asked you about the status of the projection postulate in the de Broglie - Bohm interpretation, namely, if it is a precise law or an approximation. You answered:

Demystifier said:
Yes, it is an approximation. However, due to decoherence, this is an extremely good approximation. Essentially, this approximation is as good as the second law of thermodynamics is a good approximation.

My reasoning is as follows.

1) Standard quantum mechanics (SQM) includes both unitary evolution (UE) and the projection postulate (PP).

2) UE and PP directly contradict each other, as UE cannot provide irreversibility or destroy superposition, while PP does just that.

3) Therefore, I cannot accept both UE and PP and believe that one of them (namely, PP) is, strictly speaking, wrong.

4) Therefore, I do believe that SQM is, strictly speaking, wrong.

Just two concluding comments.

First, it looks like what I am saying is consistent with what you are saying.

Second, whatever Frame Dragger says, I have not tried to hide my views and stated the same things from the very beginning of this thread.
 
  • #253
zonde said:
Where is my mistake in this then :confused::
"For different settings: match - 1/12, mismatch - 4/12, blank - 7/12
So the probability is actually 0.2 ( 1/(1+4) ) i.e. below required value."

After reconsidering, I see where you get the 20% from, and I think I agree with that particular item after all. Please note that it is impossible, in this model, to get a GG match (which would be noticeable).

I guess your idea would be to make this symmetric somehow; I am curious as to whether that is possible. Note that the hypothesis only works if you suppress the detection of matches. I think you will see pretty quickly that if G cases are also considered for striking, the stats go back to normal, as there is no longer any preference for striking matches.
 
  • #254


akhmeteli said:
Demystifier,

Let me first repeat my rationale and then formulate my conclusion... [not a shock]...

[leading to the inevitable]...Therefore, I do believe that SQM is, strictly speaking, wrong.

You needed to reiterate your case AGAIN simply to say that, no, for this particular reason you believe that (S)QM doesn't lend itself to a reasonable physical interpretation, or that it's just wrong. Why am I not surprised.

akhmeteli said:
Second, whatever Frame Dragger says, I have not tried to hide my views and stated the same things from the very beginning of this thread.

Whether you tried or not, the end result is that it is now many pages in that you've been asked this question point blank. That would imply your views were not clear earlier, unless you feel that your last sobriquet was really a brief restatement of your earlier posts. The fantastic thing about the internet is that people can just read the text and draw their own conclusions.
 
  • #255
ThomasT said:
If an experiment is designed to produce entanglement, then that entails (via the execution of that design) a statistical dependency between the data sets, A and B.

SpectraCat said:
No, by making such a statement, you are assuming that the experiment will be successful, and you are assuming that the QM definition of entanglement is correct.
I'm assuming the experiment will be executed according to its design. If it is, then (from previous experiments) I'm assuming that the QM predictions will be accurate, and, in this sense, the QM definition of entanglement is correct. But it's not a deep, realistic definition.


SpectraCat said:
That is precisely what these experiments were designed to measure. If they had failed to produce entanglement, or if the QM predictions had been incorrect, then that would have been reflected in the experimental results (i.e. no statistical dependence would have been observed between A and B).
The statistical dependence between A and B is a result of the data matching process (wrt certain criteria, e.g. time of detection). So, even if the QM-predicted correlation between Theta and coincidence rate isn't produced, there's still a statistical dependence between A and B.

ThomasT said:
Coincidence counting is about matching the separate data streams wrt some criterion or criteria, and then counting the coincidences.

SpectraCat said:
Yes ... that is how the statistical dependence or independence of the A and B sets is determined .. aren't we saying the same thing here?
I thought so.

SpectraCat said:
The QM prediction is either right or wrong, the experiment tests it. The experiment can either succeed or fail .. if it fails (i.e. no violation is observed), then EITHER it was a poor experiment OR it was a good experiment and QM is wrong. If the experiment succeeds .. then it either supports the QM prediction, at least up to the ability of the experiment to test it, or there is some flaw in the experiment which leaves the result ambiguous (i.e. these loopholes we have been discussing elsewhere in the thread).
Not quite. If the experiment matches QM predictions, there might still be some "flaw in the experiment which leaves the result ambiguous (i.e. these loopholes we have been discussing elsewhere in the thread)."

The point is that even if there are some remaining loopholes, this doesn't matter wrt the consideration of locality/nonlocality in Nature -- because that's not what Bell tests test.
 
Last edited:
  • #256
I just wanted to point out a very interesting post from another, unrelated thread that provides an alternate mathematical explanation/justification for the observed features of entanglement, as well as a proof of the Bell theorem.

https://www.physicsforums.com/showpost.php?p=2594441&postcount=34

I reference this with the caveat that I don't really understand the underlying details yet (I'm not completely clear what a C*-algebra even is), but it certainly seems relevant, and perhaps others here would like to comment on it. I will try to read up on the required mathematical background in the meantime.
 
  • #257
SpectraCat said:
To get (observe) the QM-predicted cos²Theta angular dependency, the experimental design has to involve and the execution has to produce (reveal) a statistical dependency between the separately accumulated data sets.

If you allow the change I made above, then I agree .. to say "get" and "produce" in the context above implies that the data is somehow being "cooked", and I don't agree with that.
The data is "cooked" via the data matching process.

The data sets at A and B can't be matched up just any way. The matching proceeds according to certain assumptions. For example, in the '82 Aspect et al. experiment, the design called for pairing detection attributes wrt detection time intervals. The idea was to pair detection attributes associated with optical disturbances emitted by the same atom. This requirement is based on the assumption that the underlying entanglement relationship (responsible for the observed angular dependency between Theta and coincidence rate) is produced via the emission process.

Bell, in his formulation, also assumes this. And this assumption seems to be supported by the experimental results.

However, Bell's formulation also assumes statistical independence between A and B, which is contradicted by the data matching requirement. And this can account for the violation of inequalities based on Bell's formulation.

SpectraCat said:
My point here is that the possibility of failure is inherent in these experimental designs, so in my view they are in no way biasing the set of possible results by their construction, as you seem to be saying. I still don't understand why you are making that claim.
If you match the data sets according to some criterion (or criteria) then this limits (biases) the set of possible results. If you do it "on the fly" via coincidence circuits that are gated open upon a detection at one end, then the sample space at the other end is thereby immediately altered.

ThomasT said:
There is no correlation between the supposedly entangled photons -- except for two settings, and at these settings, Theta = 0 and Theta = pi/2, an LHV formulation can also show perfect correlation and anticorrelation, respectively. For all other values of Theta there's absolutely no correlation between A and B.

SpectraCat said:
Ok, this just seems flat wrong. What do you mean there is "absolutely no correlation between A and B"?
For all settings other than Theta = 0 and Theta = pi/2, the individual detection attribute at one end, given the detection attribute at the other end, is random, i.e. uncorrelated.

SpectraCat said:
Do you think the results of the experiments are wrong? Do you think the predictions of Q.M. are wrong? Because they definitely measure/predict correlations at all values for the relative angle between the two detectors, with the possible exception of 45 degrees, where the results should appear random.
That correlation is between Theta and coincidence rate. There's only a correlation between A and B for two values of Theta.
 
  • #258


akhmeteli said:
4) Therefore, I do believe that SQM is, strictly speaking, wrong.
The question is: Is there some CONCRETE non-standard variant of QM for which you believe that it might be correct? Namely, all non-standard variants of QM I know predict non-local correlations of the EPR type, and you seem not to believe in any such variants.
 
  • #259


akhmeteli said:
My reasoning is as follows.

1) Standard quantum mechanics (SQM) includes both unitary evolution (UE) and the projection postulate (PP).

2) UE and PP directly contradict each other, as UE cannot provide irreversibility or destroy superposition, while PP does just that.

3) Therefore, I cannot accept both UE and PP and believe that one of them (namely, PP) is, strictly speaking, wrong.

4) Therefore, I do believe that SQM is, strictly speaking, wrong.

What about non-collapse interpretations?
 
  • #260


akhmeteli said:
2) UE and PP directly contradict each other, as UE cannot provide irreversibility or destroy superposition, while PP does just that.
Why not think of them as tools that complement each other?

akhmeteli said:
3) Therefore, I cannot accept both UE and PP and believe that one of them (namely, PP) is, strictly speaking, wrong.
And yet using it in conjunction with UE gives accurate predictions.

akhmeteli said:
4) Therefore, I do believe that SQM is, strictly speaking, wrong.
Wrong wrt what?

If the goal is a realistic theory, then SQM is incomplete.
 
  • #261


ThomasT said:
If the goal is a realistic theory, then SQM is incomplete.

That's exactly what EPR said! :smile:
 
  • #262
ThomasT said:
The problem is that A and B are not independent due to the data matching process (a trackable local process).

SpectraCat said:
Ok, I don't get this last sentence at all. The data matching process (I assume you mean coincidence counting here) does not in any way imply statistical dependence between A and B as far as I can see.
No, I don't mean the coincidence counting. The data matching process includes the criterion wrt which the data are matched (eg. time of detection via the assumption that detection attributes associated with the same time interval were, say, emitted by the same atom and thereby entangled at emission due to conservation of angular momentum).

SpectraCat said:
One could run the same experiments with separate, randomly-polarized sources, and there would be no observed correlation between the measurement sets at A and B, so the coincidence counting would conclude that the two sets are statistically independent, right?
Yes. In this case there's no design to relate the data sets (ie. the experiment is designed to produce two independent data sets) -- and, presumably, they could be matched according to any criterion and P(A,B) would never deviate from P(A) P(B).

ThomasT said:
No LHV formulation of an entangled state can possibly conform to Bell's ansatz.

SpectraCat said:
I am not sure how to parse this, and I definitely don't see how it follows from the previous arguments (even if I agreed those were correct). I think it would be useful if you could re-state it in the context of the Mermin gedanken experiment. I would also like a definition or at least an example of an "LHV formulation of an entangled state".
Bell's generic LHV form (for the expectation value of joint detection) is

P(a,b) = ∫ dλ ρ(λ) A(a,λ) B(b,λ).

Bell locality can be written

P(A,B) = P(A)P(B).

Statistical independence is defined as

P(A,B) = P(A)P(B).


Statistical dependence is designed into, and independence is structured out of, Bell tests -- presumably ... if they're executed correctly.

So, any Bell local hidden variable formulation is, prima facie, in direct contradiction to an essential design element of any Bell test.

SpectraCat said:
The only assumption made about the data sets at A & B involves the travel times of the photons, in that only a certain subset of detection events at A and B satisfy the criterion of coincidence.
That's based on the assumption that the relationship (the entanglement) between the separately analyzed(filtered) disturbances is produced at emission (or via some other local common cause), prior to filtration.

SpectraCat said:
The experimenters are always quite careful about this when defining what "coincident detection" means in the context of their experiments.
Yes, and, eg. wrt Aspect '82, the experiment was designed to pair detection attributes associated with optical disturbances emitted in opposite directions by the same atom during the same transition interval.

SpectraCat said:
Essentially, what you seem to be saying is that the entangled photons could have received "instruction sets" controlling their measurement results, and this is *exactly* what the Bell theorem and the Mermin gedanken show is impossible.
I don't know about instruction sets. I sense, intuitively :smile:, that that way of looking at Bell's theorem might tend to obfuscate rather than clarify its meaning.

The experiments themselves are about presumably related optical emissions, and crossed polarizers, and spacelike separated joint detection events, etc. -- the underlying physics of which is still a mystery -- not instruction sets.

An apparent disparity between Bell's LHV form and experimental design has been exposited, and imho Bell's theorem doesn't mean what it's commonly taken to mean for the rather simple reason that I've presented.
 
  • #263


Demystifier said:
The question is: Is there some CONCRETE non-standard variant of QM for which you believe that it might be correct? Namely, all non-standard variants of QM I know predict non-local correlations of the EPR type, and you seem to not believe in any of such variants.

Certainly, your question is justified, but, I am afraid, a direct answer would be misleading again (I guess, Frame Dragger will have a field day:-) )

You see, I would like to emphasize first that I believe that UE of standard quantum mechanics is correct, and it describes pretty much everything you need. This is basically what nightlight said: use UE, add the Born rule (only as an operational principle), and you are fine.

Another note. You said "all non-standard variants of QM I know predict non-local correlations of the EPR type". "Non-local correlations" is one thing, but it seems a bit vague, so, to clarify this issue, let me ask you the following questions: Do they predict any experimental results incompatible with any LR models, as is the case for standard quantum mechanics? If you say they do (for example, I am not even sure if this is the case for dBB), then my second question is: Does the relevant proof (an analog of the proof of the Bell theorem in SQM) use the projection postulate or something like that?

So let me try to summarize. Actually, I don't know the situation in non-standard variants of QM very well, so I am not sure about their being correct or wrong. (Neither do I know if they exclude any LR models.) For example, I strongly dislike MWI or GRW, I am not enthusiastic about the current forms of dBB, but I don't know if they are correct or wrong. As for SQM, it contains contradictory statements; that's why I know that, strictly speaking, it is wrong.

So, to answer your question "Is there some CONCRETE non-standard variant of QM for which you believe that it might be correct?", yes, there is. For example, I believe SQM without the projection postulate might be correct (I guess I can call this a CONCRETE non-standard variant of QM:-) ).

If, however, you actually wanted to ask me if there is a concrete explicitly local variant of QM that I believe might be correct, please advise.
 
  • #264


Dmitry67 said:
Whats about non-collapse interpretations?

Please see my reply to Demystifier, which can be summarized as follows: I don't know enough about non-collapse interpretations.
 
  • #265


ThomasT said:
Why not think of them as tools that complement each other?

If you are comfortable with contradictions, why not?

ThomasT said:
And yet using it in conjunction with UE gives accurate predictions.

I tend to think that a contradiction suggests some limitations of applicability. Loophole-free Bell tests may be one such area of limitation.

ThomasT said:
Wrong wrt what?

With respect to itself. This is what a contradiction is about.

ThomasT said:
If the goal is a realistic theory, then SQM is incomplete.

To my taste, it is too complete:-), so something should be excluded, not added:-).
 
  • #266
SpectraCat said:
Unfortunately I can't understand the details of that post, as I already mentioned. However, even if I accepted its content, I am not sure why it is relevant to this discussion ... and earlier you said "nonlinear" PDEs, only here do you mention that they are local. Can you please break down the physical significance of this statement in the current context, or link to a post where you described it previously?

Sorry, I still don't have time for the explanation, I'll try to do something about it later.
 
  • #267
ThomasT said:
Bell's generic LHV form (for the expectation value of joint detection) is

P(a,b) = ∫ dλ ρ(λ) A(a,λ) B(b,λ).

Bell locality can be written

P(A,B) = P(A)P(B).

Statistical independence is defined as

P(A,B) = P(A)P(B).


Statistical dependence is designed into, and independence is structured out of, Bell tests -- presumably ... if they're executed correctly.

So, any Bell local hidden variable formulation is, prima facie, in direct contradiction to an essential design element of any Bell test.

I really don't get what you are saying. The fact is, local realists deny that entanglement is a state. They say it is all coincidence, and there is a common cause. So it is true that Bell tests - which demonstrate entanglement as a state - will always violate LR.

But so what? All experiments are intended to show some aspect of our world. Bell tests show that entangled photons operate in a different spacetime view than the local realist would envision.
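To put a number on "violate LR", here is a quick Python check of the standard CHSH figure (a sketch, using the ideal QM correlation E(a,b) = cos 2(a-b) for polarization and the textbook angle choices):

[code]
import math

def E(a_deg, b_deg):
    # ideal QM correlation for polarization-entangled photon pairs
    return math.cos(2 * math.radians(a_deg - b_deg))

a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5   # textbook CHSH settings
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"S = {S:.4f}")   # 2*sqrt(2), about 2.8284
[/code]

Every LHV model obeys |S| <= 2, so the QM value of 2√2 is exactly the kind of "different spacetime view" the experiments are after.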
 
  • #268


akhmeteli said:
You said "all non-standard variants of QM I know predict non-local correlations of the EPR type". "Non-local correlations" is one thing, but it seems a bit vague, so, to clarify this issue, let me ask you the following questions: Do they predict any experimental results incompatible with any LR models, as is the case for standard quantum mechanics?
Yes they do. For example, they all predict violation of Bell inequalities for the (ideal) case of detectors with perfect efficiency.

akhmeteli said:
If you say they do (for example, I am not even sure if this is the case for dBB), then my second question is: Does the relevant proof (an analog of the proof of the Bell theorem in SQM) uses the projection postulate or something like that?
Many-worlds and Bohmian interpretations do not use a projection postulate or anything like that.

akhmeteli said:
So let me try to summarize. Actually, I don't know the situation in non-standard variants of QM very well, so I am not sure about their being correct or wrong. (Neither do I know if they exclude any LR models.)
That's fair to say. Anyway, if you did know more about them, it would probably be much easier for you to accept quantum nonlocality, at least in the sense of violation of Bell inequalities for the (ideal) case of detectors with perfect efficiency.

akhmeteli said:
If, however, you actually wanted to ask me if there is a concrete explicitly local variant of QM that I believe might be correct, please advise.
Well, I would advise you to give up searching for a variant of QM that does not predict violation of Bell inequalities for the (ideal) case of detectors with perfect efficiency.
 
  • #269
ThomasT said:
No, I don't mean the coincidence counting. The data matching process includes the criterion wrt which the data are matched (eg. time of detection via the assumption that detection attributes associated with the same time interval were, say, emitted by the same atom and thereby entangled at emission due to conservation of angular momentum).

Yes. In this case there's no design to relate the data sets (ie. the experiment is designed to produce two independent data sets) -- and, presumably, they could be matched according to any criterion and P(A,B) would never deviate from P(A) P(B).



Bell's generic LHV form (for the expectation value of joint detection) is

P(a,b) = ∫ dλ ρ(λ) A(a,λ) B(b,λ).

Bell locality can be written

P(A,B) = P(A)P(B).

Statistical independence is defined as

P(A,B) = P(A)P(B).


Statistical dependence is designed into, and independence is structured out of, Bell tests -- presumably ... if they're executed correctly.

So, any Bell local hidden variable formulation is, prima facie, in direct contradiction to an essential design element of any Bell test.

That's based on the assumption that the relationship (the entanglement) between the separately analyzed(filtered) disturbances is produced at emission (or via some other local common cause), prior to filtration.

Yes, and, eg. wrt Aspect '82, the experiment was designed to pair detection attributes associated with optical disturbances emitted in opposite directions by the same atom during the same transition interval.

I don't know about instruction sets. I sense, intuitively :smile:, that that way of looking at Bell's theorem might tend to obfuscate rather than clarify its meaning.

The experiments themselves are about presumably related optical emissions, and crossed polarizers, and spacelike separated joint detection events, etc. -- the underlying physics of which is still a mystery -- not instruction sets.

An apparent disparity between Bell's LHV form and experimental design has been exposited, and imho Bell's theorem doesn't mean what it's commonly taken to mean for the rather simple reason that I've presented.

I still completely fail to understand your point of view. You are simultaneously accepting and denying entanglement in separate parts of your argument. You say that the experiment is designed to produce entanglement, and therefore the A and B sets are statistically dependent. Then you go on to say that there are no correlations between the A and B measurements except when the angle between the detector settings is 0 or pi/2, and that this can be explained by a purely local mechanism. Huh? That seems contradictory and nonsensical ... you can't have it both ways.

But there is a more basic issue with your arguments in my view. Consider the following:

The detectors and coincidence circuitry are controlled by Alice, who has no knowledge of the source conditions ... all she has is a definition of what a coincidence is in the context of the experiment. Bob has two experimental setups P and Q, both of which produce oppositely polarized pairs of counter-propagating photons, but in the case of P they are entangled, and in Q they are not. From your previous statements, you appear to agree that for source P, the sets A and B will show a statistical dependence, and for source Q they will not. Therefore, simply from her observations, and without communicating with Bob, Alice can determine which source is being used, based on her measured coincidence statistics.

My point here is that it doesn't matter what the experimenters are *trying* to do with the source, because the detection scheme allows for the possibility that their design would fail, as I argued above.
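For reference, and to pin down the terminology (this is the standard statement, not my own construction): in Bell's ansatz the factorization is conditional on the hidden variable λ,

P(A,B|a,b,λ) = P(A|a,λ) P(B|b,λ), so that P(A,B|a,b) = ∫ dλ ρ(λ) P(A|a,λ) P(B|b,λ).

Averaging over λ generically produces correlations between A and B, so a statistical dependence between the two data sets is perfectly compatible with Bell's form; what the inequalities constrain is how strong and how setting-dependent that dependence can be.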
 
  • #270


akhmeteli said:
Certainly, your question is justified, but, I am afraid, a direct answer would be misleading again (I guess, Frame Dragger will have a field day:-) )

You see, I would like to emphasize first that I believe that UE of standard quantum mechanics is correct, and it describes pretty much everything you need. This is basically what nightlight said: use UE, add the Born rule (only as an operational principle), and you are fine.

Another note. You said "all non-standard variants of QM I know predict non-local correlations of the EPR type". "Non-local correlations" is one thing, but it seems a bit vague, so, to clarify this issue, let me ask you the following questions: Do they predict any experimental results incompatible with any LR models, as is the case for standard quantum mechanics? If you say they do (for example, I am not even sure if this is the case for dBB), then my second question is: Does the relevant proof (an analog of the proof of the Bell theorem in SQM) use the projection postulate or something like that?

So let me try to summarize. Actually, I don't know the situation in non-standard variants of QM very well, so I am not sure about their being correct or wrong. (Neither do I know if they exclude any LR models.) For example, I strongly dislike MWI or GRW, I am not enthusiastic about the current forms of dBB, but I don't know if they are correct or wrong. As for SQM, it contains contradictory statements; that's why I know that, strictly speaking, it is wrong.

So, to answer your question "Is there some CONCRETE non-standard variant of QM for which you believe that it might be correct?", yes, there is. For example, I believe SQM without the projection postulate might be correct (I guess I can call this a CONCRETE non-standard variant of QM:-) ).

If, however, you actually wanted to ask me if there is a concrete explicitly local variant of QM that I believe might be correct, please advise.

I'm not having a field day; however, as others here have concluded, you clearly believe in LHVs but also accept the predictive capacity of SQM. I don't see how you can have it both ways, but that's your business. Anyway, I'm only one of several here who have questioned your basic assumptions, and the desire for a "concrete non-standard variant of QM".

Personally, I respect and am constantly impressed by the ability of QM to predict, but I still can't bring myself to believe it's a theory which accurately depicts reality, or is complete. That said, I simply say that and go on with a combination of intellectual curiosity and instrumentalism in practice. Really, all I was trying to point out earlier is that when you're already looking for an alternative to the theory in which the question being discussed is couched, it's best to lead with that fact, and your assumptions.

Obviously I struck a nerve, or just plain annoyed you, but please do leave me out of future posts unless I'm actually involved, especially when I'm hardly alone in questioning you.

EDIT: I just have to say, when you say you want "concrete" out of the uncertainty and probabilities of QM I feel like screaming, "Get in line!" No offense, it's just a gut reaction and not angry.
 
  • #271
SpectraCat said:
I still completely fail to understand your point of view. You are simultaneously accepting and denying entanglement in separate points of your argument. You say that the experiment is designed to produce entanglement, and therefore the A and B sets are statistically dependent.
Yes, without statistical dependency between A and B you can't demonstrate entanglement. It's the successful matching of the separate data sets wrt certain criteria that makes the difference between seeing the QM-predicted correlations or not.

SpectraCat said:
Then you go on to say that there are no correlations between the A and B measurements except when the angle between the detector setting is 0 or pi, and that this can be explained by a purely local mechanism. Huh? That seems contradictory and non-sensical ... you can't have it both ways.
The correlation that the experiment is designed to produce, and that QM and proposed LHV models are making predictions about, is the correlation between θ (the angular difference between the analyzer settings) and the rate of joint detection.

There's no correlation between individual detections at A and B except for θ=0 and θ=90 degrees. Wrt these two settings a simple LHV model (producing a linear correlation function between θ and rate of joint detection) predicts the same thing as QM for θ=0 and θ=90 degrees (as well as θ=45 degrees).

So, there is an LHV account of any correlation between A and B. What there's no complete LHV account of is the correlation between θ and rate of joint detection for values of θ between 0 and 90 degrees.
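To illustrate, here is a minimal numerical sketch in Python (the "simple LHV model" here is the standard linear one, C(θ) = 1 - θ/90; the comparison is mine, for illustration only):

[code]
import math

def qm(theta_deg):    # QM coincidence rate, normalized to 1 at theta = 0
    return math.cos(math.radians(theta_deg)) ** 2

def lhv(theta_deg):   # simple linear LHV correlation function
    return 1.0 - theta_deg / 90.0

for theta in (0, 22.5, 45, 67.5, 90):
    print(f"theta={theta:5.1f}  QM={qm(theta):.3f}  linear LHV={lhv(theta):.3f}")
[/code]

The two agree only at θ = 0, 45, and 90 degrees, which is exactly the point being made above: the disagreement is confined to the intermediate angles.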

SpectraCat said:
But there is a more basic issue with your arguments in my view. Consider the following:

The detectors and coincidence circuitry are controlled by Alice, who has no knowledge of the source conditions ... all she has is a definition of what a coincidence is in the context of the experiment. Bob has two experimental setups P and Q, both produce oppositely polarized pairs of counter-propagating photons, but in the case of P, they are entangled, and in Q they are not. From your previous statements, you appear to agree that for source P, the sets A and B will show a statistical dependence, and for source Q they will not. Therefore, simply from her observations, and without communicating with Bob, Alice can determine which source is being used, based on her measured coincidence statistics.
Statistical dependence between A and B means that a detection at A changes the sample space at B, and vice versa, in a nonrandom way. Setup P is designed to produce related counter-propagating photons via the emission process. Setup Q isn't.

The criterion for data matching has to do with the relationship between the counter-propagating photons.

So, yes Alice should observe that the P and Q results are different and that the P correlations closely resemble those predicted for certain entangled states.

SpectraCat said:
My point here is that it doesn't matter what the experimenters are *trying* to do with the source, because the detection scheme allows for the possibility that their design would fail, as I argued above.
I don't follow what you're saying here. The criterion for data matching has to do with the relationship between the counter-propagating photons. Setup P is designed to produce related counter-propagating photons via the emission process. Setup Q isn't.
 
  • #272
akhmeteli said:
4) Therefore, I do believe that SQM is, strictly speaking, wrong.

not wrong, rather incomplete or approximate...

read:

Quantum Theory: Exact or Approximate?
http://arxiv.org/PS_cache/arxiv/pdf/0912/0912.2211v1.pdf

...There are two distinct approaches. One is to assume that quantum theory is exact, but that the interpretive postulates need modification, to eliminate apparent contradictions. Many worlds, decoherent histories, Bohmian mechanics, and quantum theory as information, all fall in this category. Although their underlying mathematical formulations differ, empirically they are indistinguishable, since they predict the same experimental results as does standard quantum theory.
The second approach is to assume that quantum mechanics is not exact, but instead is a very accurate approximation to a deeper level theory, which reconciles the deterministic and probabilistic aspects. This may seem radical, even heretical, but looking back in the history of physics, there are precedents. Newtonian mechanics was considered to be exact for several centuries, before being supplanted by relativity and quantum theory, to which classical physics is an approximation. But apart from this history, there is another important motivation for considering modifications of quantum theory. This is to give a quantitative meaning to experiments testing quantum theory, by having an alternative theory, making predictions that differ from those of standard quantum theory, to which these experiments can be compared...




http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.3964v1.pdf

...quantum phenomena possibly emerge only at larger scales than LP (the Planck scale), the scale of spacetime discreteness...




http://arxiv.org/PS_cache/arxiv/pdf/0912/0912.2845v2.pdf

....The outcome of the second measurement will evidently be different from what quantum mechanics predicts for a pair of successive measurements...



akhmeteli said:
Please see my reply to Demystifier, which can be summarized as follows: I don't know enough about non-collapse interpretations.



rather, objective collapse?
Continuous Spontaneous Localization (Dynamical Reduction Models).



http://arxiv.org/PS_cache/quant-ph/pdf/0701/0701014v2.pdf

...This idea, that the environment somehow naturally guarantees the emergence of definite properties when moving from the micro to the macro, by destroying coherence among different terms of a superposition, is very appealing. But wrong...

...I note here that the division between a system and its environment is not a division dictated by Nature. Such a division is arbitrarily set by the Physicist because he or she is not able to solve the Schrödinger equation for the global system; he or she then decides to select some degrees of freedom as the relevant ones, and to trace over all other degrees. This is a very legitimate division, but not compelling at all. Such a division is more or less equivalent to the division between a quantum system and a measuring device: it’s artificial, just a matter of practical convenience. But if the physicist were able to analyze exactly the microscopic quantum system, the macroscopic apparatus and the surrounding environment together, i.e. if he or she used the Schrödinger equation to study the global system, he or she would get a very simple result: once more, because of linearity, all terms of the superposition would be present at the same time in the wave function, no one of them being singled out as that which really occurs when the measurement is performed in the laboratory.
The so-called measurement problem of Quantum Mechanics is an open problem still waiting for a solution. Dynamical reduction models, together with Bohmian Mechanics, up to now are, in my opinion, the most serious candidates for a resolution of this problem...

...He (S. Adler) assumes precisely that quantum mechanics is not a fundamental theory of nature but an emergent phenomenon arising from the statistical mechanics of matrix models with a global unitary invariance...
 
Last edited by a moderator:
  • #273


Demystifier said:
Yes they do. For example, they all predict violation of Bell inequalities for the (ideal) case of detectors with perfect efficiency.


Many-worlds and Bohmian interpretations do not use a projection postulate or anything like that.

Thank you very much for this information. However, another question is in order in that case. Let me ask it using the example of the de Broglie - Bohm interpretation (dBB), as, on the one hand, you are an expert in it, and, on the other hand, I know more about it than about other non-standard interpretations.

It is my understanding that dBB fully accepts unitary evolution (UE) of standard quantum mechanics (at least, in some of its versions).

If I am wrong, please advise. However, if I am indeed wrong (or for those versions that do not accept UE unconditionally), that means that dBB predicts deviations from UE and thus experimental results differing from those of SQM (at least in principle). How do we know that these predictions of dBB are indeed correct? I think you'll agree that we cannot know that until we have experimental confirmation. So anything that dBB has to say on nonlocality beyond what SQM says has no experimental basis.

If, however, I am right (or for those versions of dBB that fully accept UE), my question is as follows. Is UE enough to prove nonlocality in dBB? If it is enough, then the relevant proof can be translated into a proof for SQM, and that means that nonlocality in SQM can be proven without the projection postulate (PP) or something like that. That would mean that I was terribly wrong from the very beginning of this thread, and I would certainly want to know if this is indeed the case.

If, however, dBB adds something extra to UE to prove nonlocality, then this extra is either correct in SQM, or it's wrong there. If it's correct in SQM, then again we can translate the dBB proof of nonlocality into a proof for SQM, and it is possible to prove nonlocality in SQM without PP or something like that. Again, I would want to know if this is so.

If, however, this extra is wrong in SQM, that means that it has no experimental basis.

So the above reasoning has several branches generated by several ifs, and I would very much appreciate it if you could tell me which branch is correct. Or maybe the entire reasoning is wrong for some other reason that I cannot see right now.


Demystifier said:
That's fair to say. Anyway, if you did know more about them, it would probably be much easier for you to accept quantum nonlocality, at least in the sense of violation of Bell inequalities for the (ideal) case of detectors with perfect efficiency.

That may be so. Right now, however, the above reasoning makes me doubt it.


Demystifier said:
Well, I would advise you to give up searching for a variant of QM that does not predict violation of Bell inequalities for the (ideal) case of detectors with perfect efficiency.

Thank you for your advice.
 
  • #274
yoda jedi said:
not wrong, rather incomplete or aproximate...

I could agree with you for practical purposes (indeed, one can say that PP is not wrong, but it's approximate), but this thread is not about practical purposes. Indeed, SQM implies nonlocality. If I use your wording ("approximate" instead of "wrong"), then am I supposed to talk about "approximate nonlocality"? Then what, am I supposed to say that this "approximate nonlocality" rules out locality or does not rule out locality? Does not make much sense either way, if you ask me. So for the purpose of this thread I prefer the following wording: "strictly speaking, wrong".

As for the work of Adler and others, their theories may be correct, but it is my understanding that they deny precise unitary evolution, and there is no experimental basis for that. Maybe there will be such an experimental basis in the future, but I am not sure I can meaningfully discuss these theories now.

Actually, I have problems with the motivation of their work. They seem to believe that measurements have definite outcomes, and I doubt that. I quoted the articles by Allahverdyan et al., where they rigorously study a model of measurement. In the process of measurement of a spin projection, the particle interacts with a paramagnetic system. This paramagnetic system evolves into some macroscopic state, and this seems to decide the outcome of the measurement. However, according to the quantum recurrence theorem, after an incredibly long period of time this macroscopic state will inevitably flip, if UE is correct, thus reversing the outcome of the measurement.
 
  • #275


akhmeteli said:
If, however, I am right (or for those versions of dBB that fully accept UE), my question is as follows. Is UE enough to prove nonlocality in dBB? If it is enough, then the relevant proof can be translated into a proof for SQM, and that means that nonlocality in SQM can be proven without the projection postulate (PP) or something like that. That would mean that I was terribly wrong from the very beginning of this thread, and I would certainly want to know if this is indeed the case.
Yes, that seems to be the case. QM is nonlocal even without the PP. Essentially, QM is nonlocal because the basic entity is the wave function, which is a single quantity attributed to all particles, even when they are spatially separated. For more elaborate argumentation that QM is nonlocal in ANY interpretation, see
http://xxx.lanl.gov/abs/quant-ph/0703071
 
  • #276


Demystifier said:
Yes, that seems to be the case. QM is nonlocal even without the PP. Essentially, QM is nonlocal because the basic entity is the wave function, which is a single quantity attributed to all particles, even when they are spatially separated. For more elaborate argumentation that QM is nonlocal in ANY interpretation, see
http://xxx.lanl.gov/abs/quant-ph/0703071

"Seems to be the case" is not the same as "is nonlocal". Neither is your article categorical ("strongly supports" is not the same as "proves"). You don't state that "nonlocality in SQM can be proven without the projection postulate (PP) or something like that", so while I certainly can be wrong saying it cannot, so far I stand by what I said. I certainly respect your opinion, but opinion is not a proof.

So I'd like to ask you again: is nonlocality proven in dBB using just UE?

As for "QM is nonlocal because the basic entity is the wave function, which is a single quantity attributed to all particles, even when they are spatially separated." - I mentioned a mathematical mechanism suggesting that a QFT-like theory can be a disguise for a local theory. Or, reversing the argument, a local theory can have a seemingly nonlocal form.
 
  • #277
SpectraCat said:
Unfortunately I can't understand the details of that post, as I already mentioned. However, even if I accepted its content, I am not sure why it is relevant to this discussion ... and earlier you said "nonlinear" PDEs, only here do you mention that they are local. Can you please break down the physical significance of this statement in the current context, or link to a post where you described it previously?

So let me try to explain. You see, in general, time evolution can be described by partial differential equations in 3+1 dimensions, such as the Maxwell equations, and they are typically local. There are also linear equations in the Fock space, such as in quantum electrodynamics (QED), and the Bell theorem seems to imply that, e.g., QED is nonlocal. So it seems that these two kinds of evolution are worlds apart. However, the formulae from Kowalski's book that I posted show that nonlinear equations in (3+1)D can be embedded in linear equations in the Fock space, and they look pretty much like unitary evolution in quantum field theory, and even the relevant Hamiltonian is expressed in terms of operators of creation and annihilation (I am hypothesizing now, but I think that similar results for fermions can be obtained using the fermionic coherent states (Cahill, Glauber, 1999)). So a local theory may be disguised as a nonlocal one.

Later I'll try to give a simple example of Carleman linearization to illustrate how a low-dimensional nonlinear differential equation can be embedded into a linear equation in infinitely many dimensions.
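In the meantime, here is the simplest toy case I have in mind (a sketch only; Kowalski's construction is more general). Take the scalar nonlinear equation dx/dt = x², and introduce the infinite family of moments y_n = x^n, n = 1, 2, ... Then

dy_n/dt = n x^(n-1) dx/dt = n x^(n+1) = n y_(n+1),

which is an infinite linear system: each y_n is coupled linearly to y_(n+1), much like a ladder of states connected by a creation operator. Truncating at finite n gives short-time approximations; the nonlinearity has been traded for infinitely many linear dimensions, which is the point of the analogy with the Fock space.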
 
  • #278


akhmeteli said:
So I'd like to ask you again: is nonlocality proven in dBB using just UE?
Yes, it is proven. Moreover, it is proven within any known formulation/interpretation of quantum theory.

akhmeteli said:
As for "QM is nonlocal because the basic entity is the wave function, which is a single quantity attributed to all particles, even when they are spatially separated." - I mentioned a mathematical mechanism suggesting that a QFT-like theory can be a disguise for a local theory. Or, reversing the argument, a local theory can have a seemingly nonlocal form.
As QFT is also a known formulation of quantum theory, my assertion above refers also to QFT. QFT also contains nonlocal objects - quantum states (represented by a sort of wave functions or something equivalent).
 
  • #279


akhmeteli said:
"Seems to be the case" is not the same as "is nonlocal". Neither is your article categorical ("strongly supports" is not the same as "proves"). You don't state that "nonlocality in SQM can be proven without the projection postulate (PP) or something like that", so while I certainly can be wrong saying it cannot, so far I stand by what I said. I certainly respect your opinion, but opinion is not a proof.
Have you read my paper completely? I have the impression that you read only the abstract.
 
  • #280


Demystifier said:
Yes, it is proven. Moreover, it is proven within any known formulation/interpretation of quantum theory.

This is strange. My understanding was that nonlocality was not proven, e.g., in SQM, using just UE, and you have not stated the opposite before (your article is not at all categorical on this point; moreover, you say there that you don't yet have a proof of your conjecture). Maybe you misread my question? Or maybe you could give me a reference to such a proof (using just UE) for SQM or dBB?
 