Local realism ruled out? (was: Photon entanglement and )

In summary, the conversation discussed the possibility of starting a new thread on a physics forum to discuss evidence for a specific perspective. The topic of the thread was related to the Bell theorem and its potential flaws on both theoretical and experimental levels. The original poster mentioned that their previous posts on this topic had been criticized, but their factual basis had not been challenged until recently. They also noted that the measurement problem in quantum mechanics is a well-known issue and cited a paper that they believed supported the idea that local realism has not been ruled out by existing experiments. The other participant in the conversation disagreed, stated that the paper did not rule out local realism, and provided additional quotes from experts in the field. Ultimately, the conversation concluded with both parties holding differing views.
  • #106
jambaugh said:
If we actually (in our conceptual model of how nature works) allow causal feedback, future to past, it seems to me then we must invoke a "meta-time" over which such phenomena would decay out or reinforce to a caustic threshold or stable oscillation, (the local "reality" oscillating w.r.t. this meta-time).
That's interesting, because my explicit Bohmian model of relativistic nonlocal reality does involve a "meta time".

jambaugh said:
The problem as I see it is that this sort of speculation is not operationally meaningful. It's no different than supposing an invisible aether, or Everett many-worlds. Sure, you can speculate, but you can't test within the bounds of science. Such phenomena are by their nature beyond observation. Again I see the "reality" of it as meaningless within the context of science. That isn't an argument, just the result of my many internal arguments over past years.
That objection can, of course, also be attributed to the nonrelativistic Bohmian interpretation, which does not involve the "meta time".
 
  • #107
Demystifier said:
Then my next question is: What WOULD you accept as a good argument for nonlocality? For example, if someone made better detectors with efficiency high enough to avoid the fair sampling loophole, and the experiments still violated Bell inequalities, would you accept THAT as good evidence for nonlocality?

Yes, that would certainly be good evidence of nonlocality (I mean if violations of genuine Bell inequalities, without loopholes, were demonstrated experimentally). In that case I would certainly have to reconsider my position. To be frank, I cannot promise I'll reject locality in that case and not free will, for example, but I will certainly have a hard time trying to adapt to the new reality. The problem is that locality will not be the only thing I'll need to reconsider in that case. Such an experimental demonstration would also undermine my firm belief in unitary evolution and relativity. And this is in fact the main reason I don't expect any violations of genuine Bell inequalities.
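For concreteness, the CHSH form of a "genuine" Bell inequality bounds every local hidden variable model by

$$|S| = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2,$$

while quantum mechanics predicts, for the singlet correlation $E(a,b) = -\cos(a-b)$ at the settings $a=0$, $a'=\pi/2$, $b=\pi/4$, $b'=3\pi/4$, a value of $|S| = 2\sqrt{2} \approx 2.83$. A loophole-free test must exhibit this violation without fair-sampling corrections.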

To give a direct answer to your question "What WOULD I accept as a good argument for nonlocality?", I should also add that an experimental demonstration of faster-than-light signaling would certainly be much more direct and convincing evidence of nonlocality. But again, locality would not be the only casualty of such a demonstration. Unitary evolution and relativity would also have a hard time surviving.
 
  • #108
Akhmeteli, that seems to be a reasonable answer. However, I think that nonlocality is compatible with relativity and unitary evolution. For more details see
https://www.physicsforums.com/showthread.php?t=354083
especially posts #1 and #109. I would like to see your opinion on that.
 
  • #109
akhmeteli said:
The problem is that locality will not be the only thing I'll need to reconsider in that case. Such an experimental demonstration would also undermine my firm belief in unitary evolution and relativity. And this is in fact the main reason I don't expect any violations of genuine Bell inequalities.

First, Bell tests ARE genuine. I think you mean "loophole-free". All experiments have "loopholes"; some are simply more relevant than others - and you are entitled to your personal opinion. But it is manifestly unfair to characterize the hundreds/thousands of different Bell tests themselves as "not genuine".

Second: that is quite a bold prediction you are making; I am not sure what would make you think that quantum mechanics is actually incorrect (an absolute deduction from your statement).

And last: why do you need to abandon relativity in the case of a confirmed (for you) violation of a Bell inequality? The speed of light will still remain a constant in all local reference frames. Mass and clocks will still follow the standard rules. So what changes? The only things that change are physical effects not described by relativity in the first place. I do not consider relativity to include the absolute prediction that nonlocal elements cannot exist. I think it is an implied result, and one that could well fit within a larger theory. In fact, that is a result that Demystifier has been expressing for some time.
 
  • #110
RUTA said:
It seems difficult to define space and time using interacting systems because you need the concepts of space and time to make sense of what you mean by "systems" to begin the process. That is, what you mean by "a system" seems to require trans-temporal identification and to have "two systems" requires spatial separation -- what else would you use to discriminate between otherwise identical systems? That's why we chose a co-definition of space, time and sources (as understood in discrete QFT) as our fundamental operating principle. I look forward to your solution.

Well consider for example the entangled electron pair, totally anti-correlated. We typically factor the system into left-moving and right-moving particles (picking our orientation frame appropriately). And we then speak of entanglement of their spins. We could as easily speak of the up z-spin and the down z-spin particle. This is a distinct factorization of the composite system into "two particles". Another distinct factorization is into x-spin up vs down. Each is a different "reality" and the plurality of choices specifically shows our classical bias in thinking of the composite system as two objects. We should rather refer to "a factor" instead of "the component". (And I think equating different factorizations is the principal mistake in parsing the EPR experiment and other entangled systems.)
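To make the point concrete: the totally anti-correlated singlet state is rotationally invariant, so it takes the same form in every spin basis,

$$|\psi^-\rangle = \tfrac{1}{\sqrt{2}}\left(|{\uparrow_z}\rangle|{\downarrow_z}\rangle - |{\downarrow_z}\rangle|{\uparrow_z}\rangle\right) = \tfrac{1}{\sqrt{2}}\left(|{\uparrow_x}\rangle|{\downarrow_x}\rangle - |{\downarrow_x}\rangle|{\uparrow_x}\rangle\right),$$

and there is consequently no unique way to parse it into "the up particle" and "the down particle".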

Now you may argue that spin is also a space-time concept, but I could as easily use quark color instead of spin. More to the point, we may find it "difficult to define space and time using interacting systems because" we "need the concepts of space and time to make sense of what [we] mean by 'systems' to begin the process" due to our being space-time entities. That is to say, it is a failing of our imagination and an artifact of our nature, not of the universe itself.

Agreed, initially we need a concept of time, but it need not be metric, only topological and ordered to reflect causal sequence. I can then conceive of a large-dimensional quantum system with a complicated random Hamiltonian. (Reparametrizing time to make it t-independent = picking a t-metric or class of metrics dictated by the dynamics.)

I can also conceive of factoring that system into N 2-dimensional components where 2^N is close to the dimension. Each 2-dim factor has its own U(2)~U(1)xSO(3) structure and I look at the global Hamiltonian and ask what form it takes in terms of internal plus interaction terms. I can then consider different choices of factorization which for the given Hamiltonian might simplify its form.

If I could find some way to formulate an iteration over cases and an optimization principle (say minimum sum of component entropies, i.e. minimal entanglement, or otherwise some quantification of symmetry or near-symmetry of the Hamiltonian, or ...), then I might find that a global su(2)xsu(2)~so(4) group [so(4) being the compact deformation of iso(3), the algebra of the Euclidean group of spatial geometry] naturally emerges for random Hamiltonians under appropriate factorizations and as t increases sufficiently. In short, a "natural" condensation into a 3-dimensional space as a spin network, with imperfections effecting e.g. gauge defects. Maybe with some hand-waving and invocation of anthropic principles I could reconstruct the universe in such a fashion.
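As a toy sketch of the kind of search this suggests (my own illustration, not a worked-out proposal: a "factorization" is modeled here as a mere permutation of the computational basis labels of a 2^N-dimensional space, which is only a small subset of all unitary refactorizations), one can score candidate factorizations of a random Hamiltonian's ground state by the sum of single-factor entanglement entropies and look for the minimum:

Code:
import numpy as np

N = 3                       # number of 2-dimensional factors; dim = 2**N
dim = 2 ** N
rng = np.random.default_rng(0)

# Random Hermitian "Hamiltonian".
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2

# Ground state.
psi = np.linalg.eigh(H)[1][:, 0]

def total_entanglement(psi, perm):
    # Sum of single-qubit von Neumann entropies for the factorization
    # defined by relabeling the basis with the permutation `perm`.
    t_full = psi[perm].reshape([2] * N)
    total = 0.0
    for k in range(N):
        m = np.moveaxis(t_full, k, 0).reshape(2, -1)
        rho = m @ m.conj().T                 # reduced density matrix of factor k
        ev = np.linalg.eigvalsh(rho)
        ev = ev[ev > 1e-12]
        total -= np.sum(ev * np.log2(ev))
    return total

# Compare random factorizations; the minimum is the "preferred" one.
scores = [total_entanglement(psi, rng.permutation(dim)) for _ in range(50)]
print("min / max total entanglement:", min(scores), max(scores))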

The question is, for a random large quantum system, can we extrapolate how an entity within that system, able to develop science and formulate physics, would paint its universe? What is the range of possibilities?

I haven't yet, of course, and such a program may not be "the right way to go about it" (and indeed I can already see many problems), but it is an example of how one might go about constructing/determining spatial structure from scratch. It is not inconceivable to me.
 
  • #111
jambaugh said:
Well consider for example the entangled electron pair, totally anti-correlated. We typically factor the system into left-moving and right-moving particles (picking our orientation frame appropriately). And we then speak of entanglement of their spins. We could as easily speak of the up z-spin and the down z-spin particle. This is a distinct factorization of the composite system into "two particles". Another distinct factorization is into x-spin up vs down. Each is a different "reality" and the plurality of choices specifically shows our classical bias in thinking of the composite system as two objects. We should rather refer to "a factor" instead of "the component". (And I think equating different factorizations is the principal mistake in parsing the EPR experiment and other entangled systems.)

You've snuck spatiality in through the back door -- you need two experimental outcomes, so you need two detectors. You don't need to talk about spatiality in the context of a "quantum system," but you do need those detectors. And, of course, you need to define what you mean by "up" and "down" outcomes in the context of those detectors. [In fact, we don't have any graphical counterpart to "quantum systems" in our approach.]

jambaugh said:
Now you may argue that spin is also a space-time concept, but I could as easily use quark color instead of spin. More to the point, we may find it "difficult to define space and time using interacting systems because" we "need the concepts of space and time to make sense of what [we] mean by 'systems' to begin the process" due to our being space-time entities. That is to say, it is a failing of our imagination and an artifact of our nature, not of the universe itself.

Moving to charge doesn't help -- you need "some thing" to "possess" the charge, even if you attribute it to the detectors. So, again, how do you distinguish two such otherwise identical "things" without space?

jambaugh said:
Agreed, initially we need a concept of time, but it need not be metric, only topological and ordered to reflect causal sequence. I can then conceive of a large-dimensional quantum system with a complicated random Hamiltonian. (Reparametrizing time to make it t-independent = picking a t-metric or class of metrics dictated by the dynamics.)

Exactly what we concluded: "time" is inextricably linked to what we mean by "things" (discrete QFT sources, for us). This is topological, not geometric, as you say. Now, are you going to argue that time is "special" in this sense over "space"? That is, we "need" a notion of temporality at the topological level but not space?

jambaugh said:
I can also conceive of factoring that system into N 2-dimensional components where 2^N is close to the dimension. Each 2-dim factor has its own U(2)~U(1)xSO(3) structure and I look at the global Hamiltonian and ask what form it takes in terms of internal plus interaction terms. I can then consider different choices of factorization which for the given Hamiltonian might simplify its form.

Interaction between ... ? Again, more than one "thing" will require some form of differentiation. Are you saying you will have a theoretical counterpart to every particle in the universe? That is, you can't talk about electrons, quarks, muons, ... in general?

jambaugh said:
I haven't yet, of course, and such a program may not be "the right way to go about it" (and indeed I can already see many problems), but it is an example of how one might go about constructing/determining spatial structure from scratch. It is not inconceivable to me.

I don't see, as I argue above, that you've succeeded even conceptually. You need the notions of identification and differentiation to have "things."
 
  • #112
Demystifier said:
Akhmeteli, that seems to be a reasonable answer. However, I think that nonlocality is compatible with relativity and unitary evolution. For more details see
https://www.physicsforums.com/showthread.php?t=354083
especially posts #1 and #109. I would like to see your opinion on that.

Dear Demystifier,

I did not say that "nonlocality is incompatible with relativity and unitary evolution". Indeed, tachyons are thinkable. However, it seems to me that relativity and unitary evolution in their current form leave little space for nonlocality. I remember studying quantum field theory many years ago. The lecturer was Professor Shirkov. Of course, we used his well-known book (N N Bogolyubov and D V Shirkov, `Introduction to the Theory of Quantized Fields'). One of the basic principles used in that book was microcausality. So I tend to believe nonlocality would lead to completely different forms of unitary evolution and relativity (for example, one such new form might require tachyons). Explicit or implicit faster-than-light signaling does not follow from the current form of unitary evolution and relativity. To get such nonlocality in the Bell theorem you need something extra, such as the projection postulate. And this postulate generates nonlocality in a very direct way: indeed, according to this postulate, as soon as you measure a projection of spin of one particle of a singlet, the value of the projection of spin of the other particle immediately becomes determined, no matter how far from each other the particles are, and this is what the Bell theorem is about.
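In symbols: for the singlet state, the projection postulate asserts

$$\tfrac{1}{\sqrt{2}}\left(|{\uparrow}\rangle_1|{\downarrow}\rangle_2 - |{\downarrow}\rangle_1|{\uparrow}\rangle_2\right) \;\longrightarrow\; |{\uparrow}\rangle_1|{\downarrow}\rangle_2 \quad \text{upon measuring } S_z \text{ of particle 1 with outcome } +\tfrac{\hbar}{2},$$

so particle 2 acquires a definite spin value at that instant, whatever the separation.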

I looked at the references you gave. Again, I agree that unitary evolution and relativity, strictly speaking, do not eliminate nonlocality. However, I wanted to ask you something. If I am not mistaken, you mentioned recently that Bohm's theory is superdeterministic. That seems reasonable. Furthermore, maybe unitary evolution is also, strictly speaking, superdeterministic. Indeed, it can include all observers and instruments, at least in principle. So my question is: What does this mean for the nonlocality of Bohm's theory?
 
  • #113
Demystifier said:
Akhmeteli, that seems to be a reasonable answer. However, I think that
nonlocality is compatible with relativity and unitary evolution.
For more details see
https://www.physicsforums.com/showthread.php?t=354083

I think the same.

yoda jedi said:
Specifically:

Tumulka:
http://arxiv.org/PS_cache/quant-ph/pdf/0406/0406094v2.pdf
and
http://arxiv.org/PS_cache/quant-ph/pdf/0602/0602208v2.pdf

Bedingham:
http://arxiv.org/PS_cache/arxiv/pdf/0907/0907.2327v1.pdf
 
  • #114
Demystifier said:
... if someone made better detectors with efficiency high enough to avoid the fair sampling loophole, and the experiments still violated Bell inequalities, would you accept THAT as good evidence for nonlocality?
No, of course not.

I asked in a previous post:
Is Bell's theorem about the way the quantum world is, or is it about limitations on the formalization of entangled states?
The formalism in effect models, and must be compatible with, the experimental design(s) it is associated with.

Quantum nonseparability, via the SQM representation, has to do with the nonfactorability of entangled state representations, which reflects the necessary statistical dependency between A and B -- not some property of the underlying quantum world.

The predictions of Bell LHV models (characterized by their incorporation of the Bell locality condition, i.e. factorability of the joint entangled state representation) don't fully agree with experimental results precisely because these models are incompatible with the salient feature of experiments designed to produce entanglement, namely the statistical dependence between A and B.
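For reference, the locality condition in question is the factorability of the joint probability given the hidden variable $\lambda$,

$$P(A,B \mid a,b,\lambda) = P(A \mid a,\lambda)\, P(B \mid b,\lambda),$$

whereas the quantum joint probability for the singlet, e.g. $P(+,+ \mid a,b) = \tfrac{1}{2}\sin^2\!\left(\tfrac{a-b}{2}\right)$, cannot be reproduced in this factored form for any distribution over $\lambda$ -- that is the content of Bell's theorem.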

And the statistical dependence between A and B is produced solely via the local transmissions and interactions involved in the pairing process.

So, the incompatibility of Bell LHV models with SQM and the experimental violation of Bell inequalities have nothing to do with nonlocality in Nature.

It might also be noted that calling SQM a local or nonlocal theory (whether due to Bell-associated considerations or some interpretation of the formalism by itself) is more obfuscating than enlightening.
 
  • #116
Demystifier said:
That's interesting, because my explicit Bohmian model of relativistic nonlocal reality does involve a "meta time".
...
That objection can, of course, also be attributed to the nonrelativistic Bohmian interpretation, which does not involve the "meta time".

Yes, I can see how the presence/absence of a meta-time would fit in, and I don't object to its invocation per se. I see e.g. BI (and MW) not so much as an interpretation as a model, given that, as I argue, it invokes non-operational components.

Thus if one were simply to drop the word "interpretation" from BI, I'd be all for it.

Acknowledged as such, I think Bohmian QM could be a nice tool, comparable to e.g. treating space-time as a dynamic manifold with its own meta-time and meta-dynamics under which it must relax to a stationary state yielding a solution of Einstein's equations. I don't have to assert the "reality" of extra dimensions or of that meta-time in which space-time is embedded to use the model as a tool for calculation and enumeration of cases.

But I find "reality" is inherently a classical concept, and indeed the epitome of classical-ness. I see trying to hold onto the "reality" part of the negated local reality as regressive. (and should be replaced with non-objective "actuality".) That's a somewhat intuitive judgment of course but I believe based on good heuristic principles.
 
  • #117
akhmeteli said:
Yes, that would certainly be good evidence of nonlocality (I mean if violations of genuine Bell inequalities, without loopholes, were demonstrated experimentally).
Experimental loopholes have nothing to do with it. Bell's LHV ansatz is incompatible with QM because QM, a statistical theory, correctly models the statistical dependency between A and B of the entangled state (via nonfactorability of the joint state representation), while Bell's formulation doesn't.

akhmeteli said:
To get such nonlocality in the Bell theorem you need something extra, such as the projection postulate. And this postulate generates nonlocality in a very direct way: indeed, according to this postulate, as soon as you measure a projection of spin of one particle of a singlet, the value of the projection of spin of the other particle immediately becomes determined, no matter how far from each other the particles are, and this is what the Bell theorem is about.
The assumption underlying the projection postulate is that what is being jointly analyzed at A and B during the same coincidence interval is the same thing. Where's the nonlocality?
 
  • #118
DrChinese said:
First, Bell tests ARE genuine. I think you mean "loophole-free". All experiments have "loopholes"; some are simply more relevant than others - and you are entitled to your personal opinion. But it is manifestly unfair to characterize the hundreds/thousands of different Bell tests themselves as "not genuine".

Thank you for your comments.

I did not say the tests were not genuine. I just did not say that. However, the Bell inequalities violated in those tests were not genuine, i.e. not those defined in the Bell theorem, because either they were doctored using the fair sampling assumption or the spatial separation was not sufficient. So I insist that genuine Bell inequalities were not violated in those experiments, and this is not just my opinion, this is mainstream (I admit that, strictly speaking, there is no consensus on that, as you strongly disagree :-) )

DrChinese said:
Second: that is quite a bold prediction you are making; I am not sure what would make you think that quantum mechanics is actually incorrect (an absolute deduction from your statement).

What makes me think that is the fact that unitary evolution and the projection postulate contradict each other, so they cannot be both correct.

DrChinese said:
And last: why do you need to abandon relativity in the case of a confirmed (for you) violation of a Bell inequality? The speed of light will still remain a constant in all local reference frames. Mass and clocks will still follow the standard rules. So what changes? The only things that change are physical effects not described by relativity in the first place. I do not consider relativity to include the absolute prediction that nonlocal elements cannot exist. I think it is an implied result, and one that could well fit within a larger theory. In fact, that is a result that Demystifier has been expressing for some time.

I answered this question replying to Demystifier. In brief, I admit that relativity and nonlocality, strictly speaking, are not incompatible, but I tend to believe that relativity and unitary evolution in their current form do not suggest nonlocality.
 
  • #119
yoda jedi said:
i think the same.

Please see my answers to Demystifier and DrChinese.
 
  • #120
ThomasT said:
Experimental loopholes have nothing to do with it. Bell's LHV ansatz is incompatible with QM because QM, a statistical theory, correctly models the statistical dependency between A and B of the entangled state (via nonfactorability of the joint state representation), while Bell's formulation doesn't.

The assumption underlying the projection postulate is that what is being jointly analyzed at A and B during the same coincidence interval is the same thing. Where's the nonlocality?

Dear ThomasT,

I am awfully sorry, I've read your post several times, but I just cannot understand a word.
 
  • #121
akhmeteli said:
If I am not mistaken, you mentioned recently that Bohm's theory is superdeterministic. That seems reasonable. Furthermore, maybe unitary evolution is also, strictly speaking, superdeterministic. Indeed, it can include all observers and instruments, at least in principle. So my question is: What does this mean for the nonlocality of Bohm's theory?
Bohmian mechanics is both superdeterministic and nonlocal. This should not be surprising, because Bohmian mechanics uses the wave function, and the wave function is a nonlocal and deterministic object.
 
  • #122
Demystifier said:
Bohmian mechanics is both superdeterministic and nonlocal. This should not be surprising, because Bohmian mechanics uses the wave function, and the wave function is a nonlocal and deterministic object.

I have not given much thought to superdeterminism, so please forgive me if the following question is downright stupid.

My understanding is that superdeterminism rejects free will. So it looks like, from the point of view of Bohmian mechanics, no possible results of Bell tests can eliminate local realism, because there is no free will anyway? I know that, Bohmian mechanics or not, the "superdeterminism loophole" cannot be eliminated in Bell tests, but superdeterminism is typically considered a pretty extreme notion, and now it turns out it is alive and kicking in such a relatively established approach as the Bohmian one?
 
  • #123
As I understand it, superdeterminism alone is not enough to create a loophole in Bell tests. In addition to superdeterminism, we also need an evil Nature that positioned BM particles in advance in a very special way, to trick the scientists and laugh at them.

In some sense that loophole is like the 'Boltzmann brain', which also cannot be ruled out. BTW, the 'Boltzmann brain' argument can be used even to deny QM as a whole: the world is just Newtonian, but the 'Boltzmann brain' has memories that QM was discovered and experimentally verified.
 
  • #124
Demystifier said:
Bohmian mechanics is both superdeterministic and nonlocal. This should not be surprising, because Bohmian mechanics uses the wave function, and the wave function is a nonlocal and deterministic object.
That's a useful observation. It's obvious, as you say, if you think of it. Thanks.

Do you have a view of how this meshes with arguments about free will, or do you think the issue of free will is overblown?
 
  • #126
akhmeteli said:
My understanding is that superdeterminism rejects free will.
True.

akhmeteli said:
So it looks like, from the point of view of Bohmian mechanics, no possible results of Bell tests can eliminate local realism, because there is no free will anyway?
Wrong. Bohmian mechanics is, by definition, a theory of nonlocal realism, so anything which assumes Bohmian mechanics eliminates local realism.

akhmeteli said:
I know that, Bohmian mechanics or not, the "superdeterminism loophole" cannot be eliminated in Bell tests, but superdeterminism is typically considered a pretty extreme notion, and now it turns out it is alive and kicking in such a relatively established approach as the Bohmian one?
Superdeterminism by itself is not extreme at all. After all, classical mechanics is also superdeterministic. What is extreme is the idea that superdeterminism may eliminate nonlocality in QM. Namely, superdeterminism alone is not sufficient to eliminate nonlocality. Instead, to eliminate nonlocality, superdeterminism must be combined with a VERY SPECIAL CHOICE OF INITIAL CONDITIONS (see also the post of Dmitry67 above). It is such special conspiratorial initial conditions that are considered extreme.
 
  • #127
Demystifier said:
In my opinion, free will is only an illusion. See the attachment in
https://www.physicsforums.com/showpost.php?p=2455753&postcount=109
Fair enough, given the just-hedged-enough nature of "you think that you have free will. But it may only be an illusion". For me, I'm not willing to make strong claims about something that appears not to be so easily examined experimentally, but OK, if we have the hedge.
 
  • #128
Peter Morgan said:
Fair enough, given the just-hedged-enough nature of "you think that you have free will. But it may only be an illusion". For me, I'm not willing to make strong claims about something that appears not to be so easily examined experimentally, but OK, if we have the hedge.
I'm glad to see that we (you and me) think similarly.
 
  • #129
Demystifier said:
After all, classical mechanics is also superdeterministic.
Right.
Demystifier said:
What is extreme is the idea that superdeterminism may eliminate nonlocality in QM. Namely, superdeterminism alone is not sufficient to eliminate nonlocality. Instead, to eliminate nonlocality, superdeterminism must be combined with a VERY SPECIAL CHOICE OF INITIAL CONDITIONS (see also the post of Dmitry67 above). It is such special conspiratorial initial conditions that are considered extreme.
The "very special"ness is only that, given that the state of the whole experimental apparatus at the times that simultaneous events were recorded, together with the instrument settings at the time, were what they were, the state of the whole experimental apparatus and its whole past light cone at some point in the past must have been consistent with the state that we observed. From a classical deterministic dynamics point of view, this is only to say that the initial conditions now determine the initial conditions at past times (and at future times).

A thermodynamic or statistical mechanical point of view of what the state is, however, places a less stringent requirement: the thermodynamic or statistical mechanical state in the past must have been consistent with the recorded measurements that we make now. An experiment that violates Bell-CHSH inequalities typically records a few million events that are identified as "pairs", which is not a very tight constraint on what the state of the universe was in the backward light-cone a year ago. A probabilistic dynamics, such as that of QM, only claims that the statistics that are observed now on various ensembles of data constrain what the statistics in the past would have been if we had measured them. This kind of move to probabilistic dynamics is as open to classical modeling in space-time as it is to QM, in which we make the superdeterminism apply only to probability distributions instead of to deterministic states. To some extent this move suggests giving up particle trajectories, but of course trajectories can be added that are consistent with the probabilistic dynamics of QM, in several ways, at least including deBB, Nelson, and SED (insofar as the trajectories that we choose to add are beyond being looked at by experiment, however, we should perhaps be metaphysically rather noncommittal).
 
  • #130
From an interview with Anton Zeilinger:

I'd like to come back to these freedoms. First, if you assumed there were no freedom of the will -- and there are said to be people who take this position -- then you could do away with all the craziness of quantum mechanics in one go.

True -- but only if you assume a completely determined world where everything that happened, absolutely everything, were fixed in a vast network of cause and effect. Then sometime in the past there would be an event that determined both my choice of the measuring instrument and the particle's behaviour. Then my choice would no longer be a choice, the random accident would be no accident and the action at a distance would not be action at a distance.

Could you get used to such an idea?

I can't rule out that the world is in fact like that. But for me the freedom to ask questions of nature is one of the most essential achievements of natural science. It's a discovery of the Renaissance. For the philosophers and theologians of the time, it must have seemed incredibly presumptuous that people suddenly started carrying out experiments and asking questions of nature and deducing laws of nature, which are in fact the business of God. For me every experiment stands or falls with the fact that I'm free to ask the questions and carry out the measurements I want. If that were all determined, then the laws of nature would only appear to be laws, and the entire natural sciences would collapse.

http://print.signandsight.com/features/614.html
 
  • #131
Hi Nikman, but note that Zeilinger has limited the discussion to "complete" determinism. As he says, he can't rule complete determinism out, but he doesn't like it; he'd rather do something else. Fair enough.

I'm curious what you think, Zeilinger not being here, about the suggestion that we take the state to be either thermodynamic or statistical mechanical (i.e. a deterministic evolution of probability distributions, without necessarily introducing deterministic trajectories). Part of the suggestion here is to emulate, in a classical setting, the relative lack of metaphysical commitment of, say, the Copenhagen interpretation of QM to anything that we do not record as part of an experiment, which to me particularly includes trajectories.
 
  • #132
Demystifier said:
Superdeterminism by itself is not extreme at all. After all, classical mechanics is also superdeterministic. What is extreme is the idea that superdeterminism may eliminate nonlocality in QM. Namely, superdeterminism alone is not sufficient to eliminate nonlocality. Instead, to eliminate nonlocality, superdeterminism must be combined with a VERY SPECIAL CHOICE OF INITIAL CONDITIONS (see also the post of Dmitry67 above). It is such special conspiratorial initial conditions that is considered extreme.

I don't think it is completely fair to say that classical mechanics is also superdeterministic, because I do not believe that is the case. If determinism were the same thing as superdeterminism, we would not need a special name for it. So I agree completely with your "extreme" initial conditions requirement at a minimum.

But I also question whether [classical mechanics] + [extreme initial conditions] can ever deliver superdeterminism. In a true superdeterministic theory, you would have an explicit description of the mechanism by which the *grand* conspiracy occurs (the conspiracy to violate Bell inequalities). For example: we could connect Alice's detector setting to a switch controlled by the timing of decays of a radioactive sample. So that is now part of the conspiracy too, and the instructions for when to click or not must be present in that sample (and therefore presumably everywhere). Were that true, why can't we see it before we run the experiment?

As I have said many times: if you allow the superdeterminism "loophole" as a hedge for Bell inequalities, you essentially allow it as a hedge for all physical laws. Which sort of takes the meaning away from it (as a hedge) in the first place.

[I probably shouldn't have even written this post, so my apologies in advance. I consider it akin to false histories (the Omphalos hypothesis) - ad hoc and unfalsifiable.]
 
  • #133
nikman said:
From an interview with Anton Zeilinger:

... If that were all determined, then the laws of nature would only appear to be laws, and
the entire natural sciences would collapse.

http://print.signandsight.com/features/614.html

Thanks for the link! I think his quote says a lot.
 
  • #134
DrChinese said:
But I also question whether [classical mechanics] + [extreme initial conditions] can ever deliver superdeterminism. In a true superdeterministic theory, you would have an explicit description of the mechanism by which the *grand* conspiracy occurs (the conspiracy to violate Bell inequalities).
Part of the conspiracy, at least, comes from the experimenter. An apparatus from a specific symmetry class has to be constructed, typically over months, insofar as it used not to be easy to violate Bell inequalities. The material physics that allows us to construct the requisite correlations between measurement results is arguably pretty weird.

Furthermore, the standard way of modeling Bell-inequality-violating experiments in QM is to introduce projection operators onto polarization states of a single frequency mode of light, which are non-local operators. [Apropos of which, DrC, do you know of a derivation that is truly careful about field-theoretic locality?] The QM model, in other words, is essentially a description of steady-state, time-independent statistics that has specific symmetry properties. Since I take violation of Bell inequalities to be more about contextuality than about nonlocality -- contextuality which is specifically implemented by post-selection of a number of sub-ensembles according to what measurement settings were in fact chosen -- this seems natural to me, but I wonder what you think?

Remember that with me you have to make a different argument than you might make with someone who thinks the measurement results are noncontextually determined by the state of each of two particles, since for me whether measurement events occur is determined jointly by the measurement devices and the field they are embedded in.
DrChinese said:
For example: we could connect Alice's detector setting to a switch controlled by the timing of decays of a radioactive sample. So that is now part of the conspiracy too, and the instructions for when to click or not must be present in that sample (and therefore presumably everywhere). Were that true, why can't we see it before we run the experiment?
I do wonder, but apparently that's how the statistics pile up. We have a choice: either we just say, with Copenhagen, that we can say nothing at all about anything that is not macroscopic, or we consider what properties different types of models have to have in order to "explain" the results. A particle physicist tells a causal story about what happens in experiments, using particles, anti-particles, and ghost and virtual particles, with various prevarications about what is really meant when one talks about such things (which is typically nonlocal if anything like Wigner's definition of a particle is mentioned, almost inevitably); so it seems reasonable to consider what prevarications there have to be in other kinds of models. It's good that we know moderately well what prevarications we have to introduce in the case of deBB, and that they involve a nonlocal trajectory dynamics in that case.
DrChinese said:
As I have said many times: if you allow the superdeterminism "loophole" as a hedge for Bell inequalities, you essentially allow it as a hedge for all physical laws. Which sort of takes the meaning away from it (as a hedge) in the first place.
This might be true, I guess, although proving that superdeterminism is a hedge for all possible physical laws looks like tough mathematics to me. Is the same perhaps true for backward causation? Do you think it's an acceptable response to ask what constraints have to be put on superdeterminism (or backward causation) to make it give less away?
DrChinese said:
[I probably shouldn't have even written this post, so my apologies in advance. I consider it akin to false histories (the Omphalos hypothesis) - ad hoc and unfalsifiable.]
You're always welcome with me, DrC. I'm very pleased with your comments in this case. If you're ever in CT, look me up.
I like the Omphalos. Is it related to the heffalump?

Slightly after the above, I'm particularly struck by your emphasis on the degree of correlation required in the initial conditions to obtain the experimental results we see. Isn't the degree of correlation required in the past precisely the same as the degree of correlation that we note in the records of the experimental data? It's true that the correlations cannot be observed in the past without measurement of the initial state in outrageous detail across the whole of a time-slice of the past light-cone of a measurement event, insofar as there is any degree of dynamical chaos, but that doesn't take away from the fact that in a fine-grained enough description there is no change of entropy. [That last phrase is a bit cryptic, perhaps, but it takes my fancy a little. Measurements now are the same constraint on the state in the past as they are on the state now. Since they are actually observed constraints now, it presumably cannot be denied that they are constraints on the state now. If the actual experimental results look a little weird as constraints that one might invent now, then presumably they look exactly as weird as constraints on the state 10 years ago, no more and no less. As observed constraints, they are constraints on what models have to be like to be empirically adequate.] I'm worried that all this repetition is going to look somewhat blowhard, as it does a little to me now, so I'd be glad if you can tell me if you can see any content in it.
 
  • #135
Peter Morgan said:
Hi Nikman, but note that Zeilinger has limited the discussion to thinking it has to be "complete" determinism. As he says, he can't rule complete determinism out, but he doesn't like it, he'd rather do something else. Fair enough.

I made the mistake of claiming in a post a while back that the Zeilinger group's Leggett paper needed editing (for English clarity) because its conclusion seemed to suggest that the authors didn't foreclose even on superdeterminism (or something more or less equivalent). Well, I was wrong; they don't foreclose on it, as AZ makes clear here. He simply finds such a world unimaginable.

I'm curious what you think, Zeilinger not being here, about the suggestion that we take the state to be either thermodynamic or statistical mechanical (i.e. a deterministic evolution of probability distributions, without necessarily introducing deterministic trajectories). Part of the suggestion here is to emulate, in a classical setting, the relative lack of metaphysical commitment of, say, the Copenhagen interpretation of QM to anything that we do not record as part of an experiment, which to me particularly includes trajectories.

I'm far more abashed than flattered at being considered an acceptable stand-in for this astonishing, brilliant man. For gosh sakes, I'm not even a physicist; I'm at best an 'umble physics groupie.

In this dilettante capacity I'm not aware that he's ever gone as far as (say) Mermin (in the Ithaca Interpretation) and suggested that everything's correlations, dear boy, correlations. What does Bruknerian coarse-grainedness as complementary to decoherence tell us? This is really in part about what macrorealism means, isn't it? Does the GHZ Emptiness of Paths Not Taken have any relevance here?

My understanding via Hans C. von Baeyer is that Brukner and Zeilinger have plotted state evolution in "information space" (in terms of classical mechanics, equivalent to trajectories of billiard balls perhaps) and then translated that into Hilbert space where the math reveals itself to be the Schrödinger equation. How truly deterministic is the SE? My mental clutch is starting to slip now.
 
  • #136
Maaneli said:
I disagree. You replied to someone's suggestion that locality is worth sacrificing for realism with the claim that Leggett's work shows that even "realism" (no qualifications given about contextuality or non-contextuality) is not tenable without sacrificing another intuitively plausible assumption. But that characterization of Leggett's work is simply not accurate, as anyone can see by reading those abstracts you linked to. And I don't even think it's true that everyone in this field agrees that the word realism is used to imply classical realism, and that this is done without any confusion. I know several active researchers in this field who would dispute the validity of your use of terminology. Moreover, the link you gave to try and support your claim doesn't actually do that. If you read your own link, you'll see that everything Aspelmeyer and Zeilinger conclude about realism from their experiment is qualified in the final paragraph:

However, Alain Aspect, a physicist who performed the first Bell-type experiment in the 1980s, thinks the team's philosophical conclusions are subjective. "There are other types of non-local models that are not addressed by either Leggett's inequalities or the experiment," he said.

So Aspect is clearly indicating that Aspelmeyer and Zeilinger's use of the word "realism" is intended in a broader sense than Leggett's use of the term "classical realism".



It's not nitpicking on semantics, it's getting the physics straight. If that's too difficult for you to do, then I'm sorry, but maybe you're just not cut out for this thread.

I agree.
Reality is independence of observers.
 
  • #137
Peter Morgan said:
Part of the conspiracy, at least, comes from the experimenter. ... I'm worried that all this repetition is going to look somewhat blowhard, as it does a little to me now, so I'd be glad if you can tell me if you can see any content in it.

We have a lot of jackalopes in Texas, but few heffalumps.

---------------------------------

The issue is this: Bell sets limits on local realistic theories. So there may be several potential "escape" mechanisms. One is non-locality, of which the Bohmian approach is an example that attempts to explicitly describe the mechanism by which Bell violations can occur. Detailed analysis appears to provide answers as to how this could match observation. BM can be explicitly critiqued, and answers can be provided to those critiques.

Another is the "superdeterminism" approach. Under this concept, the initial conditions are just such that all experiments which are done will always show Bell violations. However, like the "fair sampling" loophole, the idea is that from the full universe of possible observations - those which are counterfactual - the true rate of coincidence does NOT violate a Bell Inequality. So there is a bias function at work. That bias function distorts the true results because the experimenter's free will is compromised. The experimenter can only select to perform measurements which support QM due to the experimenter's (naive and ignorant) bias.

Now, without regard to the reasonableness of that argument, I point out the following cases, in which the results are identical.

a) The experimenter's detector settings are held constant for a week at a time.
b) The settings are changed at the discretion of the experimenter, at any interval.
c) The settings are changed due to clicks from a radioactive sample, per an automated system, over which the experimenter has no direct control.
d) A new hypothesis, that the experiments actually show that a Bell Inequality is NOT violated, but the data recording device is modified coincidentally to show results indicating that the Bell Inequality was violated.

In other words, we know we won't see any difference between a), b) and c). And if d) occurred, it would be a different form of "superdeterminism". So the question I am asking: does superdeterminism need to obey any rules? Does it need to be consistent? Does it need to be falsifiable? Because clearly, the a) case above should be enough to rule out superdeterminism (at least in my mind - the experimenter is exercising no ongoing choice past an initial point). The c) case requires that superdeterminism flow from one force to another, while the standard model shows no such mechanism (since there is no known connection between an experimental optical setting and the timing of radioactive decay). And the d) case shows that there is always one more avenue by which we can float an ad hoc hypothesis.

So you ask: is superdeterminism a hedge for all physical laws? If you allow the above, one might then turn around and say: does it not apply to other physical laws equally? Because my answer is that if so, perhaps relativity is not a true effect - it is simply a manifestation of superdeterminism. All of those GPS satellites... they suffer from the idea that the experimenter is not free to request GPS information freely. So while results appear to follow GR, they really do not. How is this less scientific than the superdeterminism "loophole" as applied to Bell?

In other words, there is no rigorous form of superdeterminism to critique at this point past an ad hoc hypothesis. And we can formulate ad hoc hypotheses about any physical law. None of which will ever have any predictive utility. So I say it is not science in the conventional sense.

-----------------------

You mention contextuality and the subsamples (events actually recorded). And you also mention the "degree of correlation required in the initial conditions to obtain the experimental results we see". The issue I return to time after time: the bias function - the delta between the "true" universe and the observed subsample correlation rates - must itself be a function of the context. But it is sometimes negative and sometimes positive. That seems unreasonable to me. Considering, of course, that the context depends ONLY on the relative angle difference and nothing else.

So we need a bias function that eliminates all other variables except the difference between measurement settings at a specific point in time. It must apply to entangled light, which will also show perfect correlations. But it must NOT apply to unentangled light (as you know, that is my criticism of the De Raedt model). And it must further return apparently random values in all cases. I believe these are all valid requirements of a superdeterministic model. As well as locality and realism, of course.
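That sign change is easy to exhibit. Taking the quantum singlet correlation -cos(theta) and, as a stand-in for the "true" local-realistic baseline, the straight-line correlation -1 + 2*theta/pi of a simple local hidden variable model (the kind discussed in Bell's original paper), a short sketch:

Code:
import numpy as np

theta = np.linspace(0, np.pi, 13)       # relative angle between settings
E_qm  = -np.cos(theta)                  # quantum singlet correlation
E_lhv = -1 + 2 * theta / np.pi          # linear local-realistic correlation
bias  = E_qm - E_lhv                    # the "bias function" in question
for t, b in zip(theta, bias):
    print(f"theta = {t:5.3f} rad   bias = {b:+.3f}")
# The bias is negative for theta < pi/2 and positive for theta > pi/2,
# while depending on nothing but the relative angle.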
 
  • #138
Continued from above...

So what I am saying is: when you put together all of the requirements, I don't think you have anything left that works. You just get arguments that are no better than "Last Thursdayism".

------------------------------

By the way, wouldn't GHZ falsify superdeterminism too? After all, there is no subsample.

Or would one make the argument that the experimenter had no free will as to the choice of what to measure? (That seems a stretch, since all observations yield results inconsistent with local realism - at least within experimental limits).
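For reference, the GHZ argument is indeed deterministic rather than statistical. For $|\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}(|000\rangle + |111\rangle)$ one has

$$\sigma_x\sigma_y\sigma_y|\mathrm{GHZ}\rangle = \sigma_y\sigma_x\sigma_y|\mathrm{GHZ}\rangle = \sigma_y\sigma_y\sigma_x|\mathrm{GHZ}\rangle = -|\mathrm{GHZ}\rangle, \qquad \sigma_x\sigma_x\sigma_x|\mathrm{GHZ}\rangle = +|\mathrm{GHZ}\rangle,$$

while preassigned local values $x_i, y_i = \pm 1$ must satisfy $(x_1 y_2 y_3)(y_1 x_2 y_3)(y_1 y_2 x_3) = x_1 x_2 x_3 = (-1)^3 = -1$, contradicting the quantum value $+1$ in a single run rather than on average.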
 
  • #139
Demystifier said:
True.


Wrong. Bohmian mechanics is, by definition, a theory of nonlocal realism, so anything which assumes Bohmian mechanics eliminates local realism.


Superdeterminism by itself is not extreme at all. After all, classical mechanics is also superdeterministic. What is extreme is the idea that superdeterminism may eliminate nonlocality in QM. Namely, superdeterminism alone is not sufficient to eliminate nonlocality. Instead, to eliminate nonlocality, superdeterminism must be combined with a VERY SPECIAL CHOICE OF INITIAL CONDITIONS (see also the post of Dmitry67 above). It is such special conspiratorial initial conditions that are considered extreme.

Thank you very much for the explanations.
 
  • #140
DrChinese said:
I don't think it is completely fair to say that classical mechanics is also superdeterministic, because I do not believe that is the case. If determinism were the same thing as superdeterminism, we would not need a special name for it. So I agree completely with your "extreme" initial conditions requirement at a minimum.
I see what you mean, but note that I use a different DEFINITION of the term "superdeterminism". In my language, superdeterminism is nothing but determinism applied to everything. Thus, a classical deterministic model of the world is superdeterministic if one assumes that, according to this model, everything that exists is described by the classical laws of physics. In my language, superdeterminism does not imply the absence of specific laws, such as Newton's law of gravitation.

Even with this definition of superdeterminism, it is not exactly the same as determinism. For example, if you believe that the classical laws of physics are valid everywhere except in the brain, in which a genuine spiritual free will also acts on the electric currents, then, according to my definition, such a view is deterministic but not superdeterministic.
 