How does the thermal interpretation explain Stern-Gerlach?

In summary, the thermal interpretation views the measurement of a Hermitian quantity as giving an uncertain value approximating the q-expectation, rather than an exact revelation of an eigenvalue. Applied to the Stern-Gerlach experiment, the observed result is seen as an uncertain measurement of the q-expectation of the spin-x operator, which is zero for a beam of spin-z up electrons passing through a Stern-Gerlach device oriented in the x direction. At first sight this seems puzzling: an uncertain measurement of a quantity with q-expectation zero would classically be expected to yield a normal distribution around zero, not the two discrete spots actually observed. In the thermal interpretation, the measurement device is always treated as a quantum device, and the beam is represented as a quantum field with a sharp particle number.
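As a quick numerical check of the zero q-expectation claimed above (a minimal numpy sketch, my addition, not part of the thread):

```python
import numpy as np

# Pauli matrix sigma_x
sx = np.array([[0, 1], [1, 0]], dtype=complex)

# Spin-z "up" eigenstate
up = np.array([1, 0], dtype=complex)

# q-expectation of spin-x in the spin-z up state
print(np.real(up.conj() @ sx @ up))  # 0.0 -- yet each run records a spot near +1 or -1
```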
  • #36
A. Neumaier said:
If the environment is such that it corresponds to a spin measurement with collapse to an up or down state, this dynamics is expected to have just two stable fixed points

Hmm... I think I might get it. Let me try my own words again and see if you agree. The appearance of Copenhagen-style collapse is inextricably bound up in the physical construction of the device itself. We should think of the incident field as causing the device to transition from its (sort of metastable) "ready" configuration to one of its 2+ possible (sort of "ground state") "clicked" configurations, which represent inaccurate TI measurements (as opposed to Copenhagen projections).

But the key is that not all transitions to all arbitrary clicked configurations can be induced by an N=1 field. In particular, such a field cannot induce a transition to a clicked configuration where the device has clicked 2+ times at different cells of the device.

Is that the idea?
 
  • #37
A. Neumaier said:
traditional interpretations claim the validity of the linear Schrödinger equation only for isolated systems. A detector is never isolated, hence the linear Schrödinger equation does not apply.

Just to be clear: you are saying that the linear Schrödinger equation does not apply to the detector and the spots that appear on it, correct? The linear Schrödinger equation seems to work fine for explaining how the interaction of the spin-1/2 particle with the inhomogeneous magnetic field splits one trajectory into two. But the spin-1/2 particle is not an isolated system in this interaction: the system also includes the magnetic field.
 
  • #38
charters said:
Hmm... I think I might get it. Let me try my own words again and see if you agree. The appearance of Copenhagen-style collapse is inextricably bound up in the physical construction of the device itself. We should think of the incident field as causing the device to transition from its (sort of metastable) "ready" configuration to one of its 2+ possible (sort of "ground state") "clicked" configurations, which represent inaccurate TI measurements (as opposed to Copenhagen projections).

But the key is that not all transitions to all arbitrary clicked configurations can be induced by an N=1 field. In particular, such a field cannot induce a transition to a clicked configuration where the device has clicked 2+ times at different cells of the device.

Is that the idea?
Yes. Which transitions are possible is constrained by conservation laws and by selection rules.
 
  • #39
A. Neumaier said:
Yes. Which transitions are possible is constrained by conservation laws and by selection rules.

So, it seems these detector transitions are restricted at the holistic level, requiring a top-down definition of the overall detector, which can be a highly nonlocal object in space and time.

Consider a detector made of multiple, widely separated components, such as arbitrarily many quad cell photodetectors, each at the end of a different arm of a Mach-Zehnder interferometer. How do photodetectors A and B (which can be kilometers or lightyears apart) know if and when they are part of the same overall MZI detector, such that their transitions have to be constrained by each other? How do they know if/when they are meant to act as one non-local detector?
 
  • #40
charters said:
So, it seems these detector transitions are restricted at the holistic level, requiring a top-down definition of the overall detector, which can be a highly nonlocal object in space and time.
Yes. In the thermal interpretation, the whole is more than its parts. In mathematical terms, a composite system has more independent beables (q-expectations) than the beables of its parts. This is a consequence of the formal apparatus of quantum mechanics, which the thermal interpretation does not change.
charters said:
Consider a detector made of multiple, widely separated components, such as arbitrarily many quad cell photodetectors, each at the end of a different arm of a Mach-Zehnder interferometer. How do photodetectors A and B (which can be kilometers or lightyears apart) know if and when they are part of the same overall MZI detector, such that their transitions have to be constrained by each other? How do they know if/when they are meant to act as one non-local detector?
I don't know how they know. This seems to be a secret of the creator of the universe.

But other interpretations of quantum mechanics also have no explanation for long-distance correlations violating Bell-type inequalities. One can only say that the dynamics assumed predicts these phenomena, and that Nature conforms to these predictions.
 
  • #41
A. Neumaier said:
But other interpretations of quantum mechanics also have no explanation for long-distance correlations

In Bohm, you have the pilot wave making the necessary trajectory corrections to the particle HVs. In MWI you have local decoherence and branching. In GRW you have the stochastic collapse mechanism. In superdeterminism, you have it all baked into the initial conditions. Retrocausal interpretations have the backwards-evolving state vector. I don't know what equivalent story you want to tell here, especially if the TI is meant to be non-random.

I also would note most of these other interpretations require the above stories specifically to deal with Bell violations. You appear to need this for even a basic MZI. It is sort of as if, even when the quanta are unentangled, the entire macroscopic world of all detectors is still highly entangled (and at long distances, to a stronger degree than implied by something like Reeh-Schlieder).
 
  • #42
charters said:
In Bohm, you have the pilot wave making the necessary trajectory corrections to the particle HVs. In MWI you have local decoherence and branching. In GRW you have the stochastic collapse mechanism. In superdeterminism, you have it all baked into the initial conditions. Retrocausal interpretations have the backwards-evolving state vector.
They have stories, not explanations. They all assume an unexplained nonlocal dynamics from the start.
charters said:
I don't know what equivalent story you want to tell here, esp if the TI is meant to be non-random.
The existence of multilocal q-expectations, which provide the potentially nonlocal correlations and evolve deterministically.
charters said:
I also would note most of these other interpretations require the above stories specifically to deal with Bell violations.
So does the thermal interpretation, but without artificial baggage (no additional micropositions, no multiple worlds, no postulated collapse, no causality violations). Though it is superdeterministic in the sense that every deterministic theory of the universe has its complete fate encoded in the initial condition. (I don't really know what the extra 'super-' is about.)
charters said:
You appear to need this for even a basic MZI.
No. If one does an MZI experiment with coherent light there are no correlations between the different detector results; each detector fires independently according to its own locally incident field intensity, and the observed coincidence statistics (no bunching or antibunching) come out correctly. The apparent nonlocality comes from looking at the five detectors only at the random times when one of them fires, and observing that at exactly these times no other detector fires. In fact, the collection of all five detectors responds with an all-zero result almost all of the time and occasionally with a single detector firing (in your setting); an exact coincidence has zero probability. Thus nothing nonlocal happens.
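A toy illustration of the statistics described above (my own sketch, with arbitrary made-up rates): independently firing detectors at low intensity almost never click in the same short time bin, so conditioning on the moments when one clicks makes the others look anticorrelated without any nonlocal mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
n_detectors, n_bins = 5, 1_000_000
p_fire = 1e-3  # per-bin click probability per detector (arbitrary low intensity)

# Each detector clicks independently, driven only by its local field intensity.
clicks = rng.random((n_bins, n_detectors)) < p_fire
per_bin = clicks.sum(axis=1)

print("no click:    ", np.mean(per_bin == 0))   # ~ 0.995
print("one click:   ", np.mean(per_bin == 1))   # ~ 0.005
print("coincidence: ", np.mean(per_bin >= 2))   # ~ 1e-5, vanishing at low intensity
```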
 
  • #43
A. Neumaier said:
(I don't really know what the extra 'super-' is about.)

The super in superdeterminism means that the interpretation is set up such that, for a generic choice of initial conditions, the standard equations/laws we use to make predictions will not work.

So, in this case, you explain the TI as relying on:

A. Neumaier said:
The existence of multilocal q-expectations, which provide the potentially nonlocal correlations and evolve deterministically.

But I can imagine different initial conditions with arbitrary/different multilocal q-expectations and therefore different non-local correlations between detector components. These detector correlations won't reproduce the correct experimental outcomes, e.g. in EPR experiments. It's only a special subclass of conceivable multilocal q-expectations which have to be assumed/baked into the initial conditions in order to reproduce QM.

This is not really anything to do with the TI in particular. It is just a consequence of Bell's theorem that any single world, deterministic interpretation will feature either a pilot wave+preferred foliation, retrocausality, or superdeterminism. Based on what you've written (and since you don't seem to adopt the former two concepts) superdeterminism seems like the choice the TI makes here.
 
  • Like
Likes eloheim
  • #44
charters said:
The super in superdeterminism means that the interpretation is set up such that, for a generic choice of initial conditions, the standard equations/laws we use to make predictions will not work.
But this is the case for any deterministic dynamics of a specific system. For a generic choice of initial conditions, Newton's law for our Solar system is not predictive. Would you therefore call Newton's mechanics superdeterministic?

On the other hand, the universe is a single system, so it has to be treated on a par with our Solar system.
charters said:
Based on what you've written (and since you don't seem to adopt the former two concepts) superdeterminism seems like the choice the TI makes here.
Sure, the TI is deterministic, and applies only for our single universe.

By the preceding it is superdeterministic in your sense, just like Newton's mechanics for our Solar system.
charters said:
But I can imagine different initial conditions with arbitrary/different multilocal q-expectations and therefore different non-local correlations between detector components. These detector correlations won't reproduce the correct experimental outcomes, e.g. in EPR experiments
But these detector and environment preparations would also not reproduce the actual detector and environment preparations needed to guarantee the correct performance of these experiments.

Thus TI is predictive without the need for assuming more about the initial conditions than is assumed in the analysis of the experiment.
 
  • #45
A. Neumaier said:
Would you therefore call Newton's mechanics superdeterministic?

Yes, in a limited sense. Newtonian mechanics does have to assume restriction to the set of initial conditions where nonrelativistic physics is valid. But this is not really something to worry about for emergent theories only valid in some restricted regime. In contrast, QM is claimed to be universal and fundamental, so if the validity of its equations/laws are claimed to be contingent on initial conditions in this way, a lot of people experience some heartburn and doubt.

A. Neumaier said:
But these detector and environment preparations would also not reproduce the actual detector and environment preparations needed to guarantee the correct performance of these experiments.

This is begging the question/assuming the superdeterminist methodology. The anti-superdeterminism worldview is that you can't look to outcomes to decide which initial conditions are valid.

I'm not really trying to say superdeterminism is an unacceptable philosophy. It doesn't work for me, but it does for many people smarter than me, most prominently 't Hooft. I guess I just wanted to highlight what I see as *the* major philosophical wedge issue/commitment in the TI, which doesn't get much attention in the papers.
 
  • #46
charters said:
In contrast, QM is claimed to be universal and fundamental, so if the validity of its equations/laws are claimed to be contingent on initial conditions in this way, a lot of people experience some heartburn and doubt.
Well, in the TI, the universal laws approximately follow from the law for the full universe, for all small subsystems of the universe that physicists find (by Nature or by special equipment, which is just human-manipulated Nature) prepared in the initial states they use to make successful predictions. To produce these approximations, the initial state of the universe is irrelevant; only the initial state of the subsystem and some general features of the universe known to be valid at the time of performing the experiment matter.

Thus no fine-tuning of the universe is needed beyond perhaps a low entropy state of the early universe. And even that might perhaps come about through coarse-graining.
charters said:
This is begging the question/assuming the superdeterminist methodology. The anti-superdeterminism worldview is that you can't look to outcomes to decide which initial conditions are valid.
I don't see the problem.

It is obvious that one can predict states of a subsystem of a big deterministic system only when the initial conditions of this subsystem actually have the values assumed for the prediction! One does not have to look at the outcomes but at the preparation!
 
  • #47
A. Neumaier said:
But this is the case for any deterministic dynamics of a specific system. For a generic choice of initial conditions, Newton's law for our Solar system is not predictive. Would you therefore call Newton's mechanics superdeterministic?

The distinction between deterministic and superdeterministic theories is basically in what can be considered "free variables". For example, in the EPR experiment, we have two experimenters, Alice and Bob, who choose what measurements to perform (so that's one source of variability) and then we have the experimental results themselves, which is another source of variability. In Bell's analysis of EPR, he treats Alice's and Bob's choices as "free variables", and considers the measurement results to be functions of those choices (plus the "hidden variable", which is another free variable). In contrast, if you consider Alice's and Bob's choices to be constrained so that there is a hidden relationship between the three variables--(1) Alice's choice, (2) Bob's choice, and (3) the hidden variable value--then Bell's analysis doesn't apply. You can certainly match the predictions of EPR with local hidden variables if you assume that Alice's and Bob's choices are predictable (or are determined by the hidden variable ##\lambda##). That loophole is the superdeterminism loophole.

It might seem at first that determinism implies superdeterminism. If Alice and Bob are described by deterministic laws, then their choices should be predictable, right? But they're not really the same. Alice might decide to make her choice based on some external event, such as whether she sees a supernova explosion in a certain region of the sky right before her measurement. Bob might decide to make his choice based on whether a basketball player makes his shot. Their choices can depend on absolutely anything. So in order for Alice's and Bob's choices to be reliably correlated, it's not enough that things be deterministic, but that the whole universe (or at least the part that is observable by Alice and Bob) be set up precisely in order to make that correlation. Such superdeterminism is not just a matter of having the future determined by current conditions (ordinary determinism), but would require that current conditions of the entire universe be fine-tuned.
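To make the role of the measurement choices concrete, here is a small sketch (my addition; standard textbook arithmetic, not from the post) comparing the singlet-state CHSH value against the local-hidden-variable bound of 2 that Bell's analysis yields when Alice's and Bob's choices are free variables:

```python
import numpy as np

def E(a, b):
    """Singlet-state correlator for spin measurements along angles a and b."""
    return -np.cos(a - b)

# Angle choices giving maximal violation: Alice uses (a1, a2), Bob uses (b1, b2)
a1, a2 = 0.0, np.pi / 2
b1, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2.828... = 2*sqrt(2) > 2, the local-hidden-variable bound
```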
 
  • #48
stevendaryl said:
The distinction between deterministic and superdeterministic theories is basically in what can be considered "free variables".
Would a Laplacian classical multiparticle universe in which observers (taken to be machines to avoid problems with consciousness) are also multiparticle systems be superdeterministic in this sense?
 
  • #49
A. Neumaier said:
Would a Laplacian classical multiparticle universe in which observers (taken to be machines to avoid problems with consciousness) are also multiparticle systems be superdeterministic in this sense?
Consider such a world where the gravitational potential is ##r^{-1+a}##, for some constant ##a > 0## let's say.

Then imagine that the initial state of the universe is such that your machines are "destined" to never obtain the accuracy or sufficient statistical certainty to confirm the ##a## correction and are thus "doomed" to believe gravity has a ##r^{-1}## potential. That would be superdeterminism.

In essence a deterministic world where the initial conditions never evolve into states corresponding to observers obtaining an accurate determination of the physical laws.
 
  • #50
A. Neumaier said:
Would a Laplacian classical multiparticle universe in which observers (taken to be machines to avoid problems with consciousness) are also multiparticle systems be superdeterministic in this sense?

As I said in my previous post, being deterministic does not imply being superdeterministic. Classical mechanics is not superdeterministic.
 
  • #51
A. Neumaier said:
I don't see the problem

Ok, let me try a different route. Consider a basic SG experiment with an N=1 beam. You claim the TI is deterministic. Accordingly, to encode this hidden determinism we should be able to write the state of the experiment *prior* to the detector click as

(|UP> + |DOWN>) ⊗ {up}

where the Dirac notation is the normal quantum state and {n} is the state of the hidden variable which deterministically predicts the click. In the TI, I believe {up} and {down} would represent different fine grained distinctions in the configuration of the detector itself (as opposed to BM, where it represents different configurations of the beam).

Do you agree with this description being faithful to the TI so far?
 
  • #52
charters said:
Ok, let me try a different route. Consider a basic SG experiment with an N=1 beam. You claim the TI is deterministic. Accordingly, to encode this hidden determinism we should be able to write the state of the experiment *prior* to the detector click as

(|UP> + |DOWN>) ⊗ {up}

where the Dirac notation is the normal quantum state and {n} is the state of the hidden variable. In the TI, I believe {up} and {down} would represent different fine grained distinctions in the configuration of the detector itself (as opposed to BM, where it represents different configurations of the beam).

Do you agree with this description being faithful to the TI so far?
No. The beables (hidden variables) are the collection of all q-expectations of the universe. Given a single spin prepared in a pure state ##\psi## we know at preparation time that, for any 3-vector ##p##, the quantity ##S(p):=p\cdot\sigma## of the spin satisfies ##\langle S(p)\otimes 1\rangle= \psi^*S(p)\psi##. In your case, this is the sum of the four entries of ##S(p)##. Your curly up and down correspond to pointer readings, i.e., functions of q-expectations (beables, hidden variables) of the detector, not to a state of the detector. Many states of the detector lead to identical pointer readings.

This is completely independent of the deterministic dynamics, which is the Ehrenfest dynamics of the universe.

In the most general case we know nothing more, unless we make assumptions of a similar kind about the environment, i.e., the state and the dynamics of the remainder of the universe and its interactions with the spin. These assumptions define a model for what it means that this environment contains a detector with a pointer or screen, that responds to the prepared spin in the way required to count as a measurement.

Thus you need to specify a complete model for the measurement process (including a Hamiltonian for the dynamics of the model universe) to conclude something definite. This is the reason why the arguments for analyzing measurement are either very lengthy (as in the AB&N paper) or only qualitative (as in my Part III).
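A minimal numerical rendering of the q-expectation formula in the first paragraph of this reply (my sketch; note that for the normalized superposition ##\psi=(1,1)^T/\sqrt 2## the value comes out as half the sum of the four entries of ##S(p)##):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def S(p):
    """Spin component along the 3-vector p: S(p) = p . sigma."""
    return p[0] * sx + p[1] * sy + p[2] * sz

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)  # up + down superposition
p = np.array([0.3, 0.5, 0.8])

lhs = np.real(psi.conj() @ S(p) @ psi)  # q-expectation <S(p)>
rhs = np.real(S(p).sum()) / 2           # half the sum of the four entries
print(lhs, rhs)                         # both 0.3
```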
 
  • #53
stevendaryl said:
As I said in my previous post, being deterministic does not imply being superdeterministic. Classical mechanics is not superdeterministic.
DarMM said:
In essence a deterministic world where the initial conditions never evolve into states corresponding to observers obtaining an accurate determination of the physical laws.
Do you have different notions of the meaning of superdeterministic?
 
  • #54
A. Neumaier said:
Your curly up and down correspond to pointer readings, i.e., functions of q-expectations (beables, hidden variables) of the detector, not to a state of the detector. Many states of the detector lead to identical pointer readings

Ok, it is possible I just don't get it, or you are talking about hidden variables in a way very different from what I am used to. But this may all be semantics around the use of the word "state", so I want to rephrase:

All I am trying to pin down is whether or not the hidden variable descriptions are such that, just before the measurement, all HV descriptions of the detector that will lead to an observable "up" reading (for a particular choice of axis) are completely disjoint/distinct from all the HV descriptions that will lead to an observable "down" reading?

In essence, would knowledge of the hidden variable description of the detector at t<1 allow me to perfectly predict the observed click at t=1?
 
  • #55
A. Neumaier said:
Do you have different notions of the meaning of superdeterministic?
I'm saying:
A superdeterministic world is a deterministic world where the initial conditions never evolve into states corresponding to observers obtaining an accurate determination of the physical laws.

A world can be deterministic without being superdeterministic if the initial conditions permit the development of observers who obtain accurate enough measurements to determine the laws of the world.

So for example in 't Hooft's model quantum mechanics is literally completely wrong. Not "approximately right but inaccurate in some remote regimes like the early universe", but literally completely wrong even in its predictions of, say, the Stern-Gerlach experiment. However the initial conditions of the world are such that experimental errors occur that make it look correct.
 
  • #56
charters said:
would knowledge of the hidden variable description of the detector at t<1 allow me to perfectly predict the observed click at t=1?
I don't think so, because in reading a discrete pointer there is a fuzzy decision boundary. This is like race conditions in computer science, which may delay decisions indefinitely. Thus there is a partition into 3 sets, one deciding for spin up, one deciding for spin down, and one for indecision; the third one having positive measure that goes to zero only as the duration of the measurement goes to infinity.

In experimental practice, this accounts for the limited efficiency of detectors.
 
  • #57
stevendaryl said:
As I said in my previous post, being deterministic does not imply being superdeterministic. Classical mechanics is not superdeterministic.
DarMM said:
A world can be deterministic without being superdeterministic if the initial conditions permit the development of observers who obtain accurate enough measurements to determine the laws of the world.
In a classical Laplacian universe, a Laplacian detector of finite size perfectly knowing its own state can never get an arbitrarily accurate estimate of a single particle state external to it. Thus a classical Laplacian universe would be superdeterministic. Do you mean that, @DarMM, thereby contradicting @stevendaryl?

If so, the thermal interpretation is also superdeterministic, for essentially the same reason.
 
  • #58
A. Neumaier said:
I don't think so, because in reading a discrete pointer there is a fuzzy decision boundary. This is like race conditions in computer science, which may delay decisions indefinitely. Thus there is a partition into 3 sets, one deciding for spin up, one deciding for spin down, and one for indecision; the third one having positive measure that goes to zero only as the duration of the measurement goes to infinity.

In experimental practice, this accounts for the limited efficiency of detectors.

I don't understand how this answer is consistent with what you wrote in III.4.2, specifically:

"These other variables therefore become hidden variables that would determine the stochastic elements in the reduced stochastic description, or the prediction errors in the reduced deterministic description. The hidden variables describe the unmodeled environment associated with the reduced description.6 Note that the same situation in the reduced description corresponds to a multitude of situations of the detailed description, hence each of its realizations belongs to different values of the hidden variables (the q-expectations in the environment), slightly causing the realizations to differ."

Would your answer be different had I phrased my question as:

would knowledge of the hidden variable description of the detector plus its local environment (e.g., the detector casing or surrounding air in the lab) at t<1 allow me to perfectly predict the observed click at t=1?
 
  • #59
charters said:
I don't understand how this answer is consistent with what you wrote in III.4.2, specifically:

"These other variables therefore become hidden variables that would determine the stochastic elements in the reduced stochastic description, or the prediction errors in the reduced deterministic description. The hidden variables describe the unmodeled environment associated with the reduced description. Note that the same situation in the reduced description corresponds to a multitude of situations of the detailed description, hence each of its realizations belongs to different values of the hidden variables (the q-expectations in the environment), slightly causing the realizations to differ."

Would your answer be different had I phrased my question as:

would knowledge of the hidden variable description of the detector plus its local environment (e.g., the detector casing or surrounding air in the lab) at t<1 allow me to perfectly predict the observed click at t=1?
No. You can take the detector to be the whole orthogonal complement of the measured system, and my answer is still the same. You can also take it to be just the pointer variable; all other beables of the universe are effectively hidden variables, no matter whether they are actually hidden. My first response was less focussed and ignored the race conditions since your question was less clear.

This is because of the nature of a real detection process (which is what is modeled in the thermal interpretation). There is a continuous pointer variable ##x## (a function of the beables = hidden variables = q-expectations, all of them continuous) of the detector that is initially at zero. Suppose that the pointer readings for a decision "up" are close to ##1##, those for "down" are close to ##-1##, and a reading counts as definite only if the sign and one bit of accuracy have persisted for more than a minimal duration ##\Delta t##. This defines the three response classes up, down, and undecided. At short times after the preparation, the detector has not had sufficient time to respond, and the third (undecided) set of conditions has measure essentially 1; the up and down measures are essentially zero. These measures are continuous functions of the observation time and gradually move to ##0,p,1-p##, but achieve these values only in the limit of infinite time.
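A toy simulation of this picture (entirely my own sketch, with an ad-hoc double-well dynamics and threshold standing in for the detector's slow modes and decision criterion): a continuous pointer variable drifts toward ##+1## or ##-1##, runs that have not settled past a threshold count as undecided, and the undecided fraction shrinks as the observation time grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def pointer_runs(n_runs=10_000, n_steps=400, dt=0.01, noise=0.4):
    """Noisy double-well relaxation dx = (x - x^3) dt + noise dW, pointer starts at 0."""
    x = np.zeros(n_runs)
    for _ in range(n_steps):
        x += (x - x**3) * dt + noise * np.sqrt(dt) * rng.standard_normal(n_runs)
    return x

for n_steps in (50, 400, 3000):
    x = pointer_runs(n_steps=n_steps)
    up, down = np.mean(x > 0.5), np.mean(x < -0.5)
    print(n_steps, "steps: up", up, "down", down, "undecided", round(1 - up - down, 4))
# The undecided fraction shrinks with observation time but reaches zero
# only in the infinite-time limit.
```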
 
  • #60
Ok I appreciate the details, but I don't think this is necessary for the heart of my question. Some finite time after the N=1 beam has become incident on the detector, the pointer is going to visibly have pointed towards 1 or -1. I am not concerned with how quickly this happens.

All I want to know is: would a full hidden variable/beable description of the detector/environment at some time before the beam is incident be sufficient to predict whether the detector eventually reads 1 or -1 (for any given beam)?

I take this to be the minimal definition of hidden variable determinism in quantum foundations, so if you say no to this, I don't understand how you claim the TI is deterministic (except in the classical limit where all interpretations are effectively deterministic) or has meaningful hidden variables. Hidden variables that don't make this sort of prediction are not really fulfilling their defined purpose.
 
  • #61
charters said:
the pointer is going to visibly have pointed towards 1 or -1. I am not concerned with how quickly this happens.
or continues to oscillate, or is stuck near zero, due to race conditions. If you ignore this, you ignore a loophole that makes a practical difference: real efficiency is never 100%, and a good model of a deterministic universe must predict this reduced efficiency!

charters said:
would a full hidden variable/beable description of the detector/environment at some time before the beam is incident be sufficient to predict whether the detector eventually reads 1 or -1 (for any given beam)?
It would, in all cases where a definite decision is reached, and it would predict when this is the case.
 
  • #62
A. Neumaier said:
It would, in all cases where a definite decision is reached, and it would predict when this is the case.

Ok perfect. So then something you need to explain is how, in an EPR experiment, the hidden variables describing the configuration of detector 1, and (when applicable) predicting the outcome of its measurement, are able to coordinate with the hidden variables doing the same for detector 2, such that Bell violations become possible.

The only known hidden variable solutions to this problem are

A) add a non-local pilot wave that can surgically adjust the local HVs as needed to create Bell violations, while using an absolute definition of simultaneity

B) superdeterminism, where the Bell violations are ultimately just the result of dumb luck in the initial conditions, and entanglement itself is just an illusion of this coincidence.

C) Retrocausality is an option too, but that's not quite a hidden variable approach per se.

But I also get the sense from earlier in the discussion you think it's okay to stop short of making this choice and therefore don't need to engage with their perceived downsides in the existing foundations literature. I don't agree, and I expect you'll have a hard time getting folks to adopt (or even know if they'd want to adopt) the TI while not clearly biting one of these bullets. So that's my main point.

(One extra point, for clarity: the idea that Bohmian mechanics has non-local hidden variables is not really accurate. What it has are local hidden variables that receive the benefit of non-local corrections via the pilot wave in order to permit Bell violations. And I don't see how a "non-local" hidden variables interpretation could be anything other than this.)
 
  • #63
charters said:
The only known hidden variable solutions to this problem are
Nothing in your arguments forbids that the thermal interpretation provides an additional, previously unknown way to achieve that. @DarMM gave in post #268 of the main thread on the thermal interpretation a nice summary of the thermal interpretation, where he addresses this in his point 4.
 
  • #64
A. Neumaier said:
Nothing in your arguments forbids that the thermal interpretation provides an additional, previously unknown way to achieve that. DarMM gave in post #268 of the main thread on the thermal interpretation a nice summary of the thermal interpretation, where he addresses this in his point 4.

I don't agree that what he calls correlator properties in that post can constitute consistent hidden-variable determinism.

A correlator is a conditional of the form: "when subsystem A takes value x, B takes y; when A takes y, B is x". By construction, it requires that there is some uncertainty in the local description of each subsystem.

If such a correlator description is complete, the hidden variable descriptions of each local detector will not be definite and so will not satisfy the determinism/predictability condition we just established.

In hidden variable determinism, a non-local HV description is only of the form: "detector A will measure UP and B will measure DOWN" which is of course consistent with the truth of "detector A will measure UP" on its own. There are no conditionals. And just like local HVs, these trivial non-local HVs will not violate Bell ineqs without adopting one of the previously discussed options.
 
  • #65
A. Neumaier said:
In a classical Laplacian universe, a Laplacian detector of finite size perfectly knowing its own state can never get an arbitrarily accurate estimate of a single particle state external to it. Thus a classical Laplacian universe would be superdeterministic. Do you mean that, @DarMM, thereby contradicting @stevendaryl?

If so, the thermal interpretation is also superdeterministic, for essentially the same reason.
No, it's not just about not being able to obtain total precision. It's more that the initial state is conspiratorial. Let me take a real world example.

There was a recent test of CHSH violations that used light from distant quasars to select the spin orientations.

In a superdeterministic world quantum mechanics is actually false, but the light from the quasars happens to always select the correct orientation to incorrectly give the impression the CHSH inequalities are violated.

So it's not just a lack of arbitrary accuracy; it's that the observers are determined to come to false conclusions about the physical laws that apply to their world.
 
  • #66
charters said:
will not violate Bell ineqs without adopting one of the previously discussed options.
Where is the theorem you refer to? I don't know of any theorem that has a pilot-wave statement, as in your post #62, among its necessary alternatives. There is a big difference between
charters said:
known hidden variable solutions to this problem
and necessary properties.

In any case, all interpretations have open research questions, and the thermal interpretation has these, too; some of these are discussed in post #293 of the main thread. No interpretation must indicate how it falls into a particular classification, though those interested in classifying interpretations may want to investigate these issues. Those interested in understanding quantum mechanics only need one plausible interpretation they can make sense of.
 
  • #67
DarMM said:
No, it's not just about not being able to obtain total precision. It's more that the initial state is conspiratorial. Let me take a real world example.

There was a recent test of CHSH violations that used light from distant quasars to select the spin orientations.

In a superdeterministic world quantum mechanics is actually false, but the light from the quasars happens to always select the correct orientation to incorrectly give the impression the CHSH inequalities are violated.

So it's not just a lack of arbitrary accuracy; it's that the observers are determined to come to false conclusions about the physical laws that apply to their world.
In this sense, the thermal interpretation is definitely not superdeterministic. Very coarse knowledge of the state of the universe at preparation time, together with some more details about the detector and how it works, is sufficient to predict with the traditional approximations everything known.
 
  • #68
A. Neumaier said:
Where is the theorem you refer to?

I don't think I need a theorem. I'm only listing the solutions I am aware of and agree are viable, but I don't mean to be closed off to alternatives I've never contemplated.

However, I would say the burden of proof is on the proponent of a new interpretation to convince readers they've indeed found such a viable alternative to the accepted approaches to HVs that works in light of Bell's theorem. It is not enough just to say you have non-local, deterministic HVs and be done with it. You need to elaborate on what exactly this means, especially when you claim it's emphatically not a pilot wave or superdeterministic.

A. Neumaier said:
In any case, all interpretations have open research questions, and the thermal interpretation has these, too; some of these are discussed in post #293 of the main thread. No interpretation must indicate how it falls into a particular classification

Sure, and I would expect this will be one of the particular open questions that folks who think a lot about foundations will want to see tackled in the TI context. This is more than just a sociological classification exercise. It speaks to what the ontology of the TI is, what it claims the universe is like.
 
  • #69
A. Neumaier said:
In this sense, the thermal interpretation is definitely not superdeterministic
Yes, I would have said the thermal interpretation is deterministic, but not superdeterministic.

charters said:
A correlator is a conditional of the form: "when subsystem A takes value x, B takes y; when A takes y, B is x". By construction, it requires that there is some uncertainty in the local description of each subsystem.

If such a correlator description is complete, the hidden variable descriptions of each local detector will not be definite and so will not satisfy the determinism/predictability condition we just established
Remember one of the main differences between the thermal interpretation and other views is really that probability theory itself is given a different interpretation.

In the Thermal Interpretation a correlator does not have the meaning you give. Rather it is a bilocal property, that is, a nonlocal property that requires measurements at two locations to ascertain. It has a fixed deterministic value.

However, it is the metastability of the slow modes of the devices at each location that causes them to develop discrete, inaccurate readings of this quantity. This inaccuracy requires one to use several measurements to determine the correlator.

So in this view it's not fundamentally a conditional and it doesn't require that there is (fundamental) uncertainty in each local device. That just arises as it normally does in the Thermal Interpretation.
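To make "bilocal property" concrete (my sketch; standard QM arithmetic, not from the post): the correlator is a single number attached to the pair, computable directly from the joint state rather than assembled from conditionals.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Singlet state of two spins: (|up,down> - |down,up>)/sqrt(2)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

# The bilocal quantity sigma_z (x) sigma_z: a single q-expectation of the pair
corr = np.real(singlet.conj() @ np.kron(sz, sz) @ singlet)
print(corr)  # -1.0: one fixed, deterministic value attached to the composite system
```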
 
  • #70
charters said:
It is not enough just to say you have non-local, deterministic HVs and be done with it. You need to elaborate on what exactly this means
It is enough to explain how this is compatible with long-distance entanglement experiments, and I did this in Part II of my series of papers.
 
