Is this popular description of entanglement correct?

In summary, this conversation discusses the popularized statement "If particle A is found to be spin-up, "we know that" particle B "has" spin-down." The speaker thinks this statement is not always accurate, because if the second particle is measured along a different axis, its spin is not fixed by the first particle's result.
  • #106
PeterDonis said:
...even though the actual underlying physical laws are completely different from quantum mechanics. In other words, the initial conditions were arranged so that we humans would be misled into inferring a completely wrong set of physical laws, which nevertheless make all the correct predictions about experimental results.

And those physical laws being different only with respect to Bell tests and the like. Apparently, the speed of light really is a constant with the observed value of c. And general relativity does not require humans to be misled, etc.
 
  • #107
Sunil said:
I store the seeds for some pseudorandom number generators in devices near A resp. B sufficiently isolated so that you can justify (with whatever means, not my problem) the independence assumption for this seed.
The independence assumption cannot be justified for the seed because our theory has long-range interactions. It can be theoretically justified in the non-interacting case (Newtonian mechanics). So, you cannot isolate the seed.

Sunil said:
I know, a little bit unfair to combine here two different parts of your argumentation. But you have no choice here: Either you acknowledge that there are ways to make sure that there is independence - then I will use these ways to construct a Bell theorem test where we can make sure that there is independence of the decisions of the experimenters from the initial state of the pair.
There is independence only in those theories without long-range interactions. I agree that Bell's theorem rules them out.

Sunil said:
Or you cannot do it, then my point is proven that with superdeterminism statistical experiments are dead.
1. Statistical experiments are possible with superdeterminism. Using computer simulations to test for many initial states is a valid method to do them (a sketch of this protocol follows below).

2. You can use independence where the variables you are looking for are not significantly impacted by long-range interactions. Newtonian mechanics is such an example, but also chemistry (EM interactions between distant molecules do not lead to a net energy transfer), biology and so on.
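
A minimal sketch of that simulation protocol, assuming a toy deterministic dynamics in place of a real field theory (the `evolve` map and the binary "detector" readout are stand-ins of my own, not part of any published model):

[CODE=python]
import random

def evolve(state, steps=500):
    """Toy deterministic evolution; a placeholder for the real equations of motion."""
    x, v = state
    for _ in range(steps):
        x, v = x + 0.01 * v, v - 0.01 * x  # harmonic-oscillator-like update
    return x, v

random.seed(0)
outcomes = []
for _ in range(5_000):                         # many randomly chosen initial states
    state = (random.uniform(-1, 1), random.uniform(-1, 1))
    x_final, _ = evolve(state)
    outcomes.append(1 if x_final > 0 else -1)  # binary "detector" outcome

print("mean outcome:", sum(outcomes) / len(outcomes))
[/CODE]

The point is only the protocol: draw initial states at random, evolve them deterministically, and read the statistics off the ensemble; no conspiracy enters anywhere.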

Sunil said:
And that's why there will be no correlation...
Can you provide any evidence for your claim that a complex system cannot lead to correlations?
Sunil said:
If you doubt, make such computations yourself, with the largest amount of what you can do...
Nice try to shift the burden of proof. It's your job to provide evidence for your assertions.

Sunil said:
You will not find any correlations. Except for cases where it is possible to explain them in a sufficiently simple way.
I'm looking forward to seeing your calculations.

Sunil said:
Just to clarify what we are arguing here about. Looks like you want to argue that superdeterminism can be somehow restricted for macroscopic bodies if they are in sufficiently stable states or so.
No. Electrons are as stable as billiard balls. But electrons do interact at a distance, while billiard balls don't (if you neglect gravity). The electrons inside billiard balls do interact, but because you have the same number of positive and negative charges this interaction does not manifest itself as a net force on the balls. So, if you are only interested in the position/velocity of the balls, you can assume they are independent for distant objects.

Sunil said:
Let's assume that is the case. Then I will build a device creating pseudorandom numbers out of such macroscopic pieces.
This does not work, because the independence only holds in a regime where Newtonian mechanics is a good approximation. The emission of EM waves is not described in such a regime.
 
  • #108
Lord Jestocost said:
The point of "Superdeterminism" is simple: The initial conditions of the Universe were arranged that way that all measurements performed were and are consistent with the predictions of quantum mechanics.
Can you please explain how you arrived at that conclusion starting from my explanation:

"We repeat the simulation for a large number of initial states so that we can get a statistically representative sample. The initial states are chosen randomly, so no conspiracy or fine-tuning is involved." ?

And what does the Big Bang have to do with this?
 
  • #109
DrChinese said:
Everything you mention is a gigantic hand wave. Basically: assume that my conclusion is correct, and that proves my conclusion is correct.
Can you point out exactly where I assumed what I wanted to prove?

DrChinese said:
1. In the 't Hooft reference, he does not derive QM in the 6 pages. And since there is no model presented, and no attempt to show why Bell does not apply, he certainly doesn't make any predictions.
He presents the derivation in the fourth reference:

Fast Vacuum Fluctuations and the Emergence of Quantum Mechanics
Found Phys 51, 63 (2021)
https://arxiv.org/pdf/2010.02019.pdf

On page 14 we find:

"The main result reported in this paper is that by adding many interactions of the form (4.1), the slow variables end up by being described by a fully quantum mechanical Hamiltonian Hslow that is a sum of the form (5.1)."

DrChinese said:
2. But I DO pick the settings! The question I want answered is HOW my choice is controlled. If something forces me to make the choice I do, what is it and most importantly... WHERE IS IT? Is it in an atom in my brain? Or a cell? Or a group of cells? And what if I choose to make a choice by way of my PC's random number generator? How did the computer know to give me a number that would lead to my choice?
Think about it this way. Before the experiment you are in some state. This state is not independent of the state of the particle source, since the whole system must obey Maxwell's equations or whatever equations the theory postulates. This restricts the possible initial states and, because the theory is deterministic, it also restricts your future decisions. The same constraints apply to your random number generator.

The hypothesis here is that the hidden variables that would violate QM are impossible to produce because there is no initial state that would lead to their generation. But I insist: I do not claim that this hypothesis is true, only that it can be true for some theories. So, we cannot dismiss them all; we need to check them one by one.
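
A toy illustration of this "restricted initial states" idea (entirely my own construction, with an arbitrary constraint standing in for the field equations):

[CODE=python]
# States are (source, experimenter) integer pairs.  A global consistency
# condition couples the two, and a deterministic rule maps each state to a
# later "decision".  Some decisions never occur: no allowed initial state
# evolves into them.
candidates = [(s, e) for s in range(10) for e in range(10)]
consistent = [(s, e) for s, e in candidates if (s + e) % 2 == 0]  # toy constraint

def decision(state):
    s, e = state
    return (s + e) % 4  # deterministic evolution to a "choice"

print(sorted({decision(c) for c in candidates}))  # [0, 1, 2, 3]
print(sorted({decision(c) for c in consistent}))  # [0, 2] -- some choices never happen
[/CODE]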

DrChinese said:
4. You wouldn't need to jump through "superdeterministic ad hoc" rules if Bell didn't exclude all local realistic theories.
He only excluded those without long-range interactions. If you disagree, please explain why the function I mention in post #93 necessarily has equal probabilities. If the probabilities are different, the "true" rate for that theory could have, in principle, any value.

DrChinese said:
Specifically, if the observed rate and the "true" rate both fell inside the classical range (>33% in my example) so that Bell Inequalities aren't violated.
By "classical" here you only include Newtonian mechanics. I am not aware of any calculation in the context of a long-range interacting theory, like classical EM, GR or fluid mechanics.

DrChinese said:
In case you missed it, "superdeterminsm" ONLY applies to Bell tests and the like. For all other physical laws (including the rest of QM), apparently the experimenter has completely free will.
Not at all. In most cases, nobody cares about the experimenter's free will. If an astronomer reports a supernova explosion, nobody cares if his decision to look in that direction was free. As long as he reports correctly what he observed it does not matter. The same is true for LIGO or LHC discoveries.

DrChinese said:
For example: tests of gravitational attraction, the speed of light, atomic and nuclear structures, etc.
You test gravitational attraction by looking at orbiting stars for example. Why should I care if the astronomer was free or not to look there? You measure the speed of light by bouncing a laser from the Moon. Why is the experimenter's freedom required?

DrChinese said:
BTW, the superdeterminism you are describing is contextual...
Yes, it is.

DrChinese said:
and therefore violates local realism.
It does not. You assume what you want to prove.

DrChinese said:
Maintaining local realism is the point of superdeterminism in the first place. So that's a big fail. In case this is not clear to you why this is so: the SD hypothesis is that the experimenter is forced to make a specific choice. Why should that be necessary?
It is required by the consistency conditions of the initial state. Impossible choices do not correspond to a valid initial state, so you cannot do them.

DrChinese said:
If the true rate was always 25% (violating the Bell constraint), then there would be no need to force the experimenter to make such a choice that complies - any setting choice would support SD.
Any physically possible choice supports SD. The choices that are not made are impossible because there is no valid initial state that could evolve into them.

DrChinese said:
Obviously, the true rate must be within the >33% region in my example to avoid contextuality issues.
Contextuality is what we want; there is no issue here.
 
  • #110
AndreiB said:
The independence assumption cannot be justified for the seed because our theory has long-range interactions. It can be theoretically justified in the non-interacting case (Newtonian mechanics). So, you cannot isolate the seed.
So once we have long-range interactions like gravity and EM, your "independence assumption" can never be applied in our world? OK, that means that in our world with superdeterminism, statistical science is dead, no?
AndreiB said:
1. Statistical experiments are possible with superdeterminism. Using computer simulations to test for many initial states is a valid method to do them.
Computer simulations are computer simulations, not experiments. All you can do with them is to clarify what a theory predicts. A falsification requires experiments.
AndreiB said:
2. You can use independence where the variables you are looking for are not significantly impacted by long-range interactions. Newtonian mechanics is such an example, but also chemistry (EM interactions between distant molecules do not lead to a net energy transfer), biology and so on.
The variables I have to look for in the Bell experiment are the positions of the macroscopic detectors. They have to be independent from the preparation of the pair. This is all I need.

I guarantee this by having the seeds in macroscopic form in isolated rooms near the detectors, thus, not significantly impacted nor impacting anything outside the room in the initial phase. The rooms are opened only a short moment before the measurement itself happens, so Einstein causality (which holds for EM as well as gravity, thus, all the known long range forces) prevents an influence on the other side.
AndreiB said:
Can you provide any evidence for your claim? That a complex system cannot lead to correlations?
Yes. Namely the success of science based on classical causality. Which includes the common cause principle that correlations require causal explanations. Causal explanations in existing human science are quite simple explanations, they don't require anything close to computing even ##10^9## particles. If complex systems regularly led to nontrivial correlations, there would be a lot of known violations of the common cause principle.
AndreiB said:
I'm looking forward to see your calculations.
I have no time and resources for meaningless computations which are known to give only trivial results, namely independence. Which anyway would prove nothing.
 
  • #111
PeterDonis said:
In other words, the initial conditions were arranged so that we humans would be misled into inferring a completely wrong set of physical laws, which nevertheless make all the correct predictions about experimental results.
Superdeterminism does not imply that QM is wrong; on the contrary, the whole point of SD is to reproduce QM. Why would you want to reproduce a wrong theory? Indeed, EPR proves that QM cannot be fundamental (if we want to avoid non-locality), but, as a statistical approximation, it is correct.
 
  • #112
Sunil said:
So once we have long-range interactions like gravity and EM, your "independence assumption" can never be applied in our world?
If those interactions are relevant for the variable of interest in the experiment, the independence assumption (IA) cannot be used. As repeatedly explained, we can ignore those forces in some situations but not in others.

Sunil said:
OK, that means that in our world with superdeterminism, statistical science is dead, no?
As proven by the simulation example, science is not dead.

Sunil said:
Computer simulations are computer simulations, not experiments. All you can do with them is to clarify what a theory predicts. A falsification requires experiments.
A computer simulation allows you to calculate the theoretical prediction for a specific test, like a Bell test. You compare that prediction with experiment in the normal way. You don't need the independence assumption to perform the experiment. You just do it.

Sunil said:
The variables I have to look for in the Bell experiment are the positions of the macroscopic detectors. They have to be independent from the preparation of the pair. This is all I need.
Indeed.

Sunil said:
I guarantee this by having the seeds in macroscopic form in isolated rooms near the detectors, thus, not significantly impacted nor impacting anything outside the room in the initial phase. The rooms are opened only a short moment before the measurement itself happens, so Einstein causality (which holds for EM as well as gravity, thus, all the known long range forces) prevents an influence on the other side.
Again, this does not work. Say you have 2 balls and 1 electron. The position/momenta of the 2 balls can be assumed to be independent (since they are well described by Newtonian mechanics and no relevant long-range interaction is taking place - unless they are in space and gravity must be taken into account).

The "instantaneous" position/momentum of the electron is not independent of the 2 balls, since the electron "feels" the EM fields associated with the atoms in the balls. It will be accelerated back and forth in a sort of Brownian motion. Averaged for a long enough time, the trajectory of the electron would resemble the non-interacting one, since the EM forces cancel out on average.

Our hidden variable, however, depends on the exact state of the electron at the moment it "takes the jump", so it will not be independent of your macroscopic balls.
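
A toy numerical illustration of this (my own construction, not a worked-out EM model): a particle is kicked back and forth by a rapid zero-mean force, so its time-averaged trajectory tracks the free one while its instantaneous velocity stays tied to the noise:

[CODE=python]
import random

random.seed(1)
dt, v_free = 0.01, 1.0
x, v = 0.0, v_free
for _ in range(100_000):
    kick = random.gauss(0.0, 1.0)     # rapid zero-mean "EM" kick
    v += (-(v - v_free) + kick) * dt  # instantaneous velocity jitters around v_free
    x += v * dt

print(f"actual x = {x:.1f}, free-motion x = {v_free * 100_000 * dt:.1f}")
[/CODE]

The two printed positions nearly coincide, yet at any instant v carries the imprint of the kick history, which is the analogue of the hidden variable above.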

Sunil said:
Causal explanations in existing human science are quite simple explanations, they don't require anything close to computing even ##10^9## particles.
1. Really, have you seen a computer model for the formation of a planetary system from a cloud of gas and dust? How many particles are there? Sure, the calculation involves approximations, but Nature does the job without them. So, clearly, a very complex system of many interacting particles can lead to simple correlations, like those specific for a planetary system (same direction/plane of orbit, a certain mass distribution and so on.)

2. The existing explanations are simple because of limitations in computation power. Clearly, the more objects you include, the better the simulation approaches reality, not worse, as you imply. Meteorologists would love to have ##10^{26}## data points and computers powerful enough to do the calculations. They are not restricted to 1 point/km because causality stops working at higher resolutions.

Sunil said:
If complex systems regularly led to nontrivial correlations, there would be a lot of known violations of the common cause principle.
Why?

Sunil said:
I have no time and resources for meaningless computations which are known to give only trivial results...
Evidence please? (evidence for the triviality of the results, not for your lack of time)
 
  • #113
AndreiB said:
If those interactions are relevant for the variable of interest in the experiment, the independence assumption (IA) cannot be used. As repeatedly explained, we can ignore those forces in some situations but not in others.
For the variables of the first part of the experiment, the boxes containing the seeds are in no way relevant. There is a preparation of the state of photons, thus, no charge and no relevant gravity. The boxes are isolated.

In the second part, we use Einstein causality to show the irrelevance of the open boxes for the other device.

AndreiB said:
A computer simulation allows you to calculate the theoretical prediction for a specific test, like a Bell test. You compare that prediction with experiment in the normal way. You don't need the independence assumption to perform the experiment. You just do it.
You also need probability assumptions for the computer computation, say, for the choice of your initial values. And you need the independence assumption from everything else. Your ##10^{26}## particles are, last but not least, only a minor part of the universe.

By the way, your thought experiment simulation does not test superdeterminism. It simply tests a particular variant of usual theory, which assumes some distribution of the initial values, some equations of motion. Why you think it has any relation to superdeterminism is not clear.
AndreiB said:
Again, this does not work. Say you have 2 balls and 1 electron. The position/momenta of the 2 balls can be assumed to be independent (since they are well described by Newtonian mechanics and no relevant long-range interaction is taking place - unless they are in space and gravity must be taken into account).

The "instantaneous" position/momentum of the electron is not independent of the 2 balls, since the electron "feels" the EM fields associated with the atoms in the balls.
I will see how you will handle photons (which are used in most Bell tests). But my actual impression is that you will find another excuse for not allowing the Bell tests.
AndreiB said:
1. Really, have you seen a computer model for the formation of a planetary system from a cloud of gas and dust? How many particles are there?
As many as they were able to handle. I don't know and don't care. But without that computer simulation science would be as fine as it is today.

AndreiB said:
Sure, the calculation involves approximations, but Nature does the job without them. So, clearly, a very complex system of many interacting particles can lead to simple correlations, like those specific for a planetary system (same direction/plane of orbit, a certain mass distribution and so on.)
Simple correlations which have simple explanations.
AndreiB said:
2. The existing explanations are simple because of limitations in computation power. Clearly, the more objects you include, the better the simulation approaches reality, not worse, as you imply.
I don't imply this.
Sunil said:
If complex systems regularly led to nontrivial correlations, there would be a lot of known violations of the common cause principle.
AndreiB said:
Why?
Because each correlation between some preparation and later human decisions would be a correlation without causal explanation, thus, a clear violation of the common cause principle. If such a correlation were observed, people would not ignore it, but would try very hard to get rid of it. With superdeterminism being correct and able to do what you claim - to lead to violations of the Bell inequalities in essentially all Bell tests - they would be unable to get rid of the correlation. (As they try hard to improve Bell tests.)

AndreiB said:
Evidence please? (evidence for the triviality of the results, not for your lack of time)
Learn to read, I have already given it.
 
  • #114
AndreiB said:
In medicine we have the problem that a lot of phenomena are not understood. The placebo effect simply means that the psychological state of the patient matters. Even if you cannot eliminate this aspect completely you could investigate the reason behind the effect and take that reason (say the presence of some chemicals in the brain) into account. Of course, this makes research harder, but it is also of a greater quality since you get a deeper understanding of the drug's action. In any case, it's not useless.
Independent of whether you can (or cannot) simply explain the origin of the placebo effect, the important realization is that you can experimentally verify its existence. And this possibility to experimentally verify the violation of potentially unjustified independence assumptions is how Sabine Hossenfelder in 2011 came to seriously consider superdeterminism.
Once you have experimentally established the presence of an effect, it certainly makes sense to investigate the reason behind the effect.

And there is also another side of this coin: If you have a superdeterministic model, and it predicts that you have some chance to experimentally verify the presence of the violation of the independence assumption, then you are in a totally different situation than for 't Hooft's models. Understanding his points about high-energy degrees of freedom might be worthwhile nevertheless, but not as a way to defend the possibility of superdeterminism. (And if you have an apparently superdeterministic model, but the reasons why it doesn't allow one to experimentally verify the presence of the violation of the independence assumption are more subtle and more consistent than for 't Hooft's models, then there is also the possibility that it is not really a superdeterministic model after all.)
 
  • #115
Sunil said:
For the variables of the first part of the experiment, the boxes containing the seeds are in no way relevant. There is a preparation of the state of photons, thus, no charge and no relevant gravity. The boxes are isolated.
In order to prepare the photons (classically EM waves) you need an electron to accelerate. The polarizations of those waves depend on the way the electron accelerated, itself dependent on the EM fields at that location. The fields are correlated with the global charge distribution, including your boxes. The boxes cannot be isolated. They are, after all, just large groups of charges, each contributing to the EM fields in which the experiment unfolds.

Sunil said:
You also need probability assumptions for the computer computation, say, for the choice of your initial values.
Indeed.

Sunil said:
And you need the independence assumption from everything else. Your ##10^{26}## particles are, last but not least, only a minor part of the universe.
This might be a problem, indeed. I see two ways out:

1. One might prove mathematically that beyond a certain N the statistics remain stable, so we can compute the prediction using the minimum number of particles that could model the experiment (a sketch of such a stability check follows after this list).
2. Since EM does not depend on scale you can test for Bell violations using macroscopic charges. For the reason presented earlier, these would be independent from the rest of the universe. I think this is actually doable in practice.
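
A sketch of what the stability check in point 1 could look like (the `bell_statistic` function is a pure placeholder of mine, not a real field-theory calculation): compute the prediction for growing N and see whether it settles.

[CODE=python]
import random

def bell_statistic(n_particles, n_trials=2_000, seed=0):
    """Placeholder for the model's predicted correlation at system size N."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        spins = [rng.choice([-1, 1]) for _ in range(n_particles)]
        total += spins[0] * spins[-1]  # stand-in two-detector statistic
    return total / n_trials

# if the statistic settles as N grows, the smallest stable N suffices
for n in (10, 100, 1_000):
    print(n, bell_statistic(n))
[/CODE]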

Sunil said:
By the way, your thought experiment simulation does not test superdeterminism. It simply tests a particular variant of usual theory, which assumes some distribution of the initial values, some equations of motion. Why you think it has any relation to superdeterminism is not clear.
It's because I think that any usual theory with long-range interactions is potentially superdeterministic. There is no particular "superdeterministic" assumption. Just take a deterministic theory with long-range and local forces (hence a field theory) and see what you get.

Sunil said:
I will see how you will handle photons (which are used in most Bell tests). But my actual impression is that you will find another excuse for not allowing the Bell tests.
Not at all, see above!

Sunil said:
Simple correlations which have simple explanations.
They are only simple because we developed statistical tools to deal with large numbers of particles. It is still the case that large numbers of interacting particles can lead to observable correlations. If large numbers of particles could "cooperate" to produce planetary systems, why would they not be able to also produce entangled states?

Sunil said:
Because each correlation between some preparation and later human decisions would be a correlation without causal explanation, thus, a clear violation of the common cause principle.
Just because you do not know the explanation does not mean there is none, so there is no violation of the common cause principle.

Sunil said:
If such a correlation were observed, people would not ignore it, but would try very hard to get rid of it. With superdeterminism being correct and able to do what you claim - to lead to violations of the Bell inequalities in essentially all Bell tests - they would be unable to get rid of the correlation. (As they try hard to improve Bell tests.)
With SD they need not get rid of those correlations, since SD would provide the cause they are searching for. The SD explanation for a Bell test is of the same type as the explanation for why planets follow elliptical orbits. It's the N-body EM equivalent of the 2-body planet-star gravitational system.
 
  • #116
AndreiB said:
Since EM does not depend on scale you can test for Bell violations using macroscopic charges. For the reason presented earlier, these would be independent from the rest of the universe. I think this is actually doable in practice.
What could this possibly mean? It is rather obvious that you have never studied any of Aspect et al.'s papers.
It is not sufficient for your utterings to sound logical. They should also make sense.
 
  • #117
Sunil said:
I will see how you will handle photons (which are used in most Bell tests). But my actual impression is that you will find another excuse for not allowing the Bell tests.

AndreiB said:
Not at all, see above!
Above I have found this:
AndreiB said:
In order to prepare the photons (classicaly EM waves) you need an electron to accelerate. The polarizations of those waves depend on the way the electron accelerated, itself dependent on the EM fields at that location. The fields are correlated with the global charge distribution, including your boxes. The boxes cannot be isolated. They are, after all, just large groups of charges, each contributing to the EM fields in which the experiment unfolds.
An excuse for not allowing the use of your independence assumption in Bell tests.

But let's look at the next excuse which you have to present for the experiment where the directions of the detectors are defined by starlight arriving shortly before the measurement from the other side than the particle measured at that detector. There was a real experiment with this. Instead of starlight, I would prefer CMBR radiation coming from this other side. So, the event which has created these photons has not been in the past light cone of the preparation of the pair.

BTW, if there is a singularity in the past - and according to GR without inflation, as well as to GR with inflation caused by a change of the vacuum state, there has to be a singularity - then there is a well-defined and finite horizon of events which have a common event in the past with us. This horizon can be easily computed in GR, and in the BB without inflation it was quite small, so that the inhomogeneities visible in the CMBR were greater than this horizon size. This problem was named the "horizon problem". Inflation solves it FAPP by making the horizon greater than what we see in the CMBR. But it does not change the fact that those events we see in the CMBR coming from opposite sides are causally influenced by causes farther away in those directions, and all we have to do is go far enough away searching for those causes that we end up with causes in the opposite directions which have nothing in their common past. So, each of the two causes can influence (if Einstein causality holds) only one of the detectors, and not the preparation procedure.

But once I can modify, by modifying this external cause, only one detector setting, I have independent control over one detector setting.

As before, I'm sure you will find an excuse.
AndreiB said:
1. One might prove mathematically that beyond a certain N the statistics remain stable, so we can compute the prediction using the minimum number of particles that could model the experiment.
This is what normal science, with the rejection of superdeterminism, is assuming. The statistics remain stable, namely the interesting variables which do not have sufficiently simple causal explanations for their correlations will remain independent. Except that nobody hopes that one can prove this mathematically for a completely general situation. But for various pseudorandom number generators such independence proofs are known. I even remember having seen the proof for the sequence of digits of ##\pi##.
AndreiB said:
2. Since EM does not depend on scale you can test for Bell violations using macroscopic charges. For the reason presented earlier, these would be independent from the rest of the universe. I think this is actually doable in practice.
Ah, I see, this is what you have meant with "above". Nice trick, given that (I think) you know that it is quite difficult to prepare entangled states for macroscopic bodies in a stable way.
AndreiB said:
It's because I think that any usual theory with long-range interactions is potentially superdeterministic.
So your claim that superdeterministic theories may be falsifiable is bogus. Thinking about such a potentiality does not make the theory superdeterministic.
AndreiB said:
There is no particular "superdeterministic" assumption. Just take a deterministic theory with long-range and local forces (hence a field theory) and see what you get.
You get what usual science assumes - independence if there is no causal justification for a dependence.

This independence assumption (the zero hypothesis) is clearly empirically falsifiable, and if it is falsified, then usual science starts to look for causal explanations. And usually finds it. ("Usually" because this requires time, so that one has to expect that there will always be cases where the search was not yet successful.)
AndreiB said:
It is still the case that large numbers of interacting particles can lead to observable correlations.
This is about as probable as all the atoms of a gas concentrating themselves in one small part of the bottle.

This could be, in fact, another argument: I would expect states with correlations to have lower entropy than states with zero correlations. Indeed, if the gas concentrates in the upper left corner of the bottle, there will be correlations between the up-down and left-right directions, while in the homogeneous gas distribution there will be none. (Except, of course, for those shapes of the bottle where the shape alone already leads to a correlation even for the homogeneous distribution.)
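
This is easy to check numerically. A quick Monte Carlo comparing a homogeneous gas in a unit square against one squeezed into a corner (the cut `x + y > 1.6` is an arbitrary choice of mine):

[CODE=python]
import random

def xy_correlation(points):
    """Pearson correlation between the left-right (x) and up-down (y) coordinates."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cov = sum((x - mx) * (y - my) for x, y in points) / n
    sx = (sum((x - mx) ** 2 for x, _ in points) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in points) / n) ** 0.5
    return cov / (sx * sy)

rng = random.Random(0)
homogeneous = [(rng.random(), rng.random()) for _ in range(50_000)]
corner = [p for p in ((rng.random(), rng.random()) for _ in range(500_000))
          if p[0] + p[1] > 1.6]  # gas concentrated in one corner

print("homogeneous:", xy_correlation(homogeneous))  # close to 0
print("corner:     ", xy_correlation(corner))       # markedly negative
[/CODE]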
AndreiB said:
If large number of particles could "cooperate" to produce planetary systems, why would they not be able to also produce entangled states?
The cooperation for planetary systems is already predicted by very rough approximations, and this prediction does not change even if we make rough approximation errors. But if we add white noise to variables with correlations, the correlations decrease.
AndreiB said:
Just because you do not know the explanation does not mean there is none, so there is no violation of the common cause principle.
False logic. If I don't know the explanation, there may be one. But it is just as possible that there is none, thus, a violation of the common cause principle. Your "so there is no" obviously does not follow.

We have the large experience of humankind with the successful application of the common cause principle. Essentially everybody identifies correlations in everyday life and then tries to find explanations. This fails if the correlations are not real, but statistical errors. But many times causal explanations will be found. If it were violated in reality, it would have been detected long ago.
AndreiB said:
With SD they need not get rid of those correlations, since SD would provide the cause they are searching for.
Which is a euphemistic reformulation of my thesis that SD would be the end of science. There would no longer be any need to search for causal explanations of correlations.
 
  • #118
AndreiB said:
1. Can you point out exactly where I assumed what I wanted to prove? ... He presents the derivation in the fourth reference:

Fast Vacuum Fluctuations and the Emergence of Quantum Mechanics
Found Phys 51, 63 (2021)
https://arxiv.org/pdf/2010.02019.pdf

2. Think about it this way. Before the experiment you are in some state. This state is not independent of the state of the particle source, since the whole system must obey Maxwell's equations or whatever equations the theory postulates. This restricts the possible initial states and, because the theory is deterministic, it also restricts your future decisions. The same constraints apply to your random number generator.

The hypothesis here is that the hidden variables that would violate QM are impossible to produce because there is no initial state that would lead to their generation. But I insist: I do not claim that this hypothesis is true, only that it can be true for some theories. So, we cannot dismiss them all; we need to check them one by one.

3. Not at all. In most cases, nobody cares about the experimenter's free will. If an astronomer reports a supernova explosion, nobody cares if his decision to look in that direction was free. As long as he reports correctly what he observed it does not matter. The same is true for LIGO or LHC discoveries.

You test gravitational attraction by looking at orbiting stars for example. Why should I care if the astronomer was free or not to look there? You measure the speed of light by bouncing a laser from the Moon. Why is the experimenter's freedom required?

4. [Superdeterminism is contextual] Yes, it is.

1. Why yes I can! Note that for 't Hooft's reference, he is quoting... himself! (Just as you seem to do.) And he claims this to be a derivation of QM. Wow, who knew you could do that! And from his reference, which is about "fast" variables (which I am not aware of as part of any standard model), he tells us:

"...we first assume the existence of very high frequency oscillations. These give rise to energy levels way beyond the regime of the Standard Model."

How about we assume the existence of very small turtles? "It's turtles all the way down..." Or how about we assume the universe was created last Thursday, and our memories of an earlier existence are false (Last Thursdayism). Basically: you can't reference another author referencing himself with a completely speculative set of hypotheses/assumptions that dismiss Bell, and then say "look what I proved".

2. I accept that any model, deterministic or not, has a restricted number of initial states. What is missing (among other things) is i) a causal connection between a) the entangled particle source(s) and b) the many apparati determining the measurement context; and ii) a viable description of the mechanism of how that causal connection between a) and b) coordinates to obey the quantum mechanical expectation values locally.

An important note about the sources of entangled particles per a) above. The entangled particles can come from fully independent laser sources, without having ever been present in the same light cone. Now, I am perfectly aware that within a purported Superdeterministic model, everything in the observable universe lies in a common light cone and therefore would be "eligible" to participate in the anti-Bell conspiracy. But now you would be arguing:

There exist particular photons, from 2 different lasers*, that also end up being measured at the proper angles (context) for a Bell Inequality violation, but only when statistically averaged (even though there is complete predetermination of each individual case); and "somehow" these two photons each carry a marker of some kind (remember, they have never been in causal contact, so each laser source must have passed on this marker to its photon at the time it was created) so that it "knows" whether to be up or down - but only in the specific context of a measurement setting that can be changed midflight according to a pseudo-random generator - which can itself involve any number of human brains and/or computers.

So, where did any of this get explained other than by a general purpose "suppose that"? Because there is no known physics - EM, QM or otherwise - that could serve as a base mechanism for any of the above to support the wild assertions involved in SD.

3. I wasn't referring to free will in the mentioned measurements of c, gravitation, or any other constant. I was referring to the fact that the ONLY scenario where Superdeterminism is a factor is in Bell tests. Apparently, the universe is just fine at revealing its true nature without a need to "conspire" at everything else.

Imagine that we are measuring the mean lifetime of a free neutron as being 880 seconds. But then you tell me it's really 333 seconds. Your explanation is: It's just that the sample was a result of initial settings, and those initial settings led to an unfair sample. And by the way, that unfair sample always gives the same results: 880 seconds instead of 333 seconds. By analogy, that is much like the SD hypothesis that the local realistic value of my Bell test example must be at least .333, although the observed value is .250. Why do you need a conspiracy to explain the results of one scientific test, but no others?

4. Glad you agree. A contextual theory is not realistic. According to EPR ("elements of reality"), there must be counterfactual values for all elements of reality that exist, regardless of whether or not you can measure them simultaneously.

*See for example:
High-fidelity entanglement swapping with fully independent sources
https://arxiv.org/abs/0809.3991
 
  • #119
AndreiB said:
Superdeterminism does not imply that QM is wrong
It does on any interpretation of QM except one that views QM as just a statistical model over some underlying deterministic physics, where the statistics and probabilities have the standard classical ignorance interpretation. The latter interpretation of QM seems to be one of the least popular ones.
 
  • #120
DrChinese said:
A contextual theory is not realistic. According to EPR ("elements of reality"), there must be counterfactual values for all elements of reality that exist, regardless of whether or not you can measure them simultaneously.
I disagree. Bohmian and other realistic interpretations of QM, given that they reproduce the QM predictions, are necessarily contextual by the Kochen-Specker theorem.

And this can also be seen explicitly, given that the trajectories of the system are influenced by the trajectories of the measurement devices.
 
  • #121
Sunil said:
An excuse for not allowing the use of your independence assumption in Bell tests.
Either my explanation is true or it is not. I think the word "excuse" here is used to avoid accepting that my explanation is perfectly valid.

Sunil said:
But let's look at the next excuse which you have to present for the experiment where the directions of the detectors are defined by starlight arriving shortly before the measurement from the other side than the particle measured at that detector. There was a real experiment with this. Instead of starlight, I would prefer CMBR radiation coming from this other side. So, the event which has created these photons has not been in the past light cone of the preparation of the pair.
1. The model I proposed is in terms of classical EM. The Big Bang, the inflation period, and all that cannot be described in terms of this model. So, let's stay in a regime where this model makes sense.

2. I agree that if you can prove that "the event which has created these photons has not been in the past light cone of the preparation of the pair" SD is dead. The question is, can you?

Sunil said:
BTW, if there is a singularity in the past - and according to GR without inflation, as well as to GR with inflation caused by a change of the vacuum state, there has to be a singularity - then there is a well-defined and finite horizon of events which have a common event in the past with us. This horizon can be easily computed in GR, and in the BB without inflation it was quite small, so that the inhomogeneities visible in the CMBR were greater than this horizon size.
As far as I know there is no theory at this time that is capable of describing the Big Bang. So all this is pure speculation.

Sunil said:
This problem was named the "horizon problem". Inflation solves it FAPP by making the horizon greater than what we see in the CMBR. But it does not change the fact that those events we see in the CMBR coming from opposite sides are causally influenced by causes farther away in those directions, and all we have to do is go far enough away searching for those causes that we end up with causes in the opposite directions which have nothing in their common past. So, each of the two causes can influence (if Einstein causality holds) only one of the detectors, and not the preparation procedure.
Can you please specify the conditions at the Big Bang? Was the Big Bang a deterministic process or not? If it is described by GR it should be, right? Did correlations exist in the pre-Big-Bang state? What evidence do we have for that?

Sunil said:
As before, I'm sure you will find an excuse.
I am not going to accept a bunch of assumptions with no evidence behind them. If you present a coherent theory of the Big Bang I'll look into it and see if an "excuse" is to be found.

Sunil said:
This is what normal science, with the rejection of superdeterminism, is assuming.
I don't think so. "Normal science" uses a certain model. The conclusions only apply if the model is apt for the experiment under investigation. For example, the kinetic theory of gases applies to an ideal gas. It works when the system is well approximated by such a model. If your gas is far from ideal you don't stubbornly insist on this model, you change it. In the case of Bell's theorem the model is Newtonian mechanics with contact forces only. Such a model is inappropriate for describing EM phenomena, even classical EM phenomena like induction. So, it is no wonder that the model fails to reproduce QM.
Sunil said:
The statistics remain stable, namely the interesting variables which do not have sufficiently simple causal explanations for their correlations will remain independent.
"Sufficiently simple" is a loaded term. And i didn't claim that the statistics does not remain stable in this case, I just don't know. If you are right and the function behaves well, great. We will be able to compute the classical EM prediction for a Bell test. When such computation is done we will see if it turns out right or wrong.
Sunil said:
But for various pseudorandom number generators such independence proofs are known. I even remember having seen the proof for the sequence of digits of ##\pi##.
I don't get your point about ##\pi##. Clearly, the digits of ##\pi## are not independent since they are determined by a quite simple algorithm. Two machines calculating ##\pi## would be perfectly correlated.

Sunil said:
Ah, I see, this is what you have meant with "above". Nice trick, given that (I think) you know that it is quite difficult to prepare entangled states for macroscopic bodies in a stable way.
IF Bell correlations are caused by long-range interactions between the experimental parts, one should be able to prepare macroscopic entangled states. I am not aware of any attempt to do so.
Sunil said:
So your claim that superdeterministic theories may be falsifiable is bogus. Thinking about such a potentiality does not make the theory superdeterministic.

Clearly, all theories with long-range interactions are falsifiable. Classical EM, GR, fluid mechanics have been tested a lot. I have laid out my argument why such theories could be superdeterministic. We will know that when the function is computed. Until then you cannot rule them out.

Sunil said:
You get what usual science assumes - independence if there is no causal justification for a dependence.
But there is a causal justification. There is a long-range interaction involved that determines the hidden variable. "Normal" science does not assume independence when this is the case.

Sunil said:
This independence assumption (the zero hypothesis) is clearly empirically falsifiable, and if it is falsified, then usual science starts to look for causal explanations. And usually finds it. ("Usually" because this requires time, so that one has to expect that there will always be cases where the search was not yet successful.)
As far as I can say, the independence assumption was falsified by Bell tests + the EPR argument. Locality can only be maintained if the independence assumption fails. And no violation of locality was ever witnessed in "normal science", right?

Sunil said:
This is about as probable as all the atoms of a gas concentrating themselves in one small part of the bottle.
And this spontaneously happens when the non-interaction assumption (approximately true for a gas far from its boiling point) fails when the gas is cooled. Exactly my point.

Sunil said:
The cooperation for planetary systems is already predicted by very rough approximations...
There was a time when no suitable model existed and no such approximations were possible. We could very well be at this stage with entanglement.

Sunil said:
False logic. If I don't know the explanation, there may be one. But it is just as possible that there is none, thus, a violation of the common cause principle. Your "so there is no" obviously does not follow.
There is nothing wrong with my logic. If you can't prove a violation (and you can't) you cannot just assume one.

Sunil said:
We have the large experience of humankind with the successful application of the common cause principle. Essentially everybody identifies correlations in everyday life and then tries to find explanations. This fails if the correlations are not real, but statistical errors. But many times causal explanations will be found. If it were violated in reality, it would have been detected long ago.
I agree. This is why I find SD a natural choice. It explains the correlations in terms of past causes. The other possible explanation is non-locality, a behavior which was never witnessed.

I think you forget the really important point that without SD you have non-locality. Your arguments based on what "normal science" assumes or not do not work for this scenario, since, when you factor in the strong evidence for locality, the initial probability for a violation of the statistical independence is increased many orders of magnitude.
 
  • #122
DrChinese said:
Note that for 't Hooft's reference, he is quoting... himself! (Just as you seem to do.) And he claims this to be a derivation of QM.
Yes, he is quoting himself because he invented the model. What's wrong with that?

DrChinese said:
And from his reference, which is about "fast" variables (which I am not aware of as part of any standard model), he tells us:

"...we first assume the existence of very high frequency oscillations. These give rise to energy levels way beyond the regime of the Standard Model."

How about we assume the existence of very small turtles? "It's turtles all the way down..." Or how about we assume the universe was created last Thursday, and our memories of an earlier existence are false (Last Thursdayism).
't Hooft's model is an existence proof that local, deterministic theories could reproduce QM. I did not claim that his model is a true replacement for the Standard Model.

DrChinese said:
Basically: you can't reference another author referencing himself with a completely speculative set of hypotheses/assumptions that dismiss Bell, and then say "look what I proved".
Why not? When I see your rebuttal published, I'll change my mind.

DrChinese said:
2. I accept that any model, deterministic or not, has a restricted number of initial states. What is missing (among other things) is i) a causal connection between a) the entangled particle source(s) and b) the many apparati determining the measurement context;
Not "any" model. In Newtonian mechanics with contact forces you can arrange the initial state in any way you want. There is no rule that says that given the position and velocity of particle 1 you need to restrict the position and/or velocity of particle 2 in any way. Not so in field theories. In classical EM you can, like in the Newtonian mechanical case arrange the initial positions/velocities in any way you want. But you can't do that for the fields. The fields at particle 1 are uniquely determined by the global distribution/momenta of charges and those fields will determine how particle 1 moves (via Lorentz force). Since Bell's theorem disregards this constraint (in the form of independence assumption) one cannot rely on the valability of the so-called "classical" prediction in this case.

DrChinese said:
and ii) a viable description of the mechanism of how that causal connection between a) and b) coordinates to obey the quantum mechanical expectation values locally.
The only mechanism is the restricted number of the initial states. One simply needs to calculate the prediction of the theory while taking into account that restriction.

DrChinese said:
An important note about the sources of entangled particles per a) above. The entangled particles can come from fully independent laser sources, without having ever been present in the same light cone. Now, I am perfectly aware that within a purported Superdeterministic model, everything in the observable universe lies in a common light cone and therefore would be "eligible" to participate in the anti-Bell conspiracy. But now you would be arguing:

There exist particular photons, from 2 different lasers*, that also end up being measured at the proper angles (context) for a Bell Inequality violation, but only when statistically averaged (even though there is complete predetermination of each individual case); and "somehow" these two photons each carry a marker of some kind (remember, they have never been in causal contact, so each laser source must have passed on this marker to its photon at the time it was created) so that it "knows" whether to be up or down - but only in the specific context of a measurement setting that can be changed midflight according to a pseudo-random generator - which can itself involve any number of human brains and/or computers.

So, where did any of this get explained other than by a general purpose "suppose that"? Because there is no known physics - EM, QM or otherwise - that could serve as a base mechanism for any of the above to support the wild assertions involved in SD.
The restricted number of initial states applies for any system. It could be the whole universe if you like, but then you could not make the required calculations. Making the experiment bigger and more complex does not change the qualitative aspect that some states cannot be prepared because there is no initial state that evolves into them.

DrChinese said:
3. I wasn't referring to free will in the mentioned measurements of c, gravitation, or any other constant. I was referring to the fact that the ONLY scenario where Superdeterminism is a factor is in Bell tests.
This is also true for non-locality. We don't need to assume it anywhere else.

DrChinese said:
Apparently, the universe is just fine at revealing its true nature without a need to "conspire" at everything else.
Apparently, the universe is also local. Why make an exception here?

DrChinese said:
Imagine that we are measuring the mean lifetime of a free neutron as being 880 seconds. But then you tell me it's really 333 seconds.
I would not tell you that. At no point do I doubt that the results of the Bell test are what they are. They are true and statistically representative. It's only that the statistics in the case of a theory with long-range interactions are different from those of Newtonian mechanics.

DrChinese said:
Your explanation is: It's just that the sample was a result of initial settings, and those initial settings led to an unfair sample.
The sample is perfectly fair for the theory under investigation (say EM). It's not fair for a different theory with different equations, like Newtonian mechanics, but why would you expect that?

DrChinese said:
By analogy, that is much like the SD hypothesis that the local realistic value of my Bell test example must be at least .333, although the observed value is .250. Why do you need a conspiracy to explain the results of one scientific test, but no others?
Your fallacy is to assume that any local realistic theory should predict the same thing as Newtonian mechanics. Try to play pool with charged/magnetized balls. Check and see if the probability of placing the balls in a certain pocket is the same. I would expect it to be different. The balls move differently. There is no conspiracy.

DrChinese said:
4. Glad you agree. A contextual theory is not realistic. According to EPR ("elements of reality"), there must be counterfactual values for all elements of reality that exist, regardless of whether or not you can measure them simultaneously.
EPR assumed that indeed. I think they were wrong. Elements of reality do exist for the unmeasured spin components but they are different from what EPR expected.

DrChinese said:
*See for example:
High-fidelity entanglement swapping with fully independent sources
https://arxiv.org/abs/0809.3991
What is your point with this paper?
 
  • #123
PeterDonis said:
It does on any interpretation of QM except one that views QM as just a statistical model over some underlying deterministic physics, where the statistics and probabilities have the standard classical ignorance interpretation. The latter interpretation of QM seems to be one of the least popular ones.
By QM I mean QM's postulates, those 7 rules. Obviously, each interpretation rejects all other interpretations, this is not a particularity of SD.
 
  • #124
WernerQH said:
What could this possibly mean? It is rather obvious that you have never studied any of Aspect et al.'s papers.
It is not sufficient for your utterings to sound logical. They should also make sense.
If entanglement is an effect of long-range interactions between the experimental parts it follows that it should be possible to reproduce such states at macroscopic level. Does this make sense to you?

What's your point with Aspect's papers?
 
  • #125
AndreiB said:
What's your point with Aspect's papers?
Read them!
 
  • #126
WernerQH said:
Read them!
I did. There is nothing there about macroscopic entangled states. They use photons. Photons are not macroscopic. What's your point, again?
 
  • #127
AndreiB said:
Either my explanation is true or it is not. I think the word "excuse" here is used to avoid accepting that my explanation is perfectly valid.
Your "explanations" contain two elements which contradict each other. On the one hand, you use typical common sense reasoning what some interactions are not strong enough to have an influence, on the other hand you use superdeterminism where even the smallest imaginable modification would destroy the whole construction by destroying the correlation. One would be valid in a normal world, the other one in that superdeterministic world. Using both together would be inconsistent.
AndreiB said:
1. The model I proposed is in terms of classical EM. The Big Bang, the inflation period, and all that cannot be described in terms of this model. So, let's stay in a regime where this model makes sense.
Photons flowing from far away toward the devices, and detectors of such photons which would turn the spin measurement detectors as necessary, can be described by classical EM (a photon can simply be described by a particular classical solution fulfilling the quantization condition). So, start the computation a second before the initialization, with the measurement done a second after the initialization, and with those photons that could modify the detector angles being 1.9 light-seconds away from their target detectors, if they exist.

AndreiB said:
2. I agree that if you can prove that "the event which has created these photons has not been in the past light cone of the preparation of the pair" SD is dead. The question is, can you?
In GR without inflation this is well-known and trivial; in GR with inflation one would have to look further back, finding earlier causal events coming from even farther away. Or, similar to the classical EM picture above, we can start with the initial values not at the singularity but later, after these photons have been emitted. Last but not least, if you don't allow the computation to start at some quite arbitrary time, your computation fails even in your own theory, even with almighty computing power.
AndreiB said:
As far as I know there is no theory at this time that is capable of describing the Big-Bang. So all this is pure speculation.
The Big Bang, meaning the hot early phase of the universe where the average density was, say, similar to that of a neutron star, is understood quite well in standard cosmology. The singularity itself only shows that GR becomes invalid if the density gets too large. But for the discussion of superdeterminism this is irrelevant anyway. The point I have made is that in your construction there will in any case be initial values that can have a causal influence on each of the detectors but not on the pair preparation and the corresponding other detector. If you accept this, there is no need for BB theory.
AndreiB said:
I don't think so. "Normal science" uses a certain model. The conclusions only apply if the model is apt for the experiment under investigation. For example, the kinetic theory of gases applies to an ideal gas. It works when the system is well approximated by such a model. If your gas is far from ideal you don't stubbornly insist on this model, you change it. In the case of Bell's theorem the model is Newtonian mechanics with contact forces only. Such a model is inappropriate for describing EM phenomena, even classical EM phenomena like induction. So, it is no wonder that the model fails to reproduce QM.
Sorry, but this is nonsense. Bell's theorem presupposes only EPR realism (which does not even mention Newton or contact forces) and Einstein causality. Classical EM fits into the theorem and thus cannot violate the Bell inequalities. GR too. Realistic quantum interpretations like dBB fulfill EPR realism but violate Einstein causality (though not classical causality).

Other variants of the proof rely only on causality, and what they need is the common cause principle. That's all.
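(For reference, a minimal sketch of that derivation, in my own notation rather than a quotation from Bell: assume the outcomes ##A, B \in [-1, 1]## factorize through a common cause ##\lambda##,
$$E(a,b) = \int d\lambda\, \rho(\lambda)\, A(a,\lambda)\, B(b,\lambda).$$
Then
$$|E(a,b) - E(a,b')| + |E(a',b) + E(a',b')| \le \int d\lambda\, \rho(\lambda)\, \big[\,|B(b,\lambda) - B(b',\lambda)| + |B(b,\lambda) + B(b',\lambda)|\,\big] \le 2,$$
since the bracket never exceeds 2 when ##|B| \le 1##. Nothing about Newton or contact forces enters anywhere.)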
AndreiB said:
"Sufficiently simple" is a loaded term.
Since you talk about computations with ##10^{26}## particles, let's take much less, say ##10^9## particles. And let's call an explanation sufficiently simple if it can be demonstrated using computations with fewer than ##10^9## particles. That would be fair enough, no?
AndreiB said:
And I didn't claim that the statistics do not remain stable in this case, I just don't know.
If you are right and the function behaves well, great. We will be able to compute the classical EM prediction for a Bell test. When such computation is done we will see if it turns out right or wrong.
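As a toy illustration of what such a computation looks like, here is a minimal Python sketch (with an invented local response function, not a model of classical EM): any model of the factorized form above stays within the CHSH bound, no matter how many runs are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000

# Shared hidden variable produced at the source, one per pair.
lam = rng.uniform(0.0, 2.0 * np.pi, N)

def A(setting, lam):
    """Deterministic local outcome (+1/-1) from the setting and hidden angle."""
    return np.where(np.cos(setting - lam) >= 0.0, 1, -1)

def E(a, b):
    """Correlation for settings a, b; the source emits anticorrelated pairs."""
    return np.mean(A(a, lam) * -A(b, lam))

a, ap = 0.0, np.pi / 2            # Alice's two settings
b, bp = np.pi / 4, 3 * np.pi / 4  # Bob's two settings

S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(f"CHSH S = {S:+.3f}")  # about -2.0: at the local bound, short of QM's -2.83
```

This particular response function happens to saturate the bound exactly; no choice of local response function can get past it.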
AndreiB said:
I don't get your point about ##\pi##. Clearly, the digits of ##\pi## are not independent, since they are determined by a quite simple algorithm. Two machines calculating ##\pi## would be perfectly correlated.
But if you have two quite correlated sequences of digits, and add the sequence of the digits of ##\pi## mod 10 to one but not the other, the correlation disappears.
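A quick numerical check of this (a minimal sketch, assuming the mpmath package is available for the digits of ##\pi##):

```python
import numpy as np
from mpmath import mp

# 10_000 decimal digits of pi (mpmath is an assumed dependency here).
mp.dps = 10_010
pi_digits = np.array([int(c) for c in mp.nstr(mp.pi, 10_005)[2:10_002]])

rng = np.random.default_rng(1)
n = len(pi_digits)
x = rng.integers(0, 10, n)             # a random digit sequence
y = (x + rng.integers(0, 2, n)) % 10   # a strongly correlated copy of x
z = (y + pi_digits) % 10               # add pi's digits mod 10 to that copy

def corr(u, v):
    return np.corrcoef(u, v)[0, 1]

print(f"corr(x, y) = {corr(x, y):+.3f}")  # clearly positive
print(f"corr(x, z) = {corr(x, z):+.3f}")  # indistinguishable from zero
```

The deterministic, perfectly lawful digits of ##\pi## act here exactly like noise as far as the correlation is concerned.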
AndreiB said:
IF Bell correlations are caused by long-range interactions between the experimental parts one should be able to prepare macroscopic entangled states. I am not aware of any attempt of doing so.
I see no basis for this. Entanglement of macroscopic states is destroyed by any interaction with the environment; this is called decoherence.
AndreiB said:
Clearly, all theories with long-range interactions are falsifiable. Classical EM, GR, fluid mechanics have been tested a lot. I have laid out my argument why such theories could be superdeterministic. We will know that when the function is computed. Until then you cannot rule them out.
No. There is a level of insanity of such philosophical theories where it becomes impossible in principle to rule them out. You cannot rule out, say, solipsism. Superdeterminism is in the same category: it is impossible to rule it out. Your computations would be simple computations of normal theories, without any connection to superdeterminism beyond your words. The only thing one can do with superdeterminism is to recognize that it makes no sense and to ignore it.
AndreiB said:
But there is a causal justification. There is a long-range interaction involved that determines the hidden variable. "Normal" science does not assume independence when this is the case.
False. Common sense draws a sharp distinction between accidental influences and systematic influences that can explain stable correlations.
AndreiB said:
As far as I can tell, the independence assumption was falsified by Bell tests + the EPR argument. Locality can only be maintained if the independence assumption fails. And no violation of locality was ever witnessed in "normal science", right?
Completely wrong. The independence assumption is much more fundamental than the experimental upper bounds found up to now for causal influences. You cannot falsify GR by observations on soccer fields. If you evaluate what you see on a soccer field, you will not even start to question GR. Same here. You will not even start questioning the independence assumption or the common cause principle because of some observations in the quantum domain. But you use them to find out what the experiment tells us. And it tells us that Einstein causality is violated.
AndreiB said:
There is nothing wrong with my logic. If you can't prove a violation (and you can't) you cannot just assume one.
Thanks for making my point. Except that you have to replace your "you" with "I". You just assume a violation of the common cause principle.
AndreiB said:
I think you forget the really important point that without SD you have non-locality. Your arguments based on what "normal science" assumes or not do not work for this scenario, since, when you factor in the strong evidence for locality, the initial probability of a violation of statistical independence is increased by many orders of magnitude.
As explained, non-locality is not really an issue; science developed nicely during the time of non-local Newtonian gravity. Moreover, quantum non-locality is unproblematic because it does not appear at all without some special preparation procedures.

Instead, with superdeterminism we can no longer make any statistical science.
 
  • Like
Likes Doc Al and weirdoguy
  • #128
AndreiB said:
Since EM does not depend on scale, you can test for Bell violations using macroscopic charges. For the reason presented earlier, these would be independent of the rest of the universe. I think this is actually doable in practice.
We have now written proof that you haven't understood what Bell violations are.
Go ahead and describe your macroscopic experiment. :-)
 
  • #129
Sunil said:
1. I disagree. Bohmian and other realistic interpretations of QM are, given that they give the QM predictions, necessarily contextual given the Kochen-Specker theorem.

2. And this can also be easily seen explicitly, given that the trajectories of the system are influenced by the trajectories of the measurement devices.
1. My definition of "realistic" follows EPR ("elements of reality"), which definition Bell used (for better or for worse). Admittedly, there are contextual interpretations of QM in which the measurement devices are themselves active participants in the outcome. I wouldn't call those realistic in the EPR sense, because the individual elements of reality are subjective to the observer's choice of measurement basis. 2. I'd love learn how Bohmian Mechanics factors in the angle setting of a measurement device (remote or not) to lead us to the observed statistics. Of course, I am quite aware that BM incorporates an underlying function, the value of which is unknown to us at any point in time. Equally are that there is the so-called pilot wave which guides thanks. And aware that BM lacks the traditional QM notion of spin. You might not be able to supply that mechanism, and maybe no one can yet.

------------------------

What would be nice is to see a description of a specific Bell test (say a run of 10 detected pairs by Alice and Bob) in which we see the measurement device's impact on the outcomes. We would have Alice's setting fixed at 0 degrees, and Bob's alternating between +/- 120 degrees, entangled Type 1 photons (i.e. polarization is the same) - where distance apart is not important (since locality is not a limiting factor in BM). To be specific, there is a PBS enforcing the measurement angle setting.

How is Bob's changing PBS affecting the environment such that Bob's PBS orientation at the time of detection is communicated to the other components of the setup? Because I would imagine that all the other changing dynamics in the environment (actually the entire universe, since distance is not a factor) would contribute an overwhelming amount of "noise" as well. Why are some elements of a dynamic environment a critical factor in the observed statistics, and some not?
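For concreteness, the QM predictions for the setup above are easy to state (a minimal sketch using the standard ##\cos^2## coincidence rule for Type 1 pairs; any proposed mechanism has to reproduce these numbers):

```python
import numpy as np

# QM: for Type 1 polarization-entangled pairs, P(Alice and Bob get the
# same outcome) = cos^2 of the angle between the two PBS settings.
alice = 0.0
for bob in (+120.0, -120.0):
    p_match = np.cos(np.radians(bob - alice)) ** 2
    print(f"Alice {alice:.0f} deg, Bob {bob:+.0f} deg: P(match) = {p_match:.2f}")
# Both settings give P(match) = 0.25, while any predetermined assignment of
# outcomes to the three angles forces an average match rate of at least 1/3.
```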
 
  • #130
DrChinese said:
I'd love to learn how Bohmian Mechanics factors in the angle setting of a measurement device (remote or not) to lead us to the observed statistics.
"The angle setting" is just the way the device is spatially aligned. This affects the quantum potential, which depends on that spatial direction (since there is a term in it describing the magnetic field that the particles encounter as they go through the device), and that in turn affects the trajectories of the (unobservable) particles, which in turn affects the observed statistics of the measurement results.

DrChinese said:
aware that BM lacks the traditional QM notion of spin
Yes, in BM a measurement of "spin" is just a measurement of particle trajectories, like every other measurement. See above.
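To make "trajectories fixed by the wave function" concrete, here is a minimal 1-D sketch of the guidance equation for a free Gaussian packet, with ##\hbar = m = 1##. This is only an illustration; an actual spin measurement would need the Pauli equation plus the device's magnetic field term mentioned above.

```python
import numpy as np

SIGMA0 = 1.0  # initial packet width (hbar = m = 1 throughout)

def psi(x, t):
    """Freely spreading Gaussian packet (normalization drops out below)."""
    s = SIGMA0 ** 2 * (1.0 + 1j * t)
    return np.exp(-x ** 2 / (2.0 * s))

def velocity(x, t, dx=1e-6):
    """Bohmian guidance equation: v = Im(psi'/psi) when hbar = m = 1."""
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2.0 * dx)
    return np.imag(dpsi / psi(x, t))

# Integrate a few trajectories with a simple Euler scheme up to t = 5.
x = np.linspace(-2.0, 2.0, 5)   # initial particle positions
dt, steps = 0.001, 5000
for k in range(steps):
    x = x + velocity(x, k * dt) * dt

# For this packet the trajectories fan out as x0 * sqrt(1 + t^2),
# so at t = 5 each initial position is stretched by a factor of ~5.1.
print("positions at t = 5:", np.round(x, 2))
```

The same structure carries over to a measurement: change the device angle and you change the wave function's evolution, hence the velocity field, hence every trajectory.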
 
  • Like
Likes gentzen
  • #131
AndreiB said:
1. Yes, he is quoting himself because he invented the model. What's wrong with that?

't Hooft's model is an existence proof that local, deterministic theories could reproduce QM. I did not claim that his model is a true replacement for the Standard Model.

2. EPR assumed that indeed. I think they were wrong. Elements of reality do exist for the unmeasured spin components but they are different from what EPR expected.

3. What is your point with this paper?
1. Authors don't reference their own work for the purpose of demonstrating its correctness. When it is done, it is usually done to provide additional background and additional reading. In this case, 't Hooft's wild claims are not generally accepted, and so a self-reference is out of line.

The plain fact is: 't Hooft has at no time provided a CA model of a Bell test, much less of QM as a whole. Saying that one "could" be constructed is what we skeptics call "hand waving".

2. If there are values for unmeasured spin components, what are they? Bell showed there were none that matched all quantum mechanical expectation values. And anyway, how are they different from what EPR assumed?
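The counting argument behind that last point can be made concrete for the 0/±120-degree setup discussed earlier (a minimal sketch of the standard argument, not anyone's published model):

```python
from itertools import product

# EPR-realistic assumption: each pair carries a predetermined answer (+1/-1)
# for each of the three settings, identical for Alice and Bob.
angles = (0, 120, -120)
pairs = [(0, 120), (0, -120), (120, -120)]   # the unequal setting combinations

rates = []
for assign in product((+1, -1), repeat=3):
    outcome = dict(zip(angles, assign))
    rates.append(sum(outcome[a] == outcome[b] for a, b in pairs) / len(pairs))

print("match rates over all 8 assignments:", sorted(set(rates)))  # [1/3, 1.0]
print("minimum average match rate:", min(rates))                  # 1/3
# QM predicts cos^2(120 deg) = 0.25 for each unequal pair -- below 1/3,
# so no assignment of predetermined values can reproduce it.
```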

------------------------

So far, I would characterize your rambling piecemeal arguments as anti-standard model and anti-scientific consensus. All without providing any meaningful insight as to a viable opposing view. If you think 't Hooft is on to something, that's an opinion you are welcome to. However, I have explained some of the many obvious problems with his model, which is precisely why it is generally ignored by the community at large. If you are interested in more serious local realistic computer models, they are out there (see for example the work of Hans de Raedt, Kristel Michielsen et al). But there are no superdeterministic models in existence that address Bell. And without tackling Bell head on, no one is going to take 't Hooft's work in this area seriously.

DrC is out of further discussion with you in this thread.
 
  • Like
Likes vanhees71, dextercioby, weirdoguy and 2 others
  • #132
This seems like a good point at which to close the thread.
 
  • Like
  • Sad
Likes vanhees71, Motore and weirdoguy
