Why all the rejection of superdeterminism?

In summary, the conversation revolves around the concept of superdeterminism, which suggests that experimenters are not free to choose their measurement settings and that their choices are predetermined by initial conditions. Physicists largely reject this idea because of the fine-tuning it requires and because it undermines the working assumption that experimenters can freely choose what to measure. The rejection of superdeterminism is not based on a conflict with free will, but rather on the fact that it introduces additional rules and properties that are not part of quantum mechanics. Despite some attempts to create a superdeterministic model, it remains a controversial and unproven concept.
  • #71
DrChinese said:
Think about this one as well. So if the "true" match rate is 33% when the QM predicted (and observed) value is 25%, AND this is due to superdeterminism controlling the choice of Alice and Bob's measurement settings: you don't need time-varying/fast-switching as part of your test. You make no choice other than to have the angle at 120 degrees (or whatever) and leave the entire test running at that. No change. Ever. After all, the superdeterminism hypothesis is control over the measurement settings so that the "correct" (and misleading) sub-sample is picked, not some (light speed) signal from Alice to Bob. [Which is what fast-switching is intended to protect against.]

I wouldn't say that that's all that it is protecting against.

A very general hidden-variable expression for the joint probability that Alice gets result [itex]A[/itex] and Bob gets result [itex]B[/itex] given that Alice's setting is [itex]\alpha[/itex] and Bob's setting is [itex]\beta[/itex] is:

[itex]P(A, B|\alpha, \beta) = \sum_\lambda P(\lambda | \alpha, \beta) P_A(A|\alpha, \beta, \lambda) P_B(B|\alpha, \beta, \lambda)[/itex]

No superdeterminism implies that

[itex]P(\lambda | \alpha, \beta) = P(\lambda)[/itex]

Leaving it running for hours on end doesn't ensure that.

No FTL signalling and no superdeterminism implies that

[itex]P_A(A|\alpha, \beta, \lambda) = P_A(A|\alpha, \lambda)[/itex]

[itex]P_B(B|\alpha, \beta, \lambda) = P_B(B|\beta, \lambda)[/itex]
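
For reference, here is a sketch of the standard step these two conditions feed into (assuming, as usual, outcomes valued [itex]\pm 1[/itex]): substituting them into the expression above gives a correlator of the form

[itex]E(\alpha, \beta) = \sum_\lambda P(\lambda)\, \bar{A}(\alpha, \lambda)\, \bar{B}(\beta, \lambda), \qquad \bar{A}(\alpha, \lambda) = \sum_A A\, P_A(A|\alpha, \lambda), \quad \bar{B}(\beta, \lambda) = \sum_B B\, P_B(B|\beta, \lambda)[/itex]

with [itex]|\bar{A}| \le 1[/itex] and [itex]|\bar{B}| \le 1[/itex], which is exactly the form from which the CHSH bound [itex]|E(\alpha,\beta) + E(\alpha,\beta') + E(\alpha',\beta) - E(\alpha',\beta')| \le 2[/itex] follows. Superdeterminism attacks the first substitution, [itex]P(\lambda|\alpha,\beta) = P(\lambda)[/itex], which is why fast switching alone does not address it.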
 
  • #72
stevendaryl said:
I wouldn't say that that's all that it is protecting against.

Fast switching protects us from having to consider new, currently unknown local effects outside of QM that might alter outcomes. No such effects are currently posited, and we already know they aren't a factor: nothing changes whether you use fast switching or not. That should end the discussion of the need for fast switching, except if you are attempting a full-on loophole-free Bell test - something that we aren't discussing here (and which is material for a different thread).

I'm just saying that you don't need fast switching for a Bell test. There are a thousand other things we could attempt to rule out as factors in any experiment as well, but we don't. Example: we don't run tests on Mondays and Thursdays to prove that the day of the week does not affect experimental results either.
 
  • #73
mikeyork said:
in the intrinsic context of a photon, time stands still

This is not correct; a correct statement would be that in "the intrinsic context of a photon", the concept of "elapsed time" is not well-defined.
 
  • #74
stevendaryl said:
I feel like I'm fighting a two-front war here. On the one hand, I don't think that it's impossible to have a superdeterministic explanation for QM statistics. On the other hand, I think that such a theory would be very bizarre, and nothing like any theory we've seen so far.

Let me go through a stylized description of an EPR-like experiment so that we can see where the superdeterminism loophole comes in.

We have a game with three players, Alice, Bob and Charlie. Alice and Bob are in different rooms, and can't communicate. In each room, there are three light bulbs colored Red, Yellow and Blue, which can be turned off or on.

The game consists of many many rounds, where each round has the following steps:
  1. Initially, all the lights are off.
  2. Charlie creates a pair of messages, one to be sent to Alice and one to be sent to Bob.
  3. After Charlie creates his messages, but before they arrive, Alice and Bob each choose a color, Red, Yellow or Blue. They can use whatever criterion they like for choosing their respective colors.
  4. When Alice receives her message, she follows the instructions to decide whether to turn on her chosen light, or not. Bob similarly follows his instructions.
After playing the game for many, many rounds, the statistics are:
  • When Alice and Bob choose the same color, they always do the opposite: If Alice's light is turned on, Bob's is turned off, and vice-versa.
  • When Alice and Bob choose different colors, they do the same thing 3/4 of the time, and do the opposite thing 1/4 of the time.
The question is: What instructions could Charlie have given to Alice and Bob to achieve these results? The answer, proved by Bell's theorem, is that there is no way to guarantee those results, regardless of how clever Charlie is (see the sketch after this list), provided that
  1. Charlie doesn't know ahead of time what colors Alice and Bob will choose.
  2. Alice has no way of knowing what's going on in Bob's room, and vice-versa.
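One way to see this concretely (a sketch, assuming Alice and Bob pick their colors uniformly at random and independently, and that a message amounts to an on/off instruction for each color that might be chosen): enumerate every deterministic pair of instructions Charlie could send and check the two statistics.

[code]
from itertools import product

# Each message is a deterministic instruction: for every color,
# turn the chosen light on (1) or off (0).
best = 0.0
for alice_instr in product([0, 1], repeat=3):
    for bob_instr in product([0, 1], repeat=3):
        # Same color chosen -> results must ALWAYS be opposite,
        # so the instructions must disagree on every color.
        if any(alice_instr[i] == bob_instr[i] for i in range(3)):
            continue
        # Different colors chosen -> fraction of rounds with the SAME result,
        # averaged over the 6 ordered pairs of distinct colors.
        same = sum(alice_instr[i] == bob_instr[j]
                   for i in range(3) for j in range(3) if i != j)
        best = max(best, same / 6)

print("Best achievable P(same result | different colors):", round(best, 3))  # 0.667
print("Required by the stated statistics:                 0.750")
[/code]

Since Charlie can at best mix such instruction pairs from round to round, no mixture can exceed the 2/3 printed here, which falls short of the required 3/4. That is the content of the no-go claim above.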
The superdeterministic loophole

If Charlie does know what choices Alice and Bob will make, then it's easy for him to achieve the desired statistics (a numerical check follows the list):
  • Every round, he randomly (50/50 chance) sends either the message to Alice: "turn your light on", or "turn your light off"
  • If Alice and Bob are predestined to choose the same color, then Charlie sends the opposite message to Bob.
  • If Alice and Bob are predestined to choose different colors, then Charlie will send Bob the same message 3/4 of the time, and the opposite message 1/4 of the time.
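A minimal Monte Carlo sketch of this strategy (assuming, purely for illustration, that the "predestined" choices are uniformly random picks that Charlie happens to know in advance):

[code]
import random

COLORS = ["Red", "Yellow", "Blue"]
ROUNDS = 200_000

same_color_opposite = same_color_total = 0
diff_color_same = diff_color_total = 0

for _ in range(ROUNDS):
    # The "predestined" choices, which Charlie is assumed to know ahead of time.
    alice_color = random.choice(COLORS)
    bob_color = random.choice(COLORS)

    # Charlie's strategy as described above:
    alice_light = random.random() < 0.5                 # 50/50 on or off for Alice
    if alice_color == bob_color:
        bob_light = not alice_light                     # same color: always opposite
    else:
        bob_light = alice_light if random.random() < 0.75 else not alice_light

    if alice_color == bob_color:
        same_color_total += 1
        same_color_opposite += (alice_light != bob_light)
    else:
        diff_color_total += 1
        diff_color_same += (alice_light == bob_light)

print("P(opposite | same color)  :", same_color_opposite / same_color_total)  # ~1.0
print("P(same | different colors):", diff_color_same / diff_color_total)      # ~0.75
[/code]

Access to the choices is exactly what lifts the achievable rate from the 2/3 bound of the previous sketch to the full 3/4.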
Why the superdeterministic loophole is implausible

The reason that the superdeterministic loophole is implausible is that Alice and Bob can choose any mechanism they like to help them decide what color to use. Alice might look up at the sky and choose the color based on how many shooting stars she sees. Bob might listen to the radio and make his decision based on the scores of the soccer game. Predicting what Alice and Bob will choose could potentially involve everything that can possibly happen to Alice and Bob during the course of a round of the game. The amount of information that Charlie would have to take into account would be truly astronomical. The processing power would be comparable to the power required to accurately simulate the entire universe.

Why I think the superdeterministic loophole is actually impossible

What the superdeterministic loophole amounts to is that somehow Charlie has information about the initial state (before the game began) of the universe, [itex]s_0[/itex], and somehow he has a pair of algorithms, [itex]\alpha(s_0)[/itex] and [itex]\beta(s_0)[/itex], that predict the choices of Alice and Bob as a function of the initial state. The problem is that even if there were such algorithms, the computational time for computing the result would be greater than the time it takes to just wait and see what Alice and Bob choose. So Charlie couldn't possibly know the results in time to choose his instructions to take those results into account.

Why not? Remember, we're allowing Alice and Bob to use whatever mechanism they like to decide what color to pick. So suppose Alice runs the same algorithm, [itex]\alpha[/itex], and chooses whatever color is NOT returned by [itex]\alpha(s_0)[/itex]. In other words, she runs the program, and if it returns "Red", she picks "Yellow". If it returns "Yellow", she picks "Blue". If it returns "Blue", she picks "Red". She can base her choice on anything, so if there is a computer program [itex]\alpha[/itex] that she can run, then she can base her choice on whatever it returns.

The only way for it to be possible that [itex]\alpha(s_0)[/itex] always gives the right answer for Alice is if it takes so long to run that Alice gives up and makes her choice before the program finishes.

This is actually a fairly standard argument that even if the universe is deterministic, if you tried to construct a computer program that is guaranteed to correctly predict the future, the future would typically arrive before the computer program finished its calculations. No matter how clever the algorithm, no matter how fast the processor, there is no way to guarantee that the prediction algorithm would be faster than just waiting to see what happens.

Your line of reasoning is based on a completely wrong picture of how physics is supposed to work. In physics, objects behave the way they behave because there is something acting on them (a force, for example). Objects like planets or particles do not make computations and decide how to move in order to achieve some "purpose". Such a weird, anthropocentric view leads nowhere. One can easily make a similar argument that, for example, general relativity is almost impossible.

We observe that stars correlate their motion and form spiral galaxies. Do you think that a star actually performs computations using the positions/momenta of all other masses in the galaxy and "decides" how to move so that a spiral shape is maintained?

The correct picture is this: objects (stars, or particles) move as a result of the force acting on them. That force is a function of the magnitude of the fields present at that location. The magnitude of the fields is determined by the positions/momenta of all field sources. If you deal with infinite-range fields, like gravity and electromagnetism, it follows that the motion of each object is a function of the positions/momenta of all objects that qualify as field sources.

So, from a pure mathematical point of view no field theory of infinite range allows for the objects described by the theory to evolve independently.
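
A minimal numerical illustration of that dependence (a sketch, assuming point charges and Coulomb's law with k = 1 in arbitrary units; the three sources here are just stand-ins for different subsystems of an experiment):

[code]
import numpy as np

def net_field(point, sources):
    """Electric field at `point` from a list of (charge, position) sources,
    using Coulomb's law with k = 1. The field is a superposition: every
    source contributes, so no subsystem evolves independently of the rest."""
    E = np.zeros(3)
    for q, pos in sources:
        r = point - pos
        E += q * r / np.linalg.norm(r) ** 3
    return E

sources = [(+1.0, np.array([0.0, 0.0, 0.0])),   # "source" subsystem
           (-1.0, np.array([5.0, 0.0, 0.0])),   # "detector" subsystem
           (+2.0, np.array([0.0, 7.0, 0.0]))]   # "observer" subsystem

test_point = np.array([2.0, 2.0, 0.0])
print("Field with all sources included:", net_field(test_point, sources))
print("Field with one source left out: ", net_field(test_point, sources[:-1]))
[/code]

Leaving out any one source changes the field, and hence the force, at every other location; that is the sense in which the subsystems of an infinite-range field theory cannot evolve independently.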

Now, there is a lot of confusion regarding superdeterminism, so I think it is better to avoid this word and define other terms as follows:

I consider any deterministic theory to be of type D. Newtonian gravity, general relativity, classical electromagnetism, Newtonian mechanics of the rigid body, Bohmian mechanics are all D type theories.

I consider a deterministic theory to be of the type D+ if this theory does not allow the detector settings and the hidden variable to be independent variables. That will include Newtonian gravity, general relativity, classical electromagnetism and Bohmian mechanics. Newtonian mechanics is NOT of this type as one can move an object around without any effect on the other objects.

It is easy to see that a description of a Bell test in terms of charged particles (electrons and quarks) moving around is indeed of the type D+. If you want, for example, to calculate the motion of the particles involved in the emission of the entangled photons (so that you can determine the spins), you will also need the positions/momenta of the particles that make up the detectors. They cannot be independent. So, classical electromagnetism is a D+ type theory.

Let's now define a new type of theory, say D++. This is a type D+ theory that gives the same predictions as QM. I do not claim that classical electromagnetism is of this type. It might be but I don't have enough evidence for that. Bohmian mechanics is a type D++ theory.

Now, I think it is best to treat superdeterminism in a way analogous to non-locality. While non-local theories cannot be ruled out by Bell, it doesn't mean they are true. Newtonian gravity is non-local, so in a universe described by this theory the detector settings cannot be independent of the hidden variable. The statistical calculations used in Bell's theorem do not work in this case. But Newtonian gravity is still not a true description of the quantum world. So, I would say that it is correct to call D+ theories superdeterministic, even if they are not true (D++) theories. Most of the debate here is centered on the fact that some call D+ theories superdeterministic while others require only D++ theories to be called that way.

Let me now approach the problem of what it would take for a D+ theory to also be a D++ theory. As I have stressed in my first post, trying to come up with a simple explanation for the violation of Bell's inequality has little chance of success, even if you have the right theory. First, you need a valid initial state (some states might evolve into the lab blowing up, etc.). Then, based on that initial state, you need to simulate the motion of at least the particles directly involved in the experiment (PDC source, detectors, Alice, Bob, etc.) and see what the result will be. Just looking at some equations will not help you, in the same way that just looking at the equations of general relativity doesn't make the spiral shape of a galaxy obvious. So this type of argument, involving what Alice/Bob can and cannot do, how the particles send messages, and so on, is useless.

The only way to ascertain if a D+ type theory is a D++ type also is to see if that theory gives QM in some limit. Then one can use QM to calculate predictions for experiments.

Andrei
 
  • #75
ueit said:
D++. This is a type D+ theory that gives the same predictions as QM. I do not claim that classical electromagnetism is of this type. It might be but I don't have enough evidence for that.
It is not. One cannot model squeezed states of light obtained by parametric down-conversion in terms of classical electromagnetic fields.
 
  • #76
Demystifier said:
Here is an exact quote of Bell (the bolding is mine):

"An essential element in the reasoning here is that a and b are free
variables. One can envisage then theories in which there just are no free
variables for the polarizer angles to be coupled to. In such ‘superdeter-
ministic’
theories the apparent free will of experimenters, and any other
apparent randomness, would be illusory. Perhaps such a theory could be
both locally causal and in agreement with quantum mechanical predic-
tions. However I do not expect to see a serious theory of this kind. I
would expect a serious theory to permit ‘deterministic chaos’ or

‘pseudorandomness’, for complicated subsystems (e.g. computers)
which would provide variables sufficiently free for the purpose at hand.

But I do not have a theorem about that."

It seems to me that Bell did understand superdeterministic theories to be of the D+ type as defined by me above. He clearly implies that not all superdeterministic theories need to reproduce QM, since he only says that perhaps "such a theory could be both locally causal and in agreement with quantum mechanical predictions".

Indeed, in classical electromagnetism "the apparent free will of experimenters, and any other apparent randomness" is "illusory", therefore it qualifies as a superdeterministic theory.

't Hooft discusses superdeterminism and the "conspiracy" arguments in this article:

The Fate of the Quantum
https://arxiv.org/pdf/1308.1007.pdf

He also defines a requirement for a superdeterministic theory to be non-conspiratorial: correlations should be present regardless of the initial state. So he replaces the "free will" of Bell with the free choice of the initial state. I hope you find this approach acceptable.

Andrei
 
  • #77
A. Neumaier said:
It is not. One cannot model squeezed states of light obtained by parametric down-conversion in terms of classical electromagnetic fields.

Well, I am not so sure about that. There is a theory called stochastic electrodynamics (SED) that claims to obtain the QM formalism (including a classical derivation of Planck's constant) from classical electromagnetism and the assumption that there exists a zero-point field of a certain type. If its derivation is correct, then every prediction of QM can also be explained in a classical way. See for example this article:

Stochastic electrodynamics as a foundation for quantum mechanics
Physics Letters A - Volume 56, Issue 4, 5 April 1976, Pages 253-254

http://www.sciencedirect.com/science/article/pii/0375960176902978

Andrei
 
  • #78
ueit said:
Well, I am not so sure about that. There is a theory, called stochastic electrodynamics (SED)
This approach has limitations. It recovers many effects, but only those of states of light that have a positive Wigner function. See

The Nature of Light: What Is a Photon?
Optics and Photonics News, October 2003
http://www.osa-opn.org/Content/ViewFile.aspx?Id=3185
 
  • #79
ueit said:
Your line of reasoning is based on a completely wrong picture of how physics is supposed to work.

I was talking specifically about a stylized version of the EPR experiment, to show the role of superdeterminism as a loophole. I was not discussing how physics is supposed to work.

In physics, objects behave the way they behave because there is something acting on them (a force, for example). Objects like planets or particles do not make computations and decide how to move in order to achieve some "purpose". Such a weird, anthropocentric view leads nowhere. One can easily make a similar argument that, for example, general relativity is almost impossible.

The point of using anthropomorphic language was to make the implausibility of superdeterminism clearer. In the EPR experiment, Alice can decide, ahead of time, to base her choice of which setting to use on absolutely anything--whether she sees a shooting star, the scores of the game on the radio, etc. She can make her decision as "anthropocentric" as she likes. In order for the superdeterministic loophole to make sense, the hidden mechanism has to anticipate her choice. So potentially it has to predict the future of the universe with unerring accuracy.

And, no, it is nothing like GR. Determinism and superdeterminism are not the same things. That's just a misconception on your part.
 
  • #80
stevendaryl said:
So potentially it has to predict the future of the universe with unerring accuracy.
Any deterministic model predicts the future of the universe modeled by it with unerring accuracy. So your conclusion provides no information beyond what is already in determinism.
 
  • #81
ueit said:
So, from a pure mathematical point of view no field theory of infinite range allows for the objects described by the theory to evolve independently.

Yes, if we use deterministic field theory, then the whole universe evolves together. Show me how that leads to the quantum predictions for EPR. Actually, don't. Write a paper deriving the quantum predictions from a classical field theory. Then we can discuss it here. As it is, you're talking about a theory that is nonexistent, let alone mainstream.
 
  • #82
A. Neumaier said:
Any deterministic model predicts the future of the universe modeled by it with unerring accuracy. So your conclusion provides no information beyond what is already in determinism.

No, that's false. Even if the universe were completely deterministic, it would not be possible to make unerring, detailed predictions about the future evolution of the universe, because the computation would require the resources of the whole universe. I already went through this.
 
  • #83
stevendaryl said:
Yes, if we use deterministic field theory, then the whole universe evolves together.
And if we use deterministic ##N##-particle theory, then the same holds. Nothing in the argument depends on fields.

stevendaryl said:
As it is, you're talking about a theory that is nonexistent, let alone mainstream.
So was Bell, in his theorem. He proved that certain theories (those allowing free choice, which is impossible in a deterministic theory) don't exist, nothing more. And when he proved his theorem it was not mainstream.
 
  • #84
stevendaryl said:
No, that's false. Even if the universe were completely deterministic, it would not be possible to make unerring, detailed predictions about the future evolution of the universe, because the computation would require the resources of the whole universe. I already went through this.
Computational power is completely irrelevant for a mathematical or physical theory.

We cannot compute many things in quantum field theory due to lack of computational power, but we still believe it correctly models systems at least the size of the sun, although we will never be able to verify this by computation, because we never know the exact initial state of the sun.
 
  • #85
stevendaryl said:
No, that's false. Even if the universe were completely deterministic, it would not be possible to make unerring, detailed predictions about the future evolution of the universe, because the computation would require the resources of the whole universe. I already went through this.

Once again, suppose that Alice has a device, a computer equipped with a detailed description of the initial state of the universe, that was sufficient to predict the future with perfect accuracy and precision. So just to be perverse, she asks the computer program whether or not she will turn on a specific light switch at exactly 12:00. If the computer returns an answer before 12:00, then she does the opposite of whatever it predicts.

The conclusion is that one of the following would be true:
  1. the computer gives the wrong answer, or
  2. it will take the computer longer than 12:00 to come up with any answer at all
So under the assumption that it is possible for people to be as perverse as Alice, accurately predicting the future in a timely manner is impossible. You can convert this into a theorem about computer science: It is not possible to have a universal prediction program that predicts the future behavior of every program.
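
The "perverse Alice" construction can be written down as a toy program (a sketch, under the assumption that the would-be predictor is just a function whose answer Alice can read before the deadline):

[code]
def perverse_alice(predictor):
    """Alice asks the predictor whether she will turn the switch on, then does the opposite."""
    return not predictor()

# Whatever answer the predictor manages to return before 12:00...
for predicted in (True, False):
    predictor = lambda answer=predicted: answer     # a predictor that outputs `predicted`
    actual = perverse_alice(predictor)              # ...Alice consults it and does the opposite
    print(f"predicted={predicted}  actual={actual}  prediction correct: {predicted == actual}")
[/code]

Both branches print "prediction correct: False", which is the first horn of the dilemma; the only remaining escape is the second horn, namely that the predictor fails to return an answer in time.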
 
  • #86
A. Neumaier said:
So was Bell, in his theorem. He proved that certain theories (those allowing free choice, which is impossible in a deterministic theory) don't exist, nothing more. And when he proved his theorem it was not mainstream.

But Physics Forums is not the place to advance new results. Publish elsewhere, and we can discuss it here.
 
  • #87
stevendaryl said:
But Physics Forums is not the place to advance new results. Publish elsewhere, and we can discuss it here.
I have no intention to produce new results on this topic. I am only applying my logic to the statements offered in this thread.
 
  • #89
A. Neumaier said:
I have no intention to produce new results on this topic. I am only applying my logic to the statements offered in this thread.

Well, the claim that superdeterminism can reproduce the quantum predictions for EPR is a huge, non-mainstream claim. I suppose that rather than directly making such a claim, you can do a double flip and argue that the arguments against the claim are inadequate. I still think it should be a published paper.
 
  • #90
A. Neumaier said:
There is a precise mathematical version of this: There is no algorithm that can tell whether an arbitrary given program stops for an arbitrary given input. But this theorem has not the slightest physical implications, since the universe is neither an algorithm nor a Turing machine.

It's completely false that it has no physical implications. The same argument shows the impossibility of the kind of superdeterminism required to make EPR-type predictions using a deterministic theory.
 
  • #91
stevendaryl said:
I was talking specifically about a stylized version of the EPR experiment, to show the role of superdeterminism as a loophole. I was not discussing how physics is supposed to work.
The point of using anthropomorphic language was to make the implausibility of superdeterminism clearer. In the EPR experiment, Alice can decide, ahead of time, to base her choice of which setting to use on absolutely anything--whether she sees a shooting star, the scores of the game on the radio, etc. She can make her decision as "anthropocentric" as she likes. In order for the superdeterministic loophole to make sense, the hidden mechanism has to anticipate her choice. So potentially it has to predict the future of the universe with unerring accuracy.

And, no, it is nothing like GR. Determinism and superdeterminism are not the same things. That's just a misconception on your part.

The same argument making "the implausibility of superdeterminism clearer" can be used to make any physical theory implausible, as in my example with GR. The Earth anticipates where the Sun will be 8 minutes from now and accelerates toward that particular place (not where the Sun is seen with the eyes). The Sun needs to calculate where all the stars in the galaxy will be thousands of years from now in order to move so that it remains in the spiral arm, etc.

It is not the case that Alice makes a choice about how to set the detector and the source somehow anticipates her choice. The situation is like this:

The source of entangled particles is a quark/electron subsystem (S1).
Alice, her detector and whatever she decides to use to help her with the decision is another quark/electron subsystem (S2).
Bob, his detector and whatever he decides to use to help him with the decision is another quark/electron subsystem (S3).

S1, S2 and S3 form the whole experimental system, S.

As I have argued before, S1, S2 and S3 cannot be independent. In order to describe the evolution of S1, S2 and S3 you need the resultant electric/magnetic fields originating from the whole system, S. So, S1, S2 and S3 all evolve as a function of S. Given this situation, correlations are bound to appear between the motions of the subatomic particles of S1, S2 and S3. Sometimes those correlations could become visible at the macroscopic level, and this is the fundamental cause of the observed correlations.

Now, why those exact correlations and not others? I don't know. As I have said, one needs to perform a simulation of S and see what the result is. If the result is correct, the theory might be right. But even in this case you will not get a simple explanation in terms of an oversimplified macroscopic description.

Andrei
 
  • #92
A. Neumaier said:
There is a precise mathematical version of this: There is no algorithm that can tell whether an arbitrary given program stops for an arbitrary given input. But this theorem has not the slightest physical implications, since the universe is neither an algorithm nor a Turing machine.

Actually, what I'm talking about is not the same theorem. It is the theorem that it is impossible, in general, to predict the future state of a program in less time than it takes to just run the program. There is a universal program that can predict future states of other programs, but not in a timely manner. In contrast, there is no program that can solve the halting problem.
 
  • #93
stevendaryl said:
the claim that superdeterminism can reproduce the quantum predictions for EPR
I never made that claim.
stevendaryl said:
The same argument shows the impossibility of the kind of superdeterminism required to make EPR-type predictions using a deterministic theory.
It is based on the assumption that the dynamics is effectively computable. This is a ridiculous assumption. Almost no deterministic dynamics is computable, and it need not be. Certainly Newton's theory of gravity is not computable.
 
  • #94
ueit said:
The same argument making "the implausibility of superdeterminism clearer" can be used to make any physical theory implausible, as in my example with GR. The Earth anticipates where the Sun will be 8 minutes from now and accelerates toward that particular place (not where the Sun is seen with the eyes). The Sun needs to calculate where all the stars in the galaxy will be thousands of years from now in order to move so that it remains in the spiral arm, etc.

That shows exactly the difference between a deterministic theory and a superdeterministic theory. GR is deterministic, but not superdeterministic.

If instead of the sun and the Earth, you have a rock that is orbiting a massive spaceship at a distance of 8 light-minutes, far from any other gravitational sources, and that spaceship can maneuver using rockets, then it absolutely will not be the case that the acceleration of the rock will be toward where the spaceship will be 8 minutes from now (or whatever the claim was). If the spaceship uses rockets to change locations suddenly, the behavior of the rock will continue as if the spaceship were still where it was until the information about its new location and velocity has time to reach the rock.
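
A toy sketch of that delay (arbitrary units, treating the influence as simply pointing toward the spaceship's retarded position; an illustration of the scenario, not a GR calculation):

[code]
DELAY_MIN = 8.0   # light-travel time from spaceship to rock, in minutes

def spaceship_position(t_min):
    """The spaceship sits at x = 0 until t = 0, then suddenly jumps to x = 10."""
    return 0.0 if t_min < 0 else 10.0

def position_rock_responds_to(t_min):
    """The rock accelerates toward the retarded position, not the current one."""
    return spaceship_position(t_min - DELAY_MIN)

for t in (-1.0, 0.0, 4.0, 7.9, 8.1, 12.0):
    print(f"t = {t:5.1f} min: spaceship at x = {spaceship_position(t):4.1f}, "
          f"rock accelerates toward x = {position_rock_responds_to(t):4.1f}")
[/code]

For the first eight minutes after the manoeuvre the rock keeps accelerating toward where the spaceship used to be, which is the point of the example.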

Thanks.
 
  • #95
A. Neumaier said:
I never made that claim.

Well, that's what this discussion is about.
 
  • #96
A. Neumaier said:
It is based on the assumption that the dynamics is effectively computable. This is a ridiculous assumption.

That's why the superdeterministic loophole can't actually work.
 
  • #97
stevendaryl said:
Yes, if we use deterministic field theory, then the whole universe evolves together. Show me how that leads to the quantum predictions for EPR. Actually, don't. Write a paper deriving the quantum predictions from a classical field theory. Then we can discuss it here. As it is, you're talking about a theory that is nonexistent, let alone mainstream.

I have already done that in a previous post:

Stochastic electrodynamics as a foundation for quantum mechanics
Physics Letters A - Volume 56, Issue 4, 5 April 1976, Pages 253-254


http://www.sciencedirect.com/science/article/pii/0375960176902978

The most up-to-date version of the theory is published in a book. You can find it free here:

The Emerging Quantum
https://loloattractor.files.wordpre..._marc3ada_cetto_andrea_valdc3a9bookzz-org.pdf

It seems to me that while you are continuously asking me to present papers and so on, you don't take your own burden of proof seriously. For example, you make the claim that in order to get EPR results in a deterministic theory you need the ridiculous mechanism of objects anticipating what other objects will do. I have seen no rigorous argument for that. None of the scientists working on superdeterministic theories, like 't Hooft and the authors of the book above, have used such a model.

Andrei
 
  • #98
stevendaryl said:
That shows exactly the difference between a deterministic theory and a superdeterministic theory. GR is deterministic, but not superdeterministic.

If instead of the sun and the Earth, you have a rock that is orbiting a massive spaceship at a distance of 8 light-minutes, far from any other gravitational sources, and that spaceship can maneuver using rockets, then it absolutely will not be the case that the acceleration of the rock will be toward where the spaceship will be 8 minutes from now (or whatever the claim was). If the spaceship uses rockets to change locations suddenly, the behavior of the rock will continue as if the spaceship were still where it was until the information about its new location and velocity has time to reach the rock.

Thanks.

Your example is irrelevant because it falls outside the scope of GR. The rockets are not systems described by GR.

This situation is completely different from the description of EPR in terms of subatomic particles because there is nothing there that is not inside the scope of QM (and of the candidate hidden variable theory).
 
  • #99
ueit said:
Your example is irrelevant because it falls outside the scope of GR. The rockets are not systems described by GR.

It illustrates why GR is not superdeterministic, just deterministic.
 
  • #100
ueit said:
This situation is completely different from the description of EPR in terms of subatomic particles because there is nothing there that is not inside the scope of QM (and of the candidate hidden variable theory).

But the same conclusion holds. It doesn't matter what forces describe subatomic particles. As long as behavior is complex enough to do things like computations, it is not predictable in enough detail to allow a superdeterministic explanation of EPR statistics.
 
  • #101
stevendaryl said:
But the same conclusion holds. It doesn't matter what forces describe subatomic particles. As long as behavior is complex enough to do things like computations, it is not predictable in enough detail to allow a superdeterministic explanation of EPR statistics.

This has nothing to do with complexity. Increasing the number of objects will never lead to deviations from physical laws. If you have more field sources, the object moves just as easily in the resultant field (a classical superposition of the fields originating from each source, in the case of the electric field). It becomes harder to simulate on a computer, but I fail to see the relevance of that.

Also, the predictability of the system is irrelevant because objects don't predict anything. The Earth moves towards the instantaneous position of the Sun because it so happens that the gravitational field points there. There is only an appearance of prediction.

The reason the rock cannot "anticipate" the rocket is that the rocket's engines are based on electromagnetism and not on gravity. For GR the rocket behaves like an unmoved mover and uncaused cause. GR does not expect the rocket to accelerate because there is no gravitational field responsible for that.

On the contrary, the human brain and everything else is made up of quantum particles. Nothing behaves as an unmoved mover with respect to them. The "sudden" decisions of a human are just late manifestations of the motion of charged particles in the brain. In EPR everything is like a planet; nothing is like a rocket.
 
  • #102
ueit said:
This has nothing to do with complexity.

Yes, it does. A complex enough system is unpredictable even if it is completely deterministic.

Look, publish your paper calculating EPR correlations using a superdeterministic theory. Then we can talk about it on Physics Forums.
 
  • #103
stevendaryl said:
It illustrates why GR is not superdeterministic, just deterministic.

GR is superdeterministic with regard to those systems described by it. If one could explain the behaviour of quantum particles in terms of micro black holes then, again, there would be nothing outside its scope.

It makes no sense to use a theory to describe a system outside its scope. That would amount to a falsification of the theory.
 
  • #104
stevendaryl said:
Yes, it does. A complex enough system is unpredictable even if it is completely deterministic.

Look, publish your paper calculating EPR correlations using a superdeterministic theory. Then we can talk about it on Physics Forums.

I have presented some papers. They are not written by me but since when is there such a requirement?
 
  • #105
ueit said:
I have presented some papers. They are not written by me but since when is there such a requirement?

Physics Forums is for the discussion of mainstream physics and refereed papers.
 