Superdeterminism and the Mermin Device
Superdeterminism as a way to resolve the mystery of quantum entanglement is generally not taken seriously in the foundations community, as explained in this video by Sabine Hossenfelder (posted in Dec 2021). In her video, she argues that superdeterminism should be taken seriously; indeed, it is what quantum mechanics (QM) is screaming for us to understand about Nature. Using the twin-slit experiment as her example, she explains that superdeterminism simply means the particles must have known at the outset of their trip whether to go through the right slit, the left slit, or both slits, based on what measurement was going to be done on them. Thus, she defines superdeterminism this way:
Superdeterminism: What a quantum particle does depends on what measurement will take place.
In Superdeterminism: A Guide for the Perplexed she gives a somewhat more technical definition:
Theories that do not fulfill the assumption of Statistical Independence are called “superdeterministic” … .
where Statistical Independence in the context of Bell’s theory means:
There is no correlation between the hidden variables, which determine the measurement outcome, and the detector settings.
Sabine points out that Statistical Independence should not be equated with free will and I agree, so a discussion of free will in this context is a red herring and will be ignored.
Since the behavior of the particle depends on a future measurement of that particle, Sabine writes:
This behavior is sometimes referred to as “retrocausal” rather than superdeterministic, but I have refused and will continue to refuse using this term because the idea of a cause propagating back in time is meaningless.
Ruth Kastner argues similarly here and we agree. Simply put, if the information is coming from the future to inform particles at the source about the measurements that will be made upon them, then that future is co-real with the present. Thus, we have a block universe and since nothing “moves” in a block universe, we have an “all-at-once” explanation per Ken Wharton. Huw Price and Ken say more about their distinction between superdeterminism and retrocausality here. I will focus on the violation of Statistical Independence and not worry about these semantics.
So, let me show you an example of the violation of Statistical Independence using Mermin’s instruction sets. If you are unfamiliar with the mystery of quantum entanglement illustrated by the Mermin device, read about the Mermin device in this Insight, “Answering Mermin’s Challenge with the Relativity Principle” before continuing.
In using instruction sets to account for quantum-mechanical Fact 1 (same-color outcomes in all trials when Alice and Bob choose the same detector settings, case (a)), Mermin notes that quantum-mechanical Fact 2 (same-color outcomes in ##\frac{1}{4}## of all trials when Alice and Bob choose different detector settings, case (b)) must be violated. In making this claim, Mermin is assuming that each instruction set produced at the source is measured with equal frequency in all nine detector setting pairs (11, 12, 13, 21, 22, 23, 31, 32, 33). That assumption is called Statistical Independence. Table 1 shows how Statistical Independence can be violated so as to allow instruction sets to reproduce quantum-mechanical Facts 1 and 2 per the Mermin device.
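To see Mermin's point concretely before turning to Table 1, here is a minimal Python check (my own sketch, not from Mermin's paper; it assumes the usual convention that an instruction set like RRG tells the particle to show R for setting 1, R for setting 2, and G for setting 3): if the nine setting pairs occur with equal frequency, an instruction set such as RRG yields same-color outcomes in 1/3 of the case (b) trials, not the 1/4 required by Fact 2.

```python
from itertools import product

# Instruction set convention (assumed): the k-th letter is the outcome for setting k.
instr = "RRG"

same = total = 0
for a, b in product("123", repeat=2):
    if a != b:  # case (b): Alice and Bob choose different settings
        total += 1
        same += instr[int(a) - 1] == instr[int(b) - 1]

# With Statistical Independence (all setting pairs equally likely),
# RRG gives same colors in 2 of the 6 case (b) pairs, i.e. 1/3, not 1/4.
print(same, "/", total)
```

The same count holds for every instruction set with two of one color and one of the other (and RRR/GGG give same colors in all case (b) trials), which is why instruction sets plus Statistical Independence cannot reproduce Fact 2.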
Table 1
In row 2 column 2 of Table 1, you can see that Alice and Bob select (by whatever means) setting pairs 23 and 32 with twice the frequency of 21, 12, 31, and 13 in those case (b) trials where the source emits particles with the instruction set RRG or GGR (produced with equal frequency). Column 4 then shows that this disparity in the frequency of detector setting pairs would indeed allow our instruction sets to satisfy Fact 2. However, the detector setting pairs would not occur with equal frequency overall in the experiment and this would certainly raise red flags for Alice and Bob.

Therefore, we introduce a similar disparity in the frequency of the detector setting pair measurements for RGR/GRG (12 and 21 frequencies doubled, row 3) and RGG/GRR (13 and 31 frequencies doubled, row 4), so that they also satisfy Fact 2 (column 4). Now, if these six instruction sets are produced with equal frequency, then the six case (b) detector setting pairs will occur with equal frequency overall.

In order to have an equal frequency of occurrence for all nine detector setting pairs, let detector setting pair 11 occur with twice the frequency of 22 and 33 for RRG/GGR (row 2), detector setting pair 22 occur with twice the frequency of 11 and 33 for RGR/GRG (row 3), and detector setting pair 33 occur with twice the frequency of 22 and 11 for RGG/GRR (row 4). Then, we will have accounted for quantum-mechanical Facts 1 (column 3) and 2 (column 4) of the Mermin device using instruction sets with all nine detector setting pairs occurring with equal frequency overall.
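To make the arithmetic in Table 1 explicit, here is a minimal Python sketch (my own check, encoding the relative frequencies described above rather than anything beyond them) that verifies Fact 1, Fact 2, and the overall uniformity of the nine setting pairs:

```python
from fractions import Fraction

# Instruction sets grouped as in Table 1 (each pair produced with equal frequency).
groups = {
    "RRG/GGR": ["RRG", "GGR"],
    "RGR/GRG": ["RGR", "GRG"],
    "RGG/GRR": ["RGG", "GRR"],
}

# Relative frequencies of the nine detector setting pairs for each group, per the
# text: two of the case (b) pairs and one of the case (a) pairs are doubled.
weights = {
    "RRG/GGR": {"11": 2, "22": 1, "33": 1, "12": 1, "21": 1, "13": 1, "31": 1, "23": 2, "32": 2},
    "RGR/GRG": {"11": 1, "22": 2, "33": 1, "12": 2, "21": 2, "13": 1, "31": 1, "23": 1, "32": 1},
    "RGG/GRR": {"11": 1, "22": 1, "33": 2, "12": 1, "21": 1, "13": 2, "31": 2, "23": 1, "32": 1},
}

def outcome(instr, setting):
    # The k-th letter of the instruction set is the outcome for setting k.
    return instr[int(setting) - 1]

for name, members in groups.items():
    same, total = Fraction(0), Fraction(0)
    for instr in members:
        for pair, w in weights[name].items():
            a, b = pair[0], pair[1]
            if a == b:
                # Fact 1 holds automatically: both particles carry the same instruction set.
                assert outcome(instr, a) == outcome(instr, b)
            else:
                total += w
                if outcome(instr, a) == outcome(instr, b):
                    same += w
    print(name, "case (b) same-color fraction:", same / total)  # expect 1/4 (Fact 2)

# Overall frequency of each setting pair, summed over the three groups: uniform.
overall = {pair: sum(weights[g][pair] for g in groups) for pair in weights["RRG/GGR"]}
print("overall setting-pair weights:", overall)
```

Running it prints a same-color fraction of 1/4 for the case (b) trials of each group and equal overall weights for all nine setting pairs, as claimed.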
Since the instruction set (hidden variable values of the particles) in each trial of the experiment cannot be known by Alice and Bob, they do not suspect any violation of Statistical Independence. That is, they faithfully reproduced the same QM state in each trial of the experiment and made their individual measurements randomly and independently, so that measurement outcomes for each detector setting pair represent roughly ##\frac{1}{9}## of all the data. Indeed, Alice and Bob would say their experiment obeyed Statistical Independence, i.e., there is no (visible) correlation between what the source produced in each trial and how Alice and Bob chose to make their measurement in each trial.
Here is a recent (2020) argument against such violations of Statistical Independence by Eddy Chen. And, here is a recent (2020) argument that superdeterminism is “fine-tuned” by Indrajit Sen and Antony Valentini. So, the idea is contested in the foundations community. In response, Vance, Sabine, and Palmer recently (2022) proposed a different version of superdeterminism here. Thinking dynamically (which they don’t — more on that later), one could say the previous version of superdeterminism has the instruction sets controlling Alice and Bob’s measurement choices (Table 1). The new version (called “supermeasured theory”) has Alice and Bob’s measurement choices controlling the instruction sets. That is, each instruction set is only measured in one of the nine measurement pairs (Table 2). Indeed, there are 72 instruction sets for the 72 trials of the experiment shown in Table 2. That removes the complaint about superdeterminism being “conspiratorial” or “fine-tuned” or “violating free will.”
Table 2
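Since I cannot reproduce Table 2 itself here, the following sketch (my own illustrative construction, not necessarily the actual assignment in the paper) shows one way a supermeasured table of 72 trials could be populated: eight trials for each of the nine setting pairs, with the instruction set in each trial chosen in view of that trial's settings so that Fact 1 holds in every case (a) trial and exactly 1/4 of the case (b) trials for each setting pair give same colors (Fact 2).

```python
from itertools import product

# Illustrative "supermeasured" assignment (my own construction): the settings of
# each trial determine which instruction set the source emits for that trial.
INSTR = ["RRG", "RGR", "GRR", "GGR", "GRG", "RGG"]

def same_color(instr, a, b):
    return instr[int(a) - 1] == instr[int(b) - 1]

table = []  # 72 rows of (Alice setting, Bob setting, instruction set)
for a, b in product("123", repeat=2):
    matching = [s for s in INSTR if same_color(s, a, b)]
    differing = [s for s in INSTR if not same_color(s, a, b)]
    if a == b:
        # Case (a): every instruction set gives same colors (Fact 1); cycle through them.
        chosen = [INSTR[i % len(INSTR)] for i in range(8)]
    else:
        # Case (b): exactly 2 of 8 trials use a "matching" set -> 1/4 same colors (Fact 2).
        chosen = [matching[i % len(matching)] for i in range(2)] + \
                 [differing[i % len(differing)] for i in range(6)]
    table += [(a, b, s) for s in chosen]

case_a = [row for row in table if row[0] == row[1]]
case_b = [row for row in table if row[0] != row[1]]
assert all(same_color(s, a, b) for a, b, s in case_a)                      # Fact 1
assert sum(same_color(s, a, b) for a, b, s in case_b) == len(case_b) // 4  # Fact 2
print(len(table), "trials; Facts 1 and 2 satisfied")
```

Only six instruction-set types exist, so the 72 trial-by-trial instruction sets necessarily reuse those types; the point is that which type appears in a given trial is correlated with that trial's detector settings.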
Again, that means you need information from the future controlling the instruction set sent from the source, if you're thinking dynamically. However, Vance et al. do not think dynamically, writing:
In the supermeasured models that we consider, the distribution of hidden variables is correlated with the detector settings at the time of measurement. The settings do not cause the distribution. We prefer to use find [sic] Adlam’s terms—that superdeterministic/supermeasured theories apply an “atemporal” or “all-at-once” constraint—more apt and more useful.
Indeed, they voice collectively the same sentiment about retrocausality that Sabine voiced alone in her quote above. They write:
In some parts of the literature, authors have tried to distinguish two types of theories which violate Bell-SI. Those which are superdetermined, and those which are retrocausal. The most naive form of this (e.g. [6]) seems to ignore the prior existence of the measurement settings, and confuses a correlation with a causation. More generally, we are not aware of an unambiguous definition of the term “retrocausal” and therefore do not want to use it.
In short, there does seem to be an emerging consensus between the camps calling themselves superdeterministic and retrocausal that the best way to view violations of Statistical Independence is in “all-at-once” fashion as in Geroch’s quote:
There is no dynamics within space-time itself: nothing ever moves therein; nothing happens; nothing changes. In particular, one does not think of particles as moving through space-time, or as following along their world-lines. Rather, particles are just in space-time, once and for all, and the world-line represents, all at once, the complete life history of the particle.
Regardless of the terminology, I would point out that Sabine is not merely offering an interpretation of QM; she is proposing the existence of a more fundamental (deterministic) theory for which QM is a statistical approximation. In this paper, she even suggests “what type of experiment has the potential to reveal deviations from quantum mechanics.” Specifically:
This means concretely that one should make measurements on states prepared as identically as possible with devices as small and cool as possible in time-increments as small as possible.
According to this article in New Scientist (published in May 2021):
The good news is that Siddharth Ghosh at the University of Cambridge has just the sort of set-up that Hossenfelder needs. Ghosh operates nano-sensors that can detect the presence of electrically charged particles and capture information about how similar they are to each other, or whether their captured properties vary at random. He plans to start setting up the experiment in the coming months.
We’ll see what the experiments tell us.
PhD in general relativity (1987), researching foundations of physics since 1994. Coauthor of “Beyond the Dynamical Universe” (Oxford UP, 2018).
https://www.physicsforums.com/threa…elers-1986-paper-comments.952665/post-6056948
Can you give any links to threads/posts?
I didn't mean that you are wrong, but rather the statements by @RUTA. We have had extended discussions about this repeatedly!
"Impart quantum spin" is too narrow; it should be "exchange angular momentum". Quantum spin can be inter-converted with other forms of angular momentum.
I would be interested in seeing any references in the literature to analyses of measurement interactions that address this question.
I don't know. The point I have made is not one I have seen addressed in the literature. But that doesn't make it wrong.
I don't know how often we have discussed these wrong statements in the forum. Should this really be part of the Insights?
How do you know? You're not measuring the exchange of angular momentum with the environment. That doesn't mean you can assume it doesn't happen. It means you don't know.
I don't know where you're getting this from. There can't be any experimental uncertainty in something that's not being measured. The fact that measurement involves interaction between the measured system and the measuring device is basic QM. But it does not imply that all aspects of that interaction are captured in the measurement result. In fact they practically never are.
The Bell spin states obtain due to conservation of spin angular momentum without regard to any loss to the environment. Therefore, the theoretical results I shared are independent of experimental uncertainties, which is what you're trying to invoke.
Sorry, these statements are simply false as a matter of what actually happens in an experiment. Measurement involves interaction between the measured system and the measuring device. That interaction can exchange conserved quantities. So it is simply physically invalid to only look at the measured systems when evaluating conservation laws.
Look at a Bell spin triplet state in the symmetry plane. When Alice and Bob both measure in the same direction, they both get the same outcome, +1 or -1. That is due to conservation of spin angular momentum. Now suppose Bob measures at an angle ##\theta## with respect to Alice and they do many trials of the experiment. When Alice partitions the data according to her +1 or -1 results, she expects Bob to measure ##+\cos{\theta}## or ##-\cos{\theta}##, respectively, because she knows he would have also measured +1 or -1 if he had measured in her direction. Therefore, she knows his true, underlying value of spin angular momentum is +1 or -1 along her measurement direction, so he should be measuring the projection of that true, underlying value along his measurement direction at ##\theta## to conserve spin angular momentum. Of course, Bob can partition the data according to his ##\pm 1## equivalence relation and say it is Alice who should be measuring ##\pm \cos{\theta}## in order to conserve spin angular momentum. It is impossible to conserve spin angular momentum exactly according to either Alice or Bob because they both always measure ##\pm 1## (in accord with the relativity principle), never a fraction. However, their results do average ##\pm \cos{\theta}## under these data partitions. It has nothing to do with momentum transfer with the measurement device. All of this follows strictly from the Bell spin state formalism.
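A quick numerical check of the "average-only" claim (a minimal sketch of my own using only the standard Bell-state formalism, nothing about the measurement devices): for the triplet state ##(|uu\rangle + |dd\rangle)/\sqrt{2}## with both measurements in the x-z plane, conditioning on Alice's +1 outcome along her direction gives Bob a conditional average of ##+\cos{\theta}## along his direction at angle ##\theta##, even though his individual outcomes are always ##\pm 1##.

```python
import numpy as np

# Bell spin triplet state (|uu> + |dd>)/sqrt(2), measurements in the x-z plane.
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
psi = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2)

def spin(theta):
    # Spin operator along a direction at angle theta from z in the x-z plane.
    return np.cos(theta) * sigma_z + np.sin(theta) * sigma_x

theta = 0.7  # Bob's angle relative to Alice (radians), arbitrary choice

# Project onto Alice's +1 outcome along z, then compute Bob's conditional average.
P_plus = np.kron((I2 + sigma_z) / 2, I2)
post = P_plus @ psi
post = post / np.sqrt(np.vdot(post, post).real)
bob_avg = np.vdot(post, np.kron(I2, spin(theta)) @ post).real

# Bob averages +cos(theta) when Alice gets +1 (and -cos(theta) when she gets -1,
# by symmetry), even though each of his outcomes is +1 or -1.
print(bob_avg, np.cos(theta))
```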
As I said, this can't be correct because during the measurement process angular momentum is exchanged between the measured particles, which the formalism you refer to describes, and the measuring devices and environment, which the formalism does not describe. So the formalism is incomplete and cannot support any claims about conservation laws.
My point has nothing to do with experimental uncertainty. It has to do with the fact that during measurement, the measured particles are open systems, not closed systems.
My claim is a mathematical fact that follows from the Bell state formalism alone. It has nothing to do with experimental uncertainty.
You are aware that the MWI is 100% deterministic, correct?
This is not what the MWI says. The "universe" in the MWI is the universal wave function, and there is always just one universal wave function. The wave function doesn't "split" when a measurement is made; that would violate unitary evolution, and the MWI says that the wave function always evolves in time by unitary evolution.
No, you need to show that a conservation law must be violated if the universe is not fully 100% deterministic because you are the one who is making that claim. I am simply pointing out that you have not shown that. You have simply assumed it, and you can't just assume it. You have to show it.
The rest of your post is irrelevant to mine because I did not say any of the things you are talking about.
The Standard Model is a quantum field theory. Certain quantum field states are described as "particles", but there are many quantum field states that cannot be described that way. The fundamental entities are fields.
You are completely ignoring Bell's Theorem. I realize that Bell himself has mentioned Superdeterminism (SD) as an "out" for his own theorem (as you point out). However, SD requires substantially more assumptions than the 3 you have above. In other words: unless you have substantially more (and progressively more outrageous) assumptions than those 3, at least one of those 3 must not hold true.
And I get tired of saying this, but: There is no candidate SD theory in existence. By this I mean: one which explains why any choice of measurement basis leads to a violation of a Bell Inequality, in any of the following scenarios:
a. Measurement basis does not vary between pairs. This is the most common Bell test, and violates a Bell inequality.
b. Measurement basis does vary:
i. By random selection, such as by computers or by radioactive samples. This too has been done, and violates a Bell inequality.
ii. By human choice (such as the Big Bell test); this too violates a Bell inequality.
If there were such a theory, it could easily be falsified by suitable variations on the above. Further, there is no particular rationale for invoking SD as an explanation for observed results in the area of entanglement, but nowhere else in all of science. You may as well claim that the "true" value of c is 2% higher than the observed value… due to Superdeterminism.
I don't think this claim can be asserted as fact at our current level of knowledge. When we make measurements on quantum systems, we bring into play huge sinks of energy and momentum (measuring devices and environments). But we don't measure the change in energy and momentum of the sinks. We only look at the measured systems. But if a measurement takes place, the measured systems are not closed systems and we should not in general expect them to obey conservation laws in isolation; they can exchange energy and momentum with measuring devices and environments. To know that conservation laws were violated we would have to include the changes in energy and momentum of the measuring devices and environments. But we don't. So I don't see that we have any basis to assert what you assert in the above quote. All we can say is that we have no way of testing conservation laws for such cases at our current level of technology.
No, it doesn't. Events that are not pre-determined can still happen in a way that obeys conservation laws.
This is not correct; field theories that do not contain any particles still have causality.
As I showed in this Insight, the indeterminism we have in QM is unavoidable according to the relativity principle. And, yes, that means conservation of spin angular momentum is not exact when Alice and Bob are making different measurements. Conservation holds only on average (Bob saying Alice must average her results and Alice saying the same about Bob) when they make different measurements.
https://www.physicsforums.com/threads/derivation-of-statistical-mechanics.1013629/
https://www.physicsforums.com/threads/interpretations-of-the-no-communication-theorem.1013630/
This thread is now reopened, with a reminder to please keep it focused on discussion of the article about superdeterminism referenced in the OP.
A thread split might be warranted here, yes.
For future reference, a better way to prompt that kind of consideration is the Report button.
Then what isn't in principle measurable by humans? Your basic rule seems to be that anything to which the Born rule applies is "in principle measurable by humans" by definition, which is arguing in a circle.
We have dozens of posts of pointless argument which has nothing to do with the original Insight.
It's so disrespectful, IMHO.
You have it backwards. My point is that there are many situations (like the orbit of the Moon 4 billion years ago) to which the Born rule can perfectly well be applied but which don't involve human measurements or observations.
Wikipedia is not a valid reference. You need to reference a textbook or peer-reviewed paper. (You do that for Boltzmann so that part is fine, although I don't have those books so I can't personally check the references.)
I don't think these claims are true. From posts others have made in this thread, I don't think I'm the only one with that opinion.
What reference are you using for your understanding of Gibbs' derivation of statistical mechanics? (And for that matter, Boltzmann's?)
No, they're not. This seems to be a fundamental disagreement we have. I don't think we're going to resolve it.
Yes, here is the explanatory sequence:
1. No preferred reference frame + h –> average-only projection for qubits
2. Average-only projection for qubits –> average-only conservation per the Bell states
3. Average-only conservation per the Bell states –> Tsirelson bound
In short, the Tsirelson bound obtains due to "conservation per no preferred reference frame".
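To make the sequence concrete in standard notation (my paraphrase, not a quote): a qubit prepared spin-up along ##\hat{z}## but measured along ##\hat{n}## at angle ##\theta## always yields ##\pm 1##, never a fraction, yet
$$\langle\sigma_{\hat{n}}\rangle = \cos{\theta},$$
so the projection holds only on average (step 1). For the Bell spin states measured in the symmetry plane, this gives correlations of the form ##\pm\cos{\theta_{AB}}##, i.e., conservation that holds only on average across Alice's and Bob's data partitions (step 2). Plugging the ##+\cos{\theta_{AB}}## case into the CHSH combination with ##a = 0^\circ##, ##a' = 90^\circ##, ##b = 45^\circ##, ##b' = 135^\circ## gives
$$\langle ab\rangle - \langle ab'\rangle + \langle a'b\rangle + \langle a'b'\rangle = 2\sqrt{2},$$
which saturates the Tsirelson bound (step 3).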
Neither of these looks right to me.
It is true that "the system is always and only in one pure state". And if we could measure with exact precision which state it was in, at any instant, according to classical physics, we would know its state for all time, since the dynamics are fully deterministic.
However, we can't measure the system's state with exact precision. In fact, we can't measure its microscopic state (the individual positions and velocities of all the particles) at all. We can only measure macroscopic variables like temperature, pressure, and volume. So in order to make predictions about what the system will do, we have to coarse grain the phase space into "cells", where each cell represents a set of phase space points that all have the same values for the macroscopic variables we are measuring. Then, roughly speaking, we build theoretical models of the system, for the purpose of making predictions, using these coarse grained cells instead of individual phase space points: we basically assume that, at an instant of time where the macroscopic variables have particular values, the system's exact microscopic state is equally likely to be any of the phase space points inside the cell that corresponds to those values for the macroscopic variables. That gives us a distribution and enables us to do statistics.
I don't see how this can be true since the classical equations of motion are fully deterministic. A trajectory in phase space is a 1-dimensional curve (what you describe as "delta functions evolving to delta functions"), it does not start out as a 1-dimensional curve but then somehow turn into a 2-dimensional area.
I think this description is a little off. What your series of pictures show is a series of "snapshots" at single instants of time of one "cell" of a coarse graining of the phase space (i.e., all of the phase space points in the cell have the same values for macroscopic variables like temperature at that instant of time). At ##t = 0## the cell looks nice and neat and easy to distinguish from the rest of the phase space even with measurements of only finite precision (the exact location of the boundary of the cell will be uncertain, but the boundary is simple and that uncertainty doesn't have too much practical effect). As time evolution proceeds, however, ergodicity (I think that's the right term) causes the shape of the cell in phase space to become more and more convoluted and makes it harder and harder to distinguish, by measurements with only finite precision, what part of the phase space is in the cell and what part is not.
Yes, there is. The blue region in his picture is not a single trajectory. It's a set of phase space points that correspond to one "cell" of a coarse graining of the phase space at a single instant of time. See above.
That would be more arboreocentric!
In other words, your claim is not "standard QM". It's your opinion.
No, that's not what I have been claiming. I have been claiming that "measurement" is not limited to experiments run by humans; an object like the Moon is "measuring" itself constantly, whether a human looks at it or not.
This doesn't change anything I have said.
I don't think we're going to make any further progress in this discussion.
You mentioned ergodicity first. We obviously disagree on the foundations of classical statistical mechanics, even on the meaning of words such as "ergodicity" and "coarse graining", so I think it's important to clear these things up.
No it doesn't. The final pink region has the same area as the initial one, it covers only a fraction of the whole gray disc. It's only if you look at the picture with blurring glasses (which is coarse graining) that it looks as if the whole disc is covered.
Except that it doesn't.
https://www.jstor.org/stable/1215826?seq=1#page_scan_tab_contents
https://arxiv.org/abs/1103.4003
https://arxiv.org/pdf/cond-mat/0105242v1.pdf
No, that's coarse graining, not ergodicity. Ergodicity involves an average over a long period of time, while the fourth picture shows filling the whole phase-space volume at one time. And it fills the whole space only in the coarse grained sense.
I see, thanks!
It seems that we disagree on what coarse graining is, so let me explain how I see it, which indeed agrees with all textbooks I am aware of, as well as with the view of Jaynes mentioned in your Wikipedia quote. Coarse graining is not something that happens; it's not a process. It's just the fact that, in practice, even in classical physics we cannot measure position and momentum with perfect precision. Coarse graining is the reason why we use statistical physics in the first place. Hence there is always some practical uncertainty in the phase space. The picture shows how an initial small uncertainty evolves into an uncertainty that, upon coarse graining, looks like a larger uncertainty.
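To illustrate the point with a toy model (my own sketch, not the figure under discussion): under an area-preserving shear, a small blob in a two-dimensional "phase space" keeps its fine-grained area forever, but the number of coarse-grained cells it touches keeps growing, so with finite measurement resolution it looks as if it spreads over the whole region.

```python
import numpy as np

# Toy model: a small blob of points in (r, phi) coordinates evolves under
# differential rotation, an area-preserving shear in the (r, phi) plane.
rng = np.random.default_rng(0)
n = 20000
r = rng.uniform(0.50, 0.55, n)    # small initial spread in "radius"
phi = rng.uniform(0.0, 0.1, n)    # small initial spread in "angle"

def evolve(r, phi, t):
    # Rotation rate depends on radius, so the blob shears into a thin filament.
    return r, phi + 5.0 * r * t

def coarse_cells(r, phi, nr=20, nphi=20):
    # Number of occupied cells of a coarse (r, phi) grid covering 0.4 <= r <= 0.7.
    ir = np.clip(((r - 0.4) / 0.3 * nr).astype(int), 0, nr - 1)
    iphi = np.clip(((phi % (2 * np.pi)) / (2 * np.pi) * nphi).astype(int), 0, nphi - 1)
    return len(set(zip(ir.tolist(), iphi.tolist())))

for t in [0, 10, 100, 1000]:
    rt, phit = evolve(r, phi, t)
    # Occupied coarse cells grow with t, though the fine-grained area is fixed.
    print(t, coarse_cells(rt, phit))
```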
By Gibbs H-theorem, I guess you mean H-theorem based on Gibbs entropy, rather than Boltzmann entropy. But I didn't know that Gibbs H-theorem in classical statistical mechanics does not work for the reasons you indicated. Can you give a reference?
If your argument is correct, then an analogous argument should apply to classical statistical mechanics: The Hamiltonian evolution doesn't involve the coarse graining steps that are used in the Boltzmann H-theorem. A delta distribution in phase space remains a delta distribution at all times and does not decay into a thermal equilibrium. Would you then conclude that thermal equilibrium in classical statistical mechanics also requires fine tuning?
And where does that say the no communication theorem is anthropomorphic?
You apparently have a misunderstanding as to what "standard QM" is. "Standard QM" is not any particular interpretation. It is just the "shut up and calculate" math.
I have mentioned collapse interpretations, but not "objective collapse" ones specifically. Nothing I have said requires objective collapse. The only interpretation I have mentioned in this thread that I do not think is relevant to the discussion (because in it, measurements don't have single outcomes, and measurements having single outcomes seems to me to be a requirement for the discussion we are having) is the MWI.
No, it also depends on the no communication theorem. I know you claim that the no communication theorem is only about the Born rule, and that the Born rule is anthropomorphic. I just disagree with those claims. I have already explained why multiple times. You haven't addressed anything I've actually said. You just keep repeating the same claims over and over.
Do you mean this?
https://www.amazon.com/dp/1107002176/?tag=pfamazon01-20
Well, this, at least, is a new misstatement you hadn't made before. The reference you give does not say you can shift collapse "arbitrarily close to the present". It only says you can shift it "to the end of the quantum computation" (and if you actually read the details it doesn't even quite say that–it's talking about a particular result regarding equivalence of quantum computing circuits, not a general result about measurement and collapse). That's a much weaker claim (and is also irrelevant to this discussion). If a quantum computation happened in someone else's lab yesterday, I can't shift any collapses resulting from measurements made in that computation "arbitrarily close to the present".
I have already agreed multiple times that they are not the same thing.
This whole subthread started because you claimed that the no communication theorem was anthropomorphic. I am trying to get you to either drop that claim or be explicit about exactly what kind of QM interpretation you are using to make it. I have repeatedly stated what kind of interpretation I think is necessary to make that claim: a "consciousness causes collapse" interpretation. I have brought up other interpretations, such as "decoherence and collapse go together" only in order to show that, under those interpretations, the no communication theorem is not anthropomorphic, because measurement itself is not. Instead of addressing that point, which is the only one that's relevant to the subthread, you keep complaining about irrelevancies like whether or not collapse and decoherence are the same thing (of course they're not, and I never said they were).
At this point I'm not going to respond further since you seem incapable of addressing the actual point of the subthread we are having. I've already corrected other misstatements of yours and I'm not going to keep correcting the same ones despite the fact that you keep making them.
OK, so why do you not agree that Valentini's version fixes this?
The fine tuning is in the quantum equilibrium assumption. But maybe Valentini's version is able to overcome the fine tuning, at the cost (and benefit) of predicting violations of present quantum theory. There's a discussion by Wood and Spekkens on p. 21 of https://arxiv.org/abs/1208.4119.
I can't find this claim in the paper. Where exactly does the paper say that?
So where exactly is fine tuning in the Bohmian theory?