
Superdeterminism and the Mermin Device


Superdeterminism as a way to resolve the mystery of quantum entanglement is generally not taken seriously in the foundations community, as explained in this video by Sabine Hossenfelder (posted in Dec 2021). In the video, she argues that superdeterminism should be taken seriously; indeed, it is what quantum mechanics (QM) is screaming for us to understand about Nature. Using the twin-slit experiment, she explains that superdeterminism simply means the particles must have known at the outset of their trip whether to go through the right slit, the left slit, or both slits, based on what measurement was going to be done on them. Thus, she defines superdeterminism this way:

Superdeterminism: What a quantum particle does depends on what measurement will take place.

In Superdeterminism: A Guide for the Perplexed, she gives a somewhat more technical definition:

Theories that do not fulfill the assumption of Statistical Independence are called “superdeterministic” … .

where Statistical Independence in the context of Bell’s theory means:

There is no correlation between the hidden variables, which determine the measurement outcome, and the detector settings.

Sabine points out that Statistical Independence should not be equated with free will, and I agree; a discussion of free will in this context is a red herring and will be ignored here.

Since the behavior of the particle depends on a future measurement of that particle, Sabine writes:

This behavior is sometimes referred to as “retrocausal” rather than superdeterministic, but I have refused and will continue to refuse using this term because the idea of a cause propagating back in time is meaningless.

Ruth Kastner argues similarly here and we agree. Simply put, if the information is coming from the future to inform particles at the source about the measurements that will be made upon them, then that future is co-real with the present. Thus, we have a block universe and since nothing “moves” in a block universe, we have an “all-at-once” explanation per Ken Wharton. Huw Price and Ken say more about their distinction between superdeterminism and retrocausality here. I will focus on the violation of Statistical Independence and not worry about these semantics.

So, let me show you an example of the violation of Statistical Independence using Mermin’s instruction sets. If you are unfamiliar with the mystery of quantum entanglement illustrated by the Mermin device, read about the Mermin device in this Insight, “Answering Mermin’s Challenge with the Relativity Principle” before continuing.

In using instruction sets to account for quantum-mechanical Fact 1 (same-color outcomes in all trials when Alice and Bob choose the same detector settings, case (a)), Mermin notes that quantum-mechanical Fact 2 (same-color outcomes in ##\frac{1}{4}## of all trials when Alice and Bob choose different detector settings, case (b)) must be violated. In making this claim, Mermin is assuming that each instruction set produced at the source is measured with equal frequency in all nine detector setting pairs (11, 12, 13, 21, 22, 23, 31, 32, 33). That assumption is called Statistical Independence. Table 1 shows how Statistical Independence can be violated so as to allow instruction sets to reproduce quantum-mechanical Facts 1 and 2 per the Mermin device.
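Before turning to Table 1, it may help to check Mermin's counting claim directly. Here is a minimal sketch (added for illustration; the enumeration is mine, not Mermin's code) showing that, when all nine setting pairs occur with equal frequency, every mixed instruction set produces same-color outcomes in ##\frac{1}{3}## of the case (b) setting pairs, and RRR/GGG produce them in all of them, so no mixture of instruction sets can reach Fact 2's ##\frac{1}{4}##:

```python
# Enumerate the six case (b) setting pairs for each instruction set, assuming
# each pair is measured with equal frequency (Statistical Independence).
from itertools import product

def same_color_fraction(instruction, pairs):
    """Fraction of setting pairs for which both detectors flash the same color."""
    same = sum(instruction[a - 1] == instruction[b - 1] for a, b in pairs)
    return same / len(pairs)

case_b = [(a, b) for a, b in product([1, 2, 3], repeat=2) if a != b]

for instruction in ["RRG", "GGR", "RGR", "GRG", "RGG", "GRR"]:
    print(instruction, same_color_fraction(instruction, case_b))  # 1/3 each
# RRR and GGG give 1.0, so any mixture yields >= 1/3 same-color, never 1/4.
```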

Table 1: Statistical Independence

In row 2 column 2 of Table 1, you can see that Alice and Bob select (by whatever means) setting pairs 23 and 32 with twice the frequency of 21, 12, 31, and 13 in those case (b) trials where the source emits particles with the instruction set RRG or GGR (produced with equal frequency). Column 4 then shows that this disparity in the frequency of detector setting pairs would indeed allow our instruction sets to satisfy Fact 2. However, the detector setting pairs would not occur with equal frequency overall in the experiment and this would certainly raise red flags for Alice and Bob. Therefore, we introduce a similar disparity in the frequency of the detector setting pair measurements for RGR/GRG (12 and 21 frequencies doubled, row 3) and RGG/GRR (13 and 31 frequencies doubled, row 4), so that they also satisfy Fact 2 (column 4). Now, if these six instruction sets are produced with equal frequency, then the six case (b) detector setting pairs will occur with equal frequency overall.

In order to have an equal frequency of occurrence for all nine detector setting pairs, let detector setting pair 11 occur with twice the frequency of 22 and 33 for RRG/GGR (row 2), detector setting pair 22 occur with twice the frequency of 11 and 33 for RGR/GRG (row 3), and detector setting pair 33 occur with twice the frequency of 22 and 11 for RGG/GRR (row 4). Then, we will have accounted for quantum-mechanical Facts 1 (column 3) and 2 (column 4) of the Mermin device using instruction sets with all nine detector setting pairs occurring with equal frequency overall.
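The following sketch (added for illustration; the frequency weights are read off the description above, so treat them as an assumed reconstruction of Table 1's entries) verifies all three claims at once: Fact 1 holds in every case (a) trial, each instruction-set row yields Fact 2's ##\frac{1}{4}##, and the nine setting pairs end up equally frequent overall:

```python
# Check the doubled setting-pair frequencies described above for Table 1.
from collections import Counter
from itertools import product

def weights(instruction):
    """Relative frequency of each setting pair for a given instruction set."""
    doubled = {"RRG": ["23", "32", "11"], "GGR": ["23", "32", "11"],
               "RGR": ["12", "21", "22"], "GRG": ["12", "21", "22"],
               "RGG": ["13", "31", "33"], "GRR": ["13", "31", "33"]}[instruction]
    return {a + b: (2 if a + b in doubled else 1)
            for a, b in product("123", repeat=2)}

totals = Counter()
for inst in ["RRG", "GGR", "RGR", "GRG", "RGG", "GRR"]:
    w = weights(inst)
    same = lambda p: inst[int(p[0]) - 1] == inst[int(p[1]) - 1]
    case_a = {p: f for p, f in w.items() if p[0] == p[1]}
    case_b = {p: f for p, f in w.items() if p[0] != p[1]}
    fact1 = sum(f for p, f in case_a.items() if same(p)) / sum(case_a.values())
    fact2 = sum(f for p, f in case_b.items() if same(p)) / sum(case_b.values())
    print(inst, fact1, fact2)  # always 1.0 (Fact 1) and 0.25 (Fact 2)
    totals.update(w)

print(totals)  # all nine setting pairs share the same total weight
```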

Since the instruction set (hidden variable values of the particles) in each trial of the experiment cannot be known by Alice and Bob, they do not suspect any violation of Statistical Independence. That is, they faithfully reproduce the same QM state in each trial of the experiment and make their individual measurements randomly and independently, so that measurement outcomes for each detector setting pair represent roughly ##\frac{1}{9}## of all the data. Indeed, Alice and Bob would say their experiment obeyed Statistical Independence, i.e., there is no (visible) correlation between what the source produced in each trial and how Alice and Bob chose to make their measurements in each trial.

Here is a recent (2020) argument against such violations of Statistical Independence by Eddy Chen. And, here is a recent (2020) argument by Indrajit Sen and Antony Valentini that superdeterminism is “fine-tuned.” So, the idea is contested in the foundations community. In response, Hance, Hossenfelder, and Palmer recently (2022) proposed a different version of superdeterminism here. Thinking dynamically (which they don’t — more on that later), one could say the previous version of superdeterminism has the instruction sets controlling Alice and Bob’s measurement choices (Table 1). The new version (called “supermeasured theory”) has Alice and Bob’s measurement choices controlling the instruction sets. That is, each instruction set is only measured in one of the nine measurement pairs (Table 2). Indeed, there are 72 instruction sets for the 72 trials of the experiment shown in Table 2. That removes the complaint about superdeterminism being “conspiratorial” or “fine-tuned” or “violating free will.”

Table 2
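Table 2's 72 entries are not reproduced here, but the construction is easy to illustrate with a hypothetical assignment (an assumption for illustration, not the paper's actual table): give each of the nine setting pairs 8 of the 72 trials, and for each case (b) pair pick 2 instruction sets that agree there and 6 that disagree:

```python
# A hypothetical 72-trial assignment in the supermeasured spirit: the setting
# pair fixes which instruction sets get measured, trivially giving Facts 1 and 2.
from itertools import product

mixed = ["RRG", "GGR", "RGR", "GRG", "RGG", "GRR"]

def agrees(inst, pair):
    return inst[int(pair[0]) - 1] == inst[int(pair[1]) - 1]

for pair in (a + b for a, b in product("123", repeat=2)):
    if pair[0] == pair[1]:                        # case (a): any sets work
        trials = mixed[:2] * 4                    # 8 arbitrary instruction sets
    else:                                         # case (b): force the 1/4 rate
        agree = [i for i in mixed if agrees(i, pair)]        # exactly 2 sets
        disagree = [i for i in mixed if not agrees(i, pair)]
        trials = agree + (disagree * 2)[:6]       # 2 agreeing + 6 disagreeing
    frac = sum(agrees(i, pair) for i in trials) / len(trials)
    print(pair, frac)  # 1.0 for case (a) pairs, 0.25 for case (b) pairs
```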

Again, that means you need information from the future controlling the instruction set sent from the source, if you’re thinking dynamically. However, Hance et al. do not think dynamically, writing:

In the supermeasured models that we consider, the distribution of hidden variables is correlated with the detector settings at the time of measurement. The settings do not cause the distribution. We prefer to use find [sic] Adlam’s terms—that superdeterministic/supermeasured theories apply an “atemporal” or “all-at-once” constraint—more apt and more useful.

Indeed, they voice collectively the same sentiment about retrocausality that Sabine voiced alone in her quote above. They write:

In some parts of the literature, authors have tried to distinguish two types of theories which violate Bell-SI. Those which are superdetermined, and those which are retrocausal. The most naive form of this (e.g. [6]) seems to ignore the prior existence of the measurement settings, and confuses a correlation with a causation. More generally, we are not aware of an unambiguous definition of the term “retrocausal” and therefore do not want to use it.

In short, there does seem to be an emerging consensus between the camps calling themselves superdeterministic and retrocausal that the best way to view violations of Statistical Independence is in “all-at-once” fashion, as in Geroch’s quote:

There is no dynamics within space-time itself: nothing ever moves therein; nothing happens; nothing changes. In particular, one does not think of particles as moving through space-time, or as following along their world-lines. Rather, particles are just in space-time, once and for all, and the world-line represents, all at once, the complete life history of the particle.

Regardless of the terminology, I would point out that Sabine is not merely offering an interpretation of QM; she is proposing the existence of a more fundamental (deterministic) theory for which QM is a statistical approximation. In this paper, she even suggests “what type of experiment has the potential to reveal deviations from quantum mechanics.” Specifically:

This means concretely that one should make measurements on states prepared as identically as possible with devices as small and cool as possible in time-increments as small as possible.

According to this article in New Scientist (published in May 2021):

The good news is that Siddharth Ghosh at the University of Cambridge has just the sort of set-up that Hossenfelder needs. Ghosh operates nano-sensors that can detect the presence of electrically charged particles and capture information about how similar they are to each other, or whether their captured properties vary at random. He plans to start setting up the experiment in the coming months.

We’ll see what the experiments tell us.

140 replies
  1. vanhees71 says:

    PeterDonis said

    I don't know. The point I have made is not one I have seen addressed in the literature. But that doesn't make it wrong.

    I didn't mean that you are wrong, but the statements by @RUTA. We had extended discussions about this repeatedly!

  2. PeterDonis says:

    Lord Jestocost said

    The question is: Does a quantum spin "exhibition" actually impart quantum spin to the surroundings?

    "Impart quantum spin" is too narrow; it should be "exchange angular momentum". Quantum spin can be inter-converted with other forms of angular momentum.

    I would be interested in seeing any references in the literature to analyses of measurement interactions that address this question.

  3. PeterDonis says:

    vanhees71 said

    Should this really be part of the Insights?

    I don't know. The point I have made is not one I have seen addressed in the literature. But that doesn't make it wrong.

  4. vanhees71 says:

    PeterDonis said

    Sorry, these statements are simply false as a matter of what actually happens in an experiment. Measurement involves interaction between the measured system and the measuring device. That interaction can exchange conserved quantities. So it is simply physically invalid to only look at the measured systems when evaluating conservation laws.

    I don't know how often we have discussed these wrong statements in the forum. Should this really be part of the Insights?

  5. PeterDonis says:

    RUTA said

    The Bell spin states obtain due to conservation of spin angular momentum without regard to any loss to the environment.

    How do you know? You're not measuring the exchange of angular momentum with the environment. That doesn't mean you can assume it doesn't happen. It means you don't know.

    RUTA said

    the theoretical results I shared are independent of experimental uncertainties, which is what you're trying to invoke.

    I don't know where you're getting this from. There can't be any experimental uncertainty in something that's not being measured. The fact that measurement involves interaction between the measured system and the measuring device is basic QM. But it does not imply that all aspects of that interaction are captured in the measurement result. In fact they practically never are.

  6. RUTA says:

    PeterDonis said

    Sorry, these statements are simply false as a matter of what actually happens in an experiment. Measurement involves interaction between the measured system and the measuring device. That interaction can exchange conserved quantities. So it is simply physically invalid to only look at the measured systems when evaluating conservation laws.

    The Bell spin states obtain due to conservation of spin angular momentum without regard to any loss to the environment. Therefore, the theoretical results I shared are independent of experimental uncertainties, which is what you're trying to invoke.

  7. PeterDonis says:

    RUTA said

    It is impossible to conserve spin angular momentum exactly according to either Alice or Bob because they both always measure ##\pm 1## (in accord with the relativity principle), never a fraction. However, their results do average ##\pm \cos{\theta}## under these data partitions. It has nothing to do with momentum transfer with the measurement device.

    Sorry, these statements are simply false as a matter of what actually happens in an experiment. Measurement involves interaction between the measured system and the measuring device. That interaction can exchange conserved quantities. So it is simply physically invalid to only look at the measured systems when evaluating conservation laws.

  8. RUTA says:

    PeterDonis said

    As I said, this can't be correct because during the measurement process angular momentum is exchanged between the measured particles, which the formalism you refer to describes, and the measuring devices and environment, which the formalism does not describe. So the formalism is incomplete and cannot support any claims about conservation laws.

    My point has nothing to do with experimental uncertainty. It has to do with the fact that during measurement, the measured particles are open systems, not closed systems.

    Look at a Bell spin triplet state in the symmetry plane. When Alice and Bob both measure in the same direction, they both get the same outcome, +1 or -1. That is due to conservation of spin angular momentum. Now suppose Bob measures at an angle ##\theta## with respect to Alice and they do many trials of the experiment. When Alice partitions the data according to her +1 or -1 results, she expects Bob to measure ##+\cos{\theta}## or ##-\cos{\theta}##, respectively, because she knows he would have also measured +1 or -1 if he had measured in her direction. Therefore, she knows his true, underlying value of spin angular momentum is +1 or -1 along her measurement direction, so he should be measuring the projection of that true, underlying value along his measurement direction at ##\theta## to conserve spin angular momentum. Of course, Bob can partition the data according to his ##\pm 1## equivalence relation and say it is Alice who should be measuring ##\pm \cos{\theta}## in order to conserve spin angular momentum. It is impossible to conserve spin angular momentum exactly according to either Alice or Bob because they both always measure ##\pm 1## (in accord with the relativity principle), never a fraction. However, their results do average ##\pm \cos{\theta}## under these data partitions. It has nothing to do with momentum transfer with the measurement device. All of this follows strictly from the Bell spin state formalism.
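    As a direct check of the claim above, here is a minimal numpy sketch (added for illustration; it is not part of the original post) computing Bob's conditional average for the Bell triplet state ##(|uu\rangle + |dd\rangle)/\sqrt{2}## when Alice partitions the data on her +1 outcomes; the average is ##+\cos{\theta}## even though every individual outcome is ##\pm 1##:

    ```python
    # Bell triplet state (|uu> + |dd>)/sqrt(2); Alice measures along z, Bob along
    # a direction at angle theta in the x-z plane. Bob's outcomes are always +/-1,
    # but his average over Alice's +1 partition is +cos(theta).
    import numpy as np

    theta = 0.7
    I2 = np.eye(2)
    sx = np.array([[0, 1], [1, 0]])
    sz = np.array([[1, 0], [0, -1]])
    sn = np.cos(theta) * sz + np.sin(theta) * sx    # Bob's spin observable

    psi = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|uu> + |dd>)/sqrt(2)
    P_plus = (I2 + sz) / 2                          # projector onto Alice's +1

    num = psi @ np.kron(P_plus, sn) @ psi           # <psi| P+ (x) S_n |psi>
    den = psi @ np.kron(P_plus, I2) @ psi           # probability Alice gets +1
    print(num / den, np.cos(theta))                 # both ~0.76484
    ```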

  9. PeterDonis says:

    RUTA said

    My claim is a mathematical fact that follows from the Bell state formalism alone.

    As I said, this can't be correct because during the measurement process angular momentum is exchanged between the measured particles, which the formalism you refer to describes, and the measuring devices and environment, which the formalism does not describe. So the formalism is incomplete and cannot support any claims about conservation laws.

    RUTA said

    It has nothing to do with experimental uncertainty.

    My point has nothing to do with experimental uncertainty. It has to do with the fact that during measurement, the measured particles are open systems, not closed systems.

  10. RUTA says:

    PeterDonis said

    I don't think this claim can be asserted as fact at our current level of knowledge. When we make measurements on quantum systems, we bring into play huge sinks of energy and momentum (measuring devices and environments). But we don't measure the change in energy and momentum of the sinks. We only look at the measured systems. But if a measurement takes place, the measured systems are not closed systems and we should not in general expect them to obey conservation laws in isolation; they can exchange energy and momentum with measuring devices and environments. To know that conservation laws were violated we would have to include the changes in energy and momentum of the measuring devices and environments. But we don't. So I don't see that we have any basis to assert what you assert in the above quote. All we can say is that we have no way of testing conservation laws for such cases at our current level of technology.

    My claim is a mathematical fact that follows from the Bell state formalism alone. It has nothing to do with experimental uncertainty.

  11. PeterDonis says:

    kclubb said

    Speculating that every time a “measurement” is made a new Universe comes into existence.

    This is not what the MWI says. The "universe" in the MWI is the universal wave function, and there is always just one universal wave function. The wave function doesn't "split" when a measurement is made; that would violate unitary evolution, and the MWI says that the wave function always evolves in time by unitary evolution.

  12. PeterDonis says:

    kclubb said

    What you need to show is an example where a conservation law was VIOLATED when the observation is made.

    No, you need to show that a conservation law must be violated if the universe is not fully 100% deterministic because you are the one who is making that claim. I am simply pointing out that you have not shown that. You have simply assumed it, and you can't just assume it. You have to show it.

    The rest of your post is irrelevant to mine because I did not say any of the things you are talking about.

  13. PeterDonis says:

    kclubb said

    all forces have corresponding particles – the Standard Model.

    The Standard Model is a quantum field theory. Certain quantum field states are described as "particles", but there are many quantum field states that cannot be described that way. The fundamental entities are fields.

  14. DrChinese says:

    kclubb said

    If we

    1. Assume conservation laws hold everywhere for all time and are exact
    2. Assume speed of light is universal.
    3. Assume causality depends on particles

    then the Universe MUST be pre-determined…

    You are completely ignoring Bell's Theorem. I realize that Bell himself has mentioned Superdeterminism (SD) as an "out" for his own theorem (as you point out). However, SD requires substantially more assumptions than the 3 you have above. In other words: unless you have substantially more (and progressively more outrageous) assumptions than those 3, then at least one of those 3 must not hold true.

    And I get tired of saying this, but: There is no candidate SD theory in existence. By this I mean: one which explains why any choice of measurement basis leads to a violation of a Bell Inequality, in any of the following scenarios:

    a. Measurement basis does not vary between pairs. This is the most common Bell test, and violates a Bell inequality.

    b. Measurement basis does vary:
    i. By random selection, such as by computers or by radioactive samples. This too has been done, and violates a Bell inequality.
    ii. By human choice (such as the Big Bell test). This too violates a Bell inequality.

    If there were such a theory, it could easily be falsified by suitable variations on the above. Further, there is no particular rationale to invoke SD as an explanation for observed results in the area of entanglement but nowhere else in all of science. You may as well claim that the “true” value of c is 2% higher than the observed value… due to Superdeterminism.

  15. PeterDonis says:

    RUTA said

    that means conservation of spin angular momentum is not exact when Alice and Bob are making different measurements. Conservation holds only on average

    I don't think this claim can be asserted as fact at our current level of knowledge. When we make measurements on quantum systems, we bring into play huge sinks of energy and momentum (measuring devices and environments). But we don't measure the change in energy and momentum of the sinks. We only look at the measured systems. But if a measurement takes place, the measured systems are not closed systems and we should not in general expect them to obey conservation laws in isolation; they can exchange energy and momentum with measuring devices and environments. To know that conservation laws were violated we would have to include the changes in energy and momentum of the measuring devices and environments. But we don't. So I don't see that we have any basis to assert what you assert in the above quote. All we can say is that we have no way of testing conservation laws for such cases at our current level of technology.

  16. PeterDonis says:

    kclubb said

    A non-pre-determined Universe needs to violate conservation laws somewhere.

    No, it doesn't. Events that are not pre-determined can still happen in a way that obeys conservation laws.

  17. RUTA says:

    kclubb said

    I am a novice here. But I have been studying about SD for about 8 months now. Going back to the basic issue Einstein had with non-locality…. No one seems to be addressing a point in this discussion – assume that causality cannot travel faster than the speed of light (we have never observed violation of speed of light, and without particles we have no causality).

    We see measurements that SEEM to violate locality, BUT we do have a way out by SD (John Bell himself said so). We do believe in symmetry laws, conservation of energy, momentum, etc. We CAN predict in a real sense where planets, baseballs, etc will be in the future. I think, prior to QM we would all agree that if all factors were considered in initial conditions and all particles (heat=photons, etc) could be taken into account we could predict a small system in its entirety. I have never seen an argument where someone says “conservation of energy is only approximate”, because they would need to demonstrate this experimentally.

    So given that conservation laws are all assumed to be exact (pre QM), what happened to the argument that all events are pre-determined? Not predictable, as that is impossible, but pre-determined given the belief that we have discovered all the laws of motion and with the assumption that particles are real (again, particles became non-real after QM; only when the wave function was available did we consider that balls are not real).

    So here is my question: If the Universe is not pre-determined, where did the differences in energy, momentum, etc go? A non-pre-determined Universe needs to violate conservation laws somewhere. Either we have not discovered all the laws, or there is a leak in Energy somewhere we have not discovered yet. Do we not believe our own laws of Physics?

    If we

    1. Assume conservation laws hold everywhere for all time and are exact
    2. Assume speed of light is universal.
    3. Assume causality depends on particles

    then the Universe MUST be pre-determined; there is no way around it that we have actually observed. NONE. SD, it seems to me, does not need to prove itself; it seems to me that SD must be disproven, as it is an obvious result of the above assumptions. Which of the 3 assumptions would we abandon?

    Now enter QM. We see entanglement experiments confirm QM predictions, but we also believe in the above 3 assumptions, then why do we need to introduce non-local interpretations at all – SD is the way the Universe works based on the 3 assumptions, what is the problem? Based on what I have read, the only argument seems to be the disbelief that we are not free to do Science. That is the only argument I have heard – that we FEEL that we are free to make decisions on our own and this somehow invalidates all our scientific evidence for SD.

    But this was a problem early on – yes we SEE that our laws work, we understand we cannot take into account all factors when trying to predict an outcome – but the ASSUMPTION underneath was that there are laws that govern the Universe and therefore, unless someone can demonstrate how these laws are violated the Universe is pre-determined down to the last photon.

    There are those who think we live in a simulation – pre-determined again. No evidence for that really, BUT the world view helps explain why we THINK we have free will. An AI living in a simulation may go through its life believing it had free will without ever realizing otherwise. If we ditch the FEELING that we are making free decisions, then SD is absolutely the simplest way to explain the seemingly non-local results of QM. Not “many worlds” which has no observational evidence. I do not see an alternative to SD that is consistent with all our laws and measurements, and the AI worldview easily explains at least ONE way we can be fooled into believing we have free will. But in any case, FEELINGS have been the bane of science forever.

    Non-local QM theories are not necessary, as far as I can tell, if SD is considered an option. If AIs and simulations had been around BEFORE QM was discovered I do not think that non-local theories would ever have been seriously considered.

    As I showed in this Insight, the indeterminism we have in QM is unavoidable according to the relativity principle. And, yes, that means conservation of spin angular momentum is not exact when Alice and Bob are making different measurements. Conservation holds only on average (Bob saying Alice must average her results and Alice saying the same about Bob) when they make different measurements.

  18. PeterDonis says:

    PeroK said

    It seems to me that poor Ruta's insight has been well and truly hijacked here.

    We have dozens of posts of pointless argument which has nothing to do with the original Insight.

    It's so disrespectful, IMHO.

    A thread split might be warranted here, yes.

    For future reference, a better way to prompt that kind of consideration is the Report button.

  19. PeterDonis says:

    Nullstein said

    The orbit of the moon is in principle measurable by humans

    Then what isn't in principle measurable by humans? Your basic rule seems to be that anything to which the Born rule applies is "in principle measurable by humans" by definition, which is arguing in a circle.

  20. PeroK says:
    It seems to me that poor Ruta's insight has been well and truly hijacked here.

    We have dozens of posts of pointless argument which has nothing to do with the original Insight.

    It's so disrespectful, IMHO.

  21. PeterDonis says:

    Nullstein said

    In order to disagree, you must be able to name one fact about a quantum system that is accessible for humans but not predicted by the Born rule.

    You have it backwards. My point is that there are many situations (like the orbit of the Moon 4 billion years ago) to which the Born rule can perfectly well be applied but which don't involve human measurements or observations.

  22. PeterDonis says:

    Nullstein said

    Just look at Wikipedia:

    Wikipedia is not a valid reference. You need to reference a textbook or peer-reviewed paper. (You do that for Boltzmann so that part is fine, although I don't have those books so I can't personally check the references.)

  23. PeterDonis says:

    Nullstein said

    Gibbs requires dynamic coarse graining by the laws of motion.

    Nullstein said

    this is why Gibbs' H-theorem doesn't work as a derivation of statistical mechanics

    I don't think these claims are true. From posts others have made in this thread, I don't think I'm the only one with that opinion.

    What reference are you using for your understanding of Gibbs' derivation of statistical mechanics? (And for that matter, Boltzmann's?)

  24. PeterDonis says:

    Nullstein said

    If the facts about a quantum system that can in principle be measured by humans are in 1 to 1 correspondence with the facts that are predicted by the Born rule (which they are)

    No, they're not. This seems to be a fundamental disagreement we have. I don't think we're going to resolve it.

  25. RUTA says:

    physika said

    An explanation for the "Tsirelson Bound"??
    :oops:

    "An explanation is a set of statements, a set of facts, which states the causes, context, and consequences of those facts. It may establish rules or laws."

    Yes, here is the explanatory sequence:

    1. No preferred reference frame + h → average-only projection for qubits
    2. Average-only projection for qubits → average-only conservation per the Bell states
    3. Average-only conservation per the Bell states → Tsirelson bound

    In short, the Tsirelson bound obtains due to "conservation per no preferred reference frame".

  26. PeterDonis says:

    Nullstein said

    Basically, there are two ways to arrive at statistical mechanics

    Neither of these looks right to me.

    It is true that "the system is always and only in one pure state". And if we could measure with exact precision which state it was in, at any instant, according to classical physics, we would know its state for all time, since the dynamics are fully deterministic.

    However, we can't measure the system's state with exact precision. In fact, we can't measure its microscopic state (the individual positions and velocities of all the particles) at all. We can only measure macroscopic variables like temperature, pressure, and volume. So in order to make predictions about what the system will do, we have to coarse grain the phase space into "cells", where each cell represents a set of phase space points that all have the same values for the macroscopic variables we are measuring. Then, roughly speaking, we build theoretical models of the system, for the purpose of making predictions, using these coarse grained cells instead of individual phase space points: we basically assume that, at an instant of time where the macroscopic variables have particular values, the system's exact microscopic state is equally likely to be any of the phase space points inside the cell that corresponds to those values for the macroscopic variables. That gives us a distribution and enables us to do statistics.
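    As a rough numerical illustration of this coarse-graining picture (added for illustration; Arnold's cat map is an assumed stand-in for the deterministic dynamics, not anything discussed in the thread), points starting inside one small cell spread over the phase space, and the coarse-grained entropy grows even though the area-preserving dynamics conserve the fine-grained volume:

    ```python
    # Evolve points from one small phase-space cell under Arnold's cat map
    # (deterministic, area-preserving) and watch the coarse-grained entropy grow.
    import numpy as np

    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 0.05, size=(20000, 2))   # one small initial cell
    grid = 20                                        # 20 x 20 coarse graining

    def coarse_entropy(points):
        """Shannon entropy of the coarse-grained cell occupation."""
        cells = np.floor(points * grid).astype(int)
        _, counts = np.unique(cells[:, 0] * grid + cells[:, 1], return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log(p)).sum()

    for step in range(8):
        print(step, round(coarse_entropy(pts), 3))   # grows toward log(400)
        x, y = pts[:, 0], pts[:, 1]
        pts = np.column_stack(((2 * x + y) % 1.0, (x + y) % 1.0))  # cat map
    ```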

  27. PeterDonis says:

    Nullstein said

    Gibbs' H-theorem requires intermediate coarse graining as a physical process, i.e. it must be in the equations of motion.

    I don't see how this can be true since the classical equations of motion are fully deterministic. A trajectory in phase space is a 1-dimensional curve (what you describe as "delta functions evolving to delta functions"); it does not start out as a 1-dimensional curve and then somehow turn into a 2-dimensional area.

  28. PeterDonis says:

    Demystifier said

    The picture shows how an initial small uncertainty evolves to an uncertainty that, upon coarse graining, looks like a larger uncertainty.

    I think this description is a little off. What your series of pictures show is a series of "snapshots" at single instants of time of one "cell" of a coarse graining of the phase space (i.e., all of the phase space points in the cell have the same values for macroscopic variables like temperature at that instant of time). At ##t = 0## the cell looks nice and neat and easy to distinguish from the rest of the phase space even with measurements of only finite precision (the exact location of the boundary of the cell will be uncertain, but the boundary is simple and that uncertainty doesn't have too much practical effect). As time evolution proceeds, however, ergodicity (I think that's the right term) causes the shape of the cell in phase space to become more and more convoluted and makes it harder and harder to distinguish, by measurements with only finite precision, what part of the phase space is in the cell and what part is not.

    Nullstein said

    There is no coarse graining in your picture.

    Yes, there is. The blue region in his picture is not a single trajectory. It's a set of phase space points that correspond to one "cell" of a coarse graining of the phase space at a single instant of time. See above.

  29. PeroK says:

    CoolMint said

    Pointless discussion. To know whether anything is anthropocentric is akin to knowing whether when a tree falls in a forest and no one is around to hear it it makes a sound.

    That would be more arboreocentric!

  30. PeterDonis says:

    Nullstein said

    I don't know if it does

    In other words, your claim is not "standard QM". It's your opinion.

    Nullstein said

    You claim that collapse happens all the time, independent of measurement.

    No, that's not what I have been claiming. I have been claiming that "measurement" is not limited to experiments run by humans; an object like the Moon is "measuring" itself constantly, whether a human looks at it or not.

    Nullstein said

    The no communication theorem involves many other things, but the crucial ingredient that makes it anthropocentric is the Born rule.

    This doesn't change anything I have said.

    I don't think we're going to make any further progress in this discussion.

  31. Demystifier says:

    Nullstein said

    Anyway, if you don't like the ergodic argument, why do you bring it up then? I'm only telling you that you argue based on ergodicity here.

    You mentioned ergodicity first. We obviously disagree on foundations of classical statistical mechanics, even on meaning of the words such as "ergodicity" and "coarse graining", so I think it's important to clear these things up.

  32. Demystifier says:

    Nullstein said

    It just shows the trajectory eventually filling the whole phase space.

    No it doesn't. The final pink region has the same area as the initial one, it covers only a fraction of the whole gray disc. It's only if you look at the picture with blurring glasses (which is coarse graining) that it looks as if the whole disc is covered.

  33. Demystifier says:

    Nullstein said

    What you are presenting is not coarse graining. This is the argument based on ergodic theory and it works fine.

    No, that's coarse graining, not ergodicity. Ergodicity involves an average over a long period of time, while the fourth picture shows filling the whole phase-space volume at one time. And it fills the whole space only in the coarse grained sense.

  34. Demystifier says:

    Nullstein said

    No, they are completely different derivations. Boltzmann's H-theorem is based on the Stosszahlansatz. Gibbs' H-theorem is based on coarse graining in phase space.

    I see, thanks!

    Nullstein said

    The theorem works in classical statistical physics, if you can supply a reason for why this coarse graining happens, i.e. there must be a physical process. The Liouville equation alone is reversible and therefore can't increase entropy. Similarly, the Bohmian equations of motion are reversible and can't increase entropy.

    It seems that we disagree on what coarse graining is, so let me explain how I see it, which indeed agrees with all textbooks I am aware of, as well as with the view of Jaynes mentioned in your wikipedia quote. Coarse graining is not something that happens; it's not a process. It's just a fact that, in practice, even in classical physics we cannot measure position and momentum with perfect precision. The coarse graining is the reason why we use statistical physics in the first place. Hence there is always some practical uncertainty in the phase space. The picture shows how an initial small uncertainty evolves to an uncertainty that, upon coarse graining, looks like a larger uncertainty.

    [Attachment 298912: the phase-space picture described above]

  35. Demystifier says:

    Nullstein said

    No, because this is not Boltzmann's H-theorem. Valentini appeals to Gibbs' H-theorem, which is well-understood not to work without a physical mechanism for the coarse-graining.

    By Gibbs H-theorem, I guess you mean H-theorem based on Gibbs entropy, rather than Boltzmann entropy. But I didn't know that Gibbs H-theorem in classical statistical mechanics does not work for the reasons you indicated. Can you give a reference?

  36. Demystifier says:

    Nullstein said

    Because it doesn't work. Bohmian time evolution doesn't involve the coarse graining steps that are used in his calculation. A delta distribution remains a delta distribution at all times and does not decay into ##|\Psi|^2##.

    If your argument is correct, then an analogous argument should apply to classical statistical mechanics: The Hamiltonian evolution doesn't involve the coarse graining steps that are used in the Boltzmann H-theorem. A delta distribution in phase space remains a delta distribution at all times and does not decay into a thermal equilibrium. Would you then conclude that thermal equilibrium in classical statistical mechanics also requires fine tuning?

  37. PeterDonis says:

    Nullstein said

    I am using standard textbook quantum mechanics as described e.g. in Landau.

    And where does that say the no communication theorem is anthropomorphic?

    Nullstein said

    you are apparently not using standard QM. Instead you seem to be arguing with respect to an objective collapse model.

    You apparently have a misunderstanding as to what "standard QM" is. "Standard QM" is not any particular interpretation. It is just the "shut up and calculate" math.

    I have mentioned collapse interpretations, but not "objective collapse" ones specifically. Nothing I have said requires objective collapse. The only interpretation I have mentioned in this thread that I do not think is relevant to the discussion (because in it, measurements don't have single outcomes, and measurements having single outcomes seems to me to be a requirement for the discussion we are having) is the MWI.

    Nullstein said

    My argument depends only on the Born rule

    No, it also depends on the no communication theorem. I know you claim that the no communication theorem is only about the Born rule, and that the Born rule is anthropomorphic. I just disagree with those claims. I have already explained why multiple times. You haven't addressed anything I've actually said. You just keep repeating the same claims over and over.

    Nullstein said

    You can read up on this e.g. in Nielsen, Chuang

    Do you mean this?

    https://www.amazon.com/dp/1107002176/?tag=pfamazon01-20

  38. PeterDonis says:

    Nullstein said

    due to the deferred measurement principle, it can be shifted arbitrarily close to the present.

    Well, this, at least, is a new misstatement you hadn't made before. The reference you give does not say you can shift collapse "arbitrarily close to the present". It only says you can shift it "to the end of the quantum computation" (and if you actually read the details it doesn't even quite say that–it's talking about a particular result regarding equivalence of quantum computing circuits, not a general result about measurement and collapse). That's a much weaker claim (and is also irrelevant to this discussion). If a quantum computation happened in someone else's lab yesterday, I can't shift any collapses resulting from measurements made in that computation "arbitrarily close to the present".

  39. PeterDonis says:

    Nullstein said

    collapse has nothing to do with decoherence

    I have already agreed multiple times that they are not the same thing.

    Nullstein said

    I don't see why you bring it up.

    This whole subthread started because you claimed that the no communication theorem was anthropomorphic. I am trying to get you to either drop that claim or be explicit about exactly what kind of QM interpretation you are using to make it. I have repeatedly stated what kind of interpretation I think is necessary to make that claim: a "consciousness causes collapse" interpretation. I have brought up other interpretations, such as "decoherence and collapse go together" only in order to show that, under those interpretations, the no communication theorem is not anthropomorphic, because measurement itself is not. Instead of addressing that point, which is the only one that's relevant to the subthread, you keep complaining about irrelevancies like whether or not collapse and decoherence are the same thing (of course they're not, and I never said they were).

    At this point I'm not going to respond further since you seem incapable of addressing the actual point of the subthread we are having. I've already corrected other misstatements of yours and I'm not going to keep correcting the same ones despite the fact that you keep making them.

  40. Demystifier says:

    Nullstein said

    See atyy's post (although I don't agree that Valentini's version fixes this).

    OK, so why do you not agree that Valentini's version fixes this?

  41. atyy says:

    Demystifier said

    So where exactly is fine tuning in the Bohmian theory?

    The fine tuning is in the quantum equilibrium assumption. But maybe Valentini's version is able to overcome the fine tuning, at the cost and benefit of predicting violations of present quantum theory without fine tuning. There's a discussion by Wood and Spekkens on p21 of https://arxiv.org/abs/1208.4119.

  42. Demystifier says:

    Nullstein said

    As can be seen in this paper, such a non-local explanation of entanglement requires even more fine tuning than superdeterminism.

    I can't find this claim in the paper. Where exactly does the paper say that?

  43. Demystifier says:

    Nullstein said

    It doesn't matter, though, whether it is well understood. That doesn't make it any less fine tuned.

    So where exactly is fine tuning in the Bohmian theory?

