Assumptions of the Bell theorem

In summary: The aim of this thread is to make a list of all the additional assumptions that are necessary to prove the Bell theorem. An additional aim is to make a list of the assumptions that are used in some but not all versions of the theorem, and so are not really necessary. The list of necessary and unnecessary assumptions is preliminary, so I invite others to supplement and correct the list.
  • #561
Lord Jestocost said:
Maybe, quantum theory is an incomplete theory. Maybe, however, we merely have this feeling because quantum theory forces us to “describe” something solely in a pure mathematical formalism because we are not equipped with adequate mental images.
This is what I love about the agent abstraction. It seems perfectly suited to describe all the weirdness of QM in an intuitive way. Why do I say this? Because a human is a prime example of an interacting, inferring agent, and we have emergent social rules. This is a rich source of mental abstractions that fits me perfectly. Try to think about how social rules emerge, are supported, and evolve, and how the causality of these interactions works. Then try to look at QM, and there is a good chance you will see it in a new light. At least that's my story.

I remember well the journey from my first QM class, when I, during one particular lecture, realized that my whole prior worldview was deeply wrong.

(this isn't a joke)

/Fredrik
 
  • #562
Lord Jestocost said:
Maybe, quantum theory is an incomplete theory. Maybe, however, we merely have this feeling because quantum theory forces us to “describe” something solely in a pure mathematical formalism because we are not equipped with adequate mental images.

Despite its ineluctably probabilistic nature, QM can be understood to be as complete as possible given that everyone has to measure the same value for Planck's constant h. Thus, there exists a principle account of QM equivalent to that for SR whereby QM must be probabilistic with all of its mysterious properties. If you are like most physicists who have long since given up on causal mechanisms for time dilation and length contraction in SR, then given this exact same principle account of QM you should also be willing to give up on causal mechanisms responsible for QM. This 3-min video "Beyond Causal Explanation" https://encyclopedia.pub/10904 in Encyclopedia gives a conceptual overview and links to one of four published papers on the idea. Its extension to GR received Honorable Mention in the GRF 2021 Essay Contest this month as well and should be published in a special issue of IJMPD for Essay Contest winners this October. I've attached a paper that is under review at a journal for physics teachers and students that is probably the clearest exposition to date.

For those who are still looking for the "luminiferous aether" or its causal counterpart in SR, you may disregard this post and continue looking for a causal mechanism behind QM as well :-)
 

Attachments

  • PrincipleAccountQM.pdf
  • #563
Demystifier said:
Classical mechanics is supposed to be a physical theory, not symplectic geometry.
Right.

abstract mathematical forms and its "applications"* to physical systems.

not otherwise.

...symplectic geometry is not the physical system.

just:
*
approximate "descriptions"

...at most.
 
  • #564
RUTA said:
If you are like most physicists who have long since given up on causal mechanisms for time dilation and length contraction in SR, then given this exact same principle account of QM you should also be willing to give up on causal mechanisms responsible for QM.
I think the principle account stance is great. I like the ambition of the principle account.

But I don't think it has to conflict with seeking a deeper causation; if causation is applied to the principles, can we not enjoy both principle accounts and causation of the principles?

Why does NPRF hold? I.e., what would happen if it didn't?
For example, why is there an upper speed of signal propagation, and why does everyone see it the same?
Why would everyone agree on the minimum action, or on minimum information gain?

I think we can ask these things and still enjoy the principle explanations. It's still significant progress, as the explanatory value increases.

/Fredrik
 
  • #565
Fra said:
I think the principle account stance is great. I like the ambition of the principle account.

But I don't think it has to conflict with seeking a deeper causation; if causation is applied to the principles, can we not enjoy both principle accounts and causation of the principles?

Why does NPRF hold? I.e., what would happen if it didn't?
For example, why is there an upper speed of signal propagation, and why does everyone see it the same?
Why would everyone agree on the minimum action, or on minimum information gain?

I think we can ask these things and still enjoy the principle explanations. It's still significant progress, as the explanatory value increases.

/Fredrik
NPRF is a fundamental principle (an ultimate principle explanans). If you want NPRF to become an explanandum, then whatever you use to explain it becomes an ultimate explanans. The explanatory sequence stops where there is consensus on fundamentality. Building a model of objective reality is germane to physics and an objective model of reality is constructed from a collection of individual data collecting contributions (of any sort). No preferred reference frame simply says that none of the individual contributions carries more weight than any other for building the objective model. So, most physicists are happy to accept NPRF as a fundamental principle when it comes to SR without any constructive counterpart needed. A notable exception is Mermin's book on SR.

The point of the papers referenced above is that those physicists should be equally happy to use NPRF as a fundamental principle when it comes to QM. Since those physicists are no longer interested in finding a causal mechanism for time dilation and length contraction a la the luminiferous aether, they need no longer be interested in finding a causal mechanism for quantum entanglement.
 
  • Like
Likes Fra
  • #566
RUTA said:
Building a model of objective reality is germane to physics and an objective model of reality is constructed from a collection of individual data collecting contributions (of any sort). No preferred reference frame simply says that none of the individual contributions carries more weight than any other for building the objective model.
About NPRF, or its generalisation: the specific "objection" I had in mind was that normally NPRF is taken to be essentially an "observer equivalence constraint" on the set of subjective views. I argue that this is a stronger condition than what you wrote in bold. The bold statement is more what I call "observer democracy".

The difference, in an "interacting subjects/agents" view, is that observer democracy represents the fact that all contributions carry equal weight. The equivalence constraint is rather a stronger statement, where one assumes the existence of an exact mathematical transformation between the views, elevating it to a perfect symmetry. In my view, this somewhat subtle distinction is a key when pondering how symmetries are emergent; i.e., observer democracy is a process that "causes" the emergence of an "effective equivalence", but it need not be perfect in the sense of a mathematical constraint.

/Fredrik
 
  • #567
Fra said:
About NPRF, or its generalisation: the specific "objection" I had in mind was that normally NPRF is taken to be essentially an "observer equivalence constraint" on the set of subjective views. I argue that this is a stronger condition than what you wrote in bold. The bold statement is more what I call "observer democracy".

The difference, in an "interacting subjects/agents" view, is that observer democracy represents the fact that all contributions carry equal weight. The equivalence constraint is rather a stronger statement, where one assumes the existence of an exact mathematical transformation between the views, elevating it to a perfect symmetry. In my view, this somewhat subtle distinction is a key when pondering how symmetries are emergent; i.e., observer democracy is a process that "causes" the emergence of an "effective equivalence", but it need not be perfect in the sense of a mathematical constraint.
/Fredrik

Yes, perhaps we do think of it differently. Every perspective is unique, i.e., every observation is unique (again, these aren't necessarily human observations). We use our mathematical model of objective reality to blend these disparate data together self-consistently. That self-consistent collection of shared information possesses symmetries in accord with NPRF, e.g., Poincare and gauge invariances. NPRF is just one of two principles required to underwrite physics, the other is the boundary of a boundary principle. We published that in Entropy, here is a short overview with a link to the Entropy paper https://encyclopedia.pub/9753. You can skip the neutral monism stuff and go straight to the physics part of the paper if that's your interest. All of this is getting much deeper than is strictly required to do physics though :-)
 
  • #568
RUTA said:
We published that in Entropy, here is a short overview with a link to the Entropy paper https://encyclopedia.pub/9753. You can skip the neutral monism stuff and go straight to the physics part of the paper if that's your interest. All of this is getting much deeper than is strictly required to do physics though :-)
With space and time in the context of your axioms 1 and 2, are you actually talking about 4D space (from classical physics), or do you by "space" mean a more abstract space of, say, distinguishable events (not necessarily making a difference between internal ("local") events and "external" events)?

For me, to take 3D+1 as input is not acceptable, as it likely misses a lot of explanatory opportunities that have to do with how 3D+1 is constructed/selected/evolved.

I do enjoy your focus on the logic of reasoning, but as I need to reinterpret everything in terms of my view, I am not sure how to characterise what you write in the agent abstraction. I am leaning towards the two views being philosophically incompatible. While I seek to understand the emergent objectivity from "interacting agents", you seem to use the "presumed emergent symmetry" as a constructive constraint; this method is, I think, the standard method in physics (even though physicists don't use a lot of philosophical terms). But it's this method I find not satisfactory. I think one can find a similar principle account for WHY symmetry follows from democracy, as a kind of "Nash equilibrium" between agents. This would not only explain why the symmetry exists, it would also explain why some symmetries are not perfect but a bit noisy.

/Fredrik
 
  • #569
RUTA said:
NPRF is just one of two principles required to underwrite physics, the other is the boundary of a boundary principle. We published that in Entropy, here is a short overview with a link to the Entropy paper https://encyclopedia.pub/9753.
I think what you actually said, which I didn't get on first reading, is that physics is constructed from either one of the two principles, and you choose NPRF? If so, we agree. I wasn't sure about the terminology, but I assume that with the "boundary of a boundary principle" you refer to something along the lines of strong emergence and Wheeler's law without law? If so, then I'm definitely in the strong emergence camp. This is essentially how I think of self-organising agents as well.

Here is a nice paper that clearly illustrates part of the ideas, and that is probably one of the closest to my own views that I have found published. Although the notion of "statistical phenomenon" should be understood as an evolutionary phenomenon, as one cannot be sure of, say, "universal" or global equilibrium.

Law without law: from observer states to physics via algorithmic information theory
"In this work, I propose a rigorous approach of this kind on the basis of algorithmic information theory. It is based on a single postulate: that universal induction determines the chances of what any observer sees next. That is, instead of a world or physical laws, it is the local state of the observer alone that determines those probabilities. Surprisingly, despite its solipsistic foundation, I show that the resulting theory recovers many features of our established physical worldview: it predicts that it appears to observers as if there was an external world that evolves according to simple, computable, probabilistic laws. In contrast to the standard view, objective reality is not assumed on this approach but rather provably emerges as an asymptotic statistical phenomenon."
-- Markus P. Müller, https://arxiv.org/pdf/1712.01826.pdf

The association with agents is that the algorithmic information is processed by the agents themselves, and if you consider arbitrary agents, the agent's, say, "mass" must constrain the processing power and thus the possible inferences. So there is dual support from both emergence and constraints. So I do not see the views as contradictory. I think one can understand the "constraint" as a truncated emergence (where truncation is a lossy retention that can be argued to be physically motivated, providing a "natural regulator").

That makes me curious how you view the line of reasoning in that paper, relative to your neutral monism perspective?

/Fredrik
 
  • #570
Fra said:
I think what you actually said, which I didn't get on first reading, is that physics is constructed from either one of the two principles, and you choose NPRF? If so, we agree. I wasn't sure about the terminology, but I assume that with the "boundary of a boundary principle" you refer to something along the lines of strong emergence and Wheeler's law without law? If so, then I'm definitely in the strong emergence camp. This is essentially how I think of self-organising agents as well.

Here is a nice paper that clearly illustrates part of the ideas, and that is probably one of the closest to my own views that I have found published. Although the notion of "statistical phenomenon" should be understood as an evolutionary phenomenon, as one cannot be sure of, say, "universal" or global equilibrium.

Law without law: from observer states to physics via algorithmic information theory
"In this work, I propose a rigorous approach of this kind on the basis of algorithmic information theory. It is based on a single postulate: that universal induction determines the chances of what any observer sees next. That is, instead of a world or physical laws, it is the local state of the observer alone that determines those probabilities. Surprisingly, despite its solipsistic foundation, I show that the resulting theory recovers many features of our established physical worldview: it predicts that it appears to observers as if there was an external world that evolves according to simple, computable, probabilistic laws. In contrast to the standard view, objective reality is not assumed on this approach but rather provably emerges as an asymptotic statistical phenomenon."
-- Markus P. Müller, https://arxiv.org/pdf/1712.01826.pdf

The association with agents is that the algorithmic information is processed by the agents themselves, and if you consider arbitrary agents, the agent's, say, "mass" must constrain the processing power and thus the possible inferences. So there is dual support from both emergence and constraints. So I do not see the views as contradictory. I think one can understand the "constraint" as a truncated emergence (where truncation is a lossy retention that can be argued to be physically motivated, providing a "natural regulator").

That makes me curious how you view the line of reasoning in that paper, relative to your neutral monism perspective?

/Fredrik

Thnx for the reference, I'll check it out.

Both the relativity principle and boundary of a boundary principle are necessary to recover known physics, as we argue in the paper.

Classical objects (obeying classical mechanics) interact via the quantum exchange of momentum (per quantum mechanics). Neither is fundamental to the other (they are co-fundamental) and nothing "emerges" from something more fundamental in our model.
 
  • #571
RUTA said:
Both the relativity principle and boundary of a boundary principle are necessary to recover known physics, as we argue in the paper.
Now I found it. I missed that part on first skimming. I will re-read that part of your paper to see if my question dissolves.

Edit: Regardless of how it matches with my views, I think your Entropy paper is good and has many good things. I'll try to read it better before commenting more.

/Fredrik
 
  • #572
Demystifier said:
The bold part is what confuses me. You are saying that there is something that we can't understand with a full closed system, but can understand with the open system. That's strange, because the closed system contains all the information that the open system does, plus some more. There is nothing in the open system that, in principle, cannot also be understood with the closed system. I guess you are saying that with open system, which contains less degrees of freedom, it is simpler to extract relevant information in practice. I'm fine with that, but I am interested in what can be done in principle. So in principle, is it possible to understand filter measurement in the closed system? If yes, how? If no, then something seems to be missing in the closed system, which is a problem in principle (even if not a problem in practice).
This is my complaint as well. An open system is by definition one where we can’t have a complete description of the whole system, so we are forced to split the universe into the system of interest, which we have a rigorous description of, plus the rest of the universe, which can only be described approximately, or statistically.

If there is some effect that shows up in open systems but not closed systems, then my suspicion is that it is an artifact of our approximation.

It sort of reminds me of thermodynamics, where a rigorous analysis of entropy shows that it is constant in a closed system, but increases if we consider open systems.
 
  • #573
Demystifier said:
With the theory you presented, can you explain why typical macro pointers don't distinguish a cat in the state ##|dead\rangle+|alive\rangle## from the cat in the state ##|dead\rangle-|alive\rangle##?
I’m not sure if this nitpicking is relevant or not, but a cat doesn’t really have a ket. A cat is an arrangement of atoms, and for most arrangements of those atoms, you don’t have a cat at all (alive or dead). Of course there is a similar problem with any composite object (an atom or a molecule), but for those systems, it makes sense in some circumstances to approximate them as indivisible.

I think a cat is different also in that it is constantly exchanging atoms and energy with the environment, so it’s fuzzy exactly what is a cat and what is the environment.
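Setting the composition issue aside, the quoted question itself can be made concrete in a toy two-level model (illustrative only, not a claim about real cats): the states ##|dead\rangle+|alive\rangle## and ##|dead\rangle-|alive\rangle## assign identical probabilities in the dead/alive basis and differ only by a relative phase, which only shows up in a rotated (interference) basis.

```python
import numpy as np

# Toy two-level model: |dead> and |alive> as the computational basis.
dead = np.array([1.0, 0.0])
alive = np.array([0.0, 1.0])
plus = (dead + alive) / np.sqrt(2)   # |dead> + |alive>, normalized
minus = (dead - alive) / np.sqrt(2)  # |dead> - |alive>, normalized

# Probabilities in the dead/alive basis: (0.5, 0.5) for both states,
# so any measurement in this basis cannot tell them apart.
p_plus = np.abs(plus) ** 2
p_minus = np.abs(minus) ** 2
print(np.allclose(p_plus, p_minus))  # True

# Yet the two states are orthogonal, so a measurement in the rotated
# basis {|+>, |->} distinguishes them perfectly, in principle.
overlap = abs(np.dot(plus, minus)) ** 2
print(overlap)  # 0.0
```

This is why the question is about macro pointers: distinguishing the two states requires measuring in the interference basis, which typical macroscopic apparatus never does.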
 
  • #574
stevendaryl said:
a cat doesn’t really have a ket. A cat is an arrangement of atoms, and for most arrangements of those atoms, you don’t have a cat at all (alive or dead). Of course there is a similar problem with any composite object (an atom or a molecule), but for those systems, it makes sense in some circumstances to approximate them as indivisible.
I don't see how any of this is an argument for a cat not having a ket. Composite systems--systems which can be divided into subsystems that might interact with each other--can have kets. Why wouldn't they? QM does not require that only indivisible systems can be described with kets.

stevendaryl said:
I think a cat is different also in that it is constantly exchanging atoms and energy with the environment, so it’s fuzzy exactly what is a cat and what is the environment.
This, OTOH, is an argument for a cat, by itself, not having a ket--because an open system can only be described by a mixed state, not a pure state.
 
  • #575
On the division of 'particles', collapse and a purported classical environment:

'There is no matter as such. All matter originates and exists only by virtue of a force... We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter.'

Max Planck
 
  • #576
PeterDonis said:
I don't see how any of this is an argument for a cat not having a ket. Composite systems--systems which can be divided into subsystems that might interact with each other--can have kets. Why wouldn't they? QM does not require that only indivisible systems can be described with kets.
Maybe there is a standard way of handling this that I am not familiar with. But the usual assumption of the ket notation is that kets evolve into other kets according to Schrödinger's equation. But as a collection of atoms, a cat can evolve into something that isn't a cat. So the cat kets ##|\text{cat}_i\rangle## won't have the property that

##e^{-iHt} |\text{cat}_i\rangle = \sum_j c_{ij} |\text{cat}_j\rangle##

So it seems to me that something other than kets should be used. Maybe projection operators: ##\Pi_{\text{dead cat}}## projects out the component of the state of the system in which there is something corresponding to a dead cat.
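The non-closure claim can be checked numerically in a minimal sketch: a toy 3-level space where two basis kets play the role of "cat" kets and a generic random Hermitian matrix stands in for the Hamiltonian (all numbers are illustrative, not any specific physical model).

```python
import numpy as np

# Toy 3-level Hilbert space: basis kets 0 and 1 play the role of "cat"
# kets; basis ket 2 is a "non-cat" configuration of the same degrees of
# freedom. H is a generic random Hermitian matrix, purely illustrative.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2

# Unitary evolution U = exp(-iHt), built from the eigendecomposition of H
t = 1.0
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

cat0 = np.array([1.0, 0.0, 0.0], dtype=complex)  # a "cat" ket
evolved = U @ cat0

# Probability of having left the span of the "cat" kets: nonzero for a
# generic H, so the "cat" kets are not closed under unitary evolution.
leakage = abs(evolved[2]) ** 2
print(leakage)
```

The evolved state stays normalized (unitarity), but acquires a component outside the "cat" span, which is exactly the point that ##e^{-iHt}|\text{cat}_i\rangle## need not be a combination of cat kets.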
 
  • #577
stevendaryl said:
the usual assumption of the ket notation is that kets evolve into other kets according to Schrodinger’s equation. But as a collection of atoms, a cat can evolve into something that isn’t a cat.
Yes. So what? "kets evolve into other kets" does not imply "cats evolve into cats". A cat ket would not be an eigenstate of the Hamiltonian, so one would not expect it to remain a cat ket forever.

stevendaryl said:
the cat kets ##|\text{cat}_i\rangle## won’t have the property that

##e^{-iHt} |\text{cat}_i\rangle = \sum_j c_{ij} |\text{cat}_j\rangle##
So what? The cat kets do not form a basis of the Hilbert space all by themselves, so one would not expect to be able to express any state a cat could evolve into by only using a linear combination of cat kets, since a cat could in principle evolve into any state in the Hilbert space.
 
  • #578
PeterDonis said:
So what? The cat kets do not form a basis of the Hilbert space all by themselves, so one would not expect to be able to express any state a cat could evolve into by only using a linear combination of cat kets, since a cat could in principle evolve into any state in the Hilbert space.
Okay, I have never seen kets used except in the case where a basis for the Hilbert space is a product of kets. A lot of the nice properties of bra-ket notation don’t hold, otherwise.
 
  • #579
stevendaryl said:
I have never seen kets used except in the case where a basis for the Hilbert space is a product of kets.
You are misunderstanding what I said. You have a cat that has, say, ##10^{25}## degrees of freedom, and an environment that has some much larger number of degrees of freedom, say ##10^{50}##. The total Hilbert space is the tensor product of those two subspaces.

What you are calling "cat" kets are kets in the "cat" subspace that describe cats--i.e., configurations of those ##10^{25}## degrees of freedom that are cats, not any of all the other possible things that could be made from the same degrees of freedom. But those kets do not form a subset of that ##10^{25}## degrees of freedom subspace that is closed under unitary evolution; we know this must be the case since cats can evolve into things that aren't cats (for example, by decomposition). Since we can distinguish states that are definitely cats from states that are definitely not cats, there must be some kets in that ##10^{25}## degrees of freedom subspace that are linearly independent from all of the "cat" kets (the ones that describe things that are definitely not cats, but made from the same degrees of freedom). So the "cat" kets by themselves cannot form a basis of that subspace. But that in no way implies that there are no "cat" kets in that subspace.

What is implied by the above is that, since the ##10^{25}## degrees of freedom subspace is just a subspace--there are also all the other "environment" degrees of freedom--if we want to really model a cat using QM, we can't just use a ket on the ##10^{25}## degrees of freedom subspace, because those degrees of freedom are interacting with the "environment" ones. That means the "cat" degrees of freedom by themselves don't have a well-defined pure state (ket). They can only be described by tracing out the environment degrees of freedom to form a mixed state. That is not the same as saying that there are no "cat" kets at all; it is just saying that, once we take into account the interaction with the environment, the "cat" degrees of freedom are entangled with the "environment" degrees of freedom, so none of the "cat" kets are applicable any more as a state of the "cat" degrees of freedom.
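The tracing-out step described above can be sketched in a toy model, with a 2-level "cat" and a 2-level "environment" standing in for the huge subspaces (the state and dimensions are illustrative only):

```python
import numpy as np

# Minimal sketch: 2-level "cat" entangled with a 2-level "environment".
cat_alive = np.array([1.0, 0.0])
cat_dead = np.array([0.0, 1.0])
env0 = np.array([1.0, 0.0])
env1 = np.array([0.0, 1.0])

# Entangled total state: (|alive>|e0> + |dead>|e1>) / sqrt(2)
psi = (np.kron(cat_alive, env0) + np.kron(cat_dead, env1)) / np.sqrt(2)

# Reduced density matrix of the cat: reshape the total density matrix to
# indices (cat, env, cat', env') and trace out the environment index.
rho_total = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_cat = np.einsum('iaja->ij', rho_total)

# Purity Tr(rho^2) < 1 means a mixed state: no ket describes the cat alone.
purity = np.trace(rho_cat @ rho_cat).real
print(purity)  # 0.5: maximally mixed
```

Here the cat's reduced state comes out as half the identity matrix: once entangled with the environment, the cat degrees of freedom carry no pure state (ket) of their own, exactly as described above.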
 
  • #580
stevendaryl said:
I think a cat is different also in that it is constantly exchanging atoms and energy with the environment, so it’s fuzzy exactly what is a cat and what is the environment.
People are overly concentrated on a cat as such. The point is to consider macroscopically distinguishable kets. The ket does not need to be the cat. For instance, it can be a cat in a box supplied with everything what cat needs for a survival. But then this box with the cat needs to be (approximately) isolated from the rest of environment.
 
  • #581
I am not sure at what layer the problem being discussed is at. One issue is that the whole notion of "isolating" a part of the system is not a natural process, so if one isn't satisfied with in-principle arguments, then one must also include the Hamiltonian for the isolation barrier, so as to model the delayed decoherence etc. It is not a free choice of an agent or a subsystem to isolate itself from the environment it depends on, so it seems clear to me that any such "isolation" implies a non-natural thing. One may even argue that "full isolation" is impossible even "in principle", so even the in-principle arguments become a bit ambiguous.

Is the question whether a real experimental procedure can be set up, in reality, to verify that the cat in the box exhibits interference, and thus violates the Bell inequality? Or is the question whether it can be envisioned in principle? And would such "in principle" arguments then include detailed considerations as to whether the "lifetime" of the weird superposition would be long enough to actually perform a measurement? If not, would this still count as an "in principle" possible setup?

/Fredrik
 
  • #582
Demystifier said:
The point is to consider macroscopically distinguishable kets.
There is no such thing as a macroscopic two-level system. There is more to Schrödinger's thought experiment than the cat: the Geiger counter, the shattered flask, and the gas in the steel chamber. As PeterDonis has pointed out, these have far more degrees of freedom and possible states than the radioactive atom. They become thoroughly entangled in the course of the experiment. It is naïve to the extreme to assume that the state(s) of the cat could be disentangled from the resulting mess. The final wave function can certainly not be written as a sum of just two terms containing |dead> and |alive> as factors. A cat is not a two-level system.
 
  • #583
WernerQH said:
It is naïve to the extreme to assume that the state(s) of the cat could be disentangled from the resulting mess.
I have a feeling no one thinks so either, at least in practice. The question is whether it's possible in principle. If not, why? And then what does an "in principle" argument really mean? That was my take on the question, and I think the answer isn't obvious.

/Fredrik
 
  • #584
Isolate what from what?

'particles' that only exist upon measurement?
Or isolate mathematical wave functions from 'particles'(environment) that exist upon measurement?
Everyone is getting stuck in thinking there are solid balls that get their properties by hitting each other all the time with other solid balls.

If you mean to isolate yourselves and your knowledge from entangled systems, that's a different matter.
 
  • #585
As the difference between isolating the experimenter (only) and isolating the system in question from the whole environment is exactly the key point of the EPR experiment and the Bell inequality, what needs to be isolated from the environment in order to make a normal experiment is the box with the cat. Whether this proves, for some reason (principled or practical), not to be possible seems to be the question.

As I see it the case is a bit like this:

We have "principles" of QM that are corroborated by making use of classical agents making inferences on small subsystems.

If we take this "principle" and argue that, despite obvious practical problems, we have an in-principle argument that applies also to systems that are not "small subsystems", then this is questionable even as a principle argument, as the principles used are not corroborated as valid for inferences in that domain; it's a wild and speculative extrapolation.

This is similar to problems in cosmological theory, where the system under study undergoes changes at a rate faster than the inferences can be made (meaning collecting statistics etc.). Then this paradigm of inference is in principle unsuitable. This is one reason to suspect that QM is perfectly valid only in a limit of small subsystems, from the perspective of classical (LARGE) agents.

The obvious question is thus: where does the "complexity limit" lie, below which QM works and beyond which it "maybe not" works? It would certainly seem improbable that there is a sharp limit; the more natural explanation, IMO, is that QM as we know it is a limiting case of a yet unknown generalized theory of inference. This is at least my own firm belief and working hypothesis.

/Fredrik
 
  • #586
Demystifier said:
People are overly concentrated on a cat as such. The point is to consider macroscopically distinguishable kets. The ket does not need to be the cat. For instance, it can be a cat in a box supplied with everything what cat needs for a survival. But then this box with the cat needs to be (approximately) isolated from the rest of environment.
Yes, I agree. My point is that you never really have a superposition of a dead cat and a live cat, you have (maybe) a superposition of a universe containing a dead cat and a universe containing a live cat.
 
  • #587
stevendaryl said:
Yes, I agree. My point is that you never really have a superposition of a dead cat and a live cat, you have (maybe) a superposition of a universe containing a dead cat and a universe containing a live cat.
And my point is that, in principle, one can have something in between: a superposition of a box with a dead cat and a box with a live cat. You might also be interested in my https://arxiv.org/abs/1406.3221
 
  • #588
RUTA said:
Thnx for the reference, I'll check it out.

Both the relativity principle and boundary of a boundary principle are necessary to recover known physics, as we argue in the paper.

Classical objects (obeying classical mechanics) interact via the quantum exchange of momentum (per quantum mechanics). Neither is fundamental to the other (they are co-fundamental) and nothing "emerges" from something more fundamental in our model.
I have been reflecting over your paper and I have come to the position that, I appreciate what you are trying to do, I see the principe and even if it's embedded with a mix of philosophical history and things, I am a little tempted to lable it a kind of "minimalist philosophical" account of the current paradigm used in QM. what I mean by this is that, given some constraints or axioms (that i do not wnt to accept, but that is a different question) of yours, which are in a sense what is required to do "objective science", the principal account is an attempt to reach philosophical sastifaction, with a minimum of speculation.

As I see it, axiom 1 refers to properties of the "set of all agents", or the set of all "agent views". Your axiom 1 attempts to avoid a tower of turtles, I think? Not due to an argument I buy, but because of the argument that otherwise we run into problems with the ideals of "science". I am thinking of your example of pink elephants.

Axiom 2 is clear; I see it as a special case of "observer equivalence", but applied specifically to the classification based on 4D spacetime frames.

Again, I think your two axioms are reasonable "average expectations", but in my view they should be explained. As I also see it, your perspective gives us no insight or idea on how to handle the internal spaces, i.e. the part of the total state space that remains once you have carved out the 4D spacetime. I think it is in this connection that one may also find constructive explanations of things like the upper bound on the signal speed, and the minimum information divergence or action, as in Planck's constant.

So I like the reasoning, and I can see how it seems "fair" in a minimal sense, but it leaves me unsatisfied; then again, I consider myself extreme and not representative. I can compare this to the minimal statistical interpretation of QM: it seems "fair to what is actually done", but still unsatisfactory, as it seems obviously incapable of having universal validity. I understand your philosophy here in a similar sense, which is not a bad thing, as it is a lot better than the "shut up and calculate" attitude that I remember from old teachers who got embarrassed when asked questions by students that they could not answer or handle.

Edit: I think this even leads us to the problem of explaining the objectivity of science, when the scientific process appears to be built from many subjective processes. Popper focused on falsification and failed to see the importance of hypothesis generation (which is not deductive). So is objectivity a constraint or an emergent result? If we decide that "we do science" we can take objectivity as a constraint, but the problem is: what if the explanatory power is hidden in the non-objective domain? Then we miss out on explanatory power and instead face a fine-tuning problem in the set of all possible objective facts, having no a priori clue which objective fact is the right one.

/Fredrik
 
  • #589
physika said:
....then you have to read more
It didn't really matter for the main point I was making. My point was that a large component of quantum theory, i.e. the structure of the state space, the space of observables and so on, is provably a probability theory in terms of its mathematical structure. I just used the word "kinematic" for that part, since it's common in actual research to do so. Usually "dynamics" is reserved for the actual DOFs and the choice of Hamiltonian.

Whether that label is unique or required doesn't really matter. I don't see what is so innovative about Spekkens's paper (another one of these "prose essays") as to make it something that "has" to be read, or in fact why it actually matters here.
 
  • #590
Lord Jestocost said:
However, the superposition state does not evolve by the Schrödinger equation into a mixed one. With all due respect, this statement is wrong.
I'm not saying pure states can evolve into mixed states under the Schrödinger equation; obviously they cannot. I'm saying that a macroscopic body is in a mixed state due to being in thermal contact with some environment, interacting with EM fields, and so on.
 
  • #591
Hi, interesting discussion. However, it's too long, and too many issues have been raised.
I want to concentrate on two previously discussed points: realism and counterfactual definiteness.
Realism is usually blamed for the claimed existence of quantum nonlocality. This claim, however, only reveals that Bell's point is completely missed.
We can explain this by saying that John Bell did not conceive his inequalities with the intent of proving quantum nonlocality. Quite the contrary: he devised his inequalities to investigate whether non-conspiratorial common causes could give a local explanation of quantum mechanics' perfectly correlated results. Violations of the Bell inequalities only mean that we cannot explain quantum nonlocality with non-conspiratorial common causes.
The failure of the non-conspiratorial common-cause program is usually expressed as the collapse of local realism. The term "local realism" suggests that the Bell inequality is based on the conjunction of two hypotheses: locality and realism.
That interpretation has been heatedly debated. Those debates, however, are mostly meaningless once we realize that the rejection of the Bell inequality only means the rejection of a local explanation.
Whether that explanation implies realism or not is irrelevant. Once we reject it, we are left with quantum nonlocality intact.
Thus, it is nonsense to say that quantum mechanics is a local theory because the Bell inequality is a classical result.
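To make the "classical result" concrete, here is a small sketch (an illustration added here, not part of the original post) that enumerates every deterministic local strategy in the CHSH setup, where Alice and Bob each pick fixed ±1 outputs for their two settings, and confirms that none of them exceeds the classical bound of 2:

```python
from itertools import product

# Each deterministic local strategy assigns Alice outputs A1, A2 in {+1, -1}
# for her two settings, and Bob outputs B1, B2 for his two settings.
best = 0
for A1, A2, B1, B2 in product((1, -1), repeat=4):
    # CHSH combination of the four correlators
    S = A1 * B1 + A1 * B2 + A2 * B1 - A2 * B2
    best = max(best, abs(S))

print(best)  # 2 — the CHSH bound for local deterministic strategies
```

Mixing such strategies with any probability distribution (a local hidden variable λ) only averages these values, so |S| ≤ 2 holds for the whole local-hidden-variable class; this is the classical inequality that quantum correlations violate.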
For the sake of brevity, I will leave out the other notorious mistake called counterfactual definiteness.
 
  • #592
On the other hand local/microcausal relativistic QFT is in accordance with the violation of Bell's inequality. So it's more tempting to give up "realism" and simply take the quantum state as the complete description of the state of the system.
 
  • #593
Demystifier said:
Loosely speaking, the Bell theorem says that any theory making the same measurable predictions as QM must necessarily be "nonlocal" in the Bell sense. (Here Bell locality is different from other notions of locality such as signal locality or locality of the Lagrangian. By the Bell theorem, I mean not only the original Bell inequality and its close cousin CHSH inequality, but also the results such as GHZ theorem and Hardy theorem which involve only equalities, not inequalities.) However, any such theorem actually uses some additional assumptions, so many people argue that it is some of those additional assumptions, not locality, that is violated by QM (and by Nature). The aim of this thread is to make a list of all these additional assumptions that are necessary to prove the Bell theorem. An additional aim is to make the list of assumptions that are used in some but not all versions of the theorem, so are not really necessary. The following list of necessary and unnecessary assumptions is supposed to be preliminary, so I invite others to supplement and correct the list.

Necessary assumptions:
- macroscopic realism (macroscopic measurement outcomes are objective, i.e. not merely a subjective experience of an agent)
- statistical independence of the choice of parameters (the choices of which observables will be measured by different apparatuses are not mutually correlated)
- Reichenbach common cause principle (if two phenomena are correlated, then the correlation is caused either by their mutual influence or by a third common cause)
- no causation backwards in time

Unnecessary assumptions:
- determinism (unnecessary because some versions of the theorem use only probabilistic reasoning)
- Kolmogorov probability axioms (unnecessary because the GHZ theorem uses only perfect correlations, i.e. does not use probabilistic reasoning at all)
- hidden/additional variables (unnecessary because some versions of the theorem, e.g. those by Mermin in Am. J. Phys., use only directly perceptible macroscopic phenomena)
- microscopic realism (unnecessary for the same reason as hidden/additional variables)

Tumulka (https://arxiv.org/pdf/1501.04168.pdf) also discusses many of these assumptions.

Has anyone tried to deny the Reichenbach common cause principle to avoid denying locality? I'm trying to think how that would go... I suppose one could argue that the correlation between two phenomena is just a brute fact with no underlying cause. So maybe it is some Copenhagen interpretations that do that?
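For comparison, here is a quick numerical sketch (an added illustration, assuming the standard singlet-state correlation E(a, b) = -cos(a - b)) showing that the quantum prediction reaches 2√2 at the usual CHSH angles, beyond the classical bound of 2:

```python
import math

def E(a, b):
    # QM correlation for spin measurements on a singlet pair
    # along directions at angles a and b (radians)
    return -math.cos(a - b)

# A standard choice of CHSH measurement angles
a1, a2 = 0.0, math.pi / 2              # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

# CHSH combination; local hidden variables require |S| <= 2
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, the Tsirelson bound
```

This is exactly the gap the theorem quantifies: whatever extra assumption one drops (common causes, statistical independence, no retrocausality), it has to account for that excess above 2.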
 
  • #595
Tumulka's (R4) also seems like a necessary assumption and one that MWI denies:
Every experiment has an unambiguous outcome, and records and memories of that outcome agree with what the outcome was at the space-time location of the experiment.
 
