Bell violations + perfect correlations via conservative Brownian motion

  • #1
iste
TL;DR Summary
Recent paper: Stern–Gerlach, EPRB and Bell Inequalities: An Analysis Using the Quantum Hamilton Equations of Stochastic Mechanics:

https://link.springer.com/article/10.1007/s10701-024-00752-y

The post then discusses some flaws in stochastic mechanics.
--------------

This recent paper does what its title describes: https://link.springer.com/article/10.1007/s10701-024-00752-y

Thought this was interesting. First time I have personally seen a paper that comprehensively describes a full model of these kinds of Bell scenarios from the stochastic mechanics perspective (a relatively recent though equivalent formulation of stochastic mechanics, reference 9). I imagine that the fact that the model's predictions agree with orthodox quantum mechanics is not necessarily anything new; but it is cool to see a paper replicate the strangest predictions of quantum mechanics from what is explicitly a "conservative Brownian motion".
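For readers new to the framework, here is a minimal sketch of the standard Nelson kinematics behind a phrase like "conservative Brownian motion" (my notation, not necessarily the paper's):

$$dX_t = b(X_t,t)\,dt + dW_t,\qquad \mathbb{E}\!\left[(dW_t)^2\right] = \frac{\hbar}{m}\,dt,$$
$$v = \tfrac12(b + b_*) = \frac{\nabla S}{m},\qquad u = \tfrac12(b - b_*) = \frac{\hbar}{2m}\,\nabla\ln\rho,$$

where b and b_* are the forward and backward drifts. Imposing Nelson's stochastic Newton law and writing ψ = √ρ e^{iS/ℏ}, the continuity and Hamilton-Jacobi-Madelung equations combine into the Schrodinger equation, which is why such a diffusion can reproduce the single-time quantum statistics.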

--------------

Unfortunately, Markovian stochastic mechanics is excessively non-local and has incorrect multi-time correlations (these criticisms are described in the "Review of stochastic mechanics" paper and the "Mystery of stochastic mechanics" lecture notes at this link: https://web.math.princeton.edu/~nelson/papers.html). However, there seems to be reason to believe that allowing the diffusion to be non-Markovian could address these issues.
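To make the multi-time criticism concrete before going further (my paraphrase of Nelson's point, not a quote): a Markovian diffusion fixes its two-time correlations through the transition density,

$$\mathbb{E}[X_t X_s] = \int x\,x'\;p(x,t\mid x',s)\,\rho(x',s)\,dx\,dx',$$

while quantum mechanics computes the corresponding quantity from sequential position measurements, with the disturbance from the first measurement included; the two generally disagree even though the single-time densities ρ(x,t) = |ψ(x,t)|² agree exactly.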

For instance, in the following proposal of a non-Markovian re-formulation using "quantum diffusion", the non-locality issue is addressed (and Edward Nelson had already shown in his 1985 Quantum Fluctuations book that a non-Markovian diffusion could in principle deal with this issue):

https://scholar.google.co.uk/scholar?cluster=4203589515917692457&hl=en&as_sdt=0,5&as_vis=1

It's worth noting that Markovian diffusions seem inherently pathological in allowing superluminal propagation, to the point that it has been shown that any kind of relativistic diffusion must be non-Markovian: e.g.

https://arxiv.org/abs/cond-mat/0608023
https://arxiv.org/abs/cond-mat/0501696

Regarding multi-time correlations, the "quantum diffusion" model from above also violates realism, because its joint probability distribution for successive times has negative values, implying different multi-time statistics from the traditional Markovian approaches that uphold realism. There are interesting parallels between the Markovian approaches and Bohmian mechanics, described in the links below, which seem quite suggestive that realism is connected to the incorrect multi-time correlations: e.g.

https://arxiv.org/abs/cond-mat/0608023
https://arxiv.org/abs/2208.14189

In a recent quantum formulation that also claims that unitary quantum mechanics is equivalent to a type of non-Markovian stochastic process, the indivisibility of its trajectories also violates realism and produces novel multi-time interferences / temporal correlations because of how they violate Markovianity (see the toy illustration after the links below):

https://arxiv.org/abs/2302.10778
https://arxiv.org/abs/2309.03085
https://www.physicsforums.com/threads/a-new-interpretation-of-quantum-mechanics.1060576/
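To illustrate the indivisibility concretely, here is a toy example in the spirit of the stochastic-quantum dictionary (my own sketch, not taken from these papers): build transition probabilities from a two-level unitary and check whether they compose through an intermediate time the way a Markov chain's would.

Python:
import numpy as np

def theta(t):
    # Transition matrix built from the two-level unitary U(t) = exp(-i t X),
    # using the dictionary Theta_ij(t) = |U_ij(t)|^2
    U = np.array([[np.cos(t), -1j * np.sin(t)],
                  [-1j * np.sin(t), np.cos(t)]])
    return np.abs(U) ** 2

s, t = np.pi / 8, np.pi / 4
print(theta(t))                 # -> [[0.5, 0.5], [0.5, 0.5]]
print(theta(t - s) @ theta(s))  # -> [[0.75, 0.25], [0.25, 0.75]]
# The one-step transition matrix from 0 to t is NOT the composition of the
# legs 0 -> s and s -> t: the process is "indivisible", which is the
# non-Markovian signature these papers are built on.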

The relation between violated Markovian consistency conditions and realism / invasive measurement can be seen in the link below:

https://arxiv.org/abs/2012.01894
(Sections: III, B-C; IV, E)

One final major criticism of stochastic mechanics is the Wallstrom criticism: that a quantization condition has to be put in arbitrarily, by hand (the condition is spelled out after the links below). Recently it has been suggested that this criticism may actually be completely invalid. In previous formulations of stochastic mechanics, authors had been throwing away a divergent part of the stochastic Lagrangian because it doesn't contribute to the equations of motion. By simply not throwing it away, the desired quantization condition automatically follows for free:

https://arxiv.org/abs/2301.05467
(page 31-32)
https://arxiv.org/abs/2304.07524
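For context, the condition at issue is Wallstrom's: for ψ = √ρ e^{iS/ℏ} to be single-valued, the circulation of the current velocity around any closed loop must be quantized,

$$\oint_C \nabla S \cdot d\mathbf{x} = n h,\qquad n \in \mathbb{Z},$$

which the hydrodynamic equations for ρ and v do not by themselves enforce. The claim in these papers is that the usually-discarded divergent term in the stochastic action supplies exactly this constraint.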

It's also worth noting that the author of the re-formulation of stochastic mechanics in these links claims that it correctly reproduces all aspects of quantum mechanics, including the correct multi-time correlations. The basis of this formulation is deriving a generalized complex diffusion equation, of which the Schrodinger equation and Brownian diffusion are both special cases, and showing that it has solutions equivalent to stochastic processes.
 
  • #2
The 2024 Beyer-Paul paper you cited, as well as the 2023 Kuipers paper, both fall victim to the same issue I keep raising here in the Interpretations subforum. Those arguments might even have merit, were it not for critical omissions. Namely: They ignore long-standing experiments that flat-out contradict their stochastic arguments.*


A. From the 2024 Beyer-Paul paper: "...the spins may be correlated depending on the preparation of the initial state..." and "Our description therefore belongs to the class of psi-epistemic theories in the nomenclature of Harrigan and Spekkens. This epistemological view of the meaning of the wave function makes it a tool for inference about our expectations for experimental results for a given preparation procedure." The term "preparation" is the problem concept here. The initial preparation need not be an entangled state.

Yes, they acknowledge a type of nonlocality, specifically: "For the EPRB experiment, we have seen that although the second particle is locally separated from the first particle, its spin is changed in reaction to the measurement of particle 1. The position of particle 2 is unaffected, but its spin changes over the course of the measurement of particle 1 due to the non-local quantum torque..." There cannot be any such torque in remote entanglement. See this experiment in which the entanglement correlations are created from particles that have never interacted - so there is no way that particle 1 would even "know" it is to become entangled with particle 2 at any point in time at which they both exist. The decision to entangle those 2 particular photons - or place them into an unentangled Product state - is itself performed remotely (spatially distant) and can be done at any time before or after they are measured, without regard to the usual causal constraints. Pretty strange, that!

Experimental delayed-choice entanglement swapping (2012) Ma et al

"In the entanglement swapping procedure, two pairs of entangled photons are produced, and one
photon from each pair is sent to Victor. The two other photons from each pair are sent to Alice and Bob,
respectively. If Victor projects his two photons onto an entangled state, Alice’s and Bob’s photons are entangled although they have never interacted or shared any common past. What might be considered as even more puzzling is Peres’ idea of “delayed-choice for entanglement swapping”. In this gedanken experiment, Victor is free to choose either to project his two photons onto an entangled state and thus project Alice’s and Bob’s photons onto an entangled state, or to measure them individually and then project Alice’s and Bob’s photons onto a separable state. If Alice and Bob measure their photons’ polarization states before Victor makes his choice and projects his two photons either onto an entangled state or onto a separable state, it implies that whether their two photons are entangled (showing quantum [Entangled State] correlations) or separable (showing classical [Product State] correlations) can be defined after they have been measured.
"


B. In a similar fashion, one can see the same issue in the 2023 paper. It claims as follows: "It is expected that, in order to explain entanglement in multi-particle systems, the notion of locality in the stochastic theory must be relaxed. For example, for two particles ... the entanglement will introduce a stochastic coupling ... If this coupling is preserved, when X1 and X2 are separated, this coupling will introduce a non-locality in the stochastic law that governs this two particle state." Note the requirement that the coupling must be preserved. There is no coupling to preserve in the Ma experiment, as the particles never interact - and are only "coupled" remotely! (And in fact there is no requirement that the intermediate (Victor's) photons interact either.)

Additionally, Kuipers states: "In a non-relativistic theory, the notion of time is an affine parameter. Therefore, the non-relativistic stochastic theory is causal with respect to time." And yet, contradicting his assertion, garden-variety non-relativistic QM correctly describes the cited Ma experiment - and it lacks any reasonable notion of (Einsteinian) causality, which is a common element of all Delayed Choice experiments. There are dozens of those. At 110 pages in length, his paper makes no mention of or reference to these.



Without diving into it specifically: the GHZ Theorem (and the experimental confirmation of same) flat-out contradicts the idea that there can be ANY predetermination of spin components, even with the type of preparation in your cited stochastic papers. GHZ is not mentioned in either Beyer-Paul or Kuipers (a quick numerical check follows the reference below). See for example:

Multi-Photon Entanglement and Quantum Non-Locality (2002) Pan-Zeilinger
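To see the GHZ contradiction concretely, here is a quick numerical check (my sketch, using the standard three-particle GHZ operators; nothing here is from the cited stochastic papers):

Python:
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# GHZ state (|000> + |111>)/sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

for ops, label in [((X, Y, Y), "XYY"), ((Y, X, Y), "YXY"),
                   ((Y, Y, X), "YYX"), ((X, X, X), "XXX")]:
    val = np.real(ghz.conj() @ kron3(*ops) @ ghz)
    print(label, round(val))  # XYY -1, YXY -1, YYX -1, XXX 1

# Predetermined local values would force XXX = (XYY)(YXY)(YYX) = -1, since
# each local Y value appears twice and squares to +1; QM instead predicts
# XXX = +1 with certainty. No inequality or statistics needed.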

So does the Stochastic Interpretation actually replicate the strangest predictions of quantum mechanics? It doesn't address Delayed Choice Remote Entanglement Swapping, and it doesn't address GHZ - both of which demonstrate elements that contradict this interpretation. And if these aren't strange enough to qualify, I don't know what is.

So the expected scientific critique and conclusion should be: Any attempt to explain QM that does not address the cited long-standing experimental results should be immediately rejected as out-of-date - and consequently untenable. It's no different than what we would conclude about an interpretation pushing local realism that fails to address Bell. You just can't ignore the experimental canon, nor hand-wave it away.

-DrC


*Note: I hate being like a broken record on this subject, and I don't mean to come across as pedantic or nit-picky. But it's time we all acknowledge important works that contributed to a Nobel award in 2022.
 
  • #3
Mis-linked a paper. The 7th link down should be this one about Bohmian mechanics:

https://arxiv.org/abs/1503.00201

(Will reply to DrC momentarily)
 
  • #4
DrChinese said:
Any attempt to explain QM that does not address the cited long-standing experimental results should be immediately rejected as out-of-date - and consequently untenable.
I certainly agree with this.

The problem I see with several attempts to "explain" entanglement experiments is that they start from a conservative stance where a fixed dynamical law, such as a Hamiltonian, is just "given". Any such attempt is likely to fail to find explanations that lie in the nature or origin of the interactions themselves. "Stochastic mechanics" as per Nelson seems to be like that, so while I am sympathetic to some of its traits, the stance and the way it's done are too conservative to offer explanatory power.

I think it is difficult to "explain" the QM weirdness from a "pure interpretation", as you constrain yourself to stick to what is supposedly given and just play around with it. So where would the explanatory power possibly come from?

/Fredrik
 
  • #5
iste said:
In a recent quantum formulation that also claims that unitary quantum mechanics is equivalent to a type of non-Markovian stochastic process, the indivisibility of its trajectories also violates realism and produces novel multi-time interferences / temporal correlations because of how they violate Markovianity:

https://arxiv.org/abs/2302.10778
https://arxiv.org/abs/2309.03085
https://www.physicsforums.com/threads/a-new-interpretation-of-quantum-mechanics.1060576/

The relation between violated Markovian consistency conditions and realism / invasive measurement can be seen in the link below:

https://arxiv.org/abs/2012.01894
(Sections: III, B-C; IV, E)
If one were to explore the understanding of the process by which Barandes' "transition matrices" emerge (which he does not in his paper), then I personally would think it could be a lot more interesting, and we could also perhaps start to understand where this mysterious "memory" that is implicit in non-Markovian stochastics really "is".

/Fredrik
 
  • #6
Sorry, extremely late reply.

DrChinese said:
They ignore long-standing experiments that flat-out contradict their stochastic arguments.

I think that, despite having been criticized, Nelson's stochastic mechanics still arguably seems to be a near-complete formulation of quantum mechanics, regardless of how you would like to interpret quantum mechanics. Being centered on deriving the Schrodinger equation from assumptions about classical stochastic processes, the reasons for the behavior it predicts are going to be comparable to quantum mechanics. (It may have incorrect multi-time correlations; but in their defense, according to links 7 & 8 above, apparently so does Bohmian mechanics. Empirical equivalence of stochastic mechanics to quantum mechanics can then arguably be re-asserted in the same way Bohmian mechanics does it - by explicitly accounting for measurement. And Kuipers' new formulation apparently gets the correlations right without that amendment anyway.)

Anyway, these predictions will happen for similar formal reasons as in quantum mechanics - the Schrodinger equation and its solutions - because stochastic mechanics is, for the most part, a direct model of that, regardless of the deeper how of this achievement. From what I see, the initial entangling interaction is a red herring, because what entanglement is actually about is the relative phases in coherent states, and the measurement settings. The initial entangling interaction may be a way for the experimenter to control part of these variables, but these variables are what do the heavy lifting. Their mechanisms should then be baked into the stochastic description somewhere, whether implicitly or explicitly. The Barandes non-Markovian formulation in links 9-10 implies this kind of thing much more explicitly, with its dictionary that translates between stochastic matrices and the complex-number representation.

Obviously, you are going to react to this kind of reasoning with incredulity, because it just reiterates a quantum explanation or mechanism for something that you think should be more intuitively classical; but to me, if you can derive the Schrodinger equation from assumptions about classical stochastic processes, then you are also carrying these mechanisms with you, embedded somewhere in the stochastic description. If we assume that these mechanisms are baked into the description and are causing these quantum correlations, it then seems very unlikely to me that this stochastic mechanics could not reproduce entanglement swapping or the GHZ example. Clearly, even if they seem different, they are all mediated by the same core mechanisms, just in slightly different ways. The initial entangling interaction may fix phase relations in systems, which is a core driver of the correlations. It then doesn't matter that the particles did not initially interact, because what matters is the phase relations they have, which have been fixed from the start in the quantum description:

https://scholar.google.co.uk/scholar?cluster=5867761140794379223&hl=en&as_sdt=0,5&as_vis=1

https://scholar.google.co.uk/scholar?cluster=17666541244192212757&hl=en&as_sdt=0,5&as_vis=1 (you can ignore these papers' interpretation / semantics wrt locality/ nonlocality which is not relevant to the point)

"Though there is no prior fixed relation between the internal variables of (1, 4), the Bell measurement on (2, 3) chooses a sub-ensemble in which there is an observed correlation between particles 2 and 3 and hence between particles 1 and 4, due to fixed prior relationship in internal variables of particle pairs (1, 2) and (3, 4)." (from second link directly above)

To reiterate, the point is that this mechanism will be embedded in the stochastic description from which one can derive the Schrodinger equation; and given that the stochastic mechanics in the Beyer paper generates the correlations of more conventional entanglement, it is highly unlikely that it will not reproduce GHZ and entanglement swapping. It should be expected, imo. I don't see why it wouldn't, if it can reproduce the more conventional entanglement correlations, which are frankly bizarre enough.
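To make the sub-ensemble logic concrete, here is a small numerical sketch in the standard formalism (my own toy calculation, not from the Beyer paper):

Python:
import numpy as np

def bell(i):
    # Bell basis Phi+, Phi-, Psi+, Psi- as vectors in |00>, |01>, |10>, |11>
    s = 1 / np.sqrt(2)
    return [np.array([s, 0, 0, s]), np.array([s, 0, 0, -s]),
            np.array([0, s, s, 0]), np.array([0, s, -s, 0])][i]

# Four-photon state |Psi->_12 (x) |Psi->_34, indexed by qubits (1, 2, 3, 4)
state = np.kron(bell(3), bell(3)).reshape(2, 2, 2, 2)

for i, name in enumerate(["Phi+", "Phi-", "Psi+", "Psi-"]):
    # Project qubits (2, 3) onto this Bell state; the rest lives on (1, 4)
    proj = bell(i).reshape(2, 2)
    cond = np.einsum('abcd,bc->ad', state, proj).reshape(4)
    p = np.linalg.norm(cond) ** 2
    overlaps = [abs((cond / np.sqrt(p)) @ bell(j)) ** 2 for j in range(4)]
    print(name, "p =", round(p, 3), "->", np.round(overlaps, 3))

# Each outcome on (2, 3) has p = 1/4 and leaves (1, 4) exactly in the
# matching Bell state (up to sign). The (1, 4) correlations exist only
# within the sub-ensembles labelled by the Bell-measurement outcome, which
# is why the timing of that measurement never enters the statistics.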

I don't think the delayed-choice paradigm, which seems to imply things like retrocausality, is an issue, because even in the paper you link they talk about how it is only a problem if you think the wavefunction is a real physical object and collapse an actual physical event. If you do not subscribe to that, then collapse is just statistical conditioning at a formal level and is not physical; the timing aspect then isn't relevant. Stochastic approaches do not take the wavefunction to be real, and should be viewed in a way more analogous to the statistical ensemble view of the wavefunction.
 
  • #7
Fra said:
If one were to explore the understanding of the process by which Barandes' "transition matrices" emerge (which he does not in his paper), then I personally would think it could be a lot more interesting, and we could also perhaps start to understand where this mysterious "memory" that is implicit in non-Markovian stochastics really "is".

Yes, this is certainly an interesting and unanswered question - I certainly don't have an answer. Part of Barandes' perspective on this, I think, is the idea that non-Markovian processes are more general than Markovian ones. This is certainly true when it comes to special relativity, where diffusions must be non-Markovian. Nelson, in his 1985 Quantum Fluctuations book, also states that he sees no particular reason for the processes in stochastic mechanics to be Markovian; they are just easier to formulate. The broad category of reciprocal (Bernstein) processes used in the 4th link to re-formulate stochastic mechanics is in general non-Markovian too. The link below mentions this:

https://scholar.google.co.uk/schola...7803028&hl=en&as_sdt=0,5&as_ylo=2020&as_vis=1

The Bayesian marginalization condition in the Barandes paper, to which he attributes the linearity of quantum mechanics, seems to be analogous to these reciprocal processes - or, in Nelson's stochastic mechanics, to the interplay of forward and backward diffusions that is responsible for time-reversibility. This time-reversibility for stochastic processes seems weird, but it also seems a natural consequence of maximizing entropy / minimizing relative entropy over trajectories between two points in time, as alluded to in the link above. The picture from the stochastic perspective is then of particles immersed in a sea of background fluctuations, with the whole system in something like a stationary statistical equilibrium.

It seems that time-reversibility is actually enough on its own to obtain some central aspects of quantum mechanics - non-commutativity and the uncertainty relations for position and momentum. You can see this in the Kuipers paper of link 13. Also, in work by Koide (links below), these things can be derived when you assert this kind of time-reversibility in any kind of stochastic system (see the sketch after the link):

https://scholar.google.co.uk/scholar?hl=en&as_sdt=0,5&as_vis=1&q=koide+uncertainty+relations&btnG=
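One concrete version of that claim (my sketch of the osmotic half of the argument, for a single particle in one dimension): time-reversibility supplies an osmotic momentum p_u = mu = (ℏ/2) ∂x ln ρ, and integration by parts gives

$$\mathbb{E}[x\,p_u] = \frac{\hbar}{2}\int x\,\partial_x\rho\,dx = -\frac{\hbar}{2},\qquad \mathbb{E}[p_u] = 0,$$

so the Cauchy-Schwarz inequality applied to this covariance immediately yields σ_x σ_{p_u} ≥ ℏ/2 for any diffusion of this type - an uncertainty relation with no quantum input beyond the diffusion coefficient ℏ/2m.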

So even though Barandes' indivisible transition matrices may be an assumption whose origin is not entirely clear, there are some concrete aspects of quantum mechanics that can be derived generally for stochastic systems under certain conditions.
 
  • #8
iste said:
A. Sorry, extremely late reply.

B. ... centered on deriving the Schrodinger equation from assumptions about classical stochastic processes, the reasons for the behavior it predicts are going to be comparable to quantum mechanics. ...

Anyway, these predictions will happen for similar formal reasons as in quantum mechanics - the Schrodinger equation and its solutions - because stochastic mechanics is, for the most part, a direct model of that, regardless of the deeper how of this achievement. ...

C. Obviously, you are going to react to this kind of reasoning with incredulity, because it just reiterates a quantum explanation or mechanism for something that you think should be more intuitively classical; but to me, ...

D. https://scholar.google.co.uk/scholar?cluster=5867761140794379223&hl=en&as_sdt=0,5&as_vis=1

https://scholar.google.co.uk/scholar?cluster=17666541244192212757&hl=en&as_sdt=0,5&as_vis=1 (you can ignore these papers' interpretation / semantics wrt locality/ nonlocality which is not relevant to the point)

"Though there is no prior fixed relation between the internal variables of (1, 4), the Bell measurement on (2, 3) chooses a sub-ensemble in which there is an observed correlation between particles 2 and 3 and hence between particles 1 and 4, due to fixed prior relationship in internal variables of particle pairs (1, 2) and (3, 4)."
(from second link directly above)
A. Never too late!

B. You realize that this is just a complete hand-waving away of my criticism. These papers are decades out of date with experiment. If the argument is "it respects/reproduces the Schrodinger equation and therefore all predictions of QM", why do they go to such lengths to mention entanglement or Bell at all?

And by the way, if I read a paper on entanglement that even mentions the Schrodinger equation, it is a rarity (and I actually don't recall any offhand). I'd be interested in a direct reference to how the Schrodinger equation leads to Entangled State statistics for Delayed Choice entanglement swapping in photons, for example. (Something that leaves nothing to the imagination - a direct quote from a textbook maybe? Anyone?)

You can't just say that a hypothesis works for A, B and C because it works for A. Good start, yes, but the devil is in the details. Just think back to those poor local realists who said their hypotheses worked because they worked for the examples they covered. That might make sense when you don't know any better, but the challenge flag has been thrown - and long ago. No one bothers with new interpretations that don't address Bell; and there is no reason to bother with new interpretations that don't also address remote swapping, delayed choice or GHZ.

C. I am objecting because the paper does not address modern experiment. I have no preconceived direction for science to advance in, although I have opinions on certain elements of QM. (I am not a Bohmian.)

D. This is an accurate statement, but you have either misread it or misunderstood it. 1 & 4 had no prior relationship whatsoever - not "born" with any correlation to each other. At some time, they did develop such a relationship - while distant. And the spacetime distant decision to create that relationship can be effected at any time - before measurement of 1 & 4, after measurement of 1 & 4 (delayed choice), and even (crazily!) both before and after the creation of 1 & 4 in the same experiment.

So there can be no nonlocal "torque" from 1 to 4 or vice versa, and in fact they never need co-exist at any common time (regardless of reference frame). How can there be torque on something that doesn't even exist yet? In fact, at the time 1 is measured, there can be many potential future photons as candidates to become the entangled 4 photon to go with 1. If these questions don't bother you, then sure, ignore them.

Do you think all of the experimentalists I cited are wasting their time because the outcomes of their tests are as expected? Because this is amazing stuff, and is not being overlooked by most theorists. Each of the experiments I mention should be considered as raising the bar on theory.
 
  • #9
iste said:
So even though Barandes' indivisible transition matrices may be an assumption whose origin is not entirely clear, there are some concrete aspects of quantum mechanics that can be derived generally for stochastic systems under certain conditions.
I support this direction of reasoning.

One general reason why so many are seeking "stochastic mechanics" or "random walks" or "geodesic motion" is that this per se offers the most natural explanation of all. Just like in GR, it is certainly beautiful and natural. But the problem with a random walk (which is what you get lacking any other rules or forces) is that you must define the "space" where the random walk takes place.

And the problem with most "entropic methods" is precisely that the space in which you do random walks or maximize entropy is ambiguous. And this ambiguity is related to the "choice" of transition probabilities. If we can make this "choice" a physical process, then the stochastic part is not objectionable IMO.

/Fredrik
 
  • #10
DrChinese said:
I'd be interested in a direct reference to how the Schrodinger equation leads to Entangled State statistics for Delayed Choice entanglement swapping in photons, for example.
All of these experiments are in the non-relativistic domain (yes, they use "photons", but they don't rely on any properties of photons that require QFT to analyze), and as far as I know nobody has claimed that the experimental results contradict the predictions of standard QM. So the Schrodinger equation predicting the results is expected. (We went over this for a specific entanglement swapping scenario in a prior thread a while back and I explicitly showed how it worked for that case--I didn't explicitly write down the Schrodinger equation, but for the case under discussion the significant parts of the analysis are in the interactions at devices like beam splitters, which are treated as instantaneous unitary state transformations; that is basically a limiting case of Schrodinger dynamics when the interaction time scale is very short compared with the non-interacting propagation time scale.)
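(For concreteness, the kind of transformation meant here - a lossless 50:50 beam splitter acting on two mode operators, with the caveat that sign/phase conventions vary:

$$U=\exp\!\Big[i\frac{\pi}{4}\big(a^\dagger b + a b^\dagger\big)\Big],\qquad U^\dagger a\,U = \frac{a + i b}{\sqrt{2}},\qquad U^\dagger b\,U = \frac{b + i a}{\sqrt{2}},$$

i.e. a finite-time Schrodinger evolution exp(-iHτ/ℏ) applied with Hτ/ℏ held fixed as τ → 0.)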

What the experiments you refer to do do is continue to undermine alternative models that, while they match the predictions of standard QM in many areas, imply that at some point we will find that standard QM fails and an alternative that is more to the intuitive liking of that particular critic will come into play. But every time the boundaries are pushed further, standard QM is found to still work, despite the fact that its predictions become increasingly counterintuitive.
 
  • #11
DrChinese said:
Do you think all of the experimentalists I cited are wasting their time because the outcomes of their tests are as expected?
I should make clear, in view of my previous post just now, that of course I believe these experiments should be done. Expanding the boundaries of experimental confirmation of a theory, particularly a theory as counterintuitive as QM, is a valuable contribution to science.
 
  • #12
DrChinese said:
D. This is an accurate statement, but you have either misread it or misunderstood it. 1 & 4 had no prior relationship whatsoever - not "born" with any correlation to each other. At some time, they did develop such a relationship - while distant. And the spacetime distant decision to create that relationship can be effected at any time - before measurement of 1 & 4, after measurement of 1 & 4 (delayed choice), and even (crazily!) both before and after the creation of 1 & 4 in the same experiment.

So there can be no nonlocal "torque" from 1 to 4 or vice versa, and in fact they never need co-exist at any common time (regardless of reference frame). How can there be torque on something that doesn't even exist yet? In fact, at the time 1 is measured, there can be many potential future photons as candidates to become the entangled 4 photon to go with 1.
But this "torque" isn't defined until the bell measurement of (2,3) is made, and the result communicated, so what exactly is crazy about this? what is the objection to the "filtering or sub-ensemble" argument?

For ME, the "mystery/open problem" is: as the mechanism in bells ansatz obviously does not offer a viable explanation, while respecting the correlation - what other mechanism/understanding does? By mechanism, I do not ask for a hidden variable that restores determinism (imo this dream is doomed to fail since long), I only ask what the causal mechanism is that explains the "observed physics" at all locations; AND their correlation.

That you seek a "mechanism" is NOT the same as trying to restore realism or determinism, at least not for me, which I think is one persistent point of confusion in the discussions? mechanism is not same as determinism; it can also mean the mechanism for defining the dice (this is how i use the term).

DrChinese said:
Do you think all of the experimentalists I cited are wasting their time because the outcomes of their tests are as expected? Because this is amazing stuff, and is not being overlooked by most theorists. Each of the experiments I mention should be considered as raising the bar on theory.
For me these experiments only strengthen the argument that any remote influence isn't a reasonable explanation, if anyone thought there was still a slim chance (except of course when the adjective "non-local" simply denotes Bell inequality deviations; then sure, it's clearly "non-local" and no need to debate that).

/Fredrik
 
  • #13
Fra said:
what is the objection to the "filtering or sub-ensemble" argument?
There isn't one if you are using a statistical/ensemble interpretation to begin with.

However, if you are using a realist interpretation, where the quantum state describes the actual, physical state of individual quantum systems, then the objection is obvious: "filtering" and "sub-ensembles" have nothing whatever to do with the actual, physical state of individual quantum systems.

So before we can even decide whether this is an issue or not, we need to decide which kind of interpretation we are going to use.
 
  • #14
Fra said:
That you seek a "mechanism" is NOT the same as trying to restore realism
Why not? Wouldn't any "mechanism" have to be real?
 
  • #15
PeterDonis said:
Why not? Wouldn't any "mechanism" have to be real?
Because what is "real" or "primary ontology" depends on the explanatory framework of choice or interpretation, and what I think most people mean by "realism" in the discussions of bell's theorem and entanglment is not something I see as a mandatory primary notion.

The original ideas from the early days of QM to restore determinism and realism, such as those implicit in Bell's ansatz, are "dead" for me.

Just because Bell cleanly shot down one potential solution does not mean we suddenly have a satisfactory understanding of the mechanisms behind entanglement, and it does not mean that those still seeking understanding haven't learned the lesson of Bell's inequality.

What I personally see as a potential "mechanism", without "realism", is to unravel the logic behind apparently random physical interactions. And then I don't ask "why does the dice land on this face instead of that face". Incompleteness is an early acceptance in any view that is observer-centered, like QBist-inspired views. So the question is more like: what determines the bias of the dice? To just have that as manually tuned input is not an explanation at all; it is just 100% fine-tuning. Randomness does not mean there is an actual hidden variable. Randomness is just the observer's inability to predict - not necessarily out of Bell-style ignorance, but due to constraints of commutation rules in communication. But what determines the bias of the dice is a "decision problem" in my perspective; this is why I keep associating to evolutionary game theory and other things, as it is within those model abstractions that I see the "mechanism". The mechanisms there would in a way be "real", but not in the sense most people mean. So saying it's not real avoids confusion.

/Fredrik
 
  • #16
Fra said:
The mechanisms there would in a way be "real", but not in the sense most people mean. So saying it's not real avoids confusion.
I'm not sure I see how. If you're going to insist on using "real" with some idiosyncratic meaning of your own, the best way to avoid confusion would seem to me to be to not use the word "real" at all, but instead say what you actually mean using words whose meanings are less likely to be misunderstood. (Similar remarks would apply to other ambiguous words.)
 
  • #17
PeterDonis said:
A. So the Schrodinger equation predicting the results is expected. (We went over this for a specific entanglement swapping scenario in a prior thread a while back and I explicitly showed how it worked for that case--I didn't explicitly write down the Schrodinger equation, but for the case under discussion the significant parts of the analysis are in the interactions at devices like beam splitters, which are treated as instantaneous unitary state transformations; that is basically a limiting case of Schrodinger dynamics when the interaction time scale is very short compared with the non-interacting propagation time scale.)

B. What the experiments you refer to do do is continue to undermine alternative models that, while they match the predictions of standard QM in many areas, imply that at some point we will find that standard QM fails and an alternative that is more to the intuitive liking of that particular critic will come into play. But every time the boundaries are pushed further, standard QM is found to still work, despite the fact that its predictions become increasingly counterintuitive.

A. I recall that (although we differ still on the subject). But to discuss this further properly requires I start a new thread... :smile:

B. :thumbup:
 
  • #18
Fra said:
A. But this "torque" isn't defined until the bell measurement of (2,3) is made, and the result communicated, so what exactly is crazy about this?

B. What is the objection to the "filtering or sub-ensemble" argument?

C. For me these experiments only strengthens the arguments that any remote influence isn't a reasonable explanation if anyone thought there was still a slim chance...

D. (except of course when the adjective "non-local" simply denotes bell inequality deviations, then sure it's clearly "non-local" and no need to debate that).

/Fredrik
A. Now seriously, how can that make sense? The 2 & 3 BSM is performed (or can be) AFTER 1 & 4 are measured and no longer exist. At the time they cease to exist, there is nothing whatsoever that connects them.

And by what scientific standard is the communication (or its timing) of the results of ANY experiment (with components that are space-like separated) relevant? Only in the convoluted explanations of those who deny the obvious is this a factor. Because guess what: none of the actual cited experiments list this as an issue.

B. The objection is simple: the same criteria are applied for the runs in which Product (Separable) State statistics are generated, and for those that generate Entangled State statistics. And the "filter" is: arrival of photons 2 & 3 within a defined time window. In other words, there essentially is no filtering. The "sub-ensemble" is really all events that meet this criterion, for either statistical set. Calling it "filtering" is misleading, and a red herring that reflects a basic misunderstanding of the experiment. All scientific experiments define what events are to be considered. These are no different.

C. Again, please be serious. Experiments in which the experimentalists think a form of "action at a distance" is being demonstrated cannot possibly be considered the opposite, as you claim to believe. It's like saying that every experiment that shows the speed of light is c strengthens your belief in the opposite.

The issue we all agree on, pretty well, is that in these experiments featuring a complex quantum context (typically Alice, Bob and Victor): since time, distance and order are not limiting factors as in classical experiments, there is something called "quantum causality" instead. It does not follow classical causality, and seems to defy any reasonable descriptive mechanism. But the predictive rules do work nicely. So of course when the term "action at a distance" is invoked, by me or anyone else, no one really knows "what action" is occurring; and no one knows when or where it is occurring. Because varying the order or location seems to make no difference. You could say that quantum nonlocality and quantum causality are one and the same.

D. I agree with your comment here, we have come to accept this description as standard science in the QM subforums.
 
  • #19
DrChinese said:
If the argument is "it respects/reproduces the Schrodinger equation and therefore all predictions of QM", why do they go to such lengths to mention entanglement or Bell at all?

I don't think they are mentioning Bell and entanglement explicitly as an argument, per se; it's just something they wanted to model.

DrChinese said:
And by the way, if I read a paper on entanglement that even mentions the Schrodinger equation

I am not mentioning it specifically in relation to entanglement. I am just noting that the paper uses a formulation whose aim was to derive the Schrodinger equation from assumptions about stochastic processes, implying that the predictions of nonrelativistic quantum mechanics can be gotten out of these stochastic processes in a way that makes the quantum theory and the stochastic one equivalent.

My argument then was that if the reasons that "normal" entanglement correlations occur are not inherently different from the reasons correlations occur in entanglement swapping or GHZ, then I see no reason why the theory would not replicate GHZ and entanglement swapping.

DrChinese said:
This is an accurate statement, but you have either misread it or misunderstood it. 1 & 4 had no prior relationship whatsoever - not "born" with any correlation to each other. At some time, they did develop such a relationship - while distant. And the spacetime distant decision to create that relationship can be effected at any time - before measurement of 1 & 4, after measurement of 1 & 4 (delayed choice), and even (crazily!) both before and after the creation of 1 & 4 in the same experiment.

My point with that quote is that the determinants of the entanglement correlation are, in general, the measurement settings and the internal phase variable. The fact that particles 1 & 4 did not initially interact doesn't matter; what matters is the internal phase variable, which may just happen to have been set at some separate initial event. I was then suggesting that if that stochastic formulation is based on an equivalence between the Schrodinger equation and the equations describing the stochastic processes' behavior, then the internal phase and its role should be baked into the stochastic theory, and entanglement swapping and GHZ should be reproducible just as conventional entanglement is. I don't see why these experiments wouldn't be reproduced.
 
  • #20
iste said:
My point with that quote is that the determinants of the entanglement correlation are, in general, the measurement settings and the internal phase variable. The fact that particles 1 & 4 did not initially interact doesn't matter; what matters is the internal phase variable, which may just happen to have been set at some separate initial event. I was then suggesting that if that stochastic formulation is based on an equivalence between the Schrodinger equation and the equations describing the stochastic processes' behavior, then the internal phase and its role should be baked into the stochastic theory, and entanglement swapping and GHZ should be reproducible just as conventional entanglement is. I don't see why these experiments wouldn't be reproduced.
What determinants of entanglement correlation? What measurement setting? And what the heck is an "internal phase variable"? We're talking about photons created from distant independent sources that never come in contact with each other becoming entangled due to a mutually remote agent when they no longer exist.

And you think that is the same thing as what Bell was talking about in 1964 when he wrote about spin 1/2 entanglement? You don't see the giant leaps that have occurred in the last 30 years? Obviously the authors you have cited don't see much new, and I guess they take it for granted that their novel interpretation can handle these complex cases of entanglement, swapping and delayed choice.

I would call a nonlocal stochastic interpretation of QM that matches our understanding circa 1980 ... umm, I'm not sure what I would call it. But at least it admits of nonlocal action at a distance ("torque"). So at least it's not local realistic. :smile:
 
  • #21
DrChinese said:
to discuss this further properly requires I start a new thread... :smile:
And you have now done that, so further discussion of this issue will be there.
 
  • #22
DrChinese said:
What determinants of entanglement correlation? What measurement setting? And what the heck is an "internal phase variable"?

Yes, those two things are the two determinants of the entanglement correlation when calculated using the local amplitudes in the Unnikrishnan papers I linked alongside that quote.

Yes, this is the mechanism at the level of the quantum description, not beneath it. But my point is that all of the entanglement correlations rely on the same set of factors. The quote is saying that the internal phase variable is a deeper explanation for the entanglement correlations than the initial interactions, and so it is the underlying mechanism for why particles that never met are correlated. Obviously this explanation still carries all of the mysteries of non-local quantum mechanics. My point is just that if a stochastic formulation is based on a derivation of the Schrodinger equation from stochastic processes (and, inversely, on Schrodinger solutions being equivalent to stochastic processes), then the mysterious determinants of the correlations should be embedded in the stochastic description and work in the same way at a formal level.

Maybe the formulation is not perfect, but if its predictions for conventional entanglement are correct, and entanglement swapping works by the same mechanism at the level of the quantum formalism, then I don't see why the stochastic solutions shouldn't also produce the same behavior. I'm not arguing for some detailed mechanistic way of explaining away these correlations, just that it seems reasonable that the model would also reproduce the predictions for entanglement swapping if it does so for the more 'normal' entanglement.

If I'm honest, the jump to a stochastic model producing these quantum correlations at all is much, much bigger than the subsequent jump to entanglement swapping - the author of the paper actually says in their thesis (and I agree) that "the physical explanation for such a specific correlation remains baffling" - so I don't really see the rhetorical pull of the entanglement swap scenario over and above this; it is ridiculous enough without needing to talk about entanglement swapping. And again, at the level of the quantum formalism, the mechanisms don't seem different for swapping vs. 'normal' - or vs. the delayed versions either, which don't have the same rhetorical power against a stochastic formulation that doesn't have a real, physical wavefunction or collapse.

DrChinese said:
interpretation of QM that matches our understanding circa 1980 ... umm, I'm not sure what I would call it.

But QM itself hasn't changed since the 1980s, so if a formulation like Bohmian mechanics reproduces quantum mechanical predictions, that shouldn't matter. I would say stochastic mechanics is like Bohmian mechanics in that sense - it is an interpretation, but it is also a mathematical formulation of quantum behavior which may be just as complete in terms of empirical equivalence as Bohmian mechanics. The multi-time correlation issue seems to be the only glaring incorrect prediction people have found in stochastic mechanics. I just don't see a strong reason to believe it would not reproduce entanglement swapping.
 
  • #23
DrChinese said:
A. Now seriously, how can that make sense? The 2 & 3 BSM is performed (or can be) AFTER 1 & 4 are measured and no longer exist. At the time they cease to exist, there is nothing whatsoever that connects them.
It makes sense because the "torque" is statistical. You can't define it for a single individual photon. And the "2 & 3 picking/filtering" of course changes these statistics.

(*) I guess what disturbs you about this "explanation" is that you don't see that the required "pre-correlation" is possible without a Bell-inequality-violating HV? If so, your issue is well taken, but that is a separate point.
DrChinese said:
And by what scientific standard is the communication (or its timing) of the results of ANY experiment (with components that are space-like separated) relevant? Only in the convoluted explanations of those who deny the obvious is this a factor. Because guess what: none of the actual cited experiments list this as an issue.
In any scientific experiment the data must be collected before inferences are made.

I'm curious what is "the obvious here"? Is it that there MUST be some sort of action at a distance? Is that what is obvious? (I'm not teasing you, just want to make sure i don't misunderstand)
DrChinese said:
B. The objection is simple: the same criteria are applied for the runs in which Product (Separable) State statistics are generated, and for those that generate Entangled State statistics. And the "filter" is: arrival of photons 2 & 3 within a defined time window. In other words, there essentially is no filtering. The "sub-ensemble" is really all events that meet this criterion, for either statistical set. Calling it "filtering" is misleading, and a red herring that reflects a basic misunderstanding of the experiment. All scientific experiments define what events are to be considered. These are no different.
In what you wrote here, what is the actual objection? I don't get it. If you don't want to call it "filtering" but instead sub-ensembling, that's fine with me. But this can't be the objection itself? Is it the same as (*)?
DrChinese said:
C. Again, please be serious. Experiments in which the experimentalists think a form of "action at a distance" is being demonstrated cannot possibly be considered the opposite, as you claim to believe. It's like saying that every experiment that shows the speed of light is c strengthens your belief in the opposite.
Do you by "action at a distance" mean "non-locality" (al la Bell)? then it's not what i meant. I though reserving ONE badass adjective for Bell was fine and perhaps enough, you want both? :smile:

DrChinese said:
The issue we all agree on, pretty well, is that in these experiments featuring a complex quantum context (typically Alice, Bob and Victor): since time, distance and order are not limiting factors as in classical experiments, there is something called "quantum causality" instead. It does not follow classical causality, and seems to defy any reasonable descriptive mechanism. But the predictive rules do work nicely.
We agree, and I think we are both driven by trying to "understand" what is happening; we don't question that it in fact happens, thanks to all the excellent experiments. We just look for understanding along different directions.

/Fredrik
 
  • #24
Fra said:
But what determines the bias of the dice is a "decision problem" in my perspective; this is why I keep associating to evolutionary game theory and other things, as it is within those model abstractions that I see the "mechanism".

This may have absolutely no relevance to your views, but I just thought it was a notable coincidence that there is some game theory in the formulation underlying the paper. The time-reversibility aspect I was talking about is formulated as coming from a Nash equilibrium.

https://scholar.google.co.uk/scholar?cluster=663086951774709679&hl=en&as_sdt=0,5&as_vis=1 (Thesis for the formulation used in the OP paper, which also cited the author)
 
  • #25
iste said:
This may have absolutely no relevance to your views, but I just thought it was a notable coincidence that there is some game theory in the formulation underlying the paper. The time-reversibility aspect I was talking about is formulated as coming from a Nash equilibrium.

It's certainly not a coincidence!

For me the decision-theoretic and gaming abstractions are simply natural from my QBist-derivative interpretation. The associations come naturally. You are an exception, but otherwise very few on here seem to think along these lines...

Older thread I posted on the topic

Simplified, one can think of Nash equilibrium as entropic reasoning in strategy/action or belief space. But just like many entropic arguments - where one tries to explain interactions, for example Verlinde's entropic gravity, as having a stochastic origin - there is always the problem that you need a state space for this, and where does that come from? Same with game theory: the space of possible strategies, etc. For any given setting you can use the extremal principles, but one can do the ideas a disservice by simplifying too much. This is where the evolutionary twist complicates, but also perhaps cures. I have no illusion that we should take the equations from economic theory and just have them "explain" QM. But the paradigm of interacting players, like agent-based modelling, has some IMO advantages over traditional system dynamics. So the associations to game theory here are at this point largely thinking tools, inspiration to analyse old problems from new angles.

/Fredrik
 
  • #26
Fra said:
It's certainly not a coincidence!

A. Older thread I posted on the topic

B. This is where the evolutionary twist complicates, but also perhaps cures.

/Fredrik

I have adopted the DrC paradigm of responses as it seems quite efficient, ha.

A. Very interesting: Nashian equilibrium is incompatible with quantum physics. When I think about it, this seems very intuitive. Of course it also seems on face value to contradict the last link I gave, with the following quote:

"Finding the Nash equilibrium of a stochastic optimal control problem is the quantum-mechanical counterpart to Hamilton’s principle of least action."

Obviously, the first instinct is to say that the Nash equilibria in these papers are being applied in different contexts: to the measurements of a Bell scenario vs. to the derivation of quantum theory from stochastic processes. The players in the paper you gave are, I assume, quantum observers, while the players in the thesis are not quantum observers - they are pre-quantum in some sense, prior to the construction of quantum behavior and of quantum scenarios like the Bell ones. As the paper notes, the same technique has been used in classical finance problems. I cannot say for sure whether this instinct is accurate without further research.

At the same time, I have to remember that the Markovian stochastic formulation used in the paper is not exactly the same as quantum mechanics, as I mentioned in the OP. It has at least one explicit area of wrong predictions, related to multi-time correlations, when measurement is not implied in the description. These, I have said, I think are related to Markovianity, because Markovianity upholds realism for trajectories, which is not the case for quantum mechanics. Bohmian mechanics upholds realism too, with its deterministic trajectories, and seems to have incorrect multi-time correlations. Both the Bohmian and stochastic theories seem able to "fix" this problem by including the measurement interaction explicitly in the description, though I do not think this is a desirable way of fixing it; imo, non-Markovianity (if it actually is a valid fix) is the more natural and parsimonious route, and it seems to have a benefit in regard to locality in addition.

That was a long paragraph, but essentially the point I am coming to is that the Markovian stochastic mechanics in the paper is probably incompatible with quantum theory in the same way implied by your Nash paper, in the specific case of measurements across time. Markovian stochastic mechanics will not be able to violate any kind of Bell inequality or similar, e.g. Leggett-Garg, when it comes to the unmeasured Markovian diffusion in time - and neither should Bohmian mechanics. So according to your paper, the measurement games for these trajectories could not be Nashian (if that is the right adjective). (A mistake in this paragraph is addressed in the next post.) Here is a quote from the paper in post #3, where I corrected a mistaken link/citation:

"Since Bohmian mechanics has a joint probability distribution for the po-
sitions of particles at all times, the correlation between these positions at
distinct times cannot violate a CHSH inequality and thus BM appears to be
in conflict with the standard theory. The defenders of BM have argued that
their theory nevertheless reproduces the same predictions as SQM once the measurement process is adequately accounted for [21, 12]."


Which is the same in Markovian stochastic mechanics, because it implies realism. And when I talk about realism I mean it purely in the statistical sense of relating a unique joint probability distribution to its marginals and vice versa; so imo the metaphysical implications of "realism" need further interpretation.
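A quick numerical way to see why a single joint distribution caps these correlations (my sketch: the standard LHV bound, applied to two pairs of time-like observables rather than to spatial settings):

Python:
import itertools

# If positions at all times have one joint distribution, every run assigns
# definite values a, a2 (two "early" observables) and b, b2 (two "late"
# observables). Enumerate every deterministic assignment:
best = 0
for a, a2, b, b2 in itertools.product([-1, 1], repeat=4):
    S = a * b + a * b2 + a2 * b - a2 * b2
    best = max(best, abs(S))
print(best)  # 2

# Mixtures of these assignments are convex combinations, so no such joint
# distribution can push |S| past 2, while QM reaches 2*sqrt(2).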

If this Markovian assumption of stochastic mechanics were dropped, would there need to be a different kind of technique, and a different kind of game, players, or player strategies than the Nash-equilibrium one? I have no clue. Maybe? Maybe not? Markovianity is assumed about the stochastic process, though the game being played here is not explicitly a scenario of observers related to the empirical predictions of quantum mechanics. (Edit: looking it up, it appears from briefly following the sources back that the technique can be used for both Markovian and non-Markovian systems, but I probably haven't read into it well enough - the Oksendal & Sulem papers.)

B. Very interesting, could you elaborate on this?
 
  • #27
Fra said:
/Fredrik

Just realized I made a mistake in my last post, at the following bit:

iste said:
specific case of measurements across time...

So according to your paper the measurement games for these trajectories could not be Nashian (if that is the right adjective).

Bohmian and stochastic mechanics measurement games would not be Nashian, assuming that the measurement interactions "fix" the empirical predictions. But the unmeasured behavior across time cannot violate the Bell inequalities... which would then fit how the Nashian criterion is incompatible with quantum mechanics... but then again, can you even formulate a game being played if there is no measurement happening??!!
 
  • #28
This direction of thinking (the game-theoretic / decision-theoretic angle) is broad, and there are many subdirections within it. Many papers in the field use ideas from QM and ponder what these things would mean for a game, but I (as does one of the papers I quoted in the other thread) try to do it the other way: we are trying to find new insights into QM from game theory, not the other way around...

iste said:
I have adopted the DrC paradigm of responses as it seems quite efficient, ha.

A. Very interesting: Nashian equilibrium is incompatible with quantum physics. When I think about it, this seems very intuitive. Of course it also seems on face value to contradict the last link I gave, with the following quote:

"Finding the Nash equilibrium of a stochastic optimal control problem is the quantum-mechanical counterpart to Hamilton’s principle of least action."
The paper I quoted says a Nash equilibrium of a "classical game" is incompatible with quantum mechanics, as it would not violate the Bell inequality. But "classical games" are not the ultimate games.

Other papers consider Nash equilibria when "quantum strategies" are allowed; this is often used to take QM insights and bring them into game theory. The paper above tried the other way around. Quantum strategies mean, for example, that each player has to consider other agents in superpositions of strategies; players can also be "entangled".

But we seek insight into the "meaning" of superposition and entanglement. I think the deeper game-theoretic insights into quantum strategies are to be found elsewhere, BUT the fact that quantum strategies often outperform classical ones is a hint in an evolutionary context. Nature is likely somehow more stable when quantum strategies exist. This resonates well with the standard understanding, as we know that quantum principles explain the stability of atoms etc. But what if we could turn this argument around and say that stability predicts quantum interactions? Then we could reach a deeper understanding of quantum weirdness; maybe it's not that weird after all?
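To make the "outperform" claim concrete, here is a minimal numerical sketch (mine, not from the quoted papers) of the CHSH game: players win a round when a XOR b = x AND y; the best classical strategy wins at most 75% of rounds, while sharing a Bell pair and measuring at the standard optimal angles wins cos²(π/8) ≈ 85.4%.

```python
# CHSH game with a shared Bell state (|00> + |11>)/sqrt(2) and real
# measurement bases at the usual optimal angles:
# Alice uses 0 or pi/4, Bob uses pi/8 or -pi/8, depending on x, y.
import numpy as np

def basis(theta):
    """Orthonormal measurement basis at angle theta in the real plane."""
    return (np.array([np.cos(theta), np.sin(theta)]),
            np.array([-np.sin(theta), np.cos(theta)]))

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
alice = [0.0, np.pi / 4]
bob = [np.pi / 8, -np.pi / 8]

total = 0.0
for x in (0, 1):
    for y in (0, 1):
        win = 0.0
        for a, va in enumerate(basis(alice[x])):
            for b, vb in enumerate(basis(bob[y])):
                amp = np.kron(va, vb) @ phi_plus  # Born amplitude
                if (a ^ b) == (x & y):
                    win += amp**2
        total += win / 4  # settings x, y drawn uniformly

print(total, np.cos(np.pi / 8) ** 2)  # both ~0.8536 > 0.75 classical bound
```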

iste said:
B. Very interesting, could you elaborate on this?
Elaborating on this in detail would mean discussing ideas on unsolved problems, which is far beyond interpretational discussions and so not allowed on the forums. There are, however, bits and pieces of ideas published by a range of authors, which are merely pieces of a big unsolved puzzle.

But without going into speculation and details, what I referred to is just the idea that the population and nature of the participants in the game (not a classical game) are subject to evolution. So the conditions for the existence of a Nash equilibrium are likely violated in the evolutionary (cosmological) perspective, but it is plausible that at any stage of evolution there would exist an approximate Nash equilibrium on time scales sufficiently short relative to the evolutionary time scale.

The "nash equilibrium" of strategies is guaranteed to exist only for finite player games with compact strategies. If that exists, one can normally associate the dynamics by timeless laws acting in configuration space. But if this idealization doesn't hold, because the population of participants are changing, or because the strategy space are inflating (like during evolution of agents), then then this equilibrum can be approximate only, over timescales, short relative to the evolutionarty scale where the "population" of players are approximately "stationary". This problem is then analogous to time as a parameter of change, and cosmological time. The laws of microsphysics is effectively timeless according to observation, but wether they are "timeless" over the entire lifetime of the universe, remains unknown.

This is conceptually related also to these ideas
Law without law: from observer states to physics via algorithmic information theory
-- https://arxiv.org/abs/1712.01826

This intersects with the foundations of probability theory, game theory, decision theory and algorithmic information theory... it's really interesting, with many open problems; all the papers are scratching the surface, which is why it is easy for critics to dismiss. The promise is in the big puzzle IMO.

/Fredrik
 
  • #29
Fra said:
The paper I quoted says the Nash equilibrium of a "classical game" is incompatible with quantum mechanics, as it would not violate the Bell inequality. But "classical games" are not the ultimate games.

Other papers consider Nash equilibria when "quantum strategies" are allowed; this is often used to take insights from QM and bring them into game theory

Alright, this is fair enough; I didn't even consider the notion of Nash equilibrium for quantum strategies. Though now I am more confident that the Nash equilibria being considered in my paper and in yours are being applied to two separate kinds of topics.

Fra said:
Nature is likely somehow more stable when quantum strategies exist. This resonates well with the standard understanding, as we know that quantum principles explain the stability of atoms etc. But what if we could turn this argument around and say that stability predicts quantum interactions? Then we could reach a deeper understanding of quantum weirdness; maybe it's not that weird after all?

This is definitely an interesting thought, and the last line chimes with the kind of direction I would prefer in talking about quantum interpretation - rather than trying to invent exotic ontologies to explain quantum phenomena, moving towards approaches that deflate it, so that it is not as unnatural as our initial impressions seem to suggest.

Fra said:
But without going into speculation and detail ... The laws of microsphysics is effectively timeless according to observation, but wether they are "timeless" over the entire lifetime of the universe, remains unknown

Straight over my head, ha! Not at all familiar with these ideas but they sound interesting. I was aware of that paper but have never really looked at it deeply. Interesting coincidence though: the author of that paper appeared on a podcast in conversation with the author of the free energy principle papers I linked you a while ago, in the thread about the Barandes papers.

 
  • #30
iste said:
This is definitely an interesting thought, and the last line chimes with the kind of direction I would prefer in talking about quantum interpretation - rather than trying to invent exotic ontologies to explain quantum phenomena
...
Straight over my head, ha! Not at all familiar with these ideas but they sound interesting.
A Perimeter talk https://pirsa.org/11100113
"Does Time Emerge from Timeless Laws, or do Laws of Nature Emerge in Time?"

The ANGLE of analysis is unusual, but it is not really complicated. The basic idea is that the "laws of physics" are inferred via a scientific process; for subatomic physics we repeat short experiments many times to get statistics. So the entire "inference" of the state spaces and "laws" takes place on a timescale that is very short. No, there are no signs that the laws have changed during our observable universe, but the idea is that if they did change (or mutate, as Smolin thinks, during the big bang or during the unification phase in the first split seconds), it could potentially cure a fine-tuning problem of the de facto effectively timeless laws we consider today.

Now the gaming analogue could be that as participants in the game evolve, come and go, at some point certain participants with the ability to encode certain strategies may be more fit, and are therefore the ones that populate the effective Nash equilibrium in the "games". And the "big game" is then supposedly nothing but regular reality and physical interactions.

So the gaming perspective, with inferring gamers trying to survive, is (without going into details!) simply an ALTERNATIVE ANGLE of analysis of the evolution of physical law: why the elementary particles exist and why they have these masses and parameters. It is the SAME problem. No one has the solution in either paradigm. But new angles may give new insights. This is the only generic point. The questions you are led to pose also become different from different angles, and from some angles maybe they are easier to solve? So why not explore?

Now whether the agent-based model or gaming paradigm is exotic, or whether "agents" are "exotic", is a matter of opinion. But for me the "first-person perspective" (question 1 in the YouTube video you posted with Müller) is about as "natural" or intuitive as anything gets? Much more intuitive than system dynamics.

/Fredrik
 
  • #31
Fra said:
A Perimeter talk https://pirsa.org/11100113
"Does Time Emerge from Timeless Laws, or do Laws of Nature Emerge in Time?"


Thanks for the link! I'll definitely have to take a look and digest.
 
  • #32
I just wanted to update my thoughts on the question by Fredrik about how the "Barandes transition matrices (or lack thereof) emerge".

It seems to me that if the Barandes non-Markovianity condition is related to correct multi-time correlations via its interferences, then the amelioration of the incorrect Nelsonian/Bohmian multi-time correlations by measurement implies that the non-commutativity of position and momentum is in some way the source of the non-Markovianity, since clearly measurement-related disturbance would be what ameliorates the faulty correlations when measurement is explicitly accounted for.
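As a minimal numerical sketch of indivisibility in this sense (an assumed toy two-level example of mine, not anything taken from the Barandes papers): the unitarily generated transition matrix Γ(t)_ij = |U(t)_ij|² is a perfectly good stochastic matrix at every time, yet no stochastic matrix can bridge two of its times.

```python
# Toy "Rabi"-type two-level family: Gamma(t) below is doubly stochastic
# for all t, but the process it describes is indivisible: the unique
# candidate for an intermediate Markov step has a negative entry.
import numpy as np

def gamma(t, omega=1.0):
    c, s = np.cos(omega * t) ** 2, np.sin(omega * t) ** 2
    return np.array([[c, s],
                     [s, c]])

t1, t2 = 1.0, 2.0
G1, G2 = gamma(t1), gamma(t2)

# If the process were divisible, M = G2 @ inv(G1) would be stochastic;
# since G1 is invertible, this M is the only candidate.
M = G2 @ np.linalg.inv(G1)
print(M)               # contains a negative entry (~ -0.285)
print(M.sum(axis=1))   # rows still sum to 1
print((M >= 0).all())  # False -> no valid intermediate Markov step exists
```

For a genuinely Markovian family the same check returns a non-negative M, so the negativity is what signals the non-Markovianity here.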

I then saw that the effects of non-commuting Non-Selective Measurements (NSMs) on temporal behavior are described in the following paper:

https://arxiv.org/abs/quant-ph/0306029

"So the NSM of the position ˆx at time t = 0 not only has changed immediately the probability distribution of the momenta p, as we have analyzed in b1), but it has changed also the probability distribution of x at any time t > 0. Now the reader may wonder why the probability distributions ρP (x|t) and ρM (x|t) were the same at t = 0, see (3.16)-(3.17), but they are different at any time t > 0. The explanation is that during the evolution, which is given by ˙x = p and couples x with p, the distributions in x are influenced by the initial distributions in p which, as shown in (3.18) and (3.19), are different in the two cases in which we perform, case b), or not perform, case a), the NSM of ˆx at t = 0. So the NSM of ˆx influences immediately the distribution of probability of the conjugate variable p. Next, since the momenta p are coupled to x via their equations of motion, the changes in the distribution of p are inherited by the distribution of the positions at any instant of time t > 0."

So the authors observe how the non-commutativity means the measurement disturbs the behavior at other instants of time. It seems that this kind of disturbance would be what ameliorates the faulty Nelsonian/Bohmian multi-time correlations. Maybe then the Barandes non-Markovianity and its interferences are linked to the non-commutativity which also exists in the Barandes theory, as in QM: measurements at one time disturb the trajectory / transition statistics for other times, violating the (total) joint probability consistency conditions for the trajectory's transition matrices.
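Here is a hedged numerical sketch of exactly this mechanism (mine, not the paper's): modeling the NSM of x̂ as an ensemble of Gaussian Kraus collapses leaves the position distribution at t = 0 untouched, but broadens the momentum distribution, and the free evolution ẋ = p then inherits that disturbance as a larger position spread at t > 0. All parameter values are assumptions chosen for illustration.

```python
# 1D free particle (hbar = m = 1): compare the position variance at time T
# with and without a non-selective Gaussian position measurement at t = 0.
import numpy as np

rng = np.random.default_rng(0)

# Grid and initial Gaussian wavepacket with position std s0.
N, L = 2048, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
s0, sigma, T = 1.0, 0.3, 2.0   # sigma = measurement resolution (assumed)
psi0 = np.exp(-x**2 / (4 * s0**2))
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

def evolve_free(psi, t):
    """Exact free evolution: multiply by exp(-i k^2 t / 2) in k-space."""
    return np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi))

def variance(rho):
    rho = rho / (np.sum(rho) * dx)
    mean = np.sum(x * rho) * dx
    return np.sum((x - mean)**2 * rho) * dx

# Case a): no measurement, just free evolution to T.
rho_a = np.abs(evolve_free(psi0, T))**2

# Case b): NSM at t = 0 as an ensemble of Kraus collapses
# K_y ~ exp(-(x - y)^2 / (4 sigma^2)), outcome y Born-distributed.
rho_b0, rho_bT = np.zeros(N), np.zeros(N)
for _ in range(2000):
    # For a Gaussian psi0 the outcome law is exactly this convolution.
    y = rng.normal(0.0, s0) + rng.normal(0.0, sigma)
    psi = np.exp(-(x - y)**2 / (4 * sigma**2)) * psi0
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    rho_b0 += np.abs(psi)**2
    rho_bT += np.abs(evolve_free(psi, T))**2

print("Var[x] at t=0: no NSM", variance(np.abs(psi0)**2),
      "| with NSM", variance(rho_b0))
print("Var[x] at t=T: no NSM", variance(rho_a),
      "| with NSM", variance(rho_bT))
```

With these assumed parameters the t = 0 variances agree (both ≈ 1), while at t = T the unmeasured case gives ≈ 2 and the measured case roughly 13: the NSM's immediate disturbance of p, inherited by x at later times, just as in the quoted passage.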
 
