A new realistic stochastic interpretation of Quantum Mechanics

  • #456
pines-demon said:
Sure, but can you or Barandes provide a simple example of INMS processes? What do they look like?

Again, a simple example of how that memory effect emerges would be great.
It depends on what you think is simple, but I think finance agents are simple and graspable.

Let's say we consider the dynamics of the market value of a stock. A simple model is that the market value is somehow determined by the collective expectations of the agents. Each agent can have a different level of sophistication in its predictions, but an agent with memory will have some memory depth and use part of the history in its evaluation of the expectation. This means that the probability of a given value tomorrow does not depend only on what the value is today, but also on what it was in the past. Realistically, though, an agent does not remember or know ALL of history; it considers or retains only part of it. This can be called "memory depth", but one can also imagine agents whose memory depth relates to their capacity, so that every agent has a lossy retention of the past, which influences its expectations of the future.

So the market memory is this collective effect where the agents' memory of the past affects the valuations. This makes the transition from one value to another emergent and non-Markovian.
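A minimal toy sketch of that idea (my own illustration in Python, not taken from Barandes or from any finance paper; the update rule and all numbers are made up): each agent forms its expectation from its own finite window of past prices, so the distribution of tomorrow's price is not fixed by today's price alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy market: each agent predicts the next price from its own finite window of
# past prices ("memory depth"). The price move is driven by the average
# expectation plus noise. Because agents look further back than one step, the
# distribution of tomorrow's price is NOT determined by today's price alone.
n_agents, n_steps = 50, 500
memory_depth = rng.integers(2, 10, size=n_agents)   # lossy, agent-dependent memory
prices = [100.0] * 10                                # some initial history

for _ in range(n_steps):
    history = np.array(prices)
    # Each agent's expectation: an average over only the part of history it retains.
    expectations = np.array([history[-d:].mean() for d in memory_depth])
    drift = expectations.mean() - history[-1]        # collective pull toward expectations
    prices.append(history[-1] + 0.5 * drift + rng.normal(scale=0.5))

print(prices[-5:])
```

Conditioning only on the current price throws away the history the agents are actually reacting to, which is the memory effect described above.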

Non-divisibility arises because the emergent dynamics depends critically on the collective behaviour of the agents, including feedback loops. Arbitrarily dividing it into individual possible strategies and then just averaging them will miss the feedback between agents, and between the agents and the collective.

This naturally has a cooperative element, as any part of the system depends on its immediate surroundings. So even a "selfish agent" can't ignore that it depends on its environment. This is where emergence and self-organisation come in.
pines-demon said:
are you proposing some sort of "hard" emergent macroproperty that cannot be explained from the micro theory?
The above is the simple idea. But the critical part, to take this to the next level, is HOW the agents revise their expectations based on the history. That is deeper water, going beyond the basic question of getting a handle on what "indivisibility" and non-Markovian can mean, and if we mix that discussion with the meaning of non-divisible and non-Markovian I think it will be a mess. The agent's memory can be modelled simply as an explicit, actual memory, or it can be implicit, as when the agent does reinforcement learning; the agent itself may then evolve and change its behaviour (learn), which is an "implicit memory". But it still makes the process non-Markovian.

A random reference... but finance papers aren't asking the same questions as we do, so the analogies are never perfect...
Non-Markovian Dynamics for Automated Trading

Edit: Forgot to mention the obvious as well: the "memory" of the agent is in a sense a set of "subjective hidden variables" that does not satisfy Bell's assumptions. And one can IMO very well consider the agents' actions to be partly stochastic. But the market dynamics is an interaction of these processes. That is the point.

I think this is "easy" in that it requires no "weirdness". The only challenge is to "translate" agent processing, agent encoding, and agent phenomenology into "physics". But this is just a "framework"; it would be a mistake to think that any of this implies that particles "think" like humans or process information like humans. It can be "implicit", and even related to evolutionary learning, and thus to "memory effects".

/Fredrik
 
Last edited:
  • Like
Likes JC_Silver
  • #457
Another noob question, as always: how would QFT look under an INMS interpretation? If particles have definite position/momentum, how would field theories behave? The same? Different? Better? Worse?
I ask because QFT is a powerful tool that assumes particles are the result of disturbances of a field and not classical particles, as far as my education took me.
 
  • #458
JC_Silver said:
Another noob question, as always: how would QFT look under an INMS interpretation? If particles have definite position/momentum, how would field theories behave? The same? Different? Better? Worse?
I ask because QFT is a powerful tool that assumes particles are the result of disturbances of a field and not classical particles, as far as my education took me.
Barandes has claimed, though to my knowledge never published on this, that his approach generalizes to infinitely many configurations, and that his dictionary approach then maps to QFT.
 
  • Like
Likes pines-demon and JC_Silver
  • #459
JC_Silver said:
I'm not gonna lie, this is what gets me about the non-Markovian process, because a dice roll isn't a non-Markovian process, it's a regular stochastic process. Since I'm not well versed in stochastic processes of any kind and I can't find good sources on non-Markovian processes online that are not by Barandes (as we used to say in the long past, my Google-fu isn't strong enough), I'm left not understanding exactly why the particle has definite positions between division events, because as far as I understand, the dice roll also doesn't exist between one roll and the next.

Again, sorry for bothering >.<
When modeling a physical system, one should reflect on whether the model was restricted to a subset of the system's phase space in order to analyze its behavior. When only a subset of the system's coordinates is examined, the system's behavior might appear "non-Markovian"; the term "apparently non-Markovian" would then be more suitable.
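A minimal sketch of this point (my own toy example, not from the thread): the full state below is Markovian by construction, but the observed coordinate alone looks non-Markovian because the hidden coordinate has been traced out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Full state (x, v): x is a position on a ring of 5 sites, v is a hidden
# "velocity" that only flips occasionally. The pair (x, v) is Markovian by
# construction; the observed coordinate x alone is not.
p_keep = 0.95
x, v = 0, 1
xs = []
for _ in range(200_000):
    if rng.random() > p_keep:              # occasionally flip the hidden velocity
        v = -v
    x = (x + v) % 5
    xs.append(x)
xs = np.array(xs)

# Does P(x_{t+1} | x_t) depend on x_{t-1}?  If x alone were Markovian, the
# extra conditioning on the past value would not change anything.
def cond_prob(nxt, cur, prev=None):
    mask = xs[1:-1] == cur
    if prev is not None:
        mask = mask & (xs[:-2] == prev)
    return np.mean(xs[2:][mask] == nxt)

print("P(x_{t+1}=2 | x_t=1)            =", round(cond_prob(2, 1), 3))     # ~0.5
print("P(x_{t+1}=2 | x_t=1, x_{t-1}=0) =", round(cond_prob(2, 1, 0), 3))  # ~0.95
print("P(x_{t+1}=2 | x_t=1, x_{t-1}=2) =", round(cond_prob(2, 1, 2), 3))  # ~0.05
```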
 
  • Like
Likes JC_Silver
  • #460
Fra said:
It depends on what you think is simple, but I think finance agents are simple and graspable.

Let's say we consider the dynamics of the market value of a stock. A simple model is that the market value is somehow determined by the collective expectations of the agents. Each agent can have a different level of sophistication in its predictions, but an agent with memory will have some memory depth and use part of the history in its evaluation of the expectation. This means that the probability of a given value tomorrow does not depend only on what the value is today, but also on what it was in the past. Realistically, though, an agent does not remember or know ALL of history; it considers or retains only part of it. This can be called "memory depth", but one can also imagine agents whose memory depth relates to their capacity, so that every agent has a lossy retention of the past, which influences its expectations of the future.

So the market memory is this collective effect where the agents' memory of the past affects the valuations. This makes the transition from one value to another emergent and non-Markovian.
This still sounds Markovian to me if each agent has a finite memory. Markov processes aren't necessarily just one-step conditionally independent: a process whose next value depends on a finite window of the past can be recast as a Markov process on the enlarged state that contains that window.
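A small sketch of that point (again just a made-up toy example): a process whose next value depends on the last two values is not Markovian on its original state space, but it becomes an ordinary Markov chain once the state is enlarged to the pair of the last two values.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# An order-2 ("memory depth 2") binary process: the next value depends on the
# last TWO values. On the original state space {0,1} it is not Markovian, but
# on the enlarged state s_t = (x_{t-1}, x_t) it is an ordinary Markov chain.
def p_next_is_one(prev, cur):
    return {(0, 0): 0.9, (0, 1): 0.2, (1, 0): 0.5, (1, 1): 0.1}[(prev, cur)]

x = [0, 0]
for _ in range(100_000):
    x.append(int(rng.random() < p_next_is_one(x[-2], x[-1])))
x = np.array(x)

# On the enlarged (pair) state, one extra step of history adds nothing:
# the estimate depends only on (prev1, cur), not on prev2.
for prev2, prev1, cur in product((0, 1), repeat=3):
    mask = (x[:-3] == prev2) & (x[1:-2] == prev1) & (x[2:-1] == cur)
    est = np.mean(x[3:][mask] == 1)
    print(f"P(next=1 | {prev2},{prev1},{cur}) = {est:.3f}"
          f"   (model: {p_next_is_one(prev1, cur):.1f})")
```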
Fra said:
Non-divisibility arises because the emergent dynamics depends critically on the collective behaviour of the agents, including feedback loops. Arbitrarily dividing it into individual possible strategies and then just averaging them will miss the feedback between agents, and between the agents and the collective.
This seems like a more important point than the non-Markovian claim you made above.
Fra said:
This naturally has a cooperative element, as any part of the system depends on its immediate surroundings. So even a "selfish agent" can't ignore that it depends on its environment. This is where emergence and self-organisation come in.

The above is the simple idea. But the critical part, to take this to the next level, is HOW the agents revise their expectations based on the history. That is deeper water, going beyond the basic question of getting a handle on what "indivisibility" and non-Markovian can mean, and if we mix that discussion with the meaning of non-divisible and non-Markovian I think it will be a mess. The agent's memory can be modelled simply as an explicit, actual memory, or it can be implicit, as when the agent does reinforcement learning; the agent itself may then evolve and change its behaviour (learn), which is an "implicit memory". But it still makes the process non-Markovian.

A random reference... but finance papers aren't asking the same questions as we do, so the analogies are never perfect...
Non-Markovian Dynamics for Automated Trading
I have to read this paper and reply back. So far, I haven't found much of this discussion that illuminating. I found Barandes' own video much more grounded and interesting. One comment in that video that I found stimulating, from a colleague of Barandes, was that indivisibility does fundamentally imply non-locality. Barandes said that may be true, but that we just don't have an intuition about indivisibility.
 
  • Like
Likes pines-demon and JC_Silver
  • #461
iste said:
Which problem? That Barandes' indivisibility isn't easily visualizable?
Yeah, I mean how are other stochastic interpretations describing the entanglement experiments?
iste said:
The major formulation has instantaneous non-local behavior but it is not clear what this means. Some recent physicists have interpreted it as just epistemic in the sense that the instantaneous non-local behavior may not represent an actual physical event but just the updating of information. But this isn't rigorously justified.
When I insist on nonlocal forces or non-local interactions for Barandes, I really mean the idea of FTL or action-at-a-distance. Clearly the weak "nonlocality" has to be a feature of any QM interpretation.
 
  • Like
Likes DrChinese
  • #462
Fra said:
Let's say we consider the dynamics of the market value of a stock. A simple model is that the market value is somehow determined by the collective expectations of the agents. Each agent can have a different level of sophistication in its predictions, but an agent with memory will have some memory depth and use part of the history in its evaluation of the expectation. This means that the probability of a given value tomorrow does not depend only on what the value is today, but also on what it was in the past. Realistically, though, an agent does not remember or know ALL of history; it considers or retains only part of it. This can be called "memory depth", but one can also imagine agents whose memory depth relates to their capacity, so that every agent has a lossy retention of the past, which influences its expectations of the future.

So the market memory is this collective effect where the agents' memory of the past affects the valuations. This makes the transition from one value to another emergent and non-Markovian.

Non-divisibility arises because the emergent dynamics depends critically on the collective behaviour of the agents, including feedback loops. Arbitrarily dividing it into individual possible strategies and then just averaging them will miss the feedback between agents, and between the agents and the collective.

This naturally has a cooperative element, as any part of the system depends on its immediate surroundings. So even a "selfish agent" can't ignore that it depends on its environment. This is where emergence and self-organisation come in.

The above is the simple idea. But the critical part, to take this to the next level, is HOW the agents revise their expectations based on the history. That is deeper water, going beyond the basic question of getting a handle on what "indivisibility" and non-Markovian can mean, and if we mix that discussion with the meaning of non-divisible and non-Markovian I think it will be a mess. The agent's memory can be modelled simply as an explicit, actual memory, or it can be implicit, as when the agent does reinforcement learning; the agent itself may then evolve and change its behaviour (learn), which is an "implicit memory". But it still makes the process non-Markovian.

A random reference... but finance papers aren't asking the same questions as we do, so the analogies are never perfect...
Non-Markovian Dynamics for Automated Trading

Edit: Forgot to mention the obvious as well: the "memory" of the agent is in a sense a set of "subjective hidden variables" that does not satisfy Bell's assumptions. And one can IMO very well consider the agents' actions to be partly stochastic. But the market dynamics is an interaction of these processes. That is the point.

I think this is "easy" in that it requires no "weirdness". The only challenge is to "translate" agent processing, agent encoding, and agent phenomenology into "physics". But this is just a "framework"; it would be a mistake to think that any of this implies that particles "think" like humans or process information like humans. It can be "implicit", and even related to evolutionary learning, and thus to "memory effects".
Yeah, that's why I insist on simple. I want some toy model. Clearly many socio-economic problems, if modeled naively, will predict nonlocal correlations. In trying to understand the "memory effect" I would like to understand the most minimalist version of it. For example, is it because we cannot trace every e-mail between stock analysts (instant messaging)?

Edit: for clarity, I can conceive of many ways to reproduce entanglement using classical analogies. For example, if you give FTL walkie-talkies to the electrons you can reproduce the non-local correlations. I can also explain the experiments with some conspiracy between sources and devices. That's what is happening in the stock market. What I am missing so far is why we need stochasticity to explain this non-local aspect: what does stochasticity add here?
 
Last edited:
  • Like
Likes Lord Jestocost
  • #463
pines-demon said:
Yeah, that's why I insist on simple. I want some toy model. Clearly many socio-economic problems, if modeled naively, will predict nonlocal correlations. In trying to understand the "memory effect" I would like to understand the most minimalist version of it. For example, is it because we cannot trace every e-mail between stock analysts (instant messaging)?

Edit: for clarity, I can conceive of many ways to reproduce entanglement using classical analogies. For example, if you give FTL walkie-talkies to the electrons you can reproduce the non-local correlations. I can also explain the experiments with some conspiracy between sources and devices. That's what is happening in the stock market. What I am missing so far is why we need stochasticity to explain this non-local aspect: what does stochasticity add here?
Stochasticity is already in QM, so I assume you mean the indivisible non-Markovian stochasticity.
If Barandes is right, INMS gives us the Born rule out of pure math. That would be significant, considering that MWI argues it derives the Born rule from branch counting while textbook QM just imposes it.
On non-locality, I don't think we NEED INMS to explain non-locality, but rather that we cannot get away from it even with INMS. Non-locality is very well tested, after all.
However, how that indivisibility translates into physical reality is unclear.
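For what it's worth, here is a minimal numerical sketch of that Born-rule claim as I read Barandes' dictionary (the dimension, Hamiltonian and time below are arbitrary choices for illustration, not anything from the papers): the transition matrix of the stochastic process is built from a unitary as Γ_ij(t) = |U_ij(t)|², so for a definite initial configuration j the predicted frequencies are exactly the Born probabilities of the state U(t)|j⟩.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Barandes-style dictionary (as I read the papers): build the transition matrix
# of the stochastic process from a unitary, Gamma_ij(t) = |U_ij(t)|^2.
dim = 4
H = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (H + H.conj().T) / 2                  # arbitrary Hermitian "Hamiltonian"
U = expm(-1j * 0.7 * H)                   # unitary evolution to some time t

Gamma = np.abs(U) ** 2                    # unistochastic transition matrix
print(np.allclose(Gamma.sum(axis=0), 1))  # True: each column is a probability dist.

# Start in a definite configuration j and sample the stochastic jump many times:
j, n_runs = 2, 200_000
samples = rng.choice(dim, size=n_runs, p=Gamma[:, j])
freqs = np.bincount(samples, minlength=dim) / n_runs

born = np.abs(U[:, j]) ** 2               # Born probabilities of the state U(t)|j>
print(np.round(freqs, 3))
print(np.round(born, 3))                  # agrees up to sampling noise
```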
 
  • #464
JC_Silver said:
Stochasticity is already in QM, so I assume you mean the indivisible non-Markovian stochasticity.
No. I mean stochasticity as in the so-labelled stochastic interpretations of quantum mechanics, i.e. interpretations that imply that QM can be understood as a stochastic process. Specifically, I am interested in how these interpretations explain entanglement (including INMS).
 
  • #465
pines-demon said:
Yeah, I mean how are other stochastic interpretations describing the entanglement experiments?

When I insist on nonlocal forces or non-local interactions for Barandes, I really mean the idea of FTL or action-at-a-distance. Clearly the weak "nonlocality" has to be a feature of any QM interpretation.

I will try to make another thread now describing a possible model for photon correlations.
 
  • Like
Likes pines-demon and JC_Silver
  • #466
iste said:
I will try to make another thread now describing a possible model for photon correlations.
Another thread about Barandes' INMS?

After thinking about Barandes (and trying to understand a bit of his background and earlier work), and discussing Einstein's letters to Born with a friend, I came to the conclusion that he is most of all unhappy with the existing interpretations, and for similar reasons as Einstein. You might object that Barandes is proposing a new formulation, while Einstein did not. OK, then let us look at DrChinese for a moment: just like for Einstein, the results of those experiments he keeps citing indicate to him that most existing interpretations cannot be right. Peter Donis believes that DrChinese has some interpretation relative to which his statements can be understood. But whether or not this is true is not important, because this is not the part which is important to DrChinese. And in a similar way, the interpretation currently developed by Barandes is not the important part for him. He is much closer to Einstein, in thinking about many different aspects of QM, and in his belief that there must be a simple solution.
 
  • Like
Likes physika and DrChinese
  • #467
gentzen said:
Another thread about Barandes' INMS?

After thinking about Barandes (and trying to understand a bit of his background and earlier work), and discussing Einstein's letters to Born with a friend, I came to the conclusion that he is most of all unhappy with the existing interpretations, and for similar reasons as Einstein. You might object that Barandes is proposing a new formulation, while Einstein did not. OK, then let us look at DrChinese for a moment: just like for Einstein, the results of those experiments he keeps citing indicate to him that most existing interpretations cannot be right. Peter Donis believes that DrChinese has some interpretation relative to which his statements can be understood. But whether or not this is true is not important, because this is not the part which is important to DrChinese. And in a similar way, the interpretation currently developed by Barandes is not the important part for him. He is much closer to Einstein, in thinking about many different aspects of QM, and in his belief that there must be a simple solution.
If they are making their own model, even if using Barandes' work as inspiration, I agree it should go in a different thread.

(Happy holidays)
 
  • #468
gentzen said:
Another thread about Barandes' INMS?
Nope, it is stochastic mechanics, albeit somewhat related to Barandes' formulation, at least in the sense of saying that quantum systems are actually stochastic systems.
 
  • Like
Likes JC_Silver
  • #469
As for the simplicity, I honestly think it is really difficult to see things from a new perspective when current theories are cast in a certain paradigm. I think it's some conceptual grip or intuition on these things you seek, right? Then I would say that this might require stepping outside the current "system dynamics" or "Newtonian schema" paradigm. That is at least how I see it. I have been thinking in terms of a different paradigm for at least 20 years, but it is clear enough that "most physicists" still think within the existing paradigm; which is of course necessary, as that is where our current theories work, but when it comes to making progress... I think we need to find new perspectives. And Barandes is sniffing in this direction.
pines-demon said:
In trying to understand the "memory effect" I would like to understand the most minimalist version of it. For example is it because we cannot trace every e-mail between stock analysts (instant messaging)?

I'll try to give just my view on the things Barandes notes, but from a bigger perspective. This IMO makes it simple, but also more abstract.

"In trying to understand the "memory effect" I would like to understand the most minimalist version of it. For example is it because we cannot trace every e-mail between stock analysts (instant messaging)? "

In a very abstract sense, and from the perspective of inferences made by agents, the memory effect is conceptually related to a kind of inertia of opinion. In a way, NEW information always has to be valued and merged with the prior information of the agent. This leads to inertia in revising opinions, and this in itself is the memory effect.

That every agent in the game should be able to infer with perfect confidence ALL information about ALL other players is of course not possible. If you think otherwise, then you are missing the whole point of what constitutes a "game of expectations". The game IS that you have to interact without knowing! That is the game. And very interesting things happen in these games! And seen from a distance (say by a weakly interacting remote agent) you can study the emergent system dynamics of this system composed of interacting agents. This is the conceptual paradigm I have.

The memory depth of an agent would have to relate to the agent's choice of modelling or inference system, but also somehow be bounded by its total information capacity (probably constrained by its mass at some point). So how would a smaller and smaller agent be able to survive? It needs to "cooperate" with the environment, and thus evolve and adapt. This is evolution, and it also constitutes a kind of collected "implicit" memory that is defined in relation to the environment. So the agents and their environment would have a close relation, in that they both influence each other's evolution.

When you have a collection or population of such agents (whatever they are, let's not speculate now - it's just an abstraction at this point), this will spontaneously lead to "herding behaviour". This has IMO nothing to do with conspiracy; it is about "spontaneous cooperation", no fine tuning required.
pines-demon said:
Edit: for clarity, I can conceive of many ways to reproduce entanglement using classical analogies. For example, if you give FTL walkie-talkies to the electrons you can reproduce the non-local correlations. I can also explain the experiments with some conspiracy between sources and devices. That's what is happening in the stock market.
I don't see what you are talking about; in the things I envision and try to describe above, there is absolutely NO "FTL stuff". And there is no conspiracy in the stock market, is there? The only way you can conclude FTL stuff is IMO if you insist on viewing things from the wrong perspective, or if you reinvent those words similarly to how Bell reinvented other words.

Another thread posed the question whether Bell's notion of locality is even helpful in understanding QM. My opinion is that it is definitely not.
pines-demon said:
What I am missing so far is why we need stochasticity to explain this non-local aspect: what does stochasticity add here?
IMO, the function of "stochastics" is that it is a process that cannot be further resolved from the perspective of an agent. Randomness is, in my view of things, always relative to the agent's/observer's information processing capacity and thus "memory depth".

This is how I view it: the actions of agents are "stochastic", but the probability space where this stochasticity takes place is not a global one, and it is also constantly evolving as the agent "learns" and cooperates with the environment. This is why, early in the thread, I asked to see Barandes explain the PROCESS and context from which these transition probabilities emerge. This is IMO what may be "behind" all this; it is still missing, and that is why it is hard to understand. But what I tried to convey in simple conceptual terms is, at least for ME, as simple as it gets. It is admittedly a confused position if you insist on an understanding in terms of an "effective system dynamics" model. And then YES, you are probably right that one could interpret this as a "conspiracy of fine-tuned model parameters"... but for ME, this whole thing is the supposed SOLUTION to the fine-tuning problem... which is most readily known from the string theory landscape, for example.

None of this is clear from Barandes' papers, but this is my view of it, and it's from this perspective that I see things.

To make this a real theory we need many things that do not exist:
- an explicit mathematical framework for how to compute and simulate the evolution
- explicit mathematical starting assumptions about the agents' structure
- explicit mathematical ideas of how the agents' structure and dimensionality can transform/emerge
- an explicit mathematical model for the relational structure between agents (which should lead to spacetime as we know it)

So Barandes' papers are scratching the surface by suggesting a way to stop thinking in terms of Hilbert spaces and "wave interference", and instead think of "interfering" stochastic systems!

Edit: perhaps I could describe it like this: I do understand that the "emergent system dynamics" at some steady state will LOOK like a "conspiracy of fine tuning" if you use the current paradigm, and thus not like a plausible explanation. But this is because we do not understand the importance of the emergence and self-organisation, and here the "memory effect" is the key mechanism.

/Fredrik
 
  • #470
Fra said:
In a very abstract sense, and from the perspective of inferences made by agents, the memory effect is conceptually related to a kind of inertia of opinion. In a way, NEW information always has to be valued and merged with the prior information of the agent. This leads to inertia in revising opinions, and this in itself is the memory effect.

That every agent in the game should be able to infer with perfect confidence ALL information about ALL other players is of course not possible. If you think otherwise, then you are missing the whole point of what constitutes a "game of expectations". The game IS that you have to interact without knowing! That is the game. And very interesting things happen in these games! And seen from a distance (say by a weakly interacting remote agent) you can study the emergent system dynamics of this system composed of interacting agents. This is the conceptual paradigm I have.

The memory depth of an agent would have to relate to the agent's choice of modelling or inference system, but also somehow be bounded by its total information capacity (probably constrained by its mass at some point). So how would a smaller and smaller agent be able to survive? It needs to "cooperate" with the environment, and thus evolve and adapt. This is evolution, and it also constitutes a kind of collected "implicit" memory that is defined in relation to the environment. So the agents and their environment would have a close relation, in that they both influence each other's evolution.

When you have a collection or population of such agents (whatever they are, let's not speculate now - it's just an abstraction at this point), this will spontaneously lead to "herding behaviour". This has IMO nothing to do with conspiracy; it is about "spontaneous cooperation", no fine tuning required.
As far as I know we are talking about "realistic-like" interpretations, and not about how observers should interpret the statistics as degrees of belief or something like that (like QBism), right?
If we stick to realistic-like interpretations, what I am asking is much simpler. Explain some kind of Mermin device using some classical analogy. Bohmians explain it by removing the condition that particles cannot talk to each other. We should be able to do the same with stochasticity or the "memory effect" if these are valuable interpretations.
Fra said:
I don't see what you are talking about; in the things I envision and try to describe above, there is absolutely NO "FTL stuff". And there is no conspiracy in the stock market, is there? The only way you can conclude FTL stuff is IMO if you insist on viewing things from the wrong perspective, or if you reinvent those words similarly to how Bell reinvented other words.
I do not say that in the market people use FTL devices. What I am saying is that in the market people have many ways to communicate, including instant messaging, which is very quick compared to some of the dynamics. Also, there is conspiracy in the market, which is not as clearly possible for elementary particles.
Fra said:
Another thread posed the question whether Bell's notion of locality is even helpful in understanding QM. My opinion is that it is definitely not.
I agree that here we disagree...
Fra said:
IMO, the function of "stochastics" is that it is a process that cannot be further resolved from the perspective of an agent. Randomness is, in my view of things, always relative to the agent's/observer's information processing capacity and thus "memory depth".
I am under the impression that you want to reinterpret everything in terms of a theory of knowledge held by agents; in that case we are no longer arguing on the same grounds. That's why I asked my first question in this comment. When you say agents, do you mean particles or the person doing the experiment?
 
  • Like
Likes javisot20
  • #471
pines-demon said:
As far as I know we are talking about "realistic-like" interpretations, and not about how observers should interpret the statistics as degrees of belief or something like that (like QBism), right?
If you mean the kind of "realism" that Bell speaks about, then it's not what I am talking about.

But yes, I speak of a less naive version of realism, meaning that we should be able to explain what happens at the micro level - at least to the extent possible, until the resolving power runs out and there is a residual "randomness".

So by realism I mean that each view or interpretation has its own "ontological elements" that are considered "real", in terms of which all else is explained via some processes.

But I think(??) that, with some exceptions, most agree that the "naive realism" that is implicit in Bell's theorem cannot work. I think we agree here, but I'm not sure. For sure, the "Hilbert abstractions" and "wave interference of complex waves" are not "realistic" unless we can anchor these things in some microstructure. But that may well involve QBist-like stuff... but see below.
pines-demon said:
If we stick to realistic-like interpretations, what I am asking is much simpler. Explain some kind of Mermin device using some classical analogy. Bohmians explain it by removing the condition that particles cannot talk to each other. We should be able to do the same with stochasticity or the "memory effect" if these are valuable interpretations.
In the original description of the Mermin device, the "fallacy" is already in place in the description, just like it is already in place in Bell's ansatz, which is exactly what Barandes points out, though he gave it some new fancy names such as indivisibility. (I myself called this the equipartition fallacy before, but it's the same thing.)

The problem is that dividing the process into hypothetical explicit relations to the environment (such as polarization angles) is a conceptual fallacy under the assumption of isolation. At least it is within the framework of inference that I see this from. If the system is truly isolated, there is no way we could make both a state tomography and a process tomography of it - if you disagree, please explain how. So the construction is IMO not "inferrable" - thus invalid. This is how I see it; Barandes just says it does not "logically follow from marginalization rules". I think he thinks in different terms than I do, but the conclusion is the same. It is a fallacy.

pines-demon said:
I do not say that in the market people use FTL devices. What I am saying is that in the market people have many ways to communicate, including instant messaging, which is very quick compared to some of the dynamics. Also, there is conspiracy in the market, which is not as clearly possible for elementary particles.
Not sure I understand. Can you give a simple conceptual example of what kind of conspiracy in the market you are thinking about? (Note: I am not saying there are no human conspiracies, I am saying that I do not think they are necessary to explain the memory effect, unless you label the same thing as conspiracy.)
pines-demon said:
I am under the impression that you want to reinterpret everything in terms of a theory of knowledge held by agents; in that case we are no longer arguing on the same grounds. That's why I asked my first question in this comment. When you say agents, do you mean particles or the person doing the experiment?
I mean a coherent subsystem of the universe that interacts with/observes the rest of the universe. So sure, a particle is an abstract agent, although the word "particle" leaves an unnecessary classical tang to it.

It does not mean they know math, or have built-in semiconductor computers, or have brains, but they have a microstructure and a microstate, whose state and evolution in effect are the precursors of learning and implicit memory, and via cooperation they may increase their memory. This is maybe best seen abstractly in terms of algorithmic learning, reinforced and evolutionary.

But the realism here is this: it is not human information processing, but it is limited by physics. The simple information processing an agent such as a particle can do is likely highly constrained. It seems unreasonable to think that the information available in the macroscopic environment of, say, the Copenhagen interpretation can be encoded and stored explicitly in an electron, right? Thus the memory depth of the agent limits and influences how it acts, which forces it to act stochastically - not because of "ignorance" but because of limited memory depth.

I think this illustrates the perspective that, for me at least, helps in understanding some of the concepts. I can't explain it more without getting into explicit speculations, and that was never the purpose. The purpose was to try to convey one possible perspective where indivisibility and the memory effect are easy to understand (no quantum magic, FTL or weird stuff needed; all we need is evolutionary reinforced learning in agent-based models, so very real IMO).

/Fredrik
 
  • #472
pines-demon said:
I am under the impression that you want to reinterpret everything in terms of a theory of knowledge held by agents; in that case we are no longer arguing on the same grounds. That's why I asked my first question in this comment. When you say agents, do you mean particles or the person doing the experiment?
I now see another thing about what you mean. You seem to think the "knowledge" is not physical.

The trick is then this: the "knowledge" of a particle (as an abstract agent) would of course be physically encoded, both explicitly in its microstate and implicitly, as its microstructure would in some sense have a relation to the environment. But, similar to a neural network, there are inputs and outputs - these define the relation to the environment, probably spacetime and the inter-particle forces - and there are internal hidden layers that are not directly accessible to other agents, only indirectly via learning from interactions.

I hope this clarifies that agents or knowledge can be thought of as "real", and that this also unifies observations and physical interactions. The difference between a measurement and an interaction is just one of perspective. This also relates to another unsolved issue, the measurement problem, but there is still no agreed-upon solution to that either.

We have decoherence theory, but in my terms that solution corresponds to suggesting that there is always a "big enough" agent for which decoherence explains things. And while I think this is correct for the most part, it still runs into problems when you consider cosmology and unification.

For me all these open problems are closely related. I think many of them will be solved together once we get a better understanding of quantum mechanics. There are many things that I think motivate trying out some unorthodox perspectives. Interference of complex waves and Hilbert spaces does not seem to offer a good understanding of "reality". It offers, however, a good descriptive analysis of small systems from the perspective of macro-level agents - which is what QM was made for. But an understanding of the framework in the bigger picture is missing.

/Fredrik
 
  • Like
Likes gentzen
  • #473
Fra said:
If you mean the kind of "realism" that Bell speaks about, then it's not what I am talking about.
Bell does not speak of "realism", but that's another story. "Realism", as in local realism, was not used by Bell; he preferred local causality. Realism as local realism these days seems to be something related to the pre-existence of values before measurement. I was using realism in a broader sense: that there are actual mechanisms in nature that we can describe, and not that probability is somehow subjective to the person doing the math/observation, like in QBism.
Fra said:
But yes, I speak of a less naive version of realism, meaning that we should be able to explain what happens at the micro level - at least to the extent possible, until the resolving power runs out and there is a residual "randomness".

So by realism I mean that each view or interpretation has its own "ontological elements" that are considered "real", in terms of which all else is explained via some processes.
Something like that...

Fra said:
In the original description of the Mermin device, the "fallacy" is already in place in the description, just like it is already in place in Bell's ansatz, which is exactly what Barandes points out, though he gave it some new fancy names such as indivisibility. (I myself called this the equipartition fallacy before, but it's the same thing.)
I do not see the fallacy... You want to make a device that produces certain results; you have classical ways to do it that are not allowed by the rules, and the true solution that involves quantum mechanics. However, you can always argue that quantum mechanics is doing one of these not-allowed things behind the scenes. The question is which.
Fra said:
The problem is that dividing the process into hypothetical explicit relations to the environment (such as polarization angles) is a conceptual fallacy under the assumption of isolation. At least it is within the framework of inference that I see this from. If the system is truly isolated, there is no way we could make both a state tomography and a process tomography of it - if you disagree, please explain how. So the construction is IMO not "inferrable" - thus invalid. This is how I see it; Barandes just says it does not "logically follow from marginalization rules". I think he thinks in different terms than I do, but the conclusion is the same. It is a fallacy.
Sorry, I do not follow. What do you mean when you say that we cannot do state and process tomography?
Fra said:
Not sure I understand. Can you give a simple conceptual example of what kind of conspiracy in the market you are thinking about? (Note: I am not saying there are no human conspiracies, I am saying that I do not think they are necessary to explain the memory effect, unless you label the same thing as conspiracy.)
Conspiracy in the sense that two variables that you thought were independent were in fact managed by the same corporation, just not very transparently to the public. Detectors that had some extraneous correlations from before the experiment began could explain entanglement.
Fra said:
I mean a coherent subsystem of the universe that interacts with/observes the rest of the universe. So sure, a particle is an abstract agent, although the word "particle" leaves an unnecessary classical tang to it.

It does not mean they know math, or have built-in semiconductor computers, or have brains, but they have a microstructure and a microstate, whose state and evolution in effect are the precursors of learning and implicit memory, and via cooperation they may increase their memory. This is maybe best seen abstractly in terms of algorithmic learning, reinforced and evolutionary.

But the realism here is this: it is not human information processing, but it is limited by physics. The simple information processing an agent such as a particle can do is likely highly constrained. It seems unreasonable to think that the information available in the macroscopic environment of, say, the Copenhagen interpretation can be encoded and stored explicitly in an electron, right? Thus the memory depth of the agent limits and influences how it acts, which forces it to act stochastically - not because of "ignorance" but because of limited memory depth.

I think this illustrates the perspective that, for me at least, helps in understanding some of the concepts. I can't explain it more without getting into explicit speculations, and that was never the purpose. The purpose was to try to convey one possible perspective where indivisibility and the memory effect are easy to understand (no quantum magic, FTL or weird stuff needed; all we need is evolutionary reinforced learning in agent-based models, so very real IMO).

/Fredrik
My problem with encoding stuff into the environment is that anything in a given volume of space should still only be influenced by whatever is in its light cone. If we accept this, and we accept that we have independent measurement devices that do not have hidden correlations, then quantum mechanics predicts a type of correlation that we cannot explain using realistic theories (hopefully realistic in your sense). I still think a Mermin-device solution, even a naive one, showing how the memory effect could work against that conclusion would be enlightening.
 
  • Like
Likes jbergman
  • #474
pines-demon said:
Something like that...
Then we seem to roughly agree on that part. Good.

I'll think about it and see if I can explain it more clearly in some other way while still keeping it general.

But the conceptual framework I am trying to convey is supposed to have these properties:

- it EXPLAINS the correlations even with arbitrary independent detector settings (which requires ISOLATION)
- it escapes Bell's theorem, because the premise of the theorem does not apply (the divisibility assumption)
- it does NOT bring back determinism; the hidden variable can never be used by any agent to predict the future outcomes, the residual randomness is still there
- all causal influences are local, and by local I mean here that the stochastic actions of any agent are influenced (guided) ONLY by what is in its own limited memory. So the memory effect is essential. Without explicit or implicit memory, there would be no self-organisation or emergence.

But let me think whether I can find some other "simple" made-up example to explain the concepts of the framework without speculating about physical interactions. Perhaps there are too many things to discuss at once, but unfortunately they are connected, so isolating one thing from the other problems always misses a point.
pines-demon said:
I do not see the fallacy... You want to make a device that produces certain results; you have classical ways to do it that are not allowed by the rules, and the true solution that involves quantum mechanics. However, you can always argue that quantum mechanics is doing one of these not-allowed things behind the scenes. The question is which.

Sorry, I do not follow. What do you mean when you say that we cannot do state and process tomography?
These terms are also used in normal QM.
https://en.wikipedia.org/wiki/Quantum_tomography

But in the general sense it also applies if an agent has an internal model of a subsystem, with initial states and a dynamical law. I think it is not possible to infer that state space and that dynamical law without being able to first learn both the initial state with confidence and the dynamical law (Hamiltonian) with confidence. To do this you typically need repeated interactions with "similarly prepared" systems, etc. For example, it is impossible to find a regularity in something that happens once and claim it is more likely than some null hypothesis.

So to make sense of the divisibility ansatz in Bell's theorem, you need a method to determine the hidden variable, so that you can describe what the causal relation is for each such variable, and THEN sum over them to get the average. But the premise is that the system is isolated. And it is not a logical conclusion at all to assume that the dynamical law is unchanged once the isolation is broken. So you will never be able to infer the "mechanism" by which a given hidden variable gives a certain result, because it's isolated. And if it's not isolated, we know it immediately decoheres.
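For reference, the ansatz being discussed here is presumably the standard local-causality factorization in Bell's theorem, in which the observed correlations are written as an average over a hidden variable ##\lambda##:

$$P(a,b\mid x,y)=\int d\lambda\,\rho(\lambda)\,P(a\mid x,\lambda)\,P(b\mid y,\lambda),$$

with ##x,y## the detector settings and ##a,b## the outcomes. The point above concerns whether the per-##\lambda## factors ##P(a\mid x,\lambda)## can ever be inferred for a system that is assumed to be isolated.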

Even without QM, just assuming a framework where Hamiltonians are also required to be "measured" indirectly in terms of historical inferences, leading to an implicit memory in the observing agent, the ansatz is not sound and not general enough.

These ideas on tomography relate to other existing ideas about "unifying" the state space and the dynamical law. I mean, in system dynamics we tend to think of the initial information as the STATE, and this evolves according to some dynamical law. But I think that from the perspective of learning and inference, the information implicit in the dynamical law is essential, and it is just as physical. It's just not encoded explicitly but implicitly, via evolved learning and tuning. This is related to the memory effect as well.

/Fredrik
 
  • #475
PAllen said:
TL;DR Summary: I attended a lecture that discussed the approach in the 3 papers listed below. It seems to be a genuinely new interpretation with some interesting features and claims.

These papers claim to present a realistic stochastic interpretation of quantum mechanics that obeys a stochastic form of local causality. (A lecture I recently attended mentioned these papers.) They also claim that the Born rule follows as a natural consequence rather than being an assumption. This appears to me to be a genuinely new interpretation. I have not delved into the papers in detail, but figured some people here may be interested.

https://arxiv.org/abs/2302.10778
https://arxiv.org/abs/2309.03085
https://arxiv.org/abs/2402.16935
I tried to read most of the first paper. I will take his word that the new math duplicates the old math, which we still then need to interpret. In that part I observed something of a contradiction or sleight of hand or unclarity, however you'd like to describe it. By saying there is fundamental stochastic behavior, this says that the probabilities we are dealing with are objective probabilities, and that no information exists that could resolve the probabilities. However, it also states that in the two-slit experiment the particle actually goes through only one slit. And also says that the measurement process is an example of conditional probability. This is a process by which we update our probabilities with information we did not previously have, in other words we seem to be explicitly dealing with subjective probabilities here. So, again, I assume the math is an equivalent formalism, but as an interpretation, I think it fails to keep clear what type of probabilities objective/subjective are involved. How can it be both stochastic and have definite hidden values at the same time?
Another way to put it - where exactly is the stochastic event in the two-slit experiment? If it is well before the interaction with the slits, then why is the wave collapse at a different location, at the observation?
 
Last edited:
  • Like
Likes DrChinese and pines-demon
  • #476
GentDave said:
By saying there is fundamental stochastic behavior, this says that the probabilities we are dealing with are objective probabilities, and that no information exists that could resolve the probabilities.
Well, he intends to deal with objective probabilities. However, the second part is your own attempt to make "objective probability" more rigorous. But if this second part should lead to contradictions, then this is your problem, not his.

GentDave said:
However, it also states that in the two-slit experiment the particle actually goes through only one slit.
But it is of no consequence at all in his formulation which slit it goes through. If it had a discontinuous trajectory and spent some time near both slits, nothing would change. In fact, since he assumes discrete states (for simplicity), the trajectories are actually forced to be discontinuous.
GentDave said:
And also says that the measurement process is an example of conditional probability. This is a process by which we update our probabilities with information we did not previously have, in other words we seem to be explicitly dealing with subjective probabilities here.
Most importantly, the measurement process corresponds to a (local) splitting event. And because of that, you actually can have more or less information about what happened. So the fact that you can have subjective probabilities here is not in contradiction with the fundamental stochastic behavior.

GentDave said:
… I think it fails to keep clear what type of probabilities objective/subjective are involved. How can it be both stochastic and have definite hidden values at the same time?
Just because there are objective probabilities doesn't mean that you always know everything that can be known. And it would be strange if you didn't learn new information from a measurement. So my impression is that you are confusing yourself here.

GentDave said:
Another way to put it - where exactly is the stochastic event in the two-slit experiment? If it is well before the interaction with the slits, then why is the wave collapse at a different location, at the observation?
The stochastic event is after the interaction with the slits, when it has actual consequences. Barandes is free to postulate as many stochastic effects as he likes before that, but they have no consequences at all in his formalism.
 
  • #477
gentzen said:
Well, he intends to deal with objective probabilities. However, the second part is your own attempt to make "objective probability" more rigorous. But if this second part should lead to contradictions, then this is your problem, not his.


But it is of no consequence at all in his formulation which slit it goes through. If it had a discontinuous trajectory and spent some time near both slits, nothing would change. In fact, since he assumes discrete states (for simplicity), the trajectories are actually forced to be discontinuous.

Most importantly, the measurement process corresponds to a (local) splitting event. And because of that, you actually can have more or less information about what happened. So the fact that you can have subjective probabilities here is not in contradiction with the fundamental stochastic behavior.


Just because there are objective probabilities doesn't mean that you always know everything that can be known. And it would be strange if you didn't learn new information from a measurement. So my impression is that you are confusing yourself here.


The stochastic event is after the interaction with the slits, when it has actual consequences. Barandes is free to postulate as many stochastic effects as he likes before that, but they have no consequences at all in his formalism.
I'm using "objective probability" to mean that the information needed to resolve the uncertainty does not exist and "subjective probability" to mean that the information to resolve the uncertainty exists, but we don't have the information. He does explicitly say that it goes through a single slit, but a discontinuous path makes more sense (at least to me), so maybe that was a slight misstatement on his part or maybe I misread what he meant there. Page 11 "the particle really does go through a specific slit in each run of the experiment". And I agree that when you measure, you get more information of course. This happens regardless of what sort of probabilities were involved until then. And placing a stochastic event after the slit makes sense as well. So really its just that one line that sounds like Bohm that is confusing. Although - when he says the measurement even is an example of conditional probability. Page 18 "wave function collapse therefore reduces to a prosaic example of conditioning". That sure sounds like he is saying the information existed all along and we just discovered what was already there. But, it does not have to mean that, so that could be me reading too much into that statement.
 
  • #478
GentDave said:
He does explicitly say that it goes through a single slit, but a discontinuous path makes more sense (at least to me), so maybe that was a slight misstatement on his part or maybe I misread what he meant there.

He says somewhere you can use continuous paths. He is just simplifying for the purpose of communication.

GentDave said:
That sure sounds like he is saying the information existed all along and we just discovered what was already there. But, it does not have to mean that, so that could be me reading too much into that statement.

Yes, he is saying that.

GentDave said:
I think it fails to keep clear what type of probabilities objective/subjective are involved. How can it be both stochastic and have definite hidden values at the same time?
Another way to put it - where exactly is the stochastic event in the two-slit experiment?

I think, to make clear the distinction between this formulation and the various subjective quantum interpretations, the nature of the probabilities should probably be seen as fundamentally objective, but there is nothing stopping someone from taking a subjective perspective - objective frequencies inform beliefs and make those beliefs empirically meaningful.

Stochastic and definite hidden values? Look at the Wikipedia page on stochastic processes and you will see the distinction:

"A stochastic process is defined as a collection of random variables defined on a common probability space"

There is a random variable at every point in time. These just give you the probabilities of how the system behaves at every point in time. This corresponds to the information in the wavefunction.

You then have:

"A sample function is a single outcome of a stochastic process, so it is formed by taking a single possible value of each random variable of the stochastic process"

These are the definite values / positions / configurations of particles. You sample the random variables to get a single definite path that a particle could take, in a definite position everywhere along the way. How do you actually realize the probabilities of the random variables? You just repeat the experiment many, many times. You will get many, many definite paths, and the frequencies with which they sample possible values / configurations / positions at all the different times will approach the probabilities of the random variables at those times.

You can have some scenario where the configurations / values / positions are maximally uncertain as described by a random variable, but all this means is that if you repeat an experiment many times, the frequencies will be uniform; nonetheless, these frequencies are about definite particle positions / configurations / values / outcomes that occur when the experiment is repeated. The uncertainty principle is then just a statement about exactly this kind of thing: e.g. if a particle takes a position with probability 1, its momentum probabilities will be uniformly spread - but in both cases you are talking about frequencies for definite outcomes in principle, even if this means repeating an experiment many times with a particle always ending up in the same position.

So when I said earlier "how the system behaves at every point in time", this always means over many, many repetitions of an experiment.
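A minimal simulation of that distinction (my own toy example, not from Barandes' papers): each run produces one definite trajectory (a sample function), and the probabilities of the random variables only show up as frequencies over many repeated runs.

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy 3-state stochastic process: each run samples ONE definite trajectory
# (a "sample function"); the random variables at each time only describe the
# statistics over many repeated runs.
T = np.array([[0.8, 0.1, 0.3],     # column-stochastic transition matrix:
              [0.1, 0.8, 0.3],     # T[i, j] = P(next = i | current = j)
              [0.1, 0.1, 0.4]])
n_runs, n_steps, start = 50_000, 5, 0

paths = np.zeros((n_runs, n_steps + 1), dtype=int)
for r in range(n_runs):
    x = start
    for t in range(1, n_steps + 1):
        x = rng.choice(3, p=T[:, x])   # one definite jump; x always has a value
        paths[r, t] = x

# Empirical frequencies at the final time vs. the marginal distribution
# predicted by the random variables (matrix power acting on the start state).
p0 = np.array([1.0, 0.0, 0.0])
print(np.round(np.bincount(paths[:, -1], minlength=3) / n_runs, 3))
print(np.round(np.linalg.matrix_power(T, n_steps) @ p0, 3))
```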
 
  • #479
gentzen said:
Well, he intends to deal with objective probabilities. However, the second part is your own attempt to make "objective probability" more rigorous. But if this second part should lead to contradictions, then this is your problem, not his.


But it is of no consequence at all in his formulation which slit it goes through. If it had a discontinuous trajectory and spent some time near both slits, nothing would change. In fact, since he assumes discrete states (for simplicity), the trajectories are actually forced to be discontinuous.

Most importantly, the measurement process corresponds to a (local) splitting event. And because of that, you actually can have more or less information about what happened. So the fact that you can have subjective probabilities here is not in contradiction with the fundamental stochastic behavior.


Just because there are objective probabilities doesn't mean that you always know everything that can be known. And it would be strange if you didn't learn new information from a measurement. So my impression is that you are confusing yourself here.


The stochastic event is after the interaction with the slits, when it has actual consequences. Barandes is free to postulate as many stochastic effects as he likes before that, but they have no consequences at all in his formalism.

iste said:
He says somewhere you can use continuous paths. He is just simplifying for the purpose of communication.



Yes, he is saying that.



I think I now have what the paper is saying clear in my head. I'm not sure you and I are communicating 100% clearly, however, which is OK. I'm coming from a statistical (and physics) background with a dash of philosophy, and my definitions of objective probability and subjective probability are questions about the existence or non-existence of information. This is slightly different from asking about the nature of the wavefunction and whether it is "real". I do understand what a stochastic process is.

Here is what I would now say he is saying, in my own words: there is a stochastic process (involving objective uncertainty) that selects one path and then another, repeatedly. At any given moment one of these paths is real, but that may change in the next moment, and this change is stochastic and objectively uncertain. However, when we make an observation we do find a definite value that it actually had AT THAT MOMENT, so we are resolving our subjective uncertainty regarding the momentary value of the variable. My opinion is that I like the idea.
 
Last edited:
  • Like
Likes JC_Silver, iste and gentzen
  • #480
GentDave said:
I'm using "objective probability" to mean that the information needed to resolve the uncertainty does not exist and "subjective probability" to mean that the information to resolve the uncertainty exists, but we don't have the information.
The notion of whether "information exists" becomes more complicated if you add learning and emergence/organisation to parts of the system.

Suppose we take "information exists" to mean that there is at least one observer (at some point), someone that "knows" - it is just not me?

Then this becomes subject to evolution: initially perhaps no one knows, but with time someone can learn or find out; it may emerge as a result of some learning/inference process. Also, the only way for someone to find out whether someone else has the information is by interacting with others, which again changes things. I think at some point interacting observers can evolve an emergent effective consensus; this is then "objectivity", but it is only effective locally, it has no global rooting.

Except of course in the mathematical space of our models. But that, I think, is a redundant embedding lacking physical basis, though it is what keeps seducing theorists.

This mechanism is critical, at least for my conceptual understanding, and it is what I see behind the memory effect.

/Fredrik
 
  • Like
Likes JC_Silver
  • #481
Fra said:
I'll think and see if I can explain more clearly in some other way and still keep it general.
...
But let me think if I can find some other "simple" made-up example to explain the concepts of the framework without making speculations about physical interactions.
I have been thinking about how to explain the "mechanism" where the memory effect can explain correlation via a kind of "hidden variables" and yet escape being subject to Bell's inequality, but without speculation or detail, as that blurs the conceptual overview.

I think that making up an explicit toy example would involve lots of assumptions about how information is encoded and stored and how actions are randomly chosen, guided by probabilities; making the example realistic without engaging in speculation is something I have a hard time seeing done in a simple way.

So I tried to explain it again conceptually, but found that I still need to first list a number of, say, axiomatic assumptions - not about details, but about constraining principles on causality and interactions in an agent-based model view - that can in themselves be interpreted as "speculations", so I deleted it :rolleyes:

Therefore I will pass on trying to explain the function of the memory effect and the problem of divisibility. If I find a paper that does something similar I can get back to it, but all the papers I have found only touch on this thing from different angles; many are related to AI research and many to agent-based modelling, but none put the right things together yet.

Happy new year!

/Fredrik
 
  • #482
Fra said:
The notion of whether "information exists" becomes more complicated if you add learning and emergence/organisation to parts of the system.
Hi, thank you for your reply. I mean “information exists” in a different sense. When I'm asking about "objective probability" I'm asking "does the universe 'know' the result?" That is - even if you had all the information in existence and infinite computing time, can, in this case, the next exact position of the particle be known? If it can't, then its next position is objectively undetermined - as is assumed in a mathematical stochastic process. If it could be known but we just lack the information, then that is a subjective uncertainty. Every probability we know of outside of quantum mechanics is a subjective uncertainty, although the abstract random generator in math is always assumed to be an objective uncertainty. What DOES complicate this a bit in my view is when you start thinking about nonlocal relations and the role "future knowledge" plays.

I’m also sensing you are coming from a global entanglement point of view – like "many worlds" perhaps. I don’t share that idea though. In those models the universe is deterministic and there is never any objective uncertainty. So it is a concept that separates various interpretations.

If I understand the author he is saying the next position of the photon is objectively undetermined but its current position is always only a subjective uncertainty.
 
  • #483
GentDave said:
Hi, thank you for your reply. I mean “information exists” in a different sense. When I'm asking about "objective probability" I'm asking "does the universe 'know' the result?" That is - even if you had all the information in existence and infinite computing time, can, in this case, the next exact position of the particle be known?
For me this is not a question I ask; it is not physical for me. Your hypothetical question implies infinite computational capacity. For me "physical questions" must be constructible from what the questioner (i.e. the observer or agent) has to work with, and subject to any actual constraints in terms of memory or information capacity, which reduce the "set of possible questions" that are physical.
GentDave said:
If it can't, then its next position is objectively undetermined - as is assumed in a mathematical stochastic process. If it could be known but we just lack the information, then that is a subjective uncertainty. Every probability we know of outside of quantum mechanics is a subjective uncertainty, although the abstract random generator in math is always assumed to be an objective uncertainty. What DOES complicate this a bit in my view is when you start thinking about nonlocal relations and the role "future knowledge" plays.

I’m also sensing you are coming from a global entanglement point of view – like "many worlds" perhaps. I don’t share that idea though. In those models the universe is deterministic and there is never any objective uncertainty. So it is a concept that separates various interpretations.
Ouch, no, I am not into MWI :)

If we are talking about understanding "normal descriptive QM", I am more into the statistical or Copenhagen interpretation. But in this context of trying to understand the QM foundations more deeply, my stance is a kind of QBist-inspired interpretation where action under irreducible uncertainty is key.

/Fredrik
 
  • #484
Fra said:
If we are talking about understanding "normal descriptive QM", I am more into the statistical or Copenhagen interpretation. But in this context of trying to understand the QM foundations more deeply, my stance is a kind of QBist-inspired interpretation where action under irreducible uncertainty is key.
It makes sense that your comments come from a Copenhagen interpretation. Initially it was sort of founded on the idea that anything we can't observe does not count and that there is only subjective uncertainty. I think that was an early wrong turn. An example I'd use is a black box that we cannot open. We can test its function by applying inputs and observing outputs. But our best models may postulate moving bits inside that we can't see. What we can't measure may still be important for forming our best theories.
 
  • #485
GentDave said:
An example I'd use is a black box that we cannot open. We can test its function by applying inputs and observing outputs. But our best models may postulate moving bits inside that we can't see. What we can't measure may still be important for forming our best theories.
This is fully in line with how I prefer to think as well. In fact I probably take it even more seriously, in that from the perspective of an agent/observer the future is always a "black box". And the learning - such as inference to the best explanation - of what is not "directly visible" is a physical process.

In the simplifications of quantum mechanics, where we study small subsystems that can be prepared many times and whose dynamics are short-lived relative to the time of the "inference" (including tomographic processes and acquiring statistics), it is indeed possible to infer an internal structure and dynamical law that constitutes the effective inference to the best explanation.

In QM as we know it, this is presumed at the starting point. The "process" of inferring the state space and dynamical laws (state preparations and Hamiltonians) is not considered subject to "dynamics"; it is treated as outside the theory. And this is precisely the unacceptable idealisation that prevents us from getting deeper.

GentDave said:
If I understand the author he is saying the next position of the photon is objectively undetermined but its current position is always only a subjective uncertainty.
This makes sense yes.

I would put it so that the FUTURE is objectively undetermined, because abduction or inference to the best explanation requires TIME; so the future is fundamentally undetermined, and all we have is our "best expectation of the future".

But perhaps the PAST can be said to be "subjectively undetermined" (because even if one might say that history is a "fact", no single subsystem/observer could have inferred and encoded all of it in a non-lossy way). So the "best description of the past" we have is still subject to uncertainty, as any recording is necessarily incomplete and lossy.

This, I agree, resonates well with the memory effect, divisibility and the non-Markov features as well.

The Copenhagen or statistical interpretation is what you get as a "limiting case" for small, short-lived "quantum systems" as described from a dominant macroworld where information-processing capacity is practically unlimited. And as this, and nothing else philosophical, was what the founders of QM worked on, I still think the old Copenhagen or statistical interpretation is the best and most honest one, at least in the historical perspective.

/Fredrik
 
  • #486
Fra said:
I would put it so that the FUTURE is objectively undetermined, because abduction or inference to the best explanation requires TIME; so the future is fundamentally undetermined, and all we have is our "best expectation of the future".
I would agree the future is objectively undetermined, but many interpretations do not agree. In many interpretations the wavefunction develops deterministically and the future is fully determined by the present state of things. The one statement of yours I disagreed with was when you described one person and then another and another learning about a quantum result, and that that changes the result (if I understood correctly). That implies a global entanglement and I don't think that is the case.
 
