In summary: In her video, Sabine Hossenfelder argues that superdeterminism should be taken seriously; indeed, it is what quantum mechanics (QM) is screaming for us to understand about Nature. According to the video, superdeterminism simply means the particles must have known at the outset of their trip whether to go through the right slit, the left slit, or both slits, based on what measurement was going to be done on them. Superdeterminism is a controversial topic in the foundations community.
  • #141
My issue with SD is that I think it completely misses the point and perspective of an actual player/agent.

If you believe in "determinism", I would be the first to happily argue that it seems inconsistent to not go all the way and suggest superdeterminism. So had I been into determinism I too would have argued for SD. But then again, that does not help, as it misses the point.

If the laws or rules and initial conditions required for SD to work cannot be learned by an actual inside observer, then its predictive and explanatory value is essentially zero (even if it, in some sense, were true). It's just some mental Lego that makes no difference.

This is in contrast to, say, a real computable algorithm by which a real observer can LEARN from its interactions and make predictions about its environment and its own future.

/Fredrik
 
  • #142
Fra said:
If you believe in "determinism", I would be the first to happily argue that it seems inconsistent to not go all the way and suggest superdeterminism.
I don't see why. Superdeterminism adds to ordinary determinism a claim (in my view highly implausible) about precise fine-tuning of initial conditions, in order to ensure that all measurement results come out just right to make us believe that the correct laws of physics are quantum mechanics, when in fact they are completely different.

Fra said:
This is in contrast to, say, a real computable algorithm by which a real observer can LEARN from its interactions and make predictions about its environment and its own future.
Are you claiming that this is only possible if the actual physics of our world is not deterministic?
 
  • #143
RUTA said:
No one here has shown me how the Bell states account for the missing conserved quantities per this "open system" explanation of entanglement (via classical thinking) only when Alice and Bob make different measurements.
The Bell measurements have nothing to do with it. Alice makes a measurement, and angular momentum is conserved in her closed system, including the measurement apparatus and lab. Bob does the same. If both systems conserve angular momentum, then of course it is also conserved when you consider them together.
 
  • #144
RUTA said:
No one here has shown me how the Bell states account for the missing conserved quantities per this "open system" explanation of entanglement (via classical thinking) only when Alice and Bob make different measurements. On the other hand, I can explain exactly how the Bell states map to conventional quantum-classical thinking when viewing them as pertaining to just the particles involved. PeterDonis said it is incumbent upon me to provide my explanation, but he has no such requirement to provide the details for his "open system" explanation (which I cannot follow at all).
Here, @RUTA is completely right. What are the details of the "open system" explanation?
 
  • #145
Already in classical mechanics it's clear that the conservation laws hold for closed systems only. If you change to a description where you consider only part of the system, the conservation laws need not hold for that part alone. That doesn't change with quantum theory.
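As a trivial classical sketch of this point (my own toy numbers: a 1-D elastic collision of two unit masses, which simply exchange velocities):

[CODE=python]
# Closed two-particle system: total momentum conserved.
# Each one-particle subsystem alone: momentum NOT conserved.
p1, p2 = 1.0, -0.5                 # momenta before the collision (m1 = m2 = 1)
p1_after, p2_after = -0.5, 1.0     # after: equal masses swap velocities

assert p1 + p2 == p1_after + p2_after   # closed system: total is conserved
assert p1 != p1_after                   # open subsystem: its momentum changed
print("total before/after:", p1 + p2, p1_after + p2_after)
[/CODE]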
 
  • #146
To my mind, the problem is that measurements in quantum mechanics do not necessarily involve "local interactions" between a measured system and a measuring device. The question is: What "local interactions" would exchange classically conserved quantities?
 
  • #147
Since, according to relativistic QFT, all interactions are local, the interactions between the quantum system and a measuring device are also local, and of course interactions "exchange conserved quantities".
 
  • #148
In “Quantum measurements and new concepts for experiments with trapped ions”, Ch. Wunderlich and Ch. Balzer remark:

So far, in the discussion of measurements on quantum systems we have not explicitly considered the case of negative result measurements (for a recent review see (Whitaker 2000).) We will restrict the following discussion to quantum mechanical two-state systems for clarity. In some experimental situations (real or gedanken) the apparatus coupled to the quantum probe and quantum system, may respond (for example by a “click” or the deflection of a pointer) indicating one state of the measured system, or not respond at all indicating the other. Such measurements where the experimental result is the absence of a physical event rather than the occurrence of an event have been described, for instance, in (Renninger 1960, Dicke 1981). A negative-result measurement or observation leads to a collapse of the wave function without local physical interaction involved between measurement apparatus and observed quantum system. This will be discussed in more detail in the following paragraphs. In particular, the meaning of the concept “local physical interaction” is looked at in this context.
 
  • #149
PeterDonis said:
I don't see why. Superdeterminism adds to ordinary determinism a claim (in my view highly implausible) about precise fine-tuning of initial conditions, in order to ensure that all measurement results come out just right to make us believe that the correct laws of physics are quantum mechanics, when in fact they are completely different.
Fine-tuning is implausible, I agree. I just think that, from the perspective of a purely deductive stance, fine-tuning seems to be a consequence.

But of course I reject the whole stance.

PeterDonis said:
Are you claiming that this is only possible if the actual physics of our world is not deterministic?
No, not if you don't mind fine tuning 🙂 Then the illusion of learning can be reduced to an incomplete perspective, just like the illusion of the flow of time in the timeless universe.

/Fredrik
 
  • #150
Lord Jestocost said:
Here, @RUTA is completely right. What are the details of the "open system" explanation?
As I pointed out to @RUTA, this is backwards. The claim that angular momentum conservation is violated is an extraordinary claim, so that claim is the one that needs to have a detailed explanation that takes into account that the measured systems are open systems. Pointing out that the measured systems are open systems is just making clear what any valid supporting argument for the claim that angular momentum is not conserved would have to include.

Why the systems being open systems is relevant, as I have already pointed out multiple times, should be obvious: angular momentum (and other conserved quantities) can be exchanged during the measurement.
 
  • #151
Lord Jestocost said:
In “Quantum measurements and new concepts for experiments with trapped ions”, Ch. Wunderlich and Ch. Balzer remark:
As I have already pointed out, "negative result measurements" are irrelevant to this thread since no such measurements are involved in the experiments being discussed.
 
  • #152
Fra said:
No, not if you don't mind fine tuning 🙂
So then you are claiming that if our universe is deterministic, we can only learn things if it's also superdeterministic?
 
  • #153
PeterDonis said:
So then you are claiming that if our universe is deterministic, we can only learn things if it's also superdeterministic?
I didn't try to put it like that, but it seems to me the notion of true novel learning resonates badly with determinism, unless the idea is that the learning is an illusion. Which I suppose is at least a coherent idea, but it seems useless to me as it lacks a handle for a real observer.

My main point was that, if one really believes in eternal timeless laws that are deterministic at the fundamental level, then the notion of the experimenters' free choice must be an illusion. Otherwise, at which level of complexity would determinism stop?

I still don't advocate SD or find it sensible, but I find it easier to understand someone else's stance or position if it is self-consistent and does not make exceptions for macroscopic human observers. Mixing the two seems confused, IMO.

/Fredrik
 
  • #154
Fra said:
it seems to me the notion of true novel learning resonates badly with determinism
Of course, throwing in vague terms like "true novel learning" can make it seem that way, but that's a problem with your use of vague terms, not with determinism.

Everything we know about how actual learning works in living things is completely consistent with the underlying laws of physics being deterministic. If you want to claim that that kind of learning isn't "true novel learning", of course I can't stop you, but I don't see why I should care. The kind of learning that enabled me to learn, say, General Relativity or quantum mechanics is the kind of learning I care about, and that kind of learning is perfectly consistent with determinism.

Fra said:
if one really believes in eternal timeless laws that are deterministic at the fundamental level, then the notion of the experimenters' free choice must be an illusion
This is false. There are cogent concepts of free will that are perfectly consistent with determinism. The literature on compatibilism, which is what I have just described, is voluminous. You can, of course, just decide not to accept any of that literature as valid, but once again, I don't see why I should care. The kind of free will that enables me to choose what to post here, or what job to take, or what house to buy, or what person to marry, is the kind of free will I care about, and is the same kind of free will that enables experimenters to choose what experiments to run. And the literature on compatibilism shows that that kind of free will is perfectly consistent with determinism.
 
  • #155
PeterDonis said:
And the literature on compatibilism shows that that kind of free will is perfectly consistent with determinism.
As a philosophy student, I take a great deal of interest in this topic. Do you have any specific references on it that I could attempt to access via my university library? If my memory serves me right, I've seen a number of authors indicate that free will is incompatible with determinism. But I'll need to check.
 
  • #156
StevieTNZ said:
As a philosophy student, I take a great deal of interest in this topic. Do you have any specific references on it that I could attempt to access via my university library?
Actually most of my knowledge of the literature comes from the books on free will by Daniel Dennett, Elbow Room and Freedom Evolves, both of which have extensive bibliographies listing other books and papers. The point of view I have been describing is basically the one Dennett argues for in those books.
 
  • #157
StevieTNZ said:
If my memory serves me right, I've seen a number of authors indicate that free will is incompatible with determinism.
Yes, that's correct. Incompatibilism (Dennett sometimes calls it "libertarianism", not to be confused with the political party or philosophy of that name) vs. compatibilism is one of the long-running debates in this field.
 
  • #158
PeterDonis said:
Actually most of my knowledge of the literature comes from the books on free will by Daniel Dennett, Elbow Room and Freedom Evolves, both of which have extensive bibliographies listing other books and papers. The point of view I have been describing is basically the one Dennett argues for in those books.
Thanks, I'll search that up.
 
  • #159
Hmm, I realize it was a mistake on my part to even start talking about free will, sorry about that :H That term had nothing to do with my main point anyway; I could have left it out. But here we are... my take on free will is that it's not really a clear scientific concept, nor one that is important for the modelling. I.e., how would agent A be able to distinguish whether the actions of agent B are due to some "true free will", or whether they are simply apparent, i.e. actions determined by information only agent B has, and that is "hidden" from agent A?

This latter form of "freedom" (which I think is somewhat similar to what compatibilists have in mind) is not what I would care to label "free will", but I agree this freedom "allows" for the illusion of it. But whether it's actually free or not, I doubt that can be determined by experiment; I can't figure out how it would be done. It's also not that important, for me at least.
PeterDonis said:
Of course if you throw in vague terms like "true novel learning" can make it seem that way, but that's a problem with you using vague terms, not with determinism.
By that term I mean that, from the perspective of any real agent (i.e. a coherent part of the universe having finite mass etc.), it is not possible to encode the state space of the universe and hold information (let alone how it was inferred) about the whole universe. This is why a real agent learning means the state space and theory space are constantly revised. This is how information in biological systems is stored over evolution. But it's not how information is stored in the Newtonian paradigm, where the future is determined by the past plus fixed, inexplicable, eternal laws that we thus need to fine-tune. So specifying both the laws and the initial conditions is a fine-tuning task that no finite agent can solve; my take on it is that the abstraction itself is "bad" for theory building. This argument is related to Lee Smolin's arguments in https://arxiv.org/abs/1201.2632, which is also a critique of the "Newtonian paradigm", as he calls it.

To me, the thinking that is relevant and perhaps related to "free will", if one insists on talking about it, is more like "freedom of action". As I see it, this freedom is first of all relative to the observer, as the "freedom" is just the unpredictable part. But I don't see why this has to be dressed up as ruled by some imaginary "free will"; all one can infer is that it's random. So I think of an agent's "decisions" as simply a random walk, or dice throwing. No conscious decisions or "free will" seem needed for the model, as I fail to see how it would be modelled differently from a random walk (where one can think that the agent "chooses" his walk, rather than doing it randomly).

/Fredrik
 
  • #160
Fra said:
By that term I mean that, from the perspective of any real agent (i.e. a coherent part of the universe having finite mass etc.), it is not possible to encode the state space of the universe
Yes.

Fra said:
This is why a real agent learning means the state space and theory space are constantly revised
Meaning, the agent's model of the rest of the universe? Of course, this is obvious.

Fra said:
This is how information in biological systems is stored over evolution.
Information in the genes, yes.

Fra said:
it's not how information is stored in the Newtonian paradigm
Why not? It's perfectly possible in Newtonian physics to have one piece of the universe encoding some finite amount of information about other parts of the universe, or some finite model of the rest of the universe.

Fra said:
This argument is related to Lee Smolin's arguments in https://arxiv.org/abs/1201.2632, which is also a critique of the "Newtonian paradigm", as he calls it.
This isn't an argument, it's a speculative hypothesis which we have no way of testing, now or in the foreseeable future.

Fra said:
all one can infer is that it's random
One can't even infer that. The output could be pseudo-random, in the sense that computers have "random number generators" which are deterministic (give them the same starting seed and they will output the same sequence of numbers) but whose output meets all statistical tests of randomness when properly seeded. Human brains could use the same sort of thing to generate "random" choices when needed; there's no need for "intrinsic" randomness in the sense of some interpretations of QM.
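A minimal sketch of that determinism, using Python's standard random module (a Mersenne Twister PRNG):

[CODE=python]
import random

# A seeded PRNG is fully deterministic: re-seeding with the same value
# reproduces exactly the same "random" sequence.
random.seed(42)
first_run = [random.random() for _ in range(5)]

random.seed(42)
second_run = [random.random() for _ in range(5)]

assert first_run == second_run   # identical sequences from identical seeds
print(first_run)
[/CODE]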
 
  • #161
PeterDonis said:
Why not? It's perfectly possible in Newtonian physics to have one piece of the universe encoding some finite amount of information about other parts of the universe, or some finite model of the rest of the universe.
Note that the "Newtonian paradigm" in this context is not the same as "Newtonian (classical) physics". Even QM as it stands today follows the Newtonian paradigm, which is characterised by

1. a timeless state space ((q, p) in classical physics, the quantum state space in QM)
2. fixed (timeless, non-changing) dynamical LAWS (i.e. a Hamiltonian etc.)
3. a state that changes within the state space as per the dynamical evolution laws

I.e. in this paradigm there is no regular learning in time; all "learning" would be about fine-tuning the initial conditions as well as the fixed laws in an effectively infinite state space. The fixed laws here are like "hidden rules" that we treat as unknown (just like the HV mechanism of Bell, but generalized).
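A toy sketch of points 1-3, assuming a one-dimensional harmonic oscillator with unit mass and spring constant:

[CODE=python]
# 1. a timeless state space: points (q, p)
# 2. a fixed dynamical law: H(q, p) = p**2/2 + q**2/2, never revised
# 3. a state that changes within that space under the fixed law

def step(q, p, dt=1e-3):
    """One symplectic-Euler step of Hamilton's equations for the fixed H."""
    p = p - q * dt   # dp/dt = -dH/dq
    q = q + p * dt   # dq/dt = +dH/dp
    return q, p

q, p = 1.0, 0.0                 # initial conditions: the only free input
for _ in range(10_000):
    q, p = step(q, p)           # only the state moves; the law and space never do

print(q, p)                     # approaches (cos(10), -sin(10)) as dt -> 0
[/CODE]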

But that "process" is not part of the description (i.e. the physics), which is a problem, as it's treated as a human problem or "just" a practical limitation. The alternative view is that this "limitation" has deep implications for how physical interactions evolve and relate. The alternative view also does not have the same fine-tuning problem.

One can, as for example Smolin does, argue that this is a problem. The paper was just a random reference; he has written several books on the topic. But it's also admitted that his ideas are in the minority, so that many will disagree isn't unexpected. I think one of the latest books is https://www.amazon.com/dp/0544245598/?tag=pfamazon01-20.

Smolin does not offer the answers in the books, but he raises issues with current models with several arguments.
PeterDonis said:
One can't even infer that. The output could be pseudo-random, in the sense that computers have "random number generators" which are deterministic (give them the same starting seed and they will output the same sequence of numbers) but whose output meets all statistical tests of randomness when properly seeded. Human brains could use the same sort of thing to generate "random" choices when needed; there's no need for "intrinsic" randomness in the sense of some interpretations of QM.
I agree completely! I use the word "randomness" as synonymous with "lack of known rule" plus being randomly distributed statistically. It of course does not exclude the existence of a yet unknown rule. In this sense I consider randomness to be contextual, as the context (observer/agent) is the one that has to try to decode the noise.

The implicit conjecture, though, is that the "chosen" actions of an IGUS/agent are independent of what input appears to it as random noise, except possibly to the extent that the processing of the noise itself has indirect influences (as it's hard to screen yourself from noise, even if it's useless). This for example would suggest that agents that can't communicate (because they see each other's messages as just noise) also decouple in their physical interactions. The potential benefit of this lies in a theory which tries to model physical interactions from communication, if we realize that the agents are just small bits of matter.
And thus the influence of any hidden rule can be thought of as analogous to hidden variables. Thinking that the hidden rules influence agents who are unaware of them (like the HVs in Bell's theorem are assumed to influence the outcomes) should, I think, likely give experimentally testable differences. I don't think this exists yet, but I can at least imagine a generalisation of Bell's theorem applied not to "hidden variables" but to "hidden rules". If I didn't see any future chance that this would make a difference, I would not bother.

As for what is falsifiable about this alternative to the Newtonian paradigm, that is something Smolin also took seriously, so one attempt to at least illustrate how it can in principle lead to predictions is his hypothesis of cosmological natural selection.

https://arxiv.org/abs/hep-th/0612185
The idea there is that the laws of physics "mutate" at the big bang. For me his specific idea is not what is important, nor do we need to discuss it, but it's an example of how the idea of evolving laws (in contrast to the Newtonian paradigm, where the laws of physics just are, and asking WHY is not something we should do - it's just up to experiment) at least MIGHT give predictions, as a way to satisfy critics.

/Fredrik
 
  • #162
Fra said:
in this paradigm there is no regular learning in time
Sure there is. "Learning" is a process that happens as part of your #3, the state changing in accordance with dynamical laws.

Fra said:
The implicit conjecture, though, is that the "chosen" actions of an IGUS/agent are independent of what input appears to it as random noise
Only in the sense that the agent won't perceive any pattern in random noise, so it won't take any action that depends on perceiving a pattern.

But if, for example, I have a white noise sound generator and I turn up the volume very high, you're going to do something in response. It won't be anything based on perceiving a pattern in the noise, but that doesn't mean you won't choose any action at all. You sort of acknowledge this:

Fra said:
except possibly to the extent that the processing of the noise itself has indirect influences (as it's hard to screen yourself from noise, even if it's useless)
But doing something to avoid the noise isn't the same as "processing" the noise.

Fra said:
This for example would suggest that agents that can't communicate (because they see each other's messages as just noise) also decouple in their physical interactions.
This doesn't follow at all. I don't need to communicate with you to hit you over the head with a baseball bat and take your wallet. "Physical interactions" is much, much broader than just communication.
 
  • #163
Fra said:
Even QM as it stands today follows the Newtonian paradigm, which is characterised by

1. a timeless state space ((q, p) in classical physics, the quantum state space in QM)
2. fixed (timeless, non-changing) dynamical LAWS (i.e. a Hamiltonian etc.)
3. a state that changes within the state space as per the dynamical evolution laws
Quantum field theory doesn't fit this pattern. #1 can sort of apply if you have a closed system, but most systems of interest are not closed. #2 applies only in a very weak sense, as phase transitions can change the effective dynamical laws. #3 doesn't apply because, as a relativistic theory, QFT does not have a "state" that changes with "time"; it can't, because that would require an invariant concept of "now", and there is none in relativity.
 
  • #164
Jarvis323 said:
...Do quantum systems occupy ... a world that is unknown to us, and maybe we cannot obtain knowledge of? Or something else?
This issue, to me, makes the principle of locality suspect, or at least unclear to me, in the context of QM.
That has a direct relationship with the epistemic, ontic, or complete interpretations of the quantum state.

- only one pure quantum state correspondent/consistent with various ontic states.

- various pure quantum states correspondent/consistent with only one ontic state.

- only one pure quantum state correspondent/consistent with only one ontic state.
Einstein, incompleteness, and the epistemic view of quantum states
https://arxiv.org/pdf/0706.2661.pdf

"ψ-ontic if every complete physical state or ontic state in the theory is consistent with only one pure quantum state; we call it ψ-epistemic if there exist ontic states that are consistent with more than one pure quantum state

The simplest possibility is a one-to-one relation. A schematic of such a model is presented in part (a) of Fig. 1, where we have represented the set of all quantum states by a one dimensional ontic state space Λ labeled by ψ. We refer to such models as ψ-complete because a pure quantum state provides a complete description of reality".
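To make the quoted definitions concrete, here is a toy sketch with purely hypothetical finite sets of ontic states and pure states:

[CODE=python]
# Map each ontic state lambda to the set of pure states psi whose
# distributions assign it nonzero probability (hypothetical toy data).

psi_ontic_model = {                      # each lambda fixes psi uniquely
    "l1": {"psi_A"}, "l2": {"psi_A"},
    "l3": {"psi_B"}, "l4": {"psi_B"},
}
psi_epistemic_model = {                  # "l2" is consistent with both psi's
    "l1": {"psi_A"}, "l2": {"psi_A", "psi_B"}, "l3": {"psi_B"},
}

def is_psi_epistemic(model):
    """psi-epistemic iff some ontic state is consistent with >1 pure state."""
    return any(len(psis) > 1 for psis in model.values())

print(is_psi_epistemic(psi_ontic_model))      # False -> psi-ontic
print(is_psi_epistemic(psi_epistemic_model))  # True  -> psi-epistemic
[/CODE]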
 
  • #165
PeterDonis said:
Sure there is. "Learning" is a process that happens as part of your #3, the state changing in accordance with dynamical laws.
If the future state is implied by the past, the past encodes the same information as the future. So wherein lies the learning or information gain?

/Fredrik
 
  • #166
Fra said:
If the future state is implied by the past, the past encodes the same information as the future.
These statements only apply to the entire universe. They do not apply to tiny subsystems of the entire universe.

Fra said:
So wherein lies the learning or information gain?
Information can get transferred between subsystems even if, over the entire universe, the total information is constant.

Btw, the above comments assume that the concept of "information" being used (a) makes sense, and (b) is the relevant one to use for analyzing, say, learning by humans. I would like to see more than just an assertion from you of those points.
 
  • #167
PeterDonis said:
the above comments assume that the concept of "information" being used (a) makes sense, and (b) is the relevant one to use for analyzing, say, learning by humans. I would like to see more than just an assertion from you of those points.
Perhaps it will help if I describe a different concept of "information".

Suppose I write a computer program to accomplish some non-trivial task. Consider the bits in my computer's memory that store the program code. Before I write and execute the program, those bits store arbitrary values that do not enable the accomplishment of anything useful. Afterwards, those bits store a very particular set of values that do enable the accomplishment of something useful.

I might describe this process as storing information in those bits. The whole process could be perfectly deterministic. But it certainly seems like information is there in the computer's memory that wasn't there before. Where did the information come from?

At least one common answer is that "information" here means negentropy: the process of setting the bits in the computer's memory to the particular values that encode my program is a large decrease of entropy, because instead of being random bits, corresponding to a large phase space volume (the phase space volume containing all of the bit patterns storable in those bits that don't accomplish anything useful), they are now a very particular bit sequence, corresponding to a tiny phase space volume (the phase space volume of all the bit patterns storable in those bits that accomplish exactly the same thing as my program does). And, according to the second law, I must have increased my own entropy by at least as much in order to accomplish this: all the pizza and soda and Doritos I consumed while writing the program that then got converted into the energy to operate my brain and body, with this metabolic process involving a large entropy increase. And again, this whole process could be perfectly deterministic.
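A back-of-the-envelope sketch of that counting, with all numbers made up for illustration:

[CODE=python]
# Negentropy version of "information stored", with made-up numbers.
N_BITS = 1_000_000   # hypothetical size of the memory region holding the program

# Before: any of 2**N_BITS patterns is equally plausible -> entropy = N_BITS bits.
# After: one of a tiny set of functionally equivalent patterns; assume, purely
# for illustration, that about 2**100 bit patterns compute the same thing.
entropy_before = N_BITS      # log2(2**N_BITS) bits
entropy_after = 100          # log2(2**100) bits, by assumption

print(entropy_before - entropy_after, "bits of negentropy stored")   # 999900
[/CODE]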

@Fra, whatever concept of "information" you are using, it would not appear to be the concept described above. So what is it?
 
  • #168
PeterDonis said:
These statements only apply to the entire universe. They do not apply to tiny subsystems of the entire universe.
Yes, agreed. Which leads us to the topic: what "paradigm" does make sense for an agent, if the Newtonian paradigm does not? (This is what I have been talking about.)

By paradigm I mean the choice of abstractions or mathematical methods for representing theory space, states and changes. (Examples of "paradigms" are, say, ODEs/PDEs and boundary/initial value problems; another is ABM, agent-based modelling. Often the same phenomenon can be described in BOTH paradigms, but sometimes one of them makes the logic clearer.)
As Smolin argues in several papers and books, this paradigm does make sense and has extreme power, but mainly when you study small subsystems on short timescales. There is no need to deny this; it is possibly even the best way to do it in that case. But open problems in physics, such as trying to unify interactions at very different energy ranges, could perhaps be easier in another paradigm. Fine-tuning and renormalisation problems are IMO symptoms of trying to use this successful paradigm outside its corroborated domain of fitness.

I totally agree that all this is fuzzy, but I wouldn't expect any of it to be easy to express or discuss. Step 1 is to at least be aware that most of current physics is forged from this paradigm, and to ask whether that is good.

From the perspective of defining the "initial information" here, seen as learning, specifying the initial conditions, the state space structure and the dynamical laws all imply prior information in the wider sense.

PeterDonis said:
Information can get transferred between subsystems even if, over the entire universe, the total information is constant.
A conservative mainstream answer then is that the agent (seen as a subsystem of a closed system) is described in principle by taking the master description of the whole universe and simply averaging out the environment. But this paradigm rests on the same abstractions as for the whole universe, which you then just "reduce". This "solution" is the problem, as it contains a fine-tuning problem. I.e. the "explanation" requires a lot of fine-tuning. In other words, the "explanation" by construction rests on a priori improbable premises.

The more pragmatic conservative approach is to simply consider the theory for the subsystem as an effective theory. This is of course totally fine, and it's the standard view I would say, but how can we understand the emergence of and relations between the different theories, and most importantly, how can we guess the interaction of two agents that each encode different effective theories, if we do not understand the emergent theories in a deeper way? The standard view is still that, via various renormalisations and phase transitions, we can infer an effective low-energy theory given that the ultimate high-energy theory is known. But what about the reverse? And the other problem is that the high-energy theory is again too complex for an agent to encode. If we need to encode all this in some god's view or external "superagent", how does that help us understand the unification and emergence of interactions?

Note: discussing the measures of "information", "entropy" etc. is interesting, but that I fear can be its own discussion. I admit that there are multiple cans of worms here, but opening them all at once gets messy. Just a short note: I think the Shannon type of "information" is, for reasons like the above, not a satisfactory measure, as it relies on a fixed background structure (classical bits, for example). I think the measure of information that an agent itself can define, without external references, is necessarily relative somehow. I'll stop there.

Edit: A note here that also reveals the two perspectives: do we abstract the "Agent/Observer" as a subsystem of the closed universe? Or do we abstract the agent from the inside, as something learning and struggling in an unknown environment? This difference in perspectives makes a big difference. Compare the task in Riemannian geometry of defining curvature in terms of intrinsic measures only, rather than imagining the curved surface embedded in a flat environment. The perspective with intrinsic vs extrinsic information theory is the same here. The conventional theory is extrinsic and explains the agent by seeing it from a bigger embedding. This is just as unsatisfactory as describing the curvature of a surface in terms of extrinsic (intrinsically non-measurable) quantities. This is not a perfect analogy, but it illustrates the point. All the "measures" and "measurements" of the agent must be defined in terms of intrinsically available things, in order to avoid blind fine-tuning of external parameters.

/Fredrik
 
  • #169
PeterDonis said:
At least one common answer is that "information" here means negentropy: the process of setting the bits in the computer's memory to the particular values that encode my program is a large decrease of entropy, because instead of being random bits, corresponding to a large phase space volume (the phase space volume containing all of the bit patterns storable in those bits that don't accomplish anything useful), they are now a very particular bit sequence, corresponding to a tiny phase space volume (the phase space volume of all the bit patterns storable in those bits that accomplish exactly the same thing as my program does). And, according to the second law, I must have increased my own entropy by at least as much in order to accomplish this: all the pizza and soda and Doritos I consumed while writing the program that then got converted into the energy to operate my brain and body, with this metabolic process involving a large entropy increase. And again, this whole process could be perfectly deterministic.

@Fra, whatever concept of "information" you are using, it would not appear to be the concept described above. So what is it?
Short comment on this.

In a Bayesian probability approach, I see two sorts of information: first the "explicit" one, encoded by the prior probability itself; then we have the implicit one, prior information encoded by the structure of the probability space itself, for example its dimensionality and the metric of the index space, which is usually fixed for a given model.

In the Newtonian scheme there are also two sorts of information: the one "explicit" in the state, to which one can assign an entropy provided one has some a priori probability, and the implicit one encoded in the dynamical LAW and the state space and its structure.

My view and interpretation is that both forms of information are equally important. The difference lies merely in time scales: in the differential sense, the prior information and the dynamical law are fixed, and we get the normal dynamics. But on larger time scales there has to be a back-reaction changing the prior information. (The analogy is that geometry rules the dynamics of particles within spacetime on short timescales, but on longer timescales the geometry also changes; in the case I consider here it gets more complicated.)

So by information I also include the "prior implicit information", and this is not included in the Shannon metrics (or other forms). These entropy measures are also differential measures of information change or information gain, but seemingly not valid over large scales of time.
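A toy sketch of the explicit/implicit distinction, assuming a hypothetical discretised Bernoulli model:

[CODE=python]
import numpy as np

# "Explicit" prior information = the probability assignment over the space.
# "Implicit" prior information = the choice of the space itself (its size,
# spacing, metric), which the Shannon entropy of the prior never accounts for.

theta = np.linspace(0.01, 0.99, 99)           # the fixed, assumed state space
prior = np.full(theta.shape, 1 / len(theta))  # explicit: uniform prior over it

shannon_bits = -np.sum(prior * np.log2(prior))
print(shannon_bits)   # ~6.6 bits: quantifies only the explicit part;
                      # the choice of `theta` itself appears nowhere in this number
[/CODE]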

/Fredrik
 
  • #170
Fra said:
Which leads us to the topic: what "paradigm" does make sense for an agent, if the Newtonian paradigm does not?
And the only answer at this point is that this is an open area of research. We're certainly not going to resolve that issue here.

Fra said:
Compare the task in Riemannian geometry of defining curvature in terms of intrinsic measures only, rather than imagining the curved surface embedded in a flat environment. The perspective with intrinsic vs extrinsic information theory is the same here.
No, it isn't, because modeling a spacetime in GR using intrinsic curvature does not require the spacetime to be embedded in any higher dimensional space, and we have no evidence that our universe is so embedded.

Whereas, we know all agents in our universe are embedded in the universe as a whole. And we also know that all agents process information from the rest of the universe and act on the rest of the universe. So there is a huge difference from the case of modeling a spacetime using intrinsic curvature.

Fra said:
So by information I also include the "prior implicit information"
Do you have a reference for this?
 
  • #171
PeterDonis said:
And the only answer at this point is that this is an open area of research. We're certainly not going to resolve that issue here.
Yes, of course, as is the case with many things discussed in this subforum. I never claimed to have the final answer. I just wanted to highlight the alternative perspectives for the sake of discussion, which I presume is reasonable?
PeterDonis said:
No, it isn't, because modeling a spacetime in GR using intrinsic curvature does not require the spacetime to be embedded in any higher dimensional space, and we have no evidence that our universe is so embedded.
I was not referring to GR (which is a much later application); you are right there. What I had in mind was more general: the history of differential geometry. At least as I was taught it, Gauss (whose student Riemann later became) was struggling to find intrinsic measures of geometry, constructable by an imaginary inside agent living in the manifold.
PeterDonis said:
Whereas, we know all agents in our universe are embedded in the universe as a whole. And we also know that all agents process information from the rest of the universe and act on the rest of the universe. So there is a huge difference from the case of modeling a spacetime using intrinsic curvature.
In reality this is the case, yes. But the abstractions and paradigms within which theoretical physics constructs theories do not, IMO, respect this in the right way. I am not saying there is a problem with nature, just that I question whether our current paradigms are satisfactory for the general case.
PeterDonis said:
Do you have a reference for this?
Not sure what you mean, but the distinction between prior information and "prior probability" is a conceptual one. The prior probability is of course supposed to "represent" at least PART of the prior information, but hardly all of it. For me this is clear; I'm not sure what reference we need. But this has been discussed before.
See ET Jaynes https://bayes.wustl.edu/etj/articles/prior.pdf
It was raised also in this thread https://www.physicsforums.com/threads/bayesian-statistics-in-science.1008801/page-2
But the quest to develop this and make more mathematics out of the "prior information" than just the "prior probability" in a given space is, I think, an open question. For discussion, just highlighting this is a good start, and we can omit speculations. Just to take an example, the gene could perhaps be the correspondent of a prior probability (conceptually, that is), but the gene itself needs to be backed up by an environment and machinery for transcription and protein synthesis, so there is "more information" in the evolutionary states than just the genome alone. What mathematics do we need for this in the corresponding case of physics? The analogies are just for inspiration; it of course does not make sense to compare genomes exactly with physical law: the genome is made out of matter, but how are the laws encoded? (Some interesting thoughts are in this paper, also from Smolin, https://arxiv.org/abs/1205.3707; it at least sets my head off in an interesting direction, where the implicit prior information is encoded in the environment.)

/Fredrik
 
  • #174
Jarvis323 said:
I am interested in what people get from those papers also, in addition to this paper:

Hidden assumptions in the derivation of the Theorem of Bell

https://arxiv.org/abs/1108.3583

Maybe a new thread should be started though?

The work of De Raedt et al. has nothing to do with this thread. So yes, a new thread would be appropriate if you choose to pursue it. You should also be specific about which element of their team's work you are interested in. Keep in mind that it is not generally accepted science, as there are literally dozens of papers that make similar assertions about Bell's Theorem. Those have yet to gain traction within the community, any more than Superdeterminism* has.

Their work (Hess, De Raedt, Michielsen) approaches things from a different perspective, and there is a lot of computer science involved. I know a bit about it, and would be happy to discuss where I can add something useful.

*I would say that Superdeterminism ("The Great Conspiracy") as a hypothesis has received virtually no serious consideration to date, for the obvious reason that its very premise means it can never add anything to our understanding of the quantum world, either theoretically or experimentally. (The same is true of all conspiracy theories, of course.) As FRA says in post #141 above: "If the laws or rules and initial conditions required for SD to work cannot be learned by an actual inside observer, then its predictive and explanatory value is essentially zero (even if it, in some sense, were true)."
 
