Can Quantum Computers Validate the Many-Worlds Interpretation?

In summary, the argument appeals to an artificial intelligence that can be implemented on a quantum computer. The first part of the argument is that however the brain works, it is ultimately formally describable using a finite number of bits. Therefore it can be implemented by a computer, and thus also by a quantum computer.
  • #106
ueit said:
I didn't ask for a definition of the observer that is "static", "stable", "independent of its context". As the "observer" seems to be some sort of primitive in your theory, there should be a definition of it, don't you think? I mean, what is your theory about?
...
I do not understand the meaning of this. For now, I don't know what an observer means in your theory, much less what an uncertain/constrained observer refers to.
...
Maybe, but at least the observer is defined somehow.
...
So, do you just redefine the term "quantum system" as "observer"? Are there systems that are not observers?

Ok, now I see what you mean. I thought the basic meaning of observer was clear, but maybe not. I'm sorry.

I'll note that I don't yet have a complete theory; I am working on a reconstruction of current models. But some starting points and design principles are in place.

Roughly my view is like this.

About observers, to avoid confusion I'll note that there are two views of that.

* The inside view

- the view of the universe that an inside observer has.
This VIEW defines the observer.

analogies:

1. It's like the distinction between the self and the non-self. But this is difficult because, since the observer is not static, the boundary is fuzzy and evolving.

2. Another interesting analogy is the distinction between what you know FOR SURE and what you are only guessing. I'm sure you would agree that there is a fuzzy boundary here, in particular where you are "almost sure" but not quite. Or you can argue that you are never sure and it's all about various degrees of certainty - that reading fits my view as well.

Since I'm picturing a reconstruction, I avoid using too much standard QM terminology, since people would tend to think I am referring without reservation to existing concepts.

But loosely speaking, I think the Hilbert space is part of the observer's identity. And I am not talking about the Hilbert space of the environment in a decomposition H_universe = H_observer ⊗ H_remainder; I'm suggesting that that math makes no sense because it mixes the inside view and the bird's view in an, IMHO, conceptually illegal way.

So the Hilbert space of the universe, as seen from the inside observer, is CONSTRAINED by the complexity of the observer. A simple observer cannot relate to the full complexity of its environment.

So from the inside view, I call the home of the information and state vectors a system of microstructures. And this cannot be questioned objectively by the inside observer. It just is. However, as to the question of how it came to be, there is an evolutionary picture in which this microstructure evolves and can gain complexity (which I also associate with mass in some form).

The "problem" for the inside view, is to survive the challange of the environment. In this picture, what was usually called an inconsistency between views, is here instead just exactly what causes the evolution (both time an large perspective)

* external view

This is the view where ONE observer ponders that parts of its own environment can be thought of as separate observers that are mutually interacting. I.e., one observer observes other observers.

In this view, the notion of observer is fairly unclear. But my point is that this is not a big problem. It is only a problem for those who can't let go of certain realist ideals.

ueit said:
I don't think it is possible to build a theory without laws because you cannot calculate/predict anything.
...
How can an observer compute probabilities if there are no objective physical laws?
...
Again, I am not sure how one can predict anything in the absence of a law.

This is an example of a very general and recurring problem. It also comes in other disguises.

It's the origin problem.

I didn't say I think there is no law; what I mean is that IN GENERAL there is no objective law which we can be sure all observers agree upon.

Instead, I am suggesting that objective LAW is emergent.

There are predictions, but they only live in an evolving context, so even a faulty prediction has a place. The observer which embodies consistently flawed predictions will have its microstructure destroyed and deformed by environmental feedback.

So in a near-equilibrium scenario there are FAPP-type objective laws, and we recover pretty much the standard physics. But what I am suggesting is a possible way to understand more deeply why the laws of physics are the way they are, and whether they are better seen as evolving or fixed.

I'm suggesting that you can LEARN and improve without having fixed rules for learning, because the whole point is that you do not only learn as per fixed rules, you even learn the learning rules. The context is evolution.

The evolutionary context is IMO the best way to see law. Whether these observed laws are the same as some "real laws" is something to which nature is indifferent.

Not sure if that made sense...

/Fredrik
 
  • #107
More: Thus I think that the "measurement" operators evolve together with the Hilbert space (this is what I mean by an evolving observer). The Hilbert space as SUCH also contains information that is usually not accounted for. I think this is constrained by the mass or complexity of the observer. Thus the complexity of an observer, and its history, actually constrain what possible interactions it can participate in. This is the idea for how to eventually make predictions.

The fact that ALL observers participate in gravity and have inertia gets a natural explanation here, since the inertial mass is the "complexity count" of the system of microstructures (which includes the _inside observed_ Hilbert space).

The information-theoretic primordial form of inertia is an evidence count. It's the resistance against perturbation. All structures have this, including what are currently thought of as abstract things, such as Hilbert spaces. The Hilbert space relates to the state vector as the memory hardware relates to the memory state. The memory hardware itself contains information too. This is not acknowledged in the current formalism. I think we can do better.

/Fredrik
 
  • #108
Regarding the subject.

I have a raw idea, maybe it is incorrect, but anyway.

After decoherence the off-diagonal elements quickly approach 0, so only the diagonal elements are left. But let's choose the diagonal element with a very low probability - an almost impossible event.

If the probability is very low, then the value in that diagonal cell can be in the same range as the neighbouring off-diagonal elements. Hence, if you are on a very improbable branch, there might be an interaction with the 'other branches' or, more precisely, with the 'bath' of the not-yet-decohered states.

Oversimplifying, MWI predicts that if you roll a die 1000000 times and always get 6, then look around: very probably there are other weird things happening around you, like seeing 2 semi-transparent images of both cats.
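A minimal numerical sketch of that intuition (the branch probability and decay rate below are invented for illustration, not derived from any specific model): the off-diagonal terms of a two-state density matrix decay exponentially, and for quite a while they remain comparable in magnitude to a sufficiently tiny diagonal entry.

Code:
import numpy as np

# Hypothetical two-state density matrix: one almost-impossible branch (p2)
# and exponentially decaying coherences (all numbers are illustrative).
p1, p2 = 1.0 - 1e-8, 1e-8       # branch probabilities (assumed)
coh0 = np.sqrt(p1 * p2)         # initial coherence of the pure state
gamma = 1.0                     # decoherence rate, arbitrary units

for t in [0, 5, 10, 20]:
    coh = coh0 * np.exp(-gamma * t)         # off-diagonal magnitude at time t
    rho = np.array([[p1, coh], [coh, p2]])  # the (real-valued) density matrix
    print(f"t={t:2d}  off-diagonal={rho[0, 1]:.2e}  tiny diagonal p2={rho[1, 1]:.2e}")

Around t = 9 in these units the residual off-diagonal term is still of the same order as the 1e-8 diagonal entry, which is the comparison the post appeals to.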
 
  • #109
More likely, you'll hear the 7 am alarm go off.
 
  • #110
Count Iblis said:
More likely, you'll hear the 7 am alarm go off.

:smile::smile::smile:

I would like to be in that world! Wait - "I" already am?
 
  • #111
Hurkyl said:
Of course. But the point is that QM is the least incorrect description of the real world.
I'm not convinced that even that view can be justified. The axioms of QM are supposed to be these:

i) all isolated systems evolve in time according to the Schrödinger equation
ii) [the probability rule -- you know the details already]

The weird thing is that ii describes the time evolution of the system and the measuring device during a measurement, but i is already supposed to describe the time evolution of that combined isolated system. We clearly don't want two time evolution axioms, since we might end up with a contradiction.

So how do we avoid contradictions? I'm glad you asked. I only see two possibilities:

1. The time evolution in ii is only a special case of the time evolution in i.
2. You are only allowed to use the time evolution axiom on systems that you are not a part of.

If 1 is correct, it must be possible to prove it. But attempts to do so have failed. The successful attempts to derive the probability rule all used another axiom, which is essentially equivalent to ii).

If 1 is incorrect, then we are left with 2, and in that case I really don't see how to justify the view that QM is a description of the universe (Edit: I don't see how we can even think of it as a description of a fictional universe). 2 is however completely consistent with the view that QM is just an algorithm that tells us how to calculate probabilities of possible results of future experiments, given the results of past experiments.
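To make the two rules concrete, here is a toy sketch in code (the Hamiltonian, time step, and measurement basis are arbitrary choices for illustration, not anyone's specific model): rule i is a deterministic linear map given by a unitary matrix, while rule ii stochastically replaces the state by an eigenstate with Born-rule weights. The two kinds of update are visibly different in character.

Code:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Rule i: Schrödinger evolution |u> -> exp(-iHt)|u> (deterministic, linear).
H = np.array([[0.0, 1.0], [1.0, 0.0]])    # toy Hamiltonian, hbar = 1
u = np.array([1.0, 0.0], dtype=complex)   # initial state
u = expm(-1j * H * 0.3) @ u

# Rule ii: Born rule plus "collapse" (stochastic, nonlinear).
basis = np.eye(2, dtype=complex)          # measurement basis {|b0>, |b1>}
probs = np.abs(basis.conj().T @ u) ** 2   # |<b|u>|^2 for each outcome
outcome = rng.choice(2, p=probs)          # sample an outcome b
u = basis[:, outcome]                     # the state is now |b>

print("outcome:", outcome, "post-measurement state:", u)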
 
Last edited:
  • #112
Dmitry67 said:
Regarding the subject.
After decoherence the off-diagonal elements quickly approach 0, so only the diagonal elements are left. But let's choose the diagonal element (1) with a very low probability - an almost impossible event.

If the probability is very low, then the value in that diagonal cell can be in the same range as the neighbouring off-diagonal elements. Hence, if you are on (2) a very improbable branch, there might be an interaction with the 'other branches' or, more precisely, with the 'bath' of the not-yet-decohered states.

Oversimplifying, MWI predicts that (3) if you roll a die 1000000 times and always get 6, then look around: very probably there are other weird things happening around you, like seeing 2 semi-transparent images of both cats.

(1) seems to imply that you define probability as "degree of plausibility" so that an event assigned a high probability has a high likelihood of occurring and vice versa.

(2) seems to imply that probabilities can be assigned or calculated for branches. In MWI a branch either exists or does not exist. If you are certain that those branches exist, then their degree of plausibility should be 1. So how can a branch have a low degree of plausibility? This is the part where it is not clear what the probability of a branch is supposed to mean. Do you mean the ratio of the number of identical branches to the total number of branches? That is a frequency, not a probability.

(3) If you roll a die 1000000 times and always get a 6, there is no need to suspect weirdness (unless of course you have a penchant for mysticism). Rather, it tells you that the probability of the next roll giving a 6 is almost 1.0. Note that this probability has changed from 1/6 at the first throw to almost 1, even though the die is the same and nothing else has changed ontologically. That is why the only consistent application of probability is as an epistemological representation of degree of plausibility.

If you recognize that frequency is ontological, then you will interpret an observed frequency only as telling you that the die is really biased. But if you mix frequency and probability, you may suspect that since the world does not obey what is in your mind (1/6), something weird is going on. Rather, what is happening is that, prior to the experiment, you formed an opinion (probability 1/6) about how the die might behave based on limited information about the specific die. Then as the experiment proceeded, you learned more information about the die, and updated your opinion accordingly (probability ~1). Meanwhile, all along, the die has not changed. No worlds have branched. The only thing that has happened is that your state of knowledge about the die has improved.

If I say the probability of a cat being dead is 0.129245, I am simply saying that, based on the information at my disposal, the cat is less likely to be dead than alive, and on a scale from 0 to 1, where 0 means certainly not and 1 means certainly, the degree of plausibility can be assigned the value 0.129245. It does not mean I have to measure a large number of cats to be able to count the frequency and then divide the number of dead cats by the total to get 0.129245. Some experiments can only ever be performed once. Still we can assign a probability to them.
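As a sketch of the updating described above (assuming, for illustration, a uniform Dirichlet prior over the six faces, i.e. Laplace's rule of succession), the assigned plausibility moves from 1/6 to nearly 1 purely as a function of the observed counts:

Code:
# Degree-of-plausibility update for the die, assuming a uniform
# Dirichlet(1,...,1) prior over the six faces (Laplace's rule of succession).
def prob_next_six(n_rolls: int, n_sixes: int) -> float:
    return (n_sixes + 1) / (n_rolls + 6)

print(prob_next_six(0, 0))                   # 1/6 before any data
print(prob_next_six(1_000_000, 1_000_000))   # ~0.999995 after a million sixes

Nothing about the die changes in this calculation; only the knowledge state (the counts) does.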
 
  • #113
Just for fun, here are my reflections on parts of the discussion coloured by my personal view.
Fredrik said:
We clearly don't want two time evolution axioms, since we might end up with a contradiction.
How about if, instead, what you think of as a contradiction (= a problem) is nothing but a physical interaction between physical views? Because, after all, it seems a lot of people on here keep mixing different views, bird's views and frog's views, and IMHO some of the "logical contradictions" are not valid because they mix different contexts.

An alternative interpretation is that we are in fact facing contradictory views, which results not in a "logical contradiction" but in a "physical interaction".

In a certain sense, the wave function collapse is the RESULT of a kind of contradiction. The conflict between prior opinion and new evidence. The wave function collapse is in my view the physical realisation of negotiation.

Fredrik said:
2 is however completely consistent with the view that QM is just an algorithm that tells us how to calculate probabilities of possible results of future experiments, given the results of past experiments.
This is a rephrasing of what I tried to say before but where the vision didn't get through:

I argue that the utility of this probability is not to conclude in historical retrospect whether our expectations were right; rather, the probability expresses our opinion of the future and determines our actions NOW. Thus the context of probability need not be historical frequencies; instead it is something more involved that has to do with actions.

In this picture, two views (two physical observers) having different expectations of the future, and computing different probabilities, are not a logical contradiction. Instead my conclusion is that the implication is a physical interaction.

/Fredrik
 
  • #114
Fra said:
How about if, instead, what you think of as a contradiction (= a problem) is nothing but a physical interaction between physical views? Because, after all, it seems a lot of people on here keep mixing different views, bird's views and frog's views, and IMHO some of the "logical contradictions" are not valid because they mix different contexts.
The time parameter is supposed to be the same in both views, and it's clear that there are some statements that we could make about time evolution in the frog's view that would contradict the statement about time evolution in the bird's view. We can't just state an axiom about time evolution in the frog's view and expect it to not contradict the statement we have already made about time evolution in the bird's view. If the frog's view axiom doesn't contradict the bird's view axiom, then it must be possible to use the latter to prove that the former holds. But it doesn't seem to be possible.

Fra said:
An alternative interpretation is that we are in fact facing contradictory views, which results not in a "logical contradiction" but in a "physical interaction".
If it's just an interaction, then I wouldn't call it a contradiction. The contradictions I talked about above are actual contradictions, which would make the theory invalid. You should use another word for what you're talking about, e.g. "complementary". If the second kind of time evolution is a special case of the first, then we have two complementary views. I don't have a problem with that, except for the fact that the second kind of time evolution doesn't seem to be a special case of the first.
 
  • #115
I'm just throwing in my views to fuel the fire. I'm well aware that I probably have a different view on this, and that we may not reach an agreement.

Fredrik said:
The time parameter is supposed to be the same in both views, and it's clear that there are some statements that we could make about time evolution in the frog's view that would contradict the statement about time evolution in the bird's view.

As a note first, I do not believe in a strict axiomatic approach to solving open problems.
Axiomatisations are useful when a theory is maturing, as a way to clean it up. But I don't think the creative process is usefully abstracted in terms of axiomatisation.

What I mean is that - in the general case - the time evolution does not necessarily describe what WILL happen, not even statistically. It describes which evolution the observer expects to happen, and its actions are then consistent with expectations, not consistent with the yet unknown future.

If two observers have different expectations of the future, clearly a conflict appears. And since I see law and symmetry as evolving, the objective law or transformation symmetry simply can't be established from the inside view to correct for this conflict in advance. The only way is to play your cards to the best of your information, and face the consequences.

In the customary view of law and symmetry in physics, the transformation laws that generate all the frog's views do indeed recover a bird's-view consistency. I am suggesting that's a special, idealised case. In general, the symmetry that reconciles the different views is subject to the same measurement process.

In human science this "measurement process" is the scientific development itself: years and years of modelling, experimenting, etc. have produced a set of symmetry transformations and abstractions. I am suggesting that in the next revolution of physics, this process must itself be seen as part of PHYSICS. I think the problem of quantum gravity in particular suggests this.

So the bird's view or god's view that a lot of people have is IMHO invalid as a physical view. It's an imagined view (we can all imagine such a view mathematically) that is really never realized in nature.

Fredrik said:
If it's just an interaction, then I wouldn't call it a contradiction. The contradictions I talked about above are actual contradictions, which would make the theory invalid.

Hmmm, maybe I misunderstood again. I'm not sure to what extent I disagree with your personal view; I just threw in this reflection.

From my view I'd say the inconsistency appears because there are invalid uses of different physical views. Just because one can, from the armchair, consider a "bird's view" doesn't mean it has any place in the real world.

I'm not saying it can't have a place, it sure can. I'm just saying it isn't certain, and my personal view is that such a view is an illusion that is misleading.

So in this case I agree that if you keep insisting on the bird's view, as is often done, that makes no sense.

Fredrik said:
You should use another word for what you're talking about, e.g. "complementary". If the second kind of time evolution is a special case of the first, then we have two complementary views. I don't have a problem with that, except for the fact that the second kind of time evolution doesn't seem to be a special case of the first.

> i) all isolated systems evolve in time according to the Schrödinger equation

The property of a system to be isolated is a hypothetical constraint IMO. It makes sense in some cases, not in general.

In a realistic scenario I think the inference of this closedness must be described.

Somehow the closedness is defined relative to a fictive bird's view. A real inside observer can only have indications that the system is closed, and act as if it were. Whether contradicting evidence will appear in the future he doesn't know. But neither does that matter.

I think the whole notion of closed systems is one of my objections to the standard formalism. It's perfectly fine as a FAPP constraint in particle physics, but that's because the scale difference between the observer (human scientists) and subatomic systems is so large that the idealisation is perfectly valid.

But on cosmological scales, or in other extreme scenarios involving black holes, I think the abstraction falls flat.

> ii) [the probability rule -- you know the details already]

I'm sorry if I missed something, but are you talking about Born's rule or something else?
If it's Born's rule, how do you mean that it is a time evolution? I think I missed some part of your previous reasoning, sorry. I didn't follow the details of all the past discussions in this long thread.

/Fredrik
 
  • #116
mn4j said:
(1) seems to imply that you define probability as "degree of plausibility" so that an event assigned a high probability has a high likelihood of occurring and vice versa.

(3) If you roll a die 1000000 times and always get a 6, there is no need to suspect weirdness (unless of course you have a penchant for mysticism). Rather it tells you that the probability of the next roll giving a 6 is almost 1.0. Note that this probability has changed from 1/6 at the first throw to almost 1. Even though the die is the same and nothing else has changed ontologically. That is why the only consistent application of probability is as an epistemological representation of degree of plausibility.

1. No, I still owe you an answer to your question: what is a probability in MWI? In my previous post I just ignored that subject, thinking about the proof. I have to admit I need more time.

3. Yes, you can suspect that the die is not fair, but you can repeat the same experiment with elementary particles, which are well known to be identical. This is what the guys at colliders do - they repeat the same experiment billions of times, and a few times they get what they want. So they are waiting for quite improbable events.
 
  • #117
Fra said:
> ii) [the probability rule -- you know the details already]

I'm sorry if I missed something, but are you talking about Born's rule or something else?
If it's Born's rule, how do you mean that it is a time evolution?
Yes, the Born rule, with the associated "collapse". It describes a time evolution because it doesn't just say that a measurement of B on a system in state |u> will give us the result b with probability |<b|u>|^2. It also says that if we got the result b, the system is now in state |b>, at least to a very good approximation. I called this a "Copenhagenish" formulation of QM rather than "the Copenhagen formulation" because I didn't assume that the "collapse" to state |b> is exact.
 
  • #118
Fredrik said:
Yes, the Born rule, with the associated "collapse". It describes a time evolution because it doesn't just say that a measurement of B on a system in state |u> will give us the result b with probability |<b|u>|^2. It also says that if we got the result b, the system is now in state |b>, at least to a very good approximation. I called this a "Copenhagenish" formulation of QM rather than "the Copenhagen formulation" because I didn't assume that the "collapse" to state |b> is exact.

Ok, but I'm still not sure where you see the contradiction. Are you thinking that because in one view there is a collapse, and in another view there is no collapse this is inconsistent?

To me the collapse is simply an information update. But I agree that there are details around this that are ignored in standard QM. But to me that's not an interpretational issue; it's a suggestion that QM, as it stands, is not quite satisfactory. QM pretends to be a measurement theory, but it ignores how the data is stored, and in particular how MUCH data CAN be stored. Here I think the internal structure of the observer needs to be taken into account.

In this sense I can agree with you. The "classical observer" concept is a simplification.

If we instead ponder the structure of the prior information of the observer, and note that this has a certain inertia, then the measurement update must have a finite rate as well, and then the information update may actually not be instantaneous, due to the inertia. The collapse somehow assumes that the new data is so convincing that it totally crushes the prior state. This isn't realistic in the general case. If this is what you refer to, I agree.

But to cure this, I think reinterpretations won't help. We need to reformulate QM, in a way that the current formalism becomes a limiting case only. I think this can be done.

But even so, I think the collapse will not go away completely. This is why I didn't see the above objection as cleanly related to the collapse issue.

In principle the collapse is simply an information update, and the unitary evolution is simply our expected change of our environment, which guides us in between information updates.

I see no contradiction in this, and this will, I think, stay. What will change, however, is probably the logic of the information update. The simple projection concept coming from the Hilbert space abstraction is only a simple model that ignores the internal structure of the observer.

It tries to be a theory of communication, but it only has communication channels, not seeing that you cannot choose the channels independent of the nodes. The nodes can be saturated.

You see it as an algorithm, and I can make sense of that too. If I were to take your view, I would like to go more extreme (away from human-devised algorithms) and identify the observer with the algorithm, and thus suggest a picture where the algorithm is evolving. This picture constrains your algorithm.

How about if the physical observer is the physical manifestation and encoding of your algorithm? The physical observer acts and behaves in accordance with your algorithm, which defines his actions as a function of his state of information about his environment. That's quite close to my view. What's missing in QM is a description of this evolution of algorithms, which would extend the formalism and make the rigid state spaces more dynamical too.

/Fredrik
 
  • #119
Fra said:
Ok, but I'm still not sure where you see the contradiction. Are you thinking that because in one view there is a collapse, and in another view there is no collapse this is inconsistent?
No, that's not it. Even if the "collapse" in the frog's view is only approximate, we still have two rules describing a time evolution. Think of rule 1 as describing the time evolution of every part of your body, and rule 2 as describing the time evolution of your feet when your feet interact with the other parts of your body. If it's not possible to derive rule 2 from rule 1, then rule 2 contradicts rule 1. For example, your entire body including your feet goes to France. A logical consequence of that is that your feet are in France. If rule 2 says your feet are in Finland, we have a contradiction.

By the way, if you'd like to see a very different argument that arrives at the same conclusion (that it doesn't make sense to think of a wavefunction as describing the system), expressed in a way that sounds very different, check out sections 9.2-9.3 in Ballentine. I just got my copy yesterday, so I hadn't read those sections when this discussion started.
 
Last edited:
  • #120
Fredrik said:
No, that's not it. Even if the "collapse" in the frog's view is only approximate, we still have two rules describing a time evolution. Think of rule 1 as describing the time evolution of every part of your body, and rule 2 as describing the time evolution of your feet when your feet interact with the other parts of your body.
Ok, now I see your point.
Fredrik said:
1. The time evolution in ii is only a special case of the time evolution in i.
2. You are only allowed to use the time evolution axiom on systems that you are not a part of.
...
If 1 is correct, it must be possible to prove it. But attempts to do so have failed. The successful attempts to derive the probability rule all used another axiom, which is essentially equivalent to ii).

First, I personally don't think the Schrödinger equation is fundamental. In the reconstruction I picture, the expected time evolution of the state vector should follow from a new understanding of the least action principle, which I think of as a form of minimum speculation, or a minimum-information-divergence picture. From this I think the Born rule should follow as well. The basic idea is to not take the Hilbert state space as an axiomatic starting point. Instead the Hilbert space, and the evolution of states within it, should be emergent from a more fundamental abstraction.

Now that hasn't happened yet, and maybe it will fail, but if it works, your points will be addressed.

In that sense, it's close to what Hurkyl said about QM being the least wrong description. It would describe not what WILL happen (because this is never known); it describes, by a principle of minimum speculation, what is expected to happen, given the current state and state space structures.

I've decomposed my vision of this reconstruction into metaphorically named steps, two of which are:

1. The logic of guessing
2. The logic of correction

It also aims to contain a reconstruction of probability theory itself, from a hypothetical inside view. The probability spaces are constrained by the complexity in this picture. This implies saying goodbye to the continuum as fundamental.

It also suggests that the Born rule and the Feynman action (transition probability) are unified. Because in a model of evolving law, states and laws are treated on a somewhat similar footing; the only difference is their inertia. Law appears not to change because it's encoded with larger inertia.

This is what I mean when I say my "interpretation" doesn't adhere to the big camps, and it demands a reformulation of QM. I don't see how plain re-interpretations are going to make any difference.

/Fredrik
 
  • #121
Fredrik said:
By the way, if you'd like to see a very different argument that arrives at the same conclusion (that it doesn't make sense to think of a wavefunction as describing the system), expressed in a way that sounds very different, check out sections 9.2-9.3 in Ballentine. I just got my copy yesterday, so I hadn't read those sections when this discussion started.

Thanks, I skimmed those sections. But as expected I do not share the reasoning outlined by the author of that book. His analysis of the problems is subject to the criticism I have tried to convey several times. I'm not sure how to make it clearer; maybe I need to put more thought into my objection to convey it.

Relative to my perspective, he is indeed mixing the views, in order to produce his contradiction. He also seems to have an odd view of the collapse.

I also have a strong feeling that he has a strange interpretation of "description of a system". To me it's obvious that a "description of a system" is NOT a property of the system alone; it is as much a property of the context of the description (i.e. the observer).

/Fredrik
 
  • #122
I'm not sure if it's meaningful to continue, but since these threads all seem to have the function of comparing different views, I'll add some more of my view, which I understand is one of the more "solipsist" ones represented on here, at least :)

I have tried to explain why this doesn't lead to logical contradictions - it leads to interactions, which ultimately manifest the selective pressure on the evolution of observers. If you think observers are strange, just think of matter, if that makes more sense.

I think I see your point now, but I still don't agree with it.

There are several points in the abstractions used by Ballentine I differ with as well, so I'm not sure where to start.

I'll start with a comment on what you said before.

Fredrik said:
Even if the "collapse" in the frog's view is only approximate, we still have two rules describing a time evolution. Think of rule 1 as describing the time evolution of every part of your body, and and rule 2 as describing the time evolution of your feet when your feet interacts with the other parts of your body.

In general about "different time evolutions".

What exactly is TIME? We can't avoid that question here.

My point is that IF there are different time evolutions, they are the expected evolutions relative to different views, and the parameterization of time in a given view is, as I see it, only a parameterisation of the expected probabilistic evolution, where "probabilistic" refers to an inside-constructed combinatorial system of microstructures, NOT a flat time history. The combinatorial system I envision relates to the actual subjective proper time history as a compressed data file relates to the raw time history data. But the choice of compression algorithm is what is subject to evolution.

So it is not really a _logical contradiction_. It simply means that the two views (the two observers) will have actions that are not in line with each other. So what does this mean? In my view this means there will be interactions between the views.

The result of this interaction, which can be seen as a negotiation process, is that there will be an EMERGENT consistency between the two views (an equilibration).

From my point of view, your analysis seems to jump right into the assumption that this equilibrium must always be in place. This is, from my view, a flaw in the above. I think of the consistency you seek as emergent; you think of it as a hard objective constraint.

I am NOT suggesting that inconsistent views don't matter; I am just saying that they are not a logical contradiction, as you seem to suggest. I am suggesting that instead they imply a physical interaction between the views. This will cause a mutual negotiation between the views, which could have several outcomes. One view could be disintegrated, or both views could be adjusted for a mutual agreement.

Fredrik said:
If it's not possible to derive rule 2 from rule 1, then rule 2 contradicts rule 1. For example, your entire body including your feet goes to France. A logical consequence of that is that your feet are in France. If rule 2 says your feet are in Finland, we have a contradiction.

There is NO inconsistency in the state of information being such that my feet expect to go to France and my body expects to go to Finland, because this does not describe what WILL happen. It only describes the conditional expectation of what will happen (in my view, that is).

So long before my body actually goes to Finland and my feet to France, the "interaction" I talk about either releases my feet from my body or, more likely, deforms the expectations of both views along the path, making them all end up in Berlin.

But what I suggest implies that the state spaces are subject to dynamics as well.

In terms of the macroscopic superposition, I could put this differently. The imagined thought experiment, starting from initial conditions and hypothetical time evolutions, is simply *unlikely* to proceed to its final state. It suggests that the macroscopic superposition, while logically possible, is excessively unlikely; that is why we don't see it. The reason would be that it would be hard to keep information from leaking out via the environment.

Anyway, I think the focus of Ballentine is not constructive. I do not see how this view will help solve the real problems, such as quantum gravity. In my view the motivation for the interpretational issues is that they appear naturally when you ponder what a measurement theory would be like in a more general context, where you simply don't have the massive references of classical observers OR science labs where you can repeat the same experiment as many times as you like. These circumstances are not present in the general case.

One problem is instead what the heck the physical basis of probability and this ensemble really is (I try to address this; Ballentine uses it as if it were obvious), when it should be (to me at least) obvious why the simple frequentist interpretation doesn't work. Simple Bayesian probability is also problematic, since it uses a bird's view of the sample space to construct the conditional probabilities. Instead I think the formalism must be emergent from an incomplete inside view.

Most issues can be seen already in the premises. Hilbert spaces and Hamiltonian time evolutions are not innocent starting points. That is massive baggage. I think the best way to see my objections is via a black box argument.

Suppose you have a black box and want to learn to predict it, because the black box competes with you (it takes your coffee in the morning, for example). One cannot just pull an internal structure and a Hilbert space for this black box out of nowhere. That is for you to infer. The only way to start is with your interface to the box. But you don't have a definite communication channel; you just see one end of the channel, and have no clue what's on the other end.

Also picture that during this inference, the box keeps changing. So you never ever get enough statistics to justify the standard statistical reasoning. You somehow need a new strategy.
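A toy illustration of that last point (the drifting source and the forgetting factor below are invented for the example): when the box's hidden bias keeps changing, a frequency estimate built from the whole history converges to the wrong answer, while an estimator that discounts old evidence can still track it.

Code:
import numpy as np

rng = np.random.default_rng(1)

# A "black box" emitting 0/1 with a hidden bias that drifts over time.
T = 5000
bias = 0.5 + 0.4 * np.sin(np.linspace(0, 5.5 * np.pi, T))
data = (rng.random(T) < bias).astype(float)

# Plain relative frequency over the whole history...
plain = data.mean()

# ...versus an exponentially forgetting average (discounts old evidence).
lam, ewma = 0.99, 0.5
for x in data:
    ewma = lam * ewma + (1 - lam) * x

print(f"true final bias {bias[-1]:.2f}  plain frequency {plain:.2f}  forgetting {ewma:.2f}")

The forgetting estimator is of course just one ad hoc strategy; the point of the post is that some such strategy is forced on you once the statistics never stabilise.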

/Fredrik
 
Last edited:
  • #123
Fra said:
Ok, now I see what you mean. I thought the basic meaning of observer was clear, but maybe not.
...
* The inside view
- the view of the universe that an inside observer has. This VIEW defines the observer.
...
* external view
This is the view where ONE observer ponders that parts of its own environment can be thought of as separate observers that are mutually interacting.
...
In this view, the notion of observer is fairly unclear. But my point is that this is not a big problem. It is only a problem for those who can't let go of certain realist ideals.

I am sorry, but I'm no closer to understanding what an observer means in your theory. In a previous post you said that an observer could be an atom, for example. Then I asked you whether you have redefined "observer" to mean any system or not. So, I will repeat the question. Do you call any particle/group of particles an "observer"? Does an entangled pair of photons qualify as an "observer"? Give me some clues about the properties an observer should have, so I can understand how this theory is going to be applied to experimental data. I have no clue, for example, what "an inside view" of an atom is supposed to be, and in what sense we can say an atom has something of this sort.

Fra said:
I didn't say I think there is no law; what I mean is that IN GENERAL there is no objective law which we can be sure all observers agree upon.

Instead, I am suggesting that objective LAW is emergent.
...
I'm suggesting that you can LEARN and improve without having fixed rules for learning, because the whole point is that you do not only learn as per fixed rules, you even learn the learning rules. The context is evolution.

The evolutionary context is IMO the best way to see law. Whether these observed laws are the same as some "real laws" is something to which nature is indifferent.

Because you have an analogy with the theory of evolution, I want to point out some of the necessary preconditions for such an evolution to take place.

There is certainly something that changes (that's why we call it "evolution"), and this "something" is the molecular structure of DNA. This in turn changes the chances of survival/multiplication of the "DNA owner", and those structures that are successful remain; the others disappear. The life-forms are able to learn to some degree, depending on their brain.

I don't think that evolution would be possible if the laws themselves (as opposed to a certain molecular configuration) were to change. How could an animal adapt to an ever-changing environment, how could selection lead to an improvement if, say, the chemistry of DNA changed over time? A successful DNA today would be crap tomorrow. The "rules" of the game must be stable, at least for the period of time the evolution is supposed to take place in. Those rules are, IMHO, required for every theory to make sense and be useful at all. Do you have such laws in your theory?

You say that the laws are emergent; however, I don't see how, from an absence of any law at the fundamental level, anything other than statistical noise could emerge.
 
  • #124
Fra, you are no longer talking about quantum mechanics. You see that, right? It's enough to deny one of the axioms of QM to make sure that what we're talking about isn't QM, and you're denying both of the time evolution axioms. (Yes, to say that they define the start of a negotiation is to say that they are false.)

I haven't been able to make sense of what you're saying about the observers' expectations, so I can only say that I hope you understand that if a "theory" makes two different predictions about the result of the same experiment, then it isn't really a theory.
 
  • #125
Fredrik said:
Fra, you are no longer talking about quantum mechanics. You see that, right?

Yes definitely. In my first post in this thread I said

"My view is more than interpretations, it suggest a reformulation of QM, where QM is emergent."

This, if you read it as intended, does not contradict most of the predictions of quantum mechanics and QFT that we know are successful (that would be foolish and ignorant, and I'm not doing that). Instead, I have an explanation for this (QM is emergent), but in a more general setting, and I think QG is one such domain.

But yes, I don't accept quantum mechanics as it stands as a foundation that should be kept unquestioned when pondering some of the open problems in physics.

That's my "interpretation of QM". I don't see that it's worse than any other :)

Fredrik said:
I can only say that I hope you understand that if a "theory" makes two different predictions about the result of the same experiment, then it isn't really a theory.

Correct. But I am trying to advocate a more delicate view of theory. It's clear that I fail to convey it. But then, this is why I am still working on this. I need to make a lot more progress and tune the arguments. This is why I call this my interpretation, but it really is an ambitious attempt at reconstructing a measurement theory.

/Fredrik
 
  • #126
I tried to reply with some background motivation, since I think a simple answer can be misinterpreted. But I'll try to give more compact answers.

ueit said:
I am sorry, but I'm no closer to understanding what an observer means in your theory. In a previous post you said that an observer could be an atom, for example. Then I asked you whether you have redefined "observer" to mean any system or not. So, I will repeat the question. Do you call any particle/group of particles an "observer"? Does an entangled pair of photons qualify as an "observer"?

Yes, any subsystem of the universe can be an observer, observing the rest.

But the most important thing is that the inside and the external pictures are different. This is why the external picture of an observer is never exact or static. It is generally always subject to uncertainty. But not simply the type of uncertainty in the sense of a state vector being smeared out over part of a state space. It's more dramatic, because the state space itself is also uncertain.

As to the obvious question: why not just replace this silly hierarchy of spaces of spaces of spaces with a larger master state space? Well, because a finite observer cannot RELATE to such complexity.

Another problem with that view is that it leads to a terrible initial value problem. In my view the initial value problem goes away because, from the inside view, the state spaces are inflating.

I understand that this is unclear if you are coming from a different perspective.

However, just because any subsystem sort of qualifies as a potential observer doesn't mean that each subsystem is distinguishable as a coherent system from a given view.

If you think I'm just reinventing words here, then I have failed to convey several points. But I'll leave this as is.

ueit said:
I have no clue, for example, what "an inside view" of an atom is supposed to be, and in what sense we can say an atom has something of this sort.

Given that none of what I tried to convey (and I admit it's strange and radical) seems to make sense to anyone on here, except possibly a few of the philosophers, I am not sure I can explain this at the moment.

It's fairly clear to me, but it's not yet at the state of a complete theory; it's only fragments I'm still struggling to put together. Fortunately, and oddly, I have strong confidence in this idea.

But the clues are these:

One "assumption" if you wish to call it that, is that each observer can only hold finite information. This also means any subsystme of the universe can only hold finite info.

This, in combination with all the other things I tried to convey about actions and information states, implies that the information capacity of an observer constrains the complexity and variety of its interactions.

In particular, I envision that the IMAGE of the remainder of the universe is constrained to a finite "screen". The degrees of freedom of this screen are related to the observer's complexity.

Anyway, the actions can be built combinatorially from the discrete observer structure.

But on top of this, things get worse. The degrees of freedom are not fixed. An observer can "conquer", or come to take control of, degrees of freedom in its environment, and thus "grow" its own complexity. The reverse process of losing degrees of freedom is also possible.

This latter thing is, in my view, closely related to the origin of inertia and mass. Gravity-like phenomena also emerge in this view, because the degrees of freedom in the environment self-organise and form a web of interconnected communicating systems. This should also explain properties of spacetime.

In this picture gravity ultimately originates from two "inertial systems" that are remotely communicating; the result is that the two systems slowly get closer. The distance measure is related to information-geometric measures.

All these things are related in my view. This is why I really see this as a proper reconstruction, because I do not start with 4D spacetime. I do not start with the known forces.

I start with an abstract concept of communicating structures, and try to convey, to the extent possible at this immature level, how interactions (laws, symmetries) are emergent.

This may seem to allow infinite possibilities, but I think not. All systems are finite, and if you start the evolution at minimum complexity, the possible structures and interactions are strongly limited. In fact, unification of interactions is a must, because there is no complexity around to distinguish a diversity of interactions; there is only one force.

ueit said:
You say that the laws are emergent; however, I don't see how, from an absence of any law at the fundamental level, anything other than statistical noise could emerge.

I do.

See above. The major trick is the assumption of finite information capacity. I'm sorry, but the idea is apparently hard to describe for some reason. Self-organisation and selection should make sure that a large complexity with only uniform noise is very unlikely. Self-organisation and the emergence of structure and law in fact stabilise the universe.

This is no longer QM. But as I mentioned, I think that eventually we will understand why the QM structure is the way it is, but in order to do that I think we need to see the entire picture. I.e., all interactions, space, time and matter.

/Fredrik
 
  • #127
I hope no one thinks that the idea of finite information capacity suggests the universe is like a game of life; whoever thinks so has missed the inside-vs-external-view point. The degrees of freedom of the universe cannot be described in a diff-invariant or observer-invariant way. It's more complicated. The complication arises because any statement about the universe relates to an inside view. There is no true god's view or bird's view in my vision.

I agree that it's a qualified headache to make sense out of this. But I think of the suggestions of this vision as probable facts about nature, without trying to simplify. It's my problem to make sense out of it. I cannot hide from complications just because they seem inconsistent. More probably, something is wrong with my understanding.

/Fredrik
 
  • #128
The finite information capacity is already a prediction of several different partially successful attempts to combine QM and GR. The maximum information that can be stored in a region of space is proportional to the surface area of the boundary of that region. Intuitively, it should be proportional to the volume, but it isn't. Think about making a stack of RAM memory cards for example. At first the information content grows as the volume of the stack, but eventually the stack would collapse to a black hole. :smile: Now the area of the event horizon is proportional to the amount of information that has fallen into it.
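As a rough order-of-magnitude sketch of that bound (using the standard Bekenstein-Hawking formula S = A / (4 l_p^2), here converted to bits):

Code:
import math

# Holographic bound: the maximum entropy of a region scales with the AREA
# of its boundary, S = A / (4 * l_p^2) in nats, with l_p the Planck length.
G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8   # SI values
l_p2 = G * hbar / c**3                       # Planck length squared, ~2.6e-70 m^2

def max_bits(radius_m: float) -> float:
    area = 4 * math.pi * radius_m**2         # area of the bounding sphere
    return area / (4 * l_p2 * math.log(2))   # convert nats to bits

# Doubling the radius multiplies the bound by 4 (area), not 8 (volume).
print(f"{max_bits(1.0):.2e} bits vs {max_bits(2.0):.2e} bits")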
 
  • #129
Fredrik said:
The finite information capacity is already a prediction of several different partially successful attempts to combine QM and GR. The maximum information that can be stored in a region of space is proportional to the surface area of the boundary of that region.

Yes, and what I describe is related to this. But so far there is, IMHO, no fully satisfactory model that implements all these things. There are several research programs, and I have tried to skim what others have done; I share fragments of ideas with a number of people working on different ideas:

- C. Rovelli (the original reasoning behind relational QM is good, but not the finish)
- Smolin (evolving law and CNS)
- Ariel Caticha (physical law as rules of inference; he tries to "derive" general relativity from the rules of inference, applied to physical systems rather than human brains)
- Olaf Dreyer (internal relativity; relating to the "inside view" I talk about)
- Zurek (what the observer knows is inseparable from what the observer IS; however, I don't fully belong to Zurek's decoherence camp; in my view that's part of the story, not all of it), etc.
- Penrose also has some interesting thoughts on combinatorial approaches, but he is too much of a realist for my taste.

Try to find some common denominators above, and it's getting closer to what I'm talking about.

/Fredrik
 
  • #130
Needless to say, the mathematical model of this is still in progress, but there are several ways to associate results from some of the semi-classical QG programs with the ideas I envision, which might provide an angle that makes the crazy stuff more appealing.

One thing is the randomness of black hole radiation, and the problem of information loss. In my picture, I expect that eventually the black hole radiation contains no information from the point of view of the black hole (here the black hole takes the role of the observer). However, it can still contain information relative to an outside observer. It has to do with relative complexity.

A simple observer cannot decode the information from a sufficiently complex source, so for it the source contains no information. It is observed to be just noise.

It would also be in line with the idea that a small black hole would radiate in a "less random" fashion as judged by an outside observer. The information content in the radiation is not a property of the radiation itself; it depends on the observer.

Particularly interesting would be to ponder the exact interaction pattern and stability of very small black holes. Quite possibly these, if they are stable by some yet unknown quantum rules (just as quantum theory explains the stability of atoms), would not actually be perceived as black holes by a large outside observer, but rather as particles. It's not totally impossible that the subatomic forces are quantum versions of black hole interactions. The point would be that the continuum approximation from GR would, first of all, be totally invalid. Most probably there are no physical singularities. But all this no one knows; all we have now are various semi-classical attempts to probe this domain.

/Fredrik
 