Impact of Gödel's incompleteness theorems on a TOE

In summary, the conversation discusses the potential impact of Gödel's theorem on a possible Theory of Everything (TOE), a mathematical framework that aims to unify all physical laws. Some argue that Gödel's theorem, which states that any consistent axiomatic system powerful enough to express arithmetic is incomplete, could pose a challenge to the existence of a TOE. However, others point out that physics is not an axiomatic system and that Gödel's theorem only applies to certain types of axiomatic systems. Additionally, even if a TOE could be formulated as an axiomatic system, it may still be equiconsistent with other well-known systems, and its self-consistency would not necessarily guarantee its accuracy. Ultimately, the conversation concludes that Gödel
  • #246
This thread appears to be veering into metaphysics as opposed to physics. Some problems as of late:

1. As far as I know, the Copenhagen and many-worlds interpretations will always yield the same results. Arguing that one is right and one is wrong is taking this thread off-track. Besides, a TOE, if one is ever developed, will almost certainly say that both are wrong.

2. There is a continued misunderstanding / misrepresentation of what a TOE would entail. A TOE will describe the particle zoo and all the ways they can interact. Period. As far as physicists are concerned, the production rules of Conway's Game of Life are a "theory of everything" for that game. The Peano axioms, including induction, similarly are the "theory of everything" for the natural numbers. If the physical TOE is incomplete in the sense of Gödel's incompleteness theorems, so what? Physicists wouldn't care. Their TOE would still be everything that physicists mean by a TOE. You are dealing in metaphysics, not physics.
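To make the analogy concrete, here is a minimal sketch of the complete rule set of Conway's Game of Life (purely illustrative; the coordinates and the blinker example are arbitrary choices). These few lines are that toy universe's entire "theory of everything", even though they say nothing about which particular patterns will ever arise.

```python
# Complete rule set of Conway's Game of Life (B3/S23), as a minimal sketch.
from collections import Counter

def step(live_cells):
    """Advance one generation; `live_cells` is a set of (x, y) tuples."""
    # Count live neighbours for every cell adjacent to at least one live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker" oscillates with period 2 under these rules.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # True
```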
 
  • #247
my, my, y'all seem to be a bunch of very smart peeps. by comparison, i feel rather stupid. but be that as it may, i do have thoughts about this particular subject (odd isn't it? as i am neither a physicist nor a mathematician).

at the risk of showing just how stupid i really am, i feel it necessary to point out something is happening. unless we're sharing some vivid mass hallucination (a possibility, i suppose, but a faint one), there really is a universe out there, doing its thing. and it appears that we understand "it" better than we used to.

a long time ago, when i was in high-school, we were told that F = ma. now, without being pedantic about this, just the existence of that equation means we need the notion of a multiplicative structure to even make heads or tails out of it. in fact, if one regards "a" as a vector-valued function of time (not so unusual, or so i hear), then boom! you're already into the world of 4-dimensional real vector spaces. i hear hilbert spaces are popular with quantum physicists. even if these are crude models of reality, they ARE models of reality. we expect something (knowledge of some sort) from them.
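just to spell out, purely as an illustration (my notation, nothing deeper), how much structure that one little equation already demands:

```latex
% Newton's second law, read as a statement about a vector-valued trajectory:
% it already needs scalar multiplication, vector addition, and differentiation.
\mathbf{F}(t) = m\,\mathbf{a}(t) = m\,\frac{\mathrm{d}^2 \mathbf{x}(t)}{\mathrm{d}t^2},
\qquad \mathbf{x}\colon \mathbb{R} \to \mathbb{R}^3,
% and the graph \{(t, \mathbf{x}(t))\} sits naturally in \mathbb{R}^4.
```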

my point is that mathematics, and the consistency of mathematical theories, has a direct impact on how we communicate those theories. the "TOEs", even if they are just symmetry groups (or n-branes, or whatever) to explain particle interactions, aren't abstract curiosities, but are intended to communicate real information about the world as we think it actually may be. as long as we use mathematical theories as languages to describe physical systems, mathematical theorems (in those theories) imply some actual knowledge about the real world. in just such a way, an undecidable statement, in a mathematical theory we take to be an accurate translation of the way the universe works, filters down to some kind of existential statement about reality.

in other words, if math truly is the appropriate language for describing science, then godel's theorem strongly suggests there are real facts about the universe we can never know. perhaps these facts aren't interesting; that's a subjective call. i find it a bit disturbing to contemplate that we would even desire a model of the world that accurately predicts all the information we want, over long periods of time.

one hopes, though i must profess this is more a tenet of faith with me, that certain problems remain intractable, that subatomic interactions (or perhaps super-galactic ones) are complicated enough, that so many possibilities remain, that we never know our future. i hope that even if mr. godel's theorem isn't the relevant one, some other constraint stops us from fully understanding "it all".

several people have expressed the opinion that such philosophical concerns are not any physicist's primary concern. perhaps not. and yet, i find it intriguing that the theory of relativity, to pick a random example, was born out of just such "fruitless" speculation...what kind of structure fits if things are actually like this, instead of that?

no one denies nowadays the usefulness of high-speed computers in research, and yet i find it surprising that so many people consider very basic questions about the limits of computability to be irrelevant. the limits of our mathematical theories ought to be of some concern as well, unless we wish to take the accumulated knowledge of the last 500 years, flush it down, and start over.

complete theories do exist, and it is possible that some axiomatic treatment of physics within such a theory exists, but i doubt it. no one has come up with a logical system that can do what the real numbers do, but without all the fuss. and i'm fairly certain that the real numbers are categorical: if you have a system with their properties, you might as well call it the real numbers too, and it automatically inherits a natural number object as a subclass, so it will be (mathematically) incomplete.

today's abstract mathematical curiosity may well be tomorrow's pressing concern. (some) physicists seem (over the years) to have acquired the bad habit of quietly co-opting the utility of abstraction, while claiming to do the opposite.

i mean no disrespect to any of the posters here. if nothing else, you've all given me several hours of enjoyable reading, and much food for thought.
 
  • #248
D H said:
2. ... If the physical TOE is incomplete in the sense of Gödel's incompleteness theorems, so what? Physicists wouldn't care. Their TOE would still be everything that physicists mean by a TOE. You are dealing in metaphysics, not physics.

I think that's the point of contention. If a physical theory IS incomplete, then by definition it does NOT describe ALL possible physical events, right?
 
  • #249
friend said:
I think that's the point of contention. If a physical theory IS incomplete, then by definition it does NOT describe ALL possible physical events, right?
Nope. All possible physical events would still be consequences of a true TOE. It's just that we would be doomed to never know all of the consequences of the theory, in that whatever list of proven-true or proven-false statements we manage to come up with, it is guaranteed that there are still more true or false statements out there that we have yet to prove.
 
  • #250
D H said:
1. As far as I know, the Copenhagen and many-worlds interpretations will always yield the same results. Arguing that one is right and one is wrong is taking this thread off-track. Besides, a TOE, if one is ever developed, will almost certainly say that both are wrong.
This is false. The Copenhagen interpretation makes no statement about the nature of wave function collapse. So whenever you are dealing with an experimental situation near the boundary of collapse, the Copenhagen interpretation provides no results at all, while the many worlds interpretation makes a very clear prediction for the result (one which has so far held up against experiment).

D H said:
2. There is a continued misunderstanding / misrepresentation of what a TOE would entail. A TOE will describe the particle zoo and all the ways they can interact. Period. As far as physicists are concerned, the production rules of Conway's Game of Life are a "theory of everything" for that game. The Peano axioms, including induction, similarly are the "theory of everything" for the natural numbers. If the physical TOE is incomplete in the sense of Gödel's incompleteness theorems, so what? Physicists wouldn't care. Their TOE would still be everything that physicists mean by a TOE. You are dealing in metaphysics, not physics.
This I agree with. Except for the specious "metaphysics not physics" claim.
 
  • #251
Chalnoth said:
Nope. All possible physical events would still be consequences of a true TOE. It's just that we would be doomed to never know all of the consequences of the theory, in that whatever list of proven-true or proven-false statements we manage to come up with, it is guaranteed that there are still more true or false statements out there that we have yet to prove.

I think this is true even of deductive logic: you cannot in practice write out every possible statement, even though any statement that is written out can be proven true or false. And we know deductive logic is complete.

As I understand it, incomplete means that there are true statements that are inherently unprovable from the listed axioms of the system. So if a system of physical law is incomplete, then there are events that do occur but are not describable/reducible/provable from that list of physical laws. So when I say, "does NOT describe ALL possible physical events", I mean is not provable from the axiomatized list of physical laws. So I still stand by my prior statement.
 
  • #252
friend said:
I think this is true even of deductive logic: you cannot in practice write out every possible statement, even though any statement that is written out can be proven true or false. And we know deductive logic is complete.
That's not quite the same thing. Deductive logic is complete in the sense that it is possible to write down every possible abstract form that comprises a true statement within the theory. Obviously there are an infinite number of ways of applying a particular abstract form, but there are only a finite number of abstract forms that are also true. The finite number of abstract forms within deductive logic is a consequence of its completeness.

friend said:
As I understand it, incomplete means that there are true statements that are inherently unprovable from the listed axioms of the system. So if a system of physical law is incomplete, then there are events that do occur but are not describable/reducible/provable from that list of physical laws. So when I say, "does NOT describe ALL possible physical events", I mean is not provable from the axiomatized list of physical laws. So I still stand by my prior statement.
I suppose that's correct.
 
  • #253
Chalnoth said:
Let me put it this way: if it is possible to describe reality as a set of distinct but interrelated physical systems, then it is also possible to describe reality as one physical system. If, in one description of reality, some physical law changes with time, then in another description the physical laws remain unchanged while the apparent change is explained by the dynamics of the unchanging theory.

Basically, if there is a way that reality behaves, then there is a way to accurately describe that behavior. Because of this, it must be possible to narrow it all down to one single self-consistent structure (though that structure may be extremely complex).

I understand what you say, but I actually still disagree.

My point is that not all changes are decidable. You assume that all changes are predictable in the deductive sense, and thus can be expected. I argue that the physical limits of encoding and computing expectations make this impossible.

Chalnoth said:
Let me put it this way: if it is possible to describe reality as a set of distinct but interrelated physical systems, then it is also possible to describe reality as one physical system.

This is true, but I'm trying to explicitly acknowledge that any inference and expectation is encoded by a physical system (observer), which means that any expectation only contains statements about its own observable neighbourhood, and moreover only a PART of it, as all the information about the environment cannot possibly be encoded by a finite observer.

Chalnoth said:
If, in one description of reality, some physical law changes with time, then in another description the physical laws remain unchanged while the apparent change is explained by the dynamics of the unchanging theory.

Again, I partially agree with this. What you describe is part of what happens in my view as well, but you assume that there can be a localized expectation of ALL changes of the future. I don't think so. What you say only makes perfect sense when we study small subsystems, where the experiment can be repeated over and over again and where we have the capacity to store all the data.

What you say is effectively true for particle physics, because there this subsystem condition applies. But it fails for cosmological models, and it would also fail for an inside view of particle physics where one tries to "scale" the theory down to, say, a proton. This, IMO, also becomes related to the lack of unification.

Some parts of my arguments are also in these talks:

- http://pirsa.org/08100049/ "On the reality of time and the evolution of laws" by Smolin, except I think Smolin is not radical enough

In it, Smolin talks about EVOLVING law in the Darwinian sense, and a guy in the audience asks, much like you would: OK, if the law evolves, then isn't there obviously a meta-law that describes how? Smolin answers that he doesn't know, but I think the answer must be no. And that's because such a law would not be decidable in general.

But it's still true, in a constrained sense, that what is undecidable to one observer can be decidable to another (usually more complex) observer. This is how it works in particle physics. The observer is essentially the entire lab frame, and it's extremely complex and effectively "monitors" the entire environment of the volume where things happen.

So I think your suggestion is partly right, but it can never be complete. And I think this is an important point.

- http://pirsa.org/10050053/, "Laws and time in cosmology", by Unger

These guys talk about cosmological laws, but if you combine this with the search for a theory of how laws scale (like a replacement of RG), then it has implications for particle physics as well. There, though, the implication isn't that laws evolve from our perspective (they don't, at least not effectively so); the evolution is relative to the particles, and understanding this might help the unification program. (Or so I think, but it's just my opinion, of course.)

Chalnoth said:
Because of this, it must be possible to narrow it all down to one single self-consistent structure (though that structure may be extremely complex).

Ok, this is a good point. It's actually because it's so extremely complex that, at the end of the day, it in fact ISN'T possible for a finite observer. ALSO, what you suggest seems to work only in retrospect, i.e. "record history" if it fits into your memory and call the recorded pattern a law; if the future violates that pattern, record the further future and "extend the law". I think it should be clear why such an approach is bound to be sterile.

/Fredrik
 
  • #254
Chalnoth said:
But no, finding a theory of everything would not be a discovery that we are in equilibrium. The two are completely and utterly different things. I can make neither heads nor tails of what you mean by equilibrium in your post, but it clearly has nothing whatsoever to do with the thermodynamic meaning.

We can drop that discussion, as it puts too many focuses in one thread, but what I mean is equilibration between interacting systems whose actions are ruled by expectations following from expected laws. When two such systems have different expectations, there is a conflict.

Unger, who gives one of those talks, works in social theory, and there the analogies are clear. Social laws are negotiated laws. You can break them, but at a price. Also, the laws are always evolving, but not in a way that is predictable to any player. Whether someone ("a god or so") outside the game could "in principle" predict it is in fact irrelevant to the game itself.

/Fredrik
 
  • #255
Fra said:
My point is that not all changes are decidable. You assume that all changes are predictable in the deductive sense, and thus can be expected. I argue that the physical limits of encoding and computing expectations make this impossible.
Possibly. As I argued previously, I strongly suspect that whatever fundamental theory there is, that fundamental theory is likely to be computable. If the Church-Turing thesis is correct, then setting up a system and later measuring the result is a form of computation identical to Turing computation, which would mean that the fundamental theory must be computable in the Turing sense. From this, if we had the fundamental theory, and we had a complete description of the system and everything it interacts with, then, given sufficient computer power, we could compute how the system changes in time.

In this sense, all changes would be perfectly predictable and decidable. However, in practice we could never determine the initial state of the system in question perfectly, so there would always be room for error.
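As a toy sketch of what I mean (the oscillator law and the numbers are just illustrative assumptions, not anyone's actual fundamental theory): with a fixed, known law, prediction is just iteration from the initial state, and the only practical error comes from imperfect knowledge of that state.

```python
# Toy illustration: a fixed evolution law plus an initial state determines the future.
def evolve(state, dt=0.01, steps=1000, omega=1.0):
    """Integrate a harmonic oscillator (dx/dt = v, dv/dt = -omega**2 * x)."""
    x, v = state
    for _ in range(steps):
        v -= omega ** 2 * x * dt  # semi-implicit (symplectic) Euler step
        x += v * dt
    return x, v

exact = evolve((1.0, 0.0))
perturbed = evolve((1.0 + 1e-6, 0.0))   # imperfect knowledge of the initial state
print(abs(exact[0] - perturbed[0]))     # the only residual error in this scheme
```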
 
  • #256
Chalnoth said:
In this sense, all changes would be perfectly predictable and decidable. However, in practice we could never determine the initial state of the system in question perfectly, so there would always be room for error.

You describe here the current scheme of physics, which Smolin in that talk referred to as "the Newtonian scheme". That doesn't mean it's classical, because even QM and GR adhere to this scheme.

The scheme is: initial or boundary conditions + deductive system => predictions. ALL uncertainty is relegated to the initial conditions. What I suggest is that we generally have:

initial conditions + inference system (not deductive but inductive) -> expectation. And the inference system itself also evolves, just like a learning algorithm. Therefore there is uncertainty not only in the premises, but in the inferences themselves (the deductive system).
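A rough toy sketch of the distinction (the linear rule and the numbers are just illustrative assumptions): in the first scheme the rule is fixed and only the data are uncertain; in the second, the rule itself is revised by its own errors, like a learning algorithm.

```python
# Fixed deductive rule versus an inference rule that is itself revised by evidence.

def fixed_rule(x):
    """"Newtonian scheme": the law is given once and for all."""
    return 2.0 * x

class EvolvingRule:
    """Inductive scheme: the "law" is a running estimate, corrected by its own errors."""
    def __init__(self, slope=0.0, rate=0.1):
        self.slope = slope
        self.rate = rate

    def predict(self, x):
        return self.slope * x

    def update(self, x, observed):
        error = observed - self.predict(x)
        self.slope += self.rate * error * x  # simple gradient-style correction

learner = EvolvingRule()
for x, y in [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]:  # noisy observations of y ~ 2x
    print(fixed_rule(x), learner.predict(x))
    learner.update(x, y)
```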

So I do not accept, or even find it plausible, that physical processes can be abstracted as perfect deductions where all uncertainty is relegated to a smearing over a predetermined large state space. This SCHEME is exactly what I think is not right, and it's this scheme that Smolin also attacks in his talk.

I also think the Turing definition of computable isn't the best one for physics. I do like the computational analogy, but computational efficiency is important too. What is computable given infinite time and infinite resources seems like a not very useful classification.

I see a few areas where more work is needed...

1. To try to see what physical constraints on the set of possible actions we can infer from considering only algorithms that are computable with a given efficiency and given resources, and to see how these constraints SCALE with the same.

2. To try to see how the encoding structure of an algorithm is revised in the light of evidence that suggests its expectations are off. Sometimes a state change isn't possible; sometimes the state space itself needs to deform as well, if there is no consistent revision possible within the given "axiomatic system". But to just keep expanding it, like a Gödel-style expansion, also doesn't work, because this entire process is bound to be constrained by current resources. So adding complexity requires us to remove some complexity elsewhere, unless we manage to increase the total complexity. I think this relates to the generation of mass.

3. Essentially, we are looking for how to abstract the optimal learning algorithm. But of course, by the same logic, no such thing is static, as it also keeps evolving. Here minor inconsistencies are just potential for improvement and development, and somehow I expect that the inconsistencies even direct the deformation required, so as to define some arrow of time. The arrow of time, or of computation, is such as to always decrease and resolve inconsistencies. But this is a dynamical thing; at any time, there are bound to exist inconsistencies.

/Fredrik
 
  • #257
That frame of mind is useful for nearly all of physics (nearly all of science, actually, because all we have today are effective theories). It is not, however, useful when considering a theory of everything. If we can narrow down possible theories of everything through demanding self-consistency (and perhaps computability) to the point that we can definitively determine which theory applies to our reality, then we can genuinely consider what you call the Newtonian scheme.
 
  • #258
Chalnoth said:
It is not, however, useful when considering a theory of everything

I think it remains to be seen which scheme scores :)

Chalnoth said:
to the point that we can definitively determine which theory applies to our reality

It's just that I do not see that this will ever happen. I don't think it's possible even in principle. At best, we can find an EXPECTATION that we THINK is reality, and as long as our expectations are consistently met, it's an effective model and corresponds to a kind of equilibrium, as I see it.

But anyway, I don't think the major quest is to characterize the utopia here; it's to try to find a rational way forward. At least my understanding is that applying the Newtonian scheme to a "TOE" (I think we all agree what the TOE is here: unification of all KNOWN interactions) consistently leads to absurdly large landscapes of various kinds. In an evolving model, the state space (even the theory landscape) is itself evolving, and is never larger than necessary for flexibility. Too much flexibility leads to detrimental responsiveness and too much complexity.

/Fredrik
 
  • #259
Did you check Smolin's and Unger's arguments against this "Newtonian scheme"?

I personally don't think their arguments are the best of their kind, but the talks contain some good grains, are accessible online, and are worth listening to.

Like I mentioned before, I see two main objections to this scheme:

1. It makes sense only if you have unlimited computational resources and computing time (something that clearly is NOT a sensible premise IMHO, it may do for philosophical or logic papers, but not for physics).

2. Even given infinite computation time, the result would be infinitely complex, and there would be no way to physically encode this scheme. So the scheme is bound to be truncated. This means that the "optimal algorithm" itself gets truncated, and then it's not necessarily optimal anymore! The optimization now has the constraints of finite complexity of the result and a certain efficiency of computation - this is exactly why we need to understand how "optimal inference" needs to be "scaled" between different observers. This will not be a simple deterministic scaling, since parts of it contain negotiations and a time dimension: due to the inertia of opinions, negotiations also take time (processing time). I think we need both decidable expectations and Darwin-style evolution components to understand this.

There is one thing I think is important that Smolin does not even mention. Smolin mostly refers to the obvious point that physical law as known by HUMAN SCIENCE has evolved. This is true, but quite obvious. I think the interesting perspective comes when you consider how interactions evolve from the point of view of a general system. This has impacts on unification, emergence and the breaking of symmetry in physics. This is where Smolin's arguments are the weakest... but then this is new thinking... I think there is a lot more development to expect here.

All this is what I was associating to when you said that wave function collapse is nonsense. I think it's because your analysis seems to work in the Newtonian mode. I still insist that there are other quite promising (of course I think far more promising :) ways to view that.

/Fredrik
 
Last edited:
  • #260
Fra said:
1. It makes sense only if you have unlimited computational resources and computing time (something that clearly is NOT a sensible premise IMHO, it may do for philosophical or logic papers, but not for physics).
Nah. It just means that the fundamental theory (whatever it may be) is unlikely to be useful for doing most calculations. Just to name an example, we still routinely use Newtonian physics even though we know it's wrong.

A more important consequence of discovering a fundamental theory would be deriving more general results from it, such as an effective theory of quantum gravity, or an effective theory of quantum electrodynamics that doesn't have the infinities of the current theory.

Fra said:
2. Even given infinite computation time, the result would be infinitely complex, and there would be no way to physically encode this scheme.
Very complex, possibly. Infinitely complex, certainly not. Unless you are actually talking about trying to do exact calculations with the wavefunction of the entire universe. But that is a fool's errand.

Fra said:
All this is what I was associating to when you said that wave function collapse is nonsense. I think it's because your analysis seems to work in the Newtonian mode. I still insist that there are other quite promising (of course I think far more promising :) ways to view that.
Huh? If your framework doesn't count a failure to describe a certain physical behavior as a strike against a physical theory when we have a competing theory that fully explains the physical behavior in question, then your framework is worth about as much as used toilet paper.
 
  • #261
Chalnoth said:
Huh? If your framework doesn't count a failure to describe a certain physical behavior as a strike against a physical theory when we have a competing theory that fully explains the physical behavior in question, then your framework is worth about as much as used toilet paper.

The collapse that is removed by decoherence-style approaches is simply that you consider a new LARGER system (and a DIFFERENT, larger observer) that consists of the original system + observer. I know of Zurek's papers etc. Zurek has some very good views, and that perspective is PART of the truth. But it is not enough.

The expected evolution of that system has no collapse, sure. This does not contradict that there are collapses in other views.

So if this is what you refer to as the solution, it's not a solution to the original problem. OTOH, I don't think the original "problem" IS a problem.

In particular, this prescription of considering a new larger system that incorporates the observer is subject to the issues we just discussed. It does indeed work! But only as long as we constrain ourselves to relatively small subsystems (where small means low complexity relative to the observing system).

/Fredrik
 
  • #262
Fra said:
The collapse that is removed by decoherence-style approaches is simply that you consider a new LARGER system (and a DIFFERENT, larger observer) that consists of the original system + observer. I know of Zurek's papers etc. Zurek has some very good views, and that perspective is PART of the truth. But it is not enough.

The expected evolution of that system has no collapse, sure. This does not contradict that there are collapses in other views.

So if this is what you refer to as the solution, it's not a solution to the original problem. OTOH, I don't think the original "problem" IS a problem.

In particular, this prescription of considering a new larger system that incorporates the observer is subject to the issues we just discussed. It does indeed work! But only as long as we constrain ourselves to relatively small subsystems (where small means low complexity relative to the observing system).

/Fredrik
I have no idea what you're trying to say.
 
  • #263
I think what we should exploit here is the fact that, from the inside perspective, there ARE collapses, and that this does have observable consequences for the behaviour of matter. I.e. it has observable consequences for other observers; this understanding will increase the predictive power, not decrease it. We're not giving anything up, as I see it, by considering the collapse, just acknowledging how things probably work, and also acknowledging that we keep learning ongoingly.

/Fredrik
 
  • #264
Chalnoth said:
I have no idea what you're trying to say.

Hmm... ok, maybe I jumped to conclusions. I was basing my response on what I thought you would say.

So maybe we should take a step back. What did you refer to with

"when we have a competing theory that fully explains the physical behavior in question"

I based my response on what I thought you meant, but maybe I was mistaken.

/Fredrik
 
  • #265
Have we talked about whether completeness equates to determinism? Is the TOE deterministic?
 
  • #266
We don't have a TOE, so who knows? Quantum mechanics isn't deterministic. Radioactive decay is, as far as we know, purely random. Even Newtonian mechanics isn't deterministic.
 
  • #267
D H said:
We don't have a TOE, so who knows? Quantum mechanics isn't deterministic. Radioactive decay is, as far as we know, purely random. Even Newtonian mechanics isn't deterministic.

So the question is whether indeterminism is proof of incompleteness.
 
  • #268
friend said:
So the question is whether indeterminism is proof of incompleteness.
Not at all, for many reasons. Since we don't have a TOE, it is a bit silly to ask whether a TOE is deterministic. Who knows -- it might come up with a deterministic (in the sense of quantum determinism) explanation for radioactive decay.

Secondly, lack of determinism does not mean "incomplete". They are completely separate concepts.

Thirdly, physicists do not care whether a TOE is deterministic or complete (complete in the sense of Gödel's incompleteness theorems). You continue to misrepresent what a TOE would be. A TOE would describe all interactions. Period. Nobody claims it will describe all outcomes.
 
  • #269
D H said:
Not at all, for many reasons. Since we don't have a TOE, it is a bit silly to ask whether a TOE is deterministic. Who knows -- it might come up with a deterministic (in the sense of quantum determinism) explanation for radioactive decay.
It's not silly to ask if a TOE is deterministic. It may be that this is one of the defining characteristics of the TOE so that this is how we know when we have achieved it.

D H said:
Secondly, lack of determinism does not mean "incomplete". They are completely separate concepts.
Do you expect me to take your word for it? Or do you have some reasoning for this statement? At this point I am not at all sure that determinism does not equate to completeness.

D H said:
Thirdly, physicists do not care whether a TOE is deterministic or complete (complete in the sense of Gödel's incompleteness theorems). You continue to misrepresent what a TOE would be. A TOE would describe all interactions. Period. Nobody claims it will describe all outcomes.
At this point I'm not representing anything. I'm only asking questions. And just exactly how would we know that we have described "ALL" interactions?
 
  • #270
Well, I think determinism is more likely linked to computability than anything else.
 
  • #271
Fra said:
Hmm... ok, maybe I jumped to conclusions. I was basing my response on what I thought you would say.

So maybe we take a step back. What did you refer to with

"when we have a competing theory that fully explains the physical behavior in question"

I based my response on what I thought you meant, but maybe I was mistaken.

/Fredrik
What I meant is that quantum decoherence fully explains the appearance of collapse, and reduces to the Copenhagen interpretation in the limit of complete decoherence. Thus the many worlds interpretation makes the same predictions as the Copenhagen interpretation in all experiments far from the boundary of collapse. But what's more, because the description of the appearance of collapse is exact, decoherence makes predictions about experiments at the boundary of collapse, while the Copenhagen interpretation does not.
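As a toy numerical sketch of what the decoherence statement amounts to (the equal amplitudes and the single overlap parameter are illustrative assumptions, not a full model): tracing out the environment suppresses the interference terms of the system's reduced density matrix, and at zero overlap what remains is an ordinary mixture, which is what collapse looks like from the inside.

```python
# Qubit coupled to an environment: alpha|0>|E0> + beta|1>|E1>.
# The overlap <E1|E0> of the environment records controls the interference terms.
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)  # system amplitudes for |0> and |1>

def reduced_density_matrix(env_overlap):
    """Reduced qubit state after tracing out the environment.

    env_overlap is the overlap <E1|E0> of the two environment records:
    1.0 means no decoherence, 0.0 means complete decoherence.
    """
    return np.array([
        [abs(alpha) ** 2, alpha * np.conj(beta) * env_overlap],
        [np.conj(alpha) * beta * np.conj(env_overlap), abs(beta) ** 2],
    ])

print(reduced_density_matrix(1.0))  # coherent superposition: interference terms present
print(reduced_density_matrix(0.0))  # fully decohered: diagonal, a classical-looking mixture
```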
 
  • #272
Chalnoth said:
What I meant is that quantum decoherence fully explains the appearance of collapse, and reduces to the Copenhagen interpretation in the limit of complete decoherence. Thus the many worlds interpretation makes the same predictions as the Copenhagen interpretation in all experiments far from the boundary of collapse. But what's more, because the description of the appearance of collapse is exact, decoherence makes predictions about experiments at the boundary of collapse, while the Copenhagen interpretation does not.

Ok, that was exactly what I thought you meant.

So clearly we disagree in our views on this. I certainly understand decoherence, and it is partly right (I mean there is nothing more wrong about decoherence than about anything else), but it does not answer the same question to which the collapse is the answer. This is what I tried to say above.

Maybe I could try to explain again, but OTOH I am not sure it helps if we simply disagree.

Let me put it like this: if you accept the environment as an infinite information sink etc., then sure, decoherence sort of does resolve the collapse, but that construction doesn't help if the actual observer is part of the system, which it is. So I am convinced that those who are satisfied with decoherence really do not see the same problem that I do.

I'm not saying decoherence is baloney; it obviously isn't. The decoherence mechanism as such is correct, but it poses a different question, yet pretends to answer the original one, which it doesn't.

/Fredrik
 
  • #273
Fra said:
Let me put it like this: if you accept the environment as an infinite information sink etc., then sure, decoherence sort of does resolve the collapse,
The environment is certainly not an infinite information sink. But it is enough of one that it might as well be infinite for the majority of situations, as even for moderately-sized interacting systems interference times rapidly grow beyond the age of the universe.

Fra said:
but that construction doesn't help if the actual observer is part of the system,
Huh? The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system: when decoherence occurs, the observer loses information about all but one component of the wavefunction, which looks like collapse.
 
  • #274
Chalnoth said:
The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system
Exactly. But when you allow yourself to put the observer in, then you behave exactly like anyone using the Copenhagen interpretation. In other words, decoherence better hides the problem you saw with CI, but does not solve it.
 
  • #275
Lievo said:
Exactly. But when you allow yourself to put the observer in, then you behave exactly like anyone using the Copenhagen interpretation. In other words, decoherence better hides the problem you saw with CI, but does not solve it.
This is incorrect. In the many worlds interpretation, the observer is completely irrelevant. The appearance of collapse merely stems from interactions between systems. So the way this is dealt with is you set up an experiment that slowly turns on an interaction but doesn't perform any sort of measurement using that interaction. Later measurements are performed to see whether or not the wavefunction collapsed.

In the Copenhagen interpretation, the result is ambiguous, because it is completely unspecified whether turning on an interaction without performing a measurement will do anything to the wave function. But in the many worlds interpretation, the result is definite and exact.
 
  • #276
Chalnoth said:
This is incorrect. In the many worlds interpretation, the observer is completely irrelevant. The appearance of collapse merely stems from interactions between systems. So the way this is dealt with is you set up an experiment that slowly turns on an interaction but doesn't perform any sort of measurement using that interaction. Later measurements are performed to see whether or not the wavefunction collapsed.

In the Copenhagen interpretation, the result is ambiguous, because it is completely unspecified whether turning on an interaction without performing a measurement will do anything to the wave function. But in the many worlds interpretation, the result is definite and exact.

Maybe you can help me. I've run across several papers (e.g. by Adler, Kent, etc.) claiming proofs that no variation of many worlds or decoherence can account for the Born probability rule. I haven't found any papers referencing these that claim to answer them in full. Do you know of any?

Thanks.
 
  • #277
Chalnoth said:
The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system
Chalnoth said:
In the many worlds interpretation, the observer is completely irrelevant.
*cough* *cough*

Chalnoth said:
In the Copenhagen interpretation, the result is ambiguous, because (...)
In the many-worlds interpretation, the result is ambiguous too (see http://www-physics.lbl.gov/~stapp/bp.PDF ), and basically this is the very same problem (although now better disguised, I have to admit).

Anyway, my favorite (to date) is Rovelli's interpretation, so I'll stop arguing this.
 
Last edited by a moderator:
  • #278
PAllen said:
Maybe you can help me. I've run across several papers (e.g. by Adler, Kent, etc.) claiming proofs that no variation of many worlds or decoherence can account for the Born probability rule. I haven't found any papers referencing these that claim to answer them in full. Do you know of any?

Thanks.
See here:
http://arxiv.org/PS_cache/arxiv/pdf/0906/0906.2718v1.pdf
 
  • #279
The reason I say the observer is irrelevant in the MWI is that decoherence occurs as a result of arbitrary interactions, not just observer interactions. This affects what we observe because, in order to perform an observation, we have to physically interact with the system we are observing.
 
  • #280
Chalnoth said:
Huh? The whole reason why decoherence is able to say anything at all about the appearance of collapse is precisely because the observer is part of the system: when decoherence occurs, the observer loses information about all but one component of the wavefunction, which looks like collapse.

I think the problem is that you do not take the encoding of the theory as seriously as I do. Your explanation requires more complexity than the original observer has control of. So your answer, or new theory, does not live in the original observer's domain. Therefore it does not address the question.

I hear what you say about decoherence. I don't argue with what decoherence does; I'm trying to say that I think you are missing the point I'm trying to make. Or that you simply don't see the point in my point, so to speak, but it's the same thing.

When you consider observer + system, then the environment, or a big part of it, IS the observer, as it monitors O+S. So the knowledge about O+S is ENCODED in the environment. Then of course, with respect to this environment, or to other observers that somehow have arbitrary access to the entire environment's information, the observer-system interaction can be described without collapses. But you have more than one observer. Clearly there is nothing unique about subsystems. Any subsystem is an observer, but whenever you compute an expectation and encode a theory, a single observer is used. Questions posed by this observer cannot be answered by a different observer. But yes, the different observer can "explain" why the first observer asks this question and how it perceives the answer.

The expectations observer B has about observer A interacting with system X are obviously different from observer A's intrinsic expectations. All I am saying is that the expectations of observer B (corresponding to your decoherence view) do not influence the action of observer A unless B is interacting with A; and then again you have a DIFFERENT collapse, which is not the original one.

/Fredrik
 
