Measurement Problem, Quantum Gravity & Unification

  • #1
Varon
Does anybody else besides Fra believe that quantum mechanics' Measurement Problem, Quantum Gravity, and the Unification Program of all Forces are somehow related? Why or why not? (Please state or elaborate on why you think or don't think so.)
 
  • #2
Well, those four are the big ideas we are looking for, although QG is a combination of QM and relativity, the measurement problem pertains to QM only, not GR, and unification is perhaps a property of theories such as string/M/F-theory. They are all big on their own, and solving any one of them would be a huge step forward and would give a lot of insight into the other three, although it can't be said for sure whether all of them are linked. But we'll just have to wait and see, won't we? In the end everything could be related.
 
  • #3
Well, when you think about it, all the forces have to be related, because they're interacting with each other and constantly affecting each other's values. The more precision you need, the greater the effect. A single unified force is just a higher-dimensional aggregate version of the separate forces in 4D.
 
  • #4
Well, I am convinced that QG is related to observation, or more precisely to the observer. The logic goes as follows:

Every real experiment is an interaction between a system and an observer, and the result depends on the physical properties of both. In particular, it depends on the observer's mass. The predictions of neither GR nor QFT depend on this quantity, which means that some tacit assumptions have been made:

In GR, the observer's gravitational (heavy) mass is assumed to be small, so that it does not disturb the fields.

In QFT, the observer's inertial mass is assumed to be large, so that the observer knows where he is at all times. In particular, the observer's position and velocity commute at equal times.

Clearly these assumptions are contradictory. This leads to the successful postdiction that GR and QFT are incompatible.

The resolution is to introduce a physical observer with a finite, non-zero mass into the theory. Alas, carrying out this program in practice turned out to be not so simple.
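
The tension can be made concrete: from the CCR [itex][q^i, p_j] = i\hbar\,\delta^i_j[/itex] and the non-relativistic relation [itex]p_j = M \dot q_j[/itex],
[tex][q^i, \dot q_j] = \frac{i\hbar}{M}\,\delta^i_j,[/tex]
so position and velocity commute only in the limit [itex]M = \infty[/itex], whereas the observer leaves the fields undisturbed only in the limit [itex]M = 0[/itex].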
 
  • #5
Thomas Larsson said:
The resolution is to introduce a physical observer with a finite, non-zero mass into the theory. Alas, carrying out this program in practice turned out to be not so simple.

I agree. The large observer mass in QFT is a different way of expressing something closely related to what Smolin phrased as QM applying to small subsystems; this means that the system in question is "small" relative to the observer, i.e. the observer is massive in comparison. The laboratory frame "IS" more or less infinitely massive relative to whatever happens in a collider. When this asymmetry breaks down, we don't know how the theory scales.

Your way of putting it indicates another way of seeing it, but I roughly agree.

/Fredrik
 
  • #6
Thomas Larsson said:
The resolution is to introduce a physical observer with a finite, non-zero mass into the theory. Alas, carrying out this program in practice turned out to be not so simple.

One problem here is this:

When we say that there is always an observer interacting with the system in each experiment, we must not be tempted to think that THIS description of "observer <-> system" can be made WITHOUT an observer; that would be falling into structural realism.

Instead, describing how one observer interacts with a system "from the outside" can only be done by another real observer (which again has a finite, non-zero mass), constraining the total picture.

This apparently leads to circularities, as all there are are inside views of the parts from different perspectives, while there is no objective description of this equivalence class of views.

This is a conceptual problem: either you face it and try to understand it (meaning the apparent circularity is instead seen in terms of evolutionary mechanisms; what we try to describe, to some extent, are those evolutionary mechanisms), or you deny it (meaning you have usually subscribed to structural realism and insist that if such objective equivalence classes don't exist, then the question can't be answered by science).

There are people on both sides, and some midway.

/Fredrik
 
  • #7
As I think I have mentioned before, I consider observer dependence as implemented by Taylor expansion. A Taylor series does not only depend on the function being expanded, but also on the choice of expansion point, i.e. on the observer's position. You can consider expansions around different points, but in order to formulate a Taylor series, you must commit to a definite observer. There is nothing circular about this. No need to worry about internal or external expansion points, etc.

Let me explain where I'm coming from, and why I consider this viewpoint inevitable. Long ago I realized that QG cannot possibly make sense unless the spacetime diffeomorphism algebra acquires an extension, analogous to the Virasoro algebra in CFT. The extension ("diff anomaly") is necessary for locality, in the sense of correlation functions depending on separation. In 1D spacetime this follows from the unitary discrete series in CFT, and it is a theorem also in higher dimensions that the unextended diff algebra does not possess any non-trivial unitary reps of quantum (lowest-energy) type.
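
For reference, the 1D prototype is the Virasoro bracket of CFT,
[tex][L_m, L_n] = (m-n)\,L_{m+n} + \frac{c}{12}\,m(m^2-1)\,\delta_{m+n,0},[/tex]
where the central term proportional to c is the extension in question.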

So I set out to discover the multi-dimensional Virasoro algebra. The fact that it does exist proves IMO that the underlying logic is essentially correct.

In 1D, the off-shell representations of the Virasoro algebra can be considered as QFT; they act on Fock spaces built from quantum fields. In higher dimensions this is no longer true. Instead, they act on Fock spaces built from histories in the space of Taylor series, or jets (a p-jet is locally the same as a Taylor series truncated at order p). In particular, in all representations the extension is a functional of the observer's trajectory. Since this trajectory does not exist in QFT, neither does the extension. This is in accordance with a theorem stating that in QFT there are no diff anomalies in 4D, even though the diffeomorphism algebra does admit a Virasoro-like extension.

QG hence requires us to go beyond QFT to a theory of quantized jets, denoted QJT (quantum jet theory), which naturally takes observer dependence into account.
 
  • #8
I have some faint memory of you describing your view before, but I don't remember it all. Some questions:

Thomas Larsson said:
As I think I have mentioned before, I consider observer dependence as implemented by Taylor expansion. A Taylor series does not only depend on the function being expanded, but also on the choice of expansion point, i.e. on the observer's position. You can consider expansions around different points, but in order to formulate a Taylor series, you must commit to a definite observer. There is nothing circular about this. No need to worry about internal or external expansion points, etc.

It's not circular; it's rather, from my perspective, a realist picture.

What I mean is this:

You are picturing the set of observers as a set of "points". What is the inferential status of this set? Either you just take this set as a starting point (this is the realist stance), or you say that the only inference of this set that can be made is with respect to another inside observer. But in that case it's not a priori obvious that the inferred picture IS a classical set; maybe it's something more complicated. By a classical set I mean an observer-invariant one (i.e. the set exists regardless of observation).

In my picture, each observer has a different "view" of what you call the "set of observers". And it's not a priori obvious (unless you assume it as a structural-realist trait) that all the different views fit together as if they were Taylor expansions on a "real" manifold or set.

Comments? Maybe I'm missing something. But it sounds like you add more realism to the picture than I am willing to accept. In my view, there is no objective (observer-independent) meaning to the concept of a "point" in the set of all observers. I think the process whereby two observers try to agree about the identity of a "point" is rather something complex.

/Fredrik
 
  • #9
Thomas, I see you have plenty of Quantum Jet Theory papers on arxiv.

For example:
"Quantum Jet Theory, Observer Dependence, and Multi-dimensional Virasoro algebra"
http://arxiv.org/abs/0909.2700

And my first question appears already in your first sentences:

"Every experiment is an interaction between a system and an observer, and the result depends on the physical properties of both. In particular, a real observation depends on the observer’s mass M and charge e."

I sort of completely agree, BUT the question is: which observer describes the "system + observer"? Do you, or do you not, require a second observer here? And doesn't this "second choice" then also possibly influence the description, for instance M and e?

If yes, this is what I refer to as the "apparent circularity", because aren't we back at the same problem you set out to solve?

If no, then what you have done is to replace the original observer with a realist picture, where you view the set of all observers from above. This seems to me to be at odds with what I take to be the heart of measurement theory.

I'm just curious if you can confirm that I got this view right?

/Fredrik
 
  • #10
I'm trying to understand your logic.

> "make the observer into a physical entity with quantum dynamics"

Does this or does it not mean that there exists a "QM" description with the quantum state of the observer evolving in a Hilbert space?

/Fredrik
 
  • #11
Fra, there is this so-called Correspondence Principle, which the wiki describes as "used more generally, to represent the idea that a new theory should reproduce the results of older well-established theories in those domains where the old theories work." But how do you tie the Correspondence Principle together with emergence? For example, using your many-interacting-observers view: is it possible to get a new degree of freedom such that someday United Parcel Service could just teleport an entire package from, say, Europe to the United States in a second, not by transferring quantum states but by transporting the object itself? Is this possible in your model? If so, where does the Correspondence Principle cut in? I mean, in Newtonian physics we can't teleport objects; however, in Newtonian physics we can't transfer quantum states either. So how do we tell which limits can be crossed and which are totally impossible, between the Correspondence Principle and emergence?
 
  • #12
Regarding the correspondence principle and emergence in the many-evolving-interacting-observers view: the idea is that correspondence with the known, fixed laws of physics is recovered as an equilibrium condition. This is why most laws of physics as we know them are to be seen almost like equations of state.

It's already been suggested, albeit vaguely, by several people such as Ted Jacobson that Einstein's equations may be understood as an equation of state rather than as a hard forcing constraint. I think this is right, and I think it applies not only to GR but also to the standard model of particle physics.
(See http://arxiv.org/abs/gr-qc/9504004)

We need to get off equilibrium to understand the selection of the equilibrium. IMHO, this "off-equilibrium" corresponds to removing the realism that most people hold implicit in "physical law". I consider law to be evolving, and an evolutionary understanding is required, rather than a strict "entropic understanding" relative to a fixed state space.

So, to be more specific: quantum mechanics and QFT should be recovered in the large-observer-complexity limit; the idea here is that the observer's information capacity is infinitely large relative to the system under observation. This also corresponds to "classical observers". QM, as we know, requires a classical observer. The generalisation I envision doesn't, but current QM must be recovered in the classical-observer limit.

Compare with Smolin's view here; what I write is much more radical, but Smolin's view still points you in the right direction (see http://arxiv.org/abs/arXiv:1104.2822).

Smolin writes:
"Quantum mechanics applies to small subsystems of the universe which come in many copies. Thus, it applies to hydrogen atoms and ammonia molecules, but not to cats or people or the universe as a whole. Quantum mechanics is hence an approximation to an unknown cosmological theory."

Now, I disagree about the "copies", but the rest I agree with.

In fact, what I'm suggesting is a combination of a new renormalisation theory and evolution of law. The essence of renormalization theory is that two observers should make the same physical predictions, even if their representations turn out to be different. From this, and from the relation between the observers, we derive things like the renormalization group equations.

But here I take a different turn. I claim that the "physical predictions" are in fact just "expectations", and these do NOT need to agree between observers when we are off equilibrium. The case where there IS agreement (which admittedly is most of the time) corresponds to the equilibrium I talk about. Off equilibrium, the disagreement translates into deforming and selective mechanisms in evolution, causing changes in the population of observers.

/Fredrik
 
  • #13
Another way of putting the last point is that "renormalisation", as well as general "observer-observer" relations (think also of parallel transport in GR), are in fact physical evolution processes in which the laws of physics form and guide the process through a feedback mechanism, rather than "just" being a hard constraint.

When we apply these as constraints, symmetry principles etc., it corresponds to an assumption, namely that we are at equilibrium. But to understand unification properly, I think we need to look outside the equilibrium points to see the bigger picture.

/Fredrik
 
  • #14
Well, as I said I did not succeed in completing the program. There are a number of points where I got stuck.

1. I did not find a nice notion of time conjugation, because a Taylor coefficient and its dual do not transform in the same way. This does of course not mean that such a notion does not exist (although perhaps not off-shell), only that I was unable to find it. Without time conjugation, the Hilbert space does not have an inner product, let alone a positive-definite one, so it is just a vector space. But as such it exists.

2. Non-local integrals such as the Hamiltonian have no nice description in terms of Taylor coefficients. However, there is a natural off-shell operator which generates rigid reparametrizations of the observer's trajectory, which is a kind of time translation. It is the L_0 of a Virasoro algebra, and hence something you would expect to be quantized algebraically.

3. It is well known that a foliation breaks the space-time diffeomorphism symmetry of GR. There are formulations of phase space that are almost covariant, but not quite so, and this is a problem if you approach QG via the space-time diffeomorphism algebra.

So here is where I stand today. I have the off-shell quantum reps of the complexified diffeomorphism algebra on a vector space without inner product, dynamics with some overcounting, and a physical interpretation of observer dependence. There are also some conditions for the extension to remain finite when we pass from p-jets to infinite jets. The natural solution to these conditions implies that space-time has four dimensions, but it also seems to suggest the wrong field content, so the value of this is unclear.
 
  • #15
Maybe this is a point you haven't yet reached, but given your initial argument, I wonder roughly how the idea was to let the observer's mass M transform your Hilbert space (constructed as some Fock space of jet histories etc.).

It's clear that we think quite differently, but my expectation would be that the observer's mass M would somehow constrain the Hilbert space (because in my view the Hilbert space information is encoded in the observer, so the Hilbert space would have to scale with M). This would even make me think that a finite M would prevent p -> infinity. I take the observer's complexity to be a key parameter of the observer; this complexity constrains all structures and also serves as a natural information cutoff.

Where, more exactly, does the observer's mass M enter your theory?

/Fredrik
 
  • #16
Consider for simplicity a definite space-time split. This will break the spacetime diffeo symmetry which is my main interest, but spatial diffeos are still valid. Expand the field at time t=0 in a multi-dimensional Taylor series:
[tex] \phi(x) = \sum_m {1 \over {m!}} \phi_m (x-q)^m [/tex]
The idea is just to build a Fock space from the jet data [itex]\phi_m, q^i[/itex] rather than from the field data [itex]\phi(x)[/itex]. This looks like a straightforward recipe: introduce canonical conjugates [itex]\pi^m, p_i[/itex] and a Fock vacuum which is annihilated by the operators
[tex]a_m = \pi_m + i \phi_m.[/tex]
However, one problem is that the momentum has a multi-index upstairs, so we must introduce a metric in jet space. I derived this metric somewhere, but it is a purely formal expression involving nonsense such as derivatives of the delta function at the origin. One can show that the Hamiltonian, expressed in Taylor coefficients, involves this nonsensical metric (or its inverse, I don't remember).

There are now two places where the observer's mass enters:

1. The field Lagrangian only determines the dynamics of the Taylor coefficients [itex]\phi_m[/itex]. One must give dynamics also to [itex]q^i[/itex]. In GR, the natural choice is to add a term to the Lagrangian describing a point particle with mass M coupled to gravity.

2. From the CCR for [itex]q^i, p_j[/itex] it follows, if we non-relativistically assume that [itex]p_j = M \dot q_j[/itex], that
[tex] [ q^i, \dot q_j ] = \frac {i\hbar} M \delta^i_j. [/tex]
So the observer's trajectory [itex]q^i(t)[/itex] is subject to quantum fluctuations, unless [itex]\hbar=0[/itex] or [itex]M = \infty[/itex]. We can get rid of 1 by assuming that [itex]M = 0[/itex], and of 2 by assuming that [itex]M = \infty[/itex]. Clearly these assumptions are mutually exclusive, and this problem is unique to gravity.

Let me just briefly indicate how the diffeo algebra is broken by anomalies. In jet space there is a privileged regularization, which amounts to replacing infinite jets by p-jets, i.e. truncating Taylor series at order p. This is the unique regularization which is exactly compatible with diffeo symmetry. Since the space of p-jets is finite-dimensional, we have a QM problem and no anomalies seem possible.
However, the p-jet regularization is not compatible with dynamics. When acting on a p-jet momentum the Hamiltonian gives a (p+2)-jet if the EOM are second order. All we can say is that [itex][ H, \pi^m(t) ] = \dot \pi^m(t)[/itex] if [itex]|m| = p, p-1[/itex]. So the full phase space contains all [itex]\phi_m(t), \pi^m(t)[/itex] for these m, and not just the values for t=0. It is thus infinite-dimensional, and anomalies arise with normal ordering.
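
As a toy illustration of the truncation issue (a minimal 1D sketch, assuming a scalar field with a Klein-Gordon-type EOM; not part of the construction above):
[code]
import numpy as np

# A p-jet in 1D: the Taylor coefficients phi_m = d^m phi/dx^m at the base
# point q, for m = 0..p.  Since the jet of the second spatial derivative
# obeys (phi'')_m = phi_{m+2}, a p-jet of phi determines phi'' only up to
# order p-2: a second-order Hamiltonian pushes p-jet data out of p-jet space.

rng = np.random.default_rng(0)
p = 6
phi_jet = rng.normal(size=p + 1)   # phi_0, ..., phi_p: an arbitrary p-jet

phi_pp_jet = phi_jet[2:]           # jet of phi'': only orders 0..p-2 survive

print(f"p-jet of phi:         {phi_jet.size} coefficients (orders 0..{p})")
print(f"jet of phi'' from it: {phi_pp_jet.size} coefficients (orders 0..{p - 2})")

# With pi_dot_m = (phi'')_m - m0**2 * phi_m, the time derivatives of the top
# two momentum coefficients (orders p-1 and p) would need phi_{p+1} and
# phi_{p+2}, which the p-jet does not contain.
[/code]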
 
  • #17
Thomas, thanks for elaborating.

Although we agree that the observer's role is important, and that the observer's mass (somehow) needs to enter the description, what you attempt is very different from what I had in mind.

As you mention, there are probably various technical issues that may or may not be solvable in your approach, but if I were to just comment on what sticks out the most to me in your line of reasoning, judging your scheme from my own bias, it's this:

1) You still use the same structure of QM; meaning you imagine the information about the observer, with mass M and a position, interacting with a field (which you choose to "reparametrize"), as being encoded in a Hilbert space. My point is that it's THIS Hilbert space that belongs to the observer's inside view, and it's THIS Hilbert space that is constrained (somehow) by the observer's mass; my picture is that this works like a truncation that is REAL. This is why I had hoped that you had used the observer's mass to constrain the limiting process of the p-jet space; that might have caught more of my interest.

I'd claim that in your picture you have TWO observers: one with mass M, and still one with infinite mass (this is the implicit one that you never mention; the fact that you don't suggests that you don't think of it as an observer, you probably see it as part of the structure of physical law (structural realism)).

2) I would even be tempted to say that you overestimate the EXPECTED uncertainty as inferred by the inside observer when you do it like that. The difference I expect this to make appears when one tries to construct, from rational action, how the systems act. The "possibilities" seen by the external (infinitely massive) observer, or by the realist abstraction, cannot influence the "sum over possibilities" of the inside observer's action. One way of putting it might be that you over-count or mis-count the possibilities.

If we ask what the counting looks like from the inside, I think the observer simply isn't AWARE of the fact that it is "fluctuating" relative to another, external picture. Instead the observer will innocently act AS IF it were on solid ground (i.e. if we picture the naked, non-renormalized action), even though it's not. This should yield predictions about the action of the system. I.e. you can tell from the system's action whether it's "aware" of the extra possibilities or not.

The extra possibilities should rather come from the renormalized action, as seen by this external observer.

These are a couple of things that are different relative to my thinking.

My approach tries to understand the naked action of a general observer in terms of "rationality", and then to consider what happens when two such observer complexes interact, and how this mutual interaction causes them both to change and reach various negotiated agreements (which can be seen as symmetries). Of course that's also very difficult, and I don't work on this because it's the easiest way, but because it seems to me to be the only rational way. Unfortunately it's so different in thinking that it distorts all classical notions of timeless state spaces, Lagrangians etc.

/Fredrik
 
  • #18
Hej Fredrik,

Maybe we should just agree to disagree. But before doing that, there are some points that I would like to emphasize:

1. What I have in mind is in fact very conservative. In particular, I believe that QM is exactly right. I have doubts about QFT, but it is the F rather than the Q that I question. The main consequence of a finite M is that the observer's position becomes fuzzy. However, in 0+1 dimensions there is no position, and M is effectively infinite. This does not pose any problem for GR, because there is no gravity in 0+1 dimensions.

2. I am no philosopher, and find it hard to tell whether there is substance in your critique, or whether it is just a matter of interpretation and formulation. I claim that what I do is substantially different from standard QFT, and thus has a chance of overcoming the problems with merging GR and QFT. The difference between a diff anomaly and no diff anomaly is substantial, no matter how you interpret it.

3. A diff anomaly is a gauge anomaly, which according to standard wisdom must vanish. Whereas this standard wisdom is patently false (the free subcritical string can be quantized with a ghost-free spectrum, despite its conformal gauge anomaly), it is almost true. In particular, I demand that the anomaly be finite. A lesson from the free string is that the anomaly is typically proportional to the number of dofs. Hence only finitely many, necessarily delocalized, dofs contribute to the diff anomalies.

/Thomas
 
  • #19
Hej Thomas :)

> I believe that QM is exactly right.

Yes, here we can agree to disagree. I'll just add that I don't think QM is wrong within its domain of applicability (no more than Newton's mechanics is "wrong"). I am, however, convinced that the structure of QM fails to be the framework we need to make progress on unification.

> The main consequence of a finite M is that the observer's position becomes fuzzy.

The problem I have here is that an observer cannot measure its own ignorance. If it could, it would instantly increase its own information. The uncertainty you refer to is measured by a second observer.

The mass of THIS observer you do not mention. I'd suggest that if you also account for this observer's finite mass, then what happens is that we apparently lose decidability. But this, I think, is how nature works, and it's the way to realize the evolutionary perspective on physical law.

About diff anomalies: in my picture one shouldn't ultimately start with any classical symmetries in the first place, so my take on the entire renormalization and symmetry business is radically different. I don't ask the questions the way you do, so I have no direct comments. But I certainly think that diff symmetry is bound to be broken in quantum gravity if you put it that way, because of the conflict between inferential status and observer invariance. The inferences are made only after choosing an observer.

So the question isn't how and whether symmetries are broken, because they don't exist in the first place. I instead ask how effective observer-invariant symmetries emerge and evolve.

A symmetry principle comes with hard constraints and high predictivity. But the understanding of WHICH symmetry is the correct one, and whether it's even timeless at all, is left out, replaced by ad hoc tricks. This is cured by the evolving picture, by trying to understand the origin of all symmetries in terms of rational inferences.

If you, for example, try to see diffeomorphism symmetry as an inferred symmetry, it's clear that the only way to infer it is to constrain your study to a small subsystem of your environment, one that you can completely monitor. You also need enough complexity (mass) to encode and process all the information.

Diffeomorphism symmetry in classical GR is NOT an inferred symmetry. It's a constraint added by the idea that the laws of physics must be invariant with respect to the choice of observer (all of which are generated by diff transformations). However, I claim this expectation lacks a rational basis if you think about it. It is correct that you DO lose decidability if you abandon realist symmetries, but this is no rational basis for thinking they must exist.

Still, there may of course be a good reason why diffeomorphism invariance seems to hold at cosmological scales; this is to be understood as an equilibrium condition where large bodies dominate, interacting via gravity, and internal interactions cancel out. But it's a fallacy, IMO, to expect this symmetry to hold at all scales. To me it isn't rational; it just follows from a somewhat naive extrapolation of properties of well-established theories from completely different contexts.

/Fredrik
 
  • #20
In a general way I am with Fra, though I likely differ significantly in the specifics. Now I am expected to justify that :eek:

Well, fundamentally I am prejudiced toward realism, but I know enough to worry about that. Neumaier has here provided an interpretation that I think is operationally valid, though I want to push the boundaries even farther. This is where QM, the [strike]Measurement Problem[/strike] Uncertainty Principle, Gravity, and Unification become more important.

I struck through the Measurement Problem because I see it only as an imagined problem. I will justify that first, followed by a quick discussion: first a demonstration that the theory requires macroscopic objects to be in a state of superposition, then a demonstration that "wavefunction collapse" does not actually collapse the wavefunction, and then a discussion of the unrealistically idealized particle picture that created the need for such a collapse.

The same QM that gives us photon interference in double-slit experiments tells us that we could do the same thing with baseballs. The only thing stopping us is that the baseballs would have to move so slowly that the experiment would take longer than the age of the Universe. Now, the measurement problem assumes there is something wrong with the baseball experiment we cannot perform: that the baseball somehow collapses the wavefunction in the process of interacting. So let's start with small uncollapsed quantum systems and see what it takes to collapse them.

We have a series of separable quantum qubits (i, j, k, ..., n). Now, the idea is that if i interacts with j, or with some set of states, it collapses the wavefunction. But as far as k is concerned, i and j did not collapse; they merely became entangled, their wavefunctions simply superimposed on each other. If k then interacts with i and j, then as far as the next system is concerned, i, j, and k form one superimposed (uncollapsed) wavefunction. You can continue all the way until you have a baseball, and as long as that baseball remains causally separated from you, it is just one big superimposed wavefunction with a tiny wavelength. This is why Schrodinger talked about a live and dead cat, not because it was actually so.
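
Schematically, in standard notation: if i starts in a superposition and j in a "ready" state [itex]|R\rangle_j[/itex], an ideal interaction gives
[tex]\big(\alpha|0\rangle_i + \beta|1\rangle_i\big)\otimes|R\rangle_j \;\rightarrow\; \alpha\,|0\rangle_i|R_0\rangle_j + \beta\,|1\rangle_i|R_1\rangle_j,[/tex]
which, as far as k is concerned, is still a single pure (entangled) state; nothing has been projected onto one branch.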

You might ask: if the wavefunction does not collapse, why is it so hard in quantum computers to prevent the wavefunction from collapsing, making quantum computers so hard to build? Well, that is not really what is hard. What they have a hard time doing is keeping the wavefunctions causally separated, so that they do not get superimposed on each other before they are ready to perform the calculation. The non-commutative property requires these superpositions of states to occur in a certain order for it to work.

So where did this idea of collapse come from? It comes from the idea that quantum particles are point particles that carry the fields that determine their (weird) properties. So we had a distinct point that had to be in two places at once in our interference experiments. Where did this come from historically? In the days of classical physics, if we were some distance from a planet, we described the planet as "a" particle, which existed as "a" single point right at the center of the planet's mass. Now this is obviously not right, but it worked fine if you were far enough from the planet. When we started working with quantum particles we did exactly the same thing, and labeled the point right at the center of the field's properties as the single point where the particle existed. By assuming the particle is a point at the center, rather than just the field itself, you need something to collapse in order to bring back the particle that is supposed to be at the center of the field. But what if the field is all there is, and the center point is no more a particle than the center point of a tornado is? That does not mean quantum fields work anything like tornadoes, but collapsing wavefunctions onto these points to get the single particle back is just as pointless in a field theory.

This is where Neumaier's thermodynamic interpretation excels, in that, as he states, the "expectation values" of the field are given ontological status. Neumaier's interpretation does not deal with the ontological status of any substructure below that, but for good cause: he is sticking to standard QM. It is an interpretation, not an extension of well-accepted QM.

Yet I like to look deeper, and I find the interface between Hilbert space and quantum (probabilistic) fields interesting, and important to the OP's question. But this post is getting too long, so I will fill that in later.
 
  • #21
Fra said:
> The main consequence of a finite M is that the observer's position becomes fuzzy.

The problem I have here is that an observer cannot measure its own ignorance. If it could, it would instantly increase its own information. The uncertainty you refer to is measured by a second observer.

The mass of THIS observer you do not mention. I'd suggest that if you also account for this observer's finite mass, then what happens is that we apparently lose decidability. But this, I think, is how nature works, and it's the way to realize the evolutionary perspective on physical law.

Just to mention briefly how I implement the observer complexity constraint:

I consider all of the observer's measures - including the measures that are expectations of the future evolution of the environment, as observed through the "event horizon" that is the communication channels to the environment - to be physically encoded in the observer as a system of internally interacting and related counter complexes.

To simplify a little bit, consider how the observer can encode counters of the absolute frequencies of a set of distinguishable events defined on the event horizon; the role of the observer's mass is then to TRUNCATE the counters. I.e. there is actually a physical limit to how far the observer can count - and thus measure - anything. This also relates to renormalization, by extension, as it pretty much serves as a realistic, natural regularization. The regularization follows automatically from the observer's finite complexity (loosely speaking, I associate this with the mass).
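
As a toy sketch of the kind of truncated counter I have in mind (the names and the capacity bound here are purely illustrative, the bound standing in for the observer's complexity/mass):
[code]
# Toy sketch: a frequency counter with bounded capacity, standing in for an
# observer whose finite complexity truncates what it can retain.

class TruncatedCounter:
    def __init__(self, events, capacity):
        self.capacity = capacity              # total count the observer can hold
        self.counts = {e: 0 for e in events}  # absolute frequencies per event

    def register(self, event):
        if sum(self.counts.values()) >= self.capacity:
            return False                      # saturated: the event is lost (lossy retention)
        self.counts[event] += 1
        return True

    def relative_frequencies(self):
        total = sum(self.counts.values()) or 1
        return {e: n / total for e, n in self.counts.items()}

obs = TruncatedCounter(events=["a", "b"], capacity=8)
for e in ["a", "b", "a", "a", "b", "a", "a", "a", "b", "b"]:
    obs.register(e)                           # the last two events are dropped
print(obs.counts, obs.relative_frequencies())
[/code]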

Edit: This is also why the M -> inf limit really is unphysical; the situation where it effectively makes sense - a lab observer observing a subsystem - corresponds, more than anything, to an asymptotic S-matrix. Such a limit can of course be taken, and it is interesting and corresponds to scattering, but the point is that there is plenty of physics to be understood where this limit can't be taken.

Edit 2: Note that it's also easy to understand why this limit seems to be taken in the mainstream framework. In a laboratory context, what IS the mass of the observer? Is it the mass of the guy watching the data on screen? The mass of the lab building? The mass of the Earth? Or even the mass of the entire universe? I'd say in a sense it can be all of them! But it depends on how the acquisition process takes place. Also, data that has been converted into classical data at the macroscale can be communicated classically between macro-objects. This seems ambiguous, so the least ambiguous measure is to take the M -> inf limit (the S-matrix), but then we also lose track of the natural regularization - this is one source of confusion, IMO.

The action of any observer is like following the law of least resistance. From the inside, the observer always walks a straight line. However, a second observer sees that it is in fact a random walk (like Brownian motion), where a light observer makes more detours and fluctuates more strongly than a massive one. But I think the point is that the inside observer itself is completely UNAWARE of these detours. It does, to the best of its capability, follow the shortest path.

I think the main difference from your view is that in my construction it is of great importance that the INSIDE observer is UNAWARE of its own "fluctuations". A key to understanding the emergence of the ACTION (note that I don't start with a classical Lagrangian or symmetries) is that:

The observer behaves AS IF it follows a straight path. I.e. the evaluation of the observer's action (the naked action, that is) does not include summing over the "possible paths" that are distinguishable only to an external observer. My idea (which remains to be proven) is that this difference is a key to getting the correct emergent action.

The rational action would be different depending on whether you account for these or not.

/Fredrik
 
  • #22
Fra said:
The problem I have here is that an observer cannot measure its own ignorance. If it could, it would instantly increase its own information.

I don't think in terms of information, but I think everything in QJT can in principle be measured locally by a single observer.
There are three kinds of (partial) observables, all of which the observer can measure locally (notation as in previous post):

1. Time t, measured by a clock.

2. The Taylor coefficients [itex]\phi_m[/itex], measured by a local detector.

3. The base point q, measured e.g. by a GPS receiver. This is a somewhat weak point, since a GPS receiver does not function properly unless there are distant GPS satellites transmitting signals. Nevertheless, the actual measurement is local.

So all partial observables can be measured by local devices, living on the observer's worldline. From these we can construct complete observables [itex]\phi_m(t)[/itex] and q(t), which are operators whose time evolution is predicted by the dynamics.
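
Schematically (my notation, with the base point taken along the trajectory), the identification with a field would read
[tex]\phi(x,t) = \sum_m \frac{1}{m!}\,\phi_m(t)\,\big(x - q(t)\big)^m,[/tex]
which already hints at the problem discussed next: evaluating the right-hand side at an arbitrary x is not something a local observer can do.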
Fra said:
The uncertainty you refer to is measured by a second observer.

I disagree. q(t) is an operator, to which the standard rules of QM apply. If we keep reading off the GPS receiver and the clock, there will be some fuzziness unless we are in an eigenstate.

However, the problem with a second observer does arise when we make contact with QFT. To this end, we must evaluate the Taylor series at some point x, i.e. construct the quantum field [itex]\phi(x,t)[/itex]. Here we must introduce a second, external observer, with two remarkable superpowers:

1. He is everywhere simultaneously, probing [itex]\phi(x,0)[/itex] for all x when t = 0.

2. Nevertheless, he knows exactly where he is, without any quantum fuzziness (x is a c-number).

Such a superobserver is clearly unphysical, but only needs to be introduced when we ignore the physical observer and work with the observer-independent fields. The intrinsic jet formulation does not deal with the x's.

Despite dealing only with objects on the observer's trajectory, QJT should be almost as predictive as QFT, to the extent that we can identify fields with infinite Taylor series. One may guess that the cases where the Taylor series does not converge, in some suitable operator sense, may well correspond to regions into which an observer cannot see, e.g. beyond an event horizon. Even if such regions are not unphysical per se, they are invisible to the observer, and as such beyond the necessary scope of a physical theory.
Fra said:
A symmetry principle comes with hard constraints and high predictivity. But the understanding of WHICH symmetry is the correct one, and whether it's even timeless at all, is left out, replaced by ad hoc tricks. This is cured by the evolving picture, by trying to understand the origin of all symmetries in terms of rational inferences.

The symmetry principle is really dictated by the correspondence principle. In the classical limit, QG must reduce to GR, and its symmetry principle must reduce to spacetime diffeomorphisms. The quantum symmetry must thus be some extension/anomaly of the classical symmetry. What else can it be? This is completely analogous to the classical bosonic string. It has an infinite conformal symmetry, which acquires an extension and becomes a Virasoro algebra in the quantum theory. If the quantum string had some entirely unrelated symmetry, or no symmetry at all, how could the classical limit come out right?

One can have an analogous discussion for Yang-Mills theory. The usual (inconsistent) QFT anomalies correspond to the so-called Mickelsson-Faddeev algebra. However, the algebra of gauge transformations possesses a second, observer-dependent type of extension, called the "central extension" in chapter 4 of Pressley-Segal, "Loop Groups" (it does not commute with diffeomorphisms, though). The two types of algebra extension are discussed and contrasted in http://arxiv.org/abs/math-ph/0501023.
 
  • #23
Hello Thomas,


Thomas Larsson said:
I don't think in terms of information
Yes, I see. I do, which is I guess why certain things stick out more from my angle.
Thomas Larsson said:
I think everything in QJT can in principle be measured locally by a single observer.
I like this ambition. This is something I take very seriously in my reconstruction. I also try to take seriously not ONLY how measurements are registered and processed, but also how they are retained and encoded (here enters the limited information capacity; mass, in my view). One conclusion is that an observer can never physically retain all the information that it "in principle" measures during its history. There has to be a lossy retention of information; otherwise the observer would be like a growing black hole.
Thomas Larsson said:
3. The base point q, measured e.g. by a GPS receiver. This is a somewhat weak point, since a GPS receiver does not function properly unless there are distant GPS satellites transmitting signals. Nevertheless, the actual measurement is local.
I have to agree it's a weak point; I personally think it's too weak. This relates to why I see you as needing a "second observer" - i.e. think of your satellites etc. It's essentially the same critique; I just consider it to perhaps be more crucial than you do.

/Fredrik
 
  • #24
Thomas Larsson said:
The symmetry principle is really dictated by the correspondence principle. In the classical limit, QG must reduce to GR, and its symmetry principle must reduce to spacetime diffeomorphisms. The quantum symmetry must thus be some extension/anomaly of the classical symmetry. What else can it be?

I've tried to explain this previously (not sure if it was in this thread), but it can be seen as an equilibrium (probably an asymptotic equilibrium, in the sense that it's never realized perfectly) in the distribution space of theories populating the universe. Diff symmetry would be encoded not because it is logically the only way, but because it is an equilibrium point.

The symmetry thus corresponds to a Nash-type equilibrium. All systems encode this symmetry, not for a logical reason, but because it has so evolved.

Meaning that, instead of thinking that there is a symmetry that is "broken" in quantum mechanics, it's better to think of it as not existing in the first place, and as instead being emergent in the classical limit as an "equilibrium condition" between "interacting theories" - by which I mean not theories that contain interactions, but a picture in which two theories interact with each other (as encoded in two physical observers, that is).

The effective result is the same, so there is no violation of the correspondence principle, but it's a different way of thinking and may give completely different guidance.

In particular, the difference is: do we think of, say, Einstein's equation and diffeomorphism symmetry as constraints of nature, or as an equation of state corresponding to some sophisticated equilibrium point in the space of possible physical laws?

The main difference is that thinking in terms of broken symmetries requires more information to encode than considering emergent symmetries.

This is why the standard model gets complicated as encoded in the laboratory frame, while it must be extremely simple as encoded by the subatomic participants. What lies in between is all about renormalizing the theory, but unlike the standard view, I think the renormalization is a physical process; it's not just an equation predicting how couplings scale in a fixed theory space. That again is an "external view" of the renormalization process. The "inside view" IS the unification.

The unified picture of the interactions is what you get when you scale not just the observations (as seen from a laboratory going to the high-energy limit) but the THEORY itself (i.e. the encoding context of the theory), which means looking at the naked actions from the inside. There is a difference between scaling the observations made from the lab frame with energy, and scaling the THEORY itself, as INFERRED at different scales. This distinction does not exist in current RG theory. It suggests a new way of thinking that, in a different way than your Taylor expansion, accounts for the observer's mass as constraining the encoding of the theory itself.

/Fredrik
 

FAQ: Measurement Problem, Quantum Gravity & Unification

What is the Measurement Problem in quantum mechanics?

The Measurement Problem is a fundamental challenge in quantum mechanics, where the act of measurement or observation appears to affect the outcome of a quantum system. This is in contradiction with the classical understanding of cause and effect, and has been a topic of debate and research for many years.

What is Quantum Gravity?

Quantum Gravity is a theoretical framework that seeks to unify the principles of quantum mechanics and general relativity. It aims to provide a single theory that can explain the behavior of gravity at both the microscopic and macroscopic levels.

Why is the unification of quantum mechanics and general relativity important?

The unification of quantum mechanics and general relativity is important because it would provide a more complete and accurate understanding of the universe. Currently, these two theories are incompatible and cannot both be applied simultaneously in certain situations, leading to limitations in our understanding of the fundamental laws of nature.

What is the current progress in solving the Measurement Problem and achieving quantum gravity?

There has been significant progress in both understanding the Measurement Problem and developing theories of quantum gravity. However, there is still no consensus on a definitive solution to the Measurement Problem, and the search for a complete theory of quantum gravity is ongoing. Various approaches, such as string theory and loop quantum gravity, are being explored and refined.

What are some potential applications of a successful unification theory?

A successful unification theory could have a wide range of applications, from improving our understanding of the fundamental laws of the universe to potentially leading to new technologies and innovations. It could also have implications for fields such as cosmology, high-energy physics, and quantum computing.
