Signs LQG has the right redefinition (or wrong?)

  • Thread starter marcus
  • Tags
    Lqg
  • #1
marcus
Science Advisor
Gold Member
Dearly Missed
LQG was redefined in 2010. The essentials are summarized in two December papers,
http://arxiv.org/abs/1012.4707
http://arxiv.org/abs/1012.4719
What indications do you see that this was the right (or wrong) move?
How do you understand the 2010 reformulation? How would you characterize it?
What underlying motivation(s) do you see?
 
  • #2
As a footnote to that, there will be the 2011 Zakopane QG school the first two weeks of March. Rovelli has 10 hours of lecture, presumably to present his current understanding of the theory at a level for advanced PhD students and postdocs wanting to get into LQG research. This will be, I guess, the live definitive version.

People coming fresh to this subject should realize that the LQG redefinition relies heavily on analogies with QED and QCD---Feynman diagram QED and lattice gauge QCD. N chunks of space instead of N particles. The graph truncation. The 2-complex ("foam") analog of the 4D lattice.

Also that the formulation does not depend on a smooth manifold or any such spacetime continuum. The graph need not be embedded in a continuum (although it may optionally be so at times, to accomplish some mathematical construction). To me, the graph represents a restriction of our geometric information---symbolically, to a finite set of instruments/readings, or a finite set of chunks of space that we know about.

Or, when talking about much smaller scales, a finite set of geometric elements we can infer something about (if not directly probe with macro instruments).

A 2-complex ("foam") is just the one-higher-dimensional analog of a graph. Instead of being a combination of 0- and 1-dimensional nodes and links, a 2-complex is the analogous combination of 0-, 1-, and 2-dimensional vertices, edges, and faces.

A graph can serve as the boundary of a 2-complex (or "foam"). If the graph comes in two separate components, the initial and the final, then the foam can describe a possible way that the initial graph component evolves into the final one. Presumably one of many possible evolutionary paths or histories.
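To make the combinatorics concrete, here is a toy encoding in Python. Everything in it (the names, the incidence data) is my own illustration, not anything from the papers:

```python
# Toy 2-complex ("foam") whose boundary graph has two components,
# an initial one and a final one. Purely illustrative.
boundary_graph = {
    "initial": {"nodes": ["n1"], "links": ["l1"]},
    "final":   {"nodes": ["n2"], "links": ["l2"]},
}

foam = {
    "vertices": ["v1"],                    # 0-dimensional
    "edges":    ["e1", "e2"],              # 1-dimensional
    "faces":    ["f1"],                    # 2-dimensional
    # incidence: each internal edge runs between a vertex and/or
    # a boundary node
    "edge_ends": {"e1": ("n1", "v1"), "e2": ("v1", "n2")},
    # incidence: each face is bounded by internal edges and
    # boundary links
    "face_boundary": {"f1": ["l1", "e1", "l2", "e2"]},
}
# Reading: the vertex v1 is one elementary step by which the initial
# component (n1, l1) evolves into the final one (n2, l2). A sum over
# such foams would represent a sum over evolution histories.
```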

What we are talking about is the evolution of geometric information. Probably the simplest way of talking about this that one can imagine.

There is no smooth manifold in the picture, in part simply because establishing that a smooth manifold exists would require an uncountable infinity of physical measurements. That is too great an assumption to make about the world. The spirit of quantum mechanics is to concentrate on what we can actually observe and measure---the interrelationships between pieces of information, and how these evolve.

This is probably the reason that QG has gradually settled down to a manifoldless definition. In the redefined LQG there is no spacetime (in and of itself) there is only "what we can say about it"--a web of geometric info. Some measured or inferred volumes, areas, angles...

Now QED, for instance, needs to be redefined on this web of geometry---no longer should it be defined on a manifold. Information should be located on information, and by information. What we can say, not what "is".

It is this redefinition which one sees beginning to happen in the other December paper I mentioned, called "Spinfoam Fermions" http://arxiv.org/abs/1012.4719
 
  • #3
Do you think LQG will require a principle of relative locality? Do you think this

http://arxiv.org/abs/1101.3524

has anything to do with that?
 
  • #4
I think the job of the theorist is to develop testable theories which have a chance of being right.

Put yourself in Freidel's place. It is not his job to "believe" theories (whatever that means.)

The January "Rel. Loc." paper argues that Rel. Loc. is testable. It can be falsified if one finds that the momentum algebra is flat. It is very interesting. Extremely.

Also LQG has changed enormously in the past year, or several years, and is extremely interesting. It is also falsifiable.

Trying now to reconfigure LQG so that it would fit the Rel Loc philosophy is, AFAICS, premature speculation. What makes sense to me now is to develop and test them both, so that we have a better idea of how reality is structured. Maybe one or the other can be falsified!

have to go
 
  • #5
So, it was just a coincidence that he used a restriction in the phase space, in the new paper, right?
 
  • #6
MTd2 said:
So, it was just a coincidence that he used a restriction in the phase space, in the new paper, right?

Please help me out with more specifics. Page references even! You must be talking about Freidel and the Rel Loc paper. I haven't studied it. Point me to a paragraph on some page, or to some equation in the Rel. Loc. paper.

My eyes get tired looking thru stuff to find what somebody is talking about. :biggrin:

I think Freidel is great and I am waiting to hear his online March 1 seminar talk about Relative Locality.
And since the International LQG Seminar connects a half-dozen places around the world, and they can all ask questions, I am waiting to hear what questions Freidel gets from people at Penn State, Perimeter, Marseille, Nottingham, Warsaw...

The only trouble is that 1 March is also the first day of the Zakopane school, and both Ashtekar and Rovelli are scheduled to give 2-hour lectures on that same day. So there is a huge time conflict. The school is important, and so is Freidel's talk. How they ever managed to schedule it like that is beyond my comprehension.
 
  • #7
Look at the abstract of

http://arxiv.org/abs/1101.3524

"We discretize the Hamiltonian scalar constraint of three-dimensional Riemannian gravity on a graph of the loop quantum gravity phase space. ... This fills the gap between the canonical quantization and the symmetries of the Ponzano-Regge state-sum model for 3d gravity."

http://arxiv.org/abs/1101.0931

p.2
"Physics takes place in phase space and there is no invariant
global projection that gives a description of processes in
spacetime. From their measurements local observers can
construct descriptions of particles moving and interacting
in a spacetime, but different observers construct different
spacetimes, which are observer-dependent slices of phase
space."

Sounds like LQG makes sense only with relative locality, sort of, at least in 3d.
 
  • #8
I see what you are driving at. Thanks for the detailed reference. I'm not going to agree or disagree yet because I don't understand Relative Locality well enough. The way I imagined it, in the Rel Loc paper the momentum space that was critical was that of matter particles. The crucial question was whether or not material momenta added in a flat vectorspace way. Was the matter momentum space curved or not? The phase space that was at issue in Rel Loc included matter. That is how I was thinking.
In the 3D paper you cited, the topic is pure gravity, no matter. Or am I missing something?
The connection is too tenuous for me to follow, at this point. Maybe someone else can respond more helpfully.
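To put the flatness question in formulas: schematically, and as my own transcription of arXiv:1101.0931 (so the index placement and sign should be treated as approximate), momenta in relative locality compose with a connection on momentum space:

```latex
% Composition of momenta on a curved momentum space (schematic):
(p \oplus q)_\mu \;\approx\; p_\mu + q_\mu
  - \frac{1}{m_P}\,\Gamma_\mu^{\;\nu\rho}\, p_\nu\, q_\rho + \dots
% Flat momentum space means \Gamma = 0, i.e. (p \oplus q) = p + q.
% Observing exactly linear addition of momenta would falsify
% relative locality; a nonlinear residual would support it.
```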
 
  • #9
Yes, no matter. But you were talking about redefinitions; I took it as being about general redefinitions of the fundamentals of the theory! Lol, I guess I went off topic. I don't know, maybe a new thread is required? I don't know how to put it.
 
  • #10
I suppose what you are asking about is relevant (at least eventually). I am simply not prepared to respond in any useful way. It's natural to ask in what way is LQG compatible with the Rel. Loc. principle? Is it even compatible at all? Or are they in spirit quite close? It seems to me natural that people would be asking such questions at the ILQGS on March 1, if indeed Freidel gives the scheduled talk, and if the others are available to listen and comment.

If you can't get a satisfactory discussion here and now, then you or I must start a thread about this issue after Freidel's talk (presumably 1 March).

Ultimately it comes down to empirical tests. Rel Loc is testable by testing the addition of particle momenta and suchlike. LQG is testable because of its robust prediction of a cosmological bounce, some bearing on inflation, and related features of the CMB. But although neither can be assumed a priori true, their mathematical (in)compatibility is surely an interesting question.
===================================

Instead of talking about Rel Loc now, what I want to do is quote some of the 4707 paper where he points out analogies with QED and QCD. He gives the full definition of LQG in three equations and half a page, and then he starts with some motivation:

This is the theory. It is Lorentz invariant [18]. It can be coupled to fermions and Yang-Mills fields [19], and to a cosmological constant [20, 21], but I will not enter into this here. The conjecture is that this mathematics describes the quantum properties of spacetime, reduces to the Einstein equation in the classical limit, and has no ultraviolet divergences. I now explain more in detail what the above means.

A. Quantum geometry: quanta of space
A key step in constructing any interacting quantum field theory is always a finite truncation of the dynamical degrees of freedom. In weakly coupled theories, such as low-energy QED or high-energy QCD, we rely on the particle structure of the free field and consider virtual processes involving finitely many, say N, particles, described by Feynman diagrams. These processes involve only the Hilbert space H_N = ⊕_{n=1}^{N} H_n, where H_n is the n-particle state space.

In strongly coupled theories, such as confining QCD, we resort to a non-perturbative truncation, such as a finite lattice approximation. In both cases (the relevant effect of the remaining degrees of freedom can be subsumed under a dressing of the coupling constants and) the full theory is formally defined by a limit where all the degrees of freedom are recovered.

The Hilbert space of loop gravity is constructed in a way that shares features with both these strategies. The analog of the state space H_N in loop gravity is the space

H_Γ = L²[SU(2)^L / SU(2)^N], (4)

where the states ψ(h_l) live. Γ is an abstract (combinatorial) graph, namely a collection of links l and nodes n, and a "boundary" relation that associates an ordered couple of nodes (s_l, t_l) (called source and target) to each link l. (See the left panel in Figure 1.) L is the number of links in the graph, N the number of nodes, and the L² structure is the one defined by the Haar measure. The denominator means that the states in H_Γ are invariant under the local SU(2) gauge transformation on the nodes

ψ(U_l) → ψ(V_{s_l} U_l V_{t_l}^{-1}), V_n ∈ SU(2), (5)

the same gauge transformation as in lattice Yang-Mills theory.

States in H_Γ represent quantum excitations of space formed by (at most) N "quanta of space". The notion of "quantum of space" is basic in loop gravity. It indicates a quantum excitation of the gravitational field, in the same sense in which a photon is a quantum excitation of the electromagnetic field. But there is an essential difference between the two cases, which reflects the difference between the electromagnetic field in Maxwell theory and the gravitational field in general relativity: while the former lives over a fixed (Minkowski) metric spacetime, the second represents spacetime itself.

Accordingly, a photon is a particle that "lives in space", that is, it carries a momentum quantum number k⃗, or equivalently a position quantum number x⃗, determining the spatial localization of the photon with respect to the underlying metric space. The quanta of loop gravity, instead, are not localized in space. Rather, they define space themselves, as do classical solutions of the Einstein equations.

More precisely, the N "quanta of space" are only localized with respect to one another: the links of the graph indicate "who is next to whom", that is, the adjacency relation that defines the spatial relations among the N quanta. (See the right panel in Figure 1.) Thus, these quanta carry no quantum number such as momentum k⃗ or position x⃗.

Rather, they carry quantum numbers that define a quantized geometry, namely a quantized version of the information contained in a classical (three-dimensional) metric. The way this happens is elegant, and follows from a key theorem due to Roger Penrose, called the spin-geometry theorem, which is at the root of loop gravity [22]. I give here an extended version of this theorem,...
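As a concrete gloss on the gauge transformation (5), here is a toy numerical check in Python. The two-link graph, the state, and all the names are made up for illustration (only numpy is assumed); it just verifies that the trace of a holonomy is unchanged by SU(2) rotations at the nodes:

```python
import numpy as np

def random_su2(rng):
    """Random SU(2) matrix built from a unit quaternion (for illustration)."""
    a, b, c, d = rng.normal(size=4)
    n = np.sqrt(a*a + b*b + c*c + d*d)
    a, b, c, d = a/n, b/n, c/n, d/n
    return np.array([[a + 1j*b,  c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

rng = np.random.default_rng(0)

# A made-up graph with 2 nodes and 2 links, both links running
# from node 0 (source) to node 1 (target).
source = {0: 0, 1: 0}
target = {0: 1, 1: 1}

# One group element per link, as in the states psi(h_l).
U = {l: random_su2(rng) for l in (0, 1)}

def psi(U):
    """A gauge-invariant state: trace of the holonomy around the loop
    made of link 0 followed by link 1 traversed backwards."""
    return np.trace(U[0] @ np.linalg.inv(U[1])).real

# Local SU(2) gauge transformation at the nodes, as in eq. (5):
# U_l -> V_{s(l)} U_l V_{t(l)}^{-1}
V = {n: random_su2(rng) for n in (0, 1)}
U_gauged = {l: V[source[l]] @ U[l] @ np.linalg.inv(V[target[l]]) for l in U}

print(psi(U), psi(U_gauged))  # the two numbers agree up to rounding
```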

 
  • #11
I don't think that LQG has been redefined.

Rovelli states that it is time to make the next step from the construction of the theory to the derivation of results. Nevertheless, the construction is still not complete as long as certain pieces are missing. Therefore, e.g., Thiemann's work regarding the Hamiltonian approach (which is not yet completed and for which the relation to spin foams is still not entirely understood) must still back up other programs.

There are still open issues to be solved:
- construction, regularization and uniqueness of the Hamiltonian H
- meaning of "anomaly-free constraint algebra" in the canonical approach
- relation between H and SF (not only kinematical)
- coarse-graining of spin networks, renormalization group approach
- nature and value of the Immirzi parameter
- nature and value of the cosmological constant
- nature of matter and gauge fields (on top, emergent, ...); yes, gauge fields!
And last but not least: If a reformulation is required (which would indicate that the canonical formalism is a dead end), then one must understand why it is a dead end! We don't know yet.

My impression is that Rovelli's new formulation does not address all these issues. His aim is more to develop calculational tools to derive physical results in certain sectors of the theory.

Let's look at QCD: there are several formulations of QCD (PI, canonical, lattice, ...), each approach with its own specific benefits and drawbacks. But nobody would ever claim that QCD has been reformulated (which sounds as if certain approaches were outdated). All approaches are still valid and are heavily used to understand the QCD vacuum, confinement, hadron spectroscopy, QGP, ... There is not one single formulation of QCD.

So my conclusion is that a new formulation of LQG has been constructed, but not that LQG has been reformulated.
 
  • #12
marcus said:
...What we are talking about is the evolution of geometric information. Probably the simplest way of talking about this that one can imagine.

There is no smooth manifold in the picture, in part simply to establish that a smooth manifold exists would require an uncountable infinity of physical measurements. It is too great an assumption to make about the world. The spirit of quantum mechanics is to concentrate on what we can actually observe and measure---the interrelationships between pieces of information. and how these evolve.

This is probably the reason that QG has gradually settled down to a manifoldless definition. In the redefined LQG there is no spacetime (in and of itself) there is only "what we can say about it"--a web of geometric info. Some measured or inferred volumes, areas, angles...

Now QED, for instance, needs to be redefined on this web of geometry---no longer should it be defined on a manifold. Information should be located on information, and by information. What we can say, not what "is"...

To inject a shallow note into this deep thread:

What, in this context, can we say about "Nothing at all", the Vacuum, about which Peacock made the comment (in his "Cosmological Physics"):

"It is perhaps just as well that the average taxpayer, who funds research in physics, is unaware of the trouble we have in understanding even nothing at all" ?

In Loop Quantum Gravity, abstract graphs made of vertices (drawn as dots) connected by edges (drawn as lines) are often sketched to represent "what we can say" about the dimensional circumstances we live in. An example is Fig. 1 of Rovelli's Christmas review that was linked in the original post of this thread.

The simplest thing we can say about the vacuum seems to be that it is quite symmetric; here is the same as there, and now is no different from then, as far as the vacuum is concerned. That's why we expect the laws of physics to be covariant in what we call spacetime.

Yet abstract graphs that are drawn, like Rovelli's, show no symmetry at all. They're lopsided and skew, as well they might be when gravitating matter or interacting fermions are involved. If they were drawn to represent the Vacuum (or perhaps a time average of it) wouldn't these graphs be more symmetric, perhaps even lattice-like? Lots of symmetries to explore then. Which brings me to ask: if this is so, what is it that makes or keeps the Vacuum so symmetric and, in the absence of localised mass/energy, spatially flat? Non-localised energy that can't be detected? Or something else that everybody except me understands?
 
  • #13
tom.stoer said:
...
And last but not least: If a reformulation is required (which would indicate that the canonical formalism is a dead end), then one must understand why it is a dead end! We don't know yet.

Let's look at QCD: there are several formulations of QCD (PI, canonical, lattice, ...), each approach with its own specific benefits and drawbacks. But nobody would ever claim that QCD has been reformulated (which sounds as if certain approaches were outdated). All approaches are still valid and are heavily used to understand the QCD vacuum, confinement, hadron spectroscopy, QGP, ... There is not one single formulation of QCD.

So my conclusion is that a new formulation of LQG has been constructed, but not that LQG has been reformulated.

I think I see now the distinction you are making between a new formulation and a reformulation.

Personally I do not suspect that the Hamiltonian approach is a dead end. We cannot know the future of research, but my expectation is that people will continue to work on completing the Hamiltonian approach and it will ultimately prove equivalent.

It might (at that future point in history) look different, of course. There might, for example, be no smooth manifold, no continuum; the spin networks (if they remain in the Hamiltonian formulation) might not be embedded. Or they might be. I don't see us as able to predict how the various versions of the theory will look.

But as an immediate sign that the Ham. approach is not yet a dead end, there is the Freidel paper that was just posted two days ago.

MTd2 said:
http://arxiv.org/abs/1101.3524

The Hamiltonian constraint in 3d Riemannian loop quantum gravity

Valentin Bonzom, Laurent Freidel
(Submitted on 18 Jan 2011)
We discretize the Hamiltonian scalar constraint of three-dimensional Riemannian gravity on a graph of the loop quantum gravity phase space. This Hamiltonian has a clear interpretation in terms of discrete geometries: it computes the extrinsic curvature from dihedral angles. The Wheeler-DeWitt equation takes the form of difference equations, which are actually recursion relations satisfied by Wigner symbols. On the boundary of a tetrahedron, the Hamiltonian generates the exact recursion relation on the 6j-symbol which comes from the Biedenharn-Elliott (pentagon) identity. This fills the gap between the canonical quantization and the symmetries of the Ponzano-Regge state-sum model for 3d gravity.

Plus, some of the other things you mentioned remain interesting and important open problems (in whatever formulation one confronts them), such as:

tom.stoer said:
...
- nature and value of the Immirzi parameter
- nature and value of the cosmological constant
 
  • #14
oldman said:
...
The simplest thing we can say about the vacuum seems to be that it is quite symmetric; here is the same as there, and now is no different from then, as far as the vacuum is concerned. That's why we expect the laws of physics to be covariant in what we call spacetime.

Yet abstract graphs that are drawn, like Rovelli's, show no symmetry at all. They're lopsided and skew, as well they might be when gravitating matter or interacting fermions are involved. If they were drawn to represent the Vacuum (or perhaps a time average of it) wouldn't these graphs be more symmetric,...

I suppose that one reason for the power of General Rel is that it is general. One can have solutions with no recognizable symmetry at all.

To be a satisfactory quantum version of GR, Loop must imitate that basic feature.

Of course it is technically possible to confine LQG to an approximately flat sector. This has been done in the "graviton propagator papers" circa 2007.
====================

Had to leave abruptly to take care of something else, before finishing. Back now.
The thing about your post is that it raises intriguing questions.

BTW you mentioned the Christmas review paper. That gives one formulation of the theory, in 3 equations. He says clearly there are other formulations and he is just giving his understanding of what LQG is---so in that sense he seems to agree with Tom Stoer. Indeed the paper goes over OTHER formulations in a later section, fairly extensively---BF theory, GFT, canonical Hamiltonian style, versions using manifolds and so on.

But I find it makes discussion simpler to focus on the one current formulation. Which you may have in mind since you mentioned the recent review paper (1012.4707).

In that case one should observe that the graphs are purely combinatorial. It doesn't matter how they are drawn---with long curly lines or short wiggly lines---or lopsided with all the nodes but one off by themselves in a corner. The visual characteristics of the graph are for the most part inconsequential.

I guess the important thing to communicate is that a graph is purely combinatorial and quite general. It could have 2 nodes and 4 links, or it could have billions of nodes and billions of links. It has no special symmetry. The way of treating it mathematically is supposed to be the same whether it has 2 nodes or a trillion nodes.

Combinatorial means it consists of two finite sets and two functions.
NODES = {1, 2, 3, ..., N}
LINKS = {1, 2, 3, ..., L}
s: LINKS → NODES
t: LINKS → NODES

The auxiliary functions s and t are the source and target functions that, for each link, tell you where that link starts and where it ends up. For a given link l, the two nodes that the link connects are s(l) and t(l).

It's like the minimum mathematical info that could define an oriented graph. The symbol for that simple combinatorial info is gamma, Γ.
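In code form, the same data looks like this (a trivial Python illustration; the particular numbers are made up, and any finite sets and maps of this shape define a graph):

```python
# Direct transcription of the combinatorial data (NODES, LINKS, s, t).
N, L = 3, 4
NODES = range(1, N + 1)          # {1, 2, 3}
LINKS = range(1, L + 1)          # {1, 2, 3, 4}
s = {1: 1, 2: 1, 3: 2, 4: 3}     # source node of each link
t = {1: 2, 2: 3, 3: 3, 4: 1}     # target node of each link

# The two nodes that link l connects are s[l] and t[l]:
for l in LINKS:
    print(f"link {l}: node {s[l]} -> node {t[l]}")
```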

What I think is the great thing about it is that it allows you to define a Hilbert space H_Γ and do non-trivial stuff. The Hilbert space has gauge symmetries specified by Γ.

Remember that gauge symmetries are symmetries in our information and how it is presented; they are not real material symmetries of a physical situation.

The graph Γ is very much about how we sample the geometric reality of nature (or so I think, anyway). It is about which geometric degrees of freedom we capture (and which others we perhaps overlook). My interpretation could be quite wrong---it is certainly not authoritative.

There is another interpretation---nodes as "excitations of geometry". N nodes is analogous to a Fock space where there are N particles, say N electrons. In that case the "real" universe would correspond to a graph with a HUGE number of nodes and links. But we develop the math to treat any number, and we deal with examples of small N. You can find that interpretation clearly presented in the Christmas summary paper.

Either way, there is no need for small example graphs to look like anything in particular.
I think they should be, if anything, arbitrary and irregular---to suggest the generality.
 
  • #15
marcus said:
I suppose that one reason for the power of General Rel is that it is general. One can have solutions with no recognizable symmetry at all.

To be a satisfactory quantum version of GR, Loop must imitate that basic feature...

... one should observe that the graphs are purely combinatorial. It doesn't matter how they are drawn---with long curly lines or short wiggly lines---or lopsided with all the nodes but one off by themselves in a corner. The visual characteristics of the graph are for the most part inconsequential.

I guess the important thing to communicate is that a graph is purely combinatorial and quite general. ...there is no need for small example graphs to look like anything in particular.
I think they should be, if anything, arbitrary and irregular---to suggest the generality.

Thanks for this. I guess I was being too fussy about the RHS of figure 1 in Rovelli's paper, with its superimposed "grains of space". It reminded me of overinterpreted representations of atoms with whirling electrons trailing smoke. I liked when you earlier said it's all about 'What we can say, not what "is"'. Just as Niels Bohr believed.
 
  • #16
It still isn't completely clear to me how to think of LQG, but it is getting clearer. I'm glad it is so for you as well. The December review paper is well written, I think.

Here is another enlightening short paragraph. It comes on page 6, after he has finished describing the theory (by stating 3 equations on page 2 and then discussing what they mean, with background etc.). Then, when that is all done, he says:

This concludes the definition of the theory. I have given here this definition without mentioning how it can be “derived” from a quantization of classical general relativity. This is like defining QED by giving its Hilbert space of free electrons and photons and its Feynman rules, without mentioning either canonical or path integral quantization. A reason for this choice is that I wanted to present the theory compactly. A second reason is that one of the main beauties of the theory is that it can be derived with rather different techniques, that involve different kinds of math and different physical ideas. The credibility of the end result is reinforced by the convergence, but each derivation “misses” some aspects of the whole. In chapter IV below I will briefly summarize the main ones of these derivations. Before this, however, let me discuss what is this theory meant to be good for.​

It seems significant to me that no single "derivation" is perfect. The various roads to the present formulation converge, but none is complete. The final form of the theory, he seems to be saying, is an educated guess.

Different roads up the mountain, all converging towards the peak...but none quite reaching it, so in the end one takes the helicopter. The "derivations" have been valuable for heuristic guidance, motivation, understanding...but one should not be too tied to the rituals. To repeat a key comparison:

This is like defining QED by giving its Hilbert space of free electrons and photons and its Feynman rules, without mentioning either canonical or path integral quantization.​

Well, perhaps that would have been all right! Not only as an essay's expository plan but as an alternative historical line of development. :biggrin: Perhaps the canonical and path integral quantizations could have been skipped and then reconstructed after the fact, if by some fluke the Feynman rules had been discovered first. A not entirely serious speculation.

In case anyone is new to the discussion, the recent review of LQG (December 2010) is http://arxiv.org/abs/1012.4707
 
  • #17
tom.stoer said:
...
- nature and value of the Immirzi parameter
- nature and value of the cosmological constant.

Earlier, Tom gave us a good list of unresolved (or only partially resolved) issues in LQG.

I think there are signs that the theory has the right (or a right) redefinition, as given in the December 2010 overview paper http://arxiv.org/abs/1012.4707

I will mention a few of the signs I see of this, but first let me mention one very positive sign that just appeared, in response to the Lambda issue (the cosmological constant issue) that Tom indicated.

http://arxiv.org/abs/1101.4049
Cosmological constant in spinfoam cosmology
Eugenio Bianchi, Thomas Krajewski, Carlo Rovelli, Francesca Vidotto
4 pages, 2 figures
(Submitted on 20 Jan 2011)
"We consider a simple modification of the amplitude defining the dynamics of loop quantum gravity, corresponding to the introduction of the cosmological constant, and possibly related to the SL(2,C)q extension of the theory recently considered by Fairbairn-Meusburger and Han. We show that in the context of spinfoam cosmology, this modification yields the de Sitter cosmological solution."

This paper finds a nice natural place for the cosmo constant, and does not resort to the quantum group or q-deformation.

Note that it partly addresses the classical-limit issue, since spinfoam cosmology uses the full theory and it is now giving the familiar de Sitter universe as the large-scale limit.
 
  • #18
"Equation (10) is the Friedmann equation in the presence of a cosmological constant , which is solved by de Sitter spacetime."

Why can we assume that an equation which has the same form as the Friedmann equation has the same meaning, i.e. as a solution of an equation for a spacetime metric?
 
  • #19
The idea is always the same: one enhances LQG models algebraically to produce a cc term, either on the level of the classical action, on the level of spin networks via quantum deformation, or on the level of the intertwiners as a generalization of the spin foams.

Doing this in the quantum theory directly has no benefit. It shows that it can be done consistently, but it does not explain this term. There are always the same questions: what is the reason for
- the cc term in the EH action
- the quantum deformation of SU(2)
- the generalization of the intertwiner

Sorry, but Rovelli only shows that it can be done, not what it means.
 
  • #20
atyy said:
Why can we assume that an equation which has the same form as the Friedmann equation has the same meaning - ie. as a solution of an equation for a spacetime metric?

I don't think there is any problem, Atyy. They already showed the derivation in the March 2010 paper by Bianchi, Rovelli, and Vidotto (equations 32-44 or thereabouts). They go all the way to the Friedmann equation there. The present paper just follows on, with the same notation.

The Friedmann equation is an ordinary differential equation for the scale factor a(t). The Friedmann equation does not give you a spacetime metric; it gives you this time-varying dimensionless number a(t), from which you can build a metric if you have a manifold and the other ingredients. But a(t) itself is just a real-valued function of one real variable.
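For reference, here is the Λ-only case spelled out (standard textbook material, written schematically):

```latex
% Friedmann equation with only a cosmological constant
% (flat spatial slices, schematic):
\left(\frac{\dot a}{a}\right)^{2} = \frac{\Lambda}{3}
\quad\Longrightarrow\quad
a(t) = a_0 \, e^{\sqrt{\Lambda/3}\,t}
% i.e. de Sitter expansion. Note a(t) is just a real-valued
% function of one real variable; no metric is needed as input.
```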

Well, the spinfoam model can give you a(t) too. At least that is how it looks to me when I go over the March 2010 paper. See what you think.
 
  • #21
tom.stoer said:
... It shows that it can be done consistently, but it does not explain this term...

We're seeing a number of signs that the new formulation is good.

It was introduced in March/April 2010

1. Right away we got spinfoam cosmology (the March Bianchi-Rovelli-Vidotto paper).
2. The technical analogies with Feynman diagram QED and lattice QCD.
3. We got the Spinfoam Fermions paper in December, another sign the format is OK.
4. We got cosmological constant papers, especially this January 2011 one.
5. We see the classical de Sitter universe come out of spinfoam.
6. Battisti and Marciano verify the bounce in spinfoam cosmology.
7. We see a manifestly covariant version exhibited.

These are all signs that the format is working out really well.

Sure, you can ask "what's the explanation of the CC?"

But what I'm looking for is signs that the new manifoldless combinatorial spinfoam is a good format.
I see a lot of things happening in a short time. I see people learning how to use the format and doing some things that weren't done before, or weren't done so nicely.

This is what I mean by the thread topic title. I will worry about explanations later.
 
  • #22
Why should the cc be explained at all?
 
  • #23
I agree that the concept of how to introduce the cc seems to be physically convincing and mathematically consistent; it provides a rough understanding of the large-scale structure / de Sitter space; it brings LQG and CDT closer together; it may even point towards an understanding of why the cc must be positive.

But it does not explain what the cc is, nor why the three parameters G, β and Lambda (which appear on the same footing in a classical Lagrangian) are so different when looking at their quantum counterparts in LQG/SF and when comparing the treatment with AS.
 
  • #24
I cannot understand this: why should ASQG be similar to LQG/SF?
 
  • #25
MTd2 said:
I cannot understand this: why should ASQG be similar to LQG/SF?
In AS a renormalization group approach à la Kadanoff is used. Therefore AS is a "meta-theory" (or better: a method) defined on the theory space of Riemannian geometry, consisting of all possible scalar invariants R, R², ... which can be used to define an action. AS tells you something about a renormalization group flow, relevant and irrelevant operators, and all that.

Now, assuming that AS is correct (at least as an effective theory), it is clear that any fundamental theory of QG should not only reproduce classical GR but AS results as well (at least within a certain regime). Therefore, if AS tells us something regarding the physical values of G and Lambda, then this theory seems to make a prediction regarding Lambda!

If this is true then we should expect that a fundamental theory like LQG should be able to make a prediction regarding Lambda as well - instead of fixing Lambda algebraically / as an input.
 
  • #26
The discussion of http://arxiv.org/abs/1003.3483 says "In detail, we have studied three approximations: (i) cutting the theory to a finite dimensional graph (the dipole), (ii) cutting the spinfoam expansion to just one term with a single vertex and (iii) the large volume limit. The main hypothesis on which this work is based is that the regime of validity in which these approximations are viable includes the semiclassical limit of the dynamics of large wavelengths. “Large” means here of the order to the size of the universe itself."

So all the divergences are removed by ignoring them. Is this derivation or hypothesis?
 
  • #27
atyy said:
... Is this derivation or hypothesis?

I think what you are talking about is simply how people do physics. Typically they start with a "first order approximation" of something. It is, strictly speaking, neither rigorous derivation nor pure hypothesis. As a biologist you may be expecting a dichotomy, an either/or. I don't know; cultures and mentalities differ.

We need to be fair and objective, too, not let judgments be too much colored by set animosity.

The March 2010 paper is doing something quite new---working cosmology with spinfoam tools. So they derive partly by guesswork and simplifying assumptions, and see if they get something that looks right. In later papers they can gradually remove simplifying assumptions and guesswork premises---make the derivations more rigorous---analogous to "second order" or higher loop.

Indeed there has already been some followup to the March 2010 "Towards Spinfoam Cosmology" paper. Notice the "towards": it is meant to get a research move started, and it has.


 
  • #28
Good. "Towards" would be the correct word to use for the "large scale limit". It has not given the right large scale limit yet.
 
  • #29
tom.stoer said:
If this is true then we should expect that a fundamental theory like LQG should be able to make a prediction regarding Lambda as well - instead of fixing Lambda algebraically / as an input.

I still do not see the problem in fixing that algebraically, seriously. Can you explain it?
 
  • #30
Compare and contrast.

http://arxiv.org/abs/1003.3483 "In detail, we have studied three approximations: (i) cutting the theory to a finite dimensional graph (the dipole), (ii) cutting the spinfoam expansion to just one term with a single vertex and (iii) the large volume limit. The main hypothesis on which this work is based is that the regime of validity in which these approximations are viable includes the semiclassical limit of the dynamics of large wavelengths. “Large” means here of the order to the size of the universe itself."

http://arxiv.org/abs/1007.2560 "A key feature to appreciate here is that, unlike in standard (quantum-)cosmological treatments, this description is the outcome of a nonperturbative evaluation of the full path integral, with everything but the scale factor (equivalently, V3(t)) summed over".
 
  • #31
MTd2 said:
I still do not see the problem in fixing that algebraically, seriously. Can you explain it?
If AS is right to some extent then Lambda is running and you simply can't fix it algebraically! So either you allow for "dynamical q-deformation in quantum groups" or you apply the Kadanoff block-spin transformation to the spin networks and derive a kind of renormalization group equation for "intertwiner coarse-graining".

It is clear that you don't see the problem of fixed Lambda in the large-distance / cosmological limit; it is this limit where we observe "fixed Lambda" in nature. But in a fully dynamical setup you can't expect one bare parameter to remain fixed. If this were true then LQG must explain the reason for it, e.g. a special kind of symmetry protecting Lambda from running. Up to now it's mysterious.
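To spell out what "running" means here, schematically and in my own paraphrase of the usual AS bookkeeping (not taken from the papers discussed in this thread):

```latex
% Dimensionless couplings in the AS picture (d = 4, schematic):
g(k) = G(k)\,k^{2}, \qquad \lambda(k) = \Lambda(k)\,k^{-2}
% Near the non-Gaussian fixed point (g^{*}, \lambda^{*}):
\Lambda(k) \sim \lambda^{*}\, k^{2} \quad (k \to \infty)
% So the bare Lambda is scale-dependent; a fixed algebraic
% deformation parameter would have to explain why it stops running.
```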
 
  • #32
tom.stoer said:
If AS is right to some extent then Lambda is running and you simply can't fix it algebraically! So either you allow for "dynamical q-deformation in quantum groups" or you apply the Kadanoff block-spin transformation to the spin networks and derive a kind of renormalization group equation for "intertwiner coarse-graining".

It is clear that you don't see the problem of fixed Lambda in the large-distance / cosmological limit; it is this limit where we observe "fixed Lambda" in nature. But in a fully dynamical setup you can't expect one bare parameter to remain fixed. If this were true then LQG must explain the reason for it, e.g. a special kind of symmetry protecting Lambda from running. Up to now it's mysterious.

Whether Lambda runs or not is an interesting question.

I don't have much to say, except to throw in a long-shot connection to my own thinking: I once speculated about a link between the E-H action and information divergence (which is very similar to an action, since extremal action and extremal information divergence are, at minimum, very closely related principles, both conceptually and mathematically).
https://www.physicsforums.com/showthread.php?t=239414

When I posted that, I afterwards realized that it was too tenuous for anyone else to connect to.

My conclusion was that the constant will likely run, though not so much with the observational scale as with the observer-complexity scale. My take on theory scaling is that, unlike what I think is common practice, there have to be TWO energy scales. First there is the scale at which you look, i.e. how far you zoom in using a microscope or an accelerator. The other energy scale is the one at which the information is coded. In common physics the latter does NOT scale; it is somehow quasi-fixed by our "earth-based lab scale".

My point is that we SHOULD consider "zooming the microscope" and scaling the microscope itself independently, because there is a difference. Somehow the latter scale puts a BOUND on how far the former scale can run.

If anyone knows anyone who takes this seriously and has some references, I'd be extremely interested. What I suggest is that the very nature of RG may also need improvement, because theory scaling as we know it has fixed one scale: the Earth-based scale. Nothing wrong with that per se as an effective perspective, but I think a deeper understanding may come if we acknowledge both scales.

/Fredrik
 
  • #33
tom.stoer said:
It is clear that you don't see the problem of fixed Lambda in the large-distance / cosmological limit; it is this limit where we observe "fixed Lambda" in nature.

Yes, that one. The paper with cc is barely out. I guess you are asking too much...
 
  • #34
Alright, what a coincidence,

http://arxiv.org/abs/1101.4788

it seems to find the correct order of magnitude of the cosmological constant for LQG, and it also seems to have a UV behavior just like AS...
 
  • #35
MTd2 said:
Yes, that one. The paper with cc is barely out. I guess you are asking too much...
No no. I don't want to criticize anybody (Rovelli et al.) for not developing a theory of the cc. I simply want to say that this paper does not answer this fundamental question and does not explain how the cc could fit into an RG framework (as is expected for other couplings).

---------------------

We have to distinguish two different approaches (I bet Rovelli sees this more clearly than I do).
- deriving LQG from the EH or Holst action, Ashtekar variables, loops, ..., extending it via q-deformation etc.
- defining LQG by simple algebraic rules, constructing its semiclassical limit and deriving further physical predictions

The first approach was developed for decades, but it still fails to provide all required insights, especially H. The second approach is not bad, as it must be clear that any quantization of a classical theory is intrinsically incomplete; it can never resolve quantization issues, operator ordering, etc. Having this in mind, it is no worse to "simply write down a quantum theory". The problem with that approach was never the correct semiclassical limit (that is a minor issue) but writing down a quantum theory w/o referring to classical expressions!

Look at QCD (again :-) Nobody is able to "guess" the QCD Hamiltonian; every attempt to do so would break numerous symmetries. So one tries (tried) to "derive" it. Of course there are difficulties like infinities, but one has rather good control regarding symmetries. Nobody is able to write down the QCD PI w/o referring to the classical action (of course it's undefined, infinite, has ambiguities, ..., but it does not fail from the very beginning). Btw.: this hasn't changed over decades, but nobody cares, as the theory seems to make the correct predictions.

Now look at LQG. The time for derivations may be over. So instead of deriving LQG (which, by my argument explained above, is not possible to 100%) one may simply postulate LQG. The funny thing is that, in contradistinction to QCD, we seem to be able to write down a class of fully consistent theories of quantum gravity w/o derivation, w/o referring to classical expressions, w/o breaking certain symmetries, etc. The only (minor!) issue is the derivation of the semiclassical limit etc.

From a formal perspective this is a huge step forward. If this formal approach is correct, my concerns regarding the cc are a minor issue only.
 