Signs LQG has the right redefinition (or wrong?)

  • Thread starter marcus
In summary, the 2011 Zakopane QG school takes place during the first two weeks of March. Rovelli has 10 hours of lectures, presumably to present his current understanding of the theory at a level suitable for advanced PhD students and postdocs wanting to get into LQG research. This will be, I guess, the live definitive version.
  • #36
What is a semiclassical limit for you?
Why would it be a fundamental question whether the cc fits into an RG framework? :confused:
 
  • #37
@Tom
post #35 gives an insightful and convincing perspective. Also it leaves open the question of what will be the definitive form(s) of the theory. Because you earlier pointed out that at a deeper level a theory can have several equivalent presentations.

I had a minor comment about that. For me, the best presentation of the current manifoldless version is not the absolute latest (December's 1012.4707) but rather October's 1010.1939. And I would say that the notation differs slightly between them, and also that (from the standpoint of a retired mathematician with bad eyesight) their notation is inadequate/imperfect.

If anyone wants to help me say this, look at 1010.1939 and you will see that there is no symbol for a point in the group manifold SU(2)^L = G^L = G × G × ... × G.
Physicists think that they can write down x_i and have this mean either x_i or else the N-tuple (x_1, x_2, ..., x_N), depending on context. This is all right to a certain extent, but after a point it becomes confusing.

In many ways I think the presentation in 1010.1939 is the clearest, but it is still deficient.
Maybe I will expand on that a bit, if it will not distract from more meaningful discussion.

============

BTW, in line with what Tom said in the previous post, there are obviously several different ways LQG can fail, not just one way. One failure mode is mathematical simplicity/complexity. To be successful a theory should (ideally) be mathematically simple.
As well as passing the empirical tests.

One point in favor of the 1010.1939 form is that it "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h),
where C is a two-complex with L boundary or "surface" edges, each h_l is a generic element of SU(2), and h = (h_1, h_2, ..., h_L) is a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The notation h is my notational crutch which I use to keep order in my head. Rovelli, instead, makes free use of the subscript "l" which runs from 1 to L, and has no symbol for h.

The central quantity in the theory is the complex number Z_C(h), and one can think of that number as saying

Z_roadmap(boundary conditions)
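
Since I keep leaning on this h notation, here is a tiny Python sketch (my own toy illustration, nothing from the paper; the value L = 6 is made up) of what "a generic element of SU(2)^L" amounts to concretely: just an L-tuple of 2x2 unitary matrices of determinant 1.

[code=python]
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def random_su2(rng):
    """A random SU(2) element exp(-i theta/2 n.sigma), random axis n and angle theta."""
    theta = rng.uniform(0, 2 * np.pi)
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    # su(2) identity: exp(-i theta/2 n.sigma) = cos(theta/2) I - i sin(theta/2) n.sigma
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * (n[0]*sx + n[1]*sy + n[2]*sz)

rng = np.random.default_rng(0)
L = 6                                           # number of boundary edges (made up)
h = tuple(random_su2(rng) for _ in range(L))    # h = (h_1, ..., h_L), a point of SU(2)^L

for h_l in h:
    assert np.allclose(h_l @ h_l.conj().T, np.eye(2))   # unitary
    assert np.isclose(np.linalg.det(h_l), 1)            # determinant 1
[/code]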
 
  • #38
The thing I like about LQG is that although the ideas, or the redefinition for that matter, may turn out to be incorrect, they are making progress and aren't afraid to delve into these unique concepts. I've never seen so many original papers come out in a year in one specific research program!

All I see now from String Theory research programs is [tex]AdS_5 \times S^5[/tex] and holographic superconductors; they haven't really ventured into other ideas. Is [tex]AdS/CFT[/tex] even a physical theory at this point, is it possible in our universe? I don't know, but many interesting things are going on in LQG and its relatives such as CDT that appear much more interesting than the plateau that ST is facing. What the "heck" is a holographic superconductor anyway?

I think the real notion that must be addressed is the nature of space-time itself. I feel that all of our ideas in Physics rely on specific space-time backgrounds, and therefore having a quantum description of space-time at a fundamental level is a clearer approach - which LQG takes. Does ST address this idea, is [tex]AdS/CFT[/tex] a valid idea? Anyways, enough with the merits of ST; what is LQG lacking?
 
  • #39
Kevin_Axion said:
...
I think the real notion that must be addressed is the nature of space-time itself.

I think that is unquestionably correct. The issue is the smooth manifold, invented by Bernie Riemann around 1850 and introduced to mathematicians with the help and support of Carl Gauss at Gottingen around that time. It is a continuum with a differential structure---technically the general idea is called "differentiable manifold".

The issue is whether or not it is time to replace the manifold with something lighter, more finite, more minimal, more "informatic" or information-theoretical.

If the historical moment is ripe to do this, then Rovelli and associates are making a significant attempt which may show the way. If the historical moment is not ripe to replace the manifold (as model of spacetime) then they will be heading off into the jungle to be tormented by savages, mosquitoes and malaria.

At the present time the proposed minimalist/informatic structure to replace manifold is a 2-complex. Or, ironically, one can also work with a kind of "dual" which is a full-blown 4D differential manifold which has a 2-complex of "defect" removed from it and is perfectly flat everywhere else.
A two-complex is basically just like a graph (of nodes and links) except it has one higher dimensionality (vertices, edges, faces). A two-complex is mathematically sufficient to carry a sketch of the geometric information (the curvatures, angles, areas between event-marked regions,...) contained in a 4D manifold where this departs from flatness. A two-complex provides a kind of finite combinatorial shorthand way of writing down the geometry of a 4D continuum.

So we will watch and see how this goes. Is it time to advance from the 1850 spacetime manifold beachhead, or not yet time to do that?

marcus said:
...

The central quantity in the theory is the complex number Z_C(h) and one can think of that number as saying

Z_roadmap(boundary conditions)
 
  • #40
So essentially quantum space-time is nodes connecting to create 4D tetrahedrons?
 
  • #41
Kevin_Axion said:
So essentially quantum space-time is nodes connecting to create 4D tetrahedrons?
I'm agnostic about what nature IS. I like the Niels Bohr quote that says physics is not about what nature is, but rather what we can say about it.

Also another favorite is the Rovelli quote that QG is not about what spacetime is but about how it responds to measurement.

(there was a panel discussion and he was trying to say that arguments about whether it is really made of chainlink-fence, or tinkertoy, or lego-blocks, rubberbands, or tetrahedra, or the 4D analog of tets, called 4-simplices, or general N-face polyhedra...are not good arguments. How one sets up is really just a statement about how one intends to calculate. One calculates the correlations between measurements/events. The panel discussion was with Ashtekar and Freidel, at PennState in 2009, as I recall. I can get the link if anyone is interested. It told me that QG is about geometric information, i.e. observables, not about "ontology". So I liked that and based my agnosticism on it.)

BTW I think human understanding grows gradually, almost imperceptibly, like a vine up a wall. Nothing works if it is too big a step, or jump. Therefore, for me, there is no final solution, there are only the small steps that the human mind can take now. The marvel of LQG, for me, is that it actually seems as if it might be possible to take this step now, and begin to model spacetime with something besides a manifold, and yet still do calculations (not merely roll the Monte Carlo simulation dice of CDT and Causets.)

But actually, Kevin, YES! :biggrin: Loosely speaking, the way almost everyone does speak, and with the weight on "essentially" as I think you meant it, in this approach spacetime essentially is something like what you said!
 
  • #42
tom.stoer said:
The problem with that approach was never the correct semiclassical limit (this is a minor issue) but the problem to write down a quantum theory w/o referring to classical expressions!

In the past two years I have repeatedly tried to stimulate a discussion on this issue, with no luck; everybody seems to be happy with it or to just accept it. I have never seen any good thread on this issue because it seems to be sacrilegious to talk about it.

Moreover, I think the real culprit is differential equations. They are inherently guesswork; the technique is always to "add terms" to get them to fit experiment, not to mention that they only relate points to their neighbors and come with the notorious requirement of boundary conditions. They have served us well for a long time, but no fundamental theory should be like that.

As for LQG, the original idea was just the only option for making GR look like quantum theory and "seeing what happens", only for Rovelli to conclude that spacetime and matter should be related. But how? LQG is giving hints which have not been capitalized on. I still think spacetime is "unphysical" and must be derived from matter, not the other way around.
 
  • #43
Kevin_Axion said:
So essentially quantum space-time is nodes connecting to create 4D tetrahedrons?

Just a little language background, in case anyone is interested: The usual name for the analogous thing in 4D, corresponding to a tet in 3D, is "4-simplex"

Tetrahedron means "four faces", and a tetrahedron does have four (triangular) faces. A tet is also a "3-simplex" because it is the simplex that lives in 3D. Just like a triangle is a 2-simplex.

The official name for a 4-simplex is "pentachoron"; choron means 3D room in Greek. The boundary of a pentachoron consists of five 3D "rooms"---five tetrahedrons.

To put what you said more precisely

So essentially quantum space-time is nodes connecting to create pentachorons?

Loosely speaking that's the right idea. But we didn't touch on the key notion of duality. It is easiest to think of in 2D. Take a pencil and triangulate a flat piece of paper with black equilateral triangles. Then put a blue dot in the center of each triangle and connect two dots with a blue line if their triangles are adjacent.

The blue pattern will look like a honeycomb hexagon tiling of the plane. The blue pattern is dual to the black triangulation. Each blue node is connected to three others.

Then imagine it in 3D where you start by triangulating regular 3D space with tetrahedra. Then you think of putting a blue dot at the center of each tet, and connect it with a blue line to each of the 4 neighbor blue dots in the 4 adjacent tets.
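
If anyone wants to see the 2D "blue dot" construction spelled out, here is a minimal Python sketch (the little triangle list is made-up data, purely for illustration): one blue node per triangle, one blue link per shared edge.

[code=python]
from itertools import combinations

# A tiny piece of a triangulation: each triangle is a set of 3 vertex labels (toy data)
triangles = [frozenset(t) for t in [(0, 1, 2), (1, 2, 3), (2, 3, 4), (1, 3, 5)]]

def share_an_edge(t1, t2):
    # two triangles are adjacent when they have exactly two vertices (one edge) in common
    return len(t1 & t2) == 2

# The dual ("blue") graph: a node for each triangle, a link for each adjacency
dual_nodes = list(range(len(triangles)))
dual_links = [(i, j) for i, j in combinations(dual_nodes, 2)
              if share_an_edge(triangles[i], triangles[j])]

print(dual_links)   # [(0, 1), (1, 2), (1, 3)] -- interior triangles get 3 links each
[/code]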

In some versions of LQG, the spin networks---the graphs that describe 3D spatial geometry--- are restricted to be dual to triangulations. And in 4D where there are foams (analogous to graphs), only foams which are dual to triangulations are allowed.

These ideas---simplexes, triangulations that chop up space or spacetime into simplexes, duals, etc.---become very familiar and non-puzzling. One gets used to them.

So that would be an additional wrinkle to the general idea you expressed.

Finally, it gets simpler again. You throw away the idea of triangulation and just keep the idea of a graph (for 3D) and a foam thought of either as 4D geometry, or as the evolution of 3D geometry. And you let the graphs and foams be completely general, so no more headaches about the corresponding dual triangulation or even whether there is one. You just have general graphs and two-complexes, which carry information about observables (area, volume, angle,...)
===============================

Kevin, one could say that all this stuff about tetrahedrons and pentachorons and dual triangulations is just heuristic detail that helps people get to where they are going, and at some point becomes extra baggage---unnecessary complication---and gets thrown out.

You can for instance look at 1010.1939. In fact it might do you good. You see a complete presentation of the theory in very few pages and no mention of tetrahedrons :biggrin:

Nor is there any mention of differentiable manifolds. So there is nothing to chop up! There are only the geometric relations between events/measurements. That is all we ever have, in geometry. Einstein pointed it out already in 1916 "the principle of general covariance deprives space and time of the last shred of objective reality". Space has no physical existence, there are only relations among events.

We get to use all the lego blocks we want and yet there are no legoblocks. Something like that...
 
  • #44
At any rate, let's get back to the main topic. There is this new formulation, best presented in http://arxiv.org/abs/1010.1939 or so I think, and we have to ask whether it is simple enough, and also wonder if it will be empirically confirmed. It gives Feynman rules for geometry, leading to a way of calculating a transition amplitude---a certain complex number---which I wrote

Z_roadmap(boundary conditions)

the amplitude (like a probability) of going from initial to final boundary geometry following the Feynman diagram roadmap of a certain two-complex C.

A twocomplex is a finite list of abstract vertices, edges, faces: vertices where the edges arrive and depart and faces bordered by edges (the list says which connect with which).

Initial and final geometry details come as boundary edge labels which are elements of a group G = SU(2). There is some finite number L of boundary edges, so the list of L group elements labeling the edges can be written h = (h_1, h_2, ..., h_L).

So, in symbols, the complex number is Z_C(h). The theory specifies a formula for computing this, which is given by equation (4) on page 1 of http://arxiv.org/abs/1010.1939 , the paper I mentioned.

Here is an earlier post that explains some of this:
marcus said:
@Tom
post #35 gives an insightful and convincing perspective. Also it leaves open the question of what will be the definitive form(s) of the theory. Because you earlier pointed out that at a deeper level a theory can have several equivalent presentations.

I had a minor comment about that. For me, the best presentation of the current manifoldless version is not the absolute latest (December's 1012.4707) but rather October's 1010.1939. And I would say that the notation differs slightly between them, and also that (from the standpoint of a retired mathematician with bad eyesight) their notation is inadequate/imperfect.

If anyone wants to help me say this, look at 1010.1939 and you will see that there is no symbol for a point in the group manifold SU(2)^L = G^L = G × G × ... × G.
Physicists think that they can write down x_i and have this mean either x_i or else the N-tuple (x_1, x_2, ..., x_N), depending on context. This is all right to a certain extent, but after a point it becomes confusing.

In many ways I think the presentation in 1010.1939 is the clearest, but it is still deficient.
Maybe I will expand on that a bit, if it will not distract from more meaningful discussion.

============

BTW, in line with what Tom said in the previous post, there are obviously several different ways LQG can fail, not just one way. One failure mode is mathematical simplicity/complexity. To be successful a theory should (ideally) be mathematically simple.
As well as passing the empirical tests.

One point in favor of the 1010.1939 form is that it "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h),
where C is a two-complex with L boundary or "surface" edges, each h_l is a generic element of SU(2), and h = (h_1, h_2, ..., h_L) is a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The notation h is my notational crutch which I use to keep order in my head. Rovelli, instead, makes free use of the subscript "l" which runs from 1 to L, and has no symbol for h.

The central quantity in the theory is the complex number Z_C(h), and one can think of that number as saying

Z_roadmap(boundary conditions)
 
  • #45
The way equation (4) works is you let boundary information (h) percolate into the foam from its outside surface, and you integrate up all the other labels that the twocomplex C might have, compatible with what is fixed on the surface.

The foam is like an information-sponge, with a certain welldefined boundary surface (actually a 3D hypersurface geometry, think initial + final) and you paint the outside of the sponge with some information-paint h
and the paint seeps and soaks into the inside, and constrains what colors can be there to some extent. Then you integrate out, over all what can be inside, compatible with the boundary.

So in the end the Z amplitude depends only on the choice of the unlabeled roadmap C, a pure unlabeled diagram, plus the L group element labels on the boundary graph.

If the group-labeled boundary graph happens to have two connected components you can call one "initial geometry" and one "final geometry" and then Z is a "transition amplitude" from initial to final, along the twocomplex roadmap C.
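
If a toy calculation helps fix the picture, here is a deliberately dumb Python sketch of the sponge idea: the boundary labels are held fixed, the interior labels are summed over ("integrated out"), and the answer depends only on the boundary data. The weight function is completely made up; only the structure is the point.

[code=python]
from itertools import product

def weight(b1, b2, i1, i2, i3):
    # a made-up local amplitude; a stand-in for the real integrand of equation (4)
    return 2.0 ** (-(b1 + b2 + i1 + i2 + i3))

def Z(boundary, interior_values=(0, 1)):
    """Fix the boundary labels, sum over everything inside."""
    b1, b2 = boundary
    return sum(weight(b1, b2, *interior)
               for interior in product(interior_values, repeat=3))

print(Z((0, 0)), Z((0, 1)))   # 3.375 1.6875 -- only the fixed boundary data matters
[/code]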

BTW Etera Livine just came out with a 90-page survey and tutorial paper on spinfoam. It is his habilitation, so he can be research director at Lyon, a job he has already been performing from the looks of it. Great! Etera has posted here at PF Beyond sometimes. His name means Ezra in the local-tradition language where he was raised. A good bible name. For some reason I like this. I guess I like the name Ezra. Anyway he is a first-rate spinfoam expert and we can probably find this paper helpful.

http://arxiv.org/abs/1101.5061
A Short and Subjective Introduction to the Spinfoam Framework for Quantum Gravity
Etera R. Livine
90 pages
(Submitted on 26 Jan 2011)
"This is my Thèse d'Habilitation (HDR) on the topic of spinfoam models for quantum gravity, which I presented in l'Ecole Normale Supérieure de Lyon on december 16 2010. The spinfoam framework is a proposal for a regularized path integral for quantum gravity, inspired from Topological Quantum Field Theory (TQFT) and state-sum models. It can also be seen as defining transition amplitudes for the quantum states of geometry for Loop Quantum Gravity (LQG)."

It may interest you to go to page 61 where begins Etera's Chapter 4 What's Next for Spinfoams?
 
  • #46
Awesome, thanks for the detailed explanation marcus! I'm in grade 11 so the maths only makes partial sense to me but the words will be good enough for now. About connecting the points in the center of the triangles, so you always have an N-polygon with three N-polygons meeting at each vertex, what is the significance of that, will you have more meeting at each vertex with pentachorons (applying the same procedure) because there exist more edges?
 
  • #47
Kevin_Axion said:
... About connecting the points in the center of the triangles, so you always have an N-polygon with three N-polygons meeting at each vertex, what is the significance of that, will you have more meeting at each vertex with pentachorons (applying the same procedure) because there exist more edges?
My writing wasn't clear Kevin. The thing about only three meeting was just a detail I pointed out about the situation on the plane when you go from equilateral triangle tiling to the dual, which is hexagonal tiling. I wanted you to picture it concretely. That particular aspect does not generalize to other polygons or to other dimensions. I was hoping you would draw a picture of how there can be two tilings each dual to the other.

It would be a good brain-exercise, I think, to imagine how ordinary 3D space can be "tiled" or triangulated by regular tetrahedra. You can set down a layer of pyramids pointing up, but then how do you fill in? Let's say you have to use regular tets (analogous to equilateral triangles) for everything.

And when you have 3D space filled with tets, what is the dual to that triangulation? This gets us off topic. If you want to pursue it maybe start a thread about dual cell-complexes or something? I'm not an expert but there may be someone good on that.
 
  • #48
The Wiki article is good: "The 5-cell can also be considered a tetrahedral pyramid, constructed as a tetrahedron base in a 3-space hyperplane, and an apex point above the hyperplane. The four sides of the pyramid are made of tetrahedron cells." - Wikipedia: 5-cell, http://en.wikipedia.org/wiki/Pentachoron#Alternative_names
Anyways, I digress. I'm sure this is slightly off-topic.
 
  • #49
Oh good! You are on your own. I googled "dual cell complex" and found this:
http://www.aerostudents.com/files/constitutiveModelling/cellComplexes.pdf

Don't know how reliable or helpful it may be.
 
  • #50
I understand some vector calculus, and that appears to be the math being used. Thanks, I'm sure that will be useful!
 
  • #51
marcus said:
It would be a good brain-exercise, I think, to imagine how ordinary 3D space can be "tiled" or triangulated by regular tetrahedra. You can set down a layer of pyramids pointing up, but then how do you fill in? Let's say you have to use regular tets (analogous to equilateral triangles) for everything.

And when you have 3D space filled with tets, what is the dual to that triangulation? This gets us off topic. If you want to pursue it maybe start a thread about dual cell-complexes or something? I'm not an expert but there may be someone good on that.


Regular tetrahedra can not fill space. Tetrahedra combined with octahedra can fill space. See isotropic vector matrix or octet-truss.

...and I think the dual is packed rhombic dodecahedra
 
  • #52
marcus said:
Oh good! You are on your own. I googled "dual cell complex" and found this:
http://www.aerostudents.com/files/constitutiveModelling/cellComplexes.pdf

Don't know how reliable or helpful it may be.

The dual skeleton is defined quite nicely on p. 31 in this paper: http://arxiv.org/abs/1101.5061

which you identified in the bibliography thread.
 
  • #53
sheaf said:
The dual skeleton is defined quite nicely on p. 31 in this paper: http://arxiv.org/abs/1101.5061

which you identified in the bibliography thread.

Thanks! I checked page 31 of Etera Livine's spinfoams paper and it does give a nice understandable presentation. That paper is like a little introductory textbook!
I will quote a sample passage from page 31:

==quote Livine 1101.5061 ==

Starting with the simpler case of a three-dimensional space-time, a space-time triangulation consist in tetrahedra glued together along their triangles. The dual 2-skeleton is defined as follows. The spinfoam vertices σ are dual to each tetrahedron. Those vertices are all 4-valent with the four attached edges being dual to the four triangles of the tetrahedron. Each edge e then relates two spinfoam vertices, representing the triangle which glues the two corresponding tetrahedra. Finally, the spinfoam faces f are reconstructed as dual to the triangulation’s edges. Indeed, considering an edge of the triangulation, we go all around the edge and look at the closed sequences of spinfoam vertices and edges which represent respectively all the tetrahedra and triangles that share that given edge. This line bounds the spinfoam face, or plaquette, dual to that edge. Finally, each spinfoam edge e has three plaquettes around it, representing the three triangulations edges of its dual triangle. To summarize the situation:

3d triangulation ↔ spinfoam 2-complex
___________________________________
tetrahedron T ↔ 4-valent vertex σ
triangle t ↔ edge e
edge ↔ plaquette f

The setting is very similar for the four-dimensional case. The triangulated space-time is made from 4-simplices glued together at tetrahedra. Each 4-simplex is a combinatorial structure made of 5 boundary tetrahedra, glued to each other through 10 triangles. Once again, we define the spinfoam 2-complex as the dual 2-skeleton:
...
==endquote==
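
Just to make the quoted dictionary concrete, here is a small Python sketch that builds the dual vertices, edges, and plaquettes for a made-up pair of glued tetrahedra (toy data and my own variable names, not Livine's notation):

[code=python]
from itertools import combinations
from collections import defaultdict

# Two tetrahedra glued along the triangle {1, 2, 3} (toy triangulation)
tets = [frozenset({0, 1, 2, 3}), frozenset({1, 2, 3, 4})]

# spinfoam vertex <-> tetrahedron
spinfoam_vertices = list(range(len(tets)))

# spinfoam edge <-> shared triangle (two tets with 3 vertices in common)
spinfoam_edges = [(i, j) for i, j in combinations(spinfoam_vertices, 2)
                  if len(tets[i] & tets[j]) == 3]

# spinfoam face (plaquette) <-> triangulation edge: collect the tets around each edge
plaquettes = defaultdict(list)
for i, tet in enumerate(tets):
    for edge in combinations(sorted(tet), 2):
        plaquettes[edge].append(i)

print(spinfoam_edges)      # [(0, 1)]
print(dict(plaquettes))    # e.g. edge (1, 2) is surrounded by dual vertices [0, 1]
[/code]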
 
  • #54
Helios said:
Regular tetrahedra can not fill space...

I think that is right, Helios. The dihedral angle of a regular tet is about 70.5 degrees, which does not go evenly into 360, so you cannot fit a whole number of them flat around a shared edge.

Suppose I allow two kinds of tet. Can it be done? Please tell us if you know.


[This may not be absolutely on topic, because all we need to accomplish what Etera is talking about is some sort of tetrahedral triangulation of space, which I'm pretty sure exists (if we relax the regularity condition slightly). But it's not a bad exercise for the imagination to think about it. Helios might be a good teacher here.]
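
For anyone who wants to check the 70.5 degree figure, here is a two-line Python verification (the dihedral angle of a regular tetrahedron is arccos(1/3)):

[code=python]
import math

dihedral = math.degrees(math.acos(1.0 / 3.0))
print(f"dihedral angle of a regular tet: {dihedral:.2f} degrees")   # about 70.53

# To fill flat 3D space, a whole number of regular tets would have to close up
# exactly around a shared edge, i.e. the angle would have to divide 360 evenly:
print(360.0 / dihedral)   # about 5.10, not an integer -- so they cannot
[/code]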
 
  • #55
Helios said:
Regular tetrahedra can not fill space.

But irregular tetrahedra can!
 
  • #56
MTd2 said:
But irregular tetrahedra can!

Indeed, only slightly irregular. The construction I was vaguely remembering was one in Loll's 2001 paper. I'll get the reference (Loll Ambjorn Jurkiewicz 2001). They are doing 2+1 gravity, so spacetime is 3D. The basic idea is simple layering. They have two types of tets, red and blue. Both look almost regular but slightly distorted. The red have an equilateral base but the wrong height (slightly taller or shorter than they should be). They set them out in a red layer covering a surface (a plane say) with little triangle-base pyramids.
Now where each pyramid meets its neighbor there is a kind of V-shaped canyon.
(I could be misremembering this, but you will, I hope, see how to correct me.)

The blue tets are also nearly regular but slightly stretched in some direction. They have a dihedral angle so that they precisely fit into that V-shape canyon. You hold the tet with one edge horizontal like the keel of a little boat. It fits right in. The top will be a horizontal edge rotated at right angles.

So now you have the upside-down picture of a blue layer with upside-down pyramid holes. So you put in red tets with their flat equilateral bases directed upwards. Now you have level ground again, made of their bases, and you can start another layer.

I could be wrong. I am just recalling from that paper by Renate Loll et al. I haven't checked back to see. Please correct me if I'm wrong about how they do it. Let me get the reference. This is the best introduction to CDT I know. It is easy, concrete, and does not gloss over anything. If anyone knows a better introduction, please say.

http://arxiv.org/abs/hep-th/0105267
Dynamically Triangulating Lorentzian Quantum Gravity
J. Ambjorn (NBI, Copenhagen), J. Jurkiewicz (U. Krakow), R. Loll (AEI, Golm)
41 pages, 14 figures
(Submitted on 27 May 2001)
"Fruitful ideas on how to quantize gravity are few and far between. In this paper, we give a complete description of a recently introduced non-perturbative gravitational path integral whose continuum limit has already been investigated extensively in d less than 4, with promising results. It is based on a simplicial regularization of Lorentzian space-times and, most importantly, possesses a well-defined, non-perturbative Wick rotation. We present a detailed analysis of the geometric and mathematical properties of the discretized model in d=3,4. This includes a derivation of Lorentzian simplicial manifold constraints, the gravitational actions and their Wick rotation. We define a transfer matrix for the system and show that it leads to a well-defined self-adjoint Hamiltonian. In view of numerical simulations, we also suggest sets of Lorentzian Monte Carlo moves. We demonstrate that certain pathological phases found previously in Euclidean models of dynamical triangulations cannot be realized in the Lorentzian case."
 
  • #57
I welcome disagreement and corrections, but I want to keep hitting the main topic. I think there are signs that LQG has made the right redefinition and has reached an exciting stage of development. Please disagree, either in general or on details. I will give some details.

First notice that CDT, AsymSafe, and Causets appear persistently numerical (not analytic)---they run on massive computer experiments instead of equations. This is a wonderful way to discover things, a great heuristic tool, but it does not prove theorems. At least so far, many of the other approaches seem insufficiently analytical and lack the symbolic equations that are traditional in physics.

As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The title of Dan Oriti's QG anthology said "towards a new understanding of space, time and matter". That is one way of saying what the QG researchers' goal is. A new understanding of space and time, and maybe laying out matter on a new representation of space and time, will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among alternative QGs, the LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classical de Sitter universe.
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.

So I will copy my last substantive post about that and try to move forward from there.

marcus said:
@Tom
post #35 gives an insightful and convincing perspective. Also it leaves open the question of what will be the definitive form(s) of the theory. Because you earlier pointed out that at a deeper level a theory can have several equivalent presentations.

I had a minor comment about that. For me, the best presentation of the current manifoldless version is not the absolute latest (December's 1012.4707) but rather October's 1010.1939. And I would say that the notation differs slightly between them, and also that (from the standpoint of a retired mathematician with bad eyesight) their notation is inadequate/imperfect.

If anyone wants to help me say this, look at 1010.1939 and you will see that there is no symbol for a point in the group manifold SU(2)^L = G^L = G × G × ... × G.
Physicists think that they can write down x_i and have this mean either x_i or else the N-tuple (x_1, x_2, ..., x_N), depending on context. This is all right to a certain extent, but after a point it becomes confusing.

In many ways I think the presentation in 1010.1939 is the clearest, but it is still deficient.
Maybe I will expand on that a bit, if it will not distract from more meaningful discussion.

============

BTW, in line with what Tom said in the previous post, there are obviously several different ways LQG can fail, not just one way. One failure mode is mathematical simplicity/complexity. To be successful a theory should (ideally) be mathematically simple.
As well as passing the empirical tests.

One point in favor of the 1010.1939 form is that it "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h),
where C is a two-complex with L boundary or "surface" edges, each h_l is a generic element of SU(2), and h = (h_1, h_2, ..., h_L) is a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The notation h is my notational crutch which I use to keep order in my head. Rovelli, instead, makes free use of the subscript "l" which runs from 1 to L, and has no symbol for h.

The central quantity in the theory is the complex number Z_C(h), and one can think of that number as saying

Z_roadmap(boundary conditions)
 
  • #58
To recapitulate, there are signs the 2010 reformulation might be right---or to put it another way, good reasons for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.

There is a relatively simple, direct way to grasp the theory: understand equation (4) on page 1 of that paper. That equation defines the central quantity of the theory: a complex number Z_C(h). It is a geometry evolution amplitude---the amplitude (related to probability) that the geometry will evolve from the initial to the final configuration specified by the boundary labels denoted h, along a roadmap specified by the twocomplex ("foam") denoted C.

Z_roadmap(boundary conditions)

There is no extra baggage, no manifold, no embeddings. Understanding the theory comes down to understanding that one equation (4).

I've made one change in notation from what you see in equation (4), namely introduced
a symbol h to stand for (h_1, h_2, ..., h_L), the generic element of SU(2)^L. L is the number of boundary links in the network surrounding the foam. So h is an ordered collection of group elements helping to determine geometric boundary conditions.

One thing on the agenda, if we want to understand (4) is to see why the integrals are over the specified number of copies of the groups----why there are that many labels to integrate out, instead of some other number. So for example you see on the first integral the exponent 2(E-L) - V. We integrate over that many copies of the group. Let's see why it is that number. E and V are the numbers of edges and vertices in the foam C. So E-L is the number of internal edges.
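
Here is that counting as a trivial Python sketch, with made-up numbers for a hypothetical foam (E, L, V are not taken from any real example):

[code=python]
# Bookkeeping behind the exponent on the first integral of equation (4)
E = 10   # total number of edges of the two-complex C (made up)
L = 4    # boundary ("surface") edges (made up)
V = 3    # vertices (made up)

internal_edges = E - L                  # edges in the interior of the foam
sl2c_copies = 2 * internal_edges - V    # two integrations per internal edge,
                                        # minus one redundant integration per vertex
print(internal_edges, sl2c_copies)      # 6 9
[/code]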
 
  • #59
tom.stoer said:
The only (minor!) issue is the derivation of the semiclassical limit etc.

Why is this only a minor issue?

How about the classical limit?
 
  • #60
I think that the derivation of a certain limit is a minor issue compared to the problem that a construction of a consistent, anomaly-free theory (derived as quantization of a classical theory) is not available.
 
  • #61
@Tom
The post #35 which Atyy just now quoted was one of the most cogent (convincing) ones on the thread. It is balanced and nuanced, so I want to quote the whole of it, as context. I think I understand how, when you look at it in the entire context, you can say that verifying some limit is a project of minor stature compared with postulating a QFT which is not "derived" from a classical theory by traditional "tried-and-true" methods.
tom.stoer said:
... I don't want to criticize anybody (Rovelli et al.) for not developing a theory for the cc. I simply want to say that this paper does not answer this fundamental question and does not explain how the cc could fit into an RG framework (as is expected for other couplings).

---------------------

We have to distinguish two different approaches (I bet Rovelli sees this more clearly than I do).
- deriving LQG based on the EH or Holst action, Ashtekar variables, loops, ... extending it via q-deformation etc.
- defining LQG using simple algebraic rules, constructing its semiclassical limit and deriving further physical predictions

The first approach was developed for decades, but still fails to provide all required insights like (especially) H. The second approach is not bad, as it must be clear that any quantization of a classical theory is intrinsically incomplete; it can never resolve quantization issues, operator ordering etc. Having this in mind it is not worse to "simply write down a quantum theory". The problem with that approach was never the correct semiclassical limit (this is a minor issue) but the problem to write down a quantum theory w/o referring to classical expressions!

Look at QCD (again :-) Nobody is able to "guess" the QCD Hamiltonian; every attempt to do this would break numerous symmetries. So one tries (tried) to "derive" it. Of course there are difficulties like infinities, but one has a rather good control regarding symmetries. Nobody is able to write down the QCD PI w/o referring to the classical action (of course its undefined, infinite, has ambiguities ..., but it does not fail from the very beginning). Btw.: this hasn't changed over decades, but nobody cares as the theory seems to make the correct predictions.

Now look at LQG. The time for derivations may be over. So instead of derived LQG (which by my argument explained above is not possible to 100%) one may simply postulate LQG. The funny thing is that in contradistinction to QCD we seem to be able to write down a class of fully consistent theories of quantum gravity w/o derivation, w/o referring to classical expressions, w/o breaking of certain symmetries etc. The only (minor!) issue is the derivation of the semiclassical limit etc.

From a formal perspective this is a huge step forward. If this formal approach is correct, my concerns regarding the cc are a minor issue only.

Postulating is the word you used. It may indeed be time to postulate a quantum understanding of space and time, rather than continue struggling to derive. After all I suppose one could say that Quantum Theory itself was originally "invented" by strongly intuitive people like Bohr and Heisenberg with the help of their more mathematically adept friends. It had to be invented de novo before one could say what it means to "quantize" some classical thing.

Or it may not yet be time to take this fateful step of postulating a new spacetime and a new no-fixed-manifold field theory.

So there is the idea of the stature of the problem. A new idea of spacetime somehow has more stature than merely checking a limit. If the limit is wrong one can often go back and fix what was giving the trouble. We already saw that in LQG in 2007. So it could be no big deal compared with postulating the right format in the first place. I can see the sense of your saying "minor".

 
  • #62
tom.stoer said:
I think that the derivation of a certain limit is a minor issue compared to the problem that a construction of a consistent, anomaly-free theory (derived as quantization of a classical theory) is not available.

Yes, there is no need, in fact no reason, to go from classical theory to quantum theory. But aren't the semiclassical and classical limits very important? We seek all quantum theories consistent with the known experimental data. This is the same sort of concern as the requirement that string theory be shown to contain the standard model of particle physics. We ask if there is more than one such theory, so that future experiments and observations can distinguish between them.
 
  • #63
I agree that deriving this limit is important, but if there is a class of theories they may differ only in the quantum regime (e.g. by operator ordering or anomalies which may vanish in the classical limit), and therefore this limit doesn't tell us much about the quantum theory itself.
 
  • #64
continuing on bit by bit with the project I mentioned earlier of understanding equation (4)
marcus said:
...
One thing on the agenda, if we want to understand (4) is to see why the integrals are over the specified number of copies of the groups----why there are that many labels to integrate out, instead of some other number. So for example you see on the first integral the exponent 2(E-L) - V. We integrate over that many copies of the group. Let's see why it is that number. E and V are the numbers of edges and vertices in the foam C. So E-L is the number of internal edges.

I try to use only regular symbols and avoid going to Tex, so I cannot duplicate the fancy script Vee used for the total valence of all the faces of the two-complex C.
That is, you count the number of edges that each face f has, and add it all up.
Naturally there will be overcounting because a given edge can belong to several faces.
So this number is bigger than E the number of edges.

I see no specially good symbol so I will make a bastard use of the backwards ∃
to stand for the total edges of all the faces, added up.

Now in equation (4) you see there is the second integral, which is over a cartesian product of ∃-L copies of the group SU(2). Namely, it is a Haar measure integral over SU(2)^(∃-L).

How to think about this? We look at the total sides ∃ of all the faces and we throw away the boundary edges, and we keep only the internal edges in our count. Now this goes back to equation (2)! "a group integration to each couple consisting of a face and an internal edge." So that is beginning to make sense. BTW anyone who wants to help talk through the sums and integrals of equation (4) is heartily welcome!
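
To make that count concrete, here is a small Python sketch with a made-up two-complex (the face and edge names are mine, purely illustrative): the total valence ∃ and the internal valence ∃-L come straight out of the face lists.

[code=python]
# Each face listed with the edges that border it (toy data);
# the b-edges are the boundary edges, each bordering exactly one face.
faces = {
    "f1": ["e1", "e2", "e3", "b1"],
    "f2": ["e2", "e4", "b2"],
    "f3": ["e3", "e4", "e5", "b3", "b4"],
}
boundary_edges = {"b1", "b2", "b3", "b4"}

total_valence = sum(len(edges) for edges in faces.values())   # the backwards-E number
L = len(boundary_edges)
internal_valence = total_valence - L     # copies of SU(2) in the second integral

# cross-check: count the (face, internal edge) couples directly
couples = sum(1 for edges in faces.values() for e in edges if e not in boundary_edges)
assert couples == internal_valence

print(total_valence, L, internal_valence)   # 12 4 8
[/code]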
 
  • #65
Just as QED does not replace classical electromagnetism but simply goes deeper---we still use the Maxwell equations!---so the job of LQG is not to replace the differentiable manifold (that Riemann gave us around 1850) but to go deeper. That's obvious, but occasionally reminding ourselves of it may still be appropriate. The manifold is where differential equations live---we will never give it up.

But this equation (4) of http://arxiv.org/abs/1010.1939 is (or could be) the handle on geometry deeper than the manifold. So I want to "parse" it a little. "Parse" is what one learns to do with sentences, in school. It means to divide up into parts.

You see that equation (4) is preceded by four Feynman rules.
I'm going to explain more explicitly, but one brief observation is that in (4) the second integration and the second product over edges together implement Rule 2.

The other portions of (4) implement Rule 3.

Let's see if we can conveniently type some parts of equation (4) without resorting to LaTex.
Typing at an internet discussion board, as opposed to writing on a blackboard, is an abiding bottleneck.

∫(SU2)^(∃-L) dh_ef

Remember that e and f are just numbers tagging the edges and faces of the foam.
e = 1,2,...,E
f = 1,2,...,F
and the backwards ∃ is the "total valence" of all the faces: the number of edges of each face, added up. The paper uses a different symbol for that, which I cannot type. So anyway ∃-L is the total internal valence of all the faces: what you get if you add up the number of non-boundary edges that each face has. Recall that L is the number of boundary edges (those bordering only one face, the unshared edges).

So let's see how the integral looks. It is a part of equation (4) that helps to implement Rule 2.
================

Well it looks OK. The integral is over the group manifold
(SU2)^(∃-L)
consisting of ∃-L copies of the compact group SU2. It seems to read OK. If anyone thinks it doesn't, please say.

Then what goes into that integral, to implement geometric Feynman Rule 2, is a product over all the edges e bordering a given face f.
I'll try typing that too.
 
  • #66
To keep on track, since we have a new page, I will copy the "business part" of my last substantive post.
==quote==

As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The title of Dan Oriti's QG anthology said "towards a new understanding of space, time and matter". That is one way of saying what the QG researchers' goal is. A new understanding of space and time, and maybe laying out matter on a new representation of space and time, will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among alternative QGs, the LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classical de Sitter universe.
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939.
...
...
[To expand on the point that] the 1010.1939 form "looks like" QED and QCD, except that it is background independent and about geometry, instead of being about particles of matter living in a fixed background. Somehow it manages to look like earlier field theories. The presentation on the first page uses "Feynman rules".

These Feynman rules focus on an amplitude Z_C(h),
where C is a two-complex with L boundary or "surface" edges, each h_l is a generic element of SU(2), and h = (h_1, h_2, ..., h_L) is a generic element of SU(2)^L.

The two-complex C is the "diagram". The boundary edges are the "input and output" of the diagram---think of the boundary as consisting of two separate (initial and final) components so that Z becomes a transition amplitude. Think of the L-tuple h as giving initial and final conditions. The symbol h is my notational crutch which I use to keep order in my head. Rovelli, instead, makes free use of the subscript "l" which runs from 1 to L, and has no symbol for h.

The central quantity in the theory is the complex number Z_C(h), and one can think of that number as saying a quantum probability, a transition amplitude:

Z_roadmap(boundary conditions)

==endquote==

I added some clarification and emphasis to the last sentence.
 
  • #67
OK so part of equation (4) is an integral of a product of group characters which addresses Rule 2 of the list of Feynman rules.

∫(SU2)^(∃-L) dh_ef ∏_(e ∈ ∂f) χ^(j_f)(h_ef)

where the idea is you fix a face in the twocomplex, call it f, and you look at all the edges e that are bordering that face, and you look at their labels h_ef. These labels are abstract group elements belonging to SU(2). But what you want to integrate is a number. So you cook the group element h_ef down to a number χ^(j_f)(h_ef) and multiply the numbers corresponding to every edge of the face, to get a product number for the face, and then start adding those numbers. That's it, that's the integral (the particular integral piece we are looking at).

But what's the superscript j_f on the chi? Well, a set of nice representations of the group SU(2) are labeled by halfintegers j, and if you look back in equation (4) you see that there is a sum running through the possible j, for each face f. So there is a sum over the possible choices j_f. And the character chi is just the dumbed-down version of the j_f-rep: the trace of the rep matrix.

It is basically just a contraption to squeeze the juice out of the apples. You pull the lever and squeeze out the juice and add it up (the adding up part is the integral.)

There is another part of equation (4) that responds to geometric Feynman rule 3. I will get to that later, hopefully this afternoon.

I really like how they get this number Z. This quantum probability number Z_C(h).
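
For anyone who wants to play with the Rule 2 ingredient numerically, here is a Python sketch of the SU(2) character χ^j and the product of characters around one face. The labels and the spin j_f are made up, and the code just uses the standard class-function formula for the character; it is an illustration, not code from the paper.

[code=python]
import numpy as np

def su2_character(j, h):
    """chi^j(h): trace of the spin-j rep of h, from the class-function formula
    chi_j(theta) = sin((2j+1) theta/2) / sin(theta/2), where trace(h) = 2 cos(theta/2)."""
    half_trace = np.real(np.trace(h)) / 2.0
    theta = 2.0 * np.arccos(np.clip(half_trace, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return 2 * j + 1        # at the identity the character is the dimension 2j+1
    return np.sin((2 * j + 1) * theta / 2) / np.sin(theta / 2)

def su2_z_rotation(theta):
    """An SU(2) element: rotation by angle theta about the z axis."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# sanity check: for j = 1/2 the character is just the ordinary trace of h
h = su2_z_rotation(1.3)
assert np.isclose(su2_character(0.5, h), np.real(np.trace(h)))

# Rule-2-style quantity for one face f: multiply the characters of the labels h_ef
# on the edges around the face (made-up labels, made-up spin j_f = 1)
labels_around_f = [su2_z_rotation(t) for t in (0.4, 1.1, 2.0, 2.7)]
j_f = 1.0
print(np.prod([su2_character(j_f, h_ef) for h_ef in labels_around_f]))
[/code]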

 
  • #68
I accidentally lost most of this post (#68) while editing and adding to it. What follows is just a fragment, hard to understand without the vanished context
=======fragment========

Going back to ∫(SL2C)^(2(E-L)-V) dg_ev, I see that the explanation of the exponent 2(E-L)-V is to look at Rule 1 and Rule 4 together.

Rule 1 says for every internal edge you expect two integrals dg_ev,
where the v stands for either the source or the target vertex of that edge.

Well there are L boundary edges, and the total number of edges in the foam is E. So there are E-L internal edges. So Rule 1 would have you expect 2(E-L) integrations dg_ev over SL(2,C).

Simple enough, but then Rule 4 says at each vertex one integration is redundant and is omitted.
So V being the number of vertices, that means V integrations are dropped. And we are left with
2(E-L) - V.

Intuitively, what all those SL(2,C) integrations are doing is working out all the possible gauge transformations that could happen to a given SU(2) label h_ef on an edge e of a face f.

Now we need to look at Rule 3 and see how it is implemented in equation (4)

Rule 3 says to assign to each face f in the foam a certain sum ∑_(j_f).
The sum is over all possible halfintegers j; since we are focusing on a particular face f we are going to tag that run of halfintegers j_f.
And that sum is simply a sum of group character numbers (multiplied by an integer 2j+1, which is the dimension of the vector space of the j-th rep). Here's the sum:
∑_(j_f) (2j_f+1) χ^(γ(j_f+1), j_f)(g)

Now the only thing I didn't specify is what group element that generic "g" stands for, that is plugged into the character χ.


∑_(j_f) (2j_f+1) χ^(γ(j_f+1), j_f)(∏_(e ∈ ∂f) (g_(e s_e) h_ef g_(e t_e)^(-1))^(ε_lf))



=====end fragment===

Since the notation when lost is hard to recover, I am going to leave this as it is and not try to edit it.
I will start a new post.

Found another fragment of the original post #68!
==quote==
Let's move on and see how equation (4) implements geometric Feynman Rule 3.
Now we are going to be integrating over multiple copies of a somewhat larger group, SL(2,C)

∫(SL2C)^(2(E-L)-V) dg_ev


As before we take a rep, and since we are working with a halfinteger j_f this time it's going to be tagged by a pair of numbers γ(j_f+1), j_f, and we plug in a group element, which gives a matrix. And then as before we take the TRACE of that matrix, which does the desired thing and gives us a complex number.

Here it is:
χ^(γ(j_f+1), j_f)(g)

That's what happens when we plug any old generic g from SL(2,C) into the rep. Now we have to say which "g" we want to plug in. It is going to be a PRODUCT of "g"s that we pick up going around the chosen face. And also, meanwhile going around, integrating out every possible SL(2,C) gauge transformation on the edge labels. Quite an elaborate circle dance!

Before, when we were implementing Rule 2, it was simpler. We just plugged a single group element h_ef into the rep, and that h_ef was what we happened to be integrating over.

For starters we can look at the wording of Rule 3 and see that it associates A SUM TO EACH FACE.
So there down in equation (4) is the sum symbol, and the sum clearly involves all the edges that go around the face. So that's one obvious reason it's more complicated.

==endquote==

As I said above ,I am going to leave this as it is and start a new post.

 
  • #69
For anybody coming in new to this thread, at the moment I am chewing over the first page of what I think is the best current presentation of LQG, which is an October 2010 paper
http://arxiv.org/abs/1010.1939

Accidentally trashed much of my earlier post (#68) so will try to reconstruct using whatever remains.

In post #67 I was talking about how equation (4) implements Feynman Rule 2.

Now let's look at Rule 3 and see how it is carried out.

There's one tricky point about Rule 3---it involves elements g of a larger group, SL(2,C).
This has a richer set of representations, so the characters are not simply labeled by halfintegers.

As before, what is inside the integral will be a product of group character numbers of the form χ(g), where this time g is in SL(2,C). The difference is that SL(2,C) reps are not classified by a single halfinteger j, but by a pair of numbers p, j where j is a halfinteger but p doesn't have to be; it can be a real number, like for instance the Immirzi number γ = 0.274... multiplied by a halfinteger (j+1). Clearly a positive real number, not a halfinteger.

χ^(γ(j_f+1), j_f)(g)

Rule 3 says to assign to each face f in the foam a certain sum ∑_(j_f).
The sum is over all possible halfintegers j; since we are focusing on a particular face f we are going to tag that run of halfintegers j_f.
And that sum is simply a sum of group character numbers (multiplied by an integer 2j+1, which is the dimension of the vector space of the j-th rep). Here's the sum:
∑_(j_f) (2j_f+1) χ^(γ(j_f+1), j_f)(g)

Now the only thing I didn't specify is what group element that generic "g" stands for, that is plugged into the character χ. Well it stands for a kind of circle-dance where you take a product of edge labels going around the face.

∏_(e ∈ ∂f) (g_(e s_e) h_ef g_(e t_e)^(-1))^(ε_lf)

And when you do that there is the question of orientation. Each edge has its own orientation given by its source and target vertex assignment. And each face has an orientation, a preferred cyclic ordering of the edges. Since edges are shared by two or more faces, you can't count on the orientations of edges being consistent. So what the epsilon exponent does is fix that. It is either 1 or -1, whatever is needed to make the orientations agree.

===========================
Now looking at the first integral of equation (4),
namely ∫(SL2C)^(2(E-L)-V) dg_ev,
we can explain the exponent 2(E-L)-V by referring back to Rule 1 and Rule 4 together.

Rule 1 says for every internal edge you expect two integrals dg_ev,
where the v stands for either the source or the target vertex of that particular edge e, so g_ev stands for either
g_(e s_e) or g_(e t_e).

Well there are L boundary edges, and the total number of edges in the foam is E. So there are E-L internal edges. So Rule 1 would have you expect 2(E-L) integrations dg_ev over SL(2,C).

Rule 4 then adds the provision that at each vertex one integration is redundant and is omitted.
So V being the number of vertices, that means V integrations are dropped. And we are left with
2(E-L) - V.

Intuitively, what those SL(2,C) integrations are doing is working out all the possible gauge transformations that could happen to a given SU(2) label h_ef on an edge e of a face f.
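
Since the plain-text versions of these pieces are hard on the eyes, here they are typed once in TeX---just the three ingredients as I parsed them above (the Rule 2 integral, the Rule 3 sum, and the first SL(2,C) integral), using the backwards ∃ as before for the total face valence. This is my transcription from the discussion, so check it against equation (4) in the paper before trusting how the pieces are nested.

[tex]
\int_{SU(2)^{\exists - L}} dh_{ef}\ \prod_{e \in \partial f} \chi^{j_f}(h_{ef}),
\qquad
\sum_{j_f} (2 j_f + 1)\, \chi^{\gamma (j_f + 1),\, j_f}\!\Big( \prod_{e \in \partial f} \big( g_{e s_e}\, h_{ef}\, g_{e t_e}^{-1} \big)^{\epsilon_{ef}} \Big),
\qquad
\int_{SL(2,\mathbb{C})^{2(E-L)-V}} dg_{ev}
[/tex]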
 
  • #70
I see I made a typo on the page above. It should be ε_ef, not ε_lf.

That's enough parsing of equation (4). It is the central equation of the LQG formulation we're talking about in this thread. Consider it discussed, at least for the time being. The topic question is whether it is the right redefinition or not, of the theory. I think it is, and gave some reasons.
marcus said:
As I see it, the QG goal is to replace the live dynamic manifold geometry of GR with a quantum field you can put matter on. The title of Dan Oriti's QG anthology said "towards a new understanding of space, time and matter". That is one way of saying what the QG researchers' goal is. A new understanding of space and time, and maybe laying out matter on a new representation of space and time, will reveal a new way to understand matter (no longer fields on a fixed geometry).

Sources on the 2010 redefinition of LQG are
introductory overview: http://arxiv.org/abs/1012.4707
concise rigorous formulation: http://arxiv.org/abs/1010.1939
phenomenology (testability): http://arxiv.org/abs/1011.1811
adding matter: http://arxiv.org/abs/1012.4719

Among alternative QGs, the LQG stands out for several reasons---some I already indicated---which I think are signs that the 2010 reformulation will prove a good one:

  • testable (phenomenologists like Aurelien Barrau and Wen Zhao seem to think it is falsifiable)
  • analytical (you can state LQG in a few equations, or Feynman rules, you can calculate and prove symbolically, massive numerical simulations are possible but not required)
  • similar to QED and lattice QCD (the cited papers show remarkable similarities---the two-complex works both as a Feynman diagram and as a lattice)
  • looks increasingly like a reasonable way to set up a background independent quantum field theory.
  • an explicitly Lorentz covariant version of LQG has been exhibited
  • matter added
  • a couple of different ways to include the cosmological constant
  • indications that you recover the classical de Sitter universe.
  • sudden speed-up in the rate of progress, more researchers, more papers

These are just signs---the 2010 reformulation might be right---or to put it differently, there may be good reason for us to understand the theory, as presented in brief by the October paper http://arxiv.org/abs/1010.1939...

So can you think of any reasons to offer why the new formulation is NOT the right way to go? If you gave some arguments against this formulation which then got covered over by my struggling with the main equation, please help by bringing those arguments/signs forward here so we can take a fresh look at them.
 