QG five principles: superposition, locality, diff-invariance, crossing symmetry, Lorentz invariance

  • Thread starter marcus
In summary: This is called crossing symmetry. This means that the amplitude for a process defined by a boundary state (ψ) can be calculated by a simple integral over all possible orientations and positions of the vertices in space. Rovelli calls this the “local Lorentz invariance” of the amplitudes.
  • #36
I also want to emphasize that this approach uses group manifolds, for example to do its gauge theory. It merely does not use a manifold to represent space or spacetime continua. In a quantum theory we can no more assume that a spacetime continuum geometry exists than we can assume that the trajectory of a particle exists.

My personal view is that this is good. Historically the LQG approach used manifolds to represent the continua, and embedded the graphs and 2-complexes in those continua. Now it is more abstract.

But we still can use Lie groups, differential geometry, and manifolds. Suppose we are working with a graph that has N nodes and L links. Then we can take N-fold and L-fold cartesian products of the group G---and have for example the group manifold G^N consisting of all possible N-tuples of elements of G.

Right away in Rovelli's equation (3) on page 2, you can see how one uses any given N-tuple of group elements to twirl the gauge. At every given node one has chosen a group element to screw around with the links that are incoming and outgoing at that node.

Now I think about assignments of G-labels to the links of the graph. The group manifold G^L. And I think about "wave-functions" defined on G^L. Functions with values in the complex numbers. You can screw around with these functions simply by messing with the domain they are defined on, as described above and in equation (3).

We can define an equivalence between "wave-functions" defined on the group manifold. Two functions are equivalent if you can turn one into the other by screwing around with the domain it's defined on---that being G^L---as described in equation (3).

That's what I meant by "twirling the gauge" simultaneously at each node of the graph.
Two wave functions might actually describe the same physical conditions. So they might have a certain percentage of "gauge" in them: spurious non-physically-significant content like the air whipped into cheap icecream. Screwing around with the domain they're defined on---using all possible GN assignments of group elements to the nodes---to see if you can make one equal the other, is a way to squeeze out the unphysical "air".
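A minimal sketch of this gauge-squeezing, with a toy abelian group standing in for the Lie group G (the 2-node graph, the group Z_4, and the wavefunction below are all my own illustrative choices, not taken from Rovelli's paper):

```python
# Toy model of gauge equivalence on the group manifold G^L (illustrative only):
# G is replaced by Z_4 (integers mod 4, written additively), the graph has
# 2 nodes and 2 links, and the action of eq. (3) is mimicked by twisting each
# link's group element by the elements chosen at its source and target nodes.

N_GROUP = 4                      # order of the toy group Z_4
links = [(0, 1), (1, 0)]         # each link = (source node, target node)

def gauge_transform(psi, node_elems):
    """Twirl the gauge: psi'(h_l) = psi(g_{s(l)} h_l g_{t(l)}^{-1})."""
    def psi_prime(link_elems):
        twisted = tuple(
            (node_elems[s] + h - node_elems[t]) % N_GROUP
            for (s, t), h in zip(links, link_elems)
        )
        return psi(twisted)
    return psi_prime

def psi(link_elems):
    """Wavefunction on G^L that depends only on the loop holonomy."""
    h1, h2 = link_elems
    return 1.0 if (h1 + h2) % N_GROUP == 0 else 0.0

# The twists at the nodes cancel around the closed loop, so this psi is
# equivalent to (indeed equal to) every one of its gauge transforms:
psi2 = gauge_transform(psi, node_elems=(1, 3))
print(all(psi((a, b)) == psi2((a, b)) for a in range(4) for b in range(4)))  # True
```

Two wavefunctions then describe the same physical state exactly when some choice of node elements maps one onto the other; functions of loop holonomies, like the psi here, are fixed points of the twirling.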

One nice thing is that ordinarily people might think of gauge theory only in the context of a differential geometry package like a bundle on a manifold. Here there is no manifold, there is only a graph of measurements you might imagine making in order to nail down the boundary conditions of your experiment---the geometric inputs outputs and such.
 
  • #37
marcus said:
You say you don't like embedding spin-networks into a manifold. I agree in general. Sometimes people will use embedded spin-networks temporarily--to prove a theorem or show an equivalence. But it is a yoking together of new and old.

Yes, I also appreciate the value in connecting new abstractions to old ones. In this sense, embedding a discrete abstraction into a continuum one certainly shows how the old abstraction can be "emergent" from the new one.

But I think it's important when one has the ambition to explain something, to note the line of reasoning and not use what we want to show as motivation for the construction. That happens at times.

This is why my own interest in LQG all along has been at its outer edge. I've had clear objections to some of Rovelli's reasoning. In particular the way he avoids analysing the foundations of QM and measurement theory itself - he just uses it. I find that somehow incoherent, and his initial reasoning (to me at least) holds a higher ambition. But perhaps after all, there are some common touching points in the new development.


marcus said:
I think of a spin-network as an (idealized) web of preparation & measurement that one might imagine making.
...
Since as an experimenter one has only finite resources, one is dealing with a finite graph.
...
Now just to underline the distinction, the spin-network has no interaction vertices where something happens. It is the spinfoam that has the vertices where something happens.
...
The spin-network describes the boundary conditions that we control, the boundary surrounding the 4D bulk which we do not control.
...
Here there is no manifold, there is only a graph of measurements

This is the direction of abstraction I like!

But I think we could be more radical than Rovelli is, and then we cannot just assume, like Rovelli does, that all communication is perfectly described by the QM formalism.

The problem is that QM is an external description of communication, not an intrinsic one; this alone makes it unphysical - except, as Smolin points out, for studying subsystems.

I'm more interpreting the spin-network as a subset of the observer's total "information state" - i.e. an inside view. This is always bounded, because as you say any experimenter or observer has finite resources to STORE information. It can be of arbitrary size, but each observer has a number associated to it which is its complexity (in my view, that is). But this would encode all events (all forces and "fields"), not just 4D spacetime events.

Then a third observer can imagine two other observers' interaction, where each of the observers has a certain microstructure. Then I expect the spin network to be emergent as they have equilibrated. Their common communication channels are subjectively indexed by spin networks, in a way that they are related by means of your equivalence transformations.

But I see this as an equilibrium condition. The assumption that we must always have perfect equilibrium and perfect consistency I don't understand. In fact it does not match real-life observations of any learning agent. Inconsistencies are what drive development, and they are the drive of the evolution of time.

But then the "residual" of the total event index structure, once we have "subtracted" the equilibrium spin network (or spacetime part), should hopefully be further classified into the other forces (matter), since the internal structure of any real observer is supposedly made of matter.

Two pieces of matter will _establish_ a space relation, on top of which the residuals correspond to other fields. But I do not understand how there is a route to this, unless we admit that space itself (even discrete space) is emergent as a separation from the more general space encoded in an observer's memory (matter).

/Fredrik
 
  • #38
marcus said:
Now just to underline the distinction, the spin-network has no interaction vertices where something happens. It is the spinfoam that has the vertices where something happens.

I envision that even the information state = memory records IS in fact a subjective, re-encoded HISTORY of actual events. So in a sense one can still talk about frozen events as existing in a memory record, and one would classify events as external and internal. External events are the real observations (i.e. "collapses"); the internal events require no external interaction, they are internal re-equilibrations, or internal recodings of history.

So I envision that the internal structure of the observer/matter (of which the spin-network would merely be a subset) is to be thought of as a compressed (in the data-compression sense) history, where the compression algorithm has evolved for self-preservation.

Thus, I expect the structure of this (be it spin networks or some other structure, and its vertex group rules) to be the result of a selection process toward an optimal representation.

Something along these lines has been the way I tried to understand LQG, but unfortunately it's been too different so far. As I see it, this is also related to unifying matter with it, so my hope is that some new clever ideas come out of this.

/Fredrik
 
  • #39
Fra said:
Something along these lines has been the way I tried to understand LQG, but unfortunately it's been too different so far.

The idea I had so far was to picture the spinnetwork edges as defining "dataflow" between two or more different microstructures (representing different encoding algorithms), where the IN and OUT nodes then belong to two different sets. And the observer's microstructure is then really a set of sets, where each set has a different compression algorithm. Some rules of the network would then simply be determined by the complexity constraint (assuming the network doesn't grow and acquire more complexity, which could also happen and would make things more difficult). Some other rules would also follow from the compression algorithms chosen. And this would be a result of evolution.

However, this is the general case, and it still remains to work out how to separate out the main communication channel on whose state local observers can agree (up to some connection transformation).

At any stage there is defined a flow in the entropic sense. But since it's not a single microstructure anymore, the dynamics is far more complex than simple dissipation.

My idea was always to exploit the complexity constraint, and start at zero complexity, because there things are finite and computable, and then draw conclusions, and then find how those conclusions scale. Zero complexity means a very small network, which also constrains its possible interactions just by constraining permutations.

/Fredrik
 
  • #40
Fra said:
The idea I had so far was to picture the spinnetwork edges, as defining "dataflow" ...

Do you mean spinfoam edges?

Or did you actually mean spinnetwork links? I find it is a big help to use the prevailing terms in the literature---not mixing up terminology helps me think straight. I was confused by your statement and could not tell which you meant---foams or networks?
============

BTW have you noticed in the standard LQG (1004.1780) treatment there is a kind of reciprocal interplay between boundary and bulk? It is interesting how the treatment of transition amplitudes goes back and forth between network (boundary state) and foam (bulk history). Like each scratching the other's back---or like for some jobs you need two hands.

You start out with (network) boundary states. Kinematics is defined, but still no dynamics---no amplitudes.

Then comes equation (43): you see that the amplitude of a network boundary state is going to be a sum over foam histories in the surrounded bulk.

Now each foam can be broken down to its constituent vertices. We need to define an amplitude for each foam vertex.* The amplitude for the whole history will be the product of all the amplitudes for the constituent vertices. Equation (44).

The most efficient way to define a single vertex's amplitude turns out to be to surround the individual vertex by a small private boundary, defining again a network! But this network is especially simple and turns out to have a natural and concise amplitude formula! Equation (45).

That then defines the individual vertex amplitude, and makes it computable.

So one has "walked" down a reductive path, stepping both with the "bulk foot" and the "boundary foot". From a large complex (network) boundary, to a sum over (foam) histories, each becoming a product over individual (foam) vertices, which were surrounded then by calculable individual (network) boundaries.

This is condensed into one equation, (52) on page 9. [tex] <W|\psi> = \sum_\sigma \prod_f d(j_f)\prod_v W_v(\sigma) [/tex]

Here d(j_f) just stands for the vector-space dimension of the representation j_f. In other words, d(j_f) = 2j_f + 1.

And [tex] W_v(\sigma) [/tex] is shorthand for the local vertex amplitude I was talking about. Equation (53) explains:

[tex] W_v(\sigma) = <W_v|\psi_v> [/tex]

You will recognize ψ_v as the small private boundary one can always construct around an individual vertex, and evaluate to get the vertex amplitude.
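Just to make the bookkeeping of (52) concrete, here is a toy numerical sketch. The two "foams", their face spins, and the constant stand-in vertex amplitude are all invented; only the sum-over-histories, product-over-faces-and-vertices structure mirrors the equation:

```python
def dim(j):
    """Dimension of the spin-j representation: d(j) = 2j + 1."""
    return 2 * j + 1

def boundary_amplitude(foams, vertex_amp):
    """<W|psi> = sum over foams sigma of prod_f d(j_f) * prod_v W_v(sigma)."""
    total = 0.0
    for sigma in foams:
        term = 1.0
        for j in sigma["face_spins"]:      # product over faces f
            term *= dim(j)
        for v in sigma["vertices"]:        # product over vertices v
            term *= vertex_amp(v, sigma)
        total += term
    return total

# Two invented histories compatible with one boundary state:
foams = [
    {"face_spins": [0.5, 1.0], "vertices": ["v1"]},
    {"face_spins": [0.5, 0.5, 1.5], "vertices": ["v1", "v2"]},
]

# Stand-in for the local vertex amplitude W_v = <W_v|psi_v> of eq. (53):
def vertex_amp(v, sigma):
    return 0.1

# First foam: (2*3)*0.1 = 0.6; second: (2*2*4)*0.01 = 0.16; total 0.76.
print(round(boundary_amplitude(foams, vertex_amp), 6))  # 0.76
```

In the real theory the vertex amplitude is of course not a constant but the evaluation of the small private boundary state around each vertex; the point here is only the back-and-forth between boundary (networks) and bulk (foams).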

========================
*In case someone new is joining us, in standard LQG terminology, spinnetworks do not have vertices (they are made of nodes and links). If someone says "vertex" in the LQG context you know they are talking about a spinfoam (made of vertices, edges, faces). It makes communication more economical and convenient to remember these simple distinctions.
 
  • #41
marcus said:
Do you mean spinfoam edges?

Or did you actually mean spinnetwork links? I find it is a big help to use the prevailing terms in the literature---not mixing up terminology helps me think straight. I was confused by your
...
*In case someone new is joining us, in standard LQG terminology, spinnetworks do not have vertices (they are made of nodes and links). If someone says "vertex" in the LQG context you know they are talking about a spinfoam (made of vertices, edges, faces). It makes communication more economical and convenient to remember these simple distinctions.

I meant spinnetwork links.

I'm sorry for the confusion. I guess I was confused about what the standard terminology was in LQG; I don't follow the LQG development regularly. It was a couple of years ago that I looked into Rovelli's book and papers.

Not that Wikipedia is a sensible reference, but I see that it also uses the words edge and vertex even for spin networks in the abstract sense.

marcus said:
BTW have you noticed in the standard LQG (1004.1780) treatment there is a kind of reciprocal interplay between boundary and bulk? It is interesting how the treatment of transition amplitudes goes back and forth between network (boundary state) and foam (bulk history). Like each scratching the other's back---or like for some jobs you need two hands.

Yes, just like we have transitions between quantum states in normal QM, we have transitions between spin networks, or equivalence classes of spin networks.

But what I was after is to suggest that the "structure" of ANY quantum state may be seen as a state of a system of memory records. And that the information processing taking place in the observer MAY be represented abstractly as a system of distinguishable indexes, between which there is a possible directional communication obeying certain rules.

So take a regular time-history of events and picture this data physically stored; then you get a historical combinatorial probability. But then picture that one can increase the capacity of this record by recoding the actual history, maybe doing an FFT to split the memory into both a historical probability and transformations of the same.

In this sense a HISTORY of events, should be related to inertia.
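As a hedged toy of the FFT idea (entirely my own illustration, not from any of the papers discussed here): take a stored event record and recode it lossily by keeping only its dominant Fourier modes.

```python
import numpy as np

# Lossy recoding of a "memory record": keep only the `keep` largest-magnitude
# FFT modes and discard the rest. For records with strong regularities a few
# modes reproduce the history almost exactly; for noisy records detail is lost.
def compress_history(history, keep):
    spectrum = np.fft.fft(history)
    drop = np.argsort(np.abs(spectrum))[:-keep]   # indices of modes to discard
    spectrum[drop] = 0.0
    return np.real(np.fft.ifft(spectrum))

history = np.array([3.0, 1.0, 3.0, 1.0, 3.0, 1.0, 3.0, 1.0])  # periodic record
approx = compress_history(history, keep=2)
print(np.allclose(approx, history))  # True: 2 modes suffice for this record
```

The compression ratio versus fidelity trade-off is then a crude stand-in for the "evolved compression algorithm" picture: different recodings of the same history cost different amounts of memory.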

I'm struggling with how to represent these things. As of right now my best idea is sets of sets of distinguishable events (microstructures), where each set in the set comes with a transformation which is interpreted as a lossy data compression. The overall complexity of the set of all sets is constrained by the observer's resources (memory capacity). Now if we could somehow count the set of intrinsically COMPUTABLE transformations, the number of possibilities for each such construction would be finite, or even, in the large complexity limit, countable. Then the laws of physics, coded as symmetries, would correspond to the most probable one in the entropic sense. Thus all information processing rules would have an entropic origin.

Symbolically, one could represent internal recoding of the history, as directed links between different sets or more specifically between each element in the sets.

I was seeing to what extent the LQG spin network might fit in some remote connection there. I think the fit is more likely if matter is introduced. It doesn't seem out of the question.

marcus said:
You start out with (network) boundary states. Kinematics is defined, but still no dynamics---no amplitudes.

In my picture the boundary states and state space are defined by the lossy compression of the history of interactions - in this sense there are no timeless state spaces. We need a history as I see it. And the only accessible history is the one implicit in the observer.

/Fredrik
 
  • #42
Fra said:
Symbolically, one could represent internal recoding of the history, as directed links between different sets or more specifically between each element in the sets.

As for the spinfoam, or the evolution of the spinnetwork, my take on that in the context of my proposed analogy here would be that the instability of the spinnetwork itself defines a flow - a direction of change, which is the expected evolution; a generalization of the 2nd law.
I.e. a static spinnetwork is simply not a likely solution, any more than a static universe is. This instability, when quantified, defines a flow (not unlike GR of course - but constructed from more first principles).

The "quantum part" I expect to follow naturally from the generalized statistics that follow when you do probability not on a single probability space, but on a set of such sets that has certain relations and is subject to constraints. In this sense even QM would be emergent. I guess that's a point where Rovelli is at right angles. But maybe things can change.

/Fredrik
 
  • #43
Fra said:
...Yes, just like we have transitions between quantum states in normal QM, we have transitions between spin networks, or equivalence classes of spin networks.

But what I was after is to suggest that the "structure" of ANY quantum state, may be seen as a state of a system of memory records...

It is interesting that you are thinking in terms of what, in Computer Science, are called "data structures" used for storage and retrieval. Just for explicitness, I will mention that some examples of data structures are graphs, trees, linked lists, stacks, heaps. Not something I know much about. It is also intriguing that you mention a type of Fourier transform (the FFT).
====================

I think that primarily for pragmatic reasons the game (in QG) is now to find SOMETHING that works. Not necessarily the most perfect or complete, but simply some solution to the problem of a manifold-less quantum theory of geometry and matter.

If one could just get one manifold-less quantum theory that reduced to General Relativity in the large limit, that would provide pointers in the right direction---could be improved-on gradually, and so forth.

Actually I suspect it is likely that the LQG we now see will turn out to be such a theory--a "first" manifoldless QG+M. It is the projective limit of finite graph-based group field theories, so it gets the benefit of being operationally finite but incorporating the possibility of arbitrarily large and, in effect, infinitely complicated graphs.

It may be that in the present situation the graph-based path is the only profitable way to go for a QG+M theory. The graph represents our finite information about volume and adjacency---the essence of geometry is information about bits of volume and the areas through which neighbors communicate or across which they meet. So we see the labeled graph (the spin-labeled network) proving to be an increasingly fertile idea.

At the moment I cannot imagine anything simpler or more obviously serviceable than a labeled graph, if one wants a manifold-less data structure capturing the essence of geometry. So I draw two conclusions, which I'll toss out as suggestions:

1. 4d dynamics is to be formulated with spinfoams since spinfoams are the possible "trajectories" of labeled graphs.

2. Matter has to ride on graphs and therefore its motion will also be described by spinfoams.

==============

Fra, the next thing we should talk about is Section IV Expansions, which starts on page 10 of the paper which this thread is about.
In case anyone has not read the paper yet, it is http://arxiv.org/abs/1004.1780.

Little if any physics is possible without the series expansions which provide for finite calculations giving arbitrarily precise numbers. So Rovelli, having described the theory in the first 9 pages, goes on to say (in Section IV) how it leads to various kinds of approximations.

==============
John Norton has a good account of diff-invariance and the hole argument, in case we need it here:
http://www.pitt.edu/~jdnorton/papers/decades.pdf
 
  • #44
An important real-world process we can see and should try to understand is the explosive growth of LQG research in recent years, say since 2005.

Some of the growth has been in its application to cosmology ("LQC") but just in the past year papers by Ashtekar and Rovelli, with others, have merged the two effectively enough that we don't need to make the distinction. Some of the growth has been stimulated by the 2008 reformulation of core LQG.

In any case there has been a dramatic increase in job openings---including permanent hires---for LQG researchers, and also in the number of active research groups worldwide. The LQG research output has more than tripled since 2005 as well.
https://www.physicsforums.com/showthread.php?p=2839234#post2839234

I've suggested a reason that may partly explain this. LQG has come to be seen as a practical proposal for manifoldless QG+M.

A quantum geometric theory of gravity and matter that does not use a manifold to represent space or spacetime. It CAN use manifolds to represent space and indeed the new LQG developed from older versions which were continuum-based. But these are now just stepping-stones or scaffolding. At some point after the construction is finished one can throw away the spacetime continuum. The manifold is "gauge" in that sense.

This is the essential message of Rovelli's April 2010 survey of LQG which we are looking at in this thread.
 
  • #45
A few posts back when I was discussing crossing symmetry, I used the image of a freeway interchange.

Here is a picture of an interchange:
http://en.wikipedia.org/wiki/Interchange_(road)
This one happens to be in Dallas, Texas, and is known as the "High Five" interchange.

Back in that post I forgot several times and said vertex when I meant node, so I will have to re-do it sometime.
The main thing is that boundary state is expressed as a graph of nodes and links. Nodes are volume, links are area.
A foam consists of vertices, edges, faces. So if someone is speaking LQG consistently "vertex" always means foam vertex.

A foam describes a complex geometric process. A foam can be imagined as the trajectory of a graph showing its evolution---as nodes (volume chunks) appear and disappear and dance/travel around so that they constantly need to be reconnected in various ways. They change their "adjacency" relations as they churn about and the foam is the kind of minimal picture that diagrams that kind of graph evolution.

When you pass from the graph to the foam picture of evolution, "nodes become roads". The graph elements which carry volume, i.e. the nodes, become linear in the foam. Several of these converge like roads going into a vertex and then several others diverge out from it.
(Officially we call these roads edges---a foam consists of vertices edges faces.)
The vertices of the foam are elementary geometric processes or events that we can think of as highway interchanges where some roads come in and some roads go out.
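To pin down the dictionary in this post as a data structure (a sketch with invented names, not standard LQG code):

```python
from dataclasses import dataclass

# Boundary: a spin network is a labeled graph.
@dataclass
class SpinNetwork:
    nodes: list    # chunks of volume
    links: list    # (node, node) pairs; spin labels give area

# Bulk: a spin foam is a 2-complex, read as the trajectory of a graph.
@dataclass
class SpinFoam:
    vertices: list  # elementary processes (the "interchanges")
    edges: list     # worldlines of graph nodes (the "roads")
    faces: list     # worldsheets of graph links; carry spins j_f

# Passing from graph to foam: each node sweeps out an edge of the foam,
# and each link sweeps out a face.
boundary = SpinNetwork(nodes=["n1", "n2"], links=[("n1", "n2")])
foam = SpinFoam(vertices=["v1"],
                edges=["n1 worldline", "n2 worldline"],
                faces=["(n1,n2) worldsheet"])
print(len(foam.edges) == len(boundary.nodes) and
      len(foam.faces) == len(boundary.links))  # True
```

Keeping the two classes separate makes the terminology rule mechanical: "vertex" only ever appears in the SpinFoam type, never in the SpinNetwork type.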
 
  • #46
Taking a look at Rovelli's reference 46 I see it says "A spin foam model is a procedure to compute an amplitude from a triangulated manifold". So how is this manifoldless?
 
  • #47
atyy said:
Taking a look at Rovelli's reference 46 I see it says "A spin foam model is a procedure to compute an amplitude from a triangulated manifold". So how is this manifoldless?

Well it's certainly more general than that. What you are quoting is what Barrett said in February 2009. That's merely the limited way Barrett et al were thinking of them at that time in that context.
Later that year the restriction to triangulations (even for this limited case) was broken by Lewandowski. So that invalidated the words Barrett et al used in their introduction. It doesn't invalidate Barrett et al's excellent, valuable mathematical result! (Only the parochial way they were thinking about what they were doing.)

This is a mathematical subject. You cannot think about it purely verbally. If you just cherry-pick some nonessential words that somebody says in the introduction to give a general perspective on what they are doing---that is fairly meaningless. It gets dated quickly and you can't believe it or carry it over from paper to paper. The mathematical result is the essential message and value. That carries over.
Barrett et al had a key result in that paper about foam vertex asymptotics as I recall.
That carries over even though what they suggested about foams limited to a triangulated manifold is not true.
 
  • #48
If you were to ask Barrett today about that, I feel sure he would not make the same statement. Sure, one thing about a spinfoam model is that it can be used with a triangulated manifold.

You just take the dual of the triangulation and that gives a 2-complex, and that's your spin foam---so apply the model and calculate. That's one thing you can do.

But I hardly think Barrett would tell you that this is the only use of a spinfoam model :biggrin:. They also apply to manifolds that are not triangulated, but are divided up more generally. And they apply where you do not have a manifold at all!
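For the triangulated case, the "take the dual" step can be sketched in a 2D toy (the triangle lists and the function name are my own illustration; in the 4D case one dualizes a simplicial complex, getting a full 2-complex with faces, not just a graph):

```python
from itertools import combinations

def dual_graph(triangles):
    """Each triangle becomes a dual vertex; two triangles sharing an
    edge of the triangulation get connected by a dual edge."""
    edge_to_tris = {}
    for t_index, tri in enumerate(triangles):
        for e in combinations(sorted(tri), 2):   # the triangle's 3 edges
            edge_to_tris.setdefault(e, []).append(t_index)
    # interior edges (shared by exactly 2 triangles) give the dual edges
    return [tuple(ts) for ts in edge_to_tris.values() if len(ts) == 2]

# Two triangles glued along the edge (1, 2) dualize to a single dual edge:
print(dual_graph([(0, 1, 2), (1, 2, 3)]))  # [(0, 1)]
```

The point of the combinatorial formulation is that one can hand the model a 2-complex directly, whether or not it arose as such a dual.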
 
  • #49
marcus said:
So that invalidated the words Barrett et al used in their introduction. It doesn't invalidate Barrett et al's excellent, valuable mathematical result! (Only the parochial way they were thinking about what they were doing.)

Good, just wanted to make sure you were being consistent.

In my view Barrett et al are not necessarily parochial. The view of manifoldless spin foams goes all the way back to at least '97 http://arxiv.org/abs/gr-qc/9712067 . Barrett et al are surely not ignorant of this. I think there are at least 3 strands of interpretation of spin foams - algebraic (Markopoulou and Smolin), geometric (Barrett), and GFT, which again splits into at least 2 strands - one geometric, very close to Barrett, the other going for unification with matter (Livine and Oriti). Rovelli (or at least your interpretation of Rovelli) is striving here for a Markopoulou and Smolin viewpoint, but the key result is via a Barrett interpretation, and I'm not sure you can easily keep the result and throw away the interpretation.
 
  • #50
atyy said:
... and I'm not sure you can easily keep the result and throw away the interpretation.

That's mathematics for you. You keep a result and throw away the intuitive picture that led up to it. That's why a result is put out in the form of a theorem with explicit assumptions from which the theorem is proven.

It makes it portable so it can be taken into new contexts. If Barrett's result needed strengthening to apply in Rovelli's context we would have heard of it and some people would have already gotten to work on it. I haven't heard anything. Have you?

I do know that Barrett and a bunch of his co-authors took a trip to Marseille later that year (2009) to give seminar talks on their results to the Marseille team. (Team is what they call it :biggrin: équipe de gravité quantique, the quantum gravity team.)

BTW you quoted ref. 46 and I believe that was actually superseded by Rovelli's ref. 45 ( http://arxiv.org/abs/0907.2440 ). That is probably the paper we should be looking at and quoting if we are interested in the details of how Barrett's 2009 work supports Rovelli's 2010 formulation. The July paper.
 
  • #51
Strictly speaking yes, you are right, but there is no finished theory yet, is there? So intuition is still important (and really, I mean, how can you throw away the intuition that led to the proof, even after the theory is finished!)

On the other hand, I've never understood intuitively why large spin should be the semi-classical limit, so maybe that intuition will be a red herring.
 
  • #52
Of course this is work in progress Atyy. I've been watching it and have a sense of the people and the momentum. You may have a different feel. Either way we both know parts definitely still have to be nailed down!

Just for nuance, I will quote Rovelli's section on page 12 where he cites the good work of Barrett group:

== http://arxiv.org/pdf/1004.1780 ==
The analysis of the vertex (49) as well as that of its euclidean analog (55) in this limit has been carried out in great detail for the 5-valent vertex, by the Nottingham group [26, 27, 45, 46]. The remarkable result of this analysis is that in this limit the vertex behaves as

[tex] W_v \sim e^{i S_{\mathrm{Regge}}} [/tex]

where S_Regge is a function of the boundary variables given by the Regge action, under the identifications of these with variables describing a Regge geometry. The Regge action codes the Einstein equations’ dynamics. Therefore this is an indication that the vertex can yield general relativity in the large distance limit. More correctly, this result supports the expectation that the boundary amplitude reduces to the exponential of the Hamilton function of the classical theory.
==endquote==

Supports, does not yet prove.

And we are still just looking at a 5-valent vertex. Which BTW is in line with your mention of the triangulated manifold picture, because a 4-simplex has 5 sides (the dual replaces it with a vertex and replaces each of its 5 sides by an edge). My hunch is that graduate students can extend the result to higher-valence vertices. It's how I'm used to seeing things go, but who knows? You think not?
 
  • #53
In fact someone WAS working on a missing detail Rovelli describes right after what I quoted on page 12. The two terms in the vertex amplitude. A Marseille postdoc and a couple of grad students. They posted in April soon after the survey paper appeared.
http://arxiv.org/abs/1004.4550
"We show how the peakedness on the extrinsic geometry selects a single exponential of the Regge action in the semiclassical large-scale asymptotics of the spinfoam vertex."

Barrett's group left it with both a +iS_Regge and a -iS_Regge exponential. One wanted to get rid of or suppress the negative exponential, and just have a single exponential term. So Bianchi et al took care of that.

There's been a kind of stampede of results in the past 6 months or year, bringing us closer to what appears may be a satisfactory conclusion.
 
  • #54
marcus said:
My hunch is that graduate students can extend the result to higher-valence vertices. It's how I'm used to seeing things go, but who knows? You think not?

I don't know - what I would like to see aesthetically is that it's a GFT, that GFT renormalization is essential, and that matter must somehow come along automatically. But Barrett et al's, and also Conrady and Freidel's, are the most intriguing results I have seen from the manifold point of view. But in that case, I think there must be Asymptotic Safety somehow, and a link via what Dittrich et al are saying.
 
  • #55
As further motivation for the move towards manifoldless QG+M, I should quote (again) that passage from Marcolli's May 2010 paper. Marcolli mentions the view of Chamseddine and Connes. This is section 8.2 page 45.

==quote http://arxiv.org/abs/1005.1057==
8.2. Spectral triples and loop quantum gravity.

The Noncommutative Standard Model, despite its success, still produces an essentially classical conception of gravity, as seen by the Einstein–Hilbert action embedded in eq. (8.2). Indeed, the authors of [36] comment on this directly in the context of their discussion of the mass scale Λ, noting that they do not worry about the presence of a tachyon pole near the Planck mass since, in their view, “at the Planck energy the manifold structure of spacetime will break down and one must have a completely finite theory.”

Such a view is precisely that embodied by theories of quantum gravity, including of course loop quantum gravity—a setting in which spin networks and spin foams find their home. The hope would be to incorporate such existing work toward quantizing gravity into the spectral triple formalism by replacing the “commutative part” of our theory’s spectral triple with something representing discretized spacetime.

Seen from another point of view, if we can find a way of phrasing loop quantum gravity in the language of noncommutative geometry, then the spectral triple formalism provides a promising approach toward naturally integrating gravity and matter into one unified theory.
==endquote==

More discussion of the Marcolli May 2010 paper in this thread:
https://www.physicsforums.com/showthread.php?t=402234
 
Last edited by a moderator:
  • #56
I guess one way to put the point is to observe that LQG is amphibious.

The graph description of geometry (nodes of elemental volume, linked to neighbors by elemental area) can live embedded in a manifold and also out on its own---as combinatoric data structure.
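Just to make the "combinatoric data structure" reading concrete, here is a minimal sketch (hypothetical code, not from any actual LQG software) of a spin network as a pure labeled graph: links carry SU(2) spin labels, nodes only know which links meet them, and no embedding coordinates appear anywhere.

```python
# Hypothetical sketch: an abstract (unembedded) spin network as pure
# combinatorial data.  Links carry SU(2) spins j; nodes just record
# their incident links.  No coordinates, no manifold -- only adjacency
# and labels.

class SpinNetwork:
    def __init__(self):
        self.links = {}   # link id -> (node_a, node_b, spin j)
        self.nodes = {}   # node id -> set of incident link ids

    def add_link(self, a, b, j):
        lid = len(self.links)
        self.links[lid] = (a, b, j)
        for n in (a, b):
            self.nodes.setdefault(n, set()).add(lid)
        return lid

    def valence(self, node):
        """Number of links meeting at a node (4 for a 'tetrahedral' node)."""
        return len(self.nodes[node])

    def total_spin(self):
        """Sum of spins over all links -- a crude stand-in for total area,
        since each link contributes an area ~ sqrt(j(j+1))."""
        return sum(j for (_, _, j) in self.links.values())

# A two-node "dipole" graph: two 4-valent nodes joined by four links
# (a common toy geometry in the LQG literature).
g = SpinNetwork()
for _ in range(4):
    g.add_link("n1", "n2", 0.5)
print(g.valence("n1"), g.total_spin())  # -> 4 2.0
```

The point of the sketch is only that everything physical lives in the labels and the adjacency; an embedding into a manifold would be extra, discardable data.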

In the April 2010 status report and survey of LQG, the main version presented is the manifoldless formulation which Rovelli calls "combinatorial". But he also, in small print, describes earlier manifoldy formulations using embedded graphs. In my view those are useful transitional formulations. They can be used to transfer concepts and to prove large limits and to relate to classical GR. Stepping stones, bridges, scaffolding.

It's not unusual to prove things in two or more stages--first prove the result for an intermediate or restricted case, then show you can remove the restriction. But as I see it, the manifoldless version is the real McCoy.
 
  • #57
So what's the manifoldless take on renormalization?

In the manifoldy view it has to happen somewhere, since one started with a triangulation of the manifold.
 
  • #58
atyy said:
...In the manifoldy view it has to happen somewhere, since one started with a triangulation of the manifold.

Historically, the LQG of the 1990s did not start with a triangulation of a manifold. It started with loops, which were superseded by slightly more complicated objects: spin networks. These have nothing to do with triangulations.

Spin networks can be embedded in a manifold. But the matter fields, if they enter the picture, are defined on the spin network---by labeling the nodes and links.

So what's the manifoldless take on renormalization?

Nodes carry fermions. Links carry Yang-Mills fields. Geometry is purely relational. The basic description is a labeled graph. The graph carries matter fields and there are no infinities.
See the statement of problem #17 on page 14 of the April paper. This points to what I think is now the main outstanding problem---going from QG to QG+M---including dynamics in what is (so far at best) a kinematic description of matter and geometry.
 
  • #59
Thiemann uses an old-fashioned version of LQG here, but it gives the general idea:

http://arxiv.org/abs/gr-qc/9705019
QSD V : Quantum Gravity as the Natural Regulator of Matter Quantum Field Theories
Thomas Thiemann
(Submitted on 10 May 1997)
"It is an old speculation in physics that, once the gravitational field is successfully quantized, it should serve as the natural regulator of infrared and ultraviolet singularities that plague quantum field theories in a background metric. We demonstrate that this idea is implemented in a precise sense within the framework of four-dimensional canonical Lorentzian quantum gravity in the continuum. Specifically, we show that the Hamiltonian of the standard model supports a representation in which finite linear combinations of Wilson loop functionals around closed loops, as well as along open lines with fermionic and Higgs field insertions at the end points are densely defined operators. This Hamiltonian, surprisingly, does not suffer from any singularities, it is completely finite without renormalization. This property is shared by string theory. In contrast to string theory, however, we are dealing with a particular phase of the standard model coupled to gravity which is entirely non-perturbatively defined and second quantized."
 
  • #60
marcus said:
Historically, the LQG of the 1990s did not start with a triangulation of a manifold. It started with loops, which were superseded by slightly more complicated objects: spin networks. These have nothing to do with triangulations.

But aren't we talking about spin foams?

Also, if we take the Barrett result seriously, they only get to something like the Regge action. That needs a continuum limit to look like GR - that's why Loll et al - who started with the Regge! - try to link to Asymptotic Safety or some hopefully well defined theory in the continuum limit.
 
Last edited:
  • #61
atyy said:
But aren't we talking about spin foams?

Not [EDIT: exclusively] as far as I know, Atyy. I was being careful to say "nodes" to indicate that I was talking about spin-networks.

Also, if we take the Barrett result seriously,
I hope you are not mistaking Barrett et al for the final word. They only considered vertices of valence 5. And Bianchi-Magliaro-Perini have already improved on them. What we are talking about is work in (rapid) progress. So it is something of a moving target of discussion.

As a general philosophical point, we have no indication that spacetime exists (George Ellis has given forceful arguments that it does not.) The spacetime manifold is a particular kind of interpolation device. (Like the smooth trajectory of a particle, which QM says does not exist.)
Since the 4D continuum does not exist we do not need to triangulate it :biggrin: and in fact spinfoams should not be viewed as embedded in or as triangulating a 4D continuum. They are histories depicting how an unembedded spin-network could evolve. Each spinfoam gives one possible evolutionary history.

Like the huge set of possible paths in a Feynman path integral.

Also, a general spinfoam could not possibly correspond to a triangulation (you must realize this since you have, yourself, cited the Lewandowski 2009 paper "Spinfoams for all LQG").
So let's stop referring to spinfoams as dual to triangulations of some mythical 4D continuum :biggrin:

Fields live on graphs and they evolve on foams, as labels or colorings of those graphs and foams. That's the premise in the context of this discussion, and on which the LQG program will succeed or fail. We don't know which of course because it is in progress right now.
 
Last edited:
  • #62
Rovelli cites Barrett, and Barrett is talking about spin foams. Of course Barrett is not the final word, but where else is the indication that this is a reasonable line of research at all?

marcus said:
And Bianchi-Magliaro-Perini have already improved on them.

That too is a spin foam paper.

Edit: I missed an "else" above - ie. "where else is this" not "where is this"
 
Last edited:
  • #63
atyy said:
...

Edit: I missed an "else" above - ie. "where else is this" not "where is this"

I don't see where, but maybe it doesn't matter. In which post?
And I missed an "exclusively".

Basically you can't talk about foams without talking about networks and vice-versa. One is a path history by which the other might evolve. Or the foam is a possible bulk filling for a boundary network state.

What I suggested we stop talking about, and move on from, is foams that are dual to triangulations and foams which are embedded. Those are both too restrictive.
 
  • #64
OK, I see Rovelli has listed what I'm asking about as his open problem #6, where he refers to further studies along the lines of http://arxiv.org/abs/0810.1714, whose preamble goes "The theory is first cut-off by choosing a 4d triangulation N of spacetime, formed by N 4-simplices; then the continuous theory can be defined by the N --> infinity limit of the expectation values."

BTW, thanks for pointing out the Bianchi-Magliaro-Perini (BMP) paper - it helps me make sense of what Barrett is doing by taking the large j limit as semiclassical - I always thought that should be the hbar zero limit - which is what BMP do.

So do you think one should take the N infinity limit first followed by hbar, or the other way? Would you like to guess now - and see in a couple of months, or however fast those guys are going to work - as to whether the Barrett result will hold up if the N infinity limit is taken first? :-p
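For concreteness, the two limits being contrasted can be written schematically (factors and conventions mine):

```latex
% Area carried by a spin-network link (conventions vary by a factor):
A_\ell \;=\; 8\pi\gamma\,\frac{G\hbar}{c^3}\,\sqrt{j_\ell\,(j_\ell+1)}
% Large-spin limit: j -> infinity at fixed hbar (large geometries).
% Semiclassical limit a la BMP: hbar -> 0 and j -> infinity jointly,
% with the physical area  A ~ gamma*hbar*j  held fixed.
```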
 
Last edited:
  • #65
Atyy, let me highlight the main issue we are discussing. I think it is the manifoldless formulation of LQG. You seem reluctant to accept the idea that this is what Rovelli is presenting in the April paper (the subject of this thread.)

marcus said:
As further motivation for the move towards manifoldless QG+M, I should quote (again) that passage from Marcolli's May 2010 paper. Marcolli mentions the view of Chamseddine and Connes. This is section 8.2 page 45.

==quote http://arxiv.org/abs/1005.1057==
8.2. Spectral triples and loop quantum gravity.
...
at the Planck energy the manifold structure of spacetime will break down and one must have a completely finite theory.

Such a view is precisely that embodied by theories of quantum gravity, including of course loop quantum gravity—a setting in which spin networks and spin foams find their home. ..
==endquote==

marcus said:
I guess one way to put the point is to observe that LQG is amphibious.

The graph description of geometry (nodes of elemental volume, linked to neighbors by elemental area) can live embedded in a manifold and also out on its own---as combinatoric data structure.

In the April 2010 status report and survey of LQG, the main version presented is the manifoldless formulation which Rovelli calls "combinatorial". But he also, in small print, describes earlier manifoldy formulations using embedded graphs. In my view those are useful transitional formulations. They can be used to transfer concepts and to prove large limits and to relate to classical GR. Stepping stones, bridges, scaffolding...

atyy said:
So what's the manifoldless take on renormalization?
In the manifoldy view it has to happen somewhere, since one started with a triangulation of the manifold.

Maybe you are not, but you seem to have been stuck on the idea that because in some papers the type of spinfoam was restricted to be dual to a triangulation of a 4D manifold, somehow ALL spinfoams must not only live in manifolds (which is not true) but must even be dual to triangulations! This is far from the reality. As a convenience, to prove something, one can restrict to special cases like that (the preamble of a paper may give some indication of what special case is in play in that paper.)

marcus said:
...Nodes carry fermions. Links carry Yang-Mills fields. Geometry is purely relational. The basic description is a labeled graph. The graph carries matter fields and there are no infinities.
See the statement of problem #17 on page 14 of the April paper. This points to what I think is now the main outstanding problem---going from QG to QG+M---including dynamics in what is (so far at best) a kinematic description of matter and geometry.

Just from reading the April paper you can see (but you already know) that the way dynamics is handled is as a "path integral" over all possible spinfoams that fit the boundary.
So if nodes carry fermions and links carry Y-M fields, then when we go over to dynamics this means fermions travel along edges, Y-M fields along faces, and interactions occur at vertices.
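In symbols, the schematic form of such a sum over foams (standard in the spinfoam literature, though conventions differ) is:

```latex
% Sum over spinfoams sigma matching the boundary data, with amplitudes
% attached to faces f, edges e and vertices v:
Z \;=\; \sum_{\sigma}\; \prod_{f} A_f(j_f)\; \prod_{e} A_e\; \prod_{v} A_v
% With matter: fermion labels ride along the edges, Yang-Mills labels
% along the faces, and interactions sit at the vertices.
```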

OK, I see Rovelli has listed what I'm asking about as his open problem #6, where he refers to further studies along the lines of...

If you look at problem #6, you will see it is about equation (52). If you look at (52) you will see that manifolds are not involved. Unembedded spinfoams are involved.
He is asking about possible infrared divergences in equation (52) which is a manifoldless equation. Infrared means large j limit. The spin labels get big. That is, large volumes and areas. And check out equations (6-8): area and volume operators are also defined in a manifold-free way! The very concept of area is manifoldless. That's on page 2.
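For example, the standard LQG area spectrum (in its textbook form; Rovelli's conventions may differ by factors) is defined directly from the SU(2) labels on the graph:

```latex
% Area associated with a collection of links ell -- no background
% surface in a manifold is needed:
A \;=\; 8\pi\gamma\,\ell_P^{2}\,\sum_{\ell}\sqrt{j_\ell\,(j_\ell+1)}
% "Infrared" here means the regime where the spins j_ell, and hence
% the areas and volumes, become large.
```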

Because LQG tools are "amphibious" as I said, if somebody wants to prove something they can always restrict to some special case or consider embedded foams and networks as a help---getting a preliminary result. And indeed Rovelli refers to some 2008 work, on a preliminary result about large j divergences, that used a manifold. But you should be careful not to conclude that therefore problem #6 involves manifolds or embedded foams. It doesn't follow.

Indeed equation (52) and the whole core formulation is manifoldless---it is just supporting results that are drawn from alternative older formulations and stuff brought in for comparison (showing the convergence of different lines of development) as in section II-F.
 
Last edited by a moderator:
  • #66
atyy said:
...
BTW, thanks for pointing out the Bianchi-Magliaro-Perini (BMP) paper - it helps me make sense of what Barrett is doing by taking the large j limit as semiclassical - I always thought that should be the hbar zero limit - which is what BMP do.

So do you think one should take the N infinity limit first followed by hbar, or the other way? Would you like to guess now - and see in a couple of months, or however fast those guys are going to work - as to whether the Barrett result will hold up if the N infinity limit is taken first? :-p

That's an intriguing proposal! As usual you are thinking way ahead of me. It sounds like you have visualized a way that they might proceed towards proving that both the large-scale and the semiclassical limits are OK.
At the moment I am not clear enough on how it might be done. And I have absolutely no idea about the timetable. I will take a look at the BMP paper and see if I can get some notion.

Do we measure time in months, or in generations of graduate students? Maybe in generations :biggrin: Will it be one of Rovelli's PhDs (e.g. Bianchi) or might it be a PhD of a PhD (e.g. someone advised by Bianchi). I find it bizarre to look into the future.
One thing they know how to do in LQG is attract and train smart people. And the effort is really focused---with a clear philosophy.

About philosophy, did you notice that Rovelli never showed any interest in the braid representation of matter? (Sundance B-T, Perimeter people, you remember.) Can you think of a reason? How can spin-network links be braided or have any kind of knots? To knot the links you must have it embedded in a manifold. But at short distances the manifold structure dissolves! Rovelli explained this in a series of slides at Strings 2008, depicting how a tangle can untangle. While mathematically appealing, the braid-matter idea was philosophically inconsistent with the program's main (manifoldless) direction---none of the Marseille alumni went for it.
 
Last edited:
  • #67
There is some possibility that the N infinity limit is not needed. Ashtekar et al found in a very particular case that "Thus, the physical inner product of the timeless framework and the transition amplitude in the deparameterized framework can each be expressed as a discrete sum without the need of a ‘continuum limit’: A countable number of vertices suffices; the number of volume transitions does not have to become continuously infinite." http://arxiv.org/abs/1001.5147 This is one of the most confusing things I find.
 
  • #68
atyy said:
There is some possibility that the N infinity limit is not needed. Ashtekar et al found in a very particular case that "Thus, the physical inner product of the timeless framework and the transition amplitude in the deparameterized framework can each be expressed as a discrete sum without the need of a ‘continuum limit’: A countable number of vertices suffices; the number of volume transitions does not have to become continuously infinite." http://arxiv.org/abs/1001.5147 This is one of the most confusing things I find.

Atyy, thanks for pointing me to this Ashtekar paper. I found what I think is the passage, on page 4:
==quote Ashtekar 1001.5147 ==
In LQC one can arrive at a sum over histories starting from a fully controlled Hamiltonian theory. We will find that this sum bears out the ideas and conjectures that drive the spin foam paradigm. Specifically, we will show that: i) the physical inner product in the timeless framework equals the transition amplitude in the theory that is deparameterized using relational time; ii) this quantity admits a vertex expansion a la SFMs in which the M-th term refers just to M volume transitions, without any reference to the time at which the transition takes place; iii) the exact physical inner product is obtained by summing over just the discrete geometries; no ‘continuum limit’ is involved; and, iv) the vertex expansion can be interpreted as a perturbative expansion in the spirit of GFT, where, moreover, the GFT coupling constant λ is closely related to the cosmological constant Λ. These results were reported in the brief communication [1]. Here we provide the detailed arguments and proofs. Because the Hilbert space theory is fully under control in this example, we will be able to avoid formal manipulations and pin-point the one technical assumption that is necessary to obtain the desired vertex expansion: one can interchange the group averaging integral and a convergent but infinite sum defining the gravitational contribution to the vertex expansion (see discussion at the end of section III A). In addition, this analysis will shed light on some long standing issues in SFMs such as the role of orientation in the spin foam histories [49], the somewhat puzzling fact that spin foam amplitudes are real rather than complex [31], and the emergence of the cosine cos S_EH of the Einstein action, rather than e^{iS_EH}, in the classical limit [32, 33].
==endquote==
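The "puzzling fact" at the end of the quote has a simple schematic explanation (my gloss, not Ashtekar et al's wording): summing over both orientations of a history pairs the two exponentials into a cosine, which is real:

```latex
e^{\,i S_{EH}} \;+\; e^{-\,i S_{EH}} \;=\; 2\cos S_{EH}
```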

It's later now and I've had a chance to take a leisurely look. I didn't realize the interest of this paper before. It's going to be helpful to me, so I am extra glad to have it pointed out. I cannot address your remark right away but will read around in the paper and aim for a general understanding. Bringing LQC on board spinfoams is fairly new. I'll try to respond tomorrow.
 
Last edited:
  • #69
Marcus, thanks for your further comments. I've again been away for a few days and just got back.

marcus said:
It is interesting that you are thinking in terms of what, in Computer Science, are called "data structures" used for storage and retrieval. Just for explicitness, I will mention that some examples of data structures are graphs, trees, linked lists, stacks, heaps. Not something I know much about. It is also intriguing that you mention a type of Fourier transform (the FFT).
====================

I think that primarily for pragmatic reasons the game (in QG) is now to find SOMETHING that works. Not necessarily the most perfect or complete, but simply some solution to the problem of a manifold-less quantum theory of geometry and matter.

As for my own perspective, and its association to LQG - I seek a NEW intrinsic measurement theory that is also built on an intrinsic information theory, where information is subjective and consists of evolving transformations between observers (rather than just relational, with a structural-realism view of the transformations as equivalence classes).

So key points to me are

1. An INTRINSIC representation of information (ie. "memory" STORAGE)

2. Data compression (different amounts of "information" can be stored in the same amount of memory, depending on the choice of compression - I suggest the compression algorithms are a result of evolution; the laws of physics "encode" compression algorithms of histories of intrinsic data).

3. The compression algorithms are also information. The coded data is meaningless if the coding system is unknown.

4. Any given observer has to evolve and test their own coding system. Only viable observers survive, and these have a "fit" coding system. The only way to tell whether a coding system is "good" or "bad" is for the observer to interact with the environment and see whether it is fit enough to stay in business. So there is no objective measure of fitness.

marcus said:
I think that primarily for pragmatic reasons the game (in QG) is now to find SOMETHING that works. Not necessarily the most perfect or complete, but simply some solution to the problem of a manifold-less quantum theory of geometry and matter.

If one could just get one manifold-less quantum theory that reduced to General Relativity in the large limit, that would provide pointers in the right direction---could be improved-on gradually, and so forth.

Yes, that ambition fits my view of Rovelli's way of putting it too. I think he wrote somewhere that if we can just find ANY consistent theory that does the job, it would be a great step.

But I do not share that ambition. I think that acknowledging ALL issues with current models that we can distinguish, will make it easier, rather than harder to find the best next level of understanding.

It's in THIS respect that I do not quite find the abstract network interpretation motivated. The MOTIVATION seems to come from the various triangulation or embedded-manifold views. Afterwards it's true that one can capture the mathematics and forget about the manifold motivation, but then the obvious question is: is this the RIGHT framework we are looking for? I am not convinced. Maybe it's related to it, but I still think, if we acknowledge all the obvious points, that there should be a first-principles construction of the "abstract view" in terms of intrinsic measurements and notions.

When you say getting rid of the manifold, I see several possible meanings here

a) just get rid of the OBJECTIVE continuum manifold

a') get rid of the subjective continuum because it's unphysical; it's more like an interpolated mathematical continuum abstraction around the physical core.

b) get rid of the notion of an objective event index (spacetime is really a kind of indexed set of events), whether discrete or continuous. This is already done in GR - the hole argument etc. Ie. the lack of OBJECTIVE reality to points in the event index (if I allow myself to translate the hole argument to the case of a "discrete manifold")

b') get rid of the notion of a subjective event index (since we want the theory to be observer invariant, and only talk about EQUIVALENCE CLASSES of observers)

I think we need to do a + a' + b, but b' is not possible since it is the very context in which any inference lives. I think Rovelli tries to do b' as well, and replaces it with structural realism of the equivalence classes.

If you understand my argument and quest for an intrinsic inference, this is a sin and unphysical in itself. I'm suggesting that the notion of observer-invariant equivalence classes itself is "unphysical". (Some of the arguments are those of Smolin/Unger.)

But I also think that if we really reduce the discrete set of events to the pure information-theoretic abstraction, we also remove the 3D structure. All we have is an index, and how order and dimensional measures emerge must also be described from first-principles self-organisation.

So I expect the abstract reconstruction of "pure measurements" to start from a simple distinguishable index, combined with data structures representing coded information, and communication between such structures (where the communication is what generates the index, first as histories, then as recoded compressed structures) (*)

(*) I think this is what is missing. The abstract LQG view, is MOTIVATED from the normal manifold/GR analogy, and therefore it doesn't qualify as a first principle relation between pure measurements in the sense I think we need.

/Fredrik
 
Last edited:
  • #70
Even when we do reduce the manifold to measurements, you still keep mentioning notions such as area and volume.

But from a first-principles reconstruction, what do we really mean by "area" or "volume"? I find it far from clear. I'd like to see the "geometric notions" (if they are even needed?) constructed more purely from information geometry than is customary.

I think it needs to be rephrased into more abstract things such as capacity, amount of information, or channel bandwidth. Then we also, automatically, cannot distinguish matter from space of particular dimensions, etc. This reconstruction seems to still be missing in LQG.

/Fredrik
 
