Two World-theories (neither one especially stringy)

  • Thread starter marcus
  • Start date
In summary, the conversation discussed two quantum spacetime theories, Lorentzian DT and Loop, that show promise in understanding the quantum physics of gravitational interactions. Lorentzian DT was first proposed in 1998 and has seen a steady number of research papers published since then, while Loop has been around since the early 1990s and has a larger number of published papers. The main difference between the two theories is their treatment of area and volume operators, with no indication yet that Lorentzian DT has discrete spectra. Both theories do not use coordinate systems, with Lorentzian path integral being even more stripped down.
  • #71
Hi selfAdjoint,
before I forget I must say I am looking out for a paper by Bianca Dittrich and Renate Loll that applies CDT to Schwarzschild black holes.

It has not been posted yet, but curiously I just now encountered a citation to it in an article by Arundhati Dasgupta:
http://arxiv.org/find/gr-qc/1/au:+Dasgupta_A/0/1/0/all/0/1

Right at the end of that recent article about black holes (mostly from an LQG-related perspective), in fact in the very last sentence of the conclusions, Dasgupta says:

" There is a discretisation of the Schwarzschild space-time using dynamical triangulation techniques in [20], it shall be interesting to obtain the entropy in that formalism. "

and his [20] is
"[20] B. Dittrich, R. Loll, Dynamical Triangulations of Black Hole Geometries, in preparation. "

However in the AJL article they give a different title, by coincidence it is also their reference [20]
"[20] B. Dittrich and R. Loll: Counting a black hole in Lorentzian product triangulations, preprint Utrecht, to appear. "

So far we know Bianca Dittrich mainly from her work with Thiemann, in particular on the Master Constraint programme.
 
  • #72
selfAdjoint said:
Two responses to this:

I. Maybe CDT will wind up doing for LQG what the lattice does for the standard model: achieving genuine if spotty non-perturbative results while the lion's share of the physics is done n-loop perturbative with the continuum theory (n being a small integer).

II. Topology has lots of things more general than polyhedra (which is what triangulated manifolds are): CW-complexes, ANRs, lots of things in graduated sequences of generality. And there are all those lovely categories.

I am not so optimistic about LQG. I value its contribution enormously: it has beaten a track into the bush, unearthing interesting stuff like the Immirzi parameter and focusing attention on form theories of gravity, as in the case of Freidel/Starodubtsev (who almost seem to have a background-independent perturbative approach), resolving singularities like the BH and the BB, and finding an automatic generic mechanism for inflation. Many of its discoveries will probably persist and mutate in other contexts. BUT. My intuitive feeling is that CDT is now entering an exponential growth phase, and remember

CDT HAS a hamiltonian, and looks like it HAS a reasonable chance of correct classical and semiclassical limiting behavior. and it also has the
DYNAMIC DIMENSION card.

and it looks more like a Feynman path integral to me. So I am apprehensive about the long-range prospects of Loop. I see that it can be a very valuable set of PILOT STUDIES, but I see a possibility that it could be eclipsed by CDT.

of course there is always the possibility of convergent evolution and the eventual proof of an equivalence theorem mapping one model onto the other.

(sorry, I have just been idly speculating, usually a waste of time----what we need to do, I suspect, is not speculate but quickly understand everything we can about CDT as it is right now)
 
  • #73
marcus said:
I am not so optimistic about LQG. I value its contribution enormously: it has beaten a track into the bush, unearthing interesting stuff like the Immirzi parameter and focusing attention on form theories of gravity, as in the case of Freidel/Starodubtsev (who almost seem to have a background-independent perturbative approach), resolving singularities like the BH and the BB, and finding an automatic generic mechanism for inflation. Many of its discoveries will probably persist and mutate in other contexts. BUT. My intuitive feeling is that CDT is now entering an exponential growth phase, and remember

CDT HAS a hamiltonian, and looks like it HAS a reasonable chance of correct classical and semiclassical limiting behavior. and it also has the
DYNAMIC DIMENSION card.
If all there is so far is space without particles, how do they avoid scale invariance, since there seems to be nothing with respect to which to measure distance? What is small distance compared to large distance in a universe without particles to measure with respect to?
 
  • #74
Here's a new paper http://www.arxiv.org/abs/hep-th/0505165

by Mohammad Ansari & Fotini Markopoulou:

We rewrite the 1+1 Causal Dynamical Triangulations model as a spin system and thus provide a new method of solution of the model

with lots of pictures.
 
  • #75
selfAdjoint said:
Here's a new paper http://www.arxiv.org/abs/hep-th/0505165
by Mohammad Ansari & Fotini Markopoulou:
with lots of pictures.

selfAdjoint, thanks for catching this. I will add it to the surrogate sticky links thread. Good for Fotini for venturing into a new field herself and for getting her grad student Ansari into a promising research line at the ground floor.

If you like pizza, look at footnote #3 on page 8.
 
  • #76
I checked the list of grad students at Perimeter. It includes two that have recently co-authored CDT papers:

Tomasz Konopka
Mohammad Ansari

and one we know of from some posts here by John Baez and Kea, namely
Artem Starodubtsev

If I were a grad student, I would want to be in Utrecht (with Ambjorn and Loll), but if not there then maybe Perimeter would not be too bad because they seem to stay engaged. I wish some Utrecht code could be transplanted to a Canadian computer.
 
  • #77
Hi

This is great stuff. I wish I had more time here.

Marcus, do you think the mathematics of fractals as demonstrated by the Mandelbrot and Julia sets will apply to these CDT fractal dimensions? I am still looking for my books on fractals.

Thanks,

Richard
 
  • #78
nightcleaner said:
... do you think the mathematics of fractals as demonstrated by the Mandelbrot and Julia sets will apply to these CDT fractal dimensions? ...

I can only guess that yes, some of what has been learned about fractal sets WILL be applicable to spacetime at very small scales, but since I am not very knowledgeable about those things, I would not know what to expect.

It seems to me that some things about the familiar beautiful fractals would NOT apply. They are self-similar (proportions the same at all scales) and so their dimensionality is the same all the way down. But the CDT people seem to be saying that spacetime is almost just ordinary cliche 4D spacetime at large scale, but gets more frizzy as you go down in scale, so that at small sizes it gets quite cheesy and flaky.
So this is not self-similar or scale-invariant behavior at all.

It's early days for understanding these things (at least for me)
and also CDT could be wrong.
I like it that AJL have not been afraid to go ahead with something that is radically innovative and doesn't even have an underlying smooth continuum (such as strings live in, or such as the paraphernalia of LQG is built on)

Instead of a smooth differentiable continuum, they have an extremely kinky continuum, without even coordinate patches. This is a moral satisfaction to me and resonates with my deep inner perversity, thus having a calming effect. It almost makes me happy. I hope it does you too.
 
  • #79
Yes, quite so.

I suspect the universal set is not discrete in itself, but any observer is a limited system, and therefore the interaction of the observer with the universe is limited... hence the fact that, to any observer, the universe appears to have limits. These limits necessarily recede as the observer develops.

R
 
  • #80
Nightcleaner, I am coming to recognize the paper I call "Dynamically..."
as a tutorial.

(short for hep-th/0105267, "Dynamically Triangulating Lorentzian Quantum Gravity")

Lorentzian Q. G. is one of their old names for CDT; they finally settled on CDT permanently in 2004. But when you take that change in terminology into account, the title tells you what it is. This is a HOW-TO paper: they take you through the 3D case partly because it is easier, and once you have been through the 3D case the 4D case feels better.

A tutorial-type paper is one that is worthwhile working through, at least in part, equation by equation.

For someone who likes triangles and tetrahedra, you may at first be confounded by the fact that there are two kinds of triangles, spacelike and timelike, and they have different areas!

For example look on page 6, equation (4) near the bottom.

area of spacelike triangle = [tex]\frac {\sqrt{3}}{4} [/tex]

I suspect it was Rafael Sorkin who made up the word "bones" for the (D-2)-simplexes. So in the 3D case, the "bones" are just line segments, the edges of the tets. The reason I think this is that he grew up in Chicago.

Anyway, a spacelike triangle is just an equilateral with sides equal to ONE,
so naturally the area = [tex]\frac {\sqrt{3}}{4} [/tex]; we did this in middle school or 9th grade.

BUT THE SQUARE LENGTH OF A TIMELIKE EDGE IS MINUS ALPHA.

In CDT you allow the timelike length to be an imaginary number oi veh oi veh, and you give it some freedom so that its square length does not have to be exactly minus one, but can be [tex] - \alpha [/tex]

So imagine a timelike triangle, in the sandwich, with its base in one spacelike layer. So its base has length one! OK but the other two sides are timelike. so the square of one of those sides is [tex] - \alpha [/tex].

It is an ISOSCELES triangle with the two equal sides IMAGINARY (whoops), so what is the HEIGHT? Well, you just do Pythagoras: take the squared hypotenuse (minus alpha) and subtract 1/4 (the square of half the base):

the square of the height = [tex] - \alpha - 1/4 = - \frac{4\alpha + 1}{4}[/tex].

the height = [tex] i \frac {\sqrt{4\alpha + 1}}{2} [/tex].

now the base = 1, remember it is spacelike and all in one layer,
so one half base times height = [tex] \frac {\sqrt{4\alpha + 1}}{4} [/tex].

where I dropped a factor of i because I thought you might not be watching.
and hey, an area or volume should be a real number.
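If it helps, the two triangle formulas can be checked numerically. This is just my own Python sketch (not code from the AJL paper); it treats alpha as a positive real number and drops the overall factor of i, exactly as in the derivation above.

```python
from math import sqrt

def area_spacelike():
    # equilateral triangle, all three sides spacelike with length 1
    return sqrt(3) / 4

def area_timelike(alpha):
    # isosceles triangle: spacelike base of length 1, two timelike
    # sides of squared length -alpha.  The squared height is
    # -alpha - 1/4 = -(4*alpha + 1)/4; we drop the factor of i.
    return sqrt(4 * alpha + 1) / 4

print(area_spacelike())     # sqrt(3)/4, the middle-school answer
print(area_timelike(1.0))   # sqrt(5)/4, the formula above at alpha = 1
```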
 
  • #81
what I was calling "height" in the previous post was just the height of a timelike triangle (timelike means it spans two layers, we worked with layers before, the CDT universe is foliated in spacelike layers)

but suppose now we have a TETRAHEDRON of the (3,1) sort, that has a spacelike base that it is sitting on. an equilateral triangle with side one.
And it has three timelike isosceles triangles as sides.

now what is the (timelike) height of that tetrahedron?

Well, you draw a picture and see that it is the vertical leg of a right triangle whose squared hypotenuse is minus alpha and whose other squared leg is 1/3 (the squared distance from the centroid of the base to a vertex),
so (ignoring a factor of i) the height of the tetrahedron is
[tex] \frac {\sqrt{3\alpha + 1}}{\sqrt{3}} [/tex].

and the volume of a cone or pyramid is always 1/3 the base area times the height, and the base of this tet is just [tex] \frac {\sqrt{3}}{4}[/tex]

because it's an equilateral triangle! So you multiply the base area by the height, the sqrt{3} cancels, and you get
[tex] \frac {\sqrt{3\alpha + 1}}{4} [/tex]
and one third of that (because it is a pyramid type thing) is

[tex] \frac {\sqrt{3\alpha + 1}}{12} [/tex]

now let's see if that is what AJL say. YES! just look at their
equation (5)
so these are the stretches they have you do before anything aerobic happens. it is a tutorial.
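The same stretch can be done in code. A small Python check of the (3,1) volume formula derived above; again this is my own sketch, with alpha a positive real and the factor of i dropped.

```python
from math import sqrt

def volume_31_tet(alpha):
    # (3,1) tetrahedron: spacelike equilateral base with side 1,
    # apex in the next sheet, joined by three timelike edges of
    # squared length -alpha.
    base_area = sqrt(3) / 4
    # squared timelike height = -alpha - 1/3, since the distance
    # from the base's centroid to a vertex is 1/sqrt(3); drop the i.
    height = sqrt(3 * alpha + 1) / sqrt(3)
    return base_area * height / 3   # pyramid: (1/3) * base * height

print(volume_31_tet(1.0))   # sqrt(4)/12 = 1/6, as in the formula above
```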
 
  • #82
Marcus said:
I suspect it was Rafael Sorkin who made up the word "bones" for the (D-2)-simplexes. So in the 3D case, the "bones" are just line segments, the edges of the tets. The reason I think this is that he grew up in Chicago.

Is there some sly angeleno dig at Chicago here? What is the connection of Chitown with "bones"? And what about LA then? I saw Chinatown, and it was (loosely) based on real events. :biggrin:
 
  • #83
americans like short visual-type words
(this is one reason much of our language is still strong even if the populace gets fat and fatuous)

even when they are doing misguided empty physics, american physicists will think up good words to call things

please remember I am not an angeleno, I do not do angeleno "digs"

I am proud that Rafael Sorkin grew up in Chicago.

Look at the list of organizers of the Loops 05 conference!
Almost no one there was born in the States!
Rafael Sorkin is one of the very few.

Theoretical physics in the US has lost ground and has probably gotten off track. When we hear about really new developments, they seem to be coming from Utrecht! It didn't use to be that way, IIRC.

I think "bones" is a good strong-visual monosyllable coinage, for something that needed a name. (the D-2 simplices are important in Regge calculus)

I don't know who named them bones, but I would bet it was an American.
Sorkin in 1975 wrote "Time-evolution in Regge calculus", which I have not seen.
So I don't know who coined it, but he is as likely as anyone I can think of.

There is also an Englishwoman, Ruth Williams. She might have.

It was not a German, who would have said
[tex]\mathfrak {Knochen}[/tex]
instead

Sometimes I think you are sensitive about the Midwest. We coasters like the Midwest. We listen to Garrison Keillor's Prairie Home Companion.
And sometimes I can't tell if you are joking, like when you say angeleno (which to Northern California ears sounds derogatory).
 
  • #84
Marcus I think the "bones" came from the fact that what you have been calling a tet (for tetrad) is also known as a vierbein (= "four bones" or perhaps "quadruple bone"). In string theory where they do 1+1 GR on the string worldsheet, sometimes you see the term "zweibein" for the corresponding thing, and sometimes by a stated abus de langage they still say vierbein.
 
  • #85
selfAdjoint said:
Marcus I think the "bones" came from the fact that what you have been calling a tet (for tetrad) is also known as a vierbein (= "four bones" or perhaps "quadruple bone"). In string theory where they do 1+1 GR on the string worldsheet, sometimes you see the term "zweibein" for the corresponding thing, and sometimes by a stated abus de langage they still say vierbein.

Words are fascinating, aren't they? And they are soaked in history, which is fascinating too.

In German Bein is "leg"
and a "Dreibein" is the same as a Tripod (word constructed the same way, as "three-leg" or "three-footed thing")

there is no connotation of "bone" as far as I know

and yet Bein does certainly sound like bone!

I suppose that Dreibein is an ancient German word, like the Greeks had Tripods for burning incense, and in Homer you have catalogs of X number of goblets and Y number of gold plates and Z number of Tripods, catalogs of things that one gave to the priest of apollo or in recompense to appease wrath etc.

I could be wrong. I will try to look it up
 
  • #86
It is important to notice that in simplicial-manifold Regge calculus a "bone" has nothing obvious to do with a Bein or a Vierbein or Dreibein.

where the basic building block is a 4-simplex, the bones are the TRIANGLES that form the sides of the Tetrahedra that form the faces of the 4-simplex.

where the basic building block is a 3-simplex, or tetrahedron, then the bones are the EDGES of the triangles that form the sides of the 3-simplex.

The bones are, by definition, the (D-2)-simplexes.


By contrast, a "Dreibein" at some point in a manifold can be pictured as a little X,Y,Z reference frame made of 3 tangent vectors. You can stick out your thumb and 2 fingers to make XYZ axes and give the idea.

In a 4-manifold, a "Vierbein" at some point is the analogous thing made of 4 tangent vectors.

The reason the bone concept is natural is that in 3D Regge calculus, the edges play the same role as the triangles do in 4D.

IN 3D, to make the Regge version of the Einstein action, one measures the DEFICIT ANGLE of the tetrahedra around some EDGE.

IN 4D one is measuring the deficit angle of the 4-simplexes which are joined around some TRIANGLE.

THE BONE IS WHERE THE SIMPLEXES COME TOGETHER: YOU ADD UP THEIR DIHEDRAL ANGLES AROUND IT TO FIND THE DEFICIT ANGLE.

the reason you need the "bone" concept is so that you can speak in general about 3D and 4D and 5D gravity all at once, and so you can treat them as analogs of each other.

the deficit angle of the simplexes joined around a bone is, of course, how Regge discovered he could measure curvature and in this way, using simplexes, he could implement General Relativity.

it is only gradually dawning on me how beautiful this approach is
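To make the deficit-angle idea concrete, here is a little Python sketch (my own illustration, not from any CDT paper). For simplicity it uses regular Euclidean tetrahedra, whose dihedral angle is arccos(1/3); the Regge deficit at a bone is 2*pi minus the sum of the dihedral angles of the simplexes hinged around it.

```python
from math import acos, pi

# dihedral angle of a regular Euclidean tetrahedron, edge length 1
TET_DIHEDRAL = acos(1.0 / 3.0)   # roughly 70.5 degrees

def deficit_angle(n_simplices, dihedral=TET_DIHEDRAL):
    # Regge curvature concentrated at a bone: 2*pi minus the sum
    # of the dihedral angles of the simplexes hinged around it
    return 2 * pi - n_simplices * dihedral

for n in (4, 5, 6):
    print(n, deficit_angle(n))
# five regular tets around an edge almost close up flat: the deficit
# is small and positive; six overshoot, giving a negative
# (saddle-like) curvature concentrated at that edge.
```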
 
  • #87
Hi Marcus

I am way behind on my reading and there are strawberries to put in and trees to plant, as well as dogs, rabbits, and a horse to care for. The good news is the sun has made an appearance and I have managed to get my laptop set up on my friend Peg's table, a rather crowded place, but with a nice view of the bird feeder, apple trees, sauna, and horse eating hay in the pasture. I have coffee, and a quiet idyllic setting for contemplation of the mysteries of alpha. Oh, lucky man.

I am reading the triangulation paper, Dynamically Triangulating Lorentzian Quantum Gravity, arXiv:hep-th/0105267v1, and I even think I can follow some of it. The idea that there are time-like and space-like edges, faces, volumes and so on, seems contrary to spacetime equivalence, but if it produces results I am willing to play along.

I am currently trying to understand Figure 1. Part (a) and part (b) seem to me to be identical tetrahedrons except for a small rotation. Part (a) is said to be a (3,1) tetrahedron, and part (b) is said to be a (2,2) tetrahedron. I would have thought (3,1) and (2,2) are (space,time) notations, but I can't seem to make sense of the figures from that perspective. Perhaps you or selfAdjoint or someone will help me find what it is I am missing here.

I have to use daylight to do chores, but hope to return to this tonight.

Thanks,

Richard



(later...)

Hi Marcus and all

I am working offline while the rice cooks.

Returning to the triangulation paper, I see that a point is defined as having volume one. This seems to me a generalization of the idea of volume, but the authors say that it is conventional and so I accept it, even though my sense of geometry tells me that a point has no volume at all.

Then in eq. (3) Vol(space-like link) = 1 and Vol(time-like link) = sqrt(alpha). Here a link is also given the attribute of volume. Further, in equation (4) a volume of a space-like and a time-like triangle is defined. So the authors have generalized the use of the term volume to apply to point-like objects, line-like objects, and surface-like objects. I am not sure what the purpose of this generalization is.

Then the authors go on to introduce the idea of space-like and time-like bones. Evidently they need a new term other than line, edge, and link to describe spacetime connectivity. The distinctions between these terms seem vague to me, but perhaps all will be made clear.

Richard
 
  • #88
nightcleaner said:
...

I am way behind on my reading and there are strawberries to put in and trees to plant, as well as dogs, rabbits, and a horse to care for. The good news is the sun has made an appearance and I have managed to get my laptop setup on my friend Peg's table, a rather crowded place, but with a nice view of the bird feeder, apple trees, sauna, and horse eating hay in the pasture. I have coffee, and a quiet idyllic setting for contemplation of the mysteries of alpha. Oh, lucky man.

...

The picture of happiness. the laptop and the coffee make it complete. I was curious about what kind of tree you'd be planting, whether fruit or a windbreak line of trees, or for stove-wood. People who plant trees think ahead. My wife and I planted a dozen redwoods in a creekbed near here that is owned by various bureaucratic agencies who seem to have forgotten it exists.

I am reading the triangulation paper, Dynamically Triangulating Lorentzian Quantum Gravity, arXiv:hep-th/0105267v1, and I even think I can follow some of it. The idea that there are time-like and space-like edges, faces, volumes and so on, seems contrary to spacetime equivalence, but if it produces results I am willing to play along.

I am currently trying to understand Figure 1. Part (a) and part (b) seem to me to be identical tetrahedrons except for a small rotation. Part (a) is said to be a (3,1) tetrahedron, and part (b) is said to be a (2,2) tetrahedron. I would have thought (3,1) and (2,2) are (space,time) notations, but I can't seem to make sense of the figures from that perspective. Perhaps you or selfAdjoint or someone will help me find what it is I am missing here.

Yes! I too am willing to play along. BTW, that "Dynamically..." paper has the hard nuts and bolts. I have only calculated a very few of the volumes and checked a few of the sines and cosines. There is a limit to how much of that I want to do---the humble nitty-gritty as opposed to understanding the ideas. Maybe it is a swinging pendulum: gnaw on the hard details a little while, then visualize and think about what it might mean.

the (3,1) and (2,2) refer to the two ways a tetrahedron can sit in the foliation. Foliation means "leaf-ing" or layering. their spacetime is made of layer upon layer of spacelike leaves or sheets connected by layers of tetrahedra.

there is also the (1,3) tetrahedron which is just the (3,1) upsidedown standing on its point with its base in the next layer up

figure 1 is talking about the D = 3 = 2+1 case. So the spatial sheets are 2D and they are divided up into actual triangles (this is the one case where a "triangulation" is actually what it sounds like)

time is just the discrete numbering of spacelike leaves---an integer layer-count

The space in between any two 2D spacelike leaves is filled in by a layer of tetrahedrons. The triangles that triangulate a layer of space are the BASES of (3,1) and (1,3) tetrahedrons. In that way the tetrahedrons actually CONNECT THE TWO TRIANGULATIONS of two adjacent spacelike sheets: a mosaic of tetrahedra in the 3D bulk sandwich joins the mosaic of triangles covering each 2D leaf. I don't know whether to call these 2D things leaves (which "foliation", as in leafy foliage, suggests) or sheets, so I may alternate.

The (3,1) and (2,2) notation refers to the number of vertices in each of two adjacent spatial sheets. A (2,2) tetrahedron has one top ridge (with two endpoints) and one bottom ridge (with two endpoints). With two spacelike sheets each covered by triangles, and the tetrahedrons fitted in between to make a perfect connection between the two triangulations, every side of every triangle must be the top ridge or bottom ridge of some (2,2) tetrahedron.

Now you can erase the paper-thin spatial leaves, if you want, and just imagine a 3D spacetime that is PACKED SOLID WITH TETRAHEDRA; that is all the approximating 3D triangulation is. It is a 3D spacetime packed solid with tets, but don't forget that the packing was arrived at in a special way that respects causality. If you like detective work, you could look at the tets, find the spacelike sheets, and DEDUCE the plan of each one.

Each space-sheet consists of events which could have CAUSED events in the next layer up, and which could have BEEN CAUSED by events in the next layer down.

So this particular way of packing 3D spacetime solid with tets EMBODIES in it a primitive idea of causality. that things cause other things and not the other way around. that is the reason for the layers
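The layer bookkeeping above can be sketched in a few lines of Python. This is purely my own illustration: label each vertex by the integer time of its sheet, and classify a tetrahedron by how many of its four vertices sit in the lower sheet versus the upper sheet.

```python
def classify_tet(vertex_times):
    """Classify a tetrahedron in the sandwich between sheet t and
    sheet t+1 by counting its vertices in the lower and upper sheets."""
    t_lower = min(vertex_times)
    lower = sum(1 for t in vertex_times if t == t_lower)
    return (lower, 4 - lower)

print(classify_tet([3, 3, 3, 4]))   # (3, 1): spacelike base below, tip above
print(classify_tet([3, 3, 4, 4]))   # (2, 2): one spacelike ridge in each sheet
print(classify_tet([3, 4, 4, 4]))   # (1, 3): tip below, spacelike base above
```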
 
  • #89
Hi Marcus I'm online for an hour or so if you want to try real-time on this BB.

I'll work in edit and refresh the screen every few.

Causality seems to be an important item in this landscape of idylls. I read the image of foliation like a book. Write what you want on the pages. If you are clever, you can make a flip-vid (ok now I KNOW that one is a first use!).

Causality is placed in the idyll by two hands, one provides the sequence of the pages, the other provides something clever written upon them. Now we are considering fractal patterns in the idyll. A flip-book of fractals, chosen to foliate into a semblance of motion. The pages do not move and the patterns do not move, but the observer flips through the sequence...



Richard
 
  • #90
Something to keep in mind. the CDT spacetime is not made of simplexes but is the CONTINUUM LIMIT of approximating mosaic spacetimes with smaller and smaller building blocks.

the quantum mechanics goes along with this process of finer and finer approximation. so at each stage in the process of going for the limit you have an ensemble of many many mosaic geometries

so there is not just one continuum which is the limit of one sequence of mosaics (mosaic = "piecewise flat", quite kinky manifold, packed solid with the appropriate dimension simplex)
there is a quantum jillion of continuums each being the limit of a quantum jillion of sequences of mosaics.

or there is a blur of spacetime continuums with a blur of different geometries and that blur is approximated finer and finer by a sequence of simplex manifold blurs

BUT DAMMIT THAT IS TOO INVOLVED TO SAY. So let us just focus on one of the approximating mosaics. Actually that is how they do it with their computer model. they generate a mosaic and study it and measure things, and then they randomly evolve it into another and study that, one at a time, and in this way they get statistics about the set of possible spacetime geometries. One at a time. One definite concrete thing at a time. Forget about the blur.

nightcleaner said:
I am working offline while the rice cooks.

Returning to the triangulation paper, I see that a point is defined as having volume one. This seems to me a generalization of the idea of volume, but the authors say that it is conventional and so I accept it, even though my sense of geometry tells me that a point has no volume at all. Then in eq. (3) Vol(space-like link)=1 and Vol(time-like link) = sqrt(alpha). Here a link is also given the attribute of volume. Further in equation (4) a volume of a space-like and a time-like triangle is defined. So the authors have generalized the use of the term volume to apply to point-like objects, line-like objects, and surface-like objects. I am not sure what the purpose of this generalization is. Then the authors go on to introduce the idea of space-like and time-like bones. Evidently they need a new term other than line, edge, and link to describe spacetime connectivity. The distinctions between these terms seem vague to me, but perhaps all will be made clear.
...

yes, in mathtalk there is always this tension between wanting to use general terms so that you can JUST SAY IT ONCE and have that apply to all cases and all dimensions, and on the other hand wanting to use very concrete words that mean something to the reader.
So they have decided to use the word VOLUME to mean length of 1D things, and area of 2D things, and ordinary volume of 3D things, and hyperspace volume of 4D things.


The word BONE deserves a whole essay by itself. As you know, the world evolved by itself without the help of any divine creator. However, God did create mathematics (he just didn't bother to create the world; that happened spontaneously by some curious accident for which no one is responsible).
So by his infinite grace and glory and kindness, God arranged that in a 3D spacetime, packed solid with tetrahedra, the CURVATURE of the frigging thing can actually be measured by counting the tetrahedra around each LINE SEGMENT. This sounds like a damnable lie, but Regge discovered it, essentially; this is a CDT version of a truth Regge found in 1961.

And again by the infinite grace and mercy of the Creator (of mathematics) in a 4D spacetime the curvature of the frigging thing can be measured by counting the 4-simplexes around each TRIANGLE.

So to GENERALIZE the terminology, as mathematicians believe the Lord wishes them to do, so that they don't have to say edge in one case and triangle in another case, they invented the word BONE. a bone is the
D-2 thing.

so in the 3D case, the bone is the 1D thing
and in the 4D case, the bone is the 2D thing (the triangle)

I have a hard time imagining in 4D how a bunch of simplexes can surround a triangle.

but in 2D I can picture how 5 or 6 or 7 equilateral triangles can surround a POINT (which is the bone in 2D)
and obviously if the count is exactly 6 then the manifold is flat at that point, and if the count is 5 then the manifold has positive curvature at that point etc etc etc.

and in 3D I can picture how various numbers of tets can surround an edge (which is the 3D bone), and how that relates to curvature

but like I say, it is hard for me to imagine in 4D how various numbers of 4-simplexes can surround a triangle, which is the bone in 4D
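The 2D counting argument is easy enough to put in code (again my own toy illustration): n equilateral triangles around a point, each contributing an interior angle of pi/3, and the deficit tells you the sign of the curvature at that bone.

```python
from math import pi

def deficit_2d(n_triangles):
    # n equilateral triangles (interior angle pi/3) around a point,
    # which is the bone in 2D
    return 2 * pi - n_triangles * (pi / 3)

for n in (5, 6, 7):
    print(n, round(deficit_2d(n), 6))
# 5 triangles leave a positive deficit (a cone point, positive
# curvature), 6 close up exactly flat, and 7 give a negative
# deficit (a saddle, negative curvature).
```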

my chorus is doing a concert in a few hours, so I must practice some
 
  • #91
Hi
guest stopped in...will continue later, thanks. Great stuff...R
 
  • #92
marcus said:
Something to keep in mind. the CDT spacetime is not made of simplexes but is the CONTINUUM LIMIT of approximating mosaic spacetimes with smaller and smaller building blocks.

the quantum mechanics goes along with this process of finer and finer approximation. so at each stage in the process of going for the limit you have an ensemble of many many mosaic geometries

so there is not just one continuum which is the limit of one sequence of mosaics (mosaic = "piecewise flat", quite kinky manifold, packed solid with the appropriate dimension simplex)
there is a quantum jillion of continuums each being the limit of a quantum jillion of sequences of mosaics.

or there is a blur of spacetime continuums with a blur of different geometries and that blur is approximated finer and finer by a sequence of simplex manifold blurs

BUT DAMMIT THAT IS TOO INVOLVED TO SAY. So let us just focus on one of the approximating mosaics. Actually that is how they do it with their computer model. they generate a mosaic and study it and measure things, and then they randomly evolve it into another and study that, one at a time, and in this way they get statistics about the set of possible spacetime geometries. One at a time. One definite concrete thing at a time. Forget about the blur.

This all sounds like a numerical method for the calculation of some calculus. Do they have a differential or integral equation for this process that they are doing with a numerical algorithm? Do they show that there is something pathological with the calculus to justify the numerical approach with computers? Thanks.
 
  • #93
Mike2 said:
This all sounds like a numerical method for the calculation of some calculus. Do they have a differential or integral equation for this process that they are doing with a numerical algorithm?

Yes, they have a rather nice set of equations; look at the article
http://arxiv.org/abs/hep-th/0105267
Equation (2) gives the action integral,
the discrete version (following Regge) is (38) on page 13,
and from there, in the next section, a transfer matrix and a hamiltonian.

You may not realize this, but the Einstein equation of classical general relativity is normally solved NUMERICALLY IN A COMPUTER these days, because one cannot solve it analytically. That is how they do differential equations these days, for a lot of physical systems. It is not pathological; it is normal and customary, AFAIK.

the Einstein eqns are only solvable analytically in a few highly simplified cases. so be glad they have a model that they CAN implement numerically---in the real world that is already something of a triumph
:smile:

Do they show that there is something pathological with the calculus to justify the numerical approach with computers? Thanks

as I say, they don't have to justify using numerical methods, and it is not pathological----it's the customary thing to do if you are lucky
 
Last edited by a moderator:
  • #94
marcus said:
yes they have a rather nice set of equations, look at the article
http://arxiv.org/hep-th/0105267
equation (2) gives the action integral
the discrete version (following Regge) is (38) on page 13
from thence, in the next section, a transfer matrix and a hamiltonian
So from (2) it would seem that they are integrating over the various possible metrics in a given dimension. It would seem that the dimension is given a priori as 4D. I don't get, then, what this talk of 2D at small scales is about.

Edit:
Just a moment, do all possible metrics include those that give distance only in 1, 2, and 3 dimensions? If so, is this the way to integrate over various dimensions as well?

marcus said:
as I say, they don't have to justify using numerical methods, and it is not pathological----its the customary thing to do if you are lucky
Isn't it more desirable to find an analytic expression? Or are they just taking it from experience that these path integrals are generally not analytic and require numerical methods to solve? And why the Monte Carlo method? Is this to avoid even the possibility that the other methods of numerical integration can be pathological? Thanks.
 
Last edited by a moderator:
  • #95
Mike2 said:
...Isn't it more desirable to find an analytic expression? Or are they just taking it from experience that these path integrals are generally not analytic and require numerical methods to solve? And why the Monte Carlo method? Is this to avoid even the possibility that the other methods of numerical integration can be pathological? Thanks.

It doesn't seem efficient for me to just be repeating what they say much more clearly and at greater length in their paper, mike. I hope you will read more in the paper.

There is also this recent paper hep-th/0505154 which IIRC has some discussion of what they are actually integrating over, and why numerical, and why Monte Carlo. You should hear their reasons from them rather than have me try to speak for them. thanks for having a look-see at the papers
cheers
 
Last edited:
  • #96
Here is a relevant quote from right near the beginning of
http://arxiv.org/hep-th/0505154
the most recent CDT paper.

It addresses some of the issues raised in Mike's post, such as why numerical, why Monte Carlo. To save the reader trouble I will copy this excerpt in, from the bottom of page 2, the introduction section.

----quote from AJL----
In the method of Causal Dynamical Triangulations one tries to construct a theory of quantum gravity as a suitable continuum limit of a superposition of spacetime geometries [6, 7, 8]. In close analogy with Feynman’s famous path integral for the nonrelativistic particle, one works with an intermediate regularization in which the geometries are piecewise flat (footnote 2.) The primary object of interest in this approach is the propagator between two boundary configurations (in the form of an initial and final spatial geometry), which contains the complete dynamical information about the quantum theory.

Because of the calculational complexity of the full, nonperturbative sum over geometries (the “path integral”), an analytical evaluation is at this stage out of reach. Nevertheless, powerful computational tools, developed in Euclidean quantum gravity [9, 10, 11, 12, 13, 14, 15] and other theories of random geometry (see [16] for a review), can be brought to bear on the problem.

This paper describes in detail how Monte Carlo simulations have been used to extract information about the quantum theory, and in particular, the geometry of the quantum ground state (footnote 3) dynamically generated by superposing causal triangulations.

It follows the announcement of several key results in this approach to quantum gravity, first, a “quantum derivation” of the fact that spacetime is macroscopically four-dimensional [17], second, a demonstration that the large-scale dynamics of the spatial volume of the universe (the so-called “scale factor”) observed in causal dynamical triangulations can be described by an effective action closely related to standard quantum cosmology [18], and third, the discovery that in the limit of short distances, spacetime becomes effectively two-dimensional, indicating the presence of a dynamically generated ultraviolet cutoff [19]...



FOOTNOTES:
2. These are the analogues of the piecewise straight paths of Feynman’s approach. However, note that the geometric configurations of the quantum-gravitational path integral are not imbedded into a higher-dimensional space, and therefore their geometric properties such as piecewise flatness are intrinsic, unlike in the case of the particle paths.

3. Here and in the following, by “ground state” we will always mean the state selected by Monte Carlo simulations, performed under the constraint that the volume of spacetime is (approximately) kept fixed, a constraint we have to impose for simulation-technical reasons.

--------end quote from AJL----
 
Last edited by a moderator:
  • #97
I want to highlight something from the above quote:

by “ground state” we will always mean the state selected by Monte Carlo simulations

the ground state of the geometry of the universe (is not made of simplexes but is the limit of finer and finer approximations made of simplexes and) IS A WAVE FUNCTION OVER THE SPACE OF ALL GEOMETRIES that is kind of like a probability distribution covering a great bunch of possible geometries

and we find out about the ground state wavefunction, find out things like what kind of geometries make up the bulk of it and their dimensionality etc, we STUDY the ground state by doing Monty simulations.
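To make "study the ground state by doing Monty simulations" concrete, here is a minimal Metropolis sketch in Python. Everything in it is a toy of my own: the "configuration" is just a positive integer volume and S is a made-up quadratic action (not the Regge action, and not the actual CDT moves), but the accept/reject rule is the same basic mechanism such simulations use to sample configurations with weight exp(-S).

```python
import math
import random

def metropolis_sample(S, propose, x0, n_steps, seed=0):
    """Generic Metropolis sampler for configurations weighted by exp(-S(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        y = propose(x, rng)
        dS = S(y) - S(x)
        # accept the move with probability min(1, exp(-dS))
        if dS <= 0 or rng.random() < math.exp(-dS):
            x = y
        samples.append(x)
    return samples

# toy ensemble (my invention): a positive integer "volume" n, pinned near n = 10
def S(n):
    return (n - 10) ** 2 / 8.0

def propose(n, rng):
    return max(1, n + rng.choice((-1, 1)))

samples = metropolis_sample(S, propose, x0=1, n_steps=20000)
burn = samples[5000:]                  # discard thermalization
est_mean = sum(burn) / len(burn)       # lands near the minimum of S
```

the sample average settling near the minimum of S is the toy analogue of reading properties off the dominant geometries in the ensemble.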

that's an interesting way of approaching it, I think. it's clear what they mean, and it's operationally defined. If anyone is scandalized by this way of defining the quantum ground state, it would be lovely if they would tell us about it. contribute to the conversation etc.

Love this stuff. Renate Loll is keen as a knifeblade. they do not mess around. often seem to take original approaches.

Oh PLEASE CORRECT ME IF I AM WRONG. I think that before 1998 NO MATHEMATICIANS EVER STUDIED A PL MANIFOLD THAT WAS CAUSALLY LAYERED like this. The PL manifold, or simplicial manifold, is an important object that has been studied for many decades, I don't know how long but I encountered it already in grad school a long time ago. But what Ambjorn and Loll invented was to MAKE THE SIMPLEXES OUT OF CHUNKS OF MINKOWSKI SPACE and to construct a NECCO WAFER foliation of spacelike sheets with a timelike FILLING in between. So you have a PL manifold which is 4D but it has 3D sheets of tetrahedrons.

and in between two 3D sheets of tets there is this yummy white frosting filling made of 4-simplexes which CONNECT the tets in one sheet with the tets in the next sheet up

and of course another layer of filling that connects the tets in that sheet with those in the one below it.

HISTORY: for a few years after 1998, Ambjorn and Loll tried calling that a "Lorentzian" triangulation. So it was going to be, like, a "Lorentzian" quantum gravity using Lorentzian PL manifolds. But the nomenclature didn't work out----the initials would have had to be LQG, for one thing, causing endless confusion with the other LQG.
So then in around 2003 or 2004 they started saying "causal" instead of "Lorentzian"

So here we have a geometric structure which AFAIK has not been studied in mathematics. A CAUSAL PL manifold.

TERMINOLOGY: The traditional "PL" means "piecewise linear" and it could be misleading. the thing is not made of LINES but rather building blox, but simplexes are in a very general abstract sense linear. So a simplicial manifold assembled out of simplex pieces has (for decades) been called "piecewise linear" or PL and the MAPPINGS BETWEEN such things are also piecewise linear (which is very important to how mathematicians think, they like to think about the mappings, or the morphisms of a category)

A NEW CATEGORY: we now have a new category with new mappings. the CAUSAL PL category. it will be studied. the Ambjorn Loll papers are the ground floor.


 
  • #98
the CPL category. CPL mappings

OK so we have a new category where the objects are CPL manifolds and the morphisms are CPL mappings (causal piecewise linear)

there are only two basic papers so far
http://arxiv.org/hep-th/0105267
http://arxiv.org/hep-th/0505154

IMO it is a good time to get into the field.
AFAIK the CPL category has not been studied
and it will be studied.
(it is the basis of a new approach to quantum gravity which has really come alive in the past two years)
math grad students in Differential Geometry should know about CPL manifolds and consider proving some of the first easy theorems; picking the "low hanging fruit" is not a bad idea in math---where really new things are uncommon.

Or maybe i should not say "differential geometry" anymore, I should be saying "combinatorial geometry" or "simplicial geometry" I don't know---language fashions change---fields change terminology as they develop, sometimes.

a CPL mapping has to be a PL mapping (takes simplexes to simplexes, piecewise linear) and it has to respect the causal ordering as well.

I wonder if these things are going to be interesting. maybe and maybe not, can't tell much ahead of time. wouldn't have guessed the physics results of the past two years would be so exciting.

Matter fields have to be laid onto these CPL manifolds. I wonder how that will be done and what kind of new mathematical structure will appear when that is done?

these manifolds do NOT have coordinate patches----they are not locally diffeomorphic to R^n because they don't have a differentiable structure. you could put one on, maybe, but it wouldn't fit well, like a bad suit of clothes.

these CPL manifolds already have curvature. but the curvature is defined combinatorially by COUNTING the number of simplexes clustered around a "bone" (this is a very curious idea, the face of a face is a "bone" or "hinge")

I suspect Rafael Sorkin of promoting the word "bone" (or possibly even coining it). BONES ARE WHAT YOU ADD UP THE DIHEDRAL ANGLES AROUND so that you can know the deficit or surplus angle. I would have to go back to hard copy, a 1975 article by Sorkin in Phys. Rev. D, to find out and that seems too much bother. His 1975 article is "Time-evolution problem in Regge calculus". I suspect, but do not know, that the delightful word "bone" occurs in this article.
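Here is how the counting around a bone turns into a curvature number, in a small sketch (function names are mine, nothing from Sorkin's paper). The deficit angle at a bone is 2*pi minus the sum of the dihedral angles of the simplexes hinged there, so when all the building blocks are identical equilateral pieces the curvature really does reduce to counting simplexes. The 2D case, triangles around a vertex, is the easiest to see:

```python
import math

def deficit_angle(dihedral_angles):
    """Regge curvature concentrated at a bone (hinge): 2*pi minus the
    sum of the dihedral angles of the simplexes clustered around it."""
    return 2 * math.pi - sum(dihedral_angles)

# 2D toy: equilateral triangles (angle pi/3) meeting at a vertex, the "bone"
flat     = deficit_angle([math.pi / 3] * 6)   # six around -> deficit 0, flat
positive = deficit_angle([math.pi / 3] * 5)   # five -> cone point, positive curvature
negative = deficit_angle([math.pi / 3] * 7)   # seven -> saddle-like, negative curvature
```

six triangles around a vertex tile the flat plane; remove one and you get a cone (positive curvature), add one and you get a saddle (negative curvature).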

what one wants to be able to do in the CPL category is take LIMITS, going with finer and finer triangulations. I picture it as somewhat like taking "projective limits". the idea of a limit of finer and finer 4d causal triangulations would be defined. it would be some kind of manifold, maybe a new kind of manifold.

when you go to the limit, the bones disappear, but you still have the curvature. how?
 
Last edited by a moderator:
  • #99
marcus said:
I want to highlight something from the above quote:

by “ground state” we will always mean the state selected by Monte Carlo simulations

the ground state of the geometry of the universe (is not made of simplexes but is the limit of finer and finer approximations made of simplexes and) IS A WAVE FUNCTION OVER THE SPACE OF ALL GEOMETRIES that is kind of like a probability distribution covering a great bunch of possible geomtries
Do they have a "canonical" field equation, like position and canonical momentum operators on the "wave function"?

It seems to me that they are assuming a Lorentzian metric without explanation. Can this metric form be derived? Is it necessary to do any calculations at all?

How does one pronounce "simplicial"?

Thanks.
 
  • #100
Mike2 said:
How does one pronounce "simplicial"?

Hi Mike2

Pronounce "simplicial" with the stress on the second syllable "pli" with a short 'i' as in sim-PLi-shawl

Of course, I'm a minority accent English speaker!

Cheers
Kea
 
  • #101
Kea said:
Hi Mike2

Pronounce "simplicial" with the stress on the second syllable "pli" with a short 'i' as in sim-PLi-shawl

Of course, I'm a minority accent English speaker!

Cheers
Kea

I expect we all would like your accent very much if we could hear it, let us adopt sim-PLi-shawl
as per Kea
(except that I am apt to say shul instead of the more elegant shawl)

all vowels have a tendency to become the schwa
and be pronounced "uh"
 
  • #102
If I may chime in a comment on statistics, the usual reason for using Monte Carlo methods is to give an unbiased representation of all possible [or at least reasonable] initial states of the system under study. The 'outlier' states are the ones you worry about - the ones where the model collapses, leading to unpredictable outcomes. It is vitally important to find the boundary conditions, where the model works and where it does not. This is not necessarily a continuum, where the model always works when x>y, x<z. There may, instead, be discrete intervals where it does not work. You need to run the full range of values to detect this when you do not have a fully calculable analytical model. Interestingly enough, this kind of problem often arises in real world applications - like manufacturing - where you have complex interactions between multiple process variables.
 
Last edited:
  • #103
Chronos said:
If I may chime in a comment on statistics, the usual reason for using Monte Carlo methods is to give an unbiased representation of all possible [or at least reasonable] initial states...

Chronos, thanks for chiming in. I expect the term has different associations for different people. For some people, "Monte Carlo method" is a way of evaluating an integral over a multidimensional space, or more generally a way of evaluating the integral of some function which is defined over a very LARGE set, so that it would be expensive in computer time to do ordinary numerical integration.

what one does is to consider the integral as an average, or (in probabilistic terms) an EXPECTATION VALUE. And then one knows that one can estimate the expectation value empirically by sampling. So one picks some RANDOM points in the large set, and evaluates the function at each point in that random sample, and averages up the function values----and that "Monte Carlo sum" is a stab at the true value of the integral.

(I may be just repeating something you said already in different words. Can't be sure, but want to stress the application of M.C. to evaluating integrals over large sets where other methods are inapplicable or too costly)

Naturally the more random points one can include in one's sample the better the value of the integral one is going to get.
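That "integral as expectation value, estimated by sampling" recipe looks like this in one dimension (a stand-in for the huge sets in question; the function names are my own):

```python
import random

def mc_integral(f, a, b, n, seed=0):
    """Monte Carlo estimate of the integral of f over [a, b]:
    (b - a) times the average of f at n uniform random sample points."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# example: the integral of x^2 over [0, 1] is exactly 1/3
est = mc_integral(lambda x: x * x, 0.0, 1.0, 100_000)
```

the error shrinks like 1/sqrt(n) regardless of the dimension of the domain, which is why the method stays usable where grid-based quadrature becomes hopeless.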

Ambjorn et al (AJL) approach to quantum gravity is a PATH INTEGRAL approach, where the "path" is AN ENTIRE SPACETIME.

It is like a Feynman path integral except Feynman talks about the path of an actual particle as it goes from A to B, and AJL talk about the PATH THE UNIVERSE TAKES IN THE SPACE OF ALL GEOMETRIES AS IT GOES FROM BIGBANG TO BIGCRUNCH or from beginning to end whatever you want to call them. And for AJL a "path" is a possible spacetime or a possible evolution of the geometry. Well that is not such a big deal after all. It is just a Feynmanian path integral, but in some new territory.

And they want to study various properties like dimension. So they want to find expectation values, essentially, but the set of all paths is a BIG SET. So it is not practical to do the whole integral (over the range of all spacetimes, all evolutions from A to B or beginning to end). So what they are doing with their Monte Carlo is this:

they found a clever way to pick random spacetimes that are paths of geometry from beginning to end. So they pick many many, a large random sample, and they evaluate the function they want to study.

they evaluate the function they want to study for each of a random sample of spacetimes and they AVERAGE UP and that is using Monty method to evaluate the "path integral"

for now, the functions they are evaluating at sample points are very basic functions like "overall spacetime Hausdorff dimension" or "spatial slice dimension" or "small-scale diffusion dimension, in the spacetime", or in the spatial slice, or in a "thick slice". they have a lot of ways to measure the dimension and they are studying the general business of dimensionality.
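For a feel of what a "diffusion dimension" measurement looks like, here is a toy version: release random walkers and watch how fast the probability of being back at the start falls off, since P(t) ~ t^(-d_s/2). The sketch below walks on the ordinary flat square lattice instead of on a CDT triangulation, so the answer should come out near 2; the setup and names are my own illustration, not AJL's code.

```python
import math
import random

def spectral_dimension(t1, t2, n_walkers, seed=0):
    """Estimate the spectral (diffusion) dimension d_s of the square
    lattice Z^2 from the return probability of a random walk:
    P(t) ~ t**(-d_s / 2), so d_s = -2 * ln(P(t2)/P(t1)) / ln(t2/t1).
    t1 and t2 must be even (parity: odd-time returns are impossible)."""
    rng = random.Random(seed)
    hits = {t1: 0, t2: 0}
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_walkers):
        x = y = 0
        for t in range(1, t2 + 1):
            dx, dy = moves[rng.randrange(4)]
            x += dx
            y += dy
            if x == 0 and y == 0 and t in hits:
                hits[t] += 1
    p1 = hits[t1] / n_walkers
    p2 = hits[t2] / n_walkers
    return -2.0 * math.log(p2 / p1) / math.log(t2 / t1)

# flat 2D lattice, so the diffusion dimension should come out near 2
d_s = spectral_dimension(t1=8, t2=32, n_walkers=50_000)
```

on a CDT geometry one runs the same kind of diffusion process on the simplicial complex itself; the striking AJL result is that the measured d_s drifts from about 4 at large scales toward about 2 at short ones.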

but the functions could be more sophisticated like "number of black holes" or "density of dark energy" or "abundance of lithium" (maybe? I really can't guess, I only know that this is probably only the beginning)
With Monty path integral method it should be possible to evaluate many kinds of interesting functions (defined on the ensemble of spacetimes).

this is early days and they are studying dimensionality, but they can study a lot of other aspects of the world this way and i expect this to be done. they say they are now working on putting matter into the picture.

they are going to need more computer time.

the present spacetimes are ridiculously small (on the order of a million blox) and short-lived.
Have you had a look at a computer-generated picture of a typical one of their spacetimes? if so, you know what i mean
 
Last edited:
  • #104
marcus said:
Chronos, thanks for chiming in. I expect it has different associations. For some people, "Monte Carlo method" is a way of evaluating an integral over a multidimensional space, or more generally a way of evaluating the integral of some function which is defined over a very LARGE set, so that it would be expensive in computer time to do ordinary numerical integration.
I had to do a numerical calculation in a graduate course I took years ago to see the difference between the Monte Carlo method and some other traditional algorithms of numerical integration. What I learned was that most of the other numerical integration schemes rely on predictable algorithms that make some integrals impossible to evaluate. They blow up to infinity. Or there is always a great difference when you increase the resolution; they don't converge. It seems the algorithm used in traditional methods to divide the interval of integration into subdivisions itself actually contributes to the pathological nature of that numerical method. But the Monte Carlo method introduces a measure of randomness in the algorithm to help avoid any pathologies introduced by more predictable algorithms. Monte Carlo still equally divides the interval of integration, but picks at random where in each interval to evaluate the integrand.

I suspect that it is now commonplace to evaluate integrals in physics using Monte Carlo just to avoid even the possibility of other methods being pathological. Maybe someone else could confirm or deny this suspicion of mine.
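What Mike2 describes (equal subdivisions of the interval, with the evaluation point chosen at random inside each one) is usually called stratified sampling. A minimal sketch, with function names of my own:

```python
import random

def stratified_mc(f, a, b, n_strata, seed=0):
    """Stratified Monte Carlo: split [a, b] into n_strata equal
    subintervals and evaluate f at one uniformly random point inside
    each; the width-weighted sum estimates the integral."""
    rng = random.Random(seed)
    h = (b - a) / n_strata
    total = sum(f(a + (i + rng.random()) * h) for i in range(n_strata))
    return h * total

# same test integral as before: x^2 over [0, 1], exact value 1/3
est = stratified_mc(lambda x: x * x, 0.0, 1.0, 1000)
```

for a smooth one-dimensional integrand this drops the error from the plain Monte Carlo 1/sqrt(n) to roughly n^(-3/2); in the huge configuration spaces of the path integrals discussed above, though, the weighting of the samples matters far more than any stratification.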
 
  • #105
Mike2 said:
I suspect that it is now common place to evaluate integrals in physics using Monte Carlo just to avoid even the possibility of other methods being pathological. Maybe someone else could confirm or deny this suspicion of mine.

Just occurred to me what Monty Carlo means:
it is Carlo Rovelli in his birthday suit.
 
