Triangulations study group, Spring 2008

  • #1
marcus
Triangulations is a leading nonstring QG approach which is among the least known, and where we have the most catching-up to do to follow it. The name is short for CAUSAL DYNAMICAL TRIANGULATIONS (CDT).

If you want to join me in studying up on CDT in early 2008 then the articles to print off so you have hardcopy to scribble are:

http://arxiv.org/abs/0711.0273 (21 pages)
The Emergence of Spacetime, or, Quantum Gravity on Your Desktop

"Is there an approach to quantum gravity which is conceptually simple, relies on very few fundamental physical principles and ingredients, emphasizes geometric (as opposed to algebraic) properties, comes with a definite numerical approximation scheme, and produces robust results, which go beyond showing mere internal consistency of the formalism? The answer is a resounding yes: it is the attempt to construct a nonperturbative theory of quantum gravity, valid on all scales, with the technique of so-called Causal Dynamical Triangulations. Despite its conceptual simplicity, the results obtained up to now are far from trivial. Most remarkable at this stage is perhaps the fully dynamical emergence of a classical background (and solution to the Einstein equations) from a nonperturbative sum over geometries, without putting in any preferred geometric background at the outset. In addition, there is concrete evidence for the presence of a fractal spacetime foam on Planckian distance scales. The availability of a computational framework provides built-in reality checks of the approach, whose importance can hardly be overestimated."

http://arxiv.org/abs/0712.2485 (10 pages)
Planckian Birth of the Quantum de Sitter Universe

"We show that the quantum universe emerging from a nonperturbative, Lorentzian sum-over-geometries can be described with high accuracy by a four-dimensional de Sitter spacetime. By a scaling analysis involving Newton's constant, we establish that the linear size of the quantum universes under study is in between 17 and 28 Planck lengths. Somewhat surprisingly, the measured quantum fluctuations around the de Sitter universe in this regime are to good approximation still describable semiclassically. The numerical evidence presented comes from a regularization of quantum gravity in terms of causal dynamical triangulations."

There are three or so earlier papers that have explanations of the actual setup and the computer runs; they are from 2001, 2004, 2005. I will get the links later. Have to go out at the moment but will get started with this soon.
 
  • #2
An early paper delving into elementary detail, which I found useful, is
http://arxiv.org/abs/hep-th/0105267

Here's one summarizing the field as of May 2005: clear exposition alternates with some gnarly condensation.
http://arxiv.org/abs/hep-th/0505154

I think those are the main ones needed to learn the subject,
but for an overview there are also a couple of review papers written for a wider audience
http://arxiv.org/abs/hep-th/0509010
http://arxiv.org/abs/hep-th/0604212

and a couple of short papers of historical interest----the 2004 breakthrough paper where for the first time a spacetime emerged that was 4D at large scale
http://arxiv.org/abs/hep-th/0404156
and the 2005 paper where they explored quantum dimensionality at small scale and discovered a fractal-like microstructure with reduced fractional dimensionality
http://arxiv.org/abs/hep-th/0505113

We can pick and choose, as needed, among this stuff. The main thing is to focus on the two recent ones (November and December 2007) and only go back to the earlier papers when something isn't clear and needs explaining.

My impression is that CDT is ready to take on massive point particles and that 2008 could be the year it recovers Newton's law.
 
Last edited:
  • #3
There's a wide-audience article from March 2007 that looks pretty good, as a non-technical presentation, but it's in German
http://www.phys.uu.nl/~loll/Web/press/spektrumtotal.pdf

=======================
EDIT TO REPLY TO NEXT

Hello Fredrik, I was glad to see your reflections on the CDT approach. I think the questions you raise are entirely appropriate. I will try to respond tomorrow (it is already late here.)
 
Last edited:
  • #4
initial amateur reflection

Hello Marcus, I'd be interested to hear what those who work on this add to this thread, but to start off in a philosophical style without trying to derail the thread, here are some personal thoughts.

If I get this right, in this (CDT of Loll) idea, one more or less assumes that

(1) nothing is wrong with QM or its predictions
(2) nothing is wrong with GR or its predictions

And one tries to find a way to compute, and make sense out of the path integral, by *choosing* a specific definition and partitioning of the space of spacetimes, and assuming it to be complete, and this also somehow implicitly contains an ergodic hypothesis at the instant you choose the space of spaces.

My personal reflection is that I would expect there to be a scientific or physical basis for such a choice or hypothesis? Isn't such a choice effectively an expectation, and how is this induced? If there is a compelling induction for this I would be content, but I don't see it.

I can appreciate the idea that - let's try the simple thing first, and try to find a way (by hand) to interpret/define the space of spacetimes in a way that makes sense out of the path integral and hopefully reproduces GR in the classical limit - but even given a success, I would still be able to formulate questions that leave me unsatisfied (*)

(*) To speak for myself, I think it's because I look for not only a working model for a particular thing (without caring HOW this model is found), but rather a model for models, one that would be rated with a high expected survival rate even when taken into a new environment. I know there are those who reject this outright by choosing not to classify it as "physics".

Anyway, to discuss the suggested approach, leaving my reservations aside, what does that mean?

Either we work out the implications and try to find empirical evidence for it, or we try to theoretically rate its plausibility by logical examination? Any other ways?

To try to examine its plausibility, I immediately reach the above reasoning of modelling the model, scientific processes etc. And then I personally come to the conclusion of a high degree of speculation that actually originates in the premises of (1) and (2). I would intuitively try to rate and examine the premises before trying to invest further speculation in working out their implications? My gut feeling tells me to invest my speculations in analysing and questioning the premises rather than adding more speculation to an already "speculative" premise.

Before I can defend further speculation, I feel that I have to question my current position, anything else drives me nuts.

Does anyone share this, or is it just my twisted perspective that causes this?

I should add that I think the original idea is interesting and I hope the pros here will elaborate on it. I think it's an excellent idea to have a thread on this, and maybe the discussion can help people like me gain motivation in this idea.

(I hope that sensitive readers don't consider this post offensive. It may look like philosophical trash, but fwiw it has a sincere intent to provoke ideas on the methodology.)

/Fredrik
 
  • #5
marcus,

When you first posted this thread, I did some searching here on PF thinking that a couple years ago there was a member here that was posting his ideas about this. i.e. if I'm remembering correctly. Do you remember anybody here that had their own theory? Are they still around if so?

Don
 
  • #6
dlgoff said:
marcus,

When you first posted this thread, I did some searching here on PF thinking that a couple years ago there was a member here that was posting his ideas about this. i.e. if I'm remembering correctly. Do you remember anybody here that had their own theory? Are they still around if so?

Don

Don, you may be remembering some threads I started about Ambjorn and Loll's CDT.
But if it was somebody discussing their OWN theory then it wouldn't have been me and I actually can't think of anyone with their own theory that resembled CDT.
 
  • #7
more reflections

Mmm, the more I think about this, the more my thinking takes me away from what I suspect(?) was Marcus's intended style of discussion...

Fra said:
And one tries to find a way to compute, and make sense out of the path integral, by *choosing* a specific definition and partitioning of the space of spacetimes, and assuming it to be complete, and this also somehow implicitly contains an ergodic hypothesis at the instant you choose the space of spaces.

My personal reflection is that I would expect there to be a scientific or physical basis for such a choice or hypothesis? Isn't such a choice effectively an expectation, and how is this induced? If there is a compelling induction for this I would be content, but I don't see it.

What strikes me first is that different ergodic hypotheses should give different results, and what is the discriminator between different ergodic hypotheses?

Choosing an ergodic hypothesis is IMO pretty much the same as choosing the microstructure - since redefining or transforming the microstructure seems to imply choosing another ergodic hypothesis. And in this case, not only the microstructure of spacetime, but rather the microstructure of the space of spacetimes, whatever the physical representation is for that :) Would the fact that someone is led to ask a question about the space of spacetimes suggest that there should follow a natural prior? I like to think so at least.

Could this be rooted in a missing constraint on the path integral formalism? I can't help thinking that these difficulties are rooted in the premises of the foundations of QM and GR.

Unless there is some interest in this thinking I'll stop here, as I have no intention of turning this thread into something that, no matter how interesting, only I am interested in reflecting over :shy:

/Fredrik
 
  • #8
Fra said:
Mmm, the more I think about this, the more my thinking takes me away from what I suspect(?) was Marcus's intended style of discussion...

I'm not dedicated to one or another topic or style, in this case. As long as you relate your discussion to the Triangulations path integral in a way I can understand, I'm happy :smile:

Unless there is some interest in this thinking I'll stop here, as I have no intention of turning this thread into something that, no matter how interesting, only I am interested in reflecting over :shy:

Well I'm interested, so there is no obstacle to your continuing.

I think you are probing the question of the REGULARIZATION which the Utrecht people use to realize their path integral.

Before being regularized a path integral is merely formal. The space of all spatial geometries is large, and the space of all paths through that large space is even larger.
So one devises a way to SAMPLE. Like deciding to draw all human beings as cartoon stick figures----this reduces the range of possible drawings of people down to something manageable.

For me, the natural way to validate a regularization is to apply the old saying: The proof of the pudding is in the eating! You pick a plausible regularization and you see how it works.

You Fredrik are approaching the choice of regularization, as I see it, in a different more abstract way from me. So as long as I can relate it to what I know of the Utrecht Triangulations path integral I am happy to listen. I like to hear a different approach from what I am used to.

BTW you are using the concept of ERGODICITY and you might want to pedagogically explain, in case other people are reading.
Ambjorn and Loll, in several of the papers (IIRC the 2001 methodology paper I linked) talk about ergodicity in the context of their Monte Carlo.

Basically a transformation, or a method of shuffling the cards, is ergodic if it thoroughly mixes. Correct me if I am wrong, or if you have a different idea. So if a transformation is ergodic then if you apply it over and over eventually it will explore all the possible paths or configurations or arrangements----or come arbitrarily close to every possible arrangement.

When the Utrecht people do the Monte Carlo, they put the spacetime geometry through a million shuffles each time they want to examine a new geometry. I forget the details. But there is some kind of scrambling they do to get a new geometry and they do it a million times. And then they look and calculate observables (like a time-volume correlation function, or a relation between distance and volume, or a diffusion dimension). Then they again shuffle the spacetime geometry a million times (before making the next observations). This kind of obsessive thoroughness is highly commendable, but it means that it can take WEEKS to do a computer run and discover something. I wish someone would give them a Beowulf cluster.
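As an aside on that "diffusion dimension": if I have the definition right, what they measure is the spectral dimension, extracted from the return probability P(\sigma) of a random walk of \sigma steps on the triangulation,

d_s(\sigma) = -2\, \frac{d \ln P(\sigma)}{d \ln \sigma},

which on ordinary flat d-dimensional space gives d_s = d, since there P(\sigma) \propto \sigma^{-d/2}. See their papers for how it is actually implemented; this is just the textbook formula as I recall it.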

One thing that delights me enormously is how pragmatic the work is.

Another is that the little universes pop into existence (from the minimal geometry state) and take on a life of their own for a while and then shrink back down.

Another is that they actually GET the microscopic quantum spacetime foam that people have speculated for 50 years is down at that level. And they don't just get the foam, at a larger scale they get the familiar macro picture of approximately smooth 4D.

Another is the presumed universality. Different regularizations involving other figures besides triangles have been tried at various times, and the result doesn't seem to depend on choosing to work with triangles----which seems to be supported by the fact that in any case one lets the size go to zero. I don't think this has been proven rigorously, but I have seen several papers where they are using a different mix of building blocks. And it makes sense. If you are going to take the size to zero it should not matter very much what the shape of the blocks is.

At this point I think Ambjorn and Loll are looking pretty good. They have a lot going for them, supporting the way they tell the story. That's why I thought it would be timely to have a study thread.
 
Last edited:
  • #9
Thanks.

Just quick note. I'll comment more later, it's getting friday evening here and I won't be on more tonight.

I sense some differences in our way of analysis and probably point of view as well, and I also find that interesting. I get the feeling that you have somewhat of a cosmology background in your thinking? Or?

As I see it there are several parallel issues here, each of them interesting, but what is worse, I see them as entangled too from an abstract point of view.

Your pudding idea is really a good point. I guess my point is that the question of whether to make and eat this pudding or not cannot be answered by doing it; this decision needs to be made on incomplete information, and to actually do it is what I consider the feedback. This seems silly but it even relates to the regularization thing! I think I may be too abstract to communicate this at this point, but if the proof is in the pudding, the problem is that there are probably more potential puddings around than I could possibly eat, and at some point I think evolving an intelligent selection is favourable.

Anyway, I am trying to skim through some of the other papers as well, and trying to analyse the causality constraint from my point of view. The biggest problem I have is that there is also another problem, which I have not solved, and neither did they, that has to do with the validity and logic of Feynman's action weighting... The problem is that as I see it we are making one speculation on top of another, which makes the analysis even more cumbersome.

I guess one way is to not think so much, take the chance, and instead just try it... (like you suggest)... but that's not something I planned to do here, although it would be fun to implement their numerical routines... I like that kind of project, but my current comments are mainly a subjective "plausibility analysis" of mine. If the result of that is positive I would not hesitate to actually try to implement some simulations on my own PC. But that is in my world the "next phase" so to speak. I'm not there yet.

Anyway I'll be back.

/Fredrik
 
  • #10
elaborating the perspective of my comments

marcus said:
I think you are probing the question of the REGULARIZATION which the Utrecht people use to realize their path integral.

Yes I am, since this seems to be the key of their approach.

How come we are asked to "make up" these regularizations in the first place? Does this, or does it not indicate that something is wrong?

The other question is the logic and motivation of the particular scheme of regularization proposed by CDT - given that "regularizing" is the way to go in the first place.

I personally expect that the correct "regularization" should follow from a good strategy, or probably then the notion of "regularization" wouldn't appear at all.

marcus said:
Before being regularized a path integral is merely formal. The space of all spatial
geometries is large, and the space of all paths through that large space is even larger.
So one devises a way to SAMPLE. Like deciding to draw all human beings as cartoon stick
figures----this reduces the range of possible drawings of people down to something manageable.

In my thinking, not only is the space "BIG" and the path integral formal, I see it as a bit ambiguous, because how do we even measure the size of this space?

marcus said:
For me, the natural way to validate a regularization is to apply the old saying: The proof of the pudding is in
the eating! You pick a plausible regularization and you see how it works.

At first glance I find it hard to disagree with this.

But, if we consider the "regularization" that the scientist needs to make here: how many possible regularizations are there? If there are a handful of possibilities and the testing time for their viability is small, we can afford to say let's test them all, starting with a random or arbitrary possibility, and then take the best one. (And I often try to express myself briefly, because if it's too long I don't think anyone bothers reading it.)

But if the number of possibilities gets very large and/or the testing time increases, then it is clear that to survive this scientist CANNOT test all options even if he wanted to, so he needs to rate the possibilities and start to test them one by one in some order of plausibility. So the ability to construct a rating system seems to be a very important trait. To construct such a rating system, he must respect the constraints at hand. He has limited memory and limited processing power. Don't these constraints themselves in effect impose a "natural" regularization?

This is closely related to the same problem we are dealing with in the path integral regularization. These analogies at completely different complexity scales inspire me a lot.

The idea that I rate highest on my list at this point is that the observer's complexity is exactly what imposes the constraints and implies the effective regularization we are seeking. Basically, the information capacity of the observer limits the sum in the path integral. It does not ban anything, but it limits how many virtual paths the observer can relate to.

(*) What I am most interested in, is to see if the CDT procedure POSSIBLY can be interpreted as the environmentally selected microstructure of virtual paths? If this is so, it would be very interesting still! But I would have to read in a lot more detail to have an opinion about that. As you note, I implicitly always relate to an observer, because anything else makes little sense.

What if MAYBE the nonsensical path integral is the result of the missing constraints - the observer's complexity? This is exactly what I am currently trying to investigate.

marcus said:
You Fredrik are approaching the choice of regularization, as I see it, in a different more abstract way from me.
So as long as I can relate it to what I know of the Utrecht Triangulations path integral I am happy to listen.
I like to hear a different approach from what I am used to.

In a certain sense, their fundamental approach is not directly plausible to me, but I still find it interesting if one considers it to be an approximate approach, where they at least can make explicit calculations at an early stage.

marcus said:
BTW you are using the concept of ERGODICITY and you might want to pedagogically explain, in case other people
are reading.

Ambjorn and Loll, in several of the papers (IIRC the 2001 methodology paper I linked) talk about ergodicity in
the context of their Monte Carlo.

Basically a transformation, or a method of shuffling the cards, is ergodic if it thoroughly mixes.
Correct me if I am wrong, or if you have a different idea. So if a transformation is ergodic then
if you apply it over and over eventually it will explore all the possible paths or configurations
or arrangements----or come arbitrarily close to every possible arrangement.

Loosely speaking I share your definition of an ergodic process as "perfect mixing"; however, I think these things can be represented and attacked in different ways, and the same conceptual thing can be given different abstractions in different contexts. I'm afraid that due to my twisted perspective I'm most probably not the one to give this the best pedagogic presentation, but I can try to elaborate a little bit at least.

On one hand there is ergodic theory as a part of mathematics that also relates to chaos theory. I do not however have the mathematician's perspective, and often mathematical texts have a different purpose than, say, a physicist's, and thus they ask different questions even though they often stumble upon the same structures.

A mathematical definition of an ergodic transformation T relative to a measure is that the only T-invariant sets have measure 0 or 1. (The transformation here is what generates the "shuffling process" you refer to.)
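Written out, this is just the standard textbook definition as I remember it: for a measure-preserving transformation T on a probability space (X, \mathcal{F}, \mu), i.e. \mu(T^{-1}A) = \mu(A) for every measurable set A, ergodicity means

T^{-1}A = A \;\Rightarrow\; \mu(A) = 0 \text{ or } \mu(A) = 1.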

A connection to classical dynamical systems is where the transformation T is "time evolution" and the measure is the classical phase space volume (in q and p). Hamiltonian time evolution preserves this phase space volume - that is Liouville's theorem in classical Hamiltonian mechanics - and ergodicity is then defined relative to that invariant measure.

Another connection to fundamental physics is that these things relate to the foundations of statistical mechanics but also the probability theory in QM.

Note here a major point: ergodicity is defined _relative to a measure_, which in turn implicitly relates to measurable sets, and the point I raised regards the fact that ergodicity is relative.

But I don't find this abstraction the most useful for my purpose. Instead of thinking of phase space in the classical sense, I consider, slightly more abstractly, the microstates of the microstructure.

Classically the microstructure is defined by what makes up the system, say a certain number of particles with internal structure etc. Given this microstructure, the possible microstates typically follow.

And in statistical mechanics, the usual stance is that the natural prior probability of finding the system in any particular microstate is the same as that of finding it in any other microstate. I.e. the microstates are assumed to be equiprobable, typically by appeal to the principle of indifference, arguing that we lack discriminating information. This also provides the connection between Shannon and Boltzmann entropies.
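To spell out the connection I mean (standard textbook material, just for reference): with W equiprobable microstates, p_i = 1/W, the Shannon entropy becomes

H = -\sum_i p_i \ln p_i = \ln W,

so Boltzmann's S = k_B \ln W is just k_B times the Shannon entropy of the uniform distribution over the microstates.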

I think that such logic is self-deceptive. My argument is that the indifference is typically already implicit in the microstructure - and where did this come from?

It *seems* very innocent to appeal to the principle of indifference; however, if we have no information at all, then where did the microstructure itself come from? I think that there is a sort of "dynamical" principle where the microstructure is emergent, and this will replace the ergodic hypothesis and appeals to the somewhat self-deceptive principle of indifference.

What I'm trying to communicate are the conceptual issues we need to deal with, and these IMHO lie at the heart of foundational physics and IMO also at the heart of science and its method. IMO the same issues still haunt us, but in classical physics things were "simple enough" to still make sense despite this issue, and I suspect that as our level of sophistication increases in QM and GR and QG, these issues are magnified.

I also see a possible connection between ergodic hypothesis and the ideas of decoherence.

These things are what I see; I personally sense frustration when these things are ignored, which I think they commonly are. My apparent abstractions are because I at least try to resolve them, and I personally see a lot of hints that these things may also be at the heart of the problem of QG.

My point is that, this being at the root of many things, the ergodic hypothesis is not proved. I want these pillars to be rated, and seen in a dynamical context.

This was just to explain my point of view, because without that any of my further comments will probably make no sense. I will try to read more of the papers that you put up, Marcus, and respond _more targeted to the CDT procedure_ when I've had the chance to read more. These were just my first impressions, and (*) is what motivates me to study this further.

/Fredrik
 
  • #11
To respond to a partial sampling of your interesting comment!
Fra said:
...
(*) What I am most interested in, is to see if the CDT procedure POSSIBLY can be interpreted as the environmentally selected microstructure of virtual paths? If this is so, it would be very interesting still! But I would have to read in a lot more detail to have an opinion about that. As you note, I implicitly always relate to an observer, because anything else makes little sense.

What if MAYBE the nonsensical path integral is the result of the missing constraints - the observer's complexity? This is exactly what I am currently trying to investigate...

In a certain sense, their fundamental approach is not directly plausible to me, but I still find it interesting if one considers it to be an approximate approach, where they at least can make explicit calculations at an early stage...

I will try to read more of the papers that you put up, Marcus, and respond _more targeted to the CDT procedure_ when I've had the chance to read more. These were just my first impressions, and (*) is what motivates me to study this further...

The papers are not necessarily for us all to read or to read thoroughly but can in some cases simply serve as a reality CHECK that we are summarizing accurately. I don't want to burden you with things you are not already curious about.

For example there is a 2005 paper where the Utrecht team is very proud that they derived the same wave function for the (size of the) universe that people like Hawking and Vilenkin arrived at earlier and had been using in the 1980s and 1990s. Hawking still refers to his "Euclidean Path Integral" as the only sensible approach to quantum cosmology. In a sense the Utrecht Triangulations approach is an OUTGROWTH of the Feynman (1948) path integral or sum-over-histories, as further processed and applied to cosmology by Hawking. There was a period in the 1990s where many people were trying to make sense of Hawking's original idea by using dynamical triangulations. It didn't work until 1998, when Loll and Ambjorn had the idea to use CAUSAL dynamical triangulations.
So there is this organic historical development that roughly goes Feynman 1950, Hawking 1985, Utrecht group 2000, and then the breakthrough in 2004 where they got 4D spacetime to emerge. Since that is the family tree it is somehow NOT SURPRISING that in 2005 they recovered Hawking's "wave function for the universe" (as he and Vilenkin apparently liked to call it). It is really just a time-evolution of the wavefunction of the size of the universe.

The only reason I would give a link to that paper is so you could, if you want, check to see that my summary is all right. I don't recommend reading it---this is just another part of the picture to be aware of.
==========================

The gist of what you say in the above about motivation has, I recognize, to do with plausibility.

I think that is what you refer to in (*) when you say "to see if the CDT procedure POSSIBLY can be interpreted as the environmentally selected microstructure of virtual paths?"

You could be more specific about what you mean by environmentally selected---perhaps one could say NATURAL. And you could specify what aspect(s) of their procedure you would like to decide about.

Perhaps triangles (if that is part of it) are not such a big issue. I have seen several papers by Loll and others where they don't use triangles. It is the same approach; they just use differently shaped cells. But triangles (simplices) are more convenient. This is the reason there is a branch of mathematics which is like differential geometry except that it is simplicial geometry, piecewise linear geometry. It is tractable.

Loll often makes the point about UNIVERSALITY. Since they let the size go to zero, it ultimately doesn't matter what is the shape of the cells.

The piecewise linearity is part of the path integral tradition going back to 1948. Feynman used piecewise linear paths, made of straight segments, and then let the size of the linear segment go to zero. A simplex is basically the simplest analog of a line segment.
(segment is determined by two points, triangle by three, tet by four...)

Nobody says that these piecewise linear paths actually exist in nature. One is interested in taking the limit of observables and transition amplitudes as the size goes to zero.
In the limit, piecewise linearity evaporates----the process is universal in the sense of not depending on the details of the scaffolding.
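For reference, the kind of time-slicing I have in mind is the standard one for a free particle in one dimension (just to show the pattern -- in the gravity case whole geometries play the role of the paths):

K(x_b,t_b;x_a,t_a) = \lim_{N\to\infty} \left(\frac{m}{2\pi i\hbar\epsilon}\right)^{N/2} \int dx_1 \cdots dx_{N-1}\, \exp\!\left[\frac{i}{\hbar}\sum_{j=0}^{N-1}\epsilon\,\frac{m}{2}\left(\frac{x_{j+1}-x_j}{\epsilon}\right)^{2}\right]

with \epsilon = (t_b - t_a)/N, x_0 = x_a, x_N = x_b. Each path is a chain of straight segments between the time slices, and the segment size goes to zero at the end, exactly as the simplex size does in the Triangulations case.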
 
Last edited:
  • #12
In case anyone is following the discussion and is curious, here is a Wikipedia article on the Path Integral formulation
http://en.wikipedia.org/wiki/Path_integral_formulation
Fredrik is, I expect, familiar with what is covered here. I find it interesting, and it helps give an idea of where the Triangulations approach to QG came from.
 
  • #13
Thanks for your feedback Marcus!

Here are some more brief comments, but I don't want to expand too much on my personal ideas that don't directly relate to CDT. First, it's not the point of the thread (only to put my comments in perspective), and I haven't matured my own thinking yet either.

marcus said:
Hawking still refers to his "Euclidean Path Integral" as the only sensible approach to quantum cosmology. In a sense the Utrecht Triangulations approach is an OUTGROWTH of the Feynman (1948) path integral or sum-over-histories, as further processed and applied to cosmology by Hawking. There was a period in the 1990s where many people were trying to make sense of Hawking's original idea by using dynamical triangulations. It didn't work until 1998, when Loll and Ambjorn had the idea to use CAUSAL dynamical triangulations.
So there is this organic historical development that roughly goes Feynman 1950, Hawking 1985, Utrecht group 2000, and then the breakthrough in 2004 where they got 4D spacetime to emerge.

From my point of view, what we are currently discussing is IMHO unavoidably overlapping also with the foundations of QM at a deeper level, beyond effective theories. The fact that the quest for QG, at least in my thinking, traces back to the foundations of QM is very interesting!

marcus said:
The gist of what you say in the above about motivation has, I recognize, to do with plausibility.

Yes, and as might be visible from my style of ramblings - but I'm aware that it's something I have yet to satisfactorily explain in detail - I seek something deeper than just "plausibility" in the everyday sense. The ultimate quantification of plausibility is actually something like a conditional probability. The logic of reasoning is closely related to subjective probabilities, and this is where I closely connect to physics, action principles and regularization.

(There are several people, though I think not too many, who are researching in this spirit.
Ariel Caticha, http://arxiv.org/abs/gr-qc/0301061, is one example. That particular paper does not relate to QG, but it presents some of the "spirit" I advocate. So the most interesting thing in that paper is IMO the guiding spirit - Ariel Caticha is relating the logic of reasoning to GR - and on this _general point_, though this is "in progress", I share his visions.)

Edit: See also http://arxiv.org/PS_cache/physics/pdf/0311/0311093v1.pdf, http://arxiv.org/PS_cache/math-ph/pdf/0008/0008018v1.pdf. Note - I don't share Ariel's way of arguing towards the "entropy measure", but the spirit of intent is still to my liking.

To gain intuition, I'd say that I associate the path integral - summing over virtual transformations/paths, weighted as per some "action" - with a microscopic version of rating different options. Now that's not to jump to the conclusion that particles are conscious; the abstraction lies at a learning and evolutionary level.

Whether there is, at some level of abstraction, a correspondence between the probability formalism and physical microstructures is a question that I know people disagree upon. My view is that subjective probabilities correspond to microstructures, but the probability for a certain event is not universal - it depends on who is evaluating the probability. In this same sense, I think the path integral construction is relative.

The same "flaw" is apparent in standard formulation of QM - we consider measurements in absurdum, but where is the measurement results retained? This is a problem indeed. The ideas of coherence suggest that the information is partly retained in the environment, this is part sensible, but then care should be taken when an observer not having access to the entire environment formulates questions.

This, IMO, relates in the Feynman formulation to the normalisation and convergence of the integral and the ambiguous way of interpreting it. It seems innocent to picture a "sum over geometries", but how to _count_ the geometries is non-trivial, at least as far as I understand, but I could be wrong.

marcus said:
I think that is what you refer to in (*) when you say "to see if the CDT procedure POSSIBLY can be interpreted as the environmentally selected microstructure of virtual paths?"

You could be more specific about what you mean by environmentally selected---perhaps one could say NATURAL. And you could specify what aspect(s) of their procedure you would like to decide about.

Natural might be another word, yes, but what I mean is somewhat analogous to quantum Darwinism by Zurek - http://en.wikipedia.org/wiki/Quantum_darwinism - but still not the same thing.

In my way of phrasing things, I'd say that the microstructure that constitutes the observer is clearly formed by interaction with the environment. And in this microstructure is also encoded (IMO) the logic of reasoning and also the "correspondence" to the rating system used in the path integral. Now, in one sense ANY rating system could be imagined, but it is equally plausible to imagine that the environment will favour a particular rating system. So the "selected" microstructure is in equilibrium with the environment at a very abstract level, though it may still be far from equilibrium at other levels.

I'm not sure that makes sense. But it's a hint, and let's not go more into that. It's my thinking, but I could certainly be way off the chart.

marcus said:
Loll often makes the point about UNIVERSALITY. Since they let the size go to zero, it ultimately doesn't matter what is the shape of the cells.

...

Nobody says that these piecewise linear paths actually exist in nature. One is interested in taking the limit of observables and transition amplitudes as the size goes to zero.
In the limit, piecewise linearity evaporates----the process is universal in the sense of not depending on the details of the scaffolding.

I have some reservations about this. I will try to get back to these points. Maybe I'll try to come up with an example/analogy to illustrate my point.

/Fredrik
 
Last edited by a moderator:
  • #14
equilibrium assumption

This brings to my mind an association from biology: as evolution in biology is pictured, organisms have evolved in a particular environment and are selected basically for their fitness for survival and reproduction in that particular environment. These same highly developed organisms may not be nearly as "fit" when put in a completely different environment. Then survival boils down to the ability to readapt to the new environment.

But evolution is still ongoing, organisms we see may not be perfectly evolved, and how is the measure "perfect" defined? This is not easy.

Some years ago I studied simulations of cellular metabolic networks, where one tried to simulate the behaviour of a cell culture. Instead of trying to simulate the cell from molecular dynamics, which would clearly be too complex, the attempt was to formulate the measure that the organism tries to optimise, relating to growth and survival, and then find the gene expression that would yield that behaviour; this was then compared to a real bacterial culture. From what I recall they found that initially the model and the real culture disagreed, but after a few generations of growth the gene expression in the real culture converged quite closely to the one found by the computer simulation.

Then in a sense, the selected microstructure is the environmentally selected "behaviour" encoded in the microstructure of the system. But this is only valid, then, under equilibrium assumptions at that level.

/Fredrik
 
  • #15
relative universality?

Marcus said:
Loll often makes the point about UNIVERSALITY. Since they let the size go to zero,
it ultimately doesn't matter what is the shape of the cells.
...
Nobody says that these piecewise linear paths actually exist in nature. One is
interested in taking the limit of observables and transition amplitudes as the size goes to zero. In the limit, piecewise linearity evaporates----the process is universal in the sense
of not depending on the details of the scaffolding.

Wouldn't the choice of uniform size of the cells matter - and they have chosen a uniform size? For example, optionally, why not choose smaller blocks where the curvature is higher, so as to increase the measure of highly curved geometries even in the continuum limit? Of course, don't ask me why one would do this; all I'm thinking is that the construction doesn't seem as universal or as innocent to me as they seem to suggest?

The first impression is that, appealing to the principle of indifference, the uniform choice is the given natural choice. This is what I think is deceptive, because uniform is a relative concept, and in my view this is related to the missing condition/observer to which this construct relates. Or am I missing something?

This type of reasoning is related to the previous discussion; it's very common in statistical mechanics. For example the Shannon entropy measure has an implicit condition of a background uniform prior, and usually this background is just put in by hand. Ultimately it means that even the concept of disorder is relative, and finding the perfect measure of disorder isn't easy.
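To be concrete about that implicit background prior (this is just the standard relative-entropy form, nothing original on my part): the Shannon entropy

H[p] = -\sum_i p_i \ln p_i

can be seen as a special case of the relative entropy

S[p|m] = -\sum_i p_i \ln \frac{p_i}{m_i},

which makes the background measure m_i explicit. For a uniform m_i it reduces to H[p] up to an additive constant, and changing m changes which distribution counts as "maximally disordered".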

Even going back to probability itself, the concept of absolute probability is quite fuzzy.
In my thinking these are unmeasurable quantities; only relative probabilities are measurable, and if we are reaching for what is measurable and what isn't, I think we also touch the foundations of science?

So my comment originates from this perspective, and it boils down to the philosophy of probability theory as well. I think reflection on these things unavoidably leads down there, and I see the essence of the difficulties as deeply rooted.

I don't know if Loll and her group ever reflected on this? It seems unlikely that they didn't, but maybe they never wrote about it. But if they at least commented on it, it might be that they have an idea about it; maybe they have some good arguments.

But as I read some of their papers, they point out that it's not at all obvious that their approach works; but the "proof is in the pudding" like you said, and I guess their main point is that with the computational handle they have, eating the pudding is not such a massive investment after all (as making more formal progress would be)? I can buy that reasoning! And I would curiously await their further simulation results.

Indeed, to formally resolve the issues I try to point out would take some effort too, but I just happen to rate it as less speculative, though I think even that judgement is relative.

In my previous attempt to explain my position, the logic of reasoning, put abstractly and applied to physical systems like a particle, is the same thing as the logic of interaction. The different choice of words is just to make the abstract associations clearer; thus my main personal, non-professional point of view is not that CDT is wrong, but that there seems to be a missing line of reasoning, and this translates to missing interaction terms.

Another point is that they start by assuming the Lorentzian signature. But if I remember right, in one of their papers they pointed out themselves that there _might_ be a more fundamental principle from which this is emergent. I don't remember which one; I skimmed several of their papers with some days in between. Their comparison to the Euclidean approach and the problem of weighting is interesting. This is the more interesting point to go deeper into, but I have to think more to comment on that... This I think may relate back to foundational QM, as to what the path integral really means and what the action really means in terms of deeper abstractions.

I'm sorry if I got too philosophical here. I hope not. If I missed something and jumped to conclusions please correct me; these are admittedly my initial impressions only.

/Fredrik
 
  • #16
Fra said:
Wouldn't the choice of uniform size of the cells matter - and they have chosen a uniform size? For example, optionally, why not choose smaller blocks where the curvature is higher, so as to increase the measure of highly curved geometries even in the continuum limit?

Of course I don't know what Ambjorn or Loll would say in reply. But I think one consideration is diffeomorphism invariance and the desire to avoid overcounting.

To the extent practical one wants that two different gluings---two different ways of sticking the triangles together---should correspond to two different geometries (not the same one with a reparametrization of coordinates or a smooth morphing).

Curvature, of course, is calculated at the edges or faces where the simplices fit together, combinatorially (Regge's idea). It is easiest to imagine in 2D when one reflects on the fact that one can have more than 6 equilateral triangles joined around a given point, or less than 6----curvature is related to the defect angle.
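To spell that out (standard Regge calculus, as I understand it): in 2D the curvature concentrated at a vertex v is measured by the defect (deficit) angle

\delta_v = 2\pi - \sum_{\text{triangles meeting at } v} \theta_v .

With equilateral triangles each angle is \pi/3, so exactly 6 triangles around a vertex gives \delta_v = 0 (flat), fewer than 6 gives positive curvature, and more than 6 gives negative curvature. In 4D the analogous thing happens at the two-dimensional "hinges" (the triangles shared by several 4-simplices).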

If two triangulations are essentially the same except that one has a patch of smaller triangles, then one is in danger of overcounting.

So that is something to remember---be aware of. I am not saying it is a flaw in your idea. I am not sure how your idea would be implemented, but it is something to keep in mind.

One wants to avoid duplicating----avoid counting the same (or diffeo-equivalent) geometry twice.
========================

Now in addition let us notice that we are not APPROXIMATING any given smooth geometry.

So it is not clear how your idea would be implemented. Where is there a place of more than usual curvature that deserves temporarily smaller triangles?
========================

Eventually, in the limit, the triangles (really simplices) get infinitely small anyway, so what possibility is being missed? It is hard to say how significant it is. Eventually one gets to the same thing as one would have had (just picture merging some of the small ones and leaving others of the small ones separate---to recover an earlier triangulation of the kind you, Fredrik, may have wished for).

========================

Remember that in the Feynman path integral the piecewise linear paths are very jagged. They have unphysically swift segments. They are NOT REALISTIC at all. No particle would take such paths. They are almost everywhere unsmooth, nondifferentiable. In the path integral method the objective is not to APPROXIMATE anything, but to sample infinite variability in a finite way. In the end, using a small sample, one wants to find the amplitude of getting from here to there. It is amazing to me that it works.

And similarly in the Triangulations gravity path integral, the finite triangulation geometries are VERY jagged----they are like no familiar smooth surfaces at all. They are crazy bent and crumpled. It is amazing that they ever average out to a smooth space with a well-defined uniform integral dimensionality.
 
  • #17
Fredrik, I have been looking back over your recent posts, in particular #13 which talks about the microstructure of spacetime.

I wonder if you read German,
because there is an interesting nontechnical article which Loll wrote for Spektrum der Wissenschaft that explains very well in words what her thinking is about this.
I would really like to know your reaction to this article, if German is not an obstacle.

I will get the link.
http://www.phys.uu.nl/~loll/Web/press/spektrumtotal.pdf

From it one can better understand the philosophical basis of their approach. At least it helps me understand; I hope it helps you also.
 
Last edited:
  • #18
marcus said:
Of course I don't know what Ambjorn or Loll would say in reply. But I think one consideration is diffeomorphism invariance and the desire to avoid overcounting.

To the extent practical one wants that two different gluings---two different ways of sticking the triangles together---should correspond to two different geometries (not the same one with a reparametrization of coordinates or a smooth morphing).

I understand this. My suggestion, relative to their suggestion would indeed result in "overcounting" to use their term. Loosely speaking it seems we agree that this is most certainly the case?

But my point is that to determine what is overcounting and what is not is highly non-trivial. The conclusion that my suggestion here would result in overcounting is based on the assumption that their uniform choice is the obviously correct one. Of course, if there were a non-uniform size that was correct, their ideas would result in _undercounting_.

So which view is the obviously right one? Am I overcounting or are they undercounting? :) In the question as posed by them, the appearance is that their choice is more natural, but I don't think it would be too hard to reformulate the question so that my choice would be more natural. In a certain way, I claim that the microstructure of the space of spacetimes is implicit in their construction.

This was my point. The analogous flaw exists in stat mech. You make up a microstructure, and most books would argue in the standard way that given no further information, the equiprobable assignment to each microstate is the correct counting. This is directly related to the choice of measure of disorder. And with different prior probability distributions or prior measures over the set of microstates, different measures of disorder typically result.

/Fredrik
 
  • #19
Thanks for that link Marcus. Swedish and German may be related, but unfortunately my German is very poor. I only know a few phrases, just enough to order beer and food where they don't speak English :)

/Fredrik
 
  • #20
Fra said:
I understand this. My suggestion, relative to their suggestion would indeed result in "overcounting" to use their term. Loosely speaking it seems we agree that this is most certainly the case?
...

Actually that was not what I meant, Fredrik. I did not mean that your suggestion would result in overcounting relative to theirs, I was suggesting that it might (I am still not sure how it would work) result in overcounting in an absolute sense.

To review the basic general covariance idea of GR, the gravitational field is not a metric, but is rather an equivalence class of metrics under diffeomorphism. Two metrics are equivalent if one can be morphed into the other (along with their accompanying matter, if they have). So imagine we are integrating over a sample of spacetime geometries----if two are equivalent we do not want them both.
If our regularization is forever bringing in equivalent versions of the same thing, it is to that extent absolutely wrong.
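In symbols, the way I understand it: the physical gravitational field is the equivalence class

[g] = \{\, \phi^* g \;:\; \phi \in \mathrm{Diff}(M) \,\},

so a sum over geometries should count each such class once, not once per representative metric.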

I do not really understand your idea, so I cannot say. But I suspect that if I did understand, then even if it were the only method in the world (and Ambjorn-Loll had not been invented and was not there to compare), I could look at it and say "this absolutely overcounts", because it would be counting as different many geometries that could be morphed into each other and are not really different. Again I must stress that I cannot say for sure, not having a clear idea yet.

This suspicion of mine is vague at best! If you want, please explain your idea in a little more detail. When and where would you start using smaller simplices?
What you said earlier was to do that where there is more curvature. But there is no underlying metric manifold. We start the process with no curvature defined at all, anywhere. No curvature in the usual differential geometry sense is ever defined. In piecewise linear geometry the curvature is zero almost everywhere ("except on a set of measure zero", as one says in measure theory).

That is because the interiors of all the simplices are FLAT---everywhere and throughout the process. So in your scheme, where do you start using smaller simplices?

The trouble is, I do not understand you to have a well-defined regularization algorithm. Although if you could somehow define an algorithm then my suspicion is that it would tend to overcount in an absolute sense.

Probably, as you said, it would also overcount relative to the Ambjorn-Loll procedure.

But this is all just my hunch since I don't understand in actual practice what you mean by using smaller simplices where the curvature is greater.

BTW your command of English is great (I'm sure you don't need me to say :smile:) but it is a pity you don't read German because this Loll piece in Spektrum der Wissenschaft (SdW) gives clear easy explanations and a lot of motivation for what they do.

I wonder if I should try to translate? What do you think?
 
Last edited:
  • #21
Something I find interesting in almost all your posts in this thread is your continuing focus on the question of spacetime microstructure.
That is something that Loll and the other Utrecht people also think a lot about.
More so, perhaps, than in any of the other QG approaches.

Something I find good about them is that they do not define it at the outset; they do not even specify a dimensionality (like 2D or 3D or 4D)
they let the foam define itself
and the dimensionality at any given location becomes a quantum observable (dependent also on the scale----how microscopic one is observing at).

I don't know any other approach that goes this far---
 
  • #22
quick

marcus said:
Actually that was not what I meant, Fredrik. I did not mean that your suggestion would result in overcounting relative to theirs, I was suggesting that it might (I am still not sure how it would work) result in overcounting in an absolute sense.

That's even better, let's see if I understand.

Then, it must mean that you have an absolute, universal measure of the space of geometries to compare with - your "absolute measure"?

How did you come up with this measure? And is this way of "coming up with" unambiguous?

A problem to start with is how to define a universal measure on the space of geometries. To define SOME measure isn't hard; isn't one problem that there are many ways to define the measure?

To define the measure on the space of geometries - for "fair sampling" - is pretty much the same thing as the counting we do here; the idea is that the continuous measure is the limiting case of the counting.

marcus said:
To review the basic general covariance idea of GR, the gravitational field is not a metric, but is rather an equivalence class of metrics under diffeomorphism. Two metrics are equivalent if one can be morphed into the other (along with their accompanying matter, if they have). So imagine we are integrating over a sample of spacetime geometries----if two are equivalent we do not want them both.
If our regularization is forever bringing in equivalent versions of the same thing, it is to that extent absolutely wrong.

The curvature was just an example, but since the curvature is intrinsic to the geometry and independent of the coordinate system, I suggested going for the curvature at the edges, so as to make the construction avoid excessive "sharpness" at the gluing. This will not count the SAME geometry twice as I see it; it will however increase the measure of the "density of degrees of freedom" in the space of geometries where the curvature is higher.

I could probably "make up" an argument for this, but that isn't my point, I was trying to come up with a simple conceptual example that questions their claimed "universality".

Does this change anything? If not, I have to think of another way of explaining.

marcus said:
I wonder if I should try to translate? What do you think?

Very kind of you but I wouldn't want to ask you to do that, it's too many pages! I have a feeling at this point that I failed to make my point clear. Or I am missing something in yours. Maybe we can sort it out.

Or maybe someone else reading this can see why I am missing Marcus's point, or why I fail to communicate my point to Marcus, and can add a third angle to the discussion.

/Fredrik
 
  • #23
marcus said:
Something I find interesting in almost all your posts in this thread is your continuing focus on the question of spacetime microstructure.
That is something that Loll and the other Utrecht people also think a lot about.

I have some personal ideas on that, which is why I keep getting back to it. But going into that now would be both blurry and off topic (having nothing specifically to do with CDT), and they are currently very abstract. I suspect my vision of microstructure is different from what they mean.

My personally preferred basic starting point has discarded a lot of the building blocks of Loll's group as baggage - this is another reason why I personally see the CDT approach as both speculative and still semiclassical, and one that still doesn't address all fundamental questions.

If I am not mistaken, Smolin elaborated on this in some of his papers, that there are different degrees of "background independence". I look for the stronger case of background independence. This means that I can't accept manually imposed universal background prior measures on the space of spaces or geometries unless there is a physical principle from which they are emergent. I think this can be done, however, but I am not aware that it _has_ been done yet. And many research programs don't even have the ambition to do this.

In short and abstractly, my idea of emergent microstructures of spaces, and spaces of spaces of spaces (iteratively), is to define a hierarchy in terms of information processing and selective retention of history constrained by the observer's information capacity, but this information capacity is also dynamic, since the observer itself also evolves. This is really complicated, and I am in part looking for a new fundamental formalism that matches my intuitive view. Maybe I will fail, but I have no better choice than to do my best, whatever the outcome is.

So my meaning of microstructure contains not only the possibilities, but also the action formulations and rating system. So in my thinking dynamics and states are mixed and not cleanly separated conceptually due to the ongoing feedback.

That's something like my personal ideas. As you see this is not related to CDT, so I won't expand. But in many research programs one sees similarities; this is interesting, and I like to learn from other ideas.

/Fredrik
 
  • #24
Fra said:
...
Then, it must mean that you have an absolute, universal measure of the space of geometries...
Not at all, Fredrik. I have no measure on the space of geometries. That is a very big hairy space.
I merely suspect that the regularization you are proposing would count the SAME geometry multiple times. This is absolutely to be avoided :biggrin:

But without some definite regularization procedure I can't tell for sure.

The curvature was just an example, but since curvature is intrinsic to the geometry and invariant under changes of coordinates,

What curvature do you mean? What coordinate system?
In the Ambjorn Loll approach there is no coordinate system one could use to compute curvature. There is no prior geometry. There is no prior curvature.

Once one HAS a piecewise linear manifold---a triangulation (not a triangulation OF any prior shape, it is just a triangulation), then the most urgent thing on the agenda is to SHUFFLE it----to randomize it by making local substitutions, locally reconnecting the simplices----replacing 2 adjoining ones by 3, or 3 adjoining ones by 2, etc.

They completely change the geometry in a random fashion, in this way, and get a new triangulation. Then they do that repeatedly a million times. Only then do they have the first random triangulation! And they wish to study an ensemble of these, so one is not enough-----another passage of a million shuffles occurs, and another...

I think you can see that it would be rather inconvenient if you were to round off the edges, and I see no benefit to be gained. Feynman, in his path integral, did not round off the corners. He used very jagged paths.
One lets the size of the segments go to zero eventually anyway, and one is not particularly aiming at smoothness. So rounding just adds mathematical complexity, or wastes computer memory and time, to no purpose. As I see it.

I'm glad you are interested in the regularization process that goes into their Monte Carlo method, and trust that you are also thinking about the randomization---the shuffling of the geometry, like a deck of cards. This is where the Einstein-Hilbert action takes hold, because different random 'moves' are given different probabilities of happening by the computer program.
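To make that concrete in the crudest possible way, here is a toy sketch in Python (my own illustration, NOT the Utrecht code: the "geometry" is shrunk down to a list of spatial volumes, one per time slice, and the action is made up). The real program does the analogous thing with specific local moves on causal triangulations, weighted by the Regge form of the Einstein-Hilbert action, but the Metropolis logic is the same kind of thing: propose a local move, then accept it with probability min(1, e^-ΔS).

[code]
import math, random

# Toy Metropolis sketch (schematic only, not the actual CDT algorithm).
# The "geometry" is reduced to a list N of spatial volumes, one per time slice.
# A "move" adds or removes one building block at a random slice and is accepted
# with probability min(1, exp(-dS)), where S is a made-up discrete action that
# stands in for the Wick-rotated Regge/Einstein-Hilbert action.

def toy_action(N, kappa=0.1):
    # hypothetical action: penalizes big jumps between neighbouring slices
    return kappa * sum((N[t + 1] - N[t]) ** 2 / max(N[t], 1)
                       for t in range(len(N) - 1))

def sweep(N, n_moves=10000):
    for _ in range(n_moves):
        t = random.randrange(len(N))
        delta = random.choice([-1, +1])      # remove or add one block
        if N[t] + delta < 1:                 # keep every slice non-empty
            continue
        old = toy_action(N)
        N[t] += delta
        dS = toy_action(N) - old
        if dS > 0 and random.random() > math.exp(-dS):
            N[t] -= delta                    # reject the move: undo it
    return N

profile = sweep([10] * 40)                   # 40 time slices, 10 blocks each
print(profile)
[/code]

Run a few sweeps of this and the volume profile wanders around in a way that is biased, but not dictated, by the toy action. That is the sense in which the action "takes hold" of the shuffling.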

I was trying to come up with a simple conceptual example that questions their claimed "universality".

Ah! Now I understand better what you were aiming at. Trying to find counter-examples is a constructive effort. Let's look at their claim, on page 6 where they use that word and let's see what they mean by it

==quote==
...
Finally, we must make an important point regarding the status of this regularized framework. We do not identify the characteristic edge length a of the simplicial set-up with a minimal, discrete fundamental length scale (equal to the Planck length, say). Rather, we study the path integral Z in the limit as a → 0, N → ∞, which means that individual building blocks are completely shrunk away. In taking this limit, we look for scaling behaviour of physical quantities indicating the presence of a well-defined continuum limit. By construction, if such a limit exists, the resulting continuum theory will not depend on many of the arbitrarily chosen regularization details, for example, the precise geometry of the building blocks and the details of the gluing rules. This implies a certain robustness of Planck-scale physics, as a consequence of the property of universality, familiar from statistical mechanical systems at a critical point. By contrast, in models for quantum gravity which by postulate or construction are based on some discrete structures at the Planck scale (spatial Wilson loops, causal sets etc.), which are regarded as fundamental, the Planck-scale dynamics will generically depend on all the details of how this is done,...
==endquote==

Now we are getting down to the nitty-gritty, so to speak. This is what they actually said, in the recent paper, and we want to test this claim and see how credible it is. I hope to continue with this tomorrow. Must get to bed now.
 
Last edited:
  • #25
Fra said:
Then, it must mean that you have an absolute, universal measure of the space of geometries to compare with - your "absolute measure"?

This is the kind of background prior measure I refer to.

I think the selection, emergence or choice of this particular absolute measure you seem to refer to (unless I missed something else, of course) must be justified.

I get the impression that you seem to find this absolute measure self-evident? It is not a background _metric_, but it is still a background structure (a microstructure, that is). This is what I meant by environmentally selected: _maybe_ one could, as a semi-attempt, imagine that this measure has been environmentally selected, but then I would at minimum like to see a mechanism for its revision as per some principle - which in essence suggests that even this background prior measure is emergent from an _even_ larger space.

This just gets more twisted and complex, but my idea of regularization here is that ALL of this "lives" in the observer's microstructure. So if the observer's information capacity is bounded, this will automatically assign low weight (high action) to these obsessive "options" - so in effect these huge spaces are never realized.

Edit: I just saw your above post. I will read it and try to think again to see why we circle this.

/Fredrik
 
Last edited:
  • #26
I got another idea. Maybe the point is this.

You may not have the universal measure of geometries yet, but you and Loll think that it exists, and the task is to find it? Does that sound close?

If so, to rephrase my position differently, what I suggest is that this universal measure doesn't exist. And in the sense I mean it, it's not that a measure can't be constructed, but rather that it can't be universal - there is no way to define a universal measure without speculation.

That may sound strange, but could that be why we disagree? I didn't think of this at first, but it occurred to me just now.

What do you think? (Setting aside who is "right", I'm trying to understand why we keep circling this.)

/Fredrik
 
  • #27
Just to comment on this remark of yours in relation to my view...

marcus said:
Something I find interesting in almost all your posts in this thread is your continuing focus on the question of spacetime microstructure; that is something that Loll and the other Utrecht people also think a lot about.

combined with

"By contrast, in models for quantum gravity which by postulate or construction are based on some discrete structures at the Planck scale (spatial Wilson loops, causal sets etc.), which are regarded as fundamental, the Planck-scale dynamics will generically depend on all the details of how this is done,..."

-- from "The Emergence of Spacetime or Quantum Gravity on Your Desktop"

I suspected that you were thinking that I had my own favourite "background microstructure" in my back pocket, but I don't :) While it's an important concept, I agree completely with Loll et al. here. I envision the microstructure as emergent, and the concept of discreteness not as literal; it lives more at the level of distinguishability. I see some possible ways to unite a discrete view with the continuum view, depending on how you see it. My desire is definitely to release as much implicit prior structure as possible and instead argue why specific structure still emerges.

/Fredrik
 
  • #28
Fra said:
My desire is definitely to release as much implicit prior structure as possible and instead argue why specific structure still emerges.

From your other posts here and there I suspect that we more or less fully share this view, even though opinions may differ on exactly how much prior structure one can start without?

To implement the above goal, focusing on the "physics of rating" (which I associate with probability) and the "physics of speculation" (which I associate with "actions") is right on. This brings us back to the physics and logic of probability/ratings and speculations. This is why I use these seemingly untraditional words all the time. I don't find the standard measures for this quite satisfactory, because they themselves are part of the baggage.

I think that instead of trying to make up funny wiggling things based on the old framework, we might need to go back and revise the fundamentals of our tools - the theory of measurement and probability, for example, and the treatment of information. This even takes us back to the roots of statistical mechanics, and to doing away with imaginary ensembles and other (IMO) highly fictive background structures.

/Fredrik
 
  • #29
Fra, I like your posts #26-28 very much. A lot to think about.
I will reply more fully later in the day when I've had a chance to reflect.

BTW one thing I don't think we have mentioned explicitly (although it is always present implicitly) is the classical principle of least action. As an intuitive crutch (not a formal definition! :biggrin: ) I tend to think of this as the principle of the "laziness of Nature" and I tend to think of "action" in this mathematical context as really meaning bother, or trouble, or awkward inconvenience. So it is really (in an intuitive way) the principle of least bother, or the principle of least trouble.

What Feynman seems to have done, if one trivializes it in the extreme, is just to put the square root of minus unity, the imaginary number i, in front of the bother.

In that way, paths (whether thru ordinary space or thru the space of geometries) which involve a lot of bother cause the exponential quantity to WHIRL AROUND the origin so that they add up to almost nothing. The rapidly changing phase angle causes them to cancel out.

It is that thing about
[tex]e^{iA}[/tex]
versus
[tex]e^{-A}[/tex]

where A is the bother. The former expression favors cases with small A because when you get out into large A territory the exponential whirls around the origin rapidly and cancels out. The latter expression favors cases with small A in a more ordinary mundane way, simply because it gets exponentially smaller as A increases.
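
If it helps, here is a tiny numerical illustration of that whirling (my own toy, nothing taken from the papers): take a made-up action A(x) = 40 x^2 and add up the contributions from a window of configurations x near the least-action point and from a window far away from it.

[code]
import numpy as np

# Toy illustration of the "whirling" intuition.  With a rapidly growing action
# A(x) = 40 x^2, configurations far from the least-action point contribute
# e^{iA} phases that spin around the origin and largely cancel, while e^{-A}
# simply damps them away.

def contribution(weight, x):
    """Riemann-sum the weight of A(x) over a window of configurations x."""
    A = 40.0 * x**2                 # the "bother" of configuration x
    dx = x[1] - x[0]
    return np.sum(weight(A)) * dx

near = np.linspace(-0.5, 0.5, 20001)   # around the stationary (least-action) point
far  = np.linspace(1.0, 2.0, 20001)    # far from it: large, fast-changing A

for name, window in [("near minimum", near), ("far from minimum", far)]:
    osc  = contribution(lambda A: np.exp(1j * A), window)   # e^{iA} contributions
    damp = contribution(lambda A: np.exp(-A), window)       # e^{-A} contributions
    print(f"{name:>16}:  |sum e^(iA)| = {abs(osc):.4f}   sum e^(-A) = {damp:.2e}")
[/code]

Near the minimum the phase hardly turns, so the e^{iA} pieces add up; far from it the phase runs through many cycles across the window and the sum nearly cancels, while e^{-A} is just exponentially tiny there.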

This is the naive intuition with which I approach equations (1) and (2) on page 4 of the main paper we are looking at (the "Quantum Gravity on Your Desktop" paper).

I want to share my crude intuitive perception of these things because that's really how I think most of the time.
If you have a different way of approaching equations (1) and (2) you could let me know---it might be interesting for me to look at it in a different light.

Anyway, for me the term MEASURE, when I use it in technical discussion, is associated with the mathematical field of Measure Theory (on which Probability Theory is based, but also a lot of other math as well). There would be a Wikipedia article. And in order to define a measure (in that sense) you need to define a SET, and a collection of subsets that are going to be measurable----then you satisfy certain axioms. I think this is familiar to both of us.
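
For the record, the axioms I have in mind are just the usual textbook ones: a sigma-algebra Σ of measurable subsets of an underlying set X, and a set function

[tex]
\mu : \Sigma \to [0,\infty], \qquad \mu(\emptyset)=0, \qquad \mu\Big(\bigcup_{n=1}^{\infty} A_n\Big) = \sum_{n=1}^{\infty} \mu(A_n)
[/tex]

for any countable collection of pairwise disjoint measurable sets A_n. Nothing exotic, but you have to know the set X before you can even start.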

What I find delightful, one of the many things actually, about Loll's Monte Carlo approach is that it never defines the SET! Or the collection of measurable subsets. Or any measure, in the sense I mean----in the sense of Measure Theory.

I suppose you could say that it defines a "generalized" measure on the set of all paths thru geometry-land. A "generalized" measure that is pragmatically defined by the Monte Carlo procedure of "shuffling the deck of cards" a million times and then looking at the next spacetime. A certain kind of random walk, or diffusion process, has replaced formal measure theory.

A lightning bolt hits the Gordian knot and the damn knot goes up in smoke. I love it.

Also look at the very first paragraph, page 2, where it says that the very idea of geometry itself must be suitably generalized essentially because at small scale it must be expected to be so chaotic (by Heisenberg fluctuation) that ordinary metric ideas of differential geometry will not apply. And not only has it been EXPECTED for 50 years to be too chaotic at small scale for classic geometry, but also when the Utrecht people got down to that level in their model that is how they actually FOUND it, chaotic and even of a non-classic dimensionality. So there are still some metric properties, but the classic smoothness goes away and the idea of geometry is suitably generalized.

So then the space of paths thru geometries, evolutions of the universe, or whatever you will call it, becomes much, much more complicated. Because it is no longer paths from one classic geometry to another. It is paths thru a land of generalized geometries which have not yet been studied by mathematicians!----where even the dimensionality can be less at short range and the small-scale structure can resemble fractal, or foamy, turmoil.

They could never have defined a formal measure on this space of paths, because they could not mathematically define even the underlying set of suitably-generalized geometries. It is an unknown to us, so far.

They are using Monte Carlo to take a random walk in the unknown. To me that is beautiful. You see why, I think.

Anyway I am already thinking about your posts especially #26, and will respond some later after reflecting more on what you said.
 
Last edited:
  • #30
Fra said:
I suspected that you were thinking that I had my own favourite "background microstructure" in my back pocket, but I don't :) While it's an important concept, I agree completely with Loll et al. here. I envision the microstructure as emergent, and the concept of discreteness not as literal,...

Here is the first paragraph of the second article I gave a link to (Planckian...de Sitter Universe):
==quote==
To show that the physical spacetime surrounding us can be derived from some fundamental, quantum-dynamical principle is one of the holy grails of theoretical physics. The fact that this goal has been eluding us for the better part of the last half century could be taken as an indication that we have not as yet gone far enough in postulating new, exotic ingredients and inventing radically new construction principles governing physics at the relevant, ultra-high Planckian energy scale. – In this letter, we add to previous evidence that such a conclusion may be premature.
==endquote==

I am gradually constructing a kind of dictionary: you say BACKGROUND MICROSTRUCTURE, and the Ambjorn/Loll papers often say fundamental dynamical degrees of freedom of spacetime. One Utrecht Triangulations paper began by stating simply that "the goal of nonperturbative quantum gravity is to discover the fundamental dynamical degrees of freedom of spacetime", or words to that effect.

That is how they see their job. And it is not too unlike what you are saying about the quest to determine the "background microstructure" that interests you.

In what I just quoted they use a different phrase: fundamental quantum dynamical principle. I think the aim remains the same and they just use different words.
==================
In this second paper they present a new piece of evidence that they are on the right track------they make lots and lots of random universes, and then a giant superposition of all these universes, and they discover it is S4 (the four-sphere): roughly speaking, the "Wick rotation" of the usual de Sitter space. They are often going back and forth between the Lorentzian version and the Euclidean one, substituting imaginary time for real time and back again.
In doing the Monte Carlo runs, they "Wick rotate" in this sense so that complex amplitudes become real probabilities. Only then can they do a random walk, in effect tossing dice or coins to decide which modifications of geometry to do.
That part can be a bit confusing. Anyway, according to them, what they got (S4) is the Euclidean version of the right thing, namely de Sitter space. So it is the right thing, and it is part of the program "To show that the physical spacetime surrounding us can be derived from some fundamental, quantum-dynamical principle."
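
Schematically, and in my own notation rather than theirs, the Wick rotation takes the regularized sum over causal triangulations T from a phase to a real Boltzmann weight (C_T being just the symmetry factor of the triangulation):

[tex]
Z = \sum_{T} \frac{1}{C_T}\, e^{\,i S^{\mathrm{Lorentzian}}(T)} \quad\longrightarrow\quad Z^{\mathrm{Euclidean}} = \sum_{T} \frac{1}{C_T}\, e^{\,-S^{\mathrm{Euclidean}}(T)}
[/tex]

It is the real weights on the right that the Monte Carlo random walk can actually sample.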

===================
This then strengthens the argument, which must be familiar to you from the first paper, that it is PREMATURE to resort to exotic and newfangled structures to represent the microscopic degrees of freedom. Like you said earlier, we do not have to resort to "funny wiggling things". Not YET anyway, because a simple nonperturbative path integral appears to be working.

In case anyone wants links to the two main papers, here they are
http://arxiv.org/abs/0711.0273 (21 pages)
The Emergence of Spacetime, or, Quantum Gravity on Your Desktop

http://arxiv.org/abs/0712.2485 (10 pages)
Planckian Birth of the Quantum de Sitter Universe
 
Last edited:
  • #31
Marcus, now many interesting questions are exposed! I feel a little frustrated by the time constraints on elaborating my responses. I also feel that I could easily digress into elaborating my personal ideas here... in particular in reflecting on the action principles, and that is probably not advisable because the discussion would diverge.

I need to think about how to respond to all this in order to keep some focus here :) Later... I'm a little tight on time to think and write the actual responses.

I think you raise many interesting things here... and my first problem is to decide how to schedule my resources in responding. More specifically, I need to regularize my actions here ;)

please hang on...

/Fredrik
 
  • #32
Action, entropy and probability - measures (fuzzy reflections)

We have multiplied our focuses now, so I'll try to comment in bits. I decided to try to be brief, or as brief as possible, and I appeal to your intuition.

The following are my personal reflections upon your reflections, so to speak.

I guess what I am aiming at is that the concepts of entropy, probability and action are, in my thinking, very closely related measures. But since I am partly still working out my own thinking, and due to my regularized response, I am brief, in order to just hint at the thinking without attempting to explain or argue in detail.

marcus said:
BTW one thing I don't think we have mentioned explicitly (although it is always present implicitly) is the classical principle of least action. As an intuitive crutch (not a formal definition! :biggrin: ) I tend to think of this as the principle of the "laziness of Nature" and I tend to think of "action" in this mathematical context as really meaning bother, or trouble, or awkward inconvenience. So it is really (in an intuitive way) the principle of least bother, or the principle of least trouble.

What Feynman seems to have done, if one trivializes it in the extreme, is just to put the square root of minus unity, the imaginary number i, in front of the bother.

In that way, paths (whether thru ordinary space or thru the space of geometries) which involve a lot of bother cause the exponential quantity to WHIRL AROUND the origin so that they add up to almost nothing. The rapidly changing phase angle causes them to cancel out.

It is that thing about
[tex]e^{iA}[/tex]
versus
[tex]e^{-A}[/tex]

where A is the bother. The former expression favors cases with small A because when you get out into large A territory the exponential whirls around the origin rapidly and cancels out. The latter expression favors cases with small A in a more ordinary mundane way, simply because it gets exponentially smaller as A increases.

I analyse this coming from a particular line of reasoning, so I'm not sure if it makes sense to you, but here goes.

Some "free associative ramblings"... Equilibrium can be static or dynamic, i.e. equilibrium can be a state or a state of motion. How do we measure equilibrium? Entropy? Entropy of a state vs entropy of a state of motion? Now think away time... we have not defined time yet... instead picture change abstractly, without reference to a clock, as something like "uncertainty", and there is a concept similar to random walks. Now this random walk tends to be self-organized, and eventually a distinguishable preferred path is formed. It is formed by structures forming in the observer's microstructure. Once expectations of this path are formed, it can be parametrized by expected relative changes. Of course, similarly, preferred paths into the unknown are responsible for forming space! I THINK that you like the sound of this, and in this respect I share some of the visions behind the CDT project. But I envision doing away with even more baggage than they do. Of course I have not completed anything yet, but given the progress that others have accomplished over the last 40 years with a lot of funding, I see no reason whatsoever to excuse myself at this point...

Simplified: in thermodynamics, a typical dynamics we see is simple diffusion. The system approaches the equilibrium state (macrostate) basically by a random walk from low to high entropy, or at least that is what one would EXPECT. In effect I see entropy as nothing but a measure of the prior preference for certain microstates, which is just another measure of the prior probability of finding the microstructure in a particular distinguishable microstate.

Traditionally the entropy measure is defined by a number of additional requirements that some feel are plausible. There are also the axioms of Cox, which some people like. I personally find this somewhat ambiguous, and think it's more useful to work directly with the "probability over the microstates". The proper justification of a particular choice of entropy measure is IMO more or less physically equivalent to choosing measures for the microstates. It's just that the latter feels cleaner to me.

I've come to the conclusion that to predict things, one needs to do two things. First, try to find the plausibility (probability) of a transition, i.e. given a state, what is the probability that this state will be found in another state? Then, if one considers the concept of a history, one may also try to parametrize the history of changes. What is a natural measure of this? Some kind of time measure? Maybe a relative transition probability?


I think there is a close connection between the concept of entropy and the concept of prior transition probability. And when another complication is added, there is a close relation between the action and the transition probabilities.

I find it illustrative to take a simple classical example of how various entropy measures relate to transition probabilities.

Consider the following trivial but still somewhat illustrative scenario:

An observer can distinguish k external states. From his history and memory record, he defines a prior probability over the set of distinguishable states, given by the relative frequencies in the memory record. We can think of this memory structure as defining the observer.

Now he may ask, what is the probability that he will draw n samples according to a particular frequency distribution?

This is the case of the multinomial distribution,

[tex]
P(\rho_i,n,k|\rho_{i,prior},k) = n! \frac{ \prod_{i=1..k} \rho_{i,prior}^{(n\rho_{i})} }{ \prod_{i=1..k} (n\rho_{i})! }
[/tex]

Now the interesting thing is that we can interpret this as the probability (in the space of distributions) of seeing a transition from a prior probability to a new probability. And this transition probability is seen to be related to the relative entropy of the probability distributions. This is just an example, so I'll leave out the details and just claim that one can find that

[tex]
P(\rho_i,n,k|\rho_{i,prior},k)= w e^{-S_{KL}}
[/tex]

Where
[tex]
w = \left\{ n! \frac{ \prod_{i=1..k} \rho_{i}^{(n\rho_{i})} }{ \prod_{i=1..k} (n\rho_{i})! } \right\}
[/tex]

[tex]S_{KL}[/tex] is the "relative entropy", also called the Kullback-Leibler divergence or information divergence. It is usually considered a measure of the missing relative information between two states. The association here is that the more relative information is missing, the more unlikely the transition is to be observed. The other association is that the most likely transition is the one that minimizes the information divergence; this smells like action thinking.

w can be interpreted as the confidence in the final state: w -> 1 as the confidence goes to infinity. The only thing this does is hint at the principal relation between the probability of a probability, and the entropy of the space of spaces, etc. It's an inductive hierarchy.

n is the number of counts, loosely associated with "inertia" or information content, or with the number of distinguishable microstates. Strictly speaking this is unclear, but let it be an artistic image of a vision at this point :)
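
If anyone wants to check the identity numerically, here is a small Python snippet (just my own toy check; I read S_KL above as n times the usual Kullback-Leibler sum over the k states, so that the relation is exact whenever the counts n*rho_i are whole numbers):

[code]
import math

# Check of the identity P = w * exp(-S_KL) for the multinomial example above,
# reading S_KL as n times the Kullback-Leibler divergence between the observed
# frequencies rho and the prior frequencies rho_prior.

def multinomial(counts, probs):
    n = sum(counts)
    p = math.factorial(n)
    for c, q in zip(counts, probs):
        p *= q ** c / math.factorial(c)
    return p

counts = [3, 5, 2]                     # observed counts n*rho_i, so n = 10, k = 3
n = sum(counts)
rho = [c / n for c in counts]          # observed frequency distribution
rho_prior = [0.2, 0.5, 0.3]            # prior distribution from the "memory record"

P = multinomial(counts, rho_prior)     # probability of this transition
w = multinomial(counts, rho)           # the "confidence" factor
S_KL = n * sum(r * math.log(r / rp) for r, rp in zip(rho, rho_prior))

print(P, w * math.exp(-S_KL))          # both print the same number
[/code]

With these numbers the two expressions come out equal, which is just the statement that the transition probability factorizes into a confidence factor times e^{-S_KL}.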

One difference between thermodynamics and classical dynamical systems is that in thermodynamics the equilibrium state is usually a fixed macrostate point, while in dynamical systems the equilibrium is often, say, an orbit or a steady-state dynamical pattern. It should be conceptually clear here how the notion of entropy generalizes.

So far this is "classical information" and just loose associative inspiring reflection.

One should also note that the above is only valid given the prior distribution, and this is emergent from the history and memory of the observer: the probability is relative to the observer's history - so it's a conditional probability first of all. Second, the probability is updated gradually, so strictly speaking the entropy formula only makes sense in the differential sense, since technically after each new sample the actions are updated!

So what about QM? What is the generalization to QM, and how do the complex action and amplitudes enter the picture? I am working on this and I don't have the answer yet! But the idea, which for various reasons I think will work, is that the trick for making QM consistent with this is to consider that the retention of the information, stored in the observer's microstructure, may be done in different and generally unknown ways - because the microstructure of an observer could be anything. The question is mainly: what are the typical microstructures we encounter in nature? For example, are there some plausibility arguments as to why the elementary particles have the properties they have?

The idea is that there are internal equilibration processes going on in parallel with the external interactions; in a certain sense I make an association here to data compression and _learning_. Given any microstructure, I am looking to RATE different transformations of the whole and of parts of the microstructure. Now we are closing in on something that might look like actions. The actions are associated with transformations, and each transformation has a prior probability that is updated all along as part of the evolution and state change. In a certain sense the state changes may be considered the superficial changes, while the evolutionary changes of the microstructure and action ratings are a condensed form that evolves more slowly.

I can't do this yet, but if I am right, and given time, I think I will be able to show how and why these transformations give rise to something that can be written in terms of complex amplitudes. It basically has to do with the retention of conflicting information, which implies a nontrivial dynamics beyond that.

What is more, the size of the memory record (the complexity of the microstructure) is clearly related to the confidence in the expectations, because loosely speaking we know from basic statistics that the confidence level increases as the data size increases - this, in my thinking, is related to proto-ideas of inertia. Note that I am not going to just assume that there is a connection; my plan is to show that the information capacity itself possesses inertia! This also contains the potential to derive some gravity action from first principles. The complex part is exactly that all of these things we keep circling really are connected. I'm trying to structure it, and eventually some kind of computer simulation is also in my plan.

I wrote these reflections in a few sittings, and there is probably no coherent line of reasoning that builds up the complexity here, but this is my reflection on the action stuff. This can be made quite deep, and it's something I am processing on an ongoing basis.

/Fredrik
 
Last edited:
  • #33
Note that the simple example I gave is not best viewed as a one-dimensional thing. No, it's best viewed simply as an abstract "indexing" of possible distinguishable microstates. So the emergent dimensionality could of course be anything. These could be microstates of spaces as well. This is what I meant with the hierarchy note.

Rather than introducing non-physical embeddings and continua, instead picture an indexing of distinguishable microstates.

Edit: I can't stop once I get going :( Anyway... to add to the above... in the total construction, the point is that the inertia is distributed over the hierarchies; this is why the inertia of embedded objects is entangled with the inertia of space itself. Conceptually, that is. Of course the proof is missing. Anyway, I'll stop now!

/Fredrik
 
Last edited:
  • #34
measures and baggage

General note on the forum problem: I can't get in the front door here at PF, and haven't been able to for at least the last day. Then I found the back door. I suspect others have similar problems... Here I continue where I was.
---------

Your comment about measures, terminology and their relation to "random walks" is interesting; here's a little of how I see it.

(Just to confirm that we use the words similarly: measure in the mathematical sense is the one from measure theory, so I think there are no misunderstandings here.)

They are using Monte Carlo to take a random walk in the unknown. To me that is beautiful. You see why, I think.

I think we are getting closer to resolving our standpoints.

Intuitively this is very appealing to me, no doubt. I especially dig that you use the word unknown :)

And perhaps we share the same idea of what this means - I don't know yet, but I suspect not quite? But if we are talking about the same thing, I see a deep beauty indeed!

However, whether the CDT conceptual framework is consistent with my meaning of random walk is unclear to me; my first impression is that it's not quite there yet, and I'll try to explain why.

For me a "random walk in the unknown" is naturally complemented by a strategy of learning. This gets philosophical, but now we are close enough that I think you might get the idea anyway.

How can a random walk predict non-trivial and evolutionary dynamics? They seem to complement the random walk with an ad hoc strategy: the old framework of path integrals and the old actions. No question these frameworks have proven effective so far, but in a construction of this depth, that is not good enough IMO. Because, as I tried to convey in the two previous posts, I see, from an information-theoretic basis, a coupling between change and states, between entropy and action. And the reasoning is inductive, and this inductive reasoning is what I picture to be part of the conceptual explanation of the coupling between, say, matter and space, and inductively ALSO between space and the space of spaces!

This logic, when matured, should - if I am right - suggest a more fundamental action. So even the action is evolving, by random walking in the space of actions! Does this make sense? So eventually, by equilibration, chances are that certain typical actions are emergent! And this can be interpreted as self-organisation.

I think you get the essence; as for how to implement this, I am working on it. I've really tried to analyze the conceptual points carefully before jumping into toy models. I am now at the stage where the conceptual parts slowly start to fall into place, and the next phase is to find the formalism that allows predictive computations. The conceptual understanding is the guide that will supposedly take me through the "space of theories". That's how I see this.

I think you have a far better overview of current research than I have, but I am not aware of many people who are currently trying this. If the CDT people could get rid of the remaining baggage of the path integrals and classical actions, it would be much better. Of course that is asking a lot, and I don't have a better theory at the moment, but I subjectively think that I've got a decent _strategy_ lined out; this is what I follow, and I can't wait to see where it takes me.

Marcus, let me know what you think - am I missing something still? If not, I think we are sort of closing in on each other.

/Fredrik
 
  • #35
Slight further note...

Fra said:
For me a "random walk in the unknown" is naturally complemented by a strategy of learning. This gets philosophical, but now we are close enough that I think you might get the idea anyway.

For example, the notion of trying to - if possible - distinguish places from one another: how can the random walker distinguish places? And, so to speak, attempt to "index" the distinguishable states. This somehow calls for a memory structure, and the structure of the memory is a constraint on the learning. Once such an indexing starts to form, changes in this indexing can also be explored, and all the structures relate to each other, building higher levels of complexity.

In my thinking, one of the first developments for the random walker is the notion of distinguishable states. On top of that, rating systems can build, which evolve so as to protect the structure. The properties of the emergent structures are self-preserving almost by construction, because non-self-preserving structures simply won't emerge - at least they are highly unlikely to occur, and soon they are unlikely enough that it's fair to say they don't happen.

Then I like to think in terms of transformations of the emergent structures. To each transformation a probability is assigned, and this will relate to actions too. But a lot remains.

Ultimately this reconnects to the observer issue, by noting that the observer is the one performing the random walk. And thus evolution is pretty much a random walk in the "space of random walkers".

Edit: Marcus, I'll reread and respond to your post #30 later.

/Fredrik
 
Last edited:
