Steve Carlip: Dimensional Reduction at Small Scale

  • Thread starter marcus
  • Tags
    Reduction
In summary, a recent conference on Planck Scale physics has sparked discussion about the possibility of a spontaneous dimensional reduction at small scales in various models of quantized geometry. Prominent physicist Steve Carlip has focused attention on this coincidence and is exploring its implications. Renate Loll, who has been studying this phenomenon since 2005, has also discussed it in her talks and provided references to relevant papers. One view raised in the thread is that quantum mechanical events define space-time, and where there are no events, there is no space-time; on that view the concept of space-time may not be applicable to isolated systems such as entangled particles. The idea of a microscopic universe with fewer than four dimensions is being explored and may have implications for our understanding of space-time at the smallest scales.
  • #1
marcus
http://www.ift.uni.wroc.pl/~planckscale/lectures/1-Monday/1-Carlip.pdf

Steve Carlip gave the first talk of a weeklong conference on Planck Scale physics that just ended July 4 (yesterday). The pdf of his slides is online.

We here at PF have been discussing, off and on since 2005, the odd coincidence that several very different approaches to quantizing GR give a spacetime of less than 4D at small scale. As you zoom in and measure things like areas and volumes at smaller and smaller scales, you find in these various quantized-geometry models that the geometry behaves as if it had a fractional dimension less than 4, going continuously down through 3.9, 3.8, 3.7... and finally approaching 2D.

Dimensionality does not have to be a whole number, like exactly 2 or exactly 3. There are easy ways to measure the dimensionality of whatever space you are in, such as comparing radius with volume to see how the volume grows, or running a random-walk diffusion and seeing how fast the diffusion spreads. These operational ways of measuring dimension can give non-integer answers, and there are many well-known examples of spaces you can construct that have non-whole-number dimension. Renate Loll had a SciAm article about this, with nice illustrations, explaining why this could be how it works at the Planck scale. The link is in my signature if anyone wants it.
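As a concrete toy illustration of the first method (a sketch, not taken from any of the papers under discussion): count how many lattice points fit inside a ball of growing radius and read the dimension off the growth exponent, N(r) ~ r^d.

```python
# Toy "volume vs. radius" dimension estimate on an ordinary cubic lattice.
# Count lattice points within distance r of the origin; if N(r) ~ r^d, the
# slope of log N against log r recovers the dimension d (up to small
# boundary corrections at finite r).
import itertools
import math

def ball_volume(dim, r):
    """Number of integer lattice points of Z^dim inside a ball of radius r."""
    rng = range(-int(r), int(r) + 1)
    return sum(1 for p in itertools.product(rng, repeat=dim)
               if sum(x * x for x in p) <= r * r)

def estimated_dimension(dim, r1=8.0, r2=16.0):
    """Fit the growth exponent of N(r) between two radii."""
    n1, n2 = ball_volume(dim, r1), ball_volume(dim, r2)
    return (math.log(n2) - math.log(n1)) / (math.log(r2) - math.log(r1))

if __name__ == "__main__":
    for d in (2, 3):
        print(f"{d}D lattice: volume-growth dimension ~ {estimated_dimension(d):.2f}")
```

The same recipe run on a quantum-geometry model, with the ball radius taken smaller and smaller, is what gives the non-integer answers the text describes.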

So from our point of view it's great that Steve Carlip is focusing some attention on this strange coincidence. Why should the very different approaches of Renate Loll, Martin Reuter, Petr Horava, and even Loop QG all arrive at the same bizarre spontaneous dimensional reduction at small scale? (That is the title of Carlip's talk.)

Carlip is prominent and a widely recognized expert, so IMHO it's nice that he is thinking about this.

Here is the full schedule of the Planck Scale conference, which now has online PDFs of many of the week's talks:
http://www.ift.uni.wroc.pl/~planckscale/index.html?page=timetable
 
  • #2


Renate Loll also discussed this coincidence among different QG approaches: spontaneous dimensional reduction from 4D down to 2D.
See her slide #9, two slides from the end.
http://www.ift.uni.wroc.pl/~planckscale/lectures/1-Monday/4-Loll.pdf
She gives arXiv references to papers by herself, by Reuter, by Horava, by Modesto, and the one by Benedetti, so you have a more complete set of links to refer to than you get with Steve Carlip. She has been writing about this since 2005 and has the whole thing in focus and sharp perspective. The interesting thing is that Steve Carlip is a completely different brain now looking at the same coincidences, and likely making something different out of it.

I'm beginning to think this Planck Scale conference at Wroclaw (the city many still call by its old German name, Breslau) was a great conference. Perhaps it will turn out to have been the best one of summer 2009. Try sampling some of the talks and see what you think.
 
  • #3


An interval is defined by the events at its ends. There is no reason for us to claim there are intervals that do not have events at their ends, because an interval without events is immeasurable and has no meaning except classically.

Where there are no events, there are no intervals; where there are no intervals, there is no dimension (for dimensions are sets of orthogonal, or at least non-parallel, intervals); and where there is no dimension, there is no space-time. For this reason we see space and time (a classical concept) as inapplicable in systems that are isolated from us. Thus a closed system like an entangled pair of particles is as free to disobey causality as a couple of local particles, because in both cases there is no space-time between them. Likewise, time executes infinitely fast within a quantum computer, because it is a closed system and there are no intervals between it and the rest of the universe to keep its time rate synchronized with everything else.

It is a grave mistake to think of quantum mechanical events as occurring "in space-time". Rather, it is those events that define space-time, and there is no space-time where there are no events.

The fewer the events, the fewer orthogonal (and non-parallel) intervals, and thus the fewer dimensions.
 
  • #4


marcus said:
Renate Loll also discussed this coincidence among different QG approaches: spontaneous dimensional reduction from 4D down to 2D.
See her slide #9, two slides from the end.
http://www.ift.uni.wroc.pl/~planckscale/lectures/1-Monday/4-Loll.pdf
She gives arXiv references to papers by herself, by Reuter, by Horava, by Modesto, and the one by Benedetti, so you have a more complete set of links to refer to than you get with Steve Carlip. She has been writing about this since 2005 and has the whole thing in focus and sharp perspective. The interesting thing is that Steve Carlip is a completely different brain now looking at the same coincidences, and likely making something different out of it.

I'm beginning to think this Planck Scale conference at Wroclaw (the city many still call by its old German name, Breslau) was a great conference. Perhaps it will turn out to have been the best one of summer 2009. Try sampling some of the talks and see what you think.


Interesting.

By the way Marcus, is there a reason why you think the string-theory idea that the universe may have more than four dimensions is silly, yet you seem to like the idea that the universe at microscopic scales may have less than four dimensions? I personally think that we should be open-minded to all possibilities and use physics (and maths) to guide us, not personal prejudices. But you seem closed-minded to one possibility while being open-minded to the other. Is it just because you don't like anything that comes from string theory and like everything that comes from LQG, or is there a more objective reason?

Thanks for your feedback. Your work in bringing up interesting papers and talks is sincerely appreciated!
 
  • #5


What power of loupe do we need to see the Planck scale and LQG?
 
  • #6


Bob_for_short said:
What power of loupe do we need to see the Planck scale and LQG?
That's a good question and I like the way you phrase it! The aim is to observe Planck scale effects. There are two instruments recently placed in orbit which many believe may be able to measure such effects.

These are Fermi (formerly GLAST, the Gamma-ray Large Area Space Telescope) and Planck (the successor to the Wilkinson Microwave Anisotropy Probe).

The Fermi collaboration has already reported an observation that shows the desired sensitivity. To get a firm result one will need many tens of such observations; one has to record gamma-ray bursts arriving from several different distances.

The Fermi collaboration reported at a conference in January 2009 and published in the journal Science soon after, I think in March. They reported that with 95% confidence the quantum gravity mass M_QG is at least 0.1 M_Planck.

The Science article is pay-per-view but Charles Dermer delivered this powerpoint talk
http://glast2.pi.infn.it/SpBureau/g...each-contributions/talk.2008-11-10.5889935356
at Long Beach at the January 2009 meeting of the American Astronomical Society (AAS). It summarizes observational constraints on the QG mass from other groups and discusses the recent Fermi results.

The main point is we already have the instruments to do some of the testing that is possible. They just need time to accumulate more data. The result so far, that M_QG > 0.1 M_Planck, is not very helpful. But by observing many more bursts they may be able to say
either M_QG > 10 M_Planck or M_QG < 10 M_Planck. Either result would be a big help. The first would pretty much kill the DSR (deformed special relativity) hypothesis and the second would strongly favor DSR. If you haven't been following the discussion of DSR, Carlo Rovelli posted a short paper about it in August 2008, and there are earlier, longer papers by many other people in the LQG community. Rovelli seems to be a DSR skeptic, but his paper wouldn't be a bad introduction.

Several of us discussed that 6 page Rovelli paper here:
https://www.physicsforums.com/showthread.php?p=2227272

Amelino gave a seminar talk at Perimeter this spring, interpreting the Fermi result. Amelino is a QG phenomenologist who has made QG testing his specialty; he has written a lot about it and gets to chair the parallel session on that line of research at conferences. He was happy with the Chuck Dermer paper reporting a slight delay in the arrival time of some high-energy gamma rays, but he emphasized the need to observe a number of bursts at different distances to make sure the effect is distance-dependent. Anyone who is really curious about this can watch the video.

The url for the video is in the first post of this thread:
https://www.physicsforums.com/showthread.php?t=321649

======================
Various papers have been written about a QG signature in the microwave background temperature and polarization maps. Right now I don't know of firm predictions, of some theory that will live or die depending on detail found in those maps. The Planck spacecraft just arrived at the L2 Lagrange point and began observing, and I'm watching to see whether people in the QG community can make use of the improved resolution of the map.
 
  • #7


marcus said:
The result so far, that M_QG > 0.1 M_Planck, is not very helpful. But by observing many more bursts they may be able to say
either M_QG > 10 M_Planck or M_QG < 10 M_Planck.

This is not correct. Smolin argued for a distribution function for the delay of the GRB photons, so merely placing the cutoff at the first photons detected may not be the correct path.
 
  • #8


MTd2 said:
This is not correct. Smolin argued for a distribution function for the delay of the GRB photons, so merely placing the cutoff at the first photons detected may not be the correct path.

I'm not sure what you mean. Who is not correct? As I recall the parameter M_QG has been in general use for some time. Smolin (2003) used it, and so did the John Ellis + MAGIC paper (2007). Charles Dermer used it in his January 2009 report for the Fermi collaboration, which then used it in their Science article.

It is not a cutoff and has nothing to do with a cutoff as far as I know. Just a real simple handle on quantum-geometry dispersion. You may know all this and think it is basic (it is basic) and maybe you are talking about something else more sophisticated. I want to keep it simple.

Papers typically look at two QG masses, one appearing in a first-order dispersion relation and the other in a second-order one, so you see alternate formulas with both M_QG1 and M_QG2. When they don't make the distinction, they are talking about the first order.

I don't like the notation personally. I think they should use the symbol E_QG, because it is really an energy they are talking about, expressed either in GeV or in terms of the Planck energy E_Planck.

E_QG is the energy scale by which the hypothetical dispersion is suppressed.

The hypothesis to be tested is that the speed at which a photon of energy E travels is not c but

(1 - E/E_QG) c.

More complicated behavior could be conjectured and tested; maybe there is no first-order dependence but there is some second-order effect of E on the speed. But this linear hypothesis is simple. The observational astronomers can test it and possibly rule it out (if it is wrong) fairly quickly.
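To get a feel for the numbers (a rough sketch only; the actual analyses integrate the delay over redshift rather than using one fixed distance, and the 10 GeV / 1 Gpc figures below are just illustrative), the linear hypothesis predicts an extra travel time of roughly (E/E_QG)(D/c) for a photon of energy E crossing a distance D:

```python
# Rough, non-cosmological estimate of the first-order "quantum gravity" time
# delay for a high-energy photon, assuming v(E) = (1 - E/E_QG) * c as above.
C = 2.998e8            # speed of light, m/s
GPC = 3.086e25         # one gigaparsec in metres
E_PLANCK_GEV = 1.22e19 # Planck energy in GeV

def delay_seconds(E_gev, distance_m, E_qg_gev):
    """Extra travel time of a photon of energy E relative to a low-energy one."""
    travel_time = distance_m / C
    return travel_time * (E_gev / E_qg_gev)   # first-order (linear) dispersion

if __name__ == "__main__":
    for ratio in (0.1, 1.0, 10.0):            # E_QG as a multiple of E_Planck
        dt = delay_seconds(10.0, 1.0 * GPC, ratio * E_PLANCK_GEV)
        print(f"E_QG = {ratio:>4} E_Planck -> delay ~ {dt:.3f} s for a 10 GeV photon at 1 Gpc")
```

Sub-second delays at GeV energies over Gpc distances are exactly the regime Fermi's burst timing can probe, which is why the lower limit on E_QG (or M_QG) can be pushed up burst by burst.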

The way you would rule it out is to raise the lower limit on M_QG (or on E_QG, as I would prefer; my two cents).
Intuitively, if there is a Planck-scale DSR effect, then M_QG is on the order of the Planck mass. So if you can show that dispersion is more strongly suppressed than that, if you can show that the parameter is, say, > 10 times M_Planck, that would effectively rule out DSR (or at least rule out any simple first-order version).

To be very certain, perhaps one should collect data until one can confidently say that the parameter is > 100 times M_Planck. But I personally would be happy to reject DSR with only a 95% confidence result saying > 10 times M_Planck.

On the other hand, if DSR is not wrong, then groups like Fermi will continue to observe and will come up with some result like: the parameter is < 10 M_Planck. Then it will be bracketed in some interval like [0.1 M_Planck, 10 M_Planck].
Then the dam on speculation will break and the cow will float down the river.
Because we will know that there is a linear dispersion coefficient actually in that range.
Everybody, including John Ellis with his twinkly eyes and Tolkien beard, will go on television to offer an explanation. John Ellis has already offered an explanation in advance of any clear result of this sort. And straight quantum gravitists will be heard as well. It is an exciting thought, but we have to wait and see until there are many light curves of many distant GRBs.

The Smolin (2003) paper was titled something like "How far are we from a theory of quantum gravity?"
The John Ellis + MAGIC paper was from around 2007. Here is the link:
http://arxiv.org/abs/0708.2889
"Probing quantum gravity using photons from a flare of the active galactic nucleus Markarian 501 observed by the MAGIC telescope"
 
  • #9


I am not referring to either. I am referring to the fuzzy dispersion, p. 22, section 5.1:

http://arxiv.org/PS_cache/arxiv/pdf/0906/0906.3731v3.pdf

Note eq. 20 and 21.

If you suppose that the time delay is the mean of a Gaussian, it does not make sense to talk of a first- or second-order approximation. You have to look at the infinite sum, that is, the Gaussian.
 
  • #10


I see! But that is one small special section of the Amelino-Smolin paper, on "fuzzy dispersion". For much of the rest of the paper they are talking about the simple M_QG that I am used to.
They cite the Fermi collaboration's paper in Science, and quote the result
M_QG > 0.1 M_Planck.
It's true they look ahead to more complicated types of dispersion and effects that more advanced instruments beyond Fermi might observe. They even consider superluminal dispersion (not just the slight slowing down when a photon has enough energy to wrinkle the geometry it is traveling through).
They want to open the QG dispersion question up and look at second-order and non-simple possibilities, which is what scientists are supposed to do. We expect it.
But our PF member Bob asked a question about the LOUPE you need to see Planck-scale wrinkles! The tone of his question is keep-it-basic. I don't want to go where Amelino-Smolin go in that 2009 paper. I want to say "here is an example of the magnifying glass that you see wrinkles with." The loupe (the jeweler's squint-eye microscope) is the spacecraft called Fermi, which is now in orbit observing gamma-ray bursts.

MTd2, let's get back to the topic of spontaneous dimensional reduction. I would like to hear some of your (and others) ideas about this.
I suspect we have people here at PF who don't immediately see the difference between a background-independent theory (B.I. in the sense that LoopQG people use the term), where you don't tell space what dimensionality to have, and a fixed-geometric-background theory, where you set up space ahead of time to have such and such dimensionality.

In some (background-dependent) theories you put dimensionality in by hand at the beginning.
In other (B.I.) theories the dimensionality can be anything and it has the status of a quantum observable, or measurement. And it may be found, as you study the theory, that to your amazement the dimensionality changes with scale, and gradually gets lower as the scale gets smaller. This behavior was not asked for and came as a surprise.

Significantly, I think, it showed up first in the two approaches that are the most minimalist attempts to quantize General Relativity: Reuter, following Steven Weinberg's program of finding a UV fixed point in the renormalization flow (the asymptotic safety approach), and Loll, letting the pentachorons assemble themselves in a swarm governed by Regge's General Relativity without coordinates. Dimensional reduction appeared first in the approaches that went for a quick success with the bare minimum of extra machinery, assumptions, and new structure. Both had their first papers appear in 1998.

And Horava, the former string theorist, came in ten years later with another minimalist approach which turned out to give the same dimensional reduction. So what I am thinking is there must be something about GR. Maybe it is somehow intrinsically built into the nature of GR, if you try to quantize it in the simplest possible way you can think of, without any strings/branes/extra dimensions or anything else you dream up. If you are completely unimaginative and go directly for the immediate goal, then maybe you are destined to find dimensional reduction at very small scale. (If your theory is B.I., no fixed background geometry which would preclude the reduction happening.) Maybe this says something about GR that we didn't think of before. This is just a vague hunch. What do you think?

===========EDIT: IN REPLY TO FOLLOWING POST========
Apeiron,
these reflections in your post are some of the most interesting ideas (to me) that I have heard recently about what could be the cause of this curious agreement among several very different methods of approach to Planckscale geometry (or whatever is the ground at the roots of geometry). I don't have an explanation to offer as an alternate conjecture to yours and in fact I am quite intrigued by what you say here and want to mull it over for a while.

I will answer your post #11 here, since I can still edit this one, rather than making a new post just to say this.
 
  • #11


marcus said:
If you are completely unimaginative and go directly for the immediate goal, then maybe you are destined to find dimensional reduction at very small scale. (If your theory is B.I., no fixed background geometry which would preclude the reduction happening.) Maybe this says something about GR that we didn't think of before. This is just a vague hunch. What do you think?

Can you explain just what it is in the approaches that leads to this dimensional reduction?

In the CDT story, is it that as path lengths shrink towards Planck scale, the "diffusion" gets disrupted by quantum jitter? So instead of an organised 4D diffusion, dimensionality gets so disrupted there are only linear jumps in now unspecified, uncontexted, directions?

Loll writes:
"In our case, the diffusion process is defined in terms of a discrete random walker between neighbouring four simplices, where in each discrete time step there is an equal probability for the walker to hop to one of its five neighbouring four-simplices."

So it does seem to be about the choice being disrupted. A random walker goes from four options (or 3+1) down to 1+1.
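For concreteness, here is a toy version of that diffusion probe (a flat periodic lattice standing in for the actual CDT triangulation; a sketch, not the Loll group's code), showing how a spectral dimension is read off from the walker's return probability, P(s) ~ s^(-d_s/2):

```python
# Toy spectral-dimension measurement: diffuse a walker on a flat periodic
# lattice and fit the return probability P(s) ~ s^(-d_s/2).  On a genuinely
# d-dimensional lattice this recovers d; in CDT the same probe, run on the
# dual graph of the triangulation, is what comes out near 4 at long diffusion
# times and drifts toward 2 at short ones.
import numpy as np

def spectral_dimension(dim, steps=60, size=151):
    p = np.zeros((size,) * dim)
    origin = (size // 2,) * dim
    p[origin] = 1.0
    returns = []
    for _ in range(steps):
        # one step of the simple random walk: average over the 2*dim neighbours
        p = sum(np.roll(p, shift, axis=ax)
                for ax in range(dim) for shift in (-1, 1)) / (2 * dim)
        returns.append(p[origin])
    s = np.arange(1, steps + 1)
    even_late = (s % 2 == 0) & (s > steps // 2)   # odd-step returns vanish by parity
    slope = np.polyfit(np.log(s[even_late]),
                       np.log(np.array(returns)[even_late]), 1)[0]
    return -2.0 * slope

if __name__ == "__main__":
    for d in (2, 3):
        print(f"{d}D lattice: spectral dimension ~ {spectral_dimension(d):.2f}")
```

The "dimensional reduction" claim is then simply that the fitted slope changes with the diffusion time: walks probing large regions return like 4D walks, while very short walks return like 2D ones.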

As to general implications for GR/QM modelling, I think it fits with a systems logic in which it is global constraints that produce local degrees of freedom. So the loss of degrees of freedom - the disappearance of two of the 4D - is a symptom of the asymptotic erosion of a top-down acting weight of constraint.

GR in a sense is the global macrostate of the system and it attempts to impose 4D structure on its locales. But as the local limit is reached, there is an exponential loss of resolving power due to QM fluctuations. Pretty soon, all that is left is fleeting impulses towards dimensional organisation - the discrete 2D paths.

Loll's group would seem to draw a different conclusion, as she suggests that, continuing on through the Planckscale, the same 2D fractal universe would persist crisply to infinite smallness. I would expect instead - based on a systems perspective - that the 2D structure would dissolve completely into a vague QM foam of sorts.

But still, the question is why does the reduction to 2D occur in her model? Is it about QM fluctuations overwhelming the dimensional structure, fragmenting directionality into atomistic 2D paths?

(For those unfamiliar with downward causation as a concept, Paul Davies did a good summary from the physicist's point of view where he cautiously concludes:)

In such a framework, downward causation remains a shadowy notion, on the fringe of physics, descriptive rather than predictive. My suggestion is to take downward causation seriously as a causal category, but it comes at the expense of introducing either explicit top-down physical forces or changing the fundamental categories of causation from that of local forces to a higher-level concept such as information.

http://www.ctnsstars.org/conferences/papers/The%20physics%20of%20downward%20causation.pdf
 
  • #12


I've gotten the impression that my points of view are usually a little alien to marcus's reasoning, but I'll add, FWIW, an opinion relating to this...

I also think the common traits of various programs are interesting, and possibly a sign of something yet to come, but I still think we are quite some distance away from understanding it!

I gave some personal opinions on the CDT logic before, in one of marcus's threads: https://www.physicsforums.com/showthread.php?t=206690&page=3

I think it's interesting, but to understand why it's like this I think we need to somehow think in new ways. From what I recall of the CDT papers I read, their reasoning is what marcus calls minimalist. They start with the current model and do some minor technical tricks. But from the point of view of rational reasoning, I think the problem is that the current models are not really a solid starting point. They are an expectation of nature, based on our history.

I think, similarly, spontaneous dimensional reduction or creation might be thought of as representation optimizations, mixing the encoded expectations with new data. This somehow mixes "spatial dimensions" with time-history sequences, and also ultimately re-transformed histories.

If you assume that these histories and patterns are physically encoded in a material observer, this alone puts a constraint on the combinatorics here. The proper perspective should make sure there are no divergences. It's when no constraints on the context are given that absurd things happen. Loll argued for the choice of gluing rules on the grounds that otherwise the sum would diverge, but if you take the observer to be the computer, there simply is no physical computer around to actually realize a "divergence calculation" anyway. I don't think the question should even have to appear.

So, if you take the view that spatial structure is simply a preferred structure in the observer's microstructure (i.e. matter encodes the spacetime properties of its environment), then clearly the time histories of the measurement record are mixed by those spacetime properties as the observer's internal evolution takes place (i.e. internal processes).

I think this is a possible context in which to understand emergent dimensionality (reduction as well as creation). The context could then be an evolutionary selection for the observer's/matter's microstructure and processing rules, so as to simply persist.

In reasonings where the Einstein action is given, and there is an unconstrained context for the theory, I think it's difficult to "see" the logic, since things that should be evolving and responding are taken as frozen.

So I personally think that, in order to understand the unification of spacetime and matter, it's about as sinful to assume a background (action) as it is to assume a background space. I think it's interesting that Ted Jacobson suggested Einstein's equations might simply be seen as a state (a state of the action, that is), but that a more general case is still out there.

/Fredrik
 
  • #13


What puzzles me more is that Loll makes a big deal about the "magic" of the result. The reason for the reduction should be easy to state even if it is "emergent" from the equations.

As I interpret the whole approach, CDT first takes the found global state of spacetime - its GR-modelled 4D structure complete with "causality" (a thermodynamic arrow of time) and dark energy expansion. Then this global system state is atomised - broken into little triangulations that, like hologram fragments, encode the shape of the whole. Then a seed is grown like a crystal in a soupy QM mixture, a culture medium. And the seed regrows the spacetime from which it was derived. So no real surprise there?

Then a second part of the story is to insert a Planck-scale random walker into this world.

While the walking is at a scale well above QM fluctuations, the walker is moving in 4D. So a choice to jump from one simplex to another is also, with equal crispness, a choice not to jump in any of the other three directions. But then as scale is shrunk and QM fluctuations rise (I'm presuming), now the jump in some direction is no longer also a clear failure to have moved in the other directions. So dimensionality falls. The choice is no longer oriented as orthogonal to three other choices. The exact relationship of the jump to other potential jumps is instead just vague.

Is this then a good model of the Planckscale?

As Loll says, the traditional QM foam is too wild. CDT has a tamer story of a sea of unoriented, or very weakly oriented, actions. A lot of small definite steps - a leap to somewhere. A 1 bit action that defines a single dimension. Yet now there are no 0 bit non-actions to define an orientation to the other dimensions.

So from your point of view perhaps (and mine), a CDT type story may track the loss of context, the erosion of observerhood. In 4D realm, it is significant that a particle went a step in the x-axis and failed to step towards the y or z axis. The GR universe notices these things. But shrink the scale and this clarity of orientation of events is what gets foamy and vague.
 
  • #14


The way I interpret Loll's reasoning is that there is not supposed to be a solid, plausible, convincing reason. They simply try what they think is a conservative attempt to reinterpret the old path-integral approach, but with the additional detail of manually putting in a microstructure in the space of sampled spacetimes (implicit in their reasoning about gluing rules etc.).

And they just note an interesting result, and suggest that the interesting result itself indirectly is an argument for the "validity".

I think it's great to try stuff, but I do not find their reasoning convincing either. That doesn't remove the fact of interesting results, but it calls into question whether the program is sufficiently ambitious and fundamental. It is still somewhat semiclassical to me.

/Fredrik
 
  • #15


> Then a second part of the story is to insert a Planck-scale random walker into this world.

IMO, I'm not sure they are describing what a small random walker would see; they try to describe what a hypothetical massive external observer would see when watching a supposed random walker doing a random walk in a subsystem of the larger context under the loupe. I.e. an external observer, observing the actions and interplay of the supposed random walker.

The problem is when you use such a picture to again describe two interacting systems, then you are not using the proper perspective.

What I am after is: when they consider the "probability of return" after a particular extent of evolution, who is calculating that probability, and more important, what are the consequences for the calculating device (observer) when feedback from actions based on this probability comes from the environment? The escape into ensembles is IMO a static mathematical picture, not an evolving physical picture.

I think we need to question the physical basis even of statistics and probability here, in a context, to make sense of these path integrals. I guess I think there is actually an element of "reality" to the wavefunction, but a relative one. As soon as you introduce ensembles as hypothetical repeats of measurements, I feel we are really leaving reality. In real experimental statistics, the ensembles are still real memory records; the manifestation of the statistics is the memory record, encoding the statistics. Without such physical representation the information just doesn't exist, IMO.

But I think that this STILL says something deep about GR, which marcus was after. Something about the dynamical, relational nature of reality. But to understand WHY this is so, I personally think we need to understand WHY the GR action looks like it does. No matter how far CDT takes us, I would still be left with a question mark on my forehead.

/Fredrik
 
  • #16


Hey, sorry to interrupt the deep conversation. But one can put forward a pretty boring argument as to why gravity seems to be 2-dimensional on small scales: Newton's constant is dimensionless in d=2, hence one should expect gravity to be UV complete, and hence renormalisable, if a dimensional reduction to d=2 occurs. In RG language it makes sense that d=2 at the UV fixed point. The reason I suspect that we can't renormalize the theory in a perturbative way is that in this case the dimensionality is fixed, or rather one expands around a 4d background in such a rigid way that the expansion is still 4d. Reuter suspects that it is the background-independent formulation of gravity that allows for the fixed point.

http://arxiv.org/abs/0903.2971


As for a deeper reason why this reduction happens, maybe we need to look no further than a) why is G dimensionless in d=2? i.e. a study of why GR takes the form it does, and b) why is it important to have dimensionless couplings to renormalize QFTs?
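For reference, here is the standard dimensional analysis behind point (a), in units with hbar = c = 1 so that everything carries a power of mass:

```latex
S_{\rm EH} = \frac{1}{16\pi G}\int d^d x\,\sqrt{-g}\,R ,
\qquad [R] = (\mathrm{mass})^{2},\quad [d^d x] = (\mathrm{mass})^{-d}
\;\Longrightarrow\; [G] = (\mathrm{mass})^{\,2-d} .
```

So the Newton coupling has mass dimension 2 - d: negative in d = 4 (the usual signal of perturbative non-renormalizability) and exactly zero in d = 2.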


I view both CDT and RG approaches to QG as attempts to push QFT and GR as far as they can go to help us understand the quantum nature of spacetime. They are humble attempts that hope to give us some insights into a more fundamental theory.
 
  • #17


Finbar said:
Hey, sorry to interrupt the deep conversation. But one can put forward a pretty boring argument as to why gravity seems to be 2-dimensional on small scales: Newton's constant is dimensionless in d=2, hence one should expect gravity to be UV complete, and hence renormalisable, if a dimensional reduction to d=2 occurs. In RG language it makes sense that d=2 at the UV fixed point. The reason I suspect that we can't renormalize the theory in a perturbative way is that in this case the dimensionality is fixed, or rather one expands around a 4d background in such a rigid way that the expansion is still 4d. Reuter suspects that it is the background-independent formulation of gravity that allows for the fixed point.

http://arxiv.org/abs/0903.2971

As for a deeper reason why this reduction happens, maybe we need to look no further than a) why is G dimensionless in d=2? i.e. a study of why GR takes the form it does, and b) why is it important to have dimensionless couplings to renormalize QFTs? I view both CDT and RG approaches to QG as attempts to push QFT and GR as far as they can go to help us understand the quantum nature of spacetime. They are humble attempts that hope to give us some insights into a more fundamental theory.

To me that argument is not boring, Finbar. It makes good sense. It does not explain why or how space could experience this dimensional reduction at micro scale.

But it explains why a perturbative approach that is locked to a rigid 4D background geometry might not ever work.

It explains why, if you give more freedom to the geometry and make your approach background independent (or at least less dependent), then your approach might find for you the UV fixed point where renormalization is natural and possible (the theory becomes predictive after a finite number of parameters are determined experimentally).

And it gives an intuitive guess as to why dimensionality MUST reduce as this UV fixed point is approached, and thus at small scale as you zoom in and look at space with a microscope.

So your unboring argument is helpful; it says why this must happen. But it is still not fully satisfying, because one still wonders what this strange microscopic geometry, or lack of geometry, could look like, and what causes nature to be made that way.

It's as if you are with someone who seems to have perfectly smooth skin, and you notice that she is sweating. So you say "well, her skin must have holes in it so that these droplets of water can come out." This person must not be solid but must actually be porous! You infer this, but you did not yet take a magnifying glass and look closely at the skin to see what the pores actually look like, and you did not tell us why, when the skin grows, it always forms these pores in such a particular way.

You told us that dimensionality must reduce, because the field theory of geometry/gravity really is renormalizable, as we all suspect. But you did not show us a microscope picture of what it looks like zoomed in, and explain why nature could turn out not to be so smooth and solid as we thought.
 
  • #18


Finbar said:
one can put forward a pretty boring argument as to why gravity seems to be 2-dimensional on small scales: Newton's constant is dimensionless in d=2, hence one should expect gravity to be UV complete, and hence renormalisable, if a dimensional reduction to d=2 occurs...

Are you saying here that the reason why a dimensional reduction to 2D is "a good thing" for the CDT approach is that a 2D realm would give us the strength of gravity that would be required at the Planckscale? So take away dimensions and the force of gravity no longer dilutes with distance?

But that would not explain why the model itself achieves a reduction to 2D. Or was I just wrong about it being quantum fluctuations overwhelming the random walker as being the essential mechanism of the model?
 
  • #19


Fra said:
>

I think we need to question the physical basis even of statistics and probability here, in a context, to make sense of these path integrals. I guess I think there is actually an element of "reality" to the wavefunction, but a relative one. As soon as you introduce ensembles as hypothetical repeats of measurements, I feel we are really leaving reality. In real experimental statistics, the ensembles are still real memory records; the manifestation of the statistics is the memory record, encoding the statistics. Without such physical representation the information just doesn't exist, IMO.

/Fredrik

Everything may be flux (or process) when it comes to reality, but there must still be some kind of stability or structure to have observation - a system founded on meanings/relationships/interactions.

You ought to check out Salthe on hierarchy theory to see how this can be achieved in a "holographic principle" style way.

So an observer exists at a spatiotemporal scale of being. The observer looks upwards and sees a larger scale that is changing so slowly it looks frozen, static, permanent. The observer looks down to the micro-scale and now sees a realm that is moving so fast, equilibrating its actions so rapidly, that it becomes a solid blur - a different kind of permanence.

This is the key insight of hierarchy theory. If you allow a process to freely exist over all scales, it must then end up with this kind of observer-based structure. There will be a view up and a view down. And both will be solid event horizons that have effects which "locate" the observer.
 
  • #20


apeiron said:
Are you saying here that the reason why a dimensional reduction to 2D is "a good thing" for the CDT approach is that a 2D realm would give us the strength of gravity that would be required at the Planckscale? So take away dimensions and the force of gravity no longer dilutes with distance?

But that would not explain why the model itself achieves a reduction to 2D. Or was I just wrong about it being quantum fluctuations overwhelming the random walker as being the essential mechanism of the model?

I have a better understanding of the RG approach than the CDT one. But both the CDT and RG approaches are valid at all scales, including the Planck scale. My point is not that it is a "good thing" but that it is essential for the theory to work at the Planck scale. In CDT you are putting in 4-simplices (4-dimensional triangles) and using the 4-dimensional Einstein-Hilbert (actually Regge) action, but this does not guarantee that a d=4 geometry comes out, or even that you can compute the path integral (in this case the sum over triangulations). For the theory to be reasonable it must produce d=4 on large scales, to agree with the macroscopic world. Now there are many choices one must make before computing a path integral over triangulations, such as which topologies to allow and how to set the bare couplings. My guess is that in RG language this corresponds to choices of how to fix the gauge and which RG trajectory to put oneself on. Different choices can lead to completely different geometries on both large and small scales (as the original Euclidean DT showed). So one must choose wisely and hope that the geometry one gets out a) is d=4 on large scales and b) has a well-defined continuum limit. My point is that getting a good continuum limit probably requires the small scale to look 2-dimensional; otherwise I would expect the theory to blow up (or possibly the geometry to implode).

Think of it this way: if we want to describe whatever it is that is actually happening at the Planck scale in the language of d=4 GR and QFT, we have to do it in such a way that the effective dimension is d=2, else the theory blows up and can't describe anything. In turn this tells us that if we can push this GR/QFT theory to the Planck scale (if there's a UV fixed point), then it's very possible that the actual Planck scale physics resembles this d=2 reduction. We could then perhaps turn the whole thing the other way around and say that Planck scale physics must look like this d=2 GR/QFT theory in order to produce a d=4 GR/QFT theory on larger scales.

Of course this is all speculation. But the key question here is when, that is at what energy scale, do we stop using GR and QFT and start using something else?
 
  • #21


So it is still a mystery why the reduction occurs as a result of the mathematical operation? Surely not.

I can see the value of having an operation that maps the 4D GR view on to the limit view, the QM Planckscale. It allows a smooth transition from one to the other. Before there was a problem in the abruptness of the change. With CDT, there is a continuous transformation. Perhaps. If we can actually track the transformation happening.

To me, it seems most likely that we are talking about a phase-transition type situation. So a smooth change - like the reduction of dimensionality in this model - could actually be an abrupt one in practice.

I am thinking here of Ising models and Kauffman auto-catalytic nets, that sort of thing.

So we could model GR as like the global magnetic field of a magnet. A prevailing dimensional organisation that is impervious to local fluctuations (of the iron dipoles).

But heat up the magnet and the fluctuations grow to overwhelm the global order. Dimensionality becomes fractured (rather than strictly fractal). Every dipole points in some direction, but not either in alignment, or orthogonal to that alignment, in any definite sense.

So from this it could be said that GR is the ordered state, QM the disordered. And we then want a careful model of the transition from one to the other.

But no one here seems to be able to spell out how CDT achieves its particular result. And therefore whether it is a phase transition style model or something completely different.
 
  • #22


apeiron said:
Everything may be flux (or process) when it comes to reality, but there must still be some kind of stability or structure to have observation - a system founded on meanings/relationships/interactions.

You ought to check out Salthe on hierarchy theory to see how this can be achieved in a "holographic principle" style way.

So an observer exists at a spatiotemporal scale of being. The observer looks upwards and sees a larger scale that is changing so slowly it looks frozen, static, permanent. The observer looks down to the micro-scale and now sees a realm that is moving so fast, equilibrating its actions so rapidly, that it becomes a solid blur - a different kind of permanence.

This is the key insight of hierarchy theory. If you allow a process to freely exist over all scales, it must then end up with this kind of observer-based structure. There will be a view up and a view down. And both will be solid event horizons that have effects which "locate" the observer.

Apeiron, this is right. I do not argue with this.

But in my view, there is still a physical basis for each scale.

To me the key decompositions are Actions (observer -> environment) and Reactions (environment -> observer). The Actions of an observer are, I think, constructed AS IF this most stable scale were universal. The action is at best of a probabilistic type. The reaction is however less predictable (undecidable). The reactions are what deform the action, by evolution. The action is formed by evolution, as it interacts and is subject to reactions.

I think the ACTION here is encoded in the observer. I.e. the complexity of the action, i.e. the information needed to encode the action, is constrained by the observer's "most stable view". The observer acts as if this were really fixed. However, only in the differential sense, since during global changes the entire action structure may deform. It doesn't have to deform, but it can deform. The action is at equilibrium when it does not deform.

And I also think that this deformation can partially be seen as phase transitions, where the different phases (microstructure) encode different actions.

I think there is no fundamental stability or determinism, instead it's this process of evolution that produces effective stability and context.

/Fredrik
 
  • #23


I wrote this in a different context (loops and strings) but I think it may be valid here as well: the major problem of quantizing gravity is that we start with an effective theory (GR) and try to find the underlying microscopic degrees of freedom; currently it seems that they could be strings, loops (or better: spin networks), or something like that.

The question is which guiding principle guarantees that starting with GR as an "effective theory" and quantizing it by rules of thumb (quantizing is always using rules of thumb - choice of coordinates, Hamiltonian or path integral, ...) allows us to identify the microscopic degrees of freedom. Of course we will find some of their properties (e.g. that they "are two-dimensional"), but I am sure that all those approaches are limited in the sense that deriving microscopic degrees of freedom from macroscopic effective theories simply does not work!

Let's make a comparison with chiral perturbation theory:
- it is SU(2) symmetric (or if you wish SU(N) with N being the number of flavors)
- it is not renormalizable
- it incorporates to some extent principles like soft pions, current algebra, ...
- it has the "correct" low-energy effective action, with the well-known pions as fundamental degrees of freedom
But it simply does not allow you to derive QCD, especially not the color degrees of freedom = SU(3) and its Hamiltonian.
Starting with QCD there are arguments for how to get to chiral perturbation theory, heavy baryons etc., but even starting with QCD, integrating out degrees of freedom and deriving the effective theories mathematically has not been achieved so far.

So my claim is that the identification of the fundamental degrees of freedom requires some additional physical insight (in that case the color gauge symmetry) which cannot be derived from an effective theory.

Maybe we are in the same situation with QG: we know the IR regime pretty well, we have some candidate theories (or only effective theories) in the UV respecting some relevant principles, but we still do not "know" the fundamental degrees of freedom: the effects corresponding to deep inelastic scattering for spin networks (or something else) are missing!
 
  • #24


tom.stoer said:
So my claim is that the identification of the fundamental degrees of freedom requires some additional physical insight

I think so too.

This is why I personally think it's good to try to think outside the current frameworks, and to go back even to analysing our reasoning and what our scientific method looks like. Because when you phrase questions within a given framework, sometimes some possible answers are excluded.

This is why I have personally put a lot of focus on the scientific inquiry process and measurement processes. The problem of scientific induction and the measurement problem have common traits. We must also question our own questions. A question isn't just as simple as: here is the question, what is the answer? Sometimes it's equally interesting, and a good part of the answer, to ask why we are asking this particular question. What is the origin of questions? And how do our questions influence our behaviour and actions?

The same questions appear in physics, if you wonder: what are matter and space, and why is the action of matter and spacetime this or that? If a material object contains "information" about its own environment, what does the learning process of this matter in its environment look like? And how is that different in principle (if at all) from human scientific processes?

/Fredrik
 
  • #25


tom.stoer said:
I am sure that all those approaches are limited in the sense that deriving microscopic degrees of freedom from macroscopic effective theories simply does not work!

But why do you presume this? I am expecting the exact opposite to be the case from my background in systems science approaches. Or even just condensed matter physics.

In a systems approach, local degrees of freedom would be created by global constraints. The system has downward causality. It exerts a stabilising pressure on all its locations. It suppresses all local interactions that it can observe - that is, it equilibrates micro-differences to create a prevailing macrostate. But then - the important point - anything that the global state cannot suppress is free to occur. Indeed it must occur. The unsuppressed action becomes the local degree of freedom.

A crude analogy is an engine piston. An explosion of gas sends metal flying. But the constraint exerted by the engine cylinder forces all action to take place in one direction.

When all else has been prevented, then that which remains is what occurs. And the more that gets prevented, the more meaningful and definite becomes any remaining action, the more fundamental in the sense of being a degree of freedom that the system can not eradicate.

This is the logic of decoherence. Or sum over histories. By dissipating all the possible paths, all the many superpositions, the universe is then left with exactly the bit it could not average away.

Mostly the universe is a good dissipator of QM potential. Mostly the universe is cold, flat and empty.
 
  • #26


Maybe it's a matter of interpretation here, but Tom asked for physical insight. I guess this can come in different forms. I don't think we need "insights" like "matter must be built from strings". In that respect I'm with Apeiron.

However, there is also the kind of insight that refers to the process of inference. In that sense, I wouldn't say we can "derive" the microscopic domain from the macro in the deductive sense. But we can induce a guess that we can thrive on. By assuming there is a universal deductive rule from macro to micro, I think we make a mistake. But there might still be a rational path of inference, which just happens to be key to the GAME that comes with the emergent stability we do observe.

I think the missing physical insight should be exactly how this process works, rather than coming up with a microstructure out of the blue. Conceptually this might be the first step to gaining intuition, but then also mathematically - what kind of mathematical formalism is best used to describe this?

I think this problem, gets then inseparable from the general problem of scientific induction.
- What is science, and what is knowledge, what is a scientific process?
- What is physics, what are physical states, and what are physical processes?

Replace the labels and the problems are strikingly similar.

/Fredrik
 
  • #27


Would a soliton count as a physical insight then? Standing waves are exactly the kind of thing that motivate the view I'm taking here.

Very interesting that you say we cannot deduce from macro but may induce. This is also a systems science point. Peirce expanded on it very fruitfully.

The reason is that both the macro and the micro must emerge together in interaction. So neither can be "deduced" from the other (although retrospectively we can see how they each came to entail the other via a process of induction, or rather abduction).
 
  • #28


Again Apeiron, I think we are reasonably close in our views.

apeiron said:
Would a soliton count as a physical insight then? Standing waves are exactly the kind of thing that motivate the view I'm taking here.

Yes, loosely something like that. The coherence of a system is self-stabilised.

But remaining questions are,

a) waves of what, in what? And what quantitative predictions and formalism does this suggest. The standing wave, mathematically is just a function over some space, or index.

b) what is the logic of the emergent actions that yield this soliton like stuff? And how can these rudimentary and abstract ideas, be used to reconstruct the normal physics - spacetime and matter?

I'm struggling with this, but I sure don't have any ready answers.

Before we can distinguish a wave, we must distinguish the index(or space) where the wave exists, but we also need to distinguish the state of the wave.

Maybe once we agree loosely on the direction here - Peircean-style stuff - what remains is partly a creative technical challenge: to find the technical framework that realizes this intuitive vision.

So far I'm working on a reconstruction of information models, where there are no statistical ensembles or prior continuum probabilities at all. I guess the reconstruction corresponds to how an inside observer would describe the universe's origin. Indexes, which are the discrete prototype of continuum spaces, are emergent distinguishable states of the observer's coherent degrees of freedom. These coherent degrees of freedom "just are"; the observer doesn't know where they came from. However, the only way to see the origin is to understand how these degrees of freedom can grow and shrink. And by starting with the simplest possible observer, ponder how he can choose to play a game that makes him grow. Generation of mass, and thus mass of indexes, might generate space. The exploit is that the actions possible for a simple observer are similarly simple :) How many ways can you combine 3 states, for example? So I am aiming for a discrete formalism. In the limit of the system's complexity growing, there will be an "effective continuum model". But the only way, I think, to UNDERSTAND the continuum model is to understand how it emerges from the discrete limit.

But it's still a massive problem. This is why I'm always curious to read up on what others are doing. I want a mathematical formalism, implemented as per these guidelines. I haven't found it yet.

/Fredrik
 
  • #29


So is this all about somehow making QG renormalizable?

In my opinion, the renormalizability of QED plays a bad role here - it gives the impression that renormalizations are a "good solution" to mathematical and conceptual difficulties. "Cowboy attacks" on the divergences in QG fail, and all these strings and superstrings, loops, and dimension reductions are just attempts to get something meaningful, at least mathematically.

At the same time, there is another approach that contains really natural (physical) regularizers or cut-offs and thus is free from divergences. I would like to read your opinions, if any, in my "Independent research" thread (not here).
 
  • #30


Marcus, great discussion...thanks for the post..

a lot of this is new to me so I'm still puzzling over some basics.

I had a similar thought, I think, as apeiron, who posted,

And the seed regrows the spacetime from which it was derived. So no real surprise there?

whereas I am puzzling (via the Loll, the Scientific American article Marcus referenced) :

Mix a batch of four-dimensional simplices glued together in a computer model with external independent inputs of time (arrows) (CDT), plus a cosmological constant, plus a tamed QM foam also via CDT, and the result is a four-dimensional de Sitter shape...

so I keep wondering: how close is this model to anything resembling quantum conditions? Is this a huge advance or a really tiny baby step, barely a beginning?

Yes, similar examples of self-organization and self-assembly exist, the authors note, but those have an already established space, time, cosmological constant, etc. in existence as a background before they initiate... why would we suppose quantum emergence has all those present, when the authors say unfettered quantum foam typically results in crumpled-up dimensions?

One answer could be that "everything that is not prohibited is required" but is that all we have here??

PS: Just got to love the idea of fractal dimensions at sub Planck scales..fascinating!
 
  • #31


Naty1 said:
...
Mix a batch of four-dimensional simplices glued together in a computer model with external independent inputs of time (arrows) (CDT), plus a cosmological constant, plus a tamed QM foam also via CDT, and the result is a four-dimensional de Sitter shape...

Naty, I was glad to see your reaction. Mine is similar in a good many respects. Your short list of ingredients doesn't mention one ingredient, which is the idea of letting the size of the simplices go to zero. (I think that's understood in your post, but it can still use a mention.)

It is like a Feynman path integral where you average only over the piecewise-linear paths - polygonal paths made of short linear segments.
Then you let the lengths of the segments all go to zero.

There is a kind of practical leap of faith there (even in the original Feynman path integral), because the segmented paths are admittedly a very small subset of the set of all paths. You have to trust that they are sufficiently representative, like a "skeleton crew" of the whole set. The whole set of paths is too big to average over, but the set of segmented paths is small and simple enough that you can put a probability measure, or amplitudes, or whatever you need on it.

And then you trust that when you let the segment size go to zero, the skeleton average will come close to the whole average, which you can't define or compute with so easily.
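Here is a small toy illustration of that leap of faith (a sketch, not anyone's published code): a one-dimensional Euclidean path integral for a harmonic oscillator, computed by composing N short-time "segment" kernels and compared against the exact continuum propagator. As the segments shrink, the skeleton answer converges.

```python
# Toy "skeleton" path integral: chop the (imaginary-time) interval T into N
# segments, keep only the short-time Gaussian kernel on each segment, compose
# them numerically, and watch the result converge to the exact continuum
# propagator as N grows.  Harmonic oscillator, m = hbar = omega = 1.
import numpy as np

def short_time_kernel(x, eps):
    """One segment: free Gaussian spreading times a half-step of the potential."""
    dx2 = (x[:, None] - x[None, :]) ** 2
    v = 0.5 * x**2
    return (np.sqrt(1.0 / (2 * np.pi * eps))
            * np.exp(-dx2 / (2 * eps))
            * np.exp(-0.5 * eps * (v[:, None] + v[None, :])))

def skeleton_propagator(T, N, x):
    """Compose N segment kernels: the sum over piecewise-linear paths."""
    dx = x[1] - x[0]
    M = short_time_kernel(x, T / N) * dx
    return np.linalg.matrix_power(M, N) / dx

def exact_propagator(T, x, x0=0.0):
    """Exact Euclidean oscillator kernel (Mehler formula) from x0 to x."""
    s, c = np.sinh(T), np.cosh(T)
    return np.sqrt(1.0 / (2 * np.pi * s)) * np.exp(-((x**2 + x0**2) * c - 2 * x * x0) / (2 * s))

if __name__ == "__main__":
    x = np.linspace(-8, 8, 641)
    T = 1.0
    exact = exact_propagator(T, x)
    for N in (2, 4, 16, 64):
        approx = skeleton_propagator(T, N, x)[:, len(x) // 2]  # start at the origin
        print(f"N = {N:3d} segments: max deviation from exact = {np.abs(approx - exact).max():.1e}")
```

The CDT sum over triangulations plays the role of the segmented paths here: a manageable "skeleton" family whose refinement limit is trusted to represent the full, unmanageable sum.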

With Loll's method it doesn't matter very much what shape of blocks they use, only that they use some uniform set of blocks and let the size go to zero in the limit. They have papers using other block shapes.

So they are not fantasizing that space is "made" of simplices, or that there is any "minimal length" present in fundamental geometric reality, whatever that is (there may be limits on what you can measure, but presumably that's a different matter).

This is all kind of implied in what you and others were saying, so I am being a bit tiresome in spelling it out. But it seems to me that philosophically the idea is kind of elusive. It doesn't say that the fundamental degrees of freedom are nailed down to be a certain kind of Lego block. It says that a swarm of shrinking-to-zero Lego blocks provides a good description of the dynamic geometry, a good skeleton path integral, one that captures some important features of how space might work at small scale.

And amazingly enough, de Sitter space does emerge out of it, just as one would want, in the matterless case. The de Sitter model is what our own universe's geometry is tending towards as matter thins out and nothing is left but dark energy, and it is also how you represent inflation (just a different dark energy field, but likewise no ordinary matter).

No amount of talk can conceal that we are not saying what space is made of. We are trying to understand a process. Spacetime is a process by which one state of geometry evolves into another state of geometry. We want a propagator that gives transition amplitudes.

There may, under high magnification, be no wires, no cogwheels, no vibrating gadgets; there may only be a process that controls how geometry evolves. It feels to me a little like slamming into a brick wall, philosophically. What if that is all there is?
 
  • #32


apeiron said:
... I am expecting the exact opposite to be the case from my background in systems science approaches. ...

In a systems approach, local degrees of freedom would be created by global constraints.

I don't want to add too many explanations; I simply would like to stress the example of chiral perturbation theory: there is no way to derive QCD (with quarks and gluons as fundamental degrees of freedom) from chiral perturbation theory (the effective low-energy theory of pions). You need additional physical insight and new principles (here: color gauge theory, physical effects like deep inelastic scattering) to conclude that QCD is the right way to go. It's more than doing calculations.

I think with QG we may be in the same situation. We know GR as an IR effective theory and we are now searching for more fundamental entities. Unfortunately, quantizing GR directly without additional physical ingredients is not the right way to go. New principles or entities (strings? loops? spin networks? holography?) that are not present in GR are required.

Fra is looking for something even more radical.
 
  • #33


fleem said:
It is a grave mistake to think of quantum mechanical events as occurring "in space-time". Rather, it is those events that define space-time, and there is no space-time where there are no events.

It is consistent with my understanding. As I showed in one of my publications, the "classical" phenomena are the inclusive QM pictures (with many, many events summed up).
 
  • #34


Marcus posts:
With Loll's method it doesn't matter very much what shape of blocks they use, only that they use some uniform set of blocks and let the size go to zero in the limit. They have papers using other block shapes.

Glad you mentioned that; I meant to ask and forgot, and was unaware of other shapes... that's a good indicator. Another thing I forgot to post is that I believe somewhere the authors said their results were not very sensitive to parameter changes, and I LIKED that, if my recollection is correct. Fine-tuning in a situation like this just does not seem right unless we know the process by which nature does it.

Also, glad you posted:
It doesn't say that the fundamental degrees of freedom are nailed down to be a certain kind of Lego block. ...No amount of talk can conceal that we are not saying what space is made of. We are trying to understand a process.

that helps clarify my own understanding... I know from other forms of computer analysis and modeling that if you don't understand the inputs, the underlying logic of the processing, and the sensitivity of outputs to input changes, understanding the outputs is almost hopeless.
 
  • #35


marcus said:
With Loll's method it doesn't matter very much what shape of blocks they use, only that they use some uniform set of blocks and let the size go to zero in the limit. They have papers using other block shapes.

So they are not fantasizing that space is "made" of simplices, or that there is any "minimal length" present in fundamental geometric reality, whatever that is (there may be limits on what you can measure, but presumably that's a different matter).

The significance of the triangles would be, I presume, that they give an easy way to average over local curvatures. The angles of a triangle add to pi in flat space, less than pi in hyperbolic space, and more than pi in spherical (positively curved) space.

If the geometry is actually shrunk to a point, the curvature becomes "invisible" - it could be anything. The only clue would be what it looked like before the shrinking took place. So you might be able to use any block shape. But it is pi as a measure of curvature which is the essential bit of maths here?
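As a quick numerical check of that angle-sum idea (a toy sketch, not tied to any CDT code): on a unit sphere the amount by which a geodesic triangle's angles exceed pi equals the triangle's area (Girard's theorem), which is the kind of angle bookkeeping Regge-style triangulations use to encode curvature.

```python
# Girard's theorem check: on the unit sphere, (sum of interior angles) - pi
# equals the area of the geodesic triangle.  Vertices chosen in one octant so
# the triangle is small and unambiguous.
import numpy as np

def unit(v):
    return np.asarray(v, dtype=float) / np.linalg.norm(v)

def vertex_angle(a, b, c):
    """Interior angle at vertex a, using tangent directions toward b and c."""
    tb = b - np.dot(a, b) * a
    tc = c - np.dot(a, c) * a
    return np.arccos(np.dot(tb, tc) / (np.linalg.norm(tb) * np.linalg.norm(tc)))

def spherical_area(a, b, c):
    """Area (solid angle) of the triangle, Van Oosterom-Strackee formula."""
    num = abs(np.dot(a, np.cross(b, c)))
    den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2.0 * np.arctan2(num, den)

if __name__ == "__main__":
    a, b, c = unit([1, 0.2, 0.1]), unit([0.1, 1, 0.2]), unit([0.2, 0.1, 1])
    excess = (vertex_angle(a, b, c) + vertex_angle(b, c, a)
              + vertex_angle(c, a, b) - np.pi)
    print(f"angle sum - pi = {excess:.6f}")
    print(f"triangle area  = {spherical_area(a, b, c):.6f}")
```

The two printed numbers agree, which is the sense in which angle deficits or excesses of small triangles act as a local measure of curvature.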

And thus what they are "fantasising" perhaps is that spacetime is made of an average over curvatures. At least I hope so because that is the core idea I like.

Curvature is "good" as it is a physical intuition that embodies both dimension (length/direction) and energy (an action, acceleration, tension, distinction). So flatness is cold. Curved is hot. This is pretty much the language GR speaks, right?

Again, making the parallel with Ising models, you could say that at the GR scale, spacetime curvature is all smoothly connected, like a magnetic field. Every point has a curvature, and all the curvatures are aligned, to make a closed surface (with, we now discover, a slight positive curvature from the cosmological constant).

But as we descend to a QM scale, fluctuations break up the smooth closed curvature. Curvature becomes increasingly unoriented. It does not point towards the rest of the world but off out into some more random direction. Smooth and coherent dimensionality breaks up into a foam of "2D" dimensional impulses. Like heated dipoles spinning freely, having at best fleeting (fractal/chaotic) alignments with nearest neighbours.

Anyway, the CDT story is not really about shrinking triangles but about doing quantum averages over flexi GR curvatures?
 