Grav. + GUT (Gravity from a Particle Physicist's perspective)

In summary: I don't see anyone "throwing in the towel"; quite the opposite, people seem to be proceeding in a logical, systematic way. The fact that this may have been obscured by a lot of hype and hoopla over the last 30 years is not relevant. The only thing that matters is if progress is being made. The next few years will provide a test of the asymptotic safety program. Either it will provide a successful unification between GR and QFT or it won't. If it does, then that will open a whole new research area that will probably keep people busy for the next 100 years. If it doesn't, that will also be interesting, since it will narrow down the
  • #1
marcus
http://arxiv.org/abs/0910.5167
Gravity from a Particle Physicist's perspective
R. Percacci
Lectures given at the Fifth International School on Field Theory and Gravitation, Cuiaba, Brazil April 20-24 2009. To appear in Proceedings of Science
(Submitted on 27 Oct 2009)
"In these lectures I review the status of gravity from the point of view of the gauge principle and renormalization, the main tools in the toolbox of theoretical particle physics. In the first lecture I start from the old question "in what sense is gravity a gauge theory?" I will reformulate the theory of gravity in a general kinematical setting which highlights the presence of two Goldstone boson-like fields, and the occurrence of a gravitational Higgs phenomenon. The fact that in General Relativity the connection is a derived quantity appears to be a low energy consequence of this Higgs phenomenon. From here it is simple to see how to embed the group of local frame transformations and a Yang Mills group into a larger unifying group, and how the distinction between these groups, and the corresponding interactions, derives from the VEV of an order parameter. I will describe in some detail the fermionic sector of a realistic "GraviGUT" with [tex]SO(3,1)\times SO(10) \subset SO(3,11)[/tex]. In the second lecture I will discuss the possibility that the renormalization group flow of gravity has a fixed point with a finite number of attractive directions. This would make the theory well behaved in the ultraviolet, and predictive, in spite of being perturbatively nonrenormalizable. There is by now a significant amount of evidence that this may be the case. There are thus reasons to believe that quantum field theory may eventually prove sufficient to explain the mysteries of gravity."
 
  • #2
Percacci is the chief organizer of the conference on Asymptotic Safety being held at Perimeter Institute a little over a week from now.
http://www.perimeterinstitute.ca/en/Events/Asymptotic_Safety/Asymptotic_Safety_-_30_Years_Later/

He has an AsymSafe FAQ at his website, and wrote the chapter on AsymSafe QG that appeared in Oriti's book Approaches to Quantum Gravity: Towards a New Understanding of Space, Time, and Matter, published by Cambridge U. P.

Percacci appears to be at the focus of current efforts to unify gravity with particle physics without inventing extra dimensions or extra degrees of freedom---simply using quantized general relativity in four dimensions and quantum field theory (more or less standard QFT), again in four dimensions.

We have been following this with increased attention ever since 6 July when Steven Weinberg gave a talk at CERN announcing his current participation in this line of research. I will get links for Weinberg's talk, for Percacci's FAQ, and for the upcoming conference at Perimeter.

Here is the video of Steven Weinberg's 6 July CERN talk:
http://cdsweb.cern.ch/record/1188567/
To save time jump to minute 58, the last 12 minutes---that is where he starts talking about his current research focus on AsymSafe as a possible avenue to unification also with applications to cosmology---offering a natural explanation for inflation.
Here's a condensed version of the Perimeter conference program listing speakers and talks:
https://www.physicsforums.com/showthread.php?p=2407013#post2407013
Here is the AsymSafe FAQ:
http://www.percacci.it/roberto/physics/as/faq.html
This has a bibliography on AsymSafe with papers by Percacci and others, including his survey chapter in Oriti's book:
http://www.percacci.it/roberto/physics/as/
Here are the slides of a June 2009 talk on AsymSafe QG which Percacci gave at a school sponsored by Renate Loll's network:
http://th-www.if.uj.edu.pl/school/2009/lectures/percacci.pdf
Percacci is normally at the International School for Advanced Studies (SISSA) in Trieste. However he is currently on leave from there and spent all or part of the past academic year at Utrecht, and is spending the present semester at Perimeter Institute.
 
  • #3
Great post, marcus. It would seem the particle physicists are throwing in the towel. They can't unify the Standard Model and GR with their approach to physics so they're starting to convince themselves that no real merger is possible :-)
 
  • #4
"GRaviGUT"

RUTA, I can't react so quickly. I need time to appreciate just what is going on. I don't see Percacci or Weinberg throwing in the towel (slang for "giving up").
I think maybe it wasn't too bright of the other particle physicists to try for so long to establish gravity on a fixed rigid background. It was careless of them to think of gravity as a force and to imagine that formulating gravitons on flat space was all that's needed, as if they could treat the graviton like just another ordinary particle. Maybe that narrow-minded vision is now dying and that narrow program is being abandoned.

But the smart particle physicists like Percacci are not trapped in that narrow program. They see that gravity is dynamic geometry of a 4D continuum and that quantum gravity must be quantum dynamic geometry, again of 4D.

So the natural thing for particle physicists to do (the ones that "get it") is take a quantum version of Gen Rel and build QFT on that dynamic 4D geometry.

They are "not giving up", on the contrary they may be the winners of the game, because they have the know-how to re-build QFT on the new basis.

Percacci and Weinberg are not the only particle physicists who have boarded this train. There is also Daniel Litim. Various others. It is not just relativists now---new players have arrived. I think Arkady Tseytlin was formerly a string theorist--I guess he would count as a particle physicist. Benjamin Ward is a particle physicist. These are people invited to give papers at the Perimeter conference taking place in a few days. I have a feeling that if I checked MOST of them would turn out to be particle physicists. Yes a small number compared with the huge mass of particle theorists (string and other) but it is always the small active minority that starts change and does the real stuff.
 
  • #5
RUTA said:
Great post, marcus. It would seem the particle physicists are throwing in the towel. They can't unify the Standard Model and GR with their approach to physics so they're starting to convince themselves that no real merger is possible :-)

Asymptotic safety has long been a logical route to investigate from the "particle physics" or Wilsonian viewpoint. In fact, from the "condensed matter" viewpoint, 1976 was already late to the game conceptually, though not calculationally: to this day, (proof or disproof of) asymptotic safety is not on firm mathematical ground. If you read Polchinski's string text, you will find asymptotic safety respectfully mentioned. Wilsonian renormalization has roots in work done by particle physicists including Stueckelberg and Petermann, and Gell-Mann and Low. Wilson himself was a particle physicist, who did his most famous work on critical phenomena in condensed matter, an area in which Kadanoff and Fisher are key names too. The clarity of the Wilsonian framework was so powerful that Weinberg was led to suggest asymptotic safety when he was trying to teach himself what his "statistical brethren" had achieved: http://ccdb4fs.kek.jp/cgi-bin/img/allpdf?197610218

This Wilsonian framework is nowadays part of standard coursework. Take Kardar's lectures for example http://ocw.mit.edu/OcwWeb/Physics/8-334Spring-2008/LectureNotes/index.htm .

In L7, he writes "the RG procedure is sometimes referred to as a semi-group. The term applies to the action of RG on the space of configurations: each magnetization profile is mapped uniquely to one at larger scale, but the inverse process is non-unique as some short scale information is lost in the coarse graining. (There is in fact no problem with inverting the transformation in the space of the parameters of the Hamiltonian.)" The "semi-group" comment is a reference to emergence; the parenthetical comment is a reference to possible asymptotic safety.

In L12, non-perturbative fixed points are considered, and it is noted that by luck, perturbative theory is sufficient to calculate in the particular physical case being discussed: "The uniqueness of the critical exponents observed so far for each universality class, and their proximity to the values calculated from the epsilon–expansion, suggests that postulating such non-perturbative fixed points is unnecessary."
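A minimal numerical sketch of those two remarks, using the textbook 1D Ising decimation (my own illustration, not taken from Kardar's notes):

[code]
import numpy as np

# 1D Ising decimation (zero field): summing out every other spin maps the
# nearest-neighbour coupling K to K' = 0.5 * ln(cosh(2K)).
def decimate(K):
    return 0.5 * np.log(np.cosh(2.0 * K))

def undecimate(Kp):
    # The map on COUPLINGS is invertible (the parenthetical remark above).
    return 0.5 * np.arccosh(np.exp(2.0 * Kp))

K = 1.0
for step in range(6):
    Kp = decimate(K)
    print(f"step {step}: K = {K:.4f} -> K' = {Kp:.4f}"
          f" (inverted back: {undecimate(Kp):.4f})")
    K = Kp
# K flows toward 0, the trivial fixed point: no finite-T transition in 1D.

# The map on spin CONFIGURATIONS is not invertible: keeping every other
# spin discards the decimated ones, so 2**8 fine configurations end up in
# the same coarse one.  That is the "semi-group" remark.
spins = np.random.default_rng(1).choice([-1, 1], size=16)
print("fine:  ", spins)
print("coarse:", spins[::2])
[/code]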
 
  • #6
RUTA said:
Great post, marcus. It would seem the particle physicists are throwing in the towel. They can't unify the Standard Model and GR with their approach to physics so they're starting to convince themselves that no real merger is possible :-)

Thanks for the encouragement, RUTA! I don't know how familiar you are with the history of the A.S. idea. Several of the usual papers go over it: by Reuter, by Percacci, and recently by Weinberg looking back.

You might find it entertaining to see what our discussion of it at Physicsforums was like back in 2007. Then people were less aware and it was more difficult to get attention focused on it. I started a thread called What if Weinberg had succeeded in 1979?

https://www.physicsforums.com/showthread.php?t=180119

The point is he thought of A.S. in 1976, and gave lectures on it---e.g. at Erice. And he tried to make it work but didn't have the math tools to tackle the 4D case. He wrote about it in 1979, in a chapter of a book edited by Hawking, celebrating the Einstein centennial.
Then he gave up on it.

Martin Reuter revived the approach in 1998, using some new mathematical techniques. But Weinberg still stayed on the sidelines, until according to him, he saw a 2006 paper by Percacci, which convinced him it had enough chance of being right to be worth pursuing.

I tried to imagine how things might have gone if Weinberg had gotten the result he wanted in 1979 (and the field didn't have to wait 20 years for Reuter to revive it). It's a bit of an odd way to approach it. But back in 2007 it was harder to get a conversation started about Asymptotic Safety.

In any case the subject has had quite an interesting history :biggrin:

I'm interested in how you imagine a "real merger". To me Asymptotic Safe QG does seem to offer the possibility of a real merger, within the context of a quantum field theory. But I may be wrong, or missing something important. I'd like to hear your take on it. (I'm still trying to assimilate this most recent paper of Percacci's---it may take me a while.)
=========================

In case anyone else is reading: newcomers may be confused by what Percacci says in the abstract "in spite of being perturbatively nonrenormalizable." This should not be taken to mean that the theory is nonrenormalizable in general---only when the wrong methods are used. The moral is, don't use perturbative techniques on gravity---they won't work---but other methods will.
Asymptotic safety has been described (by Percacci, Reuter and others) as nonperturbative renormalizability. An A.S. theory becomes predictive to arbitrarily high energy once a finite number of parameters have been determined by experiment--which is the practical consequence of renormalizability, whatever the context and the methodology being applied.
 
  • #8
Marcus - can I ask for a simple explanation of how asymptotic safety works? And I did read the FAQ!

Is it simply that Reuter found within flexi-GR 4D geometry a circumstance in which 4D breaks down into fractal 2D approaching Planck scale - and this would then reduce the available directions for quantum gravity self-action at this scale?

If so, what was the reason for this crumbling into 2D?

With CDT, though this was uncertain, it seemed to me that it must be something coming from the quantum side of the model so to speak. At smallest scale, direction becomes a confused issue and so you only have 2D actions (a vector against the backdrop of a foamy context rather than a vector going in one direction, and so also quite definitely not in the other two). Bit like the grin left behind by the disappearing Cheshire Cat.

The same idea would seem to fit Reuter's approach. Or have I got completely the wrong end of the stick here?
 
  • #9
But you have to admit, AS is nowhere near as ambitious as the unification of gravity with GUTs to get SUTs. I don't see any reason for taking gravity out of the mix given their paradigm of particles and forces. It's a fallback position.
 
  • #10
RUTA said:
But you have to admit, AS is nowhere near as ambitious as the unification of gravity with GUTs to get SUTs. I don't see any reason for taking gravity out of the mix given their paradigm of particles and forces. It's a fallback position.
How do you know Nature's ambition? What matters is that we, as a community, have different groups pursuing all the different logical possibilities (that we are aware of).
 
  • #11
apeiron said:
Marcus - can I ask for a simple explanation of how asymptotic safety works? And I did read the FAQ!
...

I'm glad you read the FAQ! I think Percacci did a great job with it. Hope you agree. It has 40 Q&A items, several with links to more technical explanation. Even though he is careful to keep the answers at a basic simple level, there is still a lot to think about. I have not read the whole FAQ myself---some parts I've read, some I've only skimmed.

When you ask "how it works" and then try out some mental imagery of geometry at extremely small scale, then what I believe you are asking is how should we picture the microstructure of geometric relationships?
How should we imagine the microstructure so that it would behave as A.S. says it should behave?

For example, an issue you raise is how to picture microstructure so that it would have spontaneous dimensional reduction.

Steve Carlip has a recent paper discussing how spon. dim. red. arises in several different types of QG. He weaves these separate occurrences of it in separate theories into one picture and then he describes a heuristic classical GR reason for it. Not to stress this too much, but if you are curious about how Carlip addresses this question, and haven't seen his paper, here it is: http://arxiv.org/abs/0909.3329

I liked your mental imagery suggesting how spon.dim.red. might happen. I actually felt some physical intuition, as a kind of electricity, in those verbal images. I also liked Steve Carlip's suggestive classical GR analysis, which is surprising. I don't think I have anything better to offer---but I could try (and have tried in the past) to come up with some explanatory visions of microgeometry.

==================

I think what Percacci's FAQ is saying---just to focus attention on that---is that to understand fundamental microgeometry you have to give up the idea of it being metric. To the extent that it is describable or representable by a metric, you must be prepared to have the metric be energy dependent----to run with scale (down near the Planck level).

You may remember down around questions #32 or 33 in the FAQ where he talks about this.

To me this seems related to the nonmetric approach to QG that Kirill Krasnov has set in motion. Having many metrics, but no one particular metric, and having the metric able to run, to depend on scale----perhaps even making the basic item something else besides a metric---a differential form subject perhaps not to the original Einstein equation but to a variant of the Plebanski action. Krasnov says that one of his motivations is to enable spinfoam QG to come to terms with renormalization.
 
  • #12
humanino said:
How do you know Nature's ambition? What matters is that we, as a community, have different groups pursuing all the different logical possibilities (that we are aware of).

Sorry, I fail to see the relevance of your post to mine.
 
  • #13
RUTA said:
Sorry, I fail to see the relevance of your post to mine.
Sure, sorry.
 
  • #14
apeiron said:
...4D breaks down into fractal 2D approaching Planckscale - and this would then reduce the available directions for quantum gravity self-action at this scale?...

Thanks Marcus, but is this bit correct? I really am struggling with the jargon in the FAQ.

The other thing that interested me is that Percacci seems to have both Newton's constant and the cosmological constant running to hit a fixed point. So two parameters that must intersect.

Does the cosmo constant actually run - have QM self-interactions?

There does seem a logic in a connection between the two constants as g sort of represents a spatial parameter - spatial curvature - and the cosmo constant a time-like parameter, expansion or growth of space.
 
  • #15
apeiron said:
...flexi-GR 4D geometry a circumstance in which 4D breaks down into fractal 2D approaching Planckscale ...

I think that's right. When I have needed to paraphrase it I've said much the same thing as you did.
I would advise reading Steve Carlip's recent paper to get a classical GR version of how that might happen, and a comparison of how spontaneous dimensional reduction happens in the various QG models (Reuter, Loop, Loll etc.). His classical GR discussion of it is the most graphic----although it has to be merely heuristic since classical GR would not really apply at that scale.

Here is a half-baked analogy to think about (with a grain of salt). Take a 2D sheet of paper.

Crumple it into a ball. As you crumple, it gradually turns into a 3D object.

If you had 4D hands and lived in 4D space, you could continue to crumple it and it would gradually become a 4D ball, but we don't live in 4 spatial dimensions, so let's not think about that.

Let's think of the 3D ball, the wad of crumpled paper. Let's do an X-ray CAT scan. Let's do tomography. Let's examine the internal structure by imaging.

If our imager is low-resolution---if our CAT scan is blurry---then we will look inside and determine that it is 3D, just as it looks on the outside.

But now let's zoom in. Let's gradually increase the resolution. After a while we can begin to see that this 3D ball is really a foam made of 2D surface.

At a macro scale of a centimeter, the mass of material within a given radius of a point varies as the CUBE of the radius---the density behaves like an ordinary 3D density.

But at less than a millimeter scale, the mass of material within a given radius typically varies as the SQUARE of the radius---or as some power between square and cube, because of occasionally including the paper of some nearby wall when walls are close together.

We can measure dimensionality by seeing how volume relates to radius. So dimensionality in this wad of paper can be empirically determined, and it depends on scale.

So since this happens even with ordinary crumple paper, it shouldn't be surprising if it happens with the geometry of space. Empirically measured dimensionality must depend on scale and probably depends fairly continuously---getting larger with larger scale and smaller with smaller scale.

Yes this is a dumb simple example---not really how it works etc etc. But you can read Carlip for more sophisticated discussion.
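To make the analogy a little more concrete, here is a toy numerical version (my own construction, not from Carlip's paper): stand in for the crumpled ball with a stack of 2D sheets at a fixed spacing, pick a point, and read off the effective dimension from how the enclosed "mass" scales with radius.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Toy "crumpled paper": parallel 2D sheets stacked with spacing d.  The
# material lives on 2D surfaces, but a ball whose radius is much larger
# than d encloses many sheets and looks 3D.
d = 0.04                    # spacing between sheets
n_sheets = 15               # sheets at z = -7d, ..., +7d
pts_per_sheet = 200_000     # points stand in for the mass on each sheet

sheets = []
for k in range(-(n_sheets // 2), n_sheets // 2 + 1):
    xy = rng.uniform(0.0, 1.0, size=(pts_per_sheet, 2))
    z = np.full((pts_per_sheet, 1), k * d)
    sheets.append(np.hstack([xy, z]))
points = np.vstack(sheets)

center = np.array([0.5, 0.5, 0.0])      # probe point on the middle sheet
dist = np.linalg.norm(points - center, axis=1)

radii = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])
mass = np.array([(dist < r).sum() for r in radii])

# Effective dimension = local slope of log(mass) versus log(radius).
for i in range(1, len(radii)):
    slope = ((np.log(mass[i]) - np.log(mass[i - 1]))
             / (np.log(radii[i]) - np.log(radii[i - 1])))
    print(f"around r = {radii[i]:.2f}: effective dimension ~ {slope:.2f}")
# Expect ~2 while the ball contains a single sheet (r << d), a bumpy
# crossover as new sheets enter discretely, and values settling near 3
# once the ball contains many sheets (r >> d).
[/code]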
 
  • #16
apeiron said:
...

The other thing that interested me is that Percacci seems to have both Newton's constant and the cosmological constant running to hit a fixed point. So two parameters that must intersect.

Does the cosmo constant actually run - have QM self-interactions?

There does seem a logic in a connection between the two constants as g sort of represents a spatial parameter - spatial curvature - and the cosmo constant a time-like parameter, expansion or growth of space.

Apeiron, this line of questioning is gold. I really like this post. Don't have time to fully respond.

However if you read Percacci carefully (or any of Reuter's papers) you see explicitly stated that only dimensionless constants run. G is a physical quantity, not a number. So what they have to study is what Percacci calls G-tilde, the dimensionless version of Newton's G. Remember k is the cutoff, an energy. We will take k to infinity.

[tex]\tilde G = G k^2[/tex]. Here both k and G are varying; I should write G(k) instead of plain G, to show this.

This [tex]\tilde G[/tex] is what goes to a UV limit. And likewise [tex]\tilde\Lambda = \Lambda / k^2[/tex].

k, being an energy, is the reciprocal of a length. But the cosmological constant Lambda is the reciprocal of an area. So dividing Lambda by k^2 gets you a pure number.
Percacci tells you in the paper what limit [tex]\tilde\Lambda[/tex] converges to as k goes to infinity.

And dimensionally speaking, Newton's G is a length divided by an energy, and k^2 is an energy divided by a length. So multiplying G by k^2 again gets you a pure number. And Percacci tells you what number that converges to.
These are absolute universal numbers which do not depend on the system of units.

Now he also talks about the physical quantity G(k), the value of Newton's G at various scales k.
Before, I should have written [tex]\tilde G = G(k) k^2[/tex],

but you understood what I meant, because both k and G(k) are changing. It is the dimensionless pure number [tex]\tilde G[/tex] that goes to a limit as k goes to infinity.

The low-energy behavior we know: G(k) is a constant physical quantity over a long, long range of small and moderate energies k---Newton told us this already---so [tex]\tilde G[/tex] must be increasing like k^2. But then when k gets up near the Planck scale the behavior changes and [tex]\tilde G[/tex] starts to converge to a finite number. That means that G(k) has to decrease!

So the G(k) relevant to the big bang, or big bounce as some people model it, would be a much smaller physical quantity than what we are used to. G(here and now) >> G(bang).
To me it is not clear that the comparison is even meaningful because conditions were so different. So I don't put much weight on that comparison.

However, by the same flimsy uncertain reasoning, Lambda(k) would be constant over a long, long range of k, but then as k gets "Plancky" and Lambda(k)/k^2 is starting to converge, it must be true that Lambda(k) gets very, very big! This would only happen when k is very near the Planck scale. It offers a possible explanation of a brief episode of inflation.
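To see the same point in numbers, here is a toy flow (the beta function and the coefficient below are schematic stand-ins, not Percacci's or Reuter's actual equations): the dimensionless coupling runs to a UV fixed point, and the dimensionful G(k) is then forced to fall off like 1/k^2.

[code]
import numpy as np

# Schematic flow for the dimensionless Newton coupling gt = G(k) * k^2:
#     d gt / d ln k = 2*gt - c*gt**2
# The 2*gt piece is pure dimensional analysis (constant G means gt grows
# like k^2); the -c*gt**2 piece stands in for quantum corrections, and the
# value c = 4 is made up.  Fixed points: gt = 0 and gt* = 2/c = 0.5.
c = 4.0

t = np.linspace(0.0, 20.0, 2001)     # t = ln(k / k0)
dt = t[1] - t[0]
gt = np.empty_like(t)
gt[0] = 1e-8                         # tiny dimensionless coupling at k0

for i in range(len(t) - 1):          # simple Euler integration of the flow
    gt[i + 1] = gt[i] + dt * (2.0 * gt[i] - c * gt[i] ** 2)

k = np.exp(t)                        # in units of k0
G = gt / k**2                        # dimensionful Newton "constant"

for i in (0, 500, 1000, 1500, 2000):
    print(f"ln k = {t[i]:5.1f}   gt = {gt[i]:.3e}   G = {G[i]:.3e}")
# At low k, gt grows like k^2 while G stays constant (Newton's regime);
# once gt saturates at the fixed point gt* = 0.5, G falls off like 1/k^2,
# which is the qualitative behavior described above.
[/code]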
 
  • #17
I guess I am finally understanding a bit why this is called nonperturbative renormalization. Suppose I could magically write down infinitely many counterterms to deal with the divergences. Although there were infinitely many coupling terms, they would all magically conspire to slide to a stable value within a finite-dimensional surface, made of observable eigenvectors.

I will give a humble opinion of mine. I have issues with calling this non-perturbative because perturbative methods are used in all aspects of this idea. So, I'd rather call this either:

*Collective Renormalization
*Dynamical Self-Renormalization
*Collective Dynamical Self-Renormalization
*Orbit Renormalization
*Attractor Renormalization
*BEC Renormalization (referring to the emergence of collective structures in low temperature materials).
*BES Renormalization (Bose Einstein Surface, to correct a misleading idea of the above item)

But never non-perturbative renormalization. This is quite a confusing and misleading name... at least for me :eek:
 
  • #18
question on motivation

I apologize for this ignorant question; I have not so far looked too deeply into these programs, due to rejecting some of the starting points, but I definitely see some large potential in trying to make more sense out of the renormalization ideas, which I can connect to at a deeper level.

The "space of actions" that we are talking about, does in my view sort of correspond to the space of observer (or inference systems). When you change the observational scale, that certainly means the actual observing context changes.

So the deeper idea here is merely a special case of the general idea of connecting the laws of physics (as say encoded in action functionals) between two observers. This would, IMHO, suggest the renormalization scheme itself (including the ERGE and the space of actions) is part of the real physics and not just a mathematical tool, because a bit more abstractly one can imagine that the "renormalization" is automatically done by nature all the time. In other words, the renormalization rules become on par with the normal physical laws, and thus there is an "action" also in the "action space", one that conceptually one would EXPECT (at least I do) to be unified when this is fully understood.

So I seek the inside view of this, and then it seems a key is certainly how to CONSTRAIN the mathematically infinite, fantasized space of actions to a more "physical inside view" of DISTINGUISHABLE possible actions?

As it seems Reuter has done something like this; he somehow truncates the picture here. But my question here is if anyone can point me to where this is motivated. I.e. does he do this simply because it's the only way to make real computations (which is certainly a rational reason), or does he motivate this more deeply, in the sense that this "computability" is actually rooted in the constraints of nature itself, in particular the complexity of observers?

If one would be able to go this route, I see plenty of possibilities, including complete TOE-style unification also of matter.

I'm sorry if this is a stupid question to the AS experts, but I never really went into depth in this. So I wonder if there are some more promising ideas (like the one I seek) hidden somewhere in the current research, but that aren't obvious from the basic premises and introductions to these research programs?

/Fredrik
 
  • #19


Fra said:
If one would be able to go this route, I see plenty of possibilities, including complete TOE-style unification also of matter.

In particular, I would expect even a connection to evolving law, where a physical view of the renormalization flow could relate to the flow of evolution of law, and also, by connecting the constraining context to observers/matter, to evolution and emergence of matter. So matter and law emerge together, in the sense that the more "non-trivial" matter systems emerge to play the role of inside observers, the larger the distinguishable "space of actions" becomes?

Then the truncation could be given a physical motivation, as constraints coming from the context of being encoded in emergent matter?

Then the stable actions would similarly correspond to stable matter, since the stable actions are then "preferred images" implicit in the observing system?

Anyone making similar associations to AS topic?

/Fredrik
 
  • #20
Truncation is just an approximate correction to the full perturbed action. In the case of AS it just shows that higher orders of the truncated action correction do not add anything qualitatively, once one gets enough terms to find the safe surface. The lowest orders suffice, thus the name "non-perturbative renormalization".

There is nothing that is straightforwardly deep in this method, in the way you imagine. ERGE and the flow are indeed physical in this case, even more so, in certain ways, than in the case of Yang-Mills theory, because it is not just the physicists trying to dig something out of diagrams. It is the couplings of the theory dynamically cooperating and organizing somehow among themselves to find a stable point on a surface, and all this ends up causing the renormalization of the theory.
 
  • #21
MTd2 said:
Truncation is just an approximate correction to the full perturbed action.
...
There is nothing that is straightforwardly deep in this method, in the way you imagine.

Thanks MTd, this was my previous impression too of these things, and the reason I never tried to dig into it that much. But I was starting to wonder whether this was unfair, and whether some of the advocates of this see something I don't.

MTd2 said:
In the case of AS it just shows that higher orders of the truncated action correction do not add anything qualitatively, once one gets enough terms to find the safe surface. The lowest orders suffice, thus the name "non-perturbative renormalization".

I guess one might ask whether this is just a coincidence, or whether it's suggesting that perhaps there IS a deeper (but maybe not yet realized) motivation that suggests that the mathematically infinite space of actions contains a huge physical redundancy, just like you can have similar objections to the observability or measurability of a continuum relative to a bounded observing system in the first place.

Well it was just a thought, trying to look positively upon this. It's always easier to find things you don't like :)

/Fredrik
 
  • #22
MTd2 said:
... thus the name "non perturbative renormalization".

I think the basic message here (which I agree with) is that we need to look at the purpose of renormalization, and what it accomplishes.

This will give us the ability to generalize the concept of renormalization, so that it is not anchored to some specific computational technique, but can apply where other numerical methods are used to accomplish the same general purpose.

Generalizing concepts is part of how physics evolves, and it is happening here...so let's have a look.

The purpose of renormalization is to get a predictive theory---which predicts up to arbitrarily high energies once a finite number of parameters have been determined experimentally.

Renormalization is applied to theories which are not predictive in their original form---which blow up and stop giving meaningful answers beyond a certain energy scale.

It is pretty clear that what Weinberg proposed in Erice in 1976, and what people like Reuter and Percacci eventually began to carry out, is a new and interesting kind of renormalization.

(There was another earlier case of asymsafe renormalization with some other theory, not gravity, but we don't need to get into the history.)

We still have a problem with the adjective "non-perturbative". It is not very descriptive, but it has come into use as a designation for asymsafe renormalization. Some adjective seems needed (at least for the time being) to distinguish this new type of renormalization from the conventional older type---which in fact did involve perturbative math techniques. But it is not up to us to advise the experts what adjectives to use.

That's just a semantic issue, so let's forget about it. MTd2 also makes an interesting substantive physical point in his post.

...ERGE and the flow are indeed physical in this case, even more so, in certain ways, than in the case of Yang-Mills theory, because it is not just the physicists trying to dig something out of diagrams. It is the couplings of the theory dynamically cooperating and organizing somehow among themselves to find a stable point on a surface, and all this ends up causing the renormalization of the theory.

We probably don't understand why renormalization works so well in certain cases. The renormalization group flow seems to be a real thing in nature. Nature seems to conspire to make it work. Things really do seem to depend on the energy or length scale at which you measure.
In optics, where there are wavelengths to provide a distance scale, this dependence is familiar to us and understandable. We can mentally picture how images depend on the scale of optical resolution.
But other kinds of energy-dependence can seem mysterious. Why should coupling constants run?

MTd2 in post #20 simply observed that in the case under discussion the running of constants seems to be a physical fact. It's worth pointing out---although I can't explain or elucidate.
 
  • #23
Fra:

"I guess one might ask wether this is just a conincidence, or wether it's suggesting that perhaps there IS a deeper (but maybe not yet realized) motivation that suggests that the mathematically infinite space of actions, contains a huge physical redundancy, just like you can have smilar objections to the the "

No coincidence here. The idea is that there IS indeed a HUGE physical redundancy, which restricts the infinite parameter space of each constant to a finite surface. On this surface there is a point, with finite parameters, to which the trajectories on the surface all converge, called the fixed point. This is something CRAZY, NUTS! TOO GOOD TO BE TRUE! :eek: I mean, an entirely new physics concept is not just invented to fit an experiment; actually, a new and unexpected physical concept, or at least a model, was found!
 
  • #24
> This is something CRAZY, NUTS! TOO GOOD TO BE TRUE!

So would you agree that the deep understanding isn't really in place yet? Or am I missing something?

I think the basic implications here are possibly very deep. But the way of inference I've seen motivating this is not so deep?

For me, these ideas unavoidably lead into connections to evolution, and there might be an evolutionary interpretation of this self-organisation in action space, where some actions are more fit than others. It seems to have the possibility to merge very well also with a reconstruction of probability theory that I seek, from an information-theoretic angle, since the objection to the non-physical redundancy of the continuum is the same there as here. The degrees of freedom are relative, and in the proper perspective the redundancy simply isn't seen.

I mean, somehow, the physical redundancy in some mathematics is both obvious and yet apparently ambiguous.

The thing I would love to see is a connection between the ideas of evolving law and the physical meaning of renormalization, and in particular, at some level there must be a connection between some kind of cosmological time and the renormalization flow too. So these renormalization flows are about as "real" as you think "time" is.

I'll try to keep reading some of the reviews on this.

/Fredrik
 
  • #25
Fra said:
I mean, somehow, the physical redundancy in some mathematics is both obvious and yet apparently ambiguous.

I associate this to the same problem as with dual view of symmetry. On one hand symmetry expresses a form of redundancy, but on the other hand it's this same redundancy that gives predictive power to the cases when the symmetry is broken.

So it's not quite as simple as a "pure mathematical" redundancy either, since then it would not be physically predictive. It's somehow a sign that the notion of possibility, or distinguishable degrees of freedom, is fundamentally observer dependent, and that the fixed points rather correspond to equilibrium states where the observer is in consistency with its environment, and that an observer that sees the action space shrink is effectively losing mass and eventually disappears. So each observer either approaches a stable configuration or destabilises.

This is, IMO, an interesting way to implement evolving law. I guess I am hoping that Smolin will make this connection in his upcoming book. It would be another way of making the idea of evolving law predictive, one that is more information-theoretic than the CNS idea. (Although there is even a connection to that via black holes - BH horizons - Rindler horizons - general observers' horizons - general observers; then Smolin's new-universes picture can be translated into new preferred pictures, encoded in new observers.)

But then the renormalization flow itself is observer dependent. So there is again a self-reference. The observation/inference of redundancy is what might give the observing system predictive power/advantage over its environment, letting it maintain its own mass and perhaps even grow.

/Fredrik
 
  • #26
Hello PF folk.

If you believe the Dirac equation in curved spacetime, and you believe Spin(10) grand unification, then a Spin(3,11) GraviGUT, acting on one generation of fermions as a 64 spinor, seems... inevitable.

Also, it's pretty.
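
A quick dimension count behind the 64 (just textbook bookkeeping, not taken from the slides):

[code]
# One SM generation, including a right-handed neutrino, fills the
# 16-dimensional spinor representation of Spin(10); a Dirac spinor of the
# Lorentz group Spin(3,1) has 4 components; and a Dirac spinor of
# Spin(p,q) has 2**((p+q)//2) components, so Spin(3,11) gives 2**7 = 128,
# i.e. 64 per chiral half.
lorentz_dirac = 4
so10_spinor = 16
print("SO(3,1) x SO(10) fermion components per generation:",
      lorentz_dirac * so10_spinor)

dirac_3_11 = 2 ** ((3 + 11) // 2)
print("Spin(3,11) Dirac spinor:", dirac_3_11,
      " -> chiral half:", dirac_3_11 // 2)
[/code]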

And it's up to you whether or not to take seriously the observation that this whole structure fits in E8. Personally, I take it seriously. Slides are up for a talk I gave at Yale:

http://www.liegroups.org/zuckerman/slides.html


Garrett
 
  • #28
RUTA said:
Great post, marcus. It would seem the particle physicists are throwing in the towel. They can't unify the Standard Model and GR with their approach to physics so they're starting to convince themselves that no real merger is possible :-)

It was string theorists who threw in the towel. They gave up on describing gravity with QFT simply because perturbation theory didn't work. "Particle physicists" haven't given up on unification or QFT.

http://arxiv.org/abs/0712.3545
 
  • #29
marcus said:
But other kinds of energy-dependence can seem mysterious. Why should coupling constants run?

I guess this was a rhetorical question, but here goes an idea just for illustration.

I know that the usual introduction of renormalization is as a trick to cure nonsensical calculations. But to see a greater vision, the way I can see this - and it has nothing to do with perturbation theory in the ordinary sense - is that if you see the actions as basically a representation of "observed" or inferred law, then renormalization can be thought of in a more general sense as "translating" the inferrable law to another inference system (observer).

And the IMO most obvious connection to energy scale here is the complexity of the inference system. Clearly a more complex inference system (read higher energy scale) can do a more detailed resolution.

It's like picturing two brains, one rat brain and one human brain. Let them make their best inference having access to the same environment, and their conclusions would probably differ. The task of an outside scientist could then be to try to "renormalize" the inference system of the human to that of a rat, i.e. what happens to the inferrability when you "scale" a human brain down to a rat brain?

But even that scaling itself, is constrained by this third observer.

Similarly, what is LEFT out of say the full standard model + GR when you try to "scale it" down to say a Planck size observer?

And maybe more important, what does the reverse look like? Then one can not just average things out, it would have to be some kind of evolutionary search?

marcus said:
Why should coupling constants run?

Why should inferrable physical laws (encoded in say actions, lagrangians or whatever) change when the complexity of the inference system changes?

If you see it this way, I think the answer is intuitively obvious.

/Fredrik
 
  • #30
MTd2 said:
I guess I am finally understanding a bit why this is called nonperturbative renormalization. Suppose I could magically write down infinitely many counterterms to deal with the divergences. Although there were infinitely many coupling terms, they would all magically conspire to slide to a stable value within a finite-dimensional surface, made of observable eigenvectors.

I will give a humble opinion of mine. I have issues with calling this non-perturbative because perturbative methods are used in all aspects of this idea. So, I'd rather call this either:

*Collective Renormalization
*Dynamical Self-Renormalization
*Collective Dynamical Self-Renormalization
*Orbit Renormalization
*Attractor Renormalization
*BEC Renormalization (referring to the emergence of collective structures in low temperature materials).
*BES Renormalization (Bose Einstein Surface, to correct a misleading idea of the above item)

But never non-perturbative renormalization. This is quite a confusing and misleading name... at least for me :eek:

It is non-perturbative! If something is perturbative it means one takes a known solution and expands around that solution using some small parameter. Unless this is what is done, it is non-perturbative.
(Maybe you have a different definition? But as far as I understand, if there is no expansion in some small parameter it's not perturbation theory.)

I think you are looking at renormalisation from a perturbative view (for example you say "full perturbed action"). While it is true that we understand QFT best through an expansion in Feynman diagrams, this does not mean that we cannot do things without reference to small parameters.

Now obviously in non-perturbative methods one has to use some kind of approximation; so a truncation is used in the ERG approach. But this gives an approximation of a different nature to that of a perturbative expansion. In gravity it appears more important to understand the theory non-perturbatively, simply because using perturbation methods doesn't tell us whether the theory is well defined in the UV.
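
A standard toy example of the limitation (the textbook exp(-1/g) illustration; nothing specific to gravity): a perfectly finite effect can have a perturbative expansion that is identically zero, so no truncation of the small-coupling series ever sees it.

[code]
import numpy as np

# f(g) = exp(-1/g) for g > 0 is small but nonzero, yet every derivative of
# f at g = 0 vanishes, so its expansion around g = 0 is zero to ALL orders.
for g in (0.05, 0.1, 0.2, 0.5):
    exact = np.exp(-1.0 / g)
    series = 0.0   # every Taylor coefficient at g = 0 is exactly zero
    print(f"g = {g:4.2f}   exact f(g) = {exact:.3e}   series f(g) = {series:.3e}")
[/code]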
 
  • #31
garrett said:
And it's up to you whether or not to take seriously the observation that this whole structure fits in E8. Personally, I take it seriously. Slides are up for a talk I gave at Yale:

http://www.liegroups.org/zuckerman/slides.html


Garrett

Would you mind clarifying some of the ways to find 3 generations? There are weird things there, and those are completely alien to me! :eek:

On Page 30:

What do you mean by "E8 appears to come with a nice Axion model building kit"? Can you explain this?

I really don't understand any of this:
"E9. Possible relation to QFT.
Leech lattice. Three E8's as inner shell"

Alright, the E8 lattice is an interesting object in 8 dimensions. It offers the solution to the densest sphere packing in that dimension. Or, if you transform the elements of the reciprocity vector and map each of them into 2-spheres in 4 dimensions, linking each of them according to the Dynkin diagram prescription (that is, a 1D object in 8D into 2D objects in 4D), you get an everywhere non-differentiable manifold. Probably a fractal in 4D.

But I cannot see what it has to do with finding 3 generations for the Standard Model. At best, I could see you arguing for the emergence of a dimensional transition in the shape of a fractal, because in the case of the transition of EQG from 4D to 2D, or in the embarrassingly vague idea above, you have the theme of a non-integer Hausdorff dimension.

And what's up with the Leech lattice and those inner shells? What are those inner shells?
 
  • #32
Fra said:
And the IMO most obvious connection to energy scale here is the complexity of the inference system. Clearly a more complex inference system (read higher energy scale) can do a more detailed resolution. /Fredrik

But is this not a moderately standard approach? At least this seems generally the case to me as well.

So in say the language of QM decoherence, the world as a whole is the information, an inference system at some kind of general equilibrium (ie: classical), and then QM scale is what gets resolved. So as effectively the scale of classical coherence is run down (either by cutting the distances or turning up the temperatures), the ability to resolve anything crisply runs.

I mean, it takes a certain weight of information to constrain things, and as you shrink the scale, that weight also shrinks to the point where it starts to fail to do the job. There is an exponential approach to a failure of resolving power.
 
  • #33
apeiron said:
Fra said:
And the IMO most obvious connection to energy scale here is the complexity of the inference system. Clearly a more complex inference system (read higher energy scale) can do a more detailed resolution.
But is this not a moderately standard approach? At least this seems generally the case to me as well.

Yes, the common part is the connection between observational resolution and energy, of course.

There is also of course the holographic principle and the holographic bounds, but that principle is still not yet properly understood as far as I know. I expect this to eventually be better understood.

But what is not standard is exactly how this energy connects to the observing system, and to the complexity in particular and how that can be quantified into constraining information capacity. The notion of inference system is certainly not standard.

This is exactly the problem with QM. One considers "measurements" but without considering what measurements are possible, and how the measurement process implies an interaction that acts not ONLY on the observed system but ALSO produces a REaction back onto the observer, which forces the measurement machinery to evolve and run, since the observer is changed. This is the missing link to an intrinsic measurement theory as I see it.

But note that this can still be described in two ways. The decoherence view is to view the observer + observed system from the outside, and then just apply the same QM. This is not solving the problem; in fact it doesn't respect the information bound of the new observer, which gets larger and larger until you have some external bird's-eye view.

It's when you insist on the internal view that the evolution of law seems like the natural way; it is what you in fact SEE.

The reason why I think in terms of inference systems is that they can be reconstructed from discrete information structures where evidence is simply properly counted from the inside. The total distinguishable event counts then correspond (in some way) to inertia and mass.

This also connects inertial complexity (defined as a measure of resistance to change) and gravitational complexity (defined as a measure of how intensely it competes for degrees of freedom with other complex systems). The "resistance to change" means that if two systems are interacting, the more complex one will generally cause a larger change in the smaller system than vice versa, and in terms of spatial distance as a measure of difference in information, there will be a mutual effect of shrinking the space in between them (i.e. making them attract), because their interaction brings them into general agreement since they exchange information.

/Fredrik
 
  • #34
Hmm, I see another reason why Garrett is so interested in this article. Basically, this paper shows a 4th way to correct the radiative divergences of the Standard Model. The other three are supersymmetry, little Higgs and little strings. Percacci's one uses the global symmetry of the little Higgs, which basically predicts extra particles for every generation of the SM.

One of the things Jacques Distler showed Garrett is the appearance of spurious particles in the extra generation. So, I guess Garrett sees that these extra particles are actually sinks to the fixed point for every generation.
 
  • #35
MTd2, it's encouraging to see you pursue the possibility of a connection with E8 theory, which Garrett hinted at earlier. All we can do is keep our eyes open and persist in asking questions.

Today we got some help in seeing the overall picture (the "GraviGUT" idea of putting QFT on an asymsafe basis) from a new Percacci posting. I will excerpt the conclusions:
==quote today's Percacci paper, conclusions==

Another direction for research is the inclusion of other matter fields. As discussed in the introduction, if asymptotic safety is indeed the answer to the UV issues of quantum field theory, then it will not be enough to establish asymptotic safety of gravity: one will have to establish asymptotic safety for a theory including gravity as well as all the fields that occur in the standard model, and perhaps even other ones that have not yet been discovered. Ideally one would like to have a unified theory of all interactions including gravity, perhaps a GraviGUT along the lines of [45]. More humbly one could start by studying the effect of gravity on the interactions of the standard model or GUTs.

Fortunately, for some important parts of the standard model it is already known that an UV Gaussian FP exists, so the question is whether the coupling to gravity, or some other mechanism, can cure the bad behavior of QED and of the Higgs sector. That this might happen had been speculated long ago [33]; see also [46] for some detailed calculations.

It seems that the existence of a GMFP for all matter interactions would be the simplest solution to this issue. In this picture of asymptotic safety, gravity would be the only effective interaction at sufficiently high scale. The possibility of asymptotic safety in a nonlinearly realized scalar sector has been discussed in [47]. Aside from scalar tensor theories, the effect of gravity has been studied in [48] for gauge couplings and [49] for Yukawa couplings.

==endquote==

http://arxiv.org/abs/0911.0386
Renormalization Group Flow in Scalar-Tensor Theories. I
Gaurav Narain, Roberto Percacci
18 pages, 10 figures
 
