The main weekly seminar at the UC Berkeley physics department's Center for Theoretical Physics is the string seminar (or one of a couple, anyway); I think Petr Hořava is the main person in charge, and his office is right by the seminar room. The whole setup is pretty nice, and the building has been remodeled.
Today they had a visitor from UC Davis, Steve Carlip, come and give a talk. He drew on results from Loll's triangulation QG (CDT), loop QG, Reuter's asymptotic-safety QG, and other 4D approaches: results about a spontaneous reduction of dimensionality from 4D down to around 2D at very small scales.
There were about 16 of us in the audience, most looking like postdocs or advanced grad students. Hořava, who is himself still fairly young, appeared to be the most authoritative person in the audience and asked quite a lot of questions.
Hořava is pronounced "Ho-zha-va" because of the háček over the letter ř.
Carlip's talk was similar to the one he gave at the Planck Scale conference in Wrocław around the start of July, where it was the opening talk. The video is online. It's a good talk about a very interesting possibility for what could happen at small scales in nature.
http://www.ift.uni.wroc.pl/~rdurka/planckscale/index-video.php?plik=http://panoramix.ift.uni.wroc.pl/~planckscale/video/Day1/1-1.flv&tytul=1.1%20Carlip
It might not happen, of course, but it could. The possibility is pointed to by these different 4D QG approaches all coming to unexpectedly similar conclusions from different directions, by different reasoning. You imagine a 4D block universe generated by one of these approaches, think of a diffusion process or random walk starting at some point, and see how the probability of accidentally returning to the start after a path of length s depends on s. The probability falls off roughly as 1/s^{d/2}.
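To put a formula on it (this is just the standard definition of the spectral dimension, not something specific to the talk):

```latex
% Return probability of a random walk after diffusion "time" s,
% and the spectral dimension d_s extracted from its falloff:
P(s) \sim s^{-d_s/2},
\qquad
d_s = -2\,\frac{\mathrm{d}\ln P(s)}{\mathrm{d}\ln s}.
```

Ordinary flat 4D space gives d_s = 4, and the quantum-gravity results amount to d_s running down toward 2 as s goes to zero.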
The bigger the dimensionality d, the faster you get lost, and the less likely you are to accidentally find your way back home. So if you are simulating quantum universes in a computer, you can "empirically" measure the dimensionality by having the computer run a random walk in the quantum spacetime you generated.
Well, that is how Loll's Utrecht group did it.
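Here is a toy version of that measurement, as a sketch of my own rather than the Utrecht code: it diffuses on a fixed flat 4D lattice instead of an ensemble of CDT triangulations, and all the names in it are mine.

```python
import numpy as np

# Toy spectral-dimension measurement: random walks on a flat 4D hypercubic
# lattice, where the answer should come out near d_s = 4. In CDT the same
# walk would instead hop between adjacent simplices of sampled triangulations.

def return_probability(dim, max_steps, n_walks, rng):
    """P(s): fraction of walks sitting at the origin after s steps."""
    pos = np.zeros((n_walks, dim), dtype=np.int64)
    P = np.zeros(max_steps)
    for s in range(max_steps):
        axes = rng.integers(dim, size=n_walks)               # random axis per walk
        signs = rng.choice(np.array([-1, 1]), size=n_walks)  # random direction
        pos[np.arange(n_walks), axes] += signs               # take one step
        P[s] = np.mean(~pos.any(axis=1))                     # back at the origin?
    return P

rng = np.random.default_rng(0)
max_steps = 200
P = return_probability(dim=4, max_steps=max_steps, n_walks=200_000, rng=rng)

# Returns only happen on even step counts; fit log P(s) = const - (d_s/2) log s.
s = np.arange(1, max_steps + 1)
mask = (s % 2 == 0) & (P > 0) & (s > 20)  # skip the short-time lattice transient
slope = np.polyfit(np.log(s[mask]), np.log(P[mask]), 1)[0]
print(f"estimated spectral dimension: {-2 * slope:.2f}")  # roughly 4
```

On the quantum geometries those groups generate, the same fit at short diffusion times is what comes out near 2.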
Interestingly enough, Carlip seems to like the CDT approach, and he has a PhD student named Rajesh Kommu duplicating Loll's results with their own UC Davis code.
It is significant that they didn't just take the Utrecht CDT code: they built their own CDT Monte Carlo from the ground up. That is a way of being sure that the computer "experiment" of generating these small quantum universes is independently reproducible.
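For a sense of what building such a code involves at its core, here is the generic Metropolis accept/reject step that geometry Monte Carlos are built around. This is a schematic toy of my own, with a one-number "geometry" standing in for a real triangulation: actual CDT uses local retriangulation moves and the Regge action, but the accept/reject logic has this shape.

```python
import math
import random

# Schematic Metropolis core of a geometry Monte Carlo (a toy, not the Davis
# or Utrecht code). The "geometry" is just a simplex count N with toy action
# S = k * N; a proposed move is accepted with probability min(1, exp(-dS)).

def metropolis_volume_run(n_steps, k=0.1, seed=0):
    rng = random.Random(seed)
    N = 100                        # starting "spacetime volume"
    history = []
    for _ in range(n_steps):
        dN = rng.choice((-1, 1))   # propose: add or remove one simplex
        dS = k * dN                # change in the toy action S = k * N
        if dS <= 0 or rng.random() < math.exp(-dS):
            N = max(1, N + dN)     # accept the move (keep the volume positive)
        history.append(N)
    return history

vols = metropolis_volume_run(50_000)
print(f"mean sampled volume: {sum(vols) / len(vols):.1f}")
```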
CDT is also being run by Joe Henson at Perimeter, using a large computer there, but I don't know how that is going. Maybe somebody knows?