From time to timescape – Einstein’s unfinished revolution?

In summary, Garth proposes that the variance in clock rates throughout the universe is due to gradients in gravitational energy, which have been misattributed to Dark Energy. However, Wiltshire argues that this is only a partial explanation: differences in regional densities must also be taken into account in order to synchronize clocks and calibrate inertial frames. He claims that this means the age of the universe varies depending on the observer, and he supports his argument with three separate tests that his model universe passes. If his theory is correct, the currently accepted model of the universe, the Lambda-CDM model, would be misleading.
  • #71
marcus said:
Before that, many treatments of this did not take account of the possibility of a positive cosmological constant, or dark energy.

I'd make a stronger statement and say that the standard model of cosmology in 1997 asserted that the cosmological constant was zero. If you were to ask a cosmologist in 1997, the standard statement was that the cosmological constant was "Einstein's biggest mistake." Standard models do change, and the fact that you can have most people within a year say to themselves "well, it looks like we were wrong" shows that physicists are less closed-minded than they are given credit for.

What was generally realized in 1998 was that spatial closure does not necessarily imply destined to crunch.

This was known in the 1930's, but since the standard model circa-1997 assumed that the cosmological constant was zero, this was considered irrelevant. Then you get hit by data that pretty clearly says that if you say that the constant is zero then you have to assume something even weirder to explain the data.

Also, one reason people assume dark energy is that it makes the math easier than the alternatives. You take all of your previous equations, just add one term, and you run with it.
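For concreteness, the "one term" is just the cosmological constant added to the Friedmann equation; in standard textbook notation (nothing here is specific to the papers under discussion),

$$H^2 \equiv \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3}.$$

Set Lambda = 0 and you have the circa-1997 standard model; keep the term and the very same equation, with the same matter content, admits late-time accelerating solutions.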

This also explains one aspect of the Wiltshire papers, which is that they are trying to hit a moving target. One problem with trying to explain things with GR inhomogeneities is that the math is really, really painful, so by the time you explain weird result one, you get a dozen new weird results. One thing that Wiltshire is trying to do is to come up with a mathematically simple way of thinking about GR, so that you can rapidly put in new physics.
 
  • #72
For more of David Wiltshire's point of view, here is an email he sent me on Dec 31, 2009:

Edwin,

Thanks for letting me know that the paper had "sparked some discussion"; I
was not aware of this til your email, and just did a google search... Eeek,
those forums again. It is a bit amusing to see this only picked up now, as
this essay has been publicly available at the FQXi competition website
http://fqxi.org/community/essay/winners/2008.1#Wiltshire for over a year
- and the longer Physical Review D paper on which the essay was based
[arxiv:0809.1183 = PR D78 (2008) 084032] came before that. I was just
tidying things up at the end of the year and - prompted by receiving
the proofs of the essay - put a few old things (this essay and some
conference articles 0912.5234, 0912.5236) on the arxiv.

Contributors to forums like PF tend to get such a lot of things wrong (since,
as they admit, they are not experts on the subjects under discussion), and
I don't have time to comment on all the wrong things - but it is reassuring
to see that a couple of people have realized that 0912.4563 is only
"hand-wavy" because it is an essay, and the real work is in the various
other papers like 0809.1183 and 0909.0749 which very often go unnoticed
at places like PF.

So just a few comments, which will make this missive long enough...

The understanding of quasilocal energy and conservation laws is an unsolved
problem in GR, which Einstein himself and many a mathematical relativist
since has struggled with. I never said Einstein was wrong; there are simply
bits of his theory which have never been fully understood. If "new" physics
means a gravitational action beyond the Einstein-Hilbert one then there is
no "new" physics here, but since not everything in GR has been settled there
are new things to be found in it. Every expert in relativity knows that, and
the area of quasilocal energy is a playground for mathematical relativists
of the variety who only publish in mathematically oriented journals, and
never touch data. Such mathematical relativists are often surprised by my
work as they never imagine that these issues could be of more than arcane
mathematical relevance. What I am doing is trying to put this on a more
physical footing - with, I claim, important consequences for cosmology - with
a physical proposal about how the equivalence principle can be extended to
attack the averaging problem in cosmology in a way consistent with the
general philosophy of Mach's principle. In doing so, it reduces the
space of possible solutions to the Einstein equations, as models
with global anisotropies (various Bianchi models) or closed timelike loops
(Goedel's universe) are excluded, while keeping physically relevant ones
(anything asymptotically flat, like a black hole) and still extending
the possible cosmological backgrounds to inhomogeneous models of a class
much larger than the smooth homogeneous isotropic Friedmann-Lemaitre-
Robertson-Walker (FLRW) class. The observational evidence is that
the present universe has strong inhomogeneities on scales less than 200Mpc.

A number of other people (Buchert, Carfora, Zalaletdinov, Rasanen, Coley,
Ellis, Mattsson, etc) have (in some cases for well over a decade) looked at
the averaging problem - most recently with a view to understanding the
expansion history for which we invoke dark energy. But given an initial
spectrum of perturbations consistent with the evidence of the CMB these
approaches, which only consider a change to the average evolution as
inhomogeneity grows, cannot realistically match observation in a statistical
sense. The clock effect idea is my own "crazy" contribution, which the
others in the averaging community in mathematical relativity have not yet
subscribed to. But with this idea I can begin to make testable predictions
(which the others cannot to the same degree), which are in broad quantitative
agreement with current observations, and which can be distinguished from
"dark energy" in a smooth universe by future tests. My recent paper
0909.0749, published in PRD this month, describes several tests and
compares data where possible. The essay, which summarises the earlier
PR D78 (2008) 084032 is an attempt to describe in non-technical
language why this "crazy idea" is physically natural.
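(To give the flavour of the averaging programme mentioned above - roughly, and quoted from the standard references rather than verbatim, so see Buchert's papers for the exact statements - for irrotational dust in the comoving synchronous gauge the volume-averaged scale factor $a_{\mathcal D}$ of a spatial domain $\mathcal D$ obeys

$$3\,\frac{\ddot a_{\mathcal D}}{a_{\mathcal D}} = -4\pi G \langle\rho\rangle_{\mathcal D} + \mathcal{Q}_{\mathcal D}, \qquad \mathcal{Q}_{\mathcal D} = \tfrac{2}{3}\left(\langle\theta^2\rangle_{\mathcal D} - \langle\theta\rangle_{\mathcal D}^2\right) - 2\langle\sigma^2\rangle_{\mathcal D},$$

so the variance of the local expansion rate $\theta$ and the shear $\sigma$ enter as a "backreaction" source $\mathcal{Q}_{\mathcal D}$ with no counterpart in an exactly homogeneous FLRW model.)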

One important point I tackle which has not been much touched by my
colleagues in the community (a couple of papers of Buchert and
Carfora excepted) is that as soon as there is inhomogeneity
we must go beyond simply looking at the changes to average evolution,
because when there is significant variance in geometry not every observer
is the same observer. Structure formation gives a natural division of
scales below the scale of homogeneity. Observers only exist in regions
which were greater than critical density; i.e., dense enough to overcome
the expansion of the universe and form structure. Supernovae "near a void"
will not have different properties to other supernovae (apart from the
small differences due to the different metallicities etc between rich clusters of galaxies and void galaxies) because all supernovae are in
galaxies and all galaxies are greater than critical density.

Of course, my project is just at the beginning and much remains to be
done to be able to quantitatively perform several of the tests that
observational cosmologists are currently starting to attempt, especially
those that relate to the growth of structure (e.g., weak lensing, redshift
space distortions, integrated Sachs-Wolfe effect).

It is true that numerical simulations are an important goal. The problem
with this is not so much the computer power but the development of an
appropriate mathematical framework in numerical relativity. Because of
the dynamical nature of spacetime, one has to be extremely careful in
choosing how to split spacetime to treat it as an evolution problem.
There are lots of issues to do with gauge ambiguities and control of
singularities. The two-black hole problem was only solved in 2005
(by Pretorius) after many decades of work by many people.

In numerical cosmology at present general relativity is not really used.
(One sometimes sees statements that some test such as the one Rachel
Bean looked at in 0909.3853 is evidence against "general relativity" when
all that is really being tested is a Newtonianly perturbed
Friedmann-Lemaitre universe.) The only sense in which GR enters numerical
simulations in cosmology at present is that the expansion rate of a LCDM
Friedmann-Lemaitre universe is put in by hand, and structure formation is
treated by Newtonian gravity on top of the base expansion. This explains
some but not all the features of the observed universe (e.g., voids do
not tend to be as "empty" as the observed ones). Anyway, the base expansion
rate is kept artificially uniform in a constant time slice, and the expansion
and matter sources are not directly coupled as they are in Einstein's theory.

The full GR problem is just very difficult. But a former postdoc of
Pretorius has told me that he has begun looking into it in his spare time
when not doing colliding black holes. To make the problem tractable is so
difficult that I do not know yet that anyone has got funding to do the
numerical problem as a day job.

To make progress with the numerical problem one has to really make a
very good guess at what slicing to choose for the evolution equations.
The right guess, physically informed, can simplify the problem. My proposal
suggests that a slicing which preserves a uniform quasilocal Hubble flow
[proper length (cube root of volume) with respect to proper time] of
isotropic observers is the way to go. This would be a "CMC gauge"
(constant mean extrinsic curvature) which happens to be the one favoured
by many mathematical relativists studying existence and uniqueness in
the PDEs of GR. At a perturbative level near a FLRW geometry, such a
slicing - in terms of a uniform Hubble flow condition [as in one of
the classic gauges of Bardeen (1980)] supplemented by a minimal shift
distortion condition [as separately investigated by York in the 1970s]
has also been arrived at by Bicak, Katz and Lynden-Bell (2007) as one of the
slicings that can be used to best understand Mach's principle. I mentioned
this sort of stuff in my first serious paper on this, in the New J Physics
special focus issue on dark energy in 2007, gr-qc/0702082 or
http://www.iop.org/EJ/abstract/1367-2630/8/12/E07 [New J. Phys. 9 (2007) 377]
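(In case the jargon is opaque: a CMC slicing is one on which the trace of the extrinsic curvature, $K = g^{ij} K_{ij}$, is constant on each spatial slice and varies only from slice to slice. For an exactly FLRW geometry $K = -3H$ up to sign convention, which is why "constant mean curvature" and "uniform Hubble flow" pick out essentially the same foliation in the near-FLRW regime.)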

To begin to do things like this numerically one must first recast the
averaging problem in the appropriate formalism. Buchert's formalism
uses a comoving constant time slicing because people did not think that
clock effects could be important in the averaging problem as we are
talking about "weak fields". [This is why I am claiming a "new" effect,
when one has the lifetime of the universe to play with an extremely small
relative regional volume deceleration (typically one angstrom per second2)
can nonetheless have a significant cumulative effect. As physicists
we are most used to thinking about special relativity and boosts; but
this is not a boost - it is a collective degree of freedom of the regional
background; something you can only get by averaging on cosmological scales
in general relativity.] So anyway, while Buchert's formalism - with
my physical reinterpretation which requires coarse-graining the dust
at the scale of statistical homogeneity (200 Mpc) - has been adequate for
describing gross features of the average geometry (and relevant
quantitative tests), to do all the fine detail one wants to revisit
the mathematics of the averaging scheme. This would be a precursor to
the numerical investigations.
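(A rough back-of-the-envelope, not a figure from the papers, to show why such a tiny deceleration is not negligible: sustained over a Hubble time it accumulates a relative velocity of order

$$\Delta v \sim a\,t \sim \left(10^{-10}\ \mathrm{m\,s^{-2}}\right)\times\left(4\times 10^{17}\ \mathrm{s}\right) \approx 4\times 10^{7}\ \mathrm{m\,s^{-1}} \approx 0.1\,c,$$

and a relative velocity of a tenth of the speed of light corresponds to a cumulative clock-rate difference at the percent level or larger - tiny at any instant, but no longer ignorable once integrated over the age of the universe.)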

These are not easy tasks, as one is redoing everything from first
principles. At least I now have a postdoc to help.

At one level, it does not matter whether my proposal as it stands is
right or wrong. Physics involves asking the right hard questions in
the first place; and that is something I am trying to do. For
decades we have been adding epicycles to the gravitational action,
while keeping the geometry simple because we know how to deal with
simple geometries. I have played those games myself for most of my career;
but none of those games was ever physically compelling. Once one is
"an expert in GR" one appreciates that real physics involves trying
to deeply understand the nature of space and time, symmetries and
conservation laws, rather than invoking new forces or particles
just for the hell of it. GR as a whole - beyond the simple arenas of
black holes and exact solutions in cosmology - is conceptually difficult
and not completely understood. But I am convinced that to begin to
resolve the complexity one needs to think carefully about the
conceptual foundations, to address the not-completely-resolved issues
such as Mach's principle which stand at its foundations. The phenomenon
of "dark energy" is, I think, an important clue. Whether I am right or
wrong the hard foundational problems - along with observational puzzles
which in a number of cases do not quite fit LCDM - are the issues
that have to be faced.

Happy New Year and best wishes,

David W

PS You are welcome to post and share this with your PF friends, but I don't
have time to get involved in long discussions or trying to explain all the
various technical things (such as CMC slicings etc). Writing this missive
has taken long enough!
 
  • #73
Wiltshire writes:

The understanding of quasilocal energy and conservation laws is an unsolved
problem in GR, which Einstein himself and many a mathematical relativist
since has struggled with.

Yup. The first reaction of anyone that tries to do GR is to simplify the problem and to find some sort of symmetry in which you can impose a quasilocal energy and conservation law. The trouble with doing this is that you end up with nice simple models that don't have any connection to messy reality.

My recent paper 0909.0749, published in PRD this month, describes several tests and
compares data where possible. The essay, which summarises the earlier
PR D78 (2008) 084032 is an attempt to describe in non-technical
language why this "crazy idea" is physically natural.

One of the reasons that I had a somewhat negative reaction to the non-technical essay was that, since we really don't understand Einstein's equations, I'm not too convinced by arguments about physical naturalness. Since we don't understand enough about what happens once we work through the full equations, it's not clear that what seems natural is mathematically correct. Once I saw someone actually work through the equations, the idea became a lot less crazy.

Ironically, it's one of the consequences of the fact that we don't quite understand the full implications of GR that we don't really understand the role that Mach's principle plays in it. Now, once someone goes from the equations to a new Mach's principle, that tends to convince me, but it wasn't obvious until I read the technical paper that this was what Wiltshire was doing.

One important point I tackle which has not been much touched by my
colleagues in the community (a couple of papers of Buchert and
Carfora excepted) is that as soon as there is inhomogeneity
we must go beyond simply looking at the changes to average evolution,
because when there is significant variance in geometry not every observer
is the same observer.

But that opens up a question. People that run numerical simulations want to use Newtonian gravity whenever possible. This may be incorrect, but how incorrect is it? What are the conditions under which a relatively simple approximation will give you answers that are sort of correct, and under what conditions will the answers be totally wrong?

Supernovae "near a void" will not have different properties to other than supernovae
(apart from the small differences due to the different metallicities etc between rich clusters of
galaxies and void galaxies) because all supernovae are in galaxies and all galaxies are greater
than critical density.

Not so sure. If I understand the model, then part of the illusion of acceleration has to do with the fact that you have different density evolutions, in which case light that goes through a void will behave differently than light that goes through dense regions because the clocks are running at different speeds. If that's happening, a supernova that gets observed through an empty region will have events happen at different speeds than a supernova that gets observed through a dense region.

One nice thing about Type Ia supernovae is that not only are they standard candles, but they might be usable as standard watches. The fall-off in the light curve gives you an idea of how quickly something is happening, so if clocks are running at different speeds that's something you might be able to see in the light curves (maybe).
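To illustrate what a "standard watch" test would look like, here is a toy sketch. This is entirely my own illustration, not code from any of the papers: the light-curve template, the redshift, and the 5% clock-rate offset are made-up placeholders. The point is just that a light curve emitted where clocks run slower arrives stretched in observer time, on top of the usual (1+z) factor, and the stretch shows up in something like the width of the curve above half maximum.

```python
import numpy as np

def template_flux(t_rest):
    """Toy SN Ia light-curve shape in rest-frame days (a made-up stand-in for a
    real template such as SALT2): a Gaussian rise and peak plus a slow
    exponential decline after maximum."""
    gauss = np.exp(-0.5 * ((t_rest - 18.0) / 6.0) ** 2)
    tail = 0.3 * np.exp(-(t_rest - 18.0) / 40.0) * (t_rest > 18.0)
    return gauss + tail

def observed_flux(t_obs, z, clock_ratio=1.0):
    """Flux versus observer-frame time for a supernova at redshift z.
    clock_ratio = 1 is standard FLRW time dilation; clock_ratio > 1 adds an
    extra stretch, standing in for emitter clocks that run slow relative to
    the observer's."""
    stretch = (1.0 + z) * clock_ratio
    return template_flux(t_obs / stretch)

t = np.linspace(0.0, 150.0, 1000)                  # observer-frame days
standard = observed_flux(t, z=0.5)                 # pure (1+z) dilation
offset = observed_flux(t, z=0.5, clock_ratio=1.05) # extra 5% clock-rate offset

def width_above_half_max(t, f):
    """Crude 'standard watch' statistic: time spent above half maximum."""
    above = t[f > 0.5 * f.max()]
    return above.max() - above.min()

print("width, standard clocks :", round(width_above_half_max(t, standard), 1), "days")
print("width, 5% clock offset :", round(width_above_half_max(t, offset), 1), "days")
```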

It is true that numerical simulations are an important goal. The problem
with this is not so much the computer power but the development of an
appropriate mathematical framework in numerical relativity.

One thing that is sort of interesting is that there seems to be a correspondence between this problem and the quantum gravity problem. Once you put a set of equations on a computer, you are trying to quantize gravity. One thing that I don't think is obvious to the loop gravity people is how nicely a lot of their formalizations would work on a computer.

The only sense in which GR enters numerical
simulations in cosmology at present is that the expansion rate of a LCDM
Friedmann-Lemaitre universe is put in by hand, and structure formation is
treated by Newtonian gravity on top of the base expansion. This explains
some but not all the features of the observed universe (e.g., voids do
not tend to be as "empty" as the observed ones).

Yup. The trouble here is that there are about five or six different things that could cause this, and it may have nothing to do with GR at all. This is one problem with coming up with a computer simulation. If you have a computer simulation that is totally different from a Newtonian model, and you get different results, it's hard to know *why* you got different results or if you just have a bug.
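For anyone who hasn't seen how these simulations are put together, here is a stripped-down toy of the "expansion put in by hand plus Newtonian gravity on top" recipe Wiltshire describes. This is my own illustration, not any production N-body code: the particle count, masses, region size, softening, and time step are arbitrary placeholders, and there is no periodic box or mean-density compensation.

```python
import numpy as np

# --- Background expansion: put in "by hand" from the flat LCDM Friedmann equation ---
H0 = 70.0 / 3.086e19      # Hubble constant in 1/s (70 km/s/Mpc, illustrative)
Om, OL = 0.3, 0.7         # matter and cosmological-constant density parameters

def hubble(a):
    """H(a) for flat LCDM -- this is the only place GR enters the simulation."""
    return H0 * np.sqrt(Om / a**3 + OL)

# --- Structure formation: Newtonian gravity on top of the base expansion ---
G = 6.674e-11                                 # SI units
rng = np.random.default_rng(0)
n = 64                                        # toy particle count
x = rng.uniform(0.0, 1e22, size=(n, 3))       # comoving positions (m), ~0.3 Mpc region
v = np.zeros((n, 3))                          # comoving (peculiar) velocities dx/dt
m = np.full(n, 1e40)                          # particle masses (kg), arbitrary
eps = 1e20                                    # force softening length (m)

def peculiar_accel(x, a):
    """Direct-sum Newtonian peculiar acceleration in comoving coordinates:
    x_i'' + 2 H x_i' = -(G / a^3) * sum_j m_j (x_i - x_j) / |x_i - x_j|^3,
    softened, and ignoring the mean-density term a real code would include."""
    acc = np.zeros_like(x)
    for i in range(len(x)):
        d = x[i] - x
        r3 = (np.sum(d * d, axis=1) + eps**2) ** 1.5
        acc[i] = -(G / a**3) * np.sum(m[:, None] * d / r3[:, None], axis=0)
    return acc

# Crude explicit time stepping from a = 0.02 to a = 1.
a, dt = 0.02, 3.0e14                          # start scale factor, time step (s)
while a < 1.0:
    H = hubble(a)
    v += (peculiar_accel(x, a) - 2.0 * H * v) * dt   # Newtonian gravity + Hubble drag
    x += v * dt
    a += a * H * dt                                   # expansion evolved by hand
```

The expansion and the matter are never coupled through the Einstein equations here, which is exactly the limitation being pointed out above.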
 
  • #74
twofish-quant said:
...The 1998 supernova observations were important because supernova have nothing to do with cosmology, and the observations don't require any sort of cosmological assumptions to process...

This comment of yours is the key to resolving confusion, twofish-quant. Your posts are really illuminating.

The observation of a non-linearity in the Hubble plot when using supernovae as standard candles does have a bearing on cosmological models, though, because the shape of the Hubble plot at higher redshifts is a prediction of the FLRW cosmological model based on a highly symmetric cosmic fluid. What Wiltshire has done is to present an explanation for the observations that doesn't rely on this oversimplified model. His GR-based treatment is more realistic in that it takes into account (for the first time?) the actual observed lumpiness of the universe.
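To put a rough number on the size of that non-linearity (standard textbook formulae with my own illustrative parameter choices, nothing Wiltshire-specific): a flat LCDM model predicts supernovae a few tenths of a magnitude fainter at z ~ 0.5-1 than a matter-only Einstein-de Sitter model, which is essentially the signal the 1998 surveys picked up.

```python
import numpy as np
from scipy.integrate import quad

c_km_s, H0 = 299792.458, 70.0        # speed of light (km/s), H0 (km/s/Mpc, illustrative)

def dist_modulus(z, Om, OL):
    """Distance modulus mu = 5 log10(D_L / 10 pc) for a spatially flat FLRW model
    (both models compared below are flat, so no curvature term is needed)."""
    E = lambda zp: np.sqrt(Om * (1.0 + zp)**3 + OL)
    Dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)      # dimensionless comoving distance
    DL = (1.0 + z) * (c_km_s / H0) * Dc               # luminosity distance in Mpc
    return 5.0 * np.log10(DL * 1.0e6 / 10.0)          # Mpc -> pc, then modulus

for z in (0.1, 0.5, 1.0):
    lcdm = dist_modulus(z, Om=0.3, OL=0.7)            # flat LCDM
    eds = dist_modulus(z, Om=1.0, OL=0.0)             # matter-only Einstein-de Sitter
    print(f"z = {z}: mu(LCDM) - mu(EdS) = {lcdm - eds:+.2f} mag")
```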

It now seems to me that it is misleading to mention dark energy in this context at all (despite Wiltshire's adherence to convention in this respect). If Wiltshire is correct (and I hope he is), dark energy --- a mysterious geometry-flattening fluid --- was for a while just mistakenly invoked to explain the SN Ia non-linearity.

Perhaps for the same reason that US Presidents throw their weight around!
 
  • #75
apeiron said:
As usual, genius is in fact 90% perspiration. And cranks are the people who don't learn from their critics.

From my experience, David is one to learn from his critics; he's also pretty self-critical. And yeah, he works his *** off.

twofish-quant said:
Part of the problem with the paper was that it wasn't clear whether or not he was arguing for new physics or not, and after the third time I read through it carefully, I came to the conclusion that he *was* arguing for a non-standard theory of gravity.

That's the big problem with the paper: it wasn't clear whether it was arguing for more careful calculations or that gravity acts in a different way. After looking at the metric and thinking about it, my conclusion was that Wiltshire is arguing that general relativity is incorrect and that you need a mass dependent metric based on a new equivalence principle.

Um. Interesting. He isn't arguing for a non-standard theory of gravity or that general relativity is incorrect. I think he would be quite surprised to hear that that was the interpretation.

Personally, I'm rather interested in pursuing this idea; I plan to start a PhD with him next year. (Hence dragging up a 2 year old thread here.) There is certainly a vast quantity more work to be done, and at this point only Wiltshire himself, Teppo, and a couple of PhD students are directly working on this afaik.

Was going to reply to a couple more points but I see he has written a response to the thread, so nvm.
 
