Dark energy a furphy, says new paper

In summary, Wiltshire has made some interesting claims about the age of the universe, stating that it depends on where you're standing and can vary significantly within our observable universe due to time dilation. He has published several papers arguing that by abandoning the Cosmological Principle and assuming a non-uniform distribution of matter, we can do away with the need for dark energy. However, this approach has drawbacks, such as giving too much freedom and making it harder to fit models to data. Additionally, some upcoming quantum gravity theories require a small positive cosmological constant, which challenges Wiltshire's argument. Wiltshire's latest paper, published in Physical Review Letters, discusses the concept of clock variation and its implications in a universe with a non-uniform distribution of matter.
  • #36
jonmtkisco said:
In his most recent papers, Wiltshire claims that there are 4 free parameters, of which 2 are so constrained by CMB priors and a tracking solution that they are insignificant. That leaves only 2 significant free parameters, the "bare" Hubble constant and the present void fraction (by volume). He notes that the present void fraction should eventually be estimable from observation. He comments:

"In my view caution should always be exercised, but this includes caution with the conceptual basis of our theory and the operational interpretation of measurements. To those who are uncomfortable with my proposal about cosmological quasilocal gravitational energy let me ask the following: Without reference to an asymptotically flat static reference scale, which does not exist given the universe is expanding, and without reference to a background which evolves by the Friedmann equation at some level, an assumption which is manifestly violated by the observed inhomogeneities, What keeps clocks synchronized in cosmic evolution? Please explain."

One might equally ask, what makes the clock rates different? Wiltshire makes a lot of broad statements about 'quasilocal energy', 'finite infinity' and such to justify a large difference between wall and void clock rates.

However, the crucial point is that he has still not demonstrated how to calculate what this difference would be given a level of inhomogeneity in any example universe. He has only fitted this value to data. This is not good enough. Again, I don't mean to be critical of Wiltshire; he may well provide this in time, and you can't do everything at once.

The key parameter is [tex]\gamma[/tex], the 'lapse' function (I erroneously called this the shift in a previous post). Wiltshire has this at around 1.5, which is ridiculously high. This means clocks in walls run 1.5 times faster than in voids (or is it the other way around?). You'd have to be traveling close to the speed of light or be sitting very close to a black hole for General Relativity to predict such a lapse. It's a big ask for the very weak potentials present in the large scale structure of the Universe to be responsible for this.

This is why Wiltshire must show how this lapse can actually be calculated, and say precisely how it evolves with cosmic time (since it starts out at unity). There is nothing approaching this in any of his papers to date.
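
To put rough numbers on that objection, here is a back-of-the-envelope sketch (my own toy arithmetic, not anything from Wiltshire's papers): in the weak-field limit a clock at Newtonian potential [tex]\Phi[/tex] ticks at a rate of roughly [tex]\sqrt{1 + 2\Phi/c^2}[/tex], and the potentials of large-scale structure are of order [tex]10^{-5}c^2[/tex].

[code]
import numpy as np

# Back-of-the-envelope weak-field estimate (NOT Wiltshire's formalism):
# a clock at Newtonian potential Phi ticks at dtau/dt ~ sqrt(1 + 2*Phi/c^2),
# with Phi expressed in units of c^2.
phi_wall = -1e-5   # typical potential depth of large-scale structure
phi_void = +1e-5   # shallow voids sit slightly "uphill"

ratio = np.sqrt((1 + 2*phi_void) / (1 + 2*phi_wall))
print(f"void/wall clock-rate ratio (weak field): {ratio:.6f}")  # ~1.00002

# Potential depth you would need for a ratio of 1.5 in this naive picture:
# 1.5 = 1/sqrt(1 + 2*phi)  =>  phi = (1/1.5**2 - 1) / 2
phi_needed = (1/1.5**2 - 1) / 2
print(f"Phi/c^2 needed for a 1.5 ratio: {phi_needed:.3f}")  # ~ -0.28, black-hole territory
[/code]

A naive 1e-5 effect versus a claimed factor of 1.38-1.5: that is the gap Wiltshire needs to bridge with an explicit calculation.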
 
  • #37
Hi Wallace,

Wallace said:
The key parameter is [tex]\gamma[/tex], the 'lapse' function (I erroneously called this the shift in a previous post). Wiltshire has this at around 1.5, which is ridiculously high. This means clocks in walls run 1.5 times faster than in voids (or is it the other way around?). You'd have to be traveling close to the speed of light or be sitting very close to a black hole for General Relativity to predict such a lapse.

A key point of Wiltshire's model is that "apparent" acceleration is not a direct function of the value of the "lapse" parameter [tex] \overline{\gamma} [/tex]. Rather, the illusion of cosmic acceleration occurs only during a specific epoch while the void fraction [tex]f_{v}[/tex] is increasing at a high rate. He calculates that the illusion of acceleration began when [tex]f_{v} = 0.59 [/tex], at about 7 Gyr (z ≈ 0.9). He puts the present void fraction at about 0.76, having begun very close to zero and increased slowly at first. The illusion of acceleration will reach a maximum in the near future when [tex]f_{v} \simeq 0.77 [/tex], when [tex] \ddot {f_{v}} \rightarrow 0[/tex]. After that, [tex] \ddot {f_{v}} [/tex] will go negative, and the illusion will begin to fade away. Note that at no point do void observers measure any apparent acceleration; they observe a decelerating Einstein-de Sitter universe.
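
A toy version of that last point (my own sketch, not Wiltshire's actual solution): average an Einstein-de Sitter "wall" region (volume growing as t^2) with a Milne-like "void" region (volume growing as t^3) and compute the deceleration parameter of the volume-average scale factor. The weighting parameter eps below is hypothetical, chosen purely for illustration.

[code]
import numpy as np

# Toy two-region volume average (a sketch, NOT Wiltshire's full solution):
# wall volume ~ t^2 (Einstein-de Sitter, a^3 ~ t^2), void volume ~ t^3 (Milne).
eps = 0.05                       # hypothetical void weighting, illustration only
t = np.linspace(0.1, 20.0, 2000)
V   = t**2 + eps * t**3          # total volume ~ abar^3
Vp  = 2*t + 3*eps*t**2           # dV/dt
Vpp = 2   + 6*eps*t              # d2V/dt2

# For abar = V^(1/3) the deceleration parameter reduces to q = 2 - 3*V*Vpp/Vp^2
q  = 2 - 3 * V * Vpp / Vp**2
fv = eps * t**3 / V              # void volume fraction

print(f"f_v grows from {fv[0]:.3f} to {fv[-1]:.3f}")
print(f"q stays in [{q.min():.3f}, {q.max():.3f}]")  # always positive: no bare acceleration
[/code]

In volume-average time the toy always decelerates (q falls from 1/2 toward 0 as the voids take over); any apparent acceleration in Wiltshire's picture has to enter through the conversion to wall-clock time via [tex] \overline{\gamma} [/tex], which this sketch deliberately leaves out.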

Wiltshire calculates the lapse parameter [tex] \overline{\gamma} [/tex] at 1.38 now, not 1.5. Again, [tex] \overline{\gamma} [/tex] begins at 1 with almost zero rate of increase. It then grows monotonically to its current value. Its average over 10+ Gyr is about 1.1-1.2. At present, the time variation in the lapse function, [tex] \ddot {\overline{\gamma}} [/tex], is near zero.

The 1.5 figure is the upper bound on how high [tex] \overline{\gamma} [/tex] can ever get. Wiltshire explains: "In the absence of a dark energy component the only absolute upper bound on the difference in clock rates is that within voids [tex] \overline{\gamma} (\tau, X) < \frac{3}{2} [/tex], which represents the ratio of the local expansion rate of an empty Milne universe region to an Einstein-de Sitter one."

When Wiltshire plugs "reasonable fit" numbers into his equations, he calculates the global matter density parameter at [tex]\Omega _{M0}= 0.127[/tex], far lower than the same parameter in the [tex]\Lambda CDM[/tex] model. This figure is consistent with a transition to an 'open' FLRW universe. The deceleration parameter q he calculates is also much smaller in magnitude than in [tex]\Lambda CDM[/tex], but it remains negative.

Regarding the gravitational energy of negative curvature which causes the time differential, Wiltshire says:

"The l.h.s. of the Friedmann equation ... can be regarded as the difference of a kinetic energy density per unit rest mass, [tex] E _{kin} = \frac{1}{2} \frac{\dot {a^2}}{a^2} [/tex] and a total energy density per univ rest mass [tex] E _{tot} = - \frac{1}{2} \frac{k^2}{a^2} [/tex] of the opposite sign to the Gaussian curvature, k. Such terms represent forms of gravitational energy, but since they are identical for all observers in an isotropic homogeneous geometry, they are not often discussed in introductory cosmology texts. ... In an inhomogeneous cosmology, gradients in the kinetic energy of expansion and in spatial curvature, will be manifest in the Einstein tensor, leading to variations in gravitational energy that cannot be localised. ... Clocks run slower where mass is concentrated, but because this time dilation relates mainly to energy associated with spatial curvature gradients, the differences can be significantly larger than those we would arrive at in considering only binding energy below the finite infinity scale, which is very small."

Jon
 
  • #38
I think we're still not on the same page here, Jon. The point is that Wiltshire conjectures a model and then determines the parameters of the model a posteriori. Having done so, he then demonstrates how his model fits other observations (ellipticity in the CMB, etc.).

What is missing is a demonstration that the physical mechanism of the model can actually do what is being claimed; mere parameter values mean nothing without this, and it is entirely absent.

It doesn't matter how many claims you pull out of the papers; they don't strengthen the argument at all, since all of those claims rest on the same unstable base.

To give you an idea of what is needed, Wiltshire (or anyone else) would need to be able to produce a process that could calculate the observational signature of a cosmological model specified by the homogeneous variables (mean densities etc.) and then additionally the power spectrum and amplitude of density fluctuations. In the standard case the homogeneous parameters affect the evolution of the density fluctuations but not the other way around (either in 'reality' or in apparent 'dressed' parameters). Wiltshire and others claim that in fact both of these feed back on each other, but the models they propose cannot be transparently tested, since there is no coherent description of how these things relate in general. That is, you should be able to play with the parameters and see how different Universes would look, not just deal with a single set of parameters from our Universe.
 
  • #39
Hi Wallace,

Wallace said:
The point is that Wiltshire conjectures a model and then determines the parameters of the model a posteriori. ...

What is missing is a demonstration that the physical mechanism of the model can actually do what is being claimed, mere parameter values mean nothing without this and it is entirely absent.

To give you an idea of what is needed, Wiltshire (or anyone else) would need to be able to produce a process that could calculate the observational signature of a cosmological model specified by the homogeneous variables (mean densities etc.) and then additionally the power spectrum and amplitude of density fluctuations. ... That is, you should be able to play with the parameters and see how different Universes would look, not just deal with a single set of parameters from our Universe.

I don't understand how that's different from FLRW with [tex]\Lambda CDM[/tex]. Friedmann conjectured the original model before there was any observation of the Hubble constant parameter. The observed figure later was 'plugged in'. Even later, a 'best fit' number was plugged in for [tex]\Lambda[/tex]. These numbers were then changed and refined to reflect new data such as WMAP.

Wiltshire has a set of equations, into which he plugs in selected parameters. He 'best fits' an initial void fraction value which yields a reasonable current void fraction value, and yields other reasonable present parameters.

Wallace said:
Wiltshire and others claim that in fact both of these feed back on each other, but the models they propose cannot be transparently tested, since there is no coherent description of how these things relate in general.

Wiltshire does not claim that inhomogeneities "feed back" on the average expansion rate. That is what the "backreaction" advocates claim. Wiltshire does not believe that backreaction can explain apparent acceleration.

Wiltshire claims simply that the observations of wall observers like us are misleading because they don't take into account the difference in wall and void clock rates caused by the significant negative curvature of voids. In fact he says that wall and void expansion rates are identical and decelerating when measured by a single volume-average clock. It's a fairly straightforward and logical concept, except that there is no accepted equation for exactly calculating averaged quasi-local energy values. That's why he supplies one, based on Buchert's equations. I don't know if it's right or wrong, but it is capable of generating appropriate results from what appear to be reasonable input parameters. In my book that's a very good start.

I have cited a number of his claims in these posts only because I want to ensure that his model is accurately described.

Jon
 
  • #40
jonmtkisco said:
I don't understand how that's different from FLRW with [tex]\Lambda CDM[/tex]. Friedmann conjectured the original model before there was any observation of the Hubble constant parameter. The observed figure later was 'plugged in'. Even later, a 'best fit' number was plugged in for [tex]\Lambda[/tex]. These numbers were then changed and refined to reflect new data such as WMAP.

Wiltshire has a set of equations, into which he plugs in selected parameters. He 'best fits' an initial void fraction value which yields a reasonable current void fraction value, and yields other reasonable present parameters.

Now we're getting somewhere. In a sense you're right; the standard model has of course also been shaped by data. However, there is still a big something missing. In the standard approach you can specify the physics, say the potential of a quintessence field; then, along with the density parameters and the Hubble constant, you can predict what the observations would look like. You can then have a look at the data. In practice, in order to properly fit a model you need to calculate what thousands of slightly different parameter sets 'look like'.

This is what Wiltshire's model currently lacks. There is no theoretical tool that can determine the general observational signature from a given physics. As such, you cannot properly test the model. Wiltshire has calculated the time delay implied by his model, which gives the 'apparent' acceleration we observe, by fitting to the data of our Universe. However, he cannot predict this time delay for a Universe in general, from a hypothetical set of conditions. This is the basic requirement of a cosmological model.
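
To make concrete what I mean by calculating what thousands of parameter sets 'look like', here is a minimal sketch of the standard workflow (generic toy code; the "data" are synthetic and the flat [tex]\Lambda CDM[/tex] distance is for illustration only):

[code]
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light in km/s

def dist_modulus(z, om, h0):
    """Distance modulus in a toy flat LCDM model (Omega_Lambda = 1 - om)."""
    integrand = lambda zp: 1.0 / np.sqrt(om*(1 + zp)**3 + (1 - om))
    dc, _ = quad(integrand, 0.0, z)        # comoving distance in units of c/H0
    dl = (1 + z) * (C_KMS / h0) * dc       # luminosity distance in Mpc
    return 5*np.log10(dl) + 25

# Synthetic "observations" for illustration only (om=0.3, h0=70 plus noise)
rng = np.random.default_rng(1)
zs = np.linspace(0.05, 1.0, 30)
mu_obs = np.array([dist_modulus(z, 0.3, 70.0) for z in zs])
mu_obs += rng.normal(0.0, 0.1, zs.size)

# The key step: predict the observable for EVERY candidate parameter set,
# then compare to data. A testable model must allow this step.
oms = np.linspace(0.1, 0.5, 25)
h0s = np.linspace(60.0, 80.0, 25)
chi2 = np.array([[np.sum([(dist_modulus(z, om, h0) - m)**2 / 0.1**2
                          for z, m in zip(zs, mu_obs)])
                  for h0 in h0s] for om in oms])
i, j = np.unravel_index(chi2.argmin(), chi2.shape)
print(f"best fit: om = {oms[i]:.2f}, H0 = {h0s[j]:.1f}")
[/code]

The sticking point is the prediction step: for an arbitrary hypothetical parameter set, Wiltshire's framework does not yet tell you what the observables would be.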

jonmtkisco said:
Wiltshire does not claim that inhomogeneities "feed back" on the average expansion rate. That is what the "backreaction" advocates claim. Wiltshire does not believe that backreaction can explain apparent acceleration.

You misunderstand me; sorry if I wasn't clear enough. By 'feedback' I don't necessarily mean a physical mechanism in the backreaction sense. Let me explain. Take two Universes with the same mean density and curvature. One is completely smooth; the other has the kind of structure we see in our Universe. In the standard model the structure does not significantly change observables such as supernovae measurements that are intended to probe the homogeneous background expansion. In Wiltshire's model, however, these two Universes would look different. The one with structure has a set of 'dressed' or 'apparent' parameters that differ from the homogeneous ones. That is, if we interpret the results from the structured Universe assuming it is smooth (the way that cosmology operates today), we get an 'apparent' set of parameters that are different from the true ones. That is what I mean by feedback: in Wiltshire's model, if we wanted to know something about the mean properties of the Universe, the equations we are working with need to know about the structure. Normally, when we just use the FLRW metric for, say, supernovae results, the equations don't care about the structure.

The problem is that Wiltshire has not provided the equations to do this with. We cannot predict what any arbitrary level and type of structure will do to measurements of the background with his work except for one set of parameters, the ones he has fitted to data. This means we don't know if his mechanism is valid.

jonmtkisco said:
Wiltshire claims simply that the observations of wall observers like us are misleading because they don't take into account the difference in wall and void clock rates caused by the significant negative curvature of voids. It's a fairly straightforward and logical concept, except that there is no accepted equation for exactly calculating averaged quasi-local energy values. That's why he supplies one, based on Buchert's equations. I don't know if it's right or wrong, but it is capable of generating appropriate results from what appear to be reasonable input parameters. In my book that's a very good start.

Be careful; again you need to read between the lines. When you say he supplies an exact equation, you need to point out that the equations he uses have not been solved. They are merely a 'template' for dealing with the issue and demonstrate a possible form of the solution, but he does not solve the equations. That is, he does not start with a description of a perturbed density field, plug it into his equations, and produce a solution that predicts observables. He gets as far as a general form, then fixes the parameter values from the data. Without actually solving the equations for a realistic density field it is impossible to judge whether the 'energy gradients' etc. that he claims cause the apparent acceleration are anywhere near big enough to do the job.
 
  • #41
Wallace said:
Now we're getting somewhere.

Hurray, Wallace!

Wallace said:
Wiltshire has calculated the time delay implied by his model, which gives the 'apparent' acceleration we observe, by fitting to the data of our Universe. However, he cannot predict this time delay for a Universe in general, from a hypothetical set of conditions. This is the basic requirement of a cosmological model. ...

The problem is that Wiltshire has not provided the equations to do this with. We cannot predict what any arbitrary level and type of structure will do to measurements of the background with his work except for one set of parameters, the ones he has fitted to data.

Maybe you understand his equations better than I do, but the factual basis for your assertion escapes me.

Wiltshire starts with a crisp global definition of 'finite infinity' (valid for any data set), and sets the 'true' critical density (as evolved by FLRW) at that location in space. He uses the CMB data as the input to work backwards to this critical density value. That becomes his baseline for all the other equations. If the CMB data changes, he can adjust his critical density value accordingly. He then uses his baseline to calculate the 'bare' Hubble constant.

Based on observations, he selects a "dominant" void size. Then he selects an initial void fraction value which, when plugged into the Buchert equations, will generate reasonable output parameters. If he plugged in a slightly different initial void fraction, the resulting output parameters would be slightly different. Why doesn't that meet your test?
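
As a rough illustration of that sensitivity (reusing the toy two-region average sketched in post #37, with a hypothetical weighting eps, not Wiltshire's actual equations), slightly different inputs do yield slightly different present-day outputs:

[code]
def toy_outputs(eps, t0=13.7):
    """Present-day void fraction and bare deceleration for abar^3 = t^2 + eps*t^3."""
    V, Vp, Vpp = t0**2 + eps*t0**3, 2*t0 + 3*eps*t0**2, 2 + 6*eps*t0
    return eps*t0**3 / V, 2 - 3*V*Vpp/Vp**2

for eps in (0.20, 0.22, 0.24):   # slightly different hypothetical weightings
    fv, q = toy_outputs(eps)
    print(f"eps = {eps:.2f}  ->  f_v(today) = {fv:.3f},  q(today) = {q:.3f}")
[/code]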

Wallace said:
He gets as far as a general form, then fixes the parameter values from the data. Without actually solving the equations for a realistic density field it is impossible to judge whether the 'energy gradients' etc. that he claims cause the apparent acceleration are anywhere near big enough to do the job.

Can you please point more specifically to where in his calculations he stops solving equations and starts using templates? I don't see it. He admits in his 2/07 paper that he integrated forward to calculate results; but in his 9/07 paper he replaced the integration process with an exact calculation of the Buchert equations.

Jon
 
  • #42
An interesting corollary to Wiltshire's model occurs to me. Imagine two separate universes, one exactly at 'critical density' with flat geometry, the other at below critical density with negative curvature. I think his model means that there is no meaningful difference in the expansion rate of the two universes. To the extent that an observer of both universes measures the 'open' universe to be expanding faster, it is merely an artifact of the different clock rates in the two universes. A 'common' clock would show both universes to be expanding at exactly the same rate. [All of this assumes some hypothetical observer who can measure both universes concurrently]. [Edit: Well, not necessarily. If one counts clock ticks based on some constant periodic event, such as the orbital period of a hydrogen electron, the same number of elapsed ticks would correlate to the combined absolute scale factor and density of every universe, even if there were no observer in a position to count both sets of ticks concurrently.]

A further conjecture: If the second universe were instead at above critical density and contracting, then would its clock rate need to literally 'run backwards' in order to align the closed universe's negative expansion rate to the expansion rate of the flat universe?

Of course if there is no absolute metric of time, then "it's all relative." What one observer describes as a clock running backwards can be described by another observer merely as a clock running relatively slower. Hmmm... food for thought.

This suggests that the flow of time and cosmic expansion are causally inseparable; they truly are different physical manifestations of a single phenomenon. Every constant periodic event in any universe bears a fixed metric relationship to the combination of that universe's scale factor and density.
Jon
 
  • #43
jonmtkisco said:
Maybe you understand his equations better than I do

Agreed :p

jonmtkisco said:
Wiltshire starts with a crisp global definition of 'finite infinity' (valid for any data set), and sets the 'true' critical density (as evolved by FLRW) at that location in space. He uses the CMB data as the input to work backwards to this critical density value. That becomes his baseline for all the other equations. If the CMB data changes, he can adjust his critical density value accordingly. He then uses his baseline to calculate the 'bare' Hubble constant.

In the above, the emphasis is mine, and this is where you've been misled. This calculation is not done. I can't point to the exact place where it isn't done, now can I? He does 'calculate' this value assuming his equations are valid, but he does not do the required calculation to demonstrate that validity. You could therefore write down any old equation and make the same claim.

jonmtkisco said:
Based on observations, he selects a "dominant" void size. Then he selects an initial void fraction value which, when plugged into the Buchert equations, will generate reasonable output parameters. If he plugged in a slightly different initial void fraction, the resulting output parameters would be slightly different. Why doesn't that meet your test?

The Buchert equations you speak of are not solutions to the Einstein Field Equations for the density field in question. They are, at best, possible forms that a solution might take. This is why plugging numbers in doesn't help, since these equations differ from FRW and hence will clearly give a different result. What needs to be demonstrated is that these equations themselves are valid. This is what has not been done! I'm not sure how many times I can say this. You can't prove that the form of an equation that has simply been written down is valid by fitting arbitrary parameters of it to the data. In this process you are fitting data to data!

To be clear, for instance, you say Wiltshire takes the observed void fraction as an observational input. The problem is that his equations that depend on the void fraction have not been shown to be valid. In other words, the proposition that the void fraction matters cannot be proven by showing that a model that relies on knowing the void fraction gives an accurate prediction. What needs to be demonstrated is how the void fraction matters. I know Wiltshire has said a lot of words about finite infinity and such, yet inescapably the equations he uses are not solutions to the field equations, and hence it cannot be shown that they are at all valid.

jonmtkisco said:
Can you please point more specifically to where in his calculations he stops solving equations and starts using templates? I don't see it. He admits in his 2/07 paper that he integrated forward to calculate results; but in his 9/07 paper he replaced the integration process with an exact calculation of the Buchert equations.

Jon

I'll turn it around. Can you show anywhere in which his equations are shown to be solutions to the Einstein Field Equations? Of course it is impossible to show where something is not done! Don't rely on the Buchert equations being gospel; they aren't solutions to the EFE, they are just metric equations (i.e., start with a metric and work out the dynamics, rather than find a metric that matches a density field and then find the dynamics).

So he solves equations, yes, but the equations he solves (his step 1) do not come from a demonstrably valid description of a perturbed FLRW universe. If we can't establish step 1, then all the subsequent steps are not very useful.

To try and be clearer, I'm going to be a little absurdist for a moment. In a very exaggerated way, here is an example of the reasoning used (with apologies to the Flying Spaghetti Monster):

I have a proposition: that the number of pirates in the world is linked to global temperature (cf. apparent acceleration is caused by inhomogeneities). I can't yet calculate exactly the strength of this connection, but I think it might look like

temperature = A + B * Number of Pirates
(cf. Wiltshire's version of the Buchert equations, with free parameters but no theoretical calculation of the values the parameters would take in a given density field representing the structure in the Universe)

Now, by looking at the temperature of the Earth and the population of pirates as a function of time, we can see that A=(some number) and B=(some number) (cf. the lapse function etc. in Wiltshire's equations).

Therefore the number of pirates determines the global temperature and the model makes an accurate prediction of this.

Obviously the example is silly, but it is somewhat analogous to the reasoning used. In the pirate case, what is lacking is an actual prediction, made without any data, of what the function temperature(pirates) looks like. It is the same here. Without actually demonstrating that the equations Wiltshire uses are solutions of the EFE for a particular density field, we can have no faith in the form of the equations being correct. Likewise, without a calculation from a hypothetical density field to the expected values of the parameters, we can have no faith that the parameters Wiltshire fits to the data have any physical significance.
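
The pirate example can even be run numerically, with deliberately meaningless synthetic numbers, to show how easily a two-parameter form "fits":

[code]
import numpy as np

# Deliberately meaningless synthetic data: both series simply trend with time.
years = np.arange(1800, 2001, 10)
pirates = 50000.0 * np.exp(-0.02 * (years - 1800))  # made-up declining pirate counts
temperature = 13.5 + 0.005 * (years - 1800)         # made-up rising temperatures

# Fit temperature = A + B * pirates: out come confident-looking parameter values...
B, A = np.polyfit(pirates, temperature, 1)
resid = temperature - (A + B * pirates)
print(f"A = {A:.2f}, B = {B:.2e}, rms residual = {resid.std():.3f} deg")
# ...which prove nothing about causation, because the linear form was
# never derived from any underlying theory. We fitted data to data.
[/code]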
 
  • #44
Wallace said:
The Buchert equations you speak of are not solutions to the Einstein Field Equations for the density field in question. ...

Can you show anywhere in which his equations are shown to be solutions to the Einstein Field Equations? Of course it is impossible to show where something is not done! Don't rely on the Buchert equations being gospel, they aren't solutions to the EFE...

OK Wallace, I understand that the Buchert equations are not exact solutions to the Einstein Field Equations. Several posts back I quoted Wiltshire saying that an exact solution to the EFE for an inhomogeneous universe is intractable. Probably no solution in our lifetime, or the next.

We can't expect any inhomogeneous model to be "solved" to that standard, so "serious" cosmologists will need to ignore all such models indefinitely. OK.

Jon
 
  • #45
jonmtkisco said:
OK Wallace, I understand that the Buchert equations are not exact solutions to the Einstein Field Equations. Several posts back I quoted Wiltshire saying that an exact solution to the EFE for an inhomogeneous universe is intractable. Probably no solution in our lifetime, or the next.

Right, a complete solution may be effectively impossible, but that doesn't mean we can't try to make approximations. The point is that the process needs to be open and progress needs to be considered honestly. A full solution may not be necessary, but what does need to be done are calculations showing quantitatively what the departure from the 'averaged' solution might be. Wiltshire does give a lot of qualitative justification, but there is still no detail in the maths as to why the standard approach fails and how his ideas give an effect of a sufficient order of magnitude.

jonmtkisco said:
We can't expect any inhomogeneous model to be "solved" to that standard, so "serious" cosmologists will need to ignore all such models indefinitely. OK.

Jon

I detect a note of sarcasm here. Shame.

The issue is not that inhomogeneous models can't be solved, though. That is not why the, as you drawl, "serious" cosmologists aren't too interested. You are completely misrepresenting the views of the cosmology community. The reason is that while you can't solve the full equations, you can use perturbation theory to show that the deviations from the averaged solution that a full inhomogeneous solution would predict are very small. So in fact the calculation, for all intents and purposes, has already been done. The hard task for Wiltshire and others is to demonstrate why perturbation theory should fail so spectacularly, in a way that is unprecedented in physics.

So yes, I'm skeptical of this and other works, but at the same time interested. It would be fantastic if this idea worked, but we've got to look carefully at the details rather than believe the hype out of a desire for it to be right.
 
  • #46
Wallace said:
The reason is that while you can't solve the full equations, you can use perturbation theory to show that the deviations from the averaged solution that a full inhomogeneous solution would predict are very small. So in fact the calculation, for all intents and purposes, has already been done. The hard task for Wiltshire and others is to demonstrate why perturbation theory should fail so spectacularly, in a way that is unprecedented in physics.

Hi Wallace. As I've said several times, Wiltshire agrees that perturbations on an FLRW model would be too small to cause acceleration. He agrees that the backreaction models have been reasonably well proved to not generate results consistent with observations.

I'm just trying to emphasize that his model is based on an entirely different concept. Therefore it is unreasonable to point to the failure of the backreaction models as an independent reason to not treat Wiltshire seriously. I think many people are unaware of this point.

Wallace said:
It would be fantastic if this idea worked, but we've got to look carefully at the details rather than believe the hype out of a desire for it to be right.

Agreed absolutely. As I've said repeatedly, I don't know if Wiltshire's model is right or wrong. I have no intention of being swayed by hype. Personally, for certain reasons I'd prefer it were wrong. But in my modest opinion it is a very solidly conceived model. In particular, understanding his work is helpful to anyone trying to get their arms around the general subject of inhomogeneity.

Jon
 
  • #47
jonmtkisco said:
Hi Wallace. As I've said several times, Wiltshire agrees that perturbations on an FLRW model would be too small to cause acceleration. He agrees that the backreaction models have been reasonably well proved to not generate results consistent with observations.

I'm just trying to emphasize that his model is based on an entirely different concept. Therefore it is unreasonable to point to the failure of the backreaction models as an independent reason to not treat Wiltshire seriously. I think many people are unaware of this point.

Most people (where by "people" I mean cosmologists) are aware of this point, and I was not referring to backreaction. From the 'standard' set of tools (perturbation theory etc.) you can show why the inhomogeneities in the Universe don't cause any part of the FRW approximation to break down, either in a direct back-reaction sense or through any other means, including any difference in clock rates, luminosity distance or anything else. I don't know why you thought I was talking about backreaction only. It's not a new idea to suggest that inhomogeneities alter the appearance of observables relative to the FRW case by any number of causes.

jonmtkisco said:
Agreed absolutely. As I've said repeatedly, I don't know if Wiltshire's model is right or wrong. I have no intention of being swayed by hype. Personally, for certain reasons I'd prefer it were wrong. But in my modest opinion it is a very solidly conceived model. In particular, understanding his work is helpful to anyone trying to get their arms around the general subject of inhomogeneity.

Jon

Exactly, which is why I've been trying to help you (and by extension anyone else reading) to understand what Wiltshire's papers do and do not say.
 
  • #48
Wallace said:
From the 'standard' set of tools (perturbation theory etc.) you can show why the inhomogeneities in the Universe don't cause any part of the FRW approximation to break down, either in a direct back-reaction sense or through any other means, including any difference in clock rates, luminosity distance or anything else.

Thanks Wallace, it would greatly aid my understanding of this subject if you can please show us briefly how standard perturbation tools prove that the differential clock rate caused by negative curvature in a rapidly expanding void fraction is too insignificant to cause the temporary illusion of acceleration that Wiltshire describes. I'm not aware that anyone has published such a proof.

Jon
 
  • #49
It's like punching smoke to address every particular point you, Wiltshire or anyone else raises in the fashion you ask. However, this is not necessary.

The standard argument (which can be found in any cosmology textbook, or for more details see Ishibashi & Wald 2006) starts by saying that on large enough scales the FRW metric is exact if the matter density field is spatially averaged. This is not in dispute. Now, we know that General Relativity reproduces Newtonian gravity in the limit of weak fields and low velocities; indeed GR would be simply wrong if it did not. Therefore, we can write down the 'Newtonian perturbed FRW metric', which looks like this:

[tex]d\tau^2 = -(1+2\Phi)\,dt^2 + a^2(t)(1-2\Phi)\left(d\chi^2 + S_k^2(\chi)\,d\Omega^2\right)[/tex]

where [tex]\Phi[/tex] is the Newtonian potential.

To cut a long story short, if you plug in values of [tex]\Phi[/tex] that describe the strength of the gravitational field in the structures in the Universe, you find that these perturbations are quite small. Plugging this metric into the Einstein tensor, the new terms that appear that are not present in the simple unperturbed FRW metric are very, very small in comparison to the existing terms.

These new terms describe all of the effects that inhomogeneities have on either the 'true' background expansion (back-reaction) or any 'apparent' effects, such as described by Wiltshire and others. It is very easy to see that all of these effects are very small (again see any textbook or Ishibashi and Wald for details).

It is hard to see how this process fails, but it may fail if, for instance, the first term in a perturbation expansion is tiny so we ignore it, but for some reason the second or higher order term becomes large. It is unusual but not impossible for an equation to do such a thing, particularly a highly non-linear equation such as the Einstein equation. However, despite this possibility, this has never been demonstrated.

Wiltshire's differential clock rates due to negative curvature must, if they truly exist in general relativity, be embodied by a term or terms in the Einstein tensor given a valid metric describing the Universe; furthermore, the resultant Einstein tensor must equate to a stress-energy tensor that accurately describes the density field of the Universe. Now, as we have agreed, such a full solution is too ambitious to ask for, but it is not unreasonable to ask: if the voids in the Universe cause such a dramatic difference in clock rates, why does this not fall out naturally from the weak-field metric describing a void in an expanding Universe? The strength and gradient of the Newtonian potentials in question are not large (of order 10^-5 at most in geometric units), and therefore the weak-field metric should give the correct answer. If it does not, then GR does not reduce to the Newtonian limit and therefore must be wrong, since Newtonian gravity is much better tested than GR and is known to be very accurate.
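
For concreteness, that 10^-5 figure comes from a standard Newtonian top-hat estimate; here is a quick version of the arithmetic (my own typical numbers, not taken from Ishibashi & Wald):

[code]
# |Phi|/c^2 at the centre of a top-hat perturbation of radius R and contrast delta:
# |Phi|/c^2 = (3/4) * Omega_m * |delta| * (H0 * R / c)^2   (Newtonian estimate)
H0 = 70.0          # km/s/Mpc
c = 299792.458     # km/s
omega_m = 0.3
R = 30.0           # Mpc, a typical radius for the voids in question
delta = 1.0        # |density contrast|; a completely empty void has delta = -1

phi = 0.75 * omega_m * delta * (H0 * R / c)**2
print(f"|Phi|/c^2 ~ {phi:.1e}")   # ~1e-5: thoroughly weak-field
[/code]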

Wiltshire must address this question to convince 'the establishment', since it is the obvious first objection, as discussed in Ishibashi and Wald. Wiltshire argues that there 'may' be an effect due to curvature and potential gradients, but does not show why this effect should be so big, or why the weak-field metric does not work in a regime that is thoroughly weak-field.

In the past I've done some simple calculations of how well the weak-field metric describes the Universe. By plugging that metric into the Einstein tensor you can directly see what density field it implies, through equating with the stress-energy tensor. The differences between the density field that gave the potentials that went into the metric and the density field implied by the resultant Einstein tensor were of order [tex]10^{-15}\%[/tex]. I'd call that a pretty good approximation!

To claim that there is something in standard general relativity that makes this process break down, but not demonstrate why this apparently excellent approximation breaks down by a factor of 2 is a brave call.
 
  • #50
Thanks for the explanation Wallace.

My sense is that the Newtonian approximation deals with energy gradients arising from mass differentials (which are too small to generate significant deviations), but it does not deal with energy gradients arising from spatial curvature. I believe that the latter is a subject which must be calculated entirely by means of GR. Since no suitable exact solution to the Einstein Field Equations is available to us (and may not exist), in my opinion it comes down to trying to understand the Buchert averaging equations and assessing whether or not they are sound and whether Wiltshire is applying them properly. I hope that additional scholars will weigh in on this specific subject, and not view the whole subject area as tarnished because of the failure of the backreaction solutions.

So I'm going to keep an eye out for future publications. I understand that you're not holding your breath. Wiltshire says he has a couple of new papers forthcoming. In any event, I think we've thoroughly beaten this subject to death. Until there's new news, it's time to move on to a new subject!

Thanks again Wallace.

Jon
 
  • #51
jonmtkisco said:
Thanks for the explanation Wallace.

My sense is that the Newtonian approximation deals with energy gradients arising from mass differentials (which are too small to generate significant deviations), but it does not deal with energy gradients arising from spatial curvature. I believe that the latter is a subject which must be calculated entirely by means of GR.

Spatial curvature is caused by uneven mass density in GR; you can't separate them unless you want to invent a new theory of gravity. The weak-field metric is GR, so you can't say that it is not applicable and that we must use GR 'entirely'. This metric has been shown to be an accurate solution of the 'entire' Einstein equation, including any curvature gradients.

You might benefit from taking a coherent course in GR in order to learn it from the basics up. This will guide your 'sense' much better. It's hard to learn such a theory from pop-sci explanations on forums, or even from published journal articles, since they assume the basics are understood. The worst trap one can fall into is having confidence in an erroneous understanding of the basics. I thoroughly recommend you work your way through a book such as Hartle's 'Gravity' (although many other books are good as well).

jonmtkisco said:
Since no suitable exact solution to the Einstein Field Equations is available to us (and may not exist), in my opinion it comes down to trying to understand the Buchert averaging equations and assessing whether or not they are sound and whether Wiltshire is applying them properly.

It must be established whether the Buchert equations themselves are of any use, which has not yet been shown. You can't apply something 'properly' if it is in itself flawed.
 
