From time to timescape – Einstein’s unfinished revolution?

In summary, Garth proposes that the variance in clock rates throughout the universe is due to the gravitational energy gradients that Dark Energy has been mistakenly identified as. However, Wiltshire argues that this is only a partial explanation: differences in regional densities must also be taken into account in order to synchronize clocks and calibrate inertial frames. He claims this means that the age of the universe varies depending on the observer, and supports his argument with three separate tests that his model universe passes. If his theory is correct, the currently accepted Lambda-CDM model of the universe is misleading.
  • #36
The fear and hatred that people express towards the non-canonical is just as high as the Catholic Church's fear and hatred of Galileo. I think we have discovered a truth about human nature: people hate new ideas.

They really don't. That's part of the way science works. If you are in a boxing ring and a prize fighter doesn't take a swing at you, then you get really disappointed. When I come up with a new idea, I spend about a day thinking about everything that could be wrong with it. Then I go to the person next door, and we spend about a week trying to kill the idea. After a few weeks, if it survives the gauntlet, it eventually gets published and everyone starts beating up on it.

Theorists love new ideas, but the way you come up with new ideas is to take an idea, put it into a gladiatorial arena and then toss lions at it.
 
  • #37
George, the link you point to says "published and mainstream". OK, so why has this whole thread not been deleted as "not mainstream"? I do like that there is a forum section at the bottom for basically "other stuff"; maybe it should be there?
 
  • #38
edpell said:
How do we feel about this idea that inertia is defined only relative to other masses? Did Einstein think that? Does Wiltshire think that? Do you agree?

Personally, if I'm understanding his paper (and I may not be), it's a point I find irrelevant and totally uninteresting (although other people may disagree). I'm more interested in the latter half of the paper, where he writes down a metric and enough information that I can more or less do a calculation from it.
 
  • #39
twofish-quant said:
...the way you come up with new ideas is to take an idea, put it into a gladiatorial arena and then toss lions at it.

Love the phrase, particularly the "toss lions at it" :)
 
  • #40
twofish-quant said:
The final reason I'm pretty sure that Wiltshire *is* invoking new physics is that he doesn't do any detailed calculations. If he *were* saying that inhomogeneities are being handled incorrectly, then it wouldn't be hard to do a "we have a problem" calculation using standard GR. What I think he is doing is using a new equivalence principle to create a new *class* of models, but since you have a class of parameterizable models rather than a single model, the next step is to try to put in numbers that let you do calculations.

OK, that sounds more reasonable. And there were indeed mutterings about the dangers of opening up a "landscape" of new GR modelling if you give up the simplicity of the existing cosmological calculation machinery.

Wiltshire is certainly pleased that he has just had funding for a new post-doc, Teppo Mattsson from Helsinki, who has calculational skills in this area.

And he threw up some slides which show places where his predictions and dark energy predictions should differ. "Baryon acoustic" and a few other things I didn't recognise.
 
  • #41
twofish-quant said:
I'm more interested in the latter half of the paper, where he writes down a metric and enough information that I can more or less do a calculation from it.

Are you talking about his equation #2? What can you calculate from it?
 
  • #42
Dark Energy as simply a mis-identification of gravitational energy gradients and the resulting variance in clock rates?

If that were true, there would be no dark energy in our galaxy, because there would be little variance in clock rates; but I'm sure there is no evidence suggesting dark energy doesn't exist in this galaxy.
I have my own theory on dark energy, and it does have a great deal to do with clock rates, just not in the way you suggest. But, not wanting to be labeled a crackpot, which seems inevitable after reading the comments in this thread, I will leave it at that.
 
  • #43
edpell said:
George, the link you point to says "published and mainstream". OK, so why has this whole thread not been deleted as "not mainstream"? I do like that there is a forum section at the bottom for basically "other stuff"; maybe it should be there?

As I said, it's a judgement call by the Mentors, and, so far, no Mentor has seen Wiltshire's work as sufficiently far from mainstream. This discussion is really disrupting the thread by taking it far off topic.
twofish-quant said:
Personally, if I'm understanding his paper (and I may not be), it's a point I find irrelevant and totally uninteresting (although other people may disagree).

I have only scanned the paper very, very quickly, but it looks like I agree.
 
  • #44
apeiron said:
As I say, I had a good half hour conversation with Wiltshire and I think his belief is that he is doing GR more deeply - yes, a valid extension of the equivalence principle - rather than something which is new physics in the sense that anything was wrong or needs correcting at the equations level.

I'm staring at the metric that he wrote down, and I just don't see how it's consistent with standard GR. I *will* be interested to see how the people who do the cosmological simulations respond to the paper. If Wiltshire believes that he is "doing GR correctly" (and by implication the people doing the simulations are doing GR incorrectly), then I think we'll have a "battle royale", and I'll sit back, munch popcorn, and watch the fireworks.

I'm sure I will be told I'm wrong here, but I put it forward to be educated as to how I should be thinking about this. The timescape view seems to say the universe is lumpy and so has local variations in spacetime curvature. It could even be lumpy in a power-law fashion.

The standard FRW cosmology assumes that the universe is isotropic and homogeneous. In reality it isn't. So the LCDM model treats all of the lumpiness as first-order perturbations and models them as sound waves. In modeling perturbations as sound waves you ignore self-gravitation, for the same reason you ignore self-gravitation when you model sound waves in air or ocean waves: it's too weak to make a difference.
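To be concrete about what "perturbations as sound waves" means, the standard textbook equation for the density contrast (this is just the usual Jeans analysis, not anything from Wiltshire's paper) is

[tex]\ddot{\delta} + 2H\dot{\delta} = \left( 4\pi G \bar{\rho} - \frac{c_s^2 k^2}{a^2} \right) \delta[/tex]

where [itex]\delta[/itex] is the fractional density perturbation, H the Hubble rate, [itex]c_s[/itex] the sound speed, and k the comoving wavenumber. The first term on the right is self-gravity and the second is pressure; on scales where pressure dominates you can drop the gravity term, and what's left is an oscillator, i.e. a sound wave.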

Wiltshire says this is wrong, but the people who invented LCDM didn't make these assumptions without careful thought. One problem is that if you don't separate pressure effects from gravitational effects, you end up with a total mess, unable to calculate anything.

One other problem with Wiltshire's model is that I'm pretty sure you would see some weird lensing effects. I'd also expect the acceleration for supernova Ia behind voids to be very different from the acceleration of those that aren't behind voids.

This connects with another long-running cosmology debate I could never follow: the apparent upset caused by fractal-universe stories. All the debate about galactic walls, filaments, etc., and how large-scale cosmic structure would be a problem for the assumption of homogeneity, isotropy, and so on.

I think that a lot of that debate got garbled. First, we need to clearly define what a "fractal" is. A fractal is a shape that is self-similar across scales. You get self-similarity when you have tightly coupled, chaotic, non-linear interacting systems. LCDM models pressure differences as "small" changes from the average. If we really did see fractals, there would be something basically wrong with LCDM; we do see lumps, but they aren't fractal lumps.
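Just to make "self-similar" operational: the standard test is box counting, where the dimension is the slope of log N(eps) against log(1/eps). Here's a minimal sketch (my own toy illustration, nothing to do with any actual survey analysis); a homogeneous point set comes out with slope near 3, while a genuinely fractal one gives a non-integer slope.

[code]
import numpy as np

def box_counting_dimension(points, epsilons):
    """Slope of log N(eps) vs log(1/eps) for a point set in [0, 1)^3."""
    counts = []
    for eps in epsilons:
        # Assign each point to a cubic box of side eps; count occupied boxes.
        boxes = np.floor(points / eps).astype(int)
        counts.append(len(np.unique(boxes, axis=0)))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

# A homogeneous (non-fractal) distribution gives a dimension near 3.
rng = np.random.default_rng(0)
uniform = rng.random((100_000, 3))
print(box_counting_dimension(uniform, [0.2, 0.1, 0.05, 0.025]))  # ~2.9
[/code]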

Below that scale, the variation would have been scrambled and would look close to Gaussian (which would of course mean that the universe would not actually fit a pure power-law matter/curvature distribution over all scales).

Which is a good thing for LCDM.
 
  • #45
Reading some more...

http://adsabs.harvard.edu/abs/2009PhRvD..80l3512W

which IMHO is a much better paper, but that's a matter of taste. The interesting thing I got out of this is that trying to explain acceleration as an artifact of GR inhomogeneities isn't Wiltshire's idea alone; there is a whole group of people trying to do it. The basic problem is that these calculations are really, really hard to do. What Wiltshire brings to the table is a formalism that actually allows comparison with real data.

As for whether what he proposes is new physics: now that I've read his Phys. Rev. D paper, it's pretty clear that he doesn't think so. The trouble is that I look at his equations and I don't see how they are consistent with standard GR, and the argument I'd use to show this involves assumptions that Wiltshire and the people he cites would consider invalid. To resolve this, you'd have to solve the full Einstein equations, and my bet is that what you'd end up with is something much closer to Lambda-CDM than what Wiltshire is proposing, though obviously he would disagree.
 
  • #46
George Jones said:
As I said, it's a judgement call by the Mentors, and, so far, no Mentor has seen Wiltshire's work as sufficiently far from mainstream.

What Wiltshire and the people he cites are doing seems pretty clearly "non-mainstream." They are arguing that the acceleration observed in supernova Ia may be due to GR-related inhomogeneities, which is a pretty radical and non-standard idea, but it's an interesting one worth thinking about.

I think what it boils down to is that Wiltshire has done his homework and so he has come up with fresh new ideas that aren't obviously wrong or untestable. That makes his ideas interesting.
 
  • #47
apeiron said:
And there were indeed mutterings about the dangers of opening up a "landscape" of new GR modelling if you give up the simplicity of the existing cosmological calculation machinery.

Which may not be a bad thing if it turns out that the current machinery is seriously flawed. The "standard LCDM" assumes that you can model density fluctuations as corrections to an average field; if you go into Wiltshire's references, there are about a dozen people questioning that idea, who have presented some things suggesting that maybe you can't. But there are no smoking guns. What the Wiltshire paper has done is three things:

1) put together a detailed model that *is* observationally different from the standard cosmological model
2) explained how that model is different from the standard model so that you can translate between the two
3) suggested a symmetry principle which his model satisfies, which the standard LCDM model does not, and which he believes GR also holds

It's pretty clear that he and I think about GR in very different ways. The way I think about it is heavily influenced by Kip Thorne's "membrane paradigm". What Thorne did was invent a way of thinking about black holes which (and here is the hard part) he showed was justified by Einstein's equations. It appears that no one has done the same thing for cosmological models. A lot of the GR work that Thorne and his colleagues have done could be titled "how a non-super-math-genius can think about GR without going crazy."

apeiron said:
Wiltshire is certainly pleased that he has just had funding for a new post-doc, Teppo Mattsson from Helsinki, who has calculational skills in this area.

Cool. Here is one of his papers:

http://arxiv.org/abs/0711.4264

apeiron said:
And he threw up some slides which show places where his predictions and dark energy predictions should differ. "Baryon acoustic" and a few other things I didn't recognise.

Interesting. However, looking over the papers, I don't see mention of what I think would be a big smoking gun. If the acceleration of the universe were caused by inhomogeneity, then you should see supernova Ia next to known voids behave very differently from those that aren't, and there should be some sort of gravitational lensing effect.
 
  • #48
aggy said:
Dark Energy as simply a mis-identification of gravitational energy gradients and the resulting variance in clock rates?

Teppo Mattsson wrote a paper that describes the idea

http://arxiv.org/pdf/0711.4264

The idea is that it's known that clocks in regions of high density run more slowly than clocks in regions of low density. So if we happen to be in a region of low density, the rest of the universe will appear to speed up (i.e. you have the illusion of acceleration), even though it's really just a difference in clock rates.
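To get a feel for the size of the raw effect, here is a back-of-envelope weak-field estimate (my own illustrative numbers, *not* Mattsson's calculation): the fractional clock-rate shift is roughly Phi/c^2 for a potential Phi.

[code]
import math

# Back-of-envelope: weak-field clock rates, dtau/dt ~ 1 + Phi/c^2.
# Illustrative numbers only -- NOT Mattsson's actual calculation.
G, c = 6.674e-11, 2.998e8       # SI units
Mpc = 3.086e22                  # metres

# A crude "wall": an order-one overdensity (about the mean matter
# density) over a region of radius 100 Mpc.
delta_rho = 2.5e-27             # kg/m^3
R = 100 * Mpc

M = (4.0 / 3.0) * math.pi * R**3 * delta_rho
Phi = -1.5 * G * M / R          # potential at the centre of a uniform sphere

print(f"Phi/c^2 = {Phi / c**2:.1e}")   # ~ -1e-4
[/code]

The instantaneous shift is tiny, which is why this kind of argument has to rely on the clock-rate difference accumulating over the whole expansion history rather than on a one-off potential difference.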

Now what Mattsson is saying is that you *can* get something like this to explain cosmic acceleration, but there is a price. You have to assume that we are in the middle of a giant spherical void, and you have to assume that the density of our patch of the universe evolved in a certain way to get the right numbers. The void has to be spherical, and we have to be at its center, because otherwise galaxies in one half of the sky would look different from those in the other half. Ummm... sounds fishy, and Mattsson knows it sounds fishy, so he spends the rest of the paper trying to come up with ideas that sound less bogus.

For example, instead of being in the center of one big void, what happens if you are in the middle of lots of little ones? But then you have another problem. If you assume that we are in the middle of one big void, the math is easy. If you don't, the math is very messy. Messy math is a bad thing.

The cool thing is that once you've proposed a theoretical model, you can think about it and come up with observational tests and build on that idea theoretically.

aggy said:
If that were true, there would be no dark energy in our galaxy, because there would be little variance in clock rates; but I'm sure there is no evidence suggesting dark energy doesn't exist in this galaxy.

Mattsson is suggesting that maybe there isn't any dark energy anywhere. Our clocks are just slow. Also he wrote that paper in 2007, and things may have changed since then.

aggy said:
I have my own theory on dark energy, and it does have a great deal to do with clock rates, just not in the way you suggest. But, not wanting to be labeled a crackpot, which seems inevitable after reading the comments in this thread, I will leave it at that.

The interesting thing here is that Mattsson and Wiltshire are both coming up with wild and crazy ideas that are non-standard and non-mainstream, and this is a good example of how those ideas are handled. The important thing is that Mattsson and Wiltshire are "playing the game." The papers don't say ***I HAVE FOUND THE SECRET TO THE UNIVERSE, BUT UNFORTUNATELY I DON'T KNOW ENOUGH MATH OR PHYSICS TO DO ANY DETAILED CALCULATIONS, BUT IF YOU DISAGREE WITH ME YOU ARE BEING CLOSED-MINDED***. It's a lot of "here are some interesting ideas; I've worked through these equations and gotten these results; what do you think?"
 
  • #49
edpell said:
Or, more generally, there are two kinds of physicists: 1) those whose status and self-worth are tied to their mastery of an existing canon of theory and experimental data, and who feel threatened by their current view being called into question, and 2) those who enjoy learning and discovery and ideas (Ars Gratia Artis); an example would be Feynman.

I've never met anyone in the first category. Part of the thing is that in order to come up with a new, original, earth-shaking idea, you have to know a *HUGE* amount of data. If you aren't swimming in the existing canon of theory and experimental data, you are going to come up with stuff that people thought of fifty years ago and rejected for very good reasons. The neat thing is that all of the existing canon of theory and experimental data is now online. All you need is a tour guide who goes through the papers and does some translations. That's where I come in.

The other thing is that there is much too much for any one person to know, so a lot of the conversations involve interactions between people with very different information pools.

People don't get Nobel prizes for being unoriginal, but being original is a lot harder than it sounds.

edpell said:
But the hate is a psychological problem of the hater. If you have some rationale for filtering something, tell me about it, but I have no interest in hearing your hate or using your hate as a filter.

But people in physics have weird ways of expressing love. If you go into any physics department, you'll find people *screaming* at each other in ways that make you think that they are going to kill each other, but then after about an hour they stop, shake hands, and then go out for drinks. It's really cool to watch two experts go at each other like that.

If a physicist really thinks that you have an interesting idea, they are going to try to blow it to smithereens. If you get into the ring with a heavyweight champion and he tries to beat the living stuffing out of you, it's not because he hates or disrespects you. If he really hated or disrespected you, he *wouldn't* be trying to beat the living stuffing out of you.

One important rite of passage is the Ph.D. defense. That's when you get in a room with five or so of your teachers, and they take the work that you have been doing for the last five years and try to rip it to shreds. If you've ever been in that situation, it's a lot like the kung fu movies where the hero stands in the center of the ring while five people try to bash him to pieces. The whole point of the process is to see if you can fight back and hold your ground. If you can, then you get the Ph.D.
 
  • #50
twofish-quant said:
For example, instead of being in the center of one big void, what happens if you are in the middle of lots of little ones? But then you have another problem. If you assume that we are in the middle of one big void, the math is easy. If you don't, the math is very messy. Messy math is a bad thing.

Wiltshire was definitely thinking not of a single void, but a foamy story where there are voids over all scales above 200 megaparsecs.
 
  • #51
So the physical universe has some structure, some lumps and bumps (or, more correctly, voids and walls and filaments), and this means that at some level of accuracy simple calculations based on simple uniform distributions are not accurate enough. Understandably, the folks doing the computations do not want harder work and so resist the idea. Until some hungry young guy/gal thinks: hey, if I do the work and it is important, I will be a winner. Then they do it and receive acclaim, or find they wasted five years of effort.

Why is this viewed as such a complex calculation? You make a series of Monte Carlo model universes, do the integration at several points, and compare? It is the computer that is doing the work.
 
  • #52
edpell said:
So the physical universe has some structure, some lumps and bumps (or, more correctly, voids and walls and filaments), and this means that at some level of accuracy simple calculations based on simple uniform distributions are not accurate enough.

Or maybe they are. Not clear right now.

edpell said:
Understandably, the folks doing the computations do not want harder work and so resist the idea.

Utter and total nonsense.

The first thing you try to do when you have a problem like this is a quick "is this a totally nutty idea or not" calculation, which was what I was planning to do when I read Wiltshire's paper. However, Teppo Mattsson already did the calculation I was planning, on pages 13 and 14 of the paper I referenced earlier. What he shows is that if you are sitting in a big empty bubble 300 megaparsecs wide, then yes, clocks can differ enough to make it look like the universe is accelerating. Now, this probably *isn't* anything like the real universe. But it's a quick toy calculation that says this is a half-decent idea we need to look into further.

What Wiltshire is trying to do is to take things from being a "toy model" into something that you can actually compare to real experiments. Now that I understand what he is trying to do, it's a decent idea. One problem with the way that Wiltshire is going about it is that he is using math that's great for human number crunchers but totally awful for computers.

edpell said:
Until some hungry young guy/gal thinks: hey, if I do the work and it is important, I will be a winner. Then they do it and receive acclaim, or find they wasted five years of effort.

If someone goes to the effort of figuring out whether it works, and it doesn't, it's not wasted effort. If nothing else, you understand how inhomogeneities in GR work. If someone spends about five years and then comes up with an airtight argument why none of this will work, that's worth a Ph.D. Also, the cool thing is that while you are looking for X, you invariably stumble onto Y.

edpell said:
Why is this viewed as such a complex calculation? You make a series of Monte Carlo model universes, do the integration at several points, and compare? It is the computer that is doing the work.

Well, computers need programmers. We are talking about 10 coupled non-linear equations *just for the gravity* in a 10,000x10,000x10,000 cube with maybe 100,000 time steps. If you run the full simulation, it's just not doable with current technology. So you end up with clever ways of reducing computer time which, cross your fingers, don't actually destroy the calculation.
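To put crude numbers on that (a deliberately naive estimate that ignores every trick a real code would use, like adaptive meshes or approximation schemes):

[code]
# Deliberately naive sizing of a brute-force grid run.
n_cells  = 10_000 ** 3     # 10,000 x 10,000 x 10,000 grid points
n_fields = 10              # ~10 coupled equations, one field each
n_steps  = 100_000         # time steps

memory = n_cells * n_fields * 8                  # bytes, double precision
print(f"one snapshot: {memory / 1e12:.0f} TB")   # ~80 TB held in RAM at once

ops = n_cells * n_fields * n_steps * 1_000       # guess ~1e3 flops per update
print(f"total: {ops:.0e} flops")                 # ~1e21 flops
[/code]

Roughly 10^21 operations, and ~80 TB of live memory for a single snapshot: weeks of dedicated time even on a petaflop machine, which is why nobody runs it brute force.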

These simulations can eat up a month of supercomputing time. If you just dump the equations into a computer, chances are the computer will just spit out "I can't do this calculation" and give you random noise. The first time you do a test run, the simulation will invariably not work. So you spend a few months debugging, and debugging, and finally you come up with something that looks reasonable. But is it?

And even getting to the point where you can code it is a challenge.

For example, one problem with the way Wiltshire sets up the problem is that he splits things into "calculations you do in the voids" and "calculations you do in the non-voids". If you try to put that into a computer program, chances are the computer will go nuts at the boundary conditions. Also, you don't want if-statements in a computer program. Computer chips like to add arrays of numbers; if you have branching statements, the chip has to go down two different code paths, your pipelines get trashed, your L1 caches get overwritten, and a calculation that would have taken two weeks now takes a year and can't be done. Also he does a lot of averaging. Averaging is bad. What do you average? How do you average?
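A toy picture of the branching problem (illustrative only, nothing like a real GR code): the per-cell `if` version versus the branch-free array version that keeps the hardware happy.

[code]
import numpy as np

rho = np.random.rand(512, 512)     # toy density field
threshold = 0.3                    # toy void/wall cutoff

# Branchy version: an if per cell. In compiled code this kind of
# per-cell branch is what trashes pipelines and caches.
out = np.empty_like(rho)
for i in range(rho.shape[0]):
    for j in range(rho.shape[1]):
        if rho[i, j] < threshold:
            out[i, j] = 2.0 * rho[i, j]     # "void" update
        else:
            out[i, j] = rho[i, j] ** 2      # "wall" update

# Branch-free version: compute both updates, select with a mask.
# Same answer, but it's straight array arithmetic end to end.
out2 = np.where(rho < threshold, 2.0 * rho, rho ** 2)
assert np.allclose(out, out2)
[/code]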
 
  • #53
twofish-quant said:
Well, computers need programmers. We are talking about 10 coupled non-linear equations *just for the gravity* in a 10,000x10,000x10,000 cube with maybe 100,000 time steps. If you run the full simulation, it's just not doable with current technology.

I would love to know the computational size of this problem versus the computational size of the calculations done by the lattice-gauge folks to compute particle masses. I think the lattice-gauge folks go as far as building special-purpose compute hardware for the specific calculation.
 
  • #54
There is a nice intro to numerical relativity at Caltech: http://www.black-holes.org/numrel1.html

From the pages it is clear this is a new area.
 
  • #55
twofish-quant said:
Wiltshire and the people he cites ... are arguing that the acceleration observed in supernova Ia may be due to GR-related inhomogeneities, which is a pretty radical and non-standard idea ...

But not nearly as radical as contradicting NASA's statement (http://nasascience.nasa.gov/astrophysics/what-is-dark-energy) that
NASA said:
roughly 70% of the Universe is dark energy

You've been very helpful in clarifying what Wiltshire is doing, TQ. But you seem to imply that it is only the SN Ia results which Wiltshire takes to be an artefact of GR in a lumpy universe.

What about the 70% invisible stuff that helps to flatten the universe?
 
  • #56
I would disagree with the phrase that dark energy is a "very widely held view". I would agree that many people are aware of the idea. But since we have zero direct experimental data I doubt that everyone is on the bandwagon.
 
  • #57
edpell said:
I would disagree with the phrase that dark energy is a "very widely held view". I would agree that many people are aware of the idea. But since we have zero direct experimental data I doubt that everyone is on the bandwagon.

On the contrary, it is a very widely held view among the cosmological community.

The standard model is called the LCDM or [itex]\Lambda[/itex]CDM model; the L, or better [itex]\Lambda[/itex], stands for the cosmological constant, shorthand for DE, whatever it may finally turn out to be.

In the standard model, DE is necessary to bolster the 4% baryonic matter and 23% Dark Matter to bring the total near the 100% critical density needed to account for the observed flatness, or near-flatness, of the geometry of space (WMAP observations etc.)

Also, with an equation of state of [itex]\omega[/itex] = -1, DE explains the observed acceleration of the cosmological expansion.
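To put a number on the critical density itself (a quick evaluation with round values, taking H0 = 70 km/s/Mpc):

[code]
import math

# Critical density: rho_c = 3 H0^2 / (8 pi G), with H0 = 70 km/s/Mpc.
G   = 6.674e-11            # m^3 kg^-1 s^-2
Mpc = 3.086e22             # metres
H0  = 70e3 / Mpc           # s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"rho_c ~ {rho_c:.1e} kg/m^3")   # ~9e-27: a few hydrogen atoms per m^3

# The budget above: baryons + dark matter fall well short of rho_c,
# and Omega_Lambda ~ 0.73 is what brings the total to ~1.
print(0.04 + 0.23 + 0.73)              # ~1.0
[/code]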

Garth
 
  • #58
Garth said:
In the standard model, DE is necessary to bolster the 4% baryonic matter and 23% Dark Matter to bring the total near the 100% critical density needed to account for the observed flatness, or near-flatness, of the geometry of space (WMAP observations etc.)

I would like to understand this better. There are two uses of "flat" (I think): one meaning uniformity of density, and one meaning a certain topological shape. I think you mean the latter, the topological shape of the universe? How does WMAP tell us the topological shape of the universe? [I am not disagreeing; this is just new subject matter for me]
 
  • #59
edpell said:
I would like to understand this better. There are two uses of "flat" (I think): one meaning uniformity of density, and one meaning a certain topological shape. I think you mean the latter, the topological shape of the universe? How does WMAP tell us the topological shape of the universe? [I am not disagreeing; this is just new subject matter for me]
By 'flat' I do mean the geometric shape of the 3D space foliation (slice) of the 4D space-time of the universe.

The surface could be spherical (a 3D version of the Earth's 2D surface), flat, or hyperbolic (saddle shaped), depending on how much average density there is in the universe. This is a basic property of the cosmological solution of Einstein's GR Field Equation.

You can tell the type of surface you are living in by studying the geometry around you.
A flat surface has triangles whose interior angles sum to 180°, a spherical surface has triangles whose angles sum to more than 180°, and a hyperbolic surface has triangles whose angles sum to less than 180°. Try it in 2D on different curved surfaces.
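A quick sanity check of the spherical case (the angle sum exceeds 180° by the triangle's area divided by R², in radians):

[code]
import math

# Spherical excess: angle sum = 180 degrees + area/R^2 (in radians).
R = 6.371e6                          # Earth's radius, metres

# Classic example: pole -> equator, a quarter of the way round the
# equator, then back to the pole. Three right angles, 1/8 of the sphere.
area = 4 * math.pi * R**2 / 8
excess = math.degrees(area / R**2)
print(180 + excess)                  # 270.0 -- three right angles
[/code]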

The WMAP observations are consistent with a flat surface.

This would require an average density equal to, or very nearly equal to, the critical density in Einstein's equations.

Garth
 
  • #60
edpell said:
I would disagree with the phrase that dark energy is a "very widely held view". I would agree that many people are aware of the idea. But since we have zero direct experimental data I doubt that everyone is on the bandwagon.

I was wrong; I withdraw the above statement.
 
  • #61
What happened to the cosmic coincidence issue for dark energy?

Has this been resolved? I hear very little talk of it these days. And it was a major reason for being cautious about dark energy.
 
  • #62
Garth said:
This would require an average density equal to, or very nearly equal to, the critical density in Einstein's equations.

In some thread the statement was made that in 1998 people realized that the shape (open versus closed versus flat) of the universe could be independent of the density of the universe. Was that a correct statement? You seem to be using the traditional view that shape is a function of density. They may have been saying that before 1998 people thought in terms of light and dark matter, and that now they have a new degree of freedom, dark energy, to work with.
 
  • #63
edpell said:
In some thread the statement was made that in 1998 people realized that the shape (open versus closed versus flat) of the universe could be independent of the density of the universe. Was that a correct statement? You seem to be using the traditional view that shape is a function of density. They may have been saying that before 1998 people thought in terms of light and dark matter, and that now they have a new degree of freedom, dark energy, to work with.

If the 1998 statement was an informed one (I do not know of it personally), then it must be referring to the amount of matter density in the universe.

It was in 1998 that the SN Ia paper was published that indicated we were living in an accelerating universe, which required some form of DE to be cosmologically predominant. Perhaps that is what you were referring to...

The cosmological geometry is necessarily connected with the average cosmological density of the sum of all constituents of the universe.

Garth
 
  • #64
Garth said:
The cosmological geometry is necessarily connected with the average cosmological density of the sum of all constituents of the universe.

Glad to hear this. This makes more sense to me.
 
  • #65
edpell said:
In some thread the statement was made that in 1998 people realized that the shape (open versus closed versus flat) of the universe could be independent of the density of the universe. Was that a correct statement?

I don't recall any PF thread that said that.

One thing that may help is to distinguish between the spatial shape and the 4D shape that describes the future as well.

The shape of space is determined by the overall density. Space might be closed, and have a finite volume. It might for example be a hypersphere, the 3D analog of the 2D surface of a balloon.
Or space might be flat. Or it might have negative curvature, analogous to a saddle surface.
Which case holds definitely depends on the overall density of matter and energy! I think this has always been clearly acknowledged in any PF thread I'm familiar with :biggrin:.

But the future of expansion is not so determined. We could be living in a universe which is spatially closed but "open" in the sense of being destined to expand forever.

This was what was generally realized around 1998. Before that, many treatments of this did not take account of the possibility of a positive cosmological constant, or dark energy.
It was assumed that if matter density was enough to guarantee a spatially closed universe then it was also sufficient to cause eventual collapse. Expansion would turn around and there would be a big crunch.

So in many people's minds, "closed" came to mean "destined to crunch".

What was generally realized in 1998 was that spatial closure does not necessarily imply destined to crunch.

Maybe it sounds obvious to you. And some people were always aware of the possibility of accelerated expansion à la dark energy; it just wasn't as generally realized as it is today.

The main thing is to be clear what you mean by "closed".

Spatial closure does not entail the crunch-closure of the future (as many used to think).
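A minimal way to see it numerically (a sketch with made-up closed-universe parameters, not fitted values): put a spatially closed parameter set into the Friedmann equation and check whether H² ever reaches zero in the future.

[code]
import numpy as np

# Friedmann equation with a = 1 today:
#   H(a)^2 / H0^2 = Om/a^3 + OL + Ok/a^2,  where Ok = 1 - Om - OL.
# Illustrative values, chosen closed (Om + OL > 1); not fitted numbers.
Om, OL = 0.35, 0.75        # total = 1.10 -> spatially closed
Ok = 1.0 - Om - OL

a = np.linspace(1.0, 100.0, 100_000)
H2 = Om / a**3 + OL + Ok / a**2

# If H^2 never reaches zero for a > 1, expansion never turns around.
print("recollapses" if (H2 <= 0).any() else "expands forever")
[/code]

With the positive cosmological constant in there, this spatially closed universe still expands forever; with OL = 0, any closed (Om > 1) universe recollapses, which is the old intuition.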
 
  • #66
Despite the many clarifications and ramifications of this thread, I'm still trying to figure out how important (or not) Wiltshire's approach is. This is where I'm at now:

It seems to me that there are two sides to the concept of Dark Energy.

The first is its big-gorilla-in-the-room aspect -- the accepted view that it makes up 70% of all gravitating mass/energy in the always-near-flat FLRW model universe. This aspect is strongly motivated by the interpretation of the WMAP results --- which emanate from the early universe.

Dark Energy's second (and for me less striking) aspect is that it can explain a late-epoch acceleration of the universe's expansion, indicated by a slight non-linearity at the upper end of the Hubble plot, where type Ia supernovae are used as standard candles.

Although Wiltshire does not make this distinction explicitly, he seems to be considering only the contribution of inhomogeneities like sheets and voids to the second, acceleration, aspect of dark energy. To me this looks like finessing the mystery of dark energy in an early flat universe by omission.

Is Wiltshire implying that there are (at least) two different kinds of dark energy -- the big-gorilla unknown kind in an early flat universe and the GR (when properly modeled with lumps and voids) kind in our present-day universe?
 
  • #67
oldman said:
Although Wiltshire does not make this distinction explicitly, he seems to be considering only the contribution of inhomogeneities like sheets and voids to the second, acceleration, aspect of dark energy. To me this looks like finessing the mystery of dark energy in an early flat universe by omission.
...

That's how it looks to me as well. There are other reasons to entertain the idea of dark energy. It makes other things work out. As you indicate, our matter density is only 30% of what is needed (without dark energy) to get the observed flatness.
So it is not merely an explanation of the slight late-time acceleration, yet Wiltshire seems concerned with that alone.

I may be doing him an injustice. Someone who has read his papers more thoroughly and thought more about his proposal could help by jumping in here. I would be glad to be corrected.

I admire Wiltshire's nerve and think he is doing exactly what mainstream people ought to, now and then, which is raise hell and kick the envelope.
 
  • #68
oldman said:
Despite the many clarifications and ramifications of this thread, I'm still trying to figure out how important (or not) Wiltshire's approach is.

The big contribution Wiltshire is making is that he has come up with a way of doing the calculation so that you get results you can compare with observational data. Looking at the references in his paper, that's pretty huge. Previous papers used a very, very simplified and unrealistic model of the universe just to get the result that there might be an issue.

What Wiltshire has done is take that idea, come up with realistic calculations, and then "translate" the results into standard observational outputs.

oldman said:
Although Wiltshire does not make this distinction explicitly, he seems to be considering only the contribution of inhomogeneities like sheets and voids to the second, acceleration, aspect of dark energy. To me this looks like finessing the mystery of dark energy in an early flat universe by omission.

Ummm... now that you mention it, I think he is doing that. His model doesn't work at all for the early universe, because there you don't have any inhomogeneities that can cause issues.

oldman said:
Is Wiltshire implying that there are (at least) two different kinds of dark energy -- the big-gorilla unknown kind in an early flat universe and the GR (when properly modeled with lumps and voids) kind in our present-day universe?

I don't think he has gotten that far yet.
 
  • #69
marcus said:
I may be doing him an injustice. Someone who has read his papers more thoroughly and thought more about his proposal could help by jumping in here. I would be glad to be corrected.

I don't think the idea that inhomogeneities could be the cause of acceleration is really his; it seems to be the idea of various people he cites. But it really can't be the cause of dark energy in the early universe, at least not in a straightforward way.

The basic idea is that, from a global reference frame, clocks seem to slow down when you are near a gravitationally strong object. So if you assume that we are in a void, then as the void develops, your clock and the clocks of nearby objects seem to speed up, which gives rise to the illusion of acceleration. What you *can* do is create a density evolution that matches supernova Ia observations.

However, for this to work, you have to assume that the local density of the universe is really, really low (i.e. almost zero). I have problems with this assumption. The other problem is that to match the observations you have to fine-tune the void in ways that look suspicious (i.e. it has to be spherical and we have to be in the middle of it). Now, you *may* be able to create a more realistic model of voids by assuming a scattering of lots of small voids instead of one big one. However, this makes the math really messy, and Wiltshire's contribution is to show a way of doing that calculation.
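For what "matching supernova Ia observations" amounts to in practice, here is the standard comparison from the LCDM side (textbook formulas, not the void calculation itself): the distance modulus at z ~ 0.5 in a flat LCDM universe versus a flat matter-only one.

[code]
import numpy as np
from scipy.integrate import quad

# Distance modulus vs redshift for flat FRW models, H0 = 70 km/s/Mpc.
# Textbook formulas -- the LCDM side of the comparison, not a void model.
c_over_H0 = 2.998e5 / 70.0                 # Mpc

def dist_modulus(z, Om, OL):
    # Flat case (Om + OL = 1): D_L = (1 + z) * (c/H0) * integral dz'/E(z')
    E = lambda zp: np.sqrt(Om * (1 + zp)**3 + OL)
    D_C, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    D_L = (1 + z) * c_over_H0 * D_C        # luminosity distance, Mpc
    return 5 * np.log10(D_L * 1e6 / 10)    # modulus, D_L in units of 10 pc

z = 0.5
print(f"{dist_modulus(z, 0.3, 0.7) - dist_modulus(z, 1.0, 0.0):+.2f} mag")
# ~ +0.4 mag: supernovae look fainter than the matter-only prediction,
# and that offset is the "acceleration" any alternative has to reproduce.
[/code]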

marcus said:
I admire Wiltshire's nerve and think he is doing exactly what mainstream people ought to, now and then, which is raise hell and kick the envelope.

It's also good that he is in a "publication cluster" of people working on the same idea. If it turns out that they get lucky, one of them will get the Nobel prize. The reason I'm interested in what they are doing is that even if they don't get lucky and it turns out they are totally wrong, a lot of what Wiltshire is doing seems to be salvageable for GR calculations.
 
  • #70
edpell said:
I would disagree with the phrase that dark energy is a "very widely held view". I would agree that many people are aware of the idea. But since we have zero direct experimental data I doubt that everyone is on the bandwagon.

Most people are, since the cosmological models just won't work without either dark energy or something else happening. Dark energy is the "least weird" of the options right now. Also, we have tons and tons of experimental data, and the models just won't fit it unless you assume dark energy or some alternative.

Most people didn't think it was necessary to include dark energy in cosmological models before the 1998 supernova observations. The reason those are important is that they are pretty direct and don't involve model assumptions. There are other reasons to think that dark energy exists from the early universe, but the problem with those is that you are using a model to infer things from the observations, so there is a good chance you end up with circular reasoning.

The 1998 supernova observations were important because supernovae have nothing to do with cosmology, and the observations don't require any cosmological assumptions to process. In 1997 it was possible to argue away dark energy and end the conversation; that wasn't possible in 1999. You can still argue that dark energy doesn't exist, but you have to propose an alternative, which you didn't have to do in 1997, and most models before 1997 just assumed that there wasn't dark energy or any alternative.
 
