Perlmutter & Supernovae: Debunking the Myth of Accelerating Galaxies

  • Thread starter gork
  • Tags
    Supernovae
In summary: Perlmutter says that galaxies are accelerating away from each other. He bases this on the fact that things that are farther away from us are moving faster than things which are closer. The problem is that we see things farther away from us as they were farther in the past. So quasars at the edge of the visible universe were traveling at .9c 13.7 billion years ago. Galaxies half as far away were traveling half that speed 7 billion years ago, or whatever the numbers are. Andromeda is actually moving closer to us and that is still 2.5 million years ago. The evidence seems to me to indicate, not that things are accelerating, but that they are slowing down. We have no idea what
  • #36
twofish-quant said:
That just doesn't make any sense to me. The supernova data is just one piece of the puzzle and has to be understood in the context of other data. In order to do anything with the supernova data, you have to make hundreds of assumptions, and there is no way that you can justify those assumptions without reference to other data. If you had only the supernova data, then you could come up with a lot of alternative explanations.

Thank you for conceding my point.

twofish-quant said:
It turns out that none of those alternative explanations work in light of other data. If Perlmutter had published and a year later it had turned out that SN Ia were not standard candles, that there was large-scale evolution of SN Ia, or that dark flows were much stronger than expected, it would have been a "merely interesting" paper but not worth a Nobel.

Now as it happens subsequent experiments have tightened error bars, and WMAP shows consistent CMB evidence.

Also note that Perlmutter went in with a hundred assumptions. He was trying to measure the deceleration parameter, which was expected to be positive.

As long as you confine yourself to unmodified GR, I would agree that the alternative explanations do not suffice in light of WMAP.

twofish-quant said:
I'm an astrophysicist, not a mathematician. I don't deal with proof. I can't "prove" that the universe is accelerating any more than I can "prove" that the Earth is round.

No single scientific observation is "proof" of anything. You have to view observations as fitting within a model, and right now the idea that universe is accelerating is the best model that people have come up with, and the fact that people have tried really hard and failed to come up with alternative models should tell us something.

I can't *prove* that there isn't a simple explanation that explains everything. I can show that people have tried and failed to come up with an alternative explanation, and the most obvious model right now is pretty darn interesting.

I can't "prove" something is true. I can "prove" something is false, and Perlmutter kills CDM with Lambda=0, which was the standard cosmological model in 1995. The simplest theoretical patch is to assume that Lambda > 0.

Well said.

twofish-quant said:
Very strongly disagree. If Perlmutter et al. just got distance moduli for large redshift supernovae and got what people expected, that wouldn't be worth a Nobel. They got the Nobel because they did the observations and got results that *no one* expected.

Of course it's a value judgment, but I think getting that data is extremely valuable even if it had shown what we expected.


twofish-quant said:
The revolutionary part was that Perlmutter came up with numbers that cannot be explained without really weird stuff happening. Even if it turns out that the universe is not accelerating, the way that we thought the universe worked in 1997 just will not work with his observations.

Agreed, so again, give him the Nobel for what he did, not some particular inference.
 
  • #37
You're arguing semantics or something, RUTA. The observations by Perlmutter and others lead directly to the conclusion of an accelerating expansion. Quoted from nobelprize.org: "The Nobel Prize in Physics 2011 was divided, one half awarded to Saul Perlmutter, the other half jointly to Brian P. Schmidt and Adam G. Riess for the discovery of the accelerating expansion of the Universe through observations of distant supernovae".

I'm going to side with the Nobel Prize committee on this one.
 
  • #38
RUTA said:
As long as you confine yourself to unmodified GR, I would agree that the alternative explanations do not suffice in light of WMAP.

I don't know of any modifications of GR that will let you avoid an accelerating universe (the Wiltshire model claims not to modify GR). The modified theories of gravity that I'm aware of, namely the f(R) models, attempt to explain acceleration without invoking dark energy, but the universe is still accelerating. The problem is that the sign is wrong. Any modified gravity model would be expected to be close to GR at short distances and different at far distances. However, the observations show maximum acceleration at short distances and lower acceleration at long distances. So if you are trying to show that the acceleration isn't real, then you have to modify gravity the most at short distances which then runs into problems in that we have a lot of data that suggests that GR works at short distances.

If you want to modify GR to explain how the acceleration came about, that's not hard, and there is an entire industry devoted to that, and hundreds of papers on that topic. If you want to modify GR to argue that the acceleration doesn't exist, that's really, really, really hard, and I don't know of anyone that has been able to do that.

The other thing is that the supernova data has changed the definition of GR. In 1995, if you asked people to write down the equations of GR, they would have written it with the cosmological constant = 0. Einstein called the cosmological constant his biggest mistake, and for sixty some years people agreed with that. Today, "unmodified GR" has a non-zero cosmological constant.

Something that is very important about the data is that the signal is huge. If he had found that q = -0.01 or even q = -0.1, then you could come up without much difficulty with reasons why the universe may not be accelerating, and that this whole thing is just some misinterpretation of the data. As it is, q = -0.6, which is way, way bigger than anything that people have been able to come up with.
 
  • #39
twofish-quant said:
I don't know of any modifications of GR that will let you avoid an accelerating universe (the Wiltshire model claims not to modify GR). The modified theories of gravity that I'm aware of, namely the f(R) models, attempt to explain acceleration without invoking dark energy, but the universe is still accelerating. The problem is that the sign is wrong. Any modified gravity model would be expected to be close to GR at short distances and different at far distances. However, the observations show maximum acceleration at short distances and lower acceleration at long distances. So if you are trying to show that the acceleration isn't real, then you have to modify gravity the most at short distances which then runs into problems in that we have a lot of data that suggests that GR works at short distances.

If you want to modify GR to explain how the acceleration came about, that's not hard, and there is an entire industry devoted to that, and hundreds of papers on that topic. If you want to modify GR to argue that the acceleration doesn't exist, that's really, really, really hard, and I don't know of anyone that has been able to do that.

The other thing is that the supernova data has changed the definition of GR. In 1995, if you asked people to write down the equations of GR, they would have written it with the cosmological constant = 0. Einstein called the cosmological constant his biggest mistake, and for sixty some years people agreed with that. Today, "unmodified GR" has a non-zero cosmological constant.

Something that is very important about the data is that the signal is huge. If he had found that q = -0.01 or even q = -0.1, then you could come up without much difficulty with reasons why the universe may not be accelerating, and that this whole thing is just some misinterpretation of the data. As it is, q = -0.6, which is way, way bigger than anything that people have been able to come up with.

I agree. My point was simply that if there is any hope of doing away with accelerated expansion, I think it's safe to say at this point it would have to deviate from GR cosmology.
 
  • #40
Drakkith said:
You're arguing semantics or something, RUTA. The observations by Perlmutter and others lead directly to the conclusion of an accelerating expansion. Quoted from nobelprize.org: "The Nobel Prize in Physics 2011 was divided, one half awarded to Saul Perlmutter, the other half jointly to Brian P. Schmidt and Adam G. Riess for the discovery of the accelerating expansion of the Universe through observations of distant supernovae".

I'm going to side with the Nobel Prize committee on this one.

Once you have the understanding explicated by twofish, you're good to go, i.e., you know what was actually done and whence the conclusion of accelerated expansion. However, if you're a layperson who hasn't been exposed to a discussion as in this thread, you might well conclude the supernova data constituted a direct measurement of acceleration, e.g., velocity as a function of time as in drag racing. Now you realize that the supernova data in and of itself does not necessitate accelerated expansion, but must be combined with the assumption of GR cosmology and other data. The conclusion of accelerated expansion follows from a robust set of assumptions and data, but the Nobel recipients were not responsible for that entire set, only the supernova data. As evidence of the potential for confusion caused by statements such as found in the Nobel prize citation, just look at how this thread started.
 
  • #41
RUTA said:
I agree. My point was simply that if there is any hope of doing away with accelerated expansion, I think it's safe to say at this point it would have to deviate from GR cosmology.

Much, much more serious than that. The deceleration parameter q doesn't assume anything about the gravity law. The only assumption is that the universe is at large scales isotropic and homogeneous. Any theory of gravity that is isotropic and homogeneous at large scales (GR or no) is not going to make a difference.

In fact, at small scales the universe isn't isotropic and homogeneous, which is why the first calculation that people did was to see what impact anisotropy and inhomogeneity would have. Right now the only things that would kill the results are essentially data-related issues (i.e. we really aren't measuring what we think we are, or massive underestimates of anisotropy and inhomogeneity in the near region).
 
  • #42
RUTA said:
However, if you're a layperson who hasn't been exposed to a discussion as in this thread, you might well conclude the supernova data constituted a direct measurement of acceleration, e.g., velocity as a function of time as in drag racing.

Well it is.

The measurements of supernova acceleration are no less "direct" than any other scientific measurement. If you try to measure the speed of a speeding car, you are bouncing radar waves off the car or doing some other processing.

Yes, you can misinterpret the results, but this is no worse than any other measurement, and the reason I'm hammering on this issue is that the supernova measurements *AREN'T* any less direct than a traffic cop using a radar device to track speeders.

Now you realize that the supernova data in and of itself does not necessitate accelerated expansion, but must be combined with the assumption of GR cosmology and other data.

Except you *DON'T* have to assume GR cosmology. You *do* have to make some assumptions, but those assumptions are no worse than those that you have to make if you try to measure velocity with a radar gun. If you try to measure the speed of a drag racer with a radar gun, you have to assume that the speed of light is a particular value, certain things about Doppler shift, etc. etc.

The *ONLY* reason that people started questioning the assumptions of the measurements to the extent that they did was that results were so weird. If you clock a drag racer going 0.1c with your radar gun, your first reaction is going to be that your radar gun is broken.

The conclusion of accelerated expansion follows from a robust set of assumptions and data, but the Nobel recipients were not responsible for that entire set, only the supernova data.

So what?

And they weren't responsible for all of the supernova data. In fact both data sets were gathered by teams of dozens of people, and practically the entire astronomy community was involved in checking and cross-checking the results. The reason I got to listen to the conversations is that my adviser happens to be one of the world's foremost experts on supernova Ia, and the other person in the room was an accretion disk expert who dabbles in cosmology, and we were trying to figure out whether or not Ia evolution or inhomogeneity would kill the results.

The Nobel prize does contribute to the misconception of the lone scientific genius, but that's another issue. I suspect one reason I'm getting emotional about this particular issue is that I was involved in figuring out what was going on, and in some sense when they gave the Nobel to the supernova researchers, they were also giving it to me and several thousand other people that were involved in putting together the results.

As evidence of the potential for confusion caused by statements such as found in the Nobel prize citation, just look at how this thread started.

The thread started when you had someone that was simply unaware of what data existed. Also one reason I think that the Nobel prize wording is correct is that sometimes there is too much skepticism. The supernova results are extremely solid and hedging on the wording suggests that there is reasonable room for scholarly debate as to whether or not they are measuring acceleration when in fact there isn't.

This matters less for supernova, but it matters a lot for evolution and global warming. There is no reasonable scholarly debate as to whether or not evolution happens or that global warming is happening because of CO2 input. A lot of debate on the details, and one nasty political tactic is to take debate on the details as people disagreeing with the main premise.

In any case, you aren't going to get "less confusion," because no matter how you teach your students, once they leave your lectures and attend mine, they are going to hear me very strongly contradicting your choice of wording.
 
  • #43
I think you are misinterpreting my position. You asked for *proof* that the universe is accelerating, and by that I take you to mean mathematical proof; you can't prove physical results mathematically, so you should be asking for levels of evidence instead. Let's use legal terminology.

The Perlmutter results and everything known as of 1998 would, in my opinion, establish that the universe is accelerating by the preponderance of the evidence. What that means is that if you had a civil court case in which $1 billion would change hands if the universe is accelerating, then I'd vote guilty; but I would not vote to sentence a person to death based on the evidence in 1998, since that requires "proof beyond a reasonable doubt." Today, I think that the evidence does establish legal proof beyond a reasonable doubt, so if I were on a jury in which someone was subject to the death penalty based on evidence of the accelerating universe, I'd vote to convict.

Now in order to get from raw data to inference, you have to run through several dozen steps, and there is the possibility of human error and misinterpretation at each step. One important thing about the original results was that they were released by two separate teams, which is important. The results are so weird that if only one team had released them, the reaction of most people would have been that they just messed up somewhere. Maybe there was interference from a television set. Maybe someone didn't take into account the motion of the earth. Maybe someone cleaned the telescope without the team knowing. (These aren't hypotheticals, they've actually happened.) Once you have two separate teams with different telescopes, different computers, different algorithms, and different people coming up with the same answer, then a lot of the alternative explanations disappear. It's not a computer bug.

Now suppose we have a magic oracle tell us that we got everything up to the distance modulus versus redshift right, and let's suppose that we have this magic oracle tell us that we are in fact measuring lookback time versus velocity. At that point, by definition, *something must be accelerating*. If the redshifts are actually due to velocity, and the distance modulus translates to lookback time, we see the velocities decrease over time, and that's the definition of acceleration. The question is "what is accelerating?" It could be the Earth or it could be the galaxies. It could also be the Hubble flow or it could be something else.

Based on the best numbers in 1998, non-Hubble acceleration of the Earth can only account for about a third of the signal. Now maybe those numbers are wrong, but people have checked since then, and the limits haven't changed. There are also some other possibilities. The big one in my mind is that since we do not know exactly what causes SN Ia, there is the possibility that SN Ia might change radically over time. However, for this to influence the results, you would have to have SN Ia evolution that has never been observed. Still, it's a possible hole, and that hole has been filled as we now have distance measures that have nothing to do with SN Ia which show the same results.

Note here that I've not mentioned GR or dark matter or anything "esoteric." That's because none of that influences the results. The point that I'm making is that the SN results are as "direct" a measurement as you can make, and there is no more room for model-related skepticism than there is for mistrusting your GPS.
 
  • #44
Again, I'm not questioning the data, but to say that it constitutes a direct measurement of acceleration in the same sense as in measuring objects moving on the street outside my house is wrong. There are assumptions one has to make with cosmology that one does not have to make with observations here on Earth. For example, you say cosmological z produces a velocity, but you can only make that connection in the context of a cosmology model. I don't have nearly that degree of uncertainty with rendering a Doppler z for race cars. In fact, I can use people making spatiotemporally local measurements on the car to get velocity. A cosmology model is also required to turn mu into distance. I don't have nearly that degree of uncertainty in knowing how far the race car is going down the road; I can measure that distance directly by actually traversing the spatial region personally.

I'm sure you feel the assumptions you are making are reasonable. Brahe and proponents of geocentrism also thought their assumptions were reasonable. When it comes to announcements concerning cosmology, I think we better stick as closely as possible to what we actually observe and distinguish clearly between those observations and model-dependent inferences.
 
  • #45
RUTA said:
Again, I'm not questioning the data

You should.

but to say that it constitutes a direct measurement of acceleration in the same sense as in measuring objects moving on the street outside my house is wrong.

One important point here was that the important quantity that people were trying to measure was q, which is different from measuring the acceleration of an individual galaxy. If you had an oracle that gave you the acceleration of each galaxy, you'd still have to strip out the peculiar acceleration of each galaxy to get to the Hubble flow.

What people were trying to find was

q = (speed galaxy 1 - speed galaxy 2) / (time galaxy 1 - time galaxy 2) averaged over galaxies at time 1 and time 2. That's different from the rate of change of a specific galaxy, which you can't get. However, even if you could get it, you'd still have to do statistics to get to the number you are really interested in.
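
Just to put rough, assumed numbers on why you have to strip out peculiar motions and average over many galaxies (a toy sketch; H0 = 70 km/s/Mpc and a peculiar velocity of a few hundred km/s are illustrative round numbers, not anyone's measured values):

```python
# Toy comparison (assumed numbers): typical peculiar velocity vs. Hubble-flow
# velocity at different redshifts.
c = 299792.458      # speed of light, km/s
H0 = 70.0           # assumed Hubble constant, km/s/Mpc
v_pec = 300.0       # assumed typical peculiar velocity, km/s

for z in (0.01, 0.1, 0.5):
    v_hubble = c * z    # low-z approximation to the recession velocity
    print(f"z = {z:4}: Hubble flow ~ {v_hubble:8.0f} km/s, "
          f"peculiar contamination ~ {v_pec / v_hubble:.1%}")
# The contamination is ~10% at z = 0.01 but well under 1% by z = 0.5, which is
# why individual peculiar motions wash out once you average over the sample.
```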

There are assumptions one has to make with cosmology that one does not have to make with observations here on Earth. For example, you say cosmological z produces a velocity, but you can only make that connection in the context of a cosmology model.

This is wrong. It's just Doppler shift. There are some corrections that are model dependent, but these aren't large, and can be ignored if you are getting rough numbers. You *do* have to make some assumptions (i.e. the shift comes from velocity and not from gravity), but those assumptions can be cross checked, and are independent of the cosmological model.

I don't have nearly that degree of uncertainty with rendering a Doppler z for race cars. In fact, I can use people making spatiotemporally local measurements on the car to get velocity.

You really should be more skeptical of local measurements.

All your measurements are still indirect in the sense that you are making assumptions in order to get the numbers. If you are looking at the car in front of you, you are still interacting with the car using various forces and fields. And it turns out that you can get things wrong. I've had to deal with broken speedometers.

A cosmology model is also required to turn mu into distance.

A model is necessary to turn mu into distance, but it's not a cosmological model. There are assumptions that you have to make in order to turn mu into distance, but those are not cosmological.

I'm sure you feel the assumptions you are making are reasonable.

That's funny, because I'm not. Don't assume. What you do is show that number x comes out of calculations a, b, c, and d. You then go back to each item and see if a, b, c, and d can be justified with observational data. You also ask yourself: suppose we are wrong about assumption a, what would be the impact of being wrong?

The issue with the accelerating universe is that it turns out the signal is so large that if you kill some major assumptions, it doesn't matter. OK, let's assume that GR is wrong. Does that change the conclusion? No, it doesn't. Let's assume that we add a large amount of dark flow. Does that change the conclusion? No, it doesn't.

Brahe and proponents of geocentrism also thought their assumptions were reasonable.

And it turns out that the Tychonic system works just as well as the Copernican system, since they are mathematically identical.

In any case, I'm jumping up and down, because this is precisely what people are *NOT* doing. You go through the results and see how the conclusions are impacted by the assumptions. You'll find that a lot of the assumptions don't make a difference. If it turns out that the big bang never happened and we are in a steady state universe, then it doesn't impact the results.

You then have a list of things that might impact the results, and then you go back and double check those results.

When it comes to announcements concerning cosmology, I think we better stick as closely as possible to what we actually observe and distinguish clearly between those observations and model-dependent inferences.

You are observing raw CCD measurements. The big question marks are in deriving Z and distance.

The point that I'm making is that the results are nowhere near as model dependent as you seem to think they are. You don't have to assume GR. You don't have to assume any particular cosmological model. You *do* have to make some assumptions, and the original papers did a good job of listing all of them. The thing about Perlmutter is that the observation is so "in your face" that when you ask what happens if you make different assumptions, the signal just does not disappear.

Also, as a matter of observation, you *can't* distinguish clearly between observations and inferences. You aren't reading a z-meter. You have CCD readings. Getting from them to z requires two or three dozen steps, each of which contains assumptions. Some of those assumptions are things that are "obvious" (i.e. you have to subtract out the motion of the earth), but it turns out that you have to keep a list because they could give you spurious results if you do them wrong.

Look, the problem that I have is when skepticism, which is healthy, becomes "stick your head in the sand and ignore reality," which is very unhealthy. For supernova data this doesn't matter that much, but you see the same thing with climate models and evolution, where it does matter a lot. The point that I'm making is that the supernova results *DO NOT DEPEND ON THE SPECIFIC COSMOLOGICAL MODEL*. The signal is just too strong.
 
  • #46
The reason I'm hitting on this issue is that people tend to think of cosmology as a form of philosophy which is bad because cosmology is an *observational* science. We have a ton of data that is coming in from all sorts of instruments, and once you have a ton of data, then the room for "random speculation" goes down.

Once you get to the point where you are measuring distance modulus and redshifts, and once you have convinced yourself that the redshift is in fact a velocity (which has nothing to do with your cosmological model), then the only assumption that you need to make to get to the accelerating universe is that of homogeneity and isotropy. The way you deal with this is the same way you deal with the round earth. You can do a rough calculation assuming the Earth is perfectly spherical, and then you figure out how much the known differences from a perfectly spherical shape change your result.

You do the same for cosmology. You do a calculation assuming that the universe is perfectly homogeneous and isotropic, and you get a number. You then put in the known deviations from perfect smoothness, and the result you get is that those deviations can change things by q = 0.3 at most, which still gets you an accelerating universe. You calculate how much of a deviation from smoothness would kill the result, and it turns out that this is excluded.

Note that all of this is based on *data*. The reason that I'm hammering on this issue is that you don't want to give the public the idea that things are more uncertain than they are. The types of uncertainties that you get from cosmological measurements are the same types of uncertainties you get measuring anything else, and they are no more "indirect" than measuring speed with a Doppler radar. You verify correctness the same way we do with any Earth based measurement which is to have multiple independent methods and see if they give you the same answer, and we have the data to do this.
 
  • #47
To fit mu vs z (the data) using GR cosmology, you find the proper distance (Dp) as a function of redshift z using your choice of a particular model, which gives you the scale factor as a function of time a(t), with 1/(1+z) = a(te) where te is the time of emission (this assumes the current a(to) = 1, which you're free to choose). This form of redshift is independent of your choice of GR cosmology (but not independent of your choice of cosmology model in general), and it is a redshift (not a blueshift) if the scale of the universe is larger now than at emission regardless of the relative velocities of emitter and receiver at emission or reception, i.e., this is NOT a Doppler redshift. Next you have to convert Dp to luminosity distance (DL), which is a model-dependent relationship, then DL gives you mu. As you can see, this fit is highly dependent on your choice of cosmology model. If it wasn't, how could the data discriminate between the models?
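
To make that chain concrete, here is a minimal toy sketch (my own illustrative parameters, assuming a flat FRW model with Omega_m = 0.3, Omega_Lambda = 0.7, H0 = 70 km/s/Mpc; this is not anyone's actual fitting code) going from z to a proper distance, to DL, to mu:

```python
import numpy as np

# Toy z -> mu pipeline for an assumed flat FRW model (illustrative parameters).
c = 299792.458        # km/s
H0 = 70.0             # assumed, km/s/Mpc
Om, OL = 0.3, 0.7     # assumed matter and Lambda densities

def E(z):
    """Dimensionless expansion rate H(z)/H0 for the assumed flat model."""
    return np.sqrt(Om * (1.0 + z)**3 + OL)

def distance_modulus(z, n=2000):
    # The model ties z to the scale factor at emission via 1/(1+z) = a(te),
    # with a(now) = 1; the distance comes from integrating over that history.
    zs = np.linspace(0.0, z, n)
    Dp = (c / H0) * np.trapz(1.0 / E(zs), zs)   # proper (comoving) distance, Mpc
    DL = (1.0 + z) * Dp                         # luminosity distance, flat case
    return 5.0 * np.log10(DL) + 25.0            # mu = 5 log10(DL / 10 pc)

for z in (0.05, 0.5, 1.0):
    print(f"z = {z}: mu = {distance_modulus(z):.2f}")
```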

And, think about it, the model that best fits this data tells us the universe was first decelerating and then changed to acceleration. If you believe that's true and you also (erroneously) believe the data gives you time rate of change of velocity directly and independent of model, then you would see the acceleration in nearby (small z) galaxies and deceleration in the most distant galaxies (large z). But if that was true, we would've known about the accelerated expansion all along, the large z would only be used to find the turning point between acceleration and deceleration. But that's not what happened, decelerating models fit the small z fine and no one suspected accelerated expansion. It's the large z that keeps us from fitting a decelerating model.

I'll stop here and let you respond. I'm on the road and posting is non-trivial so I apologize for the terse nature of this response.
 
  • #48
RUTA said:
Next you have to convert Dp to luminosity distance (DL), which is a model-dependent relationship, then DL gives you mu. As you can see, this fit is highly dependent on your choice of cosmology model.

But my point is that the model dependence does not affect the conclusion. You can assume that GR is wrong and then create an alternative theory of gravity by changing the parameters in the equations. GR may be wrong but we have enough observations so that we know that GR is a good enough approximation to within some error.

Once you do that you quickly figure out that the signal is strong enough so that if you put in any plausible non-GR model (i.e. one that isn't excluded by other observations), you still end up with acceleration. The only way out is if you assume that there is some effect which is not taken into account by your parameterization. If you just use a single scale factor then you aren't taking into account dark flows and voids. Once you take those into account, you still can't get rid of the signal without some very tricky arguments. At that point you try to think of anything else that you may have missed, and after ten years of doing that, you start to think that you didn't miss anything.

What you can do is make a list of all the assumptions that go into the conclusion. We don't know if GR is correct, but we know that the "real model of gravity" looks like GR within certain limits. You then vary the gravity model within the known observational limits, and it turns out that it doesn't make that much difference. There are other parts of the problem that make a lot more difference than the gravity model (like the assumption that SN Ia are standard candles).

If your point is that the supernova measurements cannot be interpreted without reference to other data, sure. But that's true with *any* measurement. I have a GPS device. That device gives me my position, but it turns out that there are as many if not more assumptions in the GPS result than in the supernova measurement. If I assume GR is wrong and Newtonian gravity is correct, it turns out that this doesn't change my conclusions w.r.t. supernova. However it does change my GPS results.

if the scale of the universe is larger now than at emission regardless of the relative velocities of emitter and receiver at emission or reception, i.e., this is NOT a Doppler redshift.

Then we get into another semantic argument as to "what is a Doppler redshift?" and then "what is an acceleration?". You can do a lot of cosmology with Newtonian gravity. It's wrong but it's more intuitive. There is a correspondence between the term acceleration in the Newtonian picture and the GR picture. Similarly, there is a correspondence between the concept of "Doppler shift" in Newtonian cosmology and that of GR.

The reason that these correspondences are important is that they let you take observations that are done in "Newtonian" language and then figure out what they mean in GR language. Sometimes semantics are important. If you do precision measurements, then you have to very clearly define what you mean by "distance," "brightness," and "acceleration."

But in this situation it doesn't matter. You use any theory of gravity that isn't excluded by observation and any definition of acceleration that you want, and you still end up with a positive result.

And, think about it, the model that best fits this data tells us the universe was first decelerating and then changed to acceleration. If you believe that's true

That doesn't make sense to me. In order to get scientific measurements, you have to make assumptions, and it's important to know what assumptions you are making, and to minimize those assumptions.

In order to do the conversion from brightness distance to some other distance, you have to make assumptions about the theory of gravity from distance 0 to the location that you are looking at. You *don't* have to make any assumptions about anything more distant.

Now it turns out that if you assume that you have a cosmological constant, you get a nice fit, but it's a really, really important point that this assumption was not used to get to the conclusion that the universe is accelerating. This matters because in order to interpret the supernova results, you have observations that limit what you can do to gravity. Now if you go into the early universe, you can (and people do) make up all sorts of weird gravity models.

It's important to keep things straight here, to make sure that you aren't doing any sort of circular reasoning. If it turns out that assuming GR is correct was critical to getting the conclusions we are getting, then that is a problem because things get circular.

You also (erroneously) believe the data gives you time rate of change of velocity directly and independent of model

The observers were measuring q. You get q by performing mathematical operations on the data. Now what q means, is something else. Within the limit of models of the universe that are not excluded by other observations, the observed q=-0.6 means that you have an accelerating universe.

I'm asserting that for these particular results, gravity-model dependence doesn't introduce enough uncertainty to invalidate the conclusions. The model *does* influence the numbers, but whether or not that matters is another issue. For the supernova situation, the model dependencies aren't enough to allow for non-acceleration.

What I'm asserting is that if you plot the results on a graph and then include all possible values for the acceleration/deceleration of the universe, then anything with non-acceleration is excluded.

But if that was true, we would've known about the accelerated expansion all along, the large z would only be used to find the turning point between acceleration and deceleration.

No we wouldn't because for anything out past a certain distance we don't have any good independent means of measuring distance other than redshift. For anything over z=1, all we have is z, and there isn't any independent way of turning that into a distance. We can make some guesses based on things that have nothing to do with direct measurements, but unlike the supernova measurements those are just guesses that could very easily be wrong.

Also this matters because the assertion that the universe is decelerating at early times and that this deceleration turned into an acceleration *is* heavily model dependent. If we've gotten our gravity models wrong, then most of the evidence that indicates that the universe is decelerating at early times just evaporates. Now people are extending the supernova data to regions where we should see the universe decelerating (interesting things with GRB's).

I suppose that's one more reason to make the distinction between what we "know" and what we are guessing. Up to z=1, we know. We might be wrong but we know. For z=7, we are guessing.

This is also where the gravity models come in. For z=1, you look at the list of gravity models that are not excluded by observation, and the impact of the gravity model turns out not to be important. There is an impact, but it doesn't kill your conclusions. At z=5, then it does make a huge difference.

But that's not what happened, decelerating models fit the small z fine and no one suspected accelerated expansion. It's the large z that keeps us from fitting a decelerating model.

Decelerating models fit small Z (z<0.1) fine. Accelerating models also fit small Z (z<0.1) fine. The problem is that before we had supernova, we had no way of converting between z and "distance". We do now.

I'll stop here and let you respond. I'm on the road and posting is non-trivial so I apologize for the terse nature of this response.

The problem is the dividing line between what we "know" and what we are guessing. If we were talking about using the WMAP results to infer the early expansion of the universe, then we are guessing. In order to go from WMAP to expansion rate, we have to make a lot of assumptions, and those assumptions are not constrained by data. We get nice fits if we assume GR, but GR could be wrong, and for z=10, the possibility that we've gotten gravity wrong is enough that it could totally invalidate our conclusions.

I'm trying to argue that this is not the situation with supernova data.
 
  • #49
One other thing that may be a little confusing is that both Perlmutter and Riess reported their results in the language of GR. That doesn't mean that GR is essential for their results to be correct, any more than the fact that they used earth-centered coordinates to report the positions of their objects means that they think that the Earth is in the middle of the universe. It so happens that those are the most convenient coordinates to report your results in. We don't know if GR is correct. We do know that the "real theory of gravity" looks a lot like GR at short distances.

So the way that I'd read the Riess and Perlmutter results is as "here are the numbers that you get if GR were correct." Now if you have another model of gravity, you can "translate" those numbers into an "effective omega." At that point the theorists go crazy and see if they can come up with a model of gravity that matches those numbers. You run your new model of gravity and then check that "omega" in your model means the same thing as "omega effective" in GR, and that makes it easy to describe 1) how different your model is from GR and 2) how well your results match the SN results.

What happens when you try this exercise is that you find it's hard enough to match the supernova data with an alternative gravity model that gives different results that most theorists have given up trying. You can come up with lots of theories of alternative gravity, but every one that I know of ends up concluding that the universe is accelerating at z < 1.0, and the name of the game right now is to come up with models that give results that match GR at "short" distances where we have a ton of data, and which might be very different at long distances where you can make up anything you want because we don't have strong data. Everything that's been said about the supernova data I would agree with if we were talking about WMAP, because there we don't have "direct" measurements of expansion and everything is very heavily model dependent.

But my point is that even though they use a particular model to describe their results, it turns out that their results are not sensitively dependent on that model. They use geocentric coordinates to identify the objects that they are observing, and they assume Newtonian gravity to describe brightnesses, but those are merely convenient coordinate systems, and you can see what happens if you use a different model and "translate" the results. It turns out that it doesn't make a difference.

The reason I'm jumping up and down is that it turns out that the supernova results *aren't* sensitive to gravity models. Which is different from the situation once you go outside of the SN results.
 
  • #50
What I mean by "expansion rate" is given by the scale factor in GR, i.e., a(t). This is responsible for the deceleration parameter q and the Hubble "constant" H in GR cosmology. If, as is done with SN, one produces luminosity distance as a function of redshift and I want to know whether or not that indicates accelerated expansion, I have to find the GR model that best fits the data and the a(t) for that model then tells me whether the universe is accelerating or decelerating. A GR model that produces a good fit to the SN data is the flat, matter-dominated model with a cosmological constant, and a(t) in this model says the universe was originally decelerating and is now accelerating.
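
For concreteness, here is a minimal sketch of what I mean (toy numbers only, assuming the flat matter-plus-Lambda a(t) with Omega_m = 0.3, Omega_Lambda = 0.7 and units where H0 = 1): given a(t), you read off H = adot/a and q = -a*addot/adot^2, and this particular a(t) decelerates early and accelerates late.

```python
import numpy as np

# Toy illustration (assumed flat model, Omega_m = 0.3, Omega_Lambda = 0.7,
# units where H0 = 1): the scale factor a(t) and the quantities built from it,
# H = adot/a and q = -a*addot/adot^2.
Om, OL = 0.3, 0.7

def scale_factor(t):
    """Analytic a(t) for a flat matter + Lambda universe (t in 1/H0 units)."""
    return (Om / OL)**(1/3) * np.sinh(1.5 * np.sqrt(OL) * t)**(2/3)

t = np.linspace(0.05, 1.2, 400)
a = scale_factor(t)
adot = np.gradient(a, t)
addot = np.gradient(adot, t)
H = adot / a
q = -a * addot / adot**2

# q flips sign from + (early, matter-dominated deceleration) to - (late,
# Lambda-dominated acceleration); at a ~ 1 it is about Om/2 - OL ~ -0.55.
for i in (0, len(t) // 2, len(t) - 1):
    print(f"t = {t[i]:.2f}/H0: a = {a[i]:.2f}, H = {H[i]:.2f}*H0, q = {q[i]:+.2f}")
```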

You're claiming (?) that I can skip the choice of cosmology model and render a definition of expansion rate in terms of ... luminosity distance and redshift directly? Ok, suppose you do that and claim the universe is undergoing accelerated expansion per your definition thereof. I'm willing to grant you that and concede you have a direct measurement of acceleration by definition. I'm not willing to grant you that it's a model-independent result. You've tacitly chosen a model via your particular definition of "acceleration" that involves luminosity distance and redshift.

Because, again, what GR textbooks define as q involves a(t), so someone could convert your luminosity distances and redshifts to proper distances versus cosmological redshifts in their cosmology model (as in GR cosmology) and obtain a resulting best fit model for which a(t) says the universe isn't accelerating. Thus, I have one model telling me the universe is accelerating and one that says it's decelerating, i.e., the claim is dependent on your choice of cosmology model.

Note by the way that, given your direct measurement of accelerated expansion, this ambiguity doesn't merely arise in some "crazy" or "inconceivable" set of circumstances. If we only had small z data and you employed your kinematics, you would conclude the universe is accelerating. However, the flat, matter-dominated GR model without a cosmological constant is a decelerating model that fits luminosity distance vs z data nicely for small z. Therefore, we would need the large z data to discriminate between the opposing kinematical conclusions.
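
A quick toy check of that point (illustrative parameters only; both models are flat and normalized to the same assumed H0 = 70 km/s/Mpc): the distance-modulus difference between the matter-only model and the matter-plus-Lambda model is tiny at small z and only opens up at larger z.

```python
import numpy as np

# Toy check (assumed parameters): Delta-mu between a flat matter-only model
# (Omega_m = 1, decelerating) and a flat matter + Lambda model
# (Omega_m = 0.3, Omega_L = 0.7), with the same H0.
c, H0 = 299792.458, 70.0   # km/s, km/s/Mpc (assumed)

def mu(z, Om, OL, n=2000):
    zs = np.linspace(0.0, z, n)
    E = np.sqrt(Om * (1.0 + zs)**3 + OL)
    DL = (1.0 + z) * (c / H0) * np.trapz(1.0 / E, zs)   # Mpc, flat geometry
    return 5.0 * np.log10(DL) + 25.0

for z in (0.05, 0.2, 0.7):
    dmu = mu(z, 0.3, 0.7) - mu(z, 1.0, 0.0)
    print(f"z = {z}: Delta mu = {dmu:+.3f} mag")
# The difference is a few hundredths of a magnitude at z ~ 0.05 (lost in the
# scatter of individual SNe) but several tenths by z ~ 0.7.
```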

Therefore, I disagree with your claim that your definition of acceleration puts your SN kinematical results on par with terrestrial physics. I do not run into an ambiguity with the definition of acceleration in intro physics.
 
  • #51
Let me attempt to articulate my objection to the claim that cosmological kinematics are on par with terrestrial kinematics. In terrestrial physics we can make local measurements of position versus time and the spacetime metric is not a variable. In cosmological physics the spacetime metric is a variable and we can't directly measure position versus time, which is a local quantity in our theory of spacetime. That is, the metric is spatiotemporally local in GR and an important variable in cosmology, yet we have no way to do local, direct measurements of spacetime intervals in cosmology. So, when you try to put these two kinematics on equal footing, I strongly object because the differences are too pronounced.
 
  • #52
RUTA said:
What I mean by "expansion rate" is given by the scale factor in GR, i.e., a(t).

First disagreement. You end up with a scale factor if you have *any* model of the universe that is isotropic and homogeneous. You can assume that the universe is Newtonian or Galilean or whatever. As long as you assume that the universe is isotropic and homogeneous, then you end up with a scale factor. Now GR provides a specific set of equations for a(t), but you can put alternative ones in the equation.

Now there are non-GR principles that you can use to constrain a(t). For example, if a(t) results in local velocities that exceed the speed of light, you have problems. If a(t) is not monotonic, you end up with shells colliding with each other. Etc. Etc.

This is responsible for the deceleration parameter q and the Hubble "constant" H in GR cosmology.

Disagree. H and q have nothing to do with GR at all. Just like a(t) has nothing to do with GR, H and q have nothing to do with GR. Now GR provides a specific equation for a(t), but you don't have to use that equation.

If, as is done with SN, one produces luminosity distance as a function of redshift and I want to know whether or not that indicates accelerated expansion, I have to find the GR model that best fits the data and the a(t) for that model then tells me whether the universe is accelerating or decelerating.

a(t) has nothing to do with GR.

You're claiming (?) that I can skip the choice of cosmology model and render a definition of expansion rate in terms of ... luminosity distance and redshift directly?

Not exactly. It turns out that the specifics of GR enter into the equation because GR asserts that gravity changes geometry, and because gravity changes geometry, you don't have a 1/r^2 power law for brightness. So you have to correct for geometry effects. I'm claiming that these corrections are not huge, and once you put in "plausible" geometry corrections you quickly figure out that you still have acceleration and that this really isn't one of the parameters that causes a lot of uncertainty in the results.

Now what do I mean by "plausible" geometry. We do know that GR is correct at galactic levels from pulsar observations. We do know that gravity is locally Newtonian. We are pretty sure that information can't travel faster than the speed of light. Using only those principles, you can already pretty tightly constrain the possible geometry to the point that there isn't a huge amount of uncertainty.

There's another way of thinking about it. You can think of GR = Newtonian + correction terms, and then you can think of "real gravity" = Newtonian + known GR correction terms + unknown corrections. We know that the "unknown corrections" in the limit of galactic scales = 0. We can constrain the size of the "unknown corrections" via various arguments. If gravity "suddenly" changes, then you ought to see light refract. My claim is that if you feed the "unknown gravity effects" back into the equation, they aren't huge and they aren't enough to get rid of acceleration.

I'm not willing to grant you that it's a model-independent result. You've tacitly chosen a model via your particular definition of "acceleration" that involves luminosity distance and redshift.

In order to define "acceleration," you merely need to assume isotropy and homogeneity. Once you assume isotropy and homogeneity, then you get a scale factor a(t). Once you get a(t), you get q, and you get a definition of "cosmic acceleration."

Note that isotropy and homogeneity are "round earth" assumptions. We *know* that the universe is not perfectly isotropic and homogeneous, so we know our model doesn't *exactly* reflect reality. So then we go back and check if it matters, and it doesn't (at least so far).

Also note, people are much more interested in investigating isotropy and homogeneity than gravity models. You can constrain gravity pretty tightly. The assumptions of isotropy and homogeneity are more fundamental, and the constraints are less severe. For example, we are pretty sure that gravity doesn't change based on the direction you go in the universe (or else a lot of weird things would happen), but it's perfectly possible that we are in a pancake-shaped void.

Because, again, what GR textbooks define as q involves a(t), so someone could convert your luminosity distances and redshifts to proper distances versus cosmological redshifts in their cosmology model (as in GR cosmology) and obtain a resulting best fit model for which a(t) says the universe isn't accelerating.

No you can't, for any model that reduces to GR (and Newtonian physics) at t=now. The problem is that the data says that q(now) = -0.6. I can calculate q_mymodel and q_GR, and if mymodel=GR for t=now, then q_mymodel must equal q_GR for t=now. (I'm fudging a bit, because the data really is q(almost now), but you get the point.)

Now if you assert that GR doesn't work for t=now, then we can drop apples and I can pull out my GPS. Also if you accept that GR is correct at t=now, then that strongly limits the possible gravitational theories for t=(almost now).

Also, if your model assumes that gravity is the same in all parts of the universe at a specific time, then you can mathematically express the difference between GR and your model by mathematically describing the differences in a(t).

However, the flat, matter-dominated GR model without a cosmological constant is a decelerating model that fits luminosity distance vs z data nicely for small z.

Acceleration is a second derivative, which means that if you have data at only one point, you can't calculate it. If you have only one z point, then mathematically you can't calculate acceleration. If you have three points that are close to each other, then you need extremely precise measurements of z to get acceleration, and there is a limit to how precise you can get z measurements.

If your z measurements are all small redshift, then your error bars are large enough so that you can't say anything about q, which is why people didn't.
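
To put rough numbers on that (a toy estimate using the standard low-z kinematic expansion DL ~ (cz/H0)[1 + (1 - q0)z/2] with an assumed H0; the real analyses are more careful than this):

```python
import numpy as np

# Toy estimate: how much q0 moves the predicted distance modulus, using the
# low-z kinematic expansion  D_L ~ (c z / H0) * (1 + (1 - q0) * z / 2).
# H0 and the two q0 values are assumed, illustrative numbers.
c, H0 = 299792.458, 70.0

def mu(z, q0):
    DL = (c * z / H0) * (1.0 + (1.0 - q0) * z / 2.0)   # Mpc
    return 5.0 * np.log10(DL) + 25.0

for z in (0.05, 0.5):
    dmu = mu(z, -0.6) - mu(z, +0.5)
    print(f"z = {z}: mu(q0 = -0.6) - mu(q0 = +0.5) = {dmu:+.3f} mag")
# The separation between an accelerating and a decelerating q0 is only several
# hundredths of a magnitude at z ~ 0.05, but roughly half a magnitude by
# z ~ 0.5, which is why low-z data alone cannot pin down q.
```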

Therefore, I disagree with your claim that your definition of acceleration puts your SN kinematical results on par with terrestrial physics. I do not run into an ambiguity with the definition of acceleration in intro physics.

Note here that terrestrial physics is important. I would claim that, knowing *only* that GR is correct within the Milky Way plus about a half dozen "reasonable" assumptions (isotropy, homogeneity, causality), you can exclude non-acceleration. Once you've established that GR is correct within the Milky Way, then causality limits how much geometry can change, and how different your model can be from GR.
 
  • #53
RUTA said:
In terrestrial physics we can make local measurements of position versus time and the spacetime metric is not a variable. In cosmological physics the spacetime metric is a variable and we can't directly measure position versus time, which is a local quantity in our theory of spacetime.

It gets a little messy. What you end up having to do is work with several different definitions of "distance" and "time," and it's important to keep those definitions straight. "Brightness distance" for example ends up being different from "light travel distance".

But one important mathematical characteristic of any definition is that as you go to small distances, all of the different definitions of distance have to converge, and that turns out to give you a lot of constraints.
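
As a small illustration of that convergence (toy numbers, assuming an illustrative flat model with Omega_m = 0.3, Omega_Lambda = 0.7, H0 = 70 km/s/Mpc): compare a few common distance definitions at low and at high redshift.

```python
import numpy as np

# Toy comparison of distance definitions (assumed flat Omega_m = 0.3,
# Omega_L = 0.7, H0 = 70 km/s/Mpc): comoving, luminosity, angular-diameter,
# and light-travel "distances" to the same redshift.
c, H0 = 299792.458, 70.0
Om, OL = 0.3, 0.7

def distances(z, n=4000):
    zs = np.linspace(0.0, z, n)
    E = np.sqrt(Om * (1.0 + zs)**3 + OL)
    Dc = (c / H0) * np.trapz(1.0 / E, zs)                   # comoving distance
    Dlt = (c / H0) * np.trapz(1.0 / ((1.0 + zs) * E), zs)   # light-travel distance
    return Dc, (1.0 + z) * Dc, Dc / (1.0 + z), Dlt          # Dc, D_L, D_A, c*t_lb

for z in (0.01, 1.0):
    Dc, DL, DA, Dlt = distances(z)
    print(f"z = {z}: Dc = {Dc:7.1f}  DL = {DL:7.1f}  "
          f"DA = {DA:7.1f}  c*t_lookback = {Dlt:7.1f}  (Mpc)")
# At z = 0.01 the four numbers agree to within a couple of percent; at z = 1
# they differ by factors of a few, so the definition you pick matters there.
```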

My claim (and a lot of this comes from being around people who do modified gravity models; I haven't worked this out myself) is that at z=1, "distance ambiguity" isn't enough to kill the observations. At z=10, you have a different story.

That is, the metric is spatiotemporally local in GR and an important variable in cosmology, yet we have no way to do local, direct measurements of spacetime intervals in cosmology.

We have no way of doing direct local measurements of Mars or Alpha Centauri. Other than the fact that we have more data about Mars or Alpha Centauri, I don't see why it's different.

And then there is GPS. For GPS to work GR has to work to very, very tight tolerances, but figuring out where you are using GPS involves no local, direct measures of spacetime intervals, and it turns out that getting the metrics right is pretty essential for GPS to work. I don't see why cosmological measurements are more "suspicious" than GPS other than the fact that people run GPS measurements more often.
 
  • #54
Thanks for your extensive replies. I could nitpick several points, but they don't bear on the main issue -- there are significant assumptions needed to do cosmological kinematics that are not needed in terrestrial kinematics and your post only serves to support this fact.
 
  • #55
RUTA said:
Thanks for your extensive replies. I could nitpick several points, but they don't bear on the main issue -- there are significant assumptions needed to do cosmological kinematics that are not needed in terrestrial kinematics and your post only serves to support this fact.

OK. Let's list them

1) the universe is large scale isotropic
2) the universe is large scale homogeneous
3) SR is correct locally (which implies that causality holds)
4) QM is correct locally
5) The true theory of gravity reduces locally to GR and then to Newtonian mechanics
6) There are no gravitational effects in redshift emission

I claim that with those assumptions you can read off the scale factor directly from the supernova results. I also claim that none of these assumptions are non-testable. In particular, we know that the universe isn't perfectly isotropic and homogeneous, and we can test the limits.

One way of showing this is to do things in the Newtonian limit with a tiny bit of special relativity.

http://spiff.rit.edu/classes/phys443/lectures/Newton/Newton.html

Look specifically at the derivation of the luminosity equation.

No GR at all in that derivation and you get out all of the numbers. The only thing that's close to GR is when they talk about the Robertson-Walker metric, but you can get that out of "isotropy + homogeneity + local SR". If you assume that isotropy and homogeneity hold and that special relativity works locally, then you end up with an expression for proper time.

So what I'm asserting is that to get the result that the universe is accelerating, you don't have to assume a precise cosmological model. You just have to assume isotropy + homogeneity + a gravity model that reduces to Newtonian + some SR.
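
Here is a minimal sketch of that claim (toy numbers, and not the derivation at the link above): integrate the Newtonian energy equation for a homogeneous dust ball, with and without an assumed constant-density term, and look at the sign of a_ddot when the ball reaches today's size.

```python
import numpy as np

# Newtonian-cosmology toy (assumed numbers, units where H0 = 1): a homogeneous
# dust ball obeys a_ddot = -(4*pi*G/3)*rho*a, i.e. a_ddot = -0.5*Om/a**2, plus
# an optional OL*a term if you add a constant-density component. No GR anywhere.
def evolve(Om, OL, a0=0.1, dt=1e-4, t_max=2.0):
    # Newtonian energy equation with zero total energy (the "flat" case).
    a, adot, t = a0, np.sqrt(Om / a0 + OL * a0**2), 0.0
    addot = -0.5 * Om / a**2 + OL * a
    while a < 1.0 and t < t_max:          # stop when a reaches "today" (a = 1)
        addot = -0.5 * Om / a**2 + OL * a
        adot += addot * dt
        a += adot * dt
        t += dt
    return addot

for Om, OL in ((1.0, 0.0), (0.3, 0.7)):
    addot = evolve(Om, OL)
    verdict = "accelerating" if addot > 0 else "decelerating"
    print(f"Om = {Om}, OL = {OL}: a_ddot near a = 1 is {addot:+.2f} -> {verdict}")
# Matter alone decelerates (a_ddot ~ -Om/2 at a = 1); add the constant-density
# term and the sign flips (a_ddot ~ OL - Om/2), with no GR machinery involved.
```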
 
  • #56
I should point out that the assumptions of isotropy and homogeneity are pretty big assumptions.

What you are essentially saying is that if you can show that the laws of physics are X, Y, Z *anywhere*, then they are true *everywhere* at a given time. This means that if you want to know what happens if an apple drops at quasar 3C273, you don't have to go to 3C273. You drop an apple on Earth, and whatever it does on Earth, it's going to do at 3C273. Having isotropy and homogeneity in space allows for the laws of physics to change over time, but not by much. We know, for example, that the fine structure constant and gravitational constant didn't change by much over the last five billion years on Earth, and with the "magic assumption" this means that the fine structure constant and gravitational constant didn't change *anywhere*.
 
  • #57
Again, thanks for taking the time to explain exactly what you understand to be the assumptions needed to measure q. And, again, I could nitpick some of your statements, but I think it's easiest to simply compare your list of assumptions with those necessary to measure the acceleration of a ball rolling down an inclined plane.

1. Newtonian mechanics holds in the lab

And, I have direct access to all spatiotemporal regions needed to make the spatial and temporal measurements for the ball on the incline while, as you admit, you do not have comparable access in cosmology. Thus, we can find statements such as:

The first question is whether drifting observers in a perturbed, dust-dominated Friedmann-Robertson-Walker (FRW) universe and those following the Hubble expansion could assign different values (and signs) to their respective deceleration parameters. Whether, in particular, it is theoretically possible for a peculiarly moving observer to "experience" accelerated expansion while the Universe is actually decelerating. We find that the answer to this question is positive, when the peculiar velocity field adds to the Hubble expansion. In other words, the drifting observer should reside in a region that expands faster than the background universe. Then, around every typical observer in that patch, there can be a section where the deceleration parameter takes negative values and beyond which it becomes positive again. Moreover, even small (relative to the Hubble rate) peculiar velocities can lead to such local acceleration. The principle is fairly simple: two decelerated expansions (in our case the background and the peculiar) can combine to give an accelerating one, as long as the acceleration is "weak" (with -1 < q < 0, where q is the deceleration parameter) and not "strong" (with q < -1)—see Sec. II C below. Overall, accelerated expansion for a drifting observer does not necessarily imply the same for the Universe itself. Peculiar motions can locally mimic the effects of dark energy. Furthermore, the affected scales can be large enough to give the false impression that the whole Universe has recently entered an accelerating phase.

in Phys. Rev. (the Sep 2011 Tsagas paper referenced earlier) concerning our understanding of cosmological kinematics, while no comparable publications will be found concerning the acceleration of balls rolling down inclined planes. And, while we have checked Newtonian physics, SR, GR, and QM on cosmologically small scales, any of these theories can be challenged on large scales simply because we don't have cosmological access. Modified Newtonian dynamics was proposed to explain dark matter, and in this paper: Arto Annila, "Least-time paths of light," Mon. Not. R. Astron. Soc. 416, 2944-2948 (2011), the author "argues that the supernovae data does not imply that the universe is undergoing an accelerating expansion." http://www.physorg.com/news/2011-10-supernovae-universe-expansion-understood-dark.html.

Now you can argue that these challenges are baseless, but they were published in respected journals this year and I cannot say I've seen one such publication concerning the conclusion that balls accelerate while rolling down inclined planes in the intro physics lab.

Why is that? Because the assumptions required to conclude the universe is undergoing accelerating expansion are significant compared to those required to conclude a ball is accelerating as it rolls down an inclined plane. Thus my claim that cosmological kinematics is not on par with terrestrial kinematics.
 
  • #58
Keep in mind that I'm an insider here, i.e., I got my PhD in GR cosmology, I teach cosmology, astronomy and GR, I love this stuff! I've been doing some curve fitting with the Union2 Compilation, it's great data! I'm VERY happy with the work done by you guys! So, I don't want to sound unappreciative. I'm only saying what I think is pretty obvious, i.e., cosmology faces challenges that terrestrial physics doesn't face. Here are two statements by Ellis, for example (Class. Quantum Grav. 16 (1999) A37–A75):

The second is the series of problems that arise, with the arrow of time issue being symptomatic, because we do not know what influence the form of the universe has on the physical laws operational in the universe. Many speculations have occurred about such possible effects, particularly under the name of Mach's principle, and, for example, made specific in various theories about a possible time variation in the 'fundamental constants' of nature, and specifically the gravitational constant (Dirac 1938). These proposals are to some extent open to test (Cowie and Songaila 1995), as in the case of the Dirac–Jordan–Brans–Dicke theories of a time-varying gravitational constant. Nevertheless, in the end the foundations of these speculations are untestable because we live in one universe whose boundary conditions are given to us and are not amenable to alteration, so we cannot experiment to see what the result is if they are different. The uniqueness of the universe is an essential ultimate limit on our ability to test our cosmological theories experimentally, particularly with regard to the interaction between local physics and the boundary conditions in the universe (Ellis 1999b). This therefore also applies to our ability to use cosmological data to test the theory of gravitation under the dynamic conditions of the early universe.


Appropriate handling of the uniqueness of the universe. Underlying all these issues is the series of problems arising because of the uniqueness of the universe, which is what gives cosmology its particular character, underlying the special problems in cosmological modelling and the application of probability theory to cosmology (Ellis 1999b). Proposals to deal with this by considering an ensemble of universes realized in one way or another are in fact untestable and, hence, of a metaphysical rather than physical nature; but this needs further exploration. Can this be made plausible? Alternatively, how can the scientific method properly handle a theory which has only one unique object of application?


Clearly, that's not an issue with balls rolling down inclined planes, so while I love cosmology, I keep it in proper perspective.
 
  • #59
RUTA said:
And, I have direct access to all spatiotemporal regions needed to make the spatial and temporal measurements for the ball on the incline while, as you admit, you do not have comparable access in cosmology.

I claim that you have comparable experiments. If gravity was markedly non-Newtonian at small scales, then you end up with very different stellar evolution. The supernova mechanism is very sensitive to gravity.

And, while we have checked Newtonian physics, SR, GR, and QM on cosmologically small scales, any of these theories can be challenged on large scales simply because we don't have cosmological access.

So let's look at cosmologically small scales. If you take the latest supernova measurements and bin them, you can see acceleration at z<0.4 and z<0.1. OK, you might be able to convince me that "something weird" happens at z=1. But at 0.1 < z < 0.3, (v/c)^2 < 0.1, GR becomes Newtonian, and if something weird happens, then it's got to be very weird.

http://www.astro.ucla.edu/~wright/sne_cosmology.html
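As a rough check of that (v/c)^2 number (using the special-relativistic Doppler relation purely as an order-of-magnitude guide, since the exact mapping from redshift to velocity is model dependent):

$$
\frac{v}{c}=\frac{(1+z)^2-1}{(1+z)^2+1}\approx 0.26 \quad\text{at } z=0.3,
\qquad
\left(\frac{v}{c}\right)^2\approx 0.07 < 0.1 .
$$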

Also you have this paper...

Model and calibration-independent test of cosmic acceleration
http://arxiv.org/PS_cache/arxiv/pdf/0810/0810.4484v3.pdf

Now you can argue that these challenges are baseless, but they were published in respected journals this year and I cannot say I've seen one such publication concerning the conclusion that balls accelerate while rolling down inclined planes in the intro physics lab.

That's because sometimes things are so obvious that they aren't going to be published. For example, the Tsagas paper was published in Phys Rev. D. I really doubt that it would have been published in Ap.J. without some revision because the stuff in that paper was "general knowledge."

I haven't read the MNRAS paper, but my first reaction is "good grief, not another tired light model." The problem with tired light models is that anything that says "something weird happens to the light from supernovae" means "something weird happens to the light from things beyond the supernovae." Now, I haven't read the paper, so if the first thing he says is "I know that you aren't in the mood to see another tired light model, and I know the standard flaws with tired light but..." then I'm interested. If, in reading the paper, he doesn't seem to have any notion of the standard problems with tired light models, then it goes in the trash.

Why is that? Because the assumptions required to conclude the universe is undergoing accelerating expansion are significant compared to those required to conclude a ball is accelerating as it rolls down an inclined plane.

OK, let's forget about the ball going down hill. What about GPS? What about observations of Alpha Centauri?

Also as far as what gets published where, that goes a lot into the sociology of science. And there is really no need for going into "proof by sociology". Write down all of the assumptions that go into GPS. Write down all of the assumptions that go into the accelerating universe. I claim that the lists aren't very different.

It's also bad to get into generalizations.

One other thing goes with the Columbus analogy. The 1997 low-z SN studies didn't see the accelerating universe because their measurements were not precise enough. However, if you restrict yourself to z < 0.3, you can see the universe accelerate very clearly with 2011 data.

What that means is that Perlmutter and Riess were in some sense lucky. If Columbus hadn't discovered America, some other person would have. If no one had done high-z supernova studies and people had just done z < 0.3, someone would have spotted the acceleration by 2004, and that person would have gotten the Nobel.

That also means that Perlmutter/Riess shouldn't be thought of as getting the Nobel for high-z supernova studies as such, any more than Columbus is remembered for being a good sailor.
 
  • #60
RUTA said:
Keep in mind that I'm an insider here, i.e., I got my PhD in GR cosmology, I teach cosmology, astronomy and GR, I love this stuff!

I'm also an insider. I got my Ph.D. in supernova theory.

particularly with regard to the interaction between local physics and the boundary conditions in the universe (Ellis 1999b). This therefore also applies to our ability to use cosmological data to test the theory of gravitation under the dynamic conditions of the early universe.

Which is true but in this situation irrelevant. We aren't talking about the early universe. For z=0.3, we are talking about lookback times of 3 billion years. There are rocks that are older than that. If you want to convince me that gravity was really different 10 billion years ago, that's all cool. If you want to convince me that gravity was really different 3 billion years ago, then that's going to take some convincing.

Underlying all these issues is the series of problems arising because of the uniqueness of the universe, which is what gives cosmology its particular character, underlying the special problems in cosmological modelling and the application of probability theory to cosmology

Again I don't see the relevance of this to supernova data. The universe is unique, but supernovae, galaxies, and stars aren't.

Part of the way that you deal with difficult problems is to figure out when you can avoid the problem. There are a lot of deep theoretical problems when you deal with the early universe. The nice thing about supernova data is that you aren't dealing with the early universe. By the time you end up with supernova, you are in a part of the universe in which stars form and explode, which means that it's not completely kooky.

Proposals to deal with this by considering an ensemble of universes realized in one way or another are in fact untestable and, hence, of a metaphysical rather than physical nature; but this needs further exploration. Can this be made plausible? Alternatively, how can the scientific method properly handle a theory which has only one unique object of application?


Clearly, that's not an issue with balls rolling down inclined planes, so while I love cosmology, I keep it in proper perspective.

It's also not a problem with supernova. Also this is why the possibility that supernova Ia evolve is a much bigger hole than gravity. I would be very, very surprised if gravity worked very differently 3 billion years ago. I *wouldn't* be surprised if supernova Ia worked very differently 3 billion years ago since we don't really know what causes supernova Ia.

That supernovae Ia seem to be standard candles is an empirical fact, but we have *NO IDEA* why that happens. It's an assumption. We have observational reasons for that assumption, but it's an assumption.

Part of the reason why the supernova (and galaxy count) data is so strong is that we are *NOT* in weird physical regimes.
 
  • #61
twofish-quant said:
Which is true but in this situation irrelevant. We aren't talking about the early universe. For z=0.3, we are talking about lookback times of 3 billion years. There are rocks that are older than that. If you want to convince me that gravity was really different 10 billion years ago, that's all cool. If you want to convince me that gravity was really different 3 billion years ago, then that's going to take some convincing.
It's not the time evolution of the dynamical phenomena I’m questioning here (although, that is something people play with in cosmology), it's the fact that distance is not directly measurable at these scales. We can't lay meter sticks along the proper distance corresponding to z = 0.3, which in the GR flat, dust model with age of 14 Gy is 5.2 Gcy, i.e., 12% of the way to the particle horizon (42 Gcy). We certainly can't bounce radar signals off objects at z = 0.3; we can’t even bounce radar signals off the galactic center 30,000 cy away. Direct kinematical measurements are just not possible. And the various distance measures are already starting to differ significantly at z = 0.3. The light was emitted when the universe was 9.44 Gy old (same model), i.e., when the universe was only 2/3 its current age. Thus, the light traveled for (14 – 9.44)Gy = 4.6 Gy (where did you get 3 Gy?), so the time-of-flight distance is 4.6 Gcy, which differs from the proper distance of 5.2 Gcy by 12%. And the difference between luminosity distance and proper distance is 30% in this model, i.e., lumin dist = (1+z)(prop dist).
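For anyone following the arithmetic, the numbers above come straight from the standard Einstein-de Sitter relations; here is a minimal sketch that reproduces them, assuming the age of 14 Gy and z = 0.3 used in this post (units are Gyr and Gly, i.e., the "Gcy" above, with c = 1):

```python
# Minimal check of the Einstein-de Sitter (flat, dust) numbers quoted above.
# Assumes an age of 14 Gyr and z = 0.3; works in Gyr and Gly with c = 1.

t0 = 14.0   # assumed age of the universe in Gyr
z = 0.3     # redshift of interest

t_emit   = t0 * (1.0 + z) ** -1.5                  # a ~ t^(2/3)  =>  t_e = t0 (1+z)^(-3/2)
d_light  = t0 - t_emit                             # time-of-flight distance in Gly
d_proper = 3.0 * t0 * (1.0 - (1.0 + z) ** -0.5)    # proper distance today in Gly
d_horiz  = 3.0 * t0                                # particle horizon in Gly
d_lum    = (1.0 + z) * d_proper                    # luminosity distance in the flat model

print(f"t_emit   = {t_emit:.2f} Gyr")   # ~9.44 Gyr
print(f"d_light  = {d_light:.1f} Gly")  # ~4.6 Gly
print(f"d_proper = {d_proper:.1f} Gly") # ~5.2 Gly
print(f"d_horiz  = {d_horiz:.0f} Gly")  # 42 Gly
print(f"d_lum    = {d_lum:.1f} Gly")    # ~6.7 Gly
```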

So, yes, concerns with the stability of physical law over cosmological time scales are an issue, and we hope that the laws were at least consistent since the formation of Earth (4.6 Gy ago). Of course, we don’t know that and can’t ever check it directly; that’s a limitation inherent in cosmology, as Ellis points out. But, I’m also pointing out that we don’t know about the applicability of the laws as they currently stand over cosmological distances, and we can’t check that directly either.

I would be less skeptical if we had a well-established model of super unified physics. But, we don’t have super unified physics and we don’t know what such a theory might hold for our understanding of current physics, so it might be that the dark energy phenomenon is providing evidence that could help in our search for new fundamental physics. Therefore, I’m not willing to close theoretical options.

There seems to be a theme in our disagreement. You’re saying I should be more skeptical of the data and I’m saying you should be more skeptical of the theory. Sounds like we just live in two different camps :smile:
 
  • #62
RUTA said:
It's not the time evolution of the dynamical phenomena I’m questioning here (although, that is something people play with in cosmology), it's the fact that distance is not directly measurable at these scales. We can't lay meter sticks along the proper distance corresponding to z = 0.3, which in the GR flat, dust model with age of 14 Gy is 5.2 Gcy, i.e., 12% of the way to the particle horizon (42 Gcy).

We can't lay meter sticks to Alpha Centauri either.

And the difference between luminosity distance and proper distance is 30% in this model, i.e., lumin dist = (1+z)(prop dist).

So go down to z=0.1. The moment you move past the "local peculiar motions" you should (and, the latest measurements indicate, we do) see the acceleration of the universe. Also the luminosity distance equation is derivable from special relativity, so you *don't* need a specific cosmological model to get it to work.

What I don't get is how measurements of the cosmological constant are that different from measurements of say intergalactic hydrogen.

So, yes, concerns with the stability of physical law over cosmological time scales is an issue and we hope that the laws were at least consistent since the formation of Earth (4.6 Gy ago). Of course, we don’t know that and can’t ever check it directly, that’s a limitation inherent in cosmology as Ellis points out.

And this is where I disagree. If G or the fine structure constant were "different enough" at cosmological distances and times we'd see it.

You keep using the word "directly" as if there were some difference between direct and indirect measurements, and I don't see where that comes from.

I would be less skeptical if we had a well-established model of super unified physics.

I'd be skeptical of any model of super unified physics. I don't trust theory. What I'm arguing is that in the case of this specific data, I don't have to. Which is a good thing since these results depend crucially on the idea that SN Ia are standard candles, which is something that we have *NO* theoretical basis to believe.

There seems to be a theme in our disagreement. You’re saying I should be more skeptical of the data and I’m saying you should be more skeptical of the theory. Sounds like we just live in two different camps :smile:

Actually I would have thought that it was the opposite. I think you should be less skeptical of the data and more skeptical of the theory.

It's pretty obvious that we have some deep philosophical disagreement on something, but right now it's not obvious what that is.
 
  • #63
twofish-quant said:
So go down to z=0.1. The moment you move past the "local peculiar motions" you should (and, the latest measurements indicate, we do) see the acceleration of the universe.

If I confine myself to z < 0.1 in the Union2 Compilation and fit log(DL/Gpc) vs log(z) with a line, I get R = 0.9869 and a sum of squares error (SSE) of 0.208533. If I fit the flat, dust model of GR, I get SSE of 0.208452 for Ho = 68.6 km/s/Mpc (only parameter). If I fit the LambdaCDM model, I get SSE of 0.208086 for Ho = 69.0 km/s/Mpc and OmegaM = 0.74 (two parameters here). That is, both an accelerating and a decelerating model fit the data equally well. Now using all the Union2 data (out to z = 1.4), I find a best fit line with R = 0.9955 and SSE of 1.95. LCDM gives SSE of 1.79 for Ho = 69.2 and OmegaM = 0.29. The flat, dust model of GR gives SSE of 2.68 for Ho = 60.9. Now it's easy to see that the accelerating model is superior to the decelerating model. But, you need those large z, and that's where assumptions concerning the nature of distance matter.
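For anyone who wants to reproduce this kind of comparison, here is a minimal sketch (not the exact code behind the numbers above), assuming a hypothetical whitespace-separated file union2.txt with columns z and distance modulus mu; adapt it to the actual Union2 release format:

```python
# Minimal sketch of the comparison described above -- not the exact code behind
# the numbers in this post.  Assumes a hypothetical file "union2.txt" with
# columns z and distance modulus mu.  Models are compared by the sum of squared
# errors (SSE) in log10(D_L / Gpc), as in the post.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

C = 2.998e5  # speed of light in km/s

z, mu = np.loadtxt("union2.txt", usecols=(0, 1), unpack=True)
log_dl_obs = (mu - 25.0) / 5.0 - 3.0   # log10(D_L/Gpc) from mu = 5 log10(D_L/Mpc) + 25

def dl_lcdm(z, h0, om):
    """Luminosity distance (Gpc) in flat LambdaCDM with parameters H0, Omega_m."""
    ez = lambda zp: np.sqrt(om * (1.0 + zp) ** 3 + (1.0 - om))
    dc = np.array([quad(lambda zp: 1.0 / ez(zp), 0.0, zi)[0] for zi in z])
    return (1.0 + z) * (C / h0) * dc / 1.0e3

def dl_eds(z, h0):
    """Luminosity distance (Gpc) in the flat, dust (Einstein-de Sitter) model."""
    return (1.0 + z) * (2.0 * C / h0) * (1.0 - 1.0 / np.sqrt(1.0 + z)) / 1.0e3

def sse(dl_model):
    return np.sum((np.log10(dl_model) - log_dl_obs) ** 2)

# Optionally restrict to the nearby sample before fitting, e.g.:
# mask = z < 0.1; z, log_dl_obs = z[mask], log_dl_obs[mask]

fit_eds  = minimize(lambda p: sse(dl_eds(z, p[0])), x0=[65.0], method="Nelder-Mead")
fit_lcdm = minimize(lambda p: sse(dl_lcdm(z, p[0], p[1])), x0=[70.0, 0.3],
                    method="Nelder-Mead")

print("EdS : H0 = %.1f, SSE = %.4f" % (fit_eds.x[0], fit_eds.fun))
print("LCDM: H0 = %.1f, Om = %.2f, SSE = %.4f" % (fit_lcdm.x[0], fit_lcdm.x[1], fit_lcdm.fun))
```

Restricting the sample (z < 0.1 versus the full range) is just a matter of masking the arrays before the fits.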

twofish-quant said:
Also the luminosity distance equation is derivable from special relativity, so you *don't* need a specific cosmological model to get it to work.

DL = (1+z)Dp only in the flat model. DL depends on spatial curvature in GR cosmology, so it's related differently to Dp in the open and closed models. Here is a nice summary:

http://arxiv.org/PS_cache/astro-ph/pdf/9905/9905116v4.pdf
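For reference, the relevant relations from that summary, in Hogg's notation (D_C is the line-of-sight comoving distance, which plays the role of Dp here, and D_M is the transverse comoving distance):

$$
D_L = (1+z)\,D_M, \qquad
D_M =
\begin{cases}
\dfrac{c}{H_0}\dfrac{1}{\sqrt{\Omega_k}}\,\sinh\!\left(\sqrt{\Omega_k}\,\dfrac{H_0 D_C}{c}\right) & \Omega_k > 0 \ \text{(open)}\\[2ex]
D_C & \Omega_k = 0 \ \text{(flat)}\\[2ex]
\dfrac{c}{H_0}\dfrac{1}{\sqrt{|\Omega_k|}}\,\sin\!\left(\sqrt{|\Omega_k|}\,\dfrac{H_0 D_C}{c}\right) & \Omega_k < 0 \ \text{(closed)}
\end{cases}
$$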

twofish-quant said:
You keep using the word "directly" as if there were some difference between direct and indirect measurements, and I don't see where that comes from.

So, you need large z to discriminate between accelerating and decelerating GR models and the relationship between what you "measure" (DL) and what tells you the universe is accelerating or decelerating (Dp) is model dependent at large z. Therefore, without a means of measuring Dp directly, your conclusion that the universe is undergoing accelerating expansion is model dependent.

You claim to have a super general model in which you can detect acceleration at small z using only the six assumptions given earlier. If your model is super general, then it must subsume the GR models I'm using above (they certainly meet your assumptions). Thus, if you can indeed show acceleration at z < 0.1 using your super general model, there must be a mistake in my calculations. Can you show me that mistake?
 
  • #64
RUTA said:
You claim to have a super general model in which you can detect acceleration at small z using only the six assumptions given earlier. If your model is super general, then it must subsume the GR models I'm using above (they certainly meet your assumptions). Thus, if you can indeed show acceleration at z < 0.1 using your super general model, there must be a mistake in my calculations. Can you show me that mistake?

I can't, but Seikel and Schwarz have written a paper on this topic:

Model- and calibration-independent test of cosmic acceleration
http://arxiv.org/PS_cache/arxiv/pdf/0810/0810.4484v3.pdf

Their claim is that with 0.1 < z < 0.3 and the assumption of isotropy and homogeneity, the universe is accelerating. They don't try to fit to a GR model, but rather use nearby supernovae to compare against those that are far away.

Also I seem to have misread their paper. They can show that the acceleration holds if you *either* take the low redshift sample with a flat or closed universe *or* if you take all the data and then vary the GR model. They didn't explicitly cover the case where you restrict to the low redshift sample *and* also vary the model parameters.

However, the question is: if you can't see acceleration at z=0.1 but you can with z=1.4, what's the minimum set of data that you need to see acceleration? The answer seems to be closer to z=0.1 than z=1.4.
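Roughly, their test (as I read it) benchmarks the data against the empty, never-accelerating (Milne) universe:

$$
D_L^{\rm empty}(z)=\frac{c}{H_0}\,z\left(1+\frac{z}{2}\right),
\qquad
\Delta\mu(z)\equiv\mu_{\rm obs}(z)-5\log_{10}\!\left(\frac{D_L^{\rm empty}(z)}{10\ {\rm pc}}\right).
$$

If the expansion never accelerated, Δμ stays at or below zero (modulo the curvature caveats spelled out in their paper), so a statistically significant Δμ > 0 in the nearby sample already signals acceleration without fitting any particular GR expansion history.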

The other point is that there is an industry of papers that try to make sense of the supernova data with model independent approaches. Searching adswww.harvard.edu with the terms "model independent" and supernova gets you these...

Bayesian Analysis and Constraints on Kinematic Models from Union SNIa
http://arxiv.org/abs/0904.3550

A Model-Independent Determination of the Expansion and Acceleration Rates of the Universe as a Function of Redshift and Constraints on Dark Energy
http://adsabs.harvard.edu/abs/2003ApJ...597...9D

Improved Constraints on the Acceleration History of the Universe and the Properties of the Dark Energy
http://adsabs.harvard.edu/abs/2008ApJ...677...1D

(One cool thing that Daly does is that she looks at angular distance.)

Model independent constraints on the cosmological expansion rate
http://arxiv.org/PS_cache/arxiv/pdf/0811/0811.0981v2.pdf

The general theme of those papers is that instead of fitting against a specific model, they parameterize the data and figure out what can be inferred from it.

Here is a cool paper.

Direct evidence of acceleration from distance modulus redshift graph
http://arxiv.org/PS_cache/astro-ph/pdf/0703/0703583v2.pdf
 
  • #65
The other thing is that we need to be careful about the claims:

1) What *can* be shown with current SN data?
2) What *was* shown in 1998, 2002, 2008 with supernova data?
3) What can be shown with other data?

Also establishing what happens at 0.1<z<0.3 is important because somewhere between z=0.3 and z=0.5, the acceleration turns into a deceleration.

The other thing I think we agree on (which is why I'm arguing the point) is that if it turns out that you need to fit GR expansion curves to z=1 / 1.4 to establish that there is acceleration at low z's, then you are screwed.
 
  • #66
Something else that I noticed. If you do a best fit of the Union supernova data restricted to z < 0.1, you get H_0 ≈ 69 regardless of model. However, if you measure the Hubble constant to the nearest galaxies, you end up getting H_0 = 74.0 +/- 3.0 km/s/Mpc.

http://hubblesite.org/pubinfo/pdf/2011/08/pdf.pdf

Hmmmmmm...

Now since you have data, I'd be interested in seeing what your fits look like if you fix H_0 = 74.0 at z = 0. Once you fix that number, my guess is that decelerating models no longer fit the nearby supernova data. We can get the number for H_0 from the type of measurements that de Vaucouleurs and Sandage have been doing since the 1970s.
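Concretely, here's a minimal sketch of the kind of fixed-H0 comparison I have in mind, reusing the hypothetical union2.txt file (columns z, mu) from the sketch earlier in the thread:

```python
# Minimal sketch of a fixed-H0 comparison.  H0 is pinned at 74 km/s/Mpc, so the
# flat, dust (EdS) model has no free parameters and flat LambdaCDM has only
# Omega_m free.  Assumes a hypothetical "union2.txt" file with columns z, mu.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

C, H0 = 2.998e5, 74.0                      # km/s and km/s/Mpc

z, mu = np.loadtxt("union2.txt", usecols=(0, 1), unpack=True)
log_dl_obs = (mu - 25.0) / 5.0 - 3.0       # log10(D_L / Gpc)

def dl_lcdm(om):
    dc = np.array([quad(lambda zp: 1.0 / np.sqrt(om * (1 + zp) ** 3 + 1 - om), 0.0, zi)[0]
                   for zi in z])
    return (1 + z) * (C / H0) * dc / 1.0e3  # Gpc

dl_eds = (1 + z) * (2.0 * C / H0) * (1.0 - 1.0 / np.sqrt(1 + z)) / 1.0e3

sse = lambda dl: np.sum((np.log10(dl) - log_dl_obs) ** 2)

fit = minimize_scalar(lambda om: sse(dl_lcdm(om)), bounds=(0.05, 1.0), method="bounded")
print("EdS  (H0 = 74 fixed)           : SSE = %.4f" % sse(dl_eds))
print("LCDM (H0 = 74 fixed, Om = %.2f): SSE = %.4f" % (fit.x, fit.fun))
```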

Now you can argue that we don't really know that H_0 = 74.0 at z = 0, since there are local measurements that are lower than that, or you could argue that there is some apples/oranges effect. These are valid arguments, but they involve observational issues that have nothing to do with the gravitational model.
 
  • #67
Thanks for the Seikel and Schwarz reference, hopefully I can use this to clarify my philosophical position.

I have no qualms with their analysis or conclusion which means that, given their assumptions, I agree the SN data out to z = 0.2 indicates accelerated expansion. I don’t contest their assumption of homogeneity and isotropy, and they take into account positive and negative spatial curvature. The assumption I want to relax (there could be others) is DL = (1+z)Dp in flat space, i.e., the assumed relationship between what we “measure,” luminosity distance (DL), and what we use to define expansion rate, proper distance (Dp). They make this assumption in obtaining Eq 2 from Eq 1 (Dp = (c/Ho) ln(1+z) in the empty universe), with the counterparts in open and closed universes assumed in Eq 8. But, suppose that DL = (1+z)Dp is only true for ‘small’ Dp. Then the challenge is to find a DL as a function of Dp for a spatially flat, homogeneous and isotropic model (so as to keep in accord with WMAP data) that reduces to DL= (1+z)Dp for ‘small’ Dp and, therefore, doesn’t change kinematics at z < 0.01 (so as not to affect Ho measurements), and that gives a decelerating universe with the SN data. Does this require new physics? Yes, but so does accepting an accelerating universe (requires cosmological constant which is otherwise unmotivated, quintessence, f(R) gravity, etc).
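To state the challenge compactly: one is looking for a relation

$$
D_L = f(D_p, z), \qquad f(D_p, z)\;\to\;(1+z)\,D_p \quad\text{for small } D_p ,
$$

subject to spatial flatness, homogeneity and isotropy (to stay consistent with WMAP), unchanged kinematics at z < 0.01 (so H_0 determinations are unaffected), and a decelerating expansion history when fitted to the SN data.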

Thus, I’ve been arguing for more theoretical skepticism. By subscribing to the belief that we’ve “discovered the accelerating expansion of the universe,” we’re ruling out theoretical possibilities that involve decelerated expansion (the one I’ve pointed out and possibly others). Why would you restrict your explanation of the data to accelerating options when either way you’ve got to invoke new physics? That strikes me as unnecessarily restrictive. That’s my point.
 
  • #68
Perhaps I am mistaken, as I don't have a good grasp of the math of all this, but isn't the accelerating universe model the "best fit" to the data? Would assuming that DL=(1+z)Dp is true only for small Dp be a less reasonable assumption than assuming it is true for all values? Do we have any real reason for believing that?
 
  • #69
Drakkith said:
Perhaps I am mistaken, as I don't have a good grasp of the math of all this, but isn't the accelerating universe model the "best fit" to the data?

I have not seen an alternative to accelerated expansion that fits the data as well as the concordance model (LambdaCDM).

Drakkith said:
Would assuming that DL=(1+z)Dp is true only for small Dp be a less reasonable assumption than assuming it is true for all values? Do we have any real reason for believing that?

It is an example of an alternative assumption that might be made because we don't measure Dp directly. Whether someone would consider alternatives to the assumptions required to render an accelerated expansion depends on their particular motivations. I'm not here to argue for or against any particular assumption, I'm using this as an example to convey a general point. If you keep all the assumptions that lead to accelerated expansion, then you're left having to explain the acceleration. So, why close the door on alternative assumptions motivated by other ideas for new physics that lead to decelerated expansion? But, when the community says they've discovered the accelerated expansion of the universe, that's exactly what they're doing. If in, say, 20 years we have a robust unified picture of physics and it points to and explains accelerated expansion, I will be on board. I'm not arguing *against* accelerated expansion. I'm arguing for skepticism.
 
  • #70
RUTA said:
The assumption I want to relax (there could be others) is DL = (1+z)Dp in flat space, i.e., the assumed relationship between what we “measure,” luminosity distance (DL), and what we use to define expansion rate, proper distance (Dp).

And that's a perfectly reasonable thing to do. However, one thing you quickly figure out is that, in order to fit the data, you end up with relationships that are not allowed by GR. Basically, to explain the data, you have to assume that space is negatively curved more than is allowed by GR.

One other thing is that there are observational limits on what you can assume for DL. You can argue all sorts of weird things for the relationship between DL and Dp, but it's much harder to argue for weird things in the relationship between DL and Da (angular distance), and there are observational tests for angular distance. Also, if you have a weird DL/Dp relationship then there are implications for gravitational lensing.

But, suppose that DL = (1+z)Dp is only true for ‘small’ Dp. Then the challenge is to find a DL as a function of Dp for a spatially flat, homogeneous and isotropic model (so as to keep in accord with WMAP data)

Whoa. This doesn't work at all...

It's known that you *cannot* come up with such a DL/Dp relationship within general relativity. You can try every DL-Dp relationship that is allowed by GR, and it doesn't work. Basically you want to spread out the light as much as possible. If the universe is negatively curved, that spreads out light more, but maximum negative curvature occurs when the universe is empty, and even then, it's not going to fit.

So you can throw out GR. That's fine, but if you throw out GR, then you have to reinterpret the WMAP data with your new theory of gravity, at which point there is no theoretical evidence for a flat, homogeneous, isotropic model since you've thrown out the theoretical basis for concluding that there is a flat, homogeneous, isotropic model.

The "problem" with the cosmic acceleration is that it's not a "early universe" thing. If you throw out all of the data we have for z<0.5, then everything fits nicely with a decelerating universe. Acceleration only starts at between z=0.3 and z=0.5, and increases as you go to z=0.0. This poses a problem for any weird theory of gravity, because you'd expect things to go in the opposite direction. The higher the z, the more weird gravity gets.

But that's not what we see.

that reduces to DL= (1+z)Dp for ‘small’ Dp and, therefore, doesn’t change kinematics at z < 0.01 (so as not to affect Ho measurements), and that gives a decelerating universe with the SN data.

And then you end up having to fit your data with gravitational lensing statistics and cosmological masers. The thing about those is that they give you angular distance.

Also as we get more data, it's going to be harder to get things to work. New data is coming in constantly, and as we get new data, the error bars go down.

Does this require new physics? Yes, but so does accepting an accelerating universe (requires cosmological constant which is otherwise unmotivated, quintessence, f(R) gravity, etc).

Sure. I don't have a problem with new physics, but new physics has got to fit the data, and that's hard since we have a lot of data. One reason I like *this* problem more than talking about quantum cosmology at t=0 is that for t=0, you can make up anything you want. The universe was created by Fred the cosmic dragon. There is no data that tells you otherwise.

For cosmic acceleration, things are data driven.

Thus, I’ve been arguing for more theoretical skepticism. By subscribing to the belief that we’ve “discovered the accelerating expansion of the universe,” we’re ruling out theoretical possibilities that involve decelerated expansion

And the problem with those theoretical possibilities is that for the most part they don't fit the data. The data is such that no gravitational theory that reduces to GR at intermediate z will fit the data. That leaves you with gravitational theories that don't reduce to GR, at which point you are going to have problems with gravitational lensing data.

Also, there *are* viable theoretical possibilities that don't involve weird gravity. The most likely explanations of the data that don't involve acceleration are that we are in an odd part of the universe (i.e. a local void) or that there is weird evolution of SN Ia. However, in both those cases, one should expect them to become either more or less viable as new data come in.

Why would you restrict your explanation of the data to accelerating options when either way you’ve got to invoke new physics?

Because once you try to invoke new physics, you find that it doesn't get rid of the acceleration or blows up for some other reason (so people have told me, I'm not an expert in modified gravity).

Where the signal happens is important. If you tell me that gravity behaves weird at z=1, then I'm game. If you tell me that gravity behaves weird at z=0.1, then you are going to have a lot of explaining to do.

Also you don't have to invoke new physics. There are some explanations for the data that invoke *NO* new physics. The two big ones are local void or SN Ia evolution.

That strikes me as unnecessarily restrictive. That’s my point.

And people have been thinking about alternative explanations. The problem is that for the most part, they don't fit the data.

The other thing is that there are some things that have to do with the sociology of science. Working on theory is like digging for gold. There is an element of luck and risk. Suppose I spend three years working on a new theory of gravity, and after those three years I come up with something that fits the data as of 2011. The problem is that this is not good enough. The error bars are going down, so I'm going to have to fit the data as of 2014, and if it turns out that it doesn't, then I've just wasted my time that I could have spent looking for gold somewhere else.

On the other hand if I spend my time with local void and SN Ia models, then even if it turns out that they don't kill cosmic acceleration, I still end up with something useful at the end of the effort.
 
