Calculating the age of the universe without a model?

In summary, the author of this post suggests that one can estimate the age of the universe from observational data alone, without relying on models such as General Relativity. This is done by extrapolating the distance-redshift relation back to zero scale factor, and then adding up the time increments associated with the observed redshifts.
  • #1
wabbit
Gold Member
This is a follow-up to another thread (https://www.physicsforums.com/threads/something-about-calculating-the-age-of-the-universe.807250/), but I post it as a separate thread since it is not clear to me (a) whether this is actually correct, and (b) assuming it is, whether it actually provides a meaningful result when applied to the data. These are my two questions here.

Models based on GR, combined with observations, are what is actually used for serious estimates of the age of the universe, but I was wondering: can we also estimate, with less precision of course, the age of the universe from observational data alone, without referring to GR or FRW? The purpose isn't really to do without a model as such; it is more to do it with only elementary tools, so no ODEs either, and the integral below might perhaps be replaced with a finite sum too.

It seems to me, barring egregious errors below, we can - or at least we can estimate how long ago the light from ancient galaxies we see originated.

If we assume that the Hubble law is due to expansion, and that recession velocities are due to that alone, the observational data from supernova studies gives us a relation between:
- velocities, or more precisely redshifts, which directly give the scale factor ##a(t)=\frac{\lambda_{emitted}}{\lambda_{observed}}## (t being unknown)
- distances derived from the apparent luminosity ##L## of standard candles, ##d\propto\sqrt{\frac{L_{standard}}{L_{observed}}}##

The distance measured this way is, if I am not mistaken, $$ d=c\int_{t_{emitted}}^{t_{now}}\frac{dt}{a(t)}$$ so that after smoothing the observed scale-distance relation, we have approximately, over a "small" interval,
$$ \Delta d=\frac{c}{a} \Delta t, \text{ i.e. } \Delta t=\frac{a}{c}\Delta d $$
This gives us a "model-free" estimate of the time elapsed between different redshifts, and summing these increments gives the time since the light left the most distant galaxy where a standard candle can be seen.

We can also extrapolate the relation between distance and redshift beyond the observed range, and if we extrapolate back to zero scale, we get a sum that is an estimate of the age of the universe.
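As a minimal numerical sketch of the summation (my addition; the (a, D) pairs below are made-up placeholders to show the mechanics, not real survey data):

```python
# Sketch of the "sum of time increments" idea: age = sum of Δt = (a/c) ΔD
# over shells, using a smoothed table of (scale factor, comoving distance).
# The (a, D) values below are illustrative placeholders, NOT real data.

C_KM_S = 299792.458          # speed of light, km/s
MPC_KM = 3.0857e19           # one megaparsec in km
GYR_S = 3.156e16             # one gigayear in seconds

# hypothetical smoothed (scale factor, comoving distance in Mpc) pairs,
# ordered from here/now (a = 1, D = 0) outward
samples = [(1.00, 0.0), (0.90, 430.0), (0.80, 910.0),
           (0.70, 1450.0), (0.60, 2060.0), (0.50, 2760.0)]

elapsed_s = 0.0
for (a_far, d_far), (a_near, d_near) in zip(samples[1:], samples):
    a_mid = 0.5 * (a_far + a_near)          # midpoint scale factor of the shell
    delta_d_km = (d_far - d_near) * MPC_KM  # shell thickness ΔD in km
    elapsed_s += (a_mid / C_KM_S) * delta_d_km   # Δt = (a/c) ΔD

print(f"lookback time to a = 0.5: {elapsed_s / GYR_S:.2f} Gyr")
```

With real data one would replace the placeholder table by the smoothed supernova (a, D) relation, and extend it by extrapolation toward a = 0 to estimate the full age.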

Ignoring GR isn't of course very smart, and extrapolation is notoriously unreliable. I am obviously not claiming this as a substitute for better, more sophisticated approaches. Still, I find it interesting to see what can be obtained in this way. However I do not have the actual data, so I don't know what the result might be.

So does this work?

Edited the post above, which used S(t): I realized it is better here to forget about stretch factors and use a(t) instead.
 
  • #2
I like this analysis! It is clear and easy to picture.
I suppose the model comes in when one extrapolates beyond the range where one has standard candles.

Your distance, I guess, is comoving distance as measured now, using not the raw luminosity but the luminosity corrected for the redshift, which one can also measure. Just saying the obvious, I think; it's implied. I will use D for comoving distance so I don't confuse it with the little d in "dt".

So I picture dividing the universe as it is NOW up into shells of a certain thickness ΔD.
I can do that for the spherical region all the way out to the farthest candle,
and each shell is labeled with a scale factor a(D), because I can measure the redshift of those candles and the galaxies they lived in.
To me this seems clear and straightforward.
And to each shell one can associate its TIME THICKNESS Δt = (a/c)ΔD,
because the speed of light in comoving distance terms is (c/a). It originally traveled at speed c, but the distance covered was then enlarged by a factor 1/a.

So one can proceed shell by shell, adding up Δt for as far as one has standard candles and can therefore compute (a/c)ΔD.

So one finds out, so to speak, "the age of the oldest standard candle."

But now one must take a leap and estimate the age of the universe.

To follow the same method, I suppose one must guess a curve that shows the dependence of the scale factor a(D) on the comoving distance D. Because it is the time increment Δt = (a/c)ΔD that one is adding up.

It's a bit vague in my mind how one could guess that curve. One might draw a partial curve through the section of D and a(D) data that one has, and it might evoke some familiar, say, hyperbolic trig function, which one could then use to complete the curve all the way to where a(D) would finally hit zero. I can see how that might work :woot::wink:
 
  • #3
Thanks for your comments.
Your distance, I guess, is comoving distance as measured now, using not the raw luminosity but the luminosity corrected for the redshift, which one can also measure.
That's something I missed, could you elaborate? I thought "raw" apparent luminosity would follow a ##1/d^2## law for this distance ##d##, but it was a bit handwavy and I must have missed a step: I was associating to this ##d## a sphere of radius ##d## over which the light is spread out, but this may be wrong.
 
  • #4
To follow the same method, I suppose one must guess a curve that shows the dependence of the scale factor a(D) on the comoving distance D.
Right, and this is obviously a dangerous step : )
In keeping with the spirit of the method, this could either be
- best fit of a functional form (*) to the observed points (a, D), then use the value of the function at a=0;
- or just use smoothing (spline, etc.) over the observed interval, and then extrapolate to 0 with your preferred assumption.
This is probably best decided by looking at the actual data and the look of that (a, D) point chart. I haven't seen one; what is usually shown is the velocity-distance relationship, which can look different.

(*) If you see a ##\sinh^{2/3}## just obviously jumping off the chart when you look at it, I might suspect you of not being completely unbiased : )
 
  • #5
Just an added thought: this method amounts to using the formula
$$ t_0-t_1=\frac{1}{c}\int_{a_0}^{a_1} a\, dD $$
But you can integrate this by parts to give
$$ t_0-t_1=\frac{1}{c}\left(a_1D_1-a_0D_0+\int_{a_0}^{a_1} D\, da\right)$$
which suggests another way to view the calculation: the age of the universe, times c, is the area under the curve of the function D(a). (The boundary terms ##a_1D_1## and ##a_0D_0## vanish in this case, provided D doesn't blow up as ##a\to 0##, as it would in a de Sitter universe.) $$ T=\frac{1}{c}\int_0^1D(a)\,da $$

Not that I have an interpretation for that, but stated like this I find it intriguingly simple, especially as it is based only on directly observable quantities... (with curve fitting, of course)
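One sanity check on that integral formula (my addition, not from the thread): in a flat matter-only universe ##D(a)=\frac{2c}{H_0}(1-\sqrt{a})##, and ##T=\frac{1}{c}\int_0^1 D(a)\,da## should recover the familiar Einstein-de Sitter age ##2/(3H_0)##:

```python
# Check T = (1/c) * integral of D(a) da against the known matter-only
# age 2/(3 H0), where D(a) = (2c/H0)(1 - sqrt(a)) for a flat
# matter-dominated (Einstein-de Sitter) universe.
import math

H0 = 0.0715          # Hubble constant in 1/Gyr (≈ 70 km/s/Mpc); illustrative value

# Work in units where c = 1, so D(a) = (2/H0)(1 - sqrt(a)) carries units of Gyr.
n = 100000
da = 1.0 / n
# midpoint-rule numerical integration of D(a) over [0, 1]
T = sum((2.0 / H0) * (1.0 - math.sqrt((i + 0.5) * da)) * da for i in range(n))

print(f"numeric T = {T:.4f} Gyr")
print(f"2/(3 H0)  = {2.0 / (3.0 * H0):.4f} Gyr")
```

The exact integral of ##1-\sqrt{a}## over [0, 1] is 1/3, so the formula gives ##T=\frac{2}{3H_0}## exactly in this case.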
 
  • #6
wabbit said:
Thanks for your comments.

That's something I missed, could you elaborate? I thought "raw" apparent luminosity would follow a ##1/d^2## law for this distance ##d##, but it was a bit handwavy and I must have missed a step: I was associating to this ##d## a sphere of radius ##d## over which the light is spread out, but this may be wrong.
I think not only does it get spread over that sphere, but it loses energy because its wavelengths get stretched out. (Maybe your definition of luminosity has already corrected for this, but otherwise the watts per square meter or whatever could be corrected using the number a.)

BTW I like your formula for T as the integral of D(a)da. Earlier today I was trying to see how to write a curve for D(a) because I was thinking of that integral. Maybe you can solve for D in terms of a. I got stuck, and then my wife and I had a late lunch and I got involved with other things.

Can you solve for D(a) in the hyper-trig function context? Then we could imagine fitting a curve to the segment of data based on standard candles, and using that as a guessed extrapolation for the rest of the D(a) curve.
 
  • #7
Ah yes I see what you meant. Indeed my notion of luminosity was, implicitly, "total energy over the EM spectrum" or something like that, not a specific measure. So yes indeed, this should be implemented as a redshift-corrected luminosity, a notion I confess I was unaware of. Not sure how this is implemented over a continuous spectrum, although the correct definition is obvious for line emissions. Thanks for the clarification.
 
  • #8
Regarding D(a) in the matter-lambda model, I don't think it can be expressed in terms of elementary functions. (*) The funny thing is that while we see here a very simple relation between age and observable data, within the model (or rather, expressed in the variables that are natural within the model) it looks more complex. I didn't know about these simple relations and was more or less expecting to get nowhere, or at least to some horribly complex formula - the result really surprised me.

Would you know where one might get hold of the data? If this is to work, it should be done with real data from supernova surveys... Not sure I'd have the courage to go through the grunt work, but given some time I might. Or better, a website that does the whole thing already would be perfect for the lazy rabbit : )

(*) There should be an expression in terms of standard elliptic integrals (some work needed to get there); would that help?
 
  • #9
Sorry, one more question: do you know the redshift of the farthest observed supernovae? This is one limit here, and I am starting to fear it might just be too small for an extrapolation to be sensible.
 
  • #10
Found this:

[attached image: hubblepapertrans.gif - a Hubble-diagram plot from the LBL supernova site]

I think I will explore this website a bit, looks like this could be just what the doctor ordered.
http://www-supernova.lbl.gov
 
  • #11
wabbit said:
do you know what is the redshift of the farthest observed supernovas ?
The answer I found from the LBL site is z=1.71 as of 2012.
This is a≈0.37, i.e. light emitted when the universe was about 4 bn years old.
So the data would cover at best the range a∈[0.4, 1], and before that it's extrapolation.

Did a quick check: this isn't a deal breaker. Applied to a matter universe, linear extrapolation below a=0.4 leads to underestimating the age by less than 10%.

Also, to guide the extrapolation, ##D(a)\propto(1-\sqrt{a})## for such a matter-dominated universe (not a bad proxy for a<0.4), but that's cheating a bit.
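A quick numerical check of the "less than 10%" claim (my own sketch, using the matter-only shape ##D(a)\propto 1-\sqrt{a}## in arbitrary units, and extrapolating linearly below a = 0.4 from the slope at the cut):

```python
# Verify that linear extrapolation of D(a) below a = 0.4 underestimates the
# age by < 10% in a matter-only universe, where D(a) ∝ 1 - sqrt(a).
# Units are arbitrary: only the ratio of the two integrals of D(a) da matters.
import math

def d_true(a):
    return 1.0 - math.sqrt(a)

A_CUT = 0.4
slope = -1.0 / (2.0 * math.sqrt(A_CUT))   # D'(a) at the cut

def d_lin(a):
    # tangent-line extrapolation below the cut, true curve above it
    if a >= A_CUT:
        return d_true(a)
    return d_true(A_CUT) + slope * (a - A_CUT)

# midpoint-rule integrals of D(a) over [0, 1]
n = 200000
da = 1.0 / n
t_true = sum(d_true((i + 0.5) * da) * da for i in range(n))
t_lin = sum(d_lin((i + 0.5) * da) * da for i in range(n))

print(f"age ratio (extrapolated / true) = {t_lin / t_true:.3f}")
```

The ratio comes out around 0.94, i.e. roughly a 6% underestimate, consistent with the "less than 10%" statement above.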

Precision Measurement of The Most Distant Spectroscopically Confirmed Supernova Ia with the Hubble Space Telescope
 
  • #12
wabbit said:
Found this :
[attached image: hubblepapertrans.gif]


I think I will explore this website a bit, looks like this could be just what the doctor ordered.
http://www-supernova.lbl.gov
What is "effective [itex]{ m }_{ B }[/itex]" in the plot - "magnitude"? If so, what does the subscript [itex]B[/itex] denote? The residuals are with respect to the fits, obviously, which look like log fits to me. Gotta love the data. The x-axis would then be... estimated distance?
 
  • #13
This plot is a teaser : ) I think m_B is indeed (some version of) magnitude, but I haven't checked yet what exactly it represents. I'm kinda hoping the x-axis is redshift, for if it is, then this plot is close to what we're looking for.

Now the good news is, the data is indeed available on that site, in plain downloadable ASCII table form.
http://supernova.lbl.gov/Union/ has lots of resources, and it seems the data we seek is here:
http://supernova.lbl.gov/Union/figures/SCPUnion2.1_mu_vs_z.txt
 
  • #14
I found this in my copy of "Principles of Physical Cosmology", Peebles, Princeton University Press, 1993:

p.24 "... the redshift of the spectrum of a galaxy is the sum of a cosmological part proportional to the galaxy distance [itex] r [/itex] and a Doppler shift due to line of sight component of the motion of the galaxy relative to the mean flow..."

[itex]c\left( \frac { { \lambda }_{ 0 } }{ { \lambda }_{ e } } -1 \right) \equiv cz \cong { H }_{ 0 }r+v [/itex]

[itex]\frac { { \lambda }_{ 0 } }{ { \lambda }_{ e } } [/itex] is the ratio of the observed wavelengths, [itex]{ \lambda }_{ 0 } [/itex], of features in the spectrum to the corresponding laboratory wavelengths, [itex]{ \lambda }_{ e } [/itex], that are presumed to be the wavelengths at emission as measured by an observer at rest in the observed galaxy. The second expression is the redshift, [itex]z [/itex], defined so [itex]z=0 [/itex] if there is no shift in the spectrum. The line-of-sight component of the motion of the galaxy relative to the mean, which is called the peculiar velocity, is [itex]v [/itex]. The peculiar velocity produces an ordinary first-order Doppler shift [itex]\frac { \delta \lambda }{ \lambda } =\frac { v }{ c } [/itex]. The constant of proportionality in the cosmological redshift term [itex]{ H }_{ 0 }r [/itex] is Hubble's constant. This linear relation between redshift and distance applies when [itex]{ H }_{ 0 }r [/itex] and [itex]v [/itex] are small compared to the velocity of light. The value of Hubble's constant is still subject to debate, so the standard practice, going back at least to Kiang (1961), is to write it as

[itex]{ H }_{ 0 } = 100h\ km\ { s }^{ -1 }{ Mpc }^{ -1 },\qquad 0.5\lesssim h\lesssim0.85[/itex]"

Not to throw random confusing canonical stuff in there, but maybe it's helpful for reference. As far as I can tell it doesn't undermine the approach here at all, except to say that if you wanted to look at a real object and date it via redshift, you would need to account for its "peculiar velocity", the non-expansion velocity component of the apparent wavelength distortion. That seems way overzealous for the purpose here. I'm not sure how the dimensionless parameter h is related exactly.
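To put rough numbers on that peculiar-velocity term (my own ballpark, assuming a typical line-of-sight peculiar velocity of ~300 km/s): its contribution to the measured redshift is fixed at roughly v/c ≈ 0.001, so it matters at low redshift and becomes negligible for distant supernovae.

```python
# Fractional contamination of the measured redshift by a typical
# peculiar velocity, using the first-order Doppler term Δz ≈ v/c
# from Peebles' formula cz ≈ H0*r + v.
C = 299792.458       # speed of light, km/s
V_PEC = 300.0        # km/s, assumed typical line-of-sight peculiar velocity

dz = V_PEC / C       # redshift contribution of the peculiar velocity
for z in (0.01, 0.1, 1.0):
    print(f"z = {z:>4}: peculiar-velocity share of z ≈ {dz / z:.1%}")
```

So a nearby galaxy at z = 0.01 can have ~10% of its redshift from peculiar motion, while at z = 1 the effect is at the 0.1% level.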
 
  • #15
True, though this is more of a concern when studying a single object. Here the idea is to plot the data and assume that peculiar velocities just add random noise to the underlying relation - i.e., they are part of those residuals.

They do of course widen the error bars, and there are probably ways to correct for this to some extent, but this is beyond the ambition of the simple approach here - in any case, we have a big extrapolation step so the approach can't claim any precision.
 
  • #16
I'm just glad I understand roughly what's going on.
 
  • #17
I believe the subscript 'b' in Mb stands for bolometric.
 
  • #18
Chronos said:
I believe the subscript 'b' in Mb stands for bolometric.
Thanks - would that be a total luminosity across the spectrum then?
I would tend to assume the one they use in the chart/data is the best one for distance estimates, and that it includes any relevant corrections.
 
  • #20
Perfect, thanks.
 
  • #21
It turns out the age of the universe is between 5 bn years and infinity :wink:
Assuming I read the SCP data correctly, after trying a couple of basic fits, it appears the extrapolation is clearly too big a factor, and the data too noisy, for this "model free" approach to produce a quick-and-dirty plausible estimate. Of course more sophisticated methods work, but those are well beyond this modest attempt.
But it was instructive to try :)
 
  • #22
@marcus or someone else around here, I suspect I'm misinterpreting the data http://supernova.lbl.gov/Union/figures/SCPUnion2.1_mu_vs_z.txt and/or missing something before that.
They describe it as "Redshift, Distance Modulus". For the latter I was assuming
a) this translates into a distance ##d_L=10^{1+\mu/5}## in parsecs;
b) ##D=d_L##.
But this doesn't fit any plausible FRW model.
Using instead
b') ##D=d_L/(1+z)##
seems to work.
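A quick sketch of the conversion (a) plus (b') in Python (the (z, μ) pair below is a hypothetical example, not a row from the data file):

```python
# Convert a distance modulus mu to luminosity distance d_L, then to
# comoving distance D = d_L / (1 + z), per assumptions (a) and (b') above.
# The (z, mu) pair is illustrative, not taken from the SCPUnion2.1 file.

def comoving_distance_pc(z, mu):
    d_l = 10.0 ** (1.0 + mu / 5.0)   # (a): luminosity distance in parsecs
    return d_l / (1.0 + z)           # (b'): comoving distance

z, mu = 0.5, 42.3                    # hypothetical supernova
d_mpc = comoving_distance_pc(z, mu) / 1.0e6   # convert pc -> Mpc
a = 1.0 / (1.0 + z)                  # scale factor at emission
print(f"a = {a:.3f}, D ≈ {d_mpc:.0f} Mpc")
```

Each data point then yields one (a, D) pair for the chart.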

This seems related to a correction I saw mentioned elsewhere to the ##L\propto 1/D^2## law I was assuming above, which appears to be incorrect.

My understanding now is that the number of photons on a sphere is ##N\propto 1/D^2## but ##L\propto N/(1+z)^2\propto 1/(D(1+z))^2## so that the relation b' above makes sense.

Is this correct? Thanks.
 
  • #23
With the corrections above, this is what I get with the Union dataset, fitting an FRW matter-lambda D(a) function.
The chart is D vs a; the lower curve is matter-only, with a best fit ##H_0=0.070\,\text{Gyr}^{-1}## to the low-z supernovae, and the second curve keeps that same ##H_0## with a best fit ##H_{\infty}=0.059\,\text{Gyr}^{-1}## (which gives ##\Omega_{\Lambda}=0.71, \Omega_m=0.29##).

The corresponding age is then ##T_0=\frac{2}{3H_\infty}\tanh^{-1}\left(\frac{H_\infty}{H_0}\right)=13.9\text{ Gyr}##

On the other hand, a "model-free" fit using, arbitrarily, a quadratic polynomial in a, while almost as good on the data, yields an age of only 6 Gyr.
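For reference, the age of a flat matter + Λ FRW universe is ##T_0=\frac{2}{3H_\infty}\tanh^{-1}(H_\infty/H_0)## (from ##a\propto\sinh^{2/3}(3H_\infty t/2)##); plugging in the quoted best-fit rates, taken to be in Gyr⁻¹, reproduces the 13.9 Gyr figure:

```python
# Age of a flat matter + Lambda FRW universe:
#   T0 = (2 / (3 H_inf)) * artanh(H_inf / H0),
# where H0 is today's Hubble rate and H_inf the asymptotic (de Sitter) rate.
# The values below are the best-fit rates quoted in the post, assumed in 1/Gyr.
import math

H0 = 0.070      # 1/Gyr
H_INF = 0.059   # 1/Gyr

t0 = (2.0 / (3.0 * H_INF)) * math.atanh(H_INF / H0)
omega_lambda = (H_INF / H0) ** 2   # Omega_Lambda = (H_inf / H0)^2 for a flat model

print(f"T0 ≈ {t0:.1f} Gyr, Omega_Lambda ≈ {omega_lambda:.2f}")
```

This also confirms the quoted ##\Omega_{\Lambda}=0.71##, since ##\Omega_{\Lambda}=(H_\infty/H_0)^2## in the flat case.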
 

Attachment: Union2.jpg (the D vs a chart)
  • #24
The expansion of the universe has diminished the energy flux that we receive in two ways. The energy of light is inversely proportional to its wavelength. As the supernova's light travels to us, the expansion of the universe stretches the wavelength of the light by a factor of 1+z. Also, the expansion of the universe decreases the rate at which we receive photons, as compared to the rate at which photons left the supernova, by another factor of 1+z.
 
  • #25
Thanks for the clarification!
 

