Simple no-pressure cosmic model gives meaning to Lambda

  • #1
marcus
The size of the universe (any time after year 1 million) is accurately tracked by the function
$$u(x) = \sinh^{\frac{2}{3}}(\frac{3}{2}x)$$
where x is the usual time scaled by ##\sqrt{\Lambda/3}##

That's it. That's the model. Just that one equation. What makes it work is scaling times (and corresponding distances) down by the cosmological constant. "Dark energy" (as Lambda is sometimes excitingly called) is here treated simply as a time scale.

Multiplying an ordinary time by ##\sqrt{\Lambda/3}## is equivalent to dividing it by 17.3 billion years.
So to take an example, suppose your figure for the present is year 13.79 billion. Then the time x-number to use is:
$$x_{now} = \sqrt{\Lambda/3}\times 13.79\text{ billion years} = \frac{13.79\text{ billion years}}{17.3\text{ billion years}} = 0.797$$
Basically you just divide 13.79 by 17.3, get xnow= 0.797, and go from there.

When the model gives you times and distances in terms of similar small numbers, you multiply them by 17.3 billion years, or by 17.3 billion light years, to get the answers back into familiar terms. Times and distances are here measured on the same scale so that essentially c = 1.
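If you'd rather let a computer do the unit bookkeeping, here is a minimal Python sketch of that convention (the function names are mine, not part of the model):
Code:
# Minimal sketch of the unit convention: x-times and x-distances are
# ordinary times/distances divided by 17.3 billion (light) years.
SCALE = 17.3  # Gy for times, Gly for distances

def to_x(t_gy):
    """Ordinary time in Gy -> model x-number."""
    return t_gy / SCALE

def from_x(x):
    """Model x-number -> ordinary Gy (or Gly for distances)."""
    return x * SCALE

print(to_x(13.79))  # 0.797..., the x-number for the present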

EDIT: George Jones introduced us to this model last year and I lost track of the post. I recently happened to find it again.
https://www.physicsforums.com/threads/hubble-radius-and-time.760200/#post-4787988
 
  • #2
Needless to say, we don't know the overall size of the universe, so what this function u(x) tracks is the size of a generic distance. Specifically, distances between things that are not somehow bound together and which are at rest with respect to the background. Cosmic distances expand in proportion to u(x).
Over a span of time where u(x) doubles, distances double.

Since it's a very useful function, it's worth taking the trouble to produce a normalized version that equals 1 at the present xnow=.797.
All we have to do is evaluate u(.797) = 1.311, and divide by that. The normalized scale factor is called a(x)
$$a(x) = \frac{u(x)}{1.311} = \frac{\sinh^{2/3}(\frac{3}{2}x)}{\sinh^{2/3}(\frac{3}{2}\cdot 0.797)}$$

a(x) at some time x in the past is the size of any given cosmic distance then, compared with its size now. a(x)=1/2 means that at time x distances were half their present size, and a(xnow) = 1

1/a(x) is the factor by which a cosmic distance has expanded between time x and now.

This fact about 1/a(x) lets us write a useful formula for the distance a flash of light has traveled from the time xem when it was emitted until now. We divide the time between xem and xnow into little dx intervals, during each of which the light traveled cdx. Then we just add up all the little cdx intervals, each scaled up by how much it has since been enlarged. That would be ##\frac{cdx}{a(x)}##
Let's use the same scale for distance as we do for time so that c=1 and we can omit the c.
$$D_{now}(x_{em}) = \int_{x_{em}}^{x_{now}} {\frac{dx}{a(x)}} = 1.311\int_{x_{em}}^{x_{now}} {\frac{dx}{\sinh^{2/3}(1.5x)}}$$
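If you want to evaluate that integral numerically, here is a short Python sketch (my own code, using scipy's general-purpose integrator):
Code:
# Sketch: numerically evaluate D_now(x_em), the x-distance (c = 1) that
# light emitted at x_em has covered by now.
import numpy as np
from scipy.integrate import quad

X_NOW = 0.797

def D_now(x_em):
    val, _ = quad(lambda x: 1.311 / np.sinh(1.5 * x) ** (2 / 3), x_em, X_NOW)
    return val

print(D_now(0.3) * 17.3)  # ~12.5 billion light years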
 
  • #3
marcus said:
The size of the universe (any time after year 1 million) is accurately tracked by the function
$$u(x) = \sinh^{\frac{2}{3}}(\frac{3}{2}x)$$
where x is the usual time scaled by ##\sqrt{\Lambda/3}##

That's it. That's the model. Just that one equation. What makes it work is scaling times (and corresponding distances) down by the cosmological constant. "Dark energy" (as Lambda is sometimes excitingly called) is here treated simply as a time scale.

...
I want to clarify that a bit. What I mean is that one equation sums up how the universe expands over time (especially after radiation in the hot early universe eases off, and the contents become predominantly matter---that's when it becomes a really good approximation.)
That's the basic equation; formulas for calculating other things can be derived from it. If you are comfortable with the calculus of differentiating and integrating then there's nothing more to memorize.

For example we might want to know how to calculate the fractional distance growth rate H(x) at any given time x. H(x) = a'(x)/a(x), the change in length divided by the length, the change as a fraction of the whole.
Remember that a(x) is just that function u(x) divided by 1.311 to normalize it. So u'(x)/u(x) works too.
I'm using the prime u' to denote d/dx differentiation.
$$u(x) = \sinh^{\frac{2}{3}}(\frac{3}{2}x)$$
$$u'(x) = \frac{\cosh(1.5x)}{\sinh^{1/3}(1.5x)} $$
$$H(x) = \frac{u'(x)}{u(x)} = \frac{\cosh(1.5x)}{\sinh^{1/3}(1.5x)\sinh^{2/3}(1.5x)} = \frac{\cosh(1.5x)}{\sinh(1.5x)}$$
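(If you'd like to check that differentiation without pencil and paper, here is a two-line sympy sketch; the code is mine, and sympy may print coth as cosh/sinh or 1/tanh depending on version.)
Code:
# Sketch: verify u'/u = coth(1.5x) symbolically.
import sympy as sp

x = sp.symbols('x', positive=True)
u = sp.sinh(sp.Rational(3, 2) * x) ** sp.Rational(2, 3)
print(sp.simplify(sp.diff(u, x) / u))  # cosh(3*x/2)/sinh(3*x/2), i.e. coth(1.5x)
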
So that means in this model both the Hubble time 1/H and the Hubble radius c/H are given by the hyperbolic tangent function tanh(1.5x). We can use the google calculator to find stuff about the history of the cosmos, for a range of times. I'll put up a table, in a moment.

BTW I wanted a distinctive notation for the HUBBLE TIME that wouldn't let it get confused with the actual time the model runs on. It is a reciprocal growth rate: a Hubble time of 10 billion years means distances are growing at a fractional rate of 1/10 per billion years, or 1/10,000 per million years. I don't know if this was wise or not but I decided to avoid subscripts and just use a totally new symbol, capital Theta.
$$\Theta(x) = \frac{1}{H(x)} = \tanh(1.5x)$$
Code:
x-time  t(Gy)   a(x)    S=1/a   Theta  Th(Gy)    Dnow    Dnow(Gly)
.1      1.73    .216    4.632   .149    2.58    1.331       23.03
.2      3.46    .345    2.896   .291    5.04     .971       16.80
.3      5.19    .458    2.183   .422    7.30     .721       12.47
.4      6.92    .565    1.771   .537    9.29     .525        9.08
.5      8.65    .670    1.494   .635   10.99     .362        6.26
.6     10.38    .776    1.288   .716   12.39     .224        3.87
.7     12.11    .887    1.127   .782   13.53     .103        1.78
.797   13.787  1.000    1.000   .832   14.40    0            0
The S = 1/a column is useful if you care to compare some of this model's billion year (Gy) and billion light year (Gly) figures with those rigorously calculated in Jorrie's Lightcone calculator. There the number S (which is the redshift z+1) is the basic input. In Lightcone, you get out times and distances corresponding to a given distance-wavelength stretch factor S. So having an S column here facilitates comparison. Our numbers ignore an effect of radiation which makes only a small contribution to overall energy density except in the early universe.
By contrast, Lightcone embodies the professional cosmologists' LambdaCDM model. What surprised me was how close this simplified model comes (within a percent or so) as long as one doesn't push it back in time too close to the start of expansion.
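In case anyone wants to reproduce the table, here is a Python sketch (my own code, not Lightcone; scipy's integrator stands in for whatever Lightcone does internally):
Code:
# Sketch: regenerate the table above from the model equations.
import numpy as np
from scipy.integrate import quad

X_NOW = 0.797

def a(x):
    return np.sinh(1.5 * x) ** (2 / 3) / 1.311   # normalized scale factor

for x in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, X_NOW):
    theta = np.tanh(1.5 * x)                      # Hubble time 1/H(x)
    Dnow, _ = quad(lambda t: 1 / a(t), x, X_NOW)  # light-travel distance
    print(f"{x:.3f}  {17.3*x:6.2f}  {a(x):.3f}  {1/a(x):6.3f}  "
          f"{theta:.3f}  {17.3*theta:6.2f}  {Dnow:6.3f}  {17.3*Dnow:7.2f}")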
 
  • #4
This model appears to have only one adjustable parameter, the cosmological constant Λ. (Actually there is another which one doesn't immediately notice: we needed an estimate for the age of the universe, ~13.79 billion years--I'll discuss that in another post.)

Based on the latest observations, the current estimate for Lambda is a narrow range around ##1.002 \times 10^{-35}\ \text{s}^{-2}##.

Lambda, as an inverse square time or inverse square distance, appears naturally in the Einstein GR equation and was included by Einstein as early as 1917. It is a naturally occurring curvature term.

In this model we scale times from billions of years down to small numbers (like 0.797 for the present era) using ##\sqrt{\Lambda/3}##
That is the main parameter and the key to how it works.
Using the current estimate, dividing by 3 and taking square root, we have
$$\sqrt{\Lambda/3} = 1.828 \times 10^{-18}\ per\ second$$
That is a fractional growth rate, sometimes denoted ##H_\infty##, and it is the growth rate towards which the Hubble rate is observed to be tending.
If you take the reciprocal, namely ##1/H_\infty##, it comes out to about 17.3 billion years.
Multiplying a time quantity by ##\sqrt{\Lambda/3}## is the same as dividing it by 17.3 billion years.

So the model has one main adjustable parameter, which is the time/distance scale 17.3 billion (light) years. And we determine what value of that to use by observing the longterm eventual distance growth rate ##1.83 \times 10^{-18}## per second,
or equivalently by observing the cosmological constant Lambda (that is how the value of Lambda is estimated, by seeing where the growth rate is tending.)
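For anyone checking the arithmetic, a tiny Python sketch (the seconds-per-Gy constant is the usual Julian-year value):
Code:
# Sketch: from the quoted Lambda to the 17.3 Gy time scale.
import math

LAMBDA = 1.002e-35     # s^-2, the estimate quoted above
SEC_PER_GY = 3.156e16  # seconds in a billion (Julian) years

H_inf = math.sqrt(LAMBDA / 3)          # ~1.83e-18 per second
print(H_inf, 1 / H_inf / SEC_PER_GY)   # the second number is ~17.3 Gy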
 
  • #5
Since this is the main model equation (practically the only one, the rest can readily be derived from it)
$$u(x) = \sinh^{2/3}(\frac{3}{2}x)$$
I'll post a plot of it. This shows how the size of a generic distance increases with time. Recall the present is about 0.8.
You can see a switch from convex to concave around 0.45 which is where distance growth gradually stops decelerating and begins to accelerate.
[Plot of u(x) = sinh^{2/3}(1.5x) against x-time]

The plot is by an easy to use free online resource called Desmos.
 
  • #6
Where does this model come from?

I'm not sure how surprising it is. If you assume that the density is equal to the critical density, then expansion in the ΛCDM model is given by the fraction of matter and the Hubble constant only, right? But the Hubble constant just scales everything, for the scale factor I guess we don't need it. Which leaves the cosmological constant as parameter, in the same way this function has a free parameter.

The shape is also not surprising:
You start with sinh(x) ≈ x for small x, so ##u(x) \propto x^{2/3}## as you would expect in a matter-dominated universe.
In the far future we have ##\sinh(1.5x) \approx e^{1.5x}/2##, so ##u(x) \approx 2^{-2/3}e^{x}##: exponential growth, as you would expect in a dark energy dominated universe.
 
  • #7
It is the standard Friedmann equation model for a spatially flat, matter-dominated universe.
I personally was surprised that it came so close (within a percent or so) to the numbers given by Jorrie's Lightcone calculator, because that takes radiation into account.
Whereas this simplified model is only matter---some people would call it the pressure-less dust case.
Maybe I shouldn't have been surprised (at present radiation is only about 1/3400 of the matter+radiation energy density.) :smile:

I particularly like the simple (easily graphable) form of the equations and think this presentation of the standard flat LambdaCDM (matter dominated case) has potential pedagogical value.
What do you think?
 
  • #8
I'll save some discussion here:
I think the way to understand "dark energy" also known as cosmological curvature constant Lambda is to look at its effect on the growth history of the universe. It could just be a small inherent constant spacetime curvature built into the geometry---a tendency for distances to expand, over and above other influences---or it could arise from some type of energy density we don't know about. But the main thing is to look at its effect.

That's why I plotted the expansion history of a generic distance a couple of posts back. In case any newcomers are reading, this distance was one unit right around year 10 billion (that is about x=0.6 on our scale). And you can look and see that at present (x=0.8) it is around 1.3 units. You can look back and see what it was at earlier times like x=0.1. Distance growth is proportional, so the unit here could be any large distance: a billion lightyears, a billion parsecs, whatever. People on another planet who measure distance in some other unit could discover the same formula and plot the same curve. This is the history of any large-scale cosmic distance (or at least the righthand, expansion, side of the graph is).

So back at time x = 0.1 the distance was 0.3 units, at time x=0.3 it was 0.6 units, at time 0.6 it was 1 unit.
The really interesting thing, I think, about this plot of our universe's expansion history is that around time x=0.45 you can see it change from convex to concave, that is, from decelerating to accelerating.
That has to do with the growth rate, which is flattening out.
Here the x-axis is time (in units of 17.3 billion years, as before).
The y-axis shows the growth RATE in fractional amounts per billion years. It levels out at 0.06 per billion years (that is, 1 per 17.3 billion years), the long term rate determined by the cosmological constant.
[Plot of the growth rate H(x) = coth(1.5x)/17.3, leveling out at 0.06 per billion years]

Around x=0.45 the percentage growth rate reaches a critical amount of flatness so that practically speaking it is almost constant. And you know that growth at a constant percentage rate is exponential. A savings account at the bank grows by increasing dollar amounts because the principal grows, and it would do so even if the bank were gradually reducing the percent interest rate, as long as it didn't cut the rate too fast.
So growth decelerates as long as the percentage rate is declining too steeply, and then starts to accelerate around x=0.45 when the decline levels off enough.

It happens because that is when the MATTER DENSITY thins out enough. A high density of matter, by its gravity, slows expansion down. The matter thinning out eventually exposes the inherent constant rate that provides a kind of floor. I hope this makes sense. I am trying to relate the growth RATE history plot here to the resulting growth history plot in the previous post).
 
  • #9
In another thread I tried calculating the "particle horizon" or radius of the observable universe using this model.
The distance a flash of light emitted at the start of expansion (x = 0) could in principle have covered:
$$D_{now}(x_{em}=0) = \int_{0}^{x_{now}} {\frac{dx}{a(x)}}$$
For the stretch factor 1/a(x) I used the usual matter-dominant version with ##\sinh^{2/3}(1.5x)## from time 0.00001 to the present time 0.8.
But for the early universe segment from time 0 to time 0.00001, since radiation was dominant, I used ##\sinh^{1/2}(2x)##:
$$1.311\int_{0}^{0.00001} {\frac{dx}{\sinh^{1/2}(2x)}} + 1.311\int_{0.00001}^{0.8} {\frac{dx}{\sinh^{2/3}(1.5x)}}$$

It gave the right answer for the particle horizon, namely an x-distance of 2.67, which in conventional terms (multiplying by 17.3 billion light years) is a bit over 46 billion light years.

Picking the x-time 0.00001 corresponds to choosing the year 173,000 for when we want to switch.
Before that we consider radiation the dominant contents of the universe. After that, matter. In reality there was a smooth transition at about that time. Jorrie gives S=3400 as the moment of matter radiation equality.
Lightcone says that corresponds to year 51,000. Well, close enough.
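Here is the same two-era integral as a Python sketch (the 0.00001 switch-over and the sinh^{1/2}(2x) radiation-era form are as described above; the code itself is mine):
Code:
# Sketch of the two-era particle-horizon integral.
import numpy as np
from scipy.integrate import quad

X_SWITCH, X_NOW = 1e-5, 0.8

rad, _ = quad(lambda x: 1.311 / np.sinh(2 * x) ** 0.5, 0, X_SWITCH)
mat, _ = quad(lambda x: 1.311 / np.sinh(1.5 * x) ** (2 / 3), X_SWITCH, X_NOW)

print(17.3 * (rad + mat))  # ~46 billion light years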
 
  • #11
I was wondering, how much difference there is between $$ 1.311\int_{0}^{0.00001} {\frac{dx}{\sinh^{1/2}(2x)}} + 1.311\int_{0.00001}^{0.8} {\frac{dx}{\sinh^{2/3}(1.5x)}} $$
and the approximation $$ 1.311\int_{0}^{0.8} {\frac{dx}{\sinh^{2/3}(1.5x)}}=1.311\int_{0}^{0.00001} {\frac{dx}{\sinh^{2/3}(1.5x)}} + 1.311\int_{0.00001}^{0.8} {\frac{dx}{\sinh^{2/3}(1.5x)}} $$

Using the fact that ## \sinh(x)\simeq x \text{ for } x\ll 1##, we get $$1.311\int_{0}^{\epsilon} {\frac{dx}{\sinh^{1/2}(2x)}}\simeq 1.311 (2\epsilon)^{1/2}\simeq 0.0059\text{ for }\epsilon=0.00001$$
$$1.311\int_{0}^{\epsilon} {\frac{dx}{\sinh^{2/3}(1.5x)}}\simeq 1.311\cdot 2(1.5\epsilon)^{1/3}\simeq 0.0647\text{ for }\epsilon=0.00001$$
So using the approximation overstates the result by ##0.0647-0.0059\simeq0.059##
 
  • #12
Hi Wabbit!
Let's try the "Number Empire" online definite integral calculator. I think it will blow up if we want to integrate the matter-era stretch factor S(x) starting at zero :oldbiggrin:
If this works it should give the radius of the observable universe, about 46 billion light years:
http://www.numberempire.com/definiteintegralcalculator.php?function=17.3*1.311*csch(1.5*x)^(2/3)&var=x&a=0&b=.797&answers=

If it doesn't work we can change the lower limit from x=0 to x=0.00001
http://www.numberempire.com/definiteintegralcalculator.php?function=17.3*1.311*csch(1.5*x)^(2/3)&var=x&a=.00001&b=.797&answers=

With the 0.00001 cut-off we get 46.00 billion light years.
But with lower limit zero there is something fishy. I feel it should blow up and give infinity, but it gives 47!

It makes some approximation. But I think the answer is not in accord with nature, because if we shift to the radiation-era form
17.3*1.311*csch(2*x)^(1/2) and integrate from 0 to 0.00001, that should not make a whole billion-light-year difference. It should only add a small amount.

Yes. Here is the calculation:
http://www.numberempire.com/definit...*csch(2*x)^(1/2)&var=x&a=0&b=0.00001&answers=
It amounts to only a TENTH of a billion light years

So the main integral (matter era from 0.00001 onwards) is 46.0 billion LY
and the little bit at the beginning (radiation era, 0 to 0.00001) is 0.1 billion LY
So the total distance a flash of light covers from start of expansion to present (if it is not blocked) is 46.1 billion LY.
This is what I expect from reading references to the "particle horizon" and "radius of observable universe".
I could be wrong. It's basically stick-in-the-mud blind prejudice. I'm used to 46 and can't accept 47 even from numberempire. The Numpire must have used an invalid approximation to get a finite answer like that. Excuse the mouth-foaming, I will calm down in a few minutes :oldbiggrin:
 
  • #13
csch ?? ... Ah, hyperbolic cosecant ! Not sure I've ever seen that used before : )
 
  • #14
About trying the numerical integrator here for a singular integrand : heh, sometimes my pen and paper still wins over these newfangled contraptions :biggrin: Edit : nope, it's a tie.

But the answer is finite: if the result starting at 0.00001 is 46 then the total from 0 is 47, same as what you got from the integrator. The integral of sinh^(-2/3) does converge, and the x^(1/3) approximation should be pretty good in this range (must admit I didn't do an error estimate)

Edit : ooops sorry forgot to multiply by 17.3. Corrected now.
17.3×0.06 ≈ 1, so I get the same result, 47, from the analytical approximation.
Humble apologies for underestimating the power of that integrator.
 
  • #15
It may be overly fancy to use csch instead of 1/sinh. All it does is save writing a minus sign in the exponent,
at the serious risk of confusing some readers. I'm just trying it out and maybe will go back to
##S(x) = 1.311\sinh^{-2/3}(1.5x)##

BTW I really like that Dnow(x) is simply the integral of S(x)dx from emission time xem up to present.
the distance the flash of light has traveled is a nice neat
$$\int_{x_{em}}^{0.797} S(x)dx$$
It seems to validate S(x) stylistically.

BTW I just went to Lightcone and asked about the current "particle horizon" which Lightcone calls "Dpar"
It is in the "column selection" menu and not usually displayed, so you have to check a box to have it displayed.

According to Lightcone the current radius of the observable universe is 46.28 Gly.
It shifts over to the radiation-era integrand by some rule (it keeps track of the composition of the energy density).
Maybe your analysis is right and we can go with the matter-era integrand all the way to zero---I'm still confused and a little dubious. It seems like one OUGHT to have to shift over to the radiation-era integrand when one gets that close to zero.
 
  • #16
Yeah I don't think csch helps, it adds a "what is this ?" step for many readers, and anyway if I needed to do a calculation with it, the first thing I'd do would be to substitute 1/sinh for it.
As for S(x), it's true, I thought it a bit bizarre at first, but it was just lack of familiarity. I still think it's best not to introduce notations until they're about to really earn their keep - so in my view it would depend on whether it just helps for this formula or gets used a lot afterwards. Maybe even just the two or three related integrals with different bounds (thinking of those Lineweaver lists) make it worth it.
 
  • #17
wabbit said:
About trying the numerical integrator here for a singular integrand : heh, sometimes my pen and paper still wins over these newfangled contraptions :biggrin: Edit : nope, it's a tie.

But the answer is finite, if the result starting at 0.00001 is 46 then the total from 0 is 47, same as what you got from the integrator, ...
My soul is at rest. Indeed, the derivative of ##3x^{1/3}## is ##x^{-2/3}##, so ##x^{-2/3}## is integrable. And sinh(x) is almost identical to x close to zero. So the integral of ##\sinh^{-2/3}(x)## converges, and pen and paper (with a cool head and classical logic) prevails.

But the wicked are still punished! Because if they take the matter-era integrand ##S(x) = 1.311\sinh^{-2/3}(1.5x)## all the way back to zero they get 47!

And 47 is the wrong answer. It should be 46-or-so.
 
  • #18
Ah yes, but pen-and-paper is one step ahead, for we saw in post 11 that the radiation-adjusted integral from 0 to 0.00001 is about one tenth the unadjusted result, and this yields 0.0059x17.3=0.10 so that we should get about 46.1 and not 47 : )

In a small way, this back-and-forth reminds me of what Wilson-Ewing is doing in his LCDM paper where he moves between analytic and numerical approximations and uses one to validate the other : ) actually these are almost the same approximations we're discussing here so it's not very far.
 
  • #19
It is the custom of the alert Wabbit to be one jump ahead. BTW I think you are right about not bothering with the hyperbolic cosecant, and sticking with the more familiar 1/sinh.

For one thing "csch" is very hard to pronounce. The noise might be alarming and frighten many people away.
 
  • #20
kschhhh ! kschhh ! Nope, doesn't sound like a rabbit at all
 
  • #21
marcus said:
I think the way to understand "dark energy" also known as cosmological curvature constant Lambda is to look at its effect on the growth history of the universe.
...
[Plot of the growth rate H(x), leveling out at 0.06 per billion years]
...
Does a system coming to thermal equilibrium have a curve shaped like that? Not that lots of things don't look like logarithms... Just was expecting there was a chance you guys might say "Of course it does, that's what you would expect, since that's what it basically is" or "Just because it looks like an exponential/logarithmic function is completely coincidental/incidental/unsurprising and has nothing to do with anything." Either of which would be helpful information.

I just saw a plot of the Helmholtz Free Energy as f(time) in the book I started literally this morning on Complexity, which has already alluded to the Second Law as the driver of the increase in same, driven in turn by the expansion of the universe. It looked exactly the same shape-wise. I realize now that it was this guy Chaisson who planted a number of thoughts in my head years back... which I've probably completely distorted. Anyway, reading this thread, which was pretty cool by the way, that shape just struck me.
 
  • #22
Hi Jimster! I remember you from previous threads in BtSM forum. Good to see you! I think you are right that there could be some qualitative similarity. The curve you mention is coth(1.5x)/17.3
which shows how the Hubble growth rate H(x) has evolved over time.
Time is scaled here using the cosmological constant. x-time is ordinary years time divided by 17.3 billion years, which means that the present, xnow, is 0.797 or about 0.8.

You can see that the present growth rate is about 0.07 per billion years. Just find x = 0.8.
And you can see that the longterm growth rate is leveling out at 0.06 per billion years.
[Plot of H(x) = coth(1.5x)/17.3, the growth rate per billion years]

It illustrates that the distance growth rate was much greater in the past. Back at time x=0.1 H(x) was 0.4.
That is, distances were growing at a rate of 40% per billion years.
Because we are scaling time in units of 17.3 billion years (to keep the formulas simple), time x=0.1 is year 1.73 billion. There were already stars and galaxies, and things looked pretty much like what we are used to, just a lot closer together. But the expansion was a lot more rapid!
Let's find out how much closer together things were back at time x=0.1 (aka 1.73 billion years). The righthand side here shows how the size of a generic distance increased over the same timescale. At present time (x=0.8) it is 1.3 so let's look back and see what it was at time x = 0.1
[Plot of u(x) = sinh^{2/3}(1.5x), the growth of a generic distance]

At time x = 0.1 the distance was 0.3, so in the intervening time it has grown to 1.3.
The ratio (from 0.1 to present) is what we are calling S, the stretch factor: S(0.1) is about 1.3/0.3 = 4.33.
It is the factor by which wavelengths and distances have been enlarged since that time, compared with present.
The second graph is ##u(x) = \sinh^{2/3}(1.5x)##;
it is generated by the first graph H(x) = coth(1.5x), which shows the fractional growth rate embodied in the second. If you differentiate u(x) and then divide by u(x) to get the fractional growth rate u'/u, it comes out to coth(1.5x).
Jimster41 said:
Does a system coming to thermal equilibrium have a curve shaped like that?
That's an interesting question. It is almost as if 0.06 per billion years is an EQUILIBRIUM GROWTH RATE of the universe, and the universe is settling down to that 6% per billion years rate of distance growth, following the first curve. And that is what has generated the actual expansion history (the second curve).
I think of it as an analogy rather than an explanation because I can't imagine what the universe and its growth rate could be coming into equilibrium WITH. :oldsmile:
 
  • #23
But which curve in that case ? I'd expect temperature for instance would be convex or concave all the way in simple cases, following ##T = T_f + (T_0 - T_f)e^{-kt}##, but some other aspect may well show an inflexion point.
Here the lambda-less expansion would be concave, so in the similar thermal case (identical? After all the universe is a system approaching equilibrium, isn't it) we may also need something playing the role of lambda, i.e., a long range repulsive force / a gas that expands faster when highly diluted.
 
  • #24
I've been following along for the most part. I think the idea of normalizing makes sense. o_O.

Watching a Susskind video lecture the other day on Black Hole Entanglement. He went into how differently Entropy and "Complexity" scale for entangled QM objects, their maximum "Complexity" being oodles larger than Max Entropy - and the time to reach both differing accordingly. He drew a curve (I think) with an asymptote like the one you show above for the growth rate (0.06 per billion years, i.e. 1/17.3)?

Halfway through one by a colleague of his on Black Holes and Superconductivity... which has me on the edge of my seat.

When I first started following your calculator post(s) I was wondering if one could estimate Entropy vs. Complexity of the Universe, just the relative gross curvature over time, like you have done with time, size and rate of expansion. I almost chimed in, but... I'm clueless. Susskind had a formula for QM "Entanglement Entropy?". I was surprised by that. Which made me go looking for the guy Chaisson again. Turns out in this new "Cosmic Evolution Book" Chaisson says he's going to show how "Complexity" can be analyzed quantitatively (at least as a qualitatively described process...). I am looking forward to learning... wth he's talking about.

Well, it could be reaching equilibrium with the ocean of entangled QM states comprising our future in the "Bulk". :confused:

Seriously though, this guy is one of Susskind's crew... trying to just figure out what he was talking about in this paper broke my head when it blew my mind.

Nuts and Bolts for Creating Space
Bartlomiej Czech, Lampros Lamprou
(Submitted on 16 Sep 2014)
We discuss the way in which field theory quantities assemble the spatial geometry of three-dimensional anti-de Sitter space (AdS3). The field theory ingredients are the entanglement entropies of boundary intervals. A point in AdS3 corresponds to a collection of boundary intervals, which is selected by a variational principle we discuss. Coordinates in AdS3 are integration constants of the resulting equation of motion. We propose a distance function for this collection of points, which obeys the triangle inequality as a consequence of the strong subadditivity of entropy. Our construction correctly reproduces the static slice of AdS3 and the Ryu-Takayanagi relation between geodesics and entanglement entropies. We discuss how these results extend to quotients of AdS3 -- the conical defect and the BTZ geometries. In these cases, the set of entanglement entropies must be supplemented by other field theory quantities, which can carry the information about lengths of non-minimal geodesics.
http://arxiv.org/abs/1409.4473
 
  • #25
sorry guys. Sometimes I realize too late when I don't make any sense out-loud... over excitement. I didn't mean to goof-out what was a really instructional thread.

I do realize that the idea of "Growth Rate" coming into or from "Thermal Equilibrium" is nonsensical on the face of it. There were a number of jumping bean thoughts included (but left out of communication) whereby somehow the rate of expansion of the universe (which does look like it's approaching an equilibrium-like asymptote) could turn out to be caused by some non-equilibrium process related to deep fundamental interactions - with the thumbprint of entropy/free energy.

I really was (roughly) following your calculation. That curve shape just got the jumping beans going.
 
  • #26
I don't think it's nonsensical. The universe is cooling down from an initial hot state. Gravitational clumping can make thermodynamic thinking really tricky here but I don't think it's wrong if used carefully, taking into account the limitations (i.e. defining the energy of the universe in a meaningful way, etc.)
 
  • #27
To recap, in this thread we are examining the standard LambdaCDM cosmic model with some scaling of the time variable that simplifies the formulas, and makes it easy to draw curves: curves showing how the expansion rate (percent per billion years) changes over time, how a sample distance grows over time, and so on.
Time is measured in units of 17.3 billion years. If it makes it easier to remember think of that as a day in the life of the universe: a Uday.
I've been using the variable x to denote time measured in Udays.
Our unnormalized scale factor ##u(x) = \sinh^{2/3}(1.5x)## shows the way a generic distance grows.
Normalizing it to equal 1 at present involves dividing by u(xnow) = 1.311.
The normalized scale factor is denoted a(x) = u(x)/1.311.
The fractional distance growth rate is ##H(x) = u'(x)/u(x) = a'(x)/a(x) = \coth(1.5x)##

Note that the normalized scale factor a is something we observe. When some light comes in and we see that characteristic wavelengths are twice as long as when they were emitted, we know the light began its journey when a = 0.5, when distances were half the size they are today.

So by the same token the unnormalized u = 1.311a is observed. We just have to measure a and multiply it by 1.311.

Knowing the wavelength enlargement we can immediately calculate the TIME x (measured in Udays) when that light was emitted and started on its way to us. Here is x as a function of the number u.
$$x = \frac{2}{3}\ln(\sqrt{u^3} + \sqrt{u^3+1})$$
Since u = 1.311a, we can also write time as a function of the more readily observed number a.
$$x = \frac{2}{3}\ln(\sqrt{(1.311a)^3} + \sqrt{(1.311a)^3+1})$$
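In code the inversion is a one-liner (a sketch; the function name is mine):
Code:
# Sketch: x-time at which light observed today with scale factor a was emitted.
import numpy as np

def x_emitted(a):
    u3 = (1.311 * a) ** 3
    return (2 / 3) * np.log(np.sqrt(u3) + np.sqrt(u3 + 1))

print(x_emitted(0.216))  # ~0.1, i.e. about year 1.73 billion
print(x_emitted(1.0))    # ~0.797, the age of expansion in Udays
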
Here's a plot of the time x(a) when the light was emitted as a function of how much smaller the wavelength was at emission compared with the wavelength on arrival.

[Plot of the emission time x(a) against the scale factor a at emission]

You can see, for example, that if the wavelength at emission was 0.2 of its present size, then that light was emitted at time x = 0.1. Since the time unit is 17.3 billion years, that means it was emitted in year 1.73 billion.
On the other hand if the wavelength was originally 0.8 of its present size, which means it has only been enlarged by a factor of 1.25, then the light was emitted more recently: at time x = 0.6.
Multiplying by our 17.3 billion year time unit we learn it was emitted around year 10.4 billion.

You might think the curve is useless to the right of a = 1 (which denotes the present). But it shows the shape and, come to think of it, tells something about the future. If today we send a flash of light which later is picked up in a distant galaxy by which time its wavelengths are 40% larger---1.4 of their present size---what year will that be?
You can see from the curve that it will be time x=1.1
which means the year will be 17.3+1.73=19.03---about year 19 billion.

And the curve also tells how long the universe has been expanding, just evaluate the time at a=1.
 
  • #28
So does the curve essentially "redshift date" a photon? And you start with the x-axis (independent variable) as the "intrinsic wavelength" (I'm guessing there is a precise term for that?) because, really, that's how the Hubble constant was discovered. We know what atomic spectra look like, and it was obvious they were getting stretched when we looked at objects we knew (by other means, like cepheid variables, or similar standard candles) were farther away.

Couple times I've wished for axis labels btw.
 
  • #29
Jimster41 said:
So does the curve essentially "redshift date" a photon?
Nice way to put it - redshift dating, instead of carbon dating.

It works only if you know the original wavelength of the photon though (e.g. if it corresponds to a known emission line), or if you know what the original spectrum of a source looks like.

But to do it you must have established what curve to use first - and Hubble's law was established before that, observationally, as a redshift - distance relation (not a redshift - time relation, although this is equivalent since time and distance are in essence the same thing for light.).
 
  • #30
Thanks for reminding me of the fact that Hubble's was an empirical discovery. When I first read about that (and it's still fun to think about), how the hairs on Sir Edwin's neck must have stood up when he saw that utterly non-random thing buried in starlight.
 
  • #31
True... I would assume he must have checked his data more than once before daring to publish such outrageous results : )
 
  • #32
I keep wondering about the curvature of the plot. I think it might serve a pedagogical purpose to show how almost linear it is, but curvature is a function of the metric, sorta. And are we making the flatness more prominent arbitrarily? What would it look like if the y-axis was not a linear scale, or something? My sense is that if you squashed it the curvature would be more noticeable, and "recognizable" (almost looks like a dynamic control response to me, if I mentally squash it, and tilt my head, and squint), not that the goal is to "recognize it". But I do think it is a valid way of puzzling over the spectrum of things it could be saying.
 
  • #34
I'm somewhat interested in how one might find a best-fit value of ##H_\infty## (or equivalently ##1/H_\infty##) using this model, if all one was given to start with was the present-day Hubble rate H0 = 1/(14.4 Gy) and the redshift-distance data (e.g. from type Ia supernovae).
I think Wabbit may have a more efficient solution to this problem, or someone else may. My approach is kind of clunky and laborious.

Basically I TRY different values of the Hubble time T, generate a redshift-distance curve numerically, and see which fits the data best.

So let's assume we know the current Hubble time T0 = 14.4 Gy, and we want to compare two alternatives T = 17.3 Gy and T = 18.3 Gy. Call them α and β.

First of all we have two different versions of xnow. Since ##\tanh(1.5\,x_{now}) = T_0/T##, we have ##x_{now} = \frac{1}{3}\ln\frac{T+T_0}{T-T_0}##:

##x_{now\,\alpha} = \frac{1}{3}\ln\frac{17.3 + 14.4}{17.3 - 14.4} = 0.7972##
##x_{now\,\beta} = \frac{1}{3}\ln\frac{18.3 + 14.4}{18.3 - 14.4} = 0.7088##

Next we apply the scale-factor function ##u(x) = \sinh^{2/3}(1.5x)## to these two times.

##u_\alpha = \sinh^{2/3}(1.5 \times 0.7972) = 1.311##
##u_\beta = \sinh^{2/3}(1.5 \times 0.7088) = 1.176##

And normalize the two scale-factors
##a_\alpha(x) = \sinh^{2/3}(1.5x)/1.311##
##a_\beta(x) = \sinh^{2/3}(1.5x)/1.176##

Now given a sequence of observed redshifts ##z_i##
we can solve, as in post #27 above, for the emission time for each
$$\frac{1}{1+z} = a_{\alpha}(x) = \sinh^{2/3}(1.5x_{em\alpha})/1.311$$
$$\frac{1}{1+z} = a_{\beta}(x) = \sinh^{2/3}(1.5x_{em\beta})/1.176$$

And then integrate to find the present-day distance to the emitter, in each case:

$$D_{now}(x_{em}) = \int_{x_{em}}^{x_{now}}\frac{cdx}{a(x)}$$

For the given sequence of redshifts, this provides two alternative sequences of distances to compare, to see which matches the measured redshift-distance data.
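Here is the whole trial-and-compare procedure as a Python sketch (the code and helper names are mine; the candidate Hubble times and T0 = 14.4 Gy are the ones above):
Code:
# Sketch of the trial-and-compare fit for the long-term Hubble time T.
import numpy as np
from scipy.integrate import quad

T0 = 14.4  # present-day Hubble time in Gy

def x_now(T):
    """x_now for candidate T, from tanh(1.5 x_now) = T0/T."""
    return (2 / 3) * np.arctanh(T0 / T)

def D_now(z, T):
    """Model distance now (Gly) to a source seen at redshift z, given T."""
    xn = x_now(T)
    u_now = np.sinh(1.5 * xn) ** (2 / 3)
    u_em = u_now / (1 + z)                    # from 1/(1+z) = u_em/u_now
    x_em = (2 / 3) * np.arcsinh(u_em ** 1.5)  # invert u = sinh^(2/3)(1.5x)
    val, _ = quad(lambda x: u_now / np.sinh(1.5 * x) ** (2 / 3), x_em, xn)
    return T * val                            # distances scale with T

for z in (0.1, 0.5, 1.0):
    print(z, round(D_now(z, 17.3), 2), round(D_now(z, 18.3), 2))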
 
  • #35
What you describe actually sounds to me pretty close to the model fitting done to the same effect in the 1998/1999 papers, though I found the similar estimation done in http://arxiv.org/abs/1105.3470 easier to follow (and they have charts. I'm a sucker for charts.)

One thing to note - though this is a bit of work to use, and numerical integration is probably what I'll use in practice : $$ D(a)=\int_a^1\frac{da'}{a'^2H(a')}=\int_0^z\frac{dz'}{H(z')} $$
For the matter-lambda model this is an elliptic integral, which (modulo some rather painful argument transformations) is available as a predefined function in mathematica etc. I couldn't find one in Excel, but looking for one I stumbled upon the paper below which includes a very short code claimed to be accurate for programming such a function :
Cylindrical Magnets and Ideal Solenoids, Norman Derby, Stanislaw Olbert
http://arxiv.org/abs/0909.3880
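If anyone wants the plain numerical route instead of the elliptic functions, a quick scipy sketch (my code; the matter fraction here is the one implied by the 14.4 and 17.3 Gy time scales):
Code:
# Sketch: D(a) for flat matter + Lambda by direct numerical integration.
import numpy as np
from scipy.integrate import quad

H0 = 1 / 14.4                    # per Gy, present Hubble rate
Om = 1 - (14.4 / 17.3) ** 2      # matter fraction implied by the two scales

def H(a):
    """Hubble rate as a function of scale factor a."""
    return H0 * np.sqrt(Om / a**3 + (1 - Om))

def D(a_em):
    """Comoving distance (Gly, c = 1) to a source seen at scale factor a_em."""
    val, _ = quad(lambda a: 1 / (a * a * H(a)), a_em, 1)
    return val

print(D(0.5))  # ~11 Gly, consistent with the table in post #3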
 