Modelling density as a function of time?

In summary, the conversation discusses a simple model that attempts to understand the changing energy density of the universe over time. The model is based on basic assumptions, such as the present-day critical density and the Friedmann-Fluid equation. The results of the model align with accepted energy density values and observations from the WMAP data. However, the issue of present-day radiation energy density proved problematic within the model and further clarification is needed. The model also raises questions about the cause of the universe's expansion and the role of dark energy in early epochs.
  • #1
mysearch
This post simply represents a learning process, my own. As such, it is only attempting to understand how the energy density might have changed with time and what can be inferred from this. There are 3 diagrams attached to this post, figure-1 is an attempt to show the relative density of matter, CDM, radiation and dark energy as a function of time, figure-2 shows the value of (H) resulting from the combination of these densities, while figure-3 is sourced from the following site based on WMAP data: http://universe-review.ca/F02-cosmicbg.htm

As stated, I am not trying to pretend that this is anything but a very simple model; its only real purpose is to try to confirm whether the basic assumptions on which it is based are, in principle, valid. This requires some explanation of how figure-1 was drawn. As a starting point, the present-day critical density is assumed to be 9.54E-27 kg/m^3, which is then converted to an energy density of 8.53E-10 joules/m^3. From this, the corresponding energy density components have been defined:

(1) Matter: 4%, 3.41E-11 joules/m^3
(2) CDM: 23%, 1.96E-10 joules/m^3
(3) Dark energy: 73%, 6.23E-10 joules/m^3

The issue of the present-day radiation energy density proved problematic within the model, but I will come back to this issue after introducing a few other basic assumptions. However, the following link defines the radiation energy density as 0.64E-13 joules/m^3 by way of reference: http://hyperphysics.phy-astr.gsu.edu/hbase/Astro/neutemp.html#c1

The Friedmann-Fluid equation outlines the rate of change of density with time as a function of (H), current density (rho) and the equation of state encapsulated by [w]. While this equation is normally shown with a pressure [P] term, this has been converted via P = w*rho*c^2. As such, the Fluid equation is presented in the form:

(4) d(rho)/dt = -3H*rho(1+w)

The following values of [w] were assumed:
(5) matter & CDM [w] = 0…….d(rho)/dt = -3H*rho
(6) radiation [w] = 1/3…………d(rho)/dt = -4H*rho
(7) dark energy [w] = -1……….d(rho)/dt = 0
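As an illustrative sketch only (using the assumed densities and value of H quoted above, with H converted to s^-1), the per-second rates in (5)-(7) can be evaluated directly:

```javascript
// Sketch of equations (5)-(7): present-day d(rho)/dt per component.
// All input values are the assumptions quoted in this post.
var H = 2.31e-18;                     // Hubble constant in s^-1 (71 km/s/Mpc)
var rhoMatter = 3.41e-11 + 1.96e-10;  // baryons + CDM, joules/m^3
var rhoRad = 7.03e-14;                // radiation, joules/m^3
var rhoLambda = 6.23e-10;             // dark energy, joules/m^3

// Equation (4): d(rho)/dt = -3*H*rho*(1+w)
function dRhoDt(rho, w) {
	return -3 * H * rho * (1 + w);
}

var dMatter = dRhoDt(rhoMatter, 0);    // reduces to -3*H*rho
var dRad = dRhoDt(rhoRad, 1 / 3);      // reduces to -4*H*rho
var dLambda = dRhoDt(rhoLambda, -1);   // reduces to 0, i.e. constant density
```

Note how the dark energy rate collapses to zero for [w=-1], which is why its density stays flat in figure-1.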

As such, the current value of d(rho)/dt for each component was approximated by substituting the energy densities outlined above along with H=71km/s/Mpc or 2.31E-18 m/s/m. This gave a rate of change per second, which was then simply aggregated up to approximate a rate of change per 0.25 billion years. This provided a starting point on an assumed expansion timeline of 13.75 billion years. Using the aggregated value of d(rho)/0.25 billion years, the subsequent values at intervals of –0.25 billion years were calculated along with a corresponding value of (H) based on Friedmann’s equation for each 0.25 billion year increment:

(8) H^2 = 8/3*pi*G*rho[matter+cdm+radiation+dark energy]
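As a quick sanity check of equation (8) (my own sketch; G and the km-per-Mpc conversion are standard values not quoted above), the assumed present-day critical density does return a value of H close to 71 km/s/Mpc:

```javascript
// Sketch of equation (8): H = sqrt((8/3)*pi*G*rho) with the assumed
// present-day critical (mass) density.
var G = 6.674e-11;            // gravitational constant, m^3 kg^-1 s^-2
var rhoCrit = 9.54e-27;       // assumed critical density, kg/m^3
var H = Math.sqrt((8 / 3) * Math.PI * G * rhoCrit);   // s^-1, ~2.31e-18
var kmPerMpc = 3.0857e19;     // kilometres in one megaparsec
var H_kmsMpc = H * kmPerMpc;  // converts to the familiar km/s/Mpc form
```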

While this model is only attempting to verify the basic principles at work, its results align with today’s accepted energy density values and the results shown in figure-3. As another reference point, the value of (H) was synchronised at 380,000 years to that given by the Morgan calculator for a redshift of 1090, which corresponds to a CMB temperature range of 3000K to 2.725K:
http://www.uni.edu/morgans/ajjar/Cosmology/cosmos.html

At first glance, the results in figure-3 seem to be reflected in figure-1. However, the only way I could make this fit was to assume a much higher value of the radiation density than that provided in the earlier ‘hyperphysics’ link. The starting radiation energy density had to be set to 2.39E-12 j/m^3 rather than 0.64E-13 j/m^3 in order to arrive at the required 25% figure at 380,000 years. Given that the rate of change of radiation density is driven by equation (6), i.e. -4H*rho, it is not totally clear why this discrepancy occurred; therefore any clarification would be welcomed, along with any other deeper insights into this process.
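As one possible cross-check (my own sketch, not part of the model above): if a radiation density of order 7E-14 joules/m^3, close to the hyperphysics figure, is scaled analytically by 1/a^4 rather than stepped in 0.25-billion-year increments, it does appear to reproduce the ~25% radiation fraction at z=1090. This may suggest the discrepancy lies in the coarse time-stepping rather than in the input density:

```javascript
// Scale each assumed present-day energy density back to z = 1090 using
// the exact power-law behaviour implied by equations (5)-(7).
var a = 1 / 1091;                  // scale factor at redshift z = 1090
var uM0 = 3.41e-11 + 1.96e-10;     // matter + CDM, J/m^3, scales as 1/a^3
var uR0 = 7.03e-14;                // radiation, J/m^3, scales as 1/a^4
var uL0 = 6.23e-10;                // dark energy, J/m^3, constant

var uM = uM0 / Math.pow(a, 3);
var uR = uR0 / Math.pow(a, 4);
var radFraction = uR / (uM + uR + uL0);   // comes out near 0.25
```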

I have some additional issues which I have outlined in post #2. Thanks
 

Attachments

  • Figure-1.jpg
  • Figure-2.jpg
  • Figure-3.jpg
  • #2
Follow-on from #1

While it is accepted that the model results outlined in post #1 are based on a very crude approximation, they seem to reflect a process that would explain the phase transition from a universe whose expansion initially slows under gravity, due to matter and CDM, to a universe whose expansion starts to accelerate under the effects of dark energy. A rationalised form of the acceleration equation can be presented in the form:

(9) a/r = -(4/3)*pi*G*rho(1+3w)

Again, substituting for (w):
(10) matter & CDM [w] = 0……. a/r = -(4/3)*pi*G*rho
(11) radiation [w] = 1/3…………. a/r = -(8/3)*pi*G*rho
(12) dark energy [w] = -1………. a/r = +(8/3)*pi*G*rho

While the energy density of dark energy becomes comparable to matter+CDM in figure-1 around 9.5 billion years, equation (12) suggests that dark energy would be twice as effective per unit of energy density and act in opposition to gravity. Therefore, dark energy would have started to reverse the slow down due to gravity at an earlier time, i.e. 6-7 billion years?
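A rough sketch of this crossover estimate (assuming present-day fractions of 27% matter+CDM and 73% dark energy): setting the attractive term of (10) equal to the repulsive term of (12) gives the scale factor at which the net acceleration changes sign, which lands in the same 6-7 billion year ballpark suggested above:

```javascript
// Find the scale factor where rho_m(a) = 2*rho_lambda, i.e. where the
// acceleration of equation (9) flips sign. Matter scales as 1/a^3 while
// dark energy is constant, so Om/a^3 = 2*Ol => a = (Om/(2*Ol))^(1/3).
var Om = 0.27, Ol = 0.73;              // assumed present-day fractions
var aCross = Math.pow(Om / (2 * Ol), 1 / 3);   // ~0.57
var zCross = 1 / aCross - 1;                   // ~0.76, i.e. several Gyr ago
```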

As explained previously, the only purpose of this model is simply to help gain a more intuitive picture of the effects of expansion on energy density, which might hopefully lead to a better understanding of other effects. However, there are a few things that puzzle me about this model:

1) What drove the expansion of the universe in the first place?

2) Figure-1 suggests that the gravitational energy density of CDM, radiation and matter would have overwhelmed the expansive effects of dark energy in the very early universe?

3) Equally, if dark energy only expands space, i.e. imparts no momentum to the objects being moved apart, what else expanded the early universe?

Again, any clarifications of these issues would be welcomed.
 
  • #3
Figure-1, in post #1, shows a plot of the relative component energy densities as a function of time. The thing that surprised me about the plot was that the only component I had thought explained expansion appears to become negligible as you turn the clock back in time. As such, the following question seems reasonable:

What caused the universe to expand in its earliest epoch?

Friedmann’s equation seems to allow the velocity of expansion to be calculated in terms of Hubble’s constant, e.g. velocity per unit distance. If I assume that (H) is primarily determined by observation to be 71km/s/Mpc, then a first-order estimate of the density substituted into Friedmann’s equation is 9.54E-27 kg/m^3, or 8.53E-10 joules/m^3 if converted into energy density.

As I understand it, observation also appears to support the assumption of a near spatially flat universe, i.e. k=0, which leads to a form of the Friedmann equation that can predict (H) as a function of the scalefactor (a).

[1] [tex]H=H_0 \sqrt{\Omega_R a^{-4} + \Omega_M a^{-3} + \Omega_K a^{-2} + \Omega_\Lambda } [/tex]

However, on the basis that a=1/(1+z), it would seem that the value of (H) can be determined from a measured redshift, e.g. z=1090 for the CMB, corresponding to +380,000 years. The Morgan calculator gives a value of H=1,329,466 km/s/Mpc in comparison to today’s figure of 71 km/s/Mpc.
http://www.uni.edu/morgans/ajjar/Cosmology/cosmos.html
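Equation [1] can be sketched directly (with k=0, so the Omega_K term drops out); using the omega values assumed elsewhere in this thread, it returns a value of the same order as the Morgan calculator at z=1090:

```javascript
// Sketch of equation [1]: H as a function of the scale factor (a),
// assuming a flat universe and the omega values used in this thread.
var H0 = 71;                           // present-day Hubble, km/s/Mpc
var Or = 0.0000824, Om = 0.27, Ol = 0.73;   // radiation, matter+CDM, lambda

function hubble(a) {
	return H0 * Math.sqrt(Or / Math.pow(a, 4) + Om / Math.pow(a, 3) + Ol);
}

var aCMB = 1 / (1 + 1090);       // scale factor at CMB decoupling
var Hcmb = hubble(aCMB);         // of order 1.5e6 km/s/Mpc
```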

However, how does the Friedmann equation explain this expansive velocity?

It has always struck me as odd that Friedmann’s equation can be derived from the classical equations associated with the conservation of energy:

[2] [tex]\tfrac{1}{2}mv^2 = \frac{GMm}{r} [/tex]
[3] [tex]v^2 = \frac{2GM}{r}[/tex] where [tex]M=\rho \cdot \frac{4}{3} \pi r^3 [/tex]
[4] [tex]H^2 = \frac{v^2}{r^2} = \frac{8}{3} \pi G \rho [/tex]

Another odd fact about this equation, in a classical context, is that the velocity [v] corresponds to the free-fall velocity under gravity. Of course, the velocity associated with [H] is the velocity of expansion not contraction under gravity.

Yet another odd fact is that the determination of (H) seems to include the density of dark energy, which, according to the classical equation [4], would be treated as a gravitational mass density. However, dark energy is described as having a net negative pressure, which suggests it acts as an analogous form of anti-gravity. As such, I would have thought it would work in opposition to the other density components, at least with respect to [v].

Of course, figure-1 in post #1 suggests that the expansive effects of dark energy at z=1090 would have been negligible in comparison to the energy density contributing to gravitational collapse.

So, what is the missing factor that started the universe expanding so fast?

At this point, it is possibly worth highlighting that the big bang model seems to stress the importance that it should not be considered as an explosion imparting momentum, but rather the uniform expansion of space.

If so, there seems to be little scope left for any classical notion of inertia and momentum through which the expansion of the universe was maintained up until the emergence of dark energy as an effective energy density, i.e. ~6-7 billion years after the main event?
 
  • #4
Mysearch,
this is potentially a very constructive thread and you have put quite a bit of thoughtful work into it already, as well as listing some deep (and rather difficult) questions. This post is simply to provide an authoritative review paper as source material.

My own side comment: humans are currently in a very humble position as regards the universe. They (we) don't really understand the causes underlying the process of distance expansion, but are still in the business of simply mapping the history of it, using various models.

A good scholarly review paper will not just tell you what is known, but it will point out the extent of uncertainties, and the gaps. The aim is to give a survey and status report on how the investigation is going. Not to jump prematurely to conclusions. Not to overstate the extent of knowledge.

I have no sure expertise to choose the currently best most up-to-date review paper, but I will guess that it is this January 2008 one by Eric Linder. Linder is a prominent senior guy at UC Berkeley of the kind that people commission to do review articles summarizing their field. The article is not 100 percent accessible. It has comprehensible patches and also heavily technical parts. But it is the most recent authoritative review I know.

http://arxiv.org/abs/0801.2968
Mapping the Cosmological Expansion
Eric V. Linder
49 pages, 29 figures; Review invited for Reports on Progress in Physics
(Submitted on 18 Jan 2008)

"The ability to map the cosmological expansion has developed enormously, spurred by the turning point one decade ago of the discovery of cosmic acceleration. The standard model of cosmology has shifted from a matter dominated, standard gravity, decelerating expansion to the present search for the origin of acceleration in the cosmic expansion. We present a wide ranging review of the tools, challenges, and physical interpretations. The tools include direct measures of cosmic scales through Type Ia supernova luminosity distances, and angular distance scales of baryon acoustic oscillation and cosmic microwave background density perturbations, as well as indirect probes such as the effect of cosmic expansion on the growth of matter density fluctuations. Accurate mapping of the expansion requires understanding of systematic uncertainties in both the measurements and the theoretical framework, but the result will give important clues to the nature of the physics behind accelerating expansion and to the fate of the universe."

Being able to plot the history of expansion, that is, the history of the scalefactor a(t), is more or less equivalent I think to what you were asking about, that is, reconstructing the history of the density. Because as the scalefactor increases the various components of the density decrease in well-understood ways, as you have listed. So it comes down to determining the past history of a(t)---and this is what Linder is writing about.

See his very first figure, Figure 1 on page 5, which plots the past history of a(t) under various assumptions using various models.
 
  • #5
Part 1 of 4: Premise of the Model

Following the initial posts #1-4, posts #5-8 reflect the assumptions and issues surrounding the calculation of the basic timeline of the LCDM model. As such, I would welcome any verification of my assumptions. Thanks

In post #1, I outlined my first attempt to create a very simple LCDM model that showed the component energy densities as a function of time. While based on the Friedmann-Fluid equations, the method was only a crude approximation and depended on the age of the universe being input as a known parameter, i.e. 13.7 billion years. As such, I didn’t really understand how the age of the universe was being estimated or how the timeline to CMB decoupling could be calculated to be 370,000 years. After a bit of searching, I came across the following equation

[1] [tex] \Delta t = \frac {\Delta a}{aH_0 \sqrt{ (\Omega_M /a^3) + (\Omega_R /a^4) +(\Omega_\Lambda) } } [/tex]

This equation appears to be the basis of most of the following cosmology calculators, although this is not obvious by quickly looking at the Javascript code:

http://www.astro.ucla.edu/~wright/CosmoCalc.html
http://www.uni.edu/morgans/ajjar/Cosmology/cosmos.html
http://www.geocities.com/alschairn/cc_e.htm

Part-3 of this series of posts presents a cut-down version of a Javascript that implements just equation [1], which anybody can copy, test and modify. The results of this script are shown in Part-2 and basically align to the results of the calculators listed above. Of course, I would be interested in knowing if I have made any mistakes or wrong assumptions. The essence of equation [1] seems to be based on the following basic assumption:

[2] [tex] \frac {\Delta t }{t} = \frac {\Delta a }{a} [/tex]

This equation seems to suggest that the fractional change in time [t] equals the fractional change in the scale factor (a). On the basis that [H=1/t], we would get:

[3] [tex] \Delta t = \frac {\Delta a }{aH} [/tex]

Here the value of [H] has to correspond to the time linked to the scalefactor (a). This is done as shown in the denominator of equation [1].
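For reference, the same integration can be distilled into a few lines (a sketch using the omega values assumed in this thread; the full browser-based script appears in Part-3). It accumulates dt = da/(aH(a)) across the whole range of the scale factor and lands close to the 13.6-billion-year figure:

```javascript
// Compact sketch of equation [1]: sum dt = da/(a*H(a)) for a from ~0 to 1.
// TMyr converts 1/H (with H in km/s/Mpc) into megayears.
var H0 = 71, TMyr = 974740;
var Om = 0.27, Or = 0.0000824, Ol = 0.73;   // assumed present-day omegas
var N = 100000, da = 1 / N, age = 0;

for (var i = 1; i <= N; i++) {
	var a = i * da;     // step the scale factor up to the present a = 1
	var E = Math.sqrt(Om / Math.pow(a, 3) + Or / Math.pow(a, 4) + Ol);
	age += (TMyr / H0) * da / (a * E);      // dt contribution in Myr
}
// age accumulates to roughly 13,600 Myr with these inputs
```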
 
  • #6
Part 2 of 4: Results of the Model


For consistency, the results are presented in the format of the earlier graphs in post #1, but are now attached to this post. These results were actually produced by a spreadsheet implementation of the Javascript presented in Part-3. The main difference from the earlier graphs is that the value of the present-day radiation energy density has been changed to align with accepted WMAP data. This value is much smaller than my earlier assumption and results in the transition between radiation and matter domination being much more abrupt and within the immediate timeframe of CMB decoupling, i.e. +400,000 years.

However, the main point to highlight is that the estimate of the age of the universe is now calculated as the sum of all the [dt] values. The following results are a snapshot of the output of the Javascript in Part-3 with z=1090 as an input.

Code:
NOW Values: 
--------------
Scale Factor (a) = 1
Hubble (H) = 71 km/s/Mpc
Baryons = 4% 
CDM = 23% 
Radiation = 0.00824% 
Lambda = 73% 
Total = 100% 

THEN values 
--------------
Redshift (z) = 1090
Scale Factor (a) = 0.00091
Hubble (H) = 1553026 km/s/Mpc
Baryons = 11% 
CDM = 64% 
Radiation = 25% 
Lambda = 0% 
Total = 100% 

Age Estimates
---------------
Age at z = 1090 = 0.368336 Myrs
At Scale Factor (a) = 0.00001
Age of the Universe = 13622 Myrs
 

Attachments

  • Figure-4.jpg
  • Figure-5.jpg
  • #7
Part 3 of 4: Javascript of the Model


The Javascript below has been written in a very simple in-line format to reflect equation [1]. Therefore, if you wish to change the value of [z] input into the calculation, you will need to edit the value highlighted at the top of the script. While this differs from the more sophisticated approach of the formal calculators, I believe the results are essentially equivalent and hopefully make it a little easier to understand the key principles.

By way of instruction, all you need to do is cut and paste the following code into an HTML file on your own machine and then view it with your browser. You will need to enable JavaScript, and your browser may flag a security warning for locally run scripts. Of course, it's your choice, but you have the option to fully check the code below first.

Code:
<html> 
<head> 
<title>Test 2</title> 

<script language="JavaScript">


//	Key Input: z = Matched Event
//	========================================================
	var	z = 1090;		//e.g. decoupling z=1090
//	========================================================


// 	convert z-event into scale factor
//	---------------------------------
	var	az = (1/(1 +z));	
	var 	az = Math.round(az*1000000)/1000000;
	var	zDone = 0;	


//	NOW variables
//	-------------
	var	a = 1;			// Scale factor

	var 	a = Math.round(a*10000000)/10000000;

	var 	H0 = 71;		// Hubble H(now) in km/s/Mpc

// 	Conversion Factor
//	-----------------
	var	TMyr = 974740;		// t = TMyr/H MegaYears	

//	Omega NOW Values
//	----------------
	var	Ob = 0.04;		// Omega baryon matter
	var	Od = 0.23;		// omega cold dark matter
	var	Or = 0.0000824;		// Omega Radiation
	var	Ol = 0.73;		// Omega Lambda/dark energy
	var	Ot = Ob + Od + Ol;	// Omega total (radiation omitted as negligible)

//	THEN variables to be calculated
//	-------------------------------
	var	H = 0;			// Hubble H(then)
	var	Oba = 0;		// Omega baryon matter
	var	Oda = 0;		// omega cold dark matter
	var	Ora = 0;		// Omega Radiation
	var	Ola = 0;		// Omega Lambda/dark energy
	var	Ota = 0;		// Omega total

//	internal variables
//	------------------
	var	dt = 0;			// Calculated time [dt]
	var	dtz = 0;		// Time accumulated up to the z-event
	var	N = 100000;		// loop count
	var 	da = 1/N;		// scale factor increment

	var 	da = Math.round(da*1000000)/1000000;


// 	Output values at z-event
//	------------------------
	window.document.writeln("NOW Values: <br>" );
	window.document.writeln("--------------<br>" );
	window.document.writeln("Scale Factor (a) = " + a + "<br>" );
	window.document.writeln("Hubble (H) = " + H0 + " km/s/Mpc<br>" );
	window.document.writeln("Baryons = " + (Ob*100) + "% <br>" );
	window.document.writeln("CDM = " + (Od*100) + "% <br> " );
	window.document.writeln("Radiation = " + Math.round((Or*100)*100000)/100000 + "% <br>" );
	window.document.writeln("Lambda = " + (Ol*100) + "% <br>" );
	window.document.writeln("Total = " + (Ot*100) + "% <br> " );
	window.document.writeln("<br>" );

//	--------------------------
//	START OF MAIN PROGRAM LOOP
//	--------------------------

// 	loops from 1 to 0 in increments of da
//	-------------------------------------
	while (a > (0+da))		
	{
// 		decrement scale factor from 1 to 0 by da
//		----------------------------------------
		a=Math.round( (a-da)*1000000)/1000000;


// 		Calculate relative omega values at scale factor
//		-----------------------------------------------
		Oba = Ob/Math.pow(a,3);		//baryon matter at 1/a^3
		Oda = Od/Math.pow(a,3);		//dark matter at 1/a^3
		Ora = Or/Math.pow(a,4);		//radiation at 1/a^4
		Ola = Ol/Math.pow(a,0);		//dark energy at 1/a^0
		Ota = (Oba+Oda+Ora+Ola);


// 		Calculate (dt) and accumulate running total
//		-------------------------------------------
		dt = dt + ((TMyr/H0)*(da/(a*Math.sqrt(Ota))));


// 		When Scale factor corresponds to redshift
//		-----------------------------------------
		if ( (a < az) && (zDone == 0) )
		{

// 			Save total time to z-event
//			--------------------------
			dtz = dt;	
			zDone =1;		// do this path once only

// 			Calculate H(then)
//			-----------------
			H = H0*Math.sqrt(Ota);


// 			Output values at z
//			------------------
			window.document.writeln("THEN values <br>" );
			window.document.writeln("--------------<br>" );
			window.document.writeln("Redshift (z) = " + Math.round(((1/az)-1)) + "<br>" );
			window.document.writeln("Scale Factor (a) = " + a + "<br>" );
			window.document.writeln("Hubble (H) = " + Math.round(H) + " km/s/Mpc<br>" );
			window.document.writeln("Baryons = " + Math.round((Oba/Ota*100)) + "% <br>" );
			window.document.writeln("CDM = " + Math.round((Oda/Ota*100)) + "% <br>" );
			window.document.writeln("Radiation = " + Math.round((Ora/Ota*100)) + "% <br>" );
			window.document.writeln("Lambda = " + Math.round((Ola/Ota*100)) + "% <br>");
			window.document.writeln("Total = " + Math.round((Ota/Ota*100)) + "% <br>" );
			window.document.writeln("<br>" );
		};
	};

//	------------------------
//	END OF MAIN PROGRAM LOOP
//	------------------------

	window.document.writeln("Age Estimates<br>" );
	window.document.writeln("---------------<br>" );
	window.document.writeln("Age at z = " + z + " = " );
	window.document.writeln(Math.round( (dt-dtz)*1000000) / 1000000  + " Myrs<br>" );
	window.document.writeln("At Scale Factor (a) = " + a + "<br>" );
	window.document.writeln("Age of the Universe = " + Math.round(dt) + " Myrs<br>" );

</script> 

</head> 
<body> 
</body> 
</html>
 
  • #8
Part 4 of 4: Analysis of the Model


The following is simply a list of issues that I am still reflecting on:

1. First and foremost, the adage of “garbage in, garbage out” can always be linked to the results from any program.

2. However, does this model broadly reflect how the age of the universe is calculated by most cosmologists, or is it just another red herring?

3. It is also worth reiterating the link provided by Marcus (#4) to the article by Eric Linder: http://arxiv.org/abs/0801.2968. In comparison to the complexity of issues outlined in this article, this model is still very basic. This said, there is the possibility in the Linder article of losing sight of the basic principles amongst all the complexity listed.

4. In principle, the model seems to show how the expansion of the universe can be linked to the energy density of the various components as a function of time. However, it has to be highlighted that all the results of this model are predicated on 4 component energy densities, i.e. baryon matter, radiation, cold dark matter and dark energy, of which particle physics only really understands the first 2.

5. With this caveat highlighted, equation [1] appears to allow the total length of the timeline of the universe to be estimated along with key events for which we have a measured z-value, e.g. CMB.

6. In contrast, the model can only express the expansion of the universe as a relative change in the scale factor, i.e. there appears to be no absolute measure of the physical size of the universe, e.g. the distances quoted for the observed or particle horizons are only linked to our position in a potentially much, much bigger universe.

7. One of the biggest problems that I still have with this model is linked to dark energy: because its density remains constant while the other components grow as you go back in time, the contracting effects of gravity would appear to completely overwhelm the expansive effects of dark energy in the earlier universe.

8. I don’t understand why the recession velocity linked to the Hubble constant [H] should essentially align to the free-fall velocity of an object under gravitational acceleration from infinity.

9. At one level, there still appears to be an inference that the initial inflation of the universe imparted an expansive velocity. However, I am struggling to reconcile this description with the expansion of each unit volume of space as often associated with dark energy.​

As always, would appreciate any further insights or corrections of any false assumptions. Thanks.
 
  • #9
The question below is raised based on the `cause and effect` assumptions of a basic energy density model, i.e. an expanding universe requires some causative agent. I realize that this post is excessive in length and detail and therefore may be of little interest to many, but wanted to table some of my outstanding issues for later reference.

Why does Friedmann’s equation suggest an expansion velocity, when only the gravitational energy density seems to be taken into consideration?

Cause & Effect: Overview

I started this thread as a learning exercise in the hope that it would answer my questions about how the universe expanded as a function of density-pressure plotted against time. While I recognised that the model outlined in earlier posts is only an approximation, I was encouraged by the fact that virtually all the cosmological calculators seem to be predicated on the same basic energy density model. However, while I think I understand how the figures are calculated, I still don’t understand how the model works in terms of a cause and effect description of expansion.

In order to align to the standard model, the big bang is considered not as an explosion, but rather the uniform expansion of space. In this way, superluminal recession velocity is said not to violate the principles of special relativity, i.e. nothing is travelling faster than light in any local frame of reference. However, such a process seems to suggest, at least within the confines of this model, that an `expansive cause` has to continually exist, otherwise expansion would stop. However, I don’t see any mechanism in this model that describes how the expansion of space would `persist` in the absence of some cause, i.e. negative pressure counteracting gravity?

Given that the original big bang model pre-dates the idea of inflation and any ideas about cold dark matter and dark energy, I would like to understand what was cited as the original `cause` driving an expanding universe?

Cause & Effect: Details

http://arxiv.org/abs/astro-ph/0309756v4
This paper provides a Newtonian derivation of Friedmann’s equation relating to the kinetic and gravitational potential energy of a galaxy of mass (m) subject to expansion. For brevity, this is reduced to just 2 steps by assuming k=0:

[1] [tex]E_T = 1/2mv^2 + (- GMm/r) = 0 [/tex]

Substituting for M=ρV, where ρ = density and V = volume, we can jump straight to a form of Friedmann’s equation, as the paper above provides more detail:

[2] [tex] H^2 = \frac {v^2}{r^2} = \frac {8}{3} \pi G \rho [/tex]

The omission of the [k] term is explained in terms of a spatially flat universe and an equivalent energy density that is inversely proportional to the square of the scale factor [a], which seems to be negligible in comparison to matter, radiation and dark energy over the timeline from decoupling until now. This assumption appears to be supported by all the cosmology calculators reviewed. In addition, Einstein’s cosmological constant term is now considered as 1 of 4 energy density components, i.e. baryon and cold dark matter, radiation and dark energy. As far as I can see, the calculators also align to equation [2] in the following manner:

[3] [tex] H^2 = \left( \frac {\dot a}{a} \right)^2 = \frac{8}{3} \pi G( \rho_m + \rho_{cdm} + \rho_\lambda + \rho_\Lambda ) [/tex]

So while dark energy is considered to have negative pressure, only its gravitational energy density appears to be taken into account in Friedmann’s equation. As such, the recession velocity [v] implied by [H] in the basic derivation of equation [2] would appear to correspond to the free-fall velocity of object (m) under gravity. Therefore, this equation seems more reflective of contraction under gravity than expansion. So, even though equation [2] apparently gives the correct expansive velocity in terms of [H], it is not clear why this is the case. Proceeding towards the acceleration equation, which appears to amount to differentiating Friedmann’s equation with respect to time plus the substitution of the fluid equation:

[4] [tex] \frac {\ddot a}{a} = - \frac{4}{3} \pi G (\rho+3P/c^2) [/tex]

Again, we might change the form of this equation to show the net effects of substituting for [tex]P=w \rho c^2[/tex] for each component:

[5] [tex] \frac {\ddot a}{a} = -\frac{4}{3} \pi G( \rho_m + \rho_{cdm} + 2 \rho_\lambda - 2 \rho_\Lambda ) [/tex]

Now we see the effect of negative pressure associated with dark energy, when [w=-1], although calculations suggest that it has no net expansive effect until after 7 billion years. However, given that velocity and acceleration are both vector quantities, the sign of equations [2] and [4] require some interpretation. If we assume that the sign must relate to its direction, there doesn’t seem to be any obvious reason to associate a positive velocity with equation [2], although this is clearly the inference of observation, so we will start from this assumption.

So what about acceleration?

It can be seen that equation [4] comes with an inference that acceleration is negative by default. However, the interpretation of acceleration as positive or negative often depends not only on the rate of change, i.e. rising or falling, but also on whether it acts in the same direction as the velocity vector. Based on the attached diagram, which plots equation [4] against time, it is not entirely clear to me how the sign of the result should be interpreted, given the ambiguity about the direction of velocity. However:

o Let’s assume we follow the normal arrow of time in the forward direction.
o While the direction of the velocity vector associated with equation [2] is ambiguous to me, let’s align to observation and assume that it has to be expansive, i.e. positive.
o On this basis, acceleration before 7 billion years was rising from a large negative value to a smaller negative value because it was always acting in opposition to expansion?
o By the same token, the acceleration after 7 billion years is positive because it is increasing and acting in the same direction as expansion.
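The sign flip described in the bullets above can be sketched numerically (assuming the present-day fractions used earlier in this thread; radiation is written explicitly as rho_r here rather than via the λ subscript of equation [5]):

```javascript
// Evaluate the sign of the bracketed sum in equation [5], in units of the
// critical density: -(rho_m + rho_cdm + 2*rho_r - 2*rho_lambda).
var Om0 = 0.27, Or0 = 0.0000824, Ol0 = 0.73;   // assumed present-day values

function accelTerm(a) {
	// matter scales as 1/a^3, radiation as 1/a^4, dark energy is constant
	return -(Om0 / Math.pow(a, 3) + 2 * Or0 / Math.pow(a, 4) - 2 * Ol0);
}

var now = accelTerm(1);       // positive: expansion currently accelerating
var early = accelTerm(0.1);   // negative: matter-dominated deceleration
```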

Of course, the issue with this interpretation is that there appears to be no obvious `cause` for the assumed expansion within the confines of the model, although this position clearly contradicts observation. For obvious reasons, I would welcome any clarifications of my interpretations.
 

Attachments

  • Figure7.jpg
  • #10
Your formulas need work. You drop out the time metric in unique ways.
 
  • #11
Some of the following information was presented in another thread, but as it is derived from the same model as outlined in this thread, this post is simply intended to collate the information and open issues for future reference.

Initial Condition: Cause and Effect?
It has been suggested to me that I should view expansion as an “initial condition”, presumably inferring that expansion is an ongoing effect of some initial cause, possibly inflation? The problem I still perceive with this suggestion lies in the implied nature of the Big Bang, not as an explosion, but rather as the uniform expansion of each unit volume of space. However, in these terms, it is difficult to explain how the `effect` of expansion persisted for 7 billion years, given only a minuscule period of inflation within the first second of existence, without some `cause` being maintained.

Friedmann Equation & the Hubble Constant?
Based on some helpful clarifications, it appears that [H] is primarily determined by observation and measurement. If so, it would seem that Friedmann’s equation does not really explain the value of [H], only how the corresponding critical density is calculated from its value at any point in time. While the positive or expansive nature of [H] may be obvious from observation, what still puzzles me is why the Friedmann equation implicitly assumes that the value of [H] is positive, i.e. an expansive velocity, when the component energy densities of this equation only seem to suggest a universe that must collapse under gravitation prior to +7 billion years.
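This point can be sketched numerically: the Friedmann equation only fixes H squared, and hence |H|; taking the expanding (positive) root is an input from observation. The parameter values below (H0 = 71 km/s/Mpc and the Omega fractions) are illustrative assumptions, not the thread's exact inputs.

```python
import math

H0 = 71.0                                  # km/s/Mpc, illustrative
Om_r, Om_m, Om_L = 8.24e-5, 0.27, 0.73     # assumed density parameters, ~flat

def hubble(a):
    """Flat-model Friedmann equation: H^2 = H0^2 * (Om_r/a^4 + Om_m/a^3 + Om_L).
    The equation determines only H^2; choosing the positive (expanding) root
    is supplied by observation, not by the equation itself."""
    H2 = H0**2 * (Om_r / a**4 + Om_m / a**3 + Om_L)
    return math.sqrt(H2)

print(hubble(1.0))          # ~ 71 today
print(hubble(1.0 / 1091))   # vastly larger |H| at decoupling (z ~ 1090)
```

Note that because every density term on the right-hand side is positive, a collapsing universe with H of the same magnitude but opposite sign would satisfy the equation equally well, which is essentially the puzzle raised above.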

Conservation of Energy?
The anomaly of the following figures is the suggestion that the total energy of a comoving volume of space has increased primarily as a result of the assumption that dark energy density remains constant under expansion. While the universe may or may not be a closed energy system, there appears to be no mechanism in the model to explain any flow of energy into or out of this volume.

Energy Density at z=0;
Observable radius = 4.558e10 LY = 4.312e26 m
Observable volume = 3.965e32 LY^3 = 3.358e80 m^3

Baryons = …4%; ……….3.41e-11 joules/m^3; …1.145e70 joules
CDM = …...23%; ………1.96e-10 joules/m^3; …6.581e70 joules
Radiation = 0.00824%; …7.03e-14 joules/m^3; …2.360e67 joules
Lambda = …73%; …...…6.23e-10 joules/m^3; …2.092e71 joules
Total = …..100%; ……...8.53e-10 joules/m^3; ….2.864e71 joules

Energy Density at z=1090;
Observable radius = 4.180e7 LY = 3.954e23 m
Observable volume = 3.059e23 LY^3 = 2.589e71 m^3

Baryons = ..11%; …4.53e-2 joules/m^3; ….1.172e70 joules
CDM = …..64%; …2.60e-1 joules/m^3; …. 6.731e70 joules
Radiation = 25%; …1.02e-1 joules/m^3; …. 2.640e70 joules
Lambda = …0%; ….6.23e-10 joules/m^3; …1.612e62 joules
Total = ….100%; ….4.08e-1 joules/m^3; ….1.056e71 joules


So we are considering a comoving volume expanding from 41.80 million lightyears to 45.58 billion lightyears. Therefore, we can estimate the change in energy of this volume as follows:

Baryon:… 1.145e70 / 1.172e70 = 0.977
CDM: .…. 6.581e70 / 6.731e70 = 0.977
Radiation: 2.360e67 / 2.640e70 = 0.00089
Lambda:…2.092e71 / 1.612e62 = 1.297e9
Total: ……2.864e71 / 1.056e71 = 2.712


So under expansion, and within the limits of accuracy of the javascript program, the baryon and cold dark matter energy within this comoving volume remains essentially unchanged, as we would expect. The radiation energy falls due to the additional (1/a) wavelength expansion factor, while there is a huge increase in dark energy because, at constant density, its total scales in proportion to the volume. However, the bottom line appears to be that our comoving volume now has more energy, by a factor of about 2.712, than at the time of decoupling, i.e. +370,000 years.
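The bookkeeping above can be reproduced from simple scaling rules: matter energy density scales as 1/a^3, radiation as 1/a^4, and dark energy density stays constant. This sketch uses the thread's z = 0 densities and volume; the small discrepancies from the thread's figures (e.g. a total ratio near 2.8 here versus 2.712 above) come from rounding in the quoted inputs.

```python
a_ratio = 1091.0                # growth in scale factor since decoupling, 1 + z
V_now = 3.358e80                # present observable volume, m^3 (from the thread)
V_then = V_now / a_ratio**3     # same comoving volume at decoupling

# Present-day energy densities in joules/m^3 (thread's figures) and how each
# component's density scales with the scale factor: rho ~ a^-n.
rho_now = {"baryon": 3.41e-11, "cdm": 1.96e-10, "radiation": 7.03e-14, "lambda": 6.23e-10}
n = {"baryon": 3, "cdm": 3, "radiation": 4, "lambda": 0}

E_now  = {k: rho_now[k] * V_now for k in rho_now}
E_then = {k: rho_now[k] * a_ratio**n[k] * V_then for k in rho_now}

for k in rho_now:
    print(k, E_now[k] / E_then[k])
# baryon, cdm: ratio 1 (total energy conserved); radiation: 1/1091 (redshift);
# lambda: grows by a_ratio^3 ~ 1.3e9 (constant density in a growing volume)
total_ratio = sum(E_now.values()) / sum(E_then.values())
print(total_ratio)              # ~ 2.8: the comoving volume has gained energy
```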
 
  • #12
mysearch said:
...However, the bottom line appears to be that our comoving volume now has more energy, by a factor of about 2.712, than at the time of decoupling, i.e. +370,000 years.

That sounds about right. One would take the reciprocal of the old-to-new energy ratio to get a rough estimate.

1/0.37 is 2.7.
So if a certain comoving volume started out with X amount of energy, then it would now have 2.7X amount.

You could say that what it gained was 1.7X, that is, the increase is 1.7 times what it started with.
=====================

I remember being struck by this when I encountered it years ago. Einstein would have noticed that his GR theory does not have a global energy constancy law. I wonder what he thought about it.

Anyway there is no mathematical inconsistency because nobody says energy is supposed to be conserved at the level of the universe.

You can only prove that energy is supposed to be conserved in approximately or asymptotically flat spacetime geometry. In a dynamically changing unsteady curved geometry that bet is off. Or so I'm told.

One gets over it. :biggrin:

We humans can only do the best we can. GR is the best theory of gravity, and of the geometry of the universe, that we have. It makes amazingly accurate predictions. And it is a package deal: you buy GR and you have to give up certain truisms, because they don't apply in certain large-scale situations, or they apply only approximately and at some scale the approximation goes bad.

Maybe some day GR will be replaced by a different theory and maybe that theory will say more about what dark energy could be, or whether inflation happened, and if it did then what caused it.

Inflation is a widely accepted notion and it contradicts old-fashioned energy conservation expectations by a huge amount, far more than the factor of 2.7 we were talking about.
Put e^60 into the Google window and press return. You will get something like 10^26.
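As a quick check of that figure (60 is a commonly quoted number of inflationary e-folds; the exact number is model-dependent):

```python
import math

e_folds = 60                    # assumed number of inflationary e-folds
factor = math.exp(e_folds)      # linear expansion factor over inflation
print(f"e^{e_folds} = {factor:.3e}")   # ~ 1.142e+26, i.e. roughly 10^26
```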

Life is wonderful. There are mysteries. :biggrin:
 
  • #13
I have a few (likely minor) comments...I skimmed the above posts but not the math...and I could not read the labels on the charts...too small

However, I don’t see any mechanism in this model that describes how the expansion of space would `persist` in the absence of some cause, i.e. negative pressure counteracting gravity?

However, in these terms, it is difficult to explain how the `effect` of expansion persisted for 7 billion years given only a minuscule period of inflation within the first second of existence without some `cause` being maintained.

The above sure seems logical enough and the following may provide a few ideas...

Brian Greene, FABRIC OF THE COSMOS,

(p310) Where did all the mass/energy in the universe come from?...inflationary cosmology has cast the question in an intriguing new light...(p311) the total energy embodied by the inflaton field increases as the universe expands because it gains energy
from gravity...(p278) the cosmological term Einstein added to the equations of general relativity...shows a uniform negative pressure...an overall repulsive gravitational force...
(while) immeasurably tiny...(p281) it becomes important over vast cosmological distances...the same properties as a Higgs field stuck on a plateau...a supercooled Higgs field acts like a cosmological constant...

Greene goes on to discuss the theoretical Higgs ocean and its nonzero value, or spontaneous symmetry breaking...I will need to reread his chapters 9, 10 and 11, but I came away with the impression that the Higgs field may not only create mass but also power expansion...
 
  • #14
I have recognised for some time that I need to improve my understanding of GR maths. Of course, it might take a little time for the ideas to sink in, as apparently it is even more difficult than predictive text, BWTHDIK:rolleyes: However, in many ways, I started this thread simply to provide me with some practical understanding of what factors might be important to try and clarify within GR. So some responses to the comments raised:

Naty1: Thanks for the Brian Greene reference, I have his book `The Elegant Universe` somewhere, but have not seen the `Fabric of the Cosmos`. I have also started to read a little into inflation theory regarding cosmological scalar field etc, but have not yet seen any reference as to how this might explain the continued expansion for 7 billion years. Therefore, your extract might prove a useful lead. Sorry, I don't see the problem with the charts, when double-clicking on the charts, the labels seemed OK to me?
marcus said:
Anyway there is no mathematical inconsistency because nobody says energy is supposed to be conserved at the level of the universe. You can only prove that energy is supposed to be conserved in approximately or asymptotically flat spacetime geometry. In a dynamically changing unsteady curved geometry that bet is off. Or so I'm told.
There are 2 key issues captured in this comment that I am really trying to get a better intuitive feel for:

o The first is linked to the conservation of energy suggested in the classical derivation of the Friedmann, Fluid and Acceleration equations cited in #9. While I know many will simply dismiss any reference to this derivation on the grounds that it is not GR, I would still like to know why the classical derivation seems to lead to the same results as GR, given that the classical derivations of the Friedmann and Fluid equations are both based on the conservation of energy: the former on gravitational Ek+Ep, the latter on the adiabatic assumption about the universe as a thermodynamic system.

o The second issue concerns my confusion over exactly what people mean by curved geometry. As far as I can see, spatial curvature does not translate to an effective energy density at any point in the expansion modelled in this thread. In many instances, spacetime curvature seems to simply correspond to the expansion of space causing light to travel along a geodesic in spacetime, but this is an effect not a cause. Of course, GR in the form of the Schwarzschild metric does describe the effects on both space and time, albeit subject to an observer’s perspective, i.e. the actual observer in free-fall sees no curvature of spacetime. However, such effects appear to require proximity to the centre of gravity of a large mass or concentrated energy density, which is not obvious in a homogeneous model that is usually assumed to have no centre of gravity.​
 
  • #15
marcus said:
Maybe some day GR will be replaced by a different theory and maybe that theory will say more about what dark energy could be, or whether inflation happened, and if it did then what caused it. :

Marcus, what are your thoughts on how GR is ever going to be replaced if people with alternative views have a problem getting observation time on the big telescopes and the peer-reviewing people for the major journals pretty much insist that papers follow the "standard" GR line?

Frank
 
  • #16
81+ said:
Marcus, what are your thoughts on how GR is ever going to be replaced ...

Well I do think that major revolutions in science tend to be rather simple, in the sense that things tend to flow in a certain direction for a while. It's hard to have three or four major revolutions happening in one field at the same time---like horses pulling in different directions. So at any one time one general direction of change is apt to be favored.

I see my place as on the sidelines---to observe and report the current revolution in cosmology, rather than to judge whether it is the right revolution or whether some other revolution might be superior in some sense. I'm happy for the players out on the field to struggle over which main direction things go in.

If the current revolution fails then there will be a period of turmoil and another major initiative will emerge. But it's not my place to speculate what that might be, or what chance of success attends the present drive for change.

So anyway, you ask what my thoughts are. About changes occurring in the GRG scene. (General Relativity and Gravitation)
If you want an accurate picture you have to look at the program and speakers of GR18 the 2007 meeting, and keep an eye out for clues as to the program and speakers of GR19, the 2010 meeting.

We are in the midst of a radical upheaval in GRG, where classic cosmology is being replaced by nonsingular quantum cosmology. And a major collateral effort is focused on finding how to replace classic GR by quantum---a new quantum continuum to replace the old spacetime manifold.

I've been watching this fairly closely since 2003 and it's moving now in a very exciting way.
More later.
==============================
I got the link for GR18
http://www.grg18.com/
I do think you have quite a mistaken impression if you think the GRG community is closed-minded, or static, or stuck on a standard GR. Everybody seems to want the fundamental theory renovated, particularly to eliminate singularities, and also to be more realistic at a microscopic level. A quantum universe model is the obvious way to get that. The way you can tell the community wants that is that the highest professional honors awarded at GR18 were awarded to the people who are exploring the most radical reformulation of spacetime dynamics. Honors like being elected president of the International GRG Society, being awarded the main prize (kind of a nobel but just within the GRG community). And being invited to give talks to the plenary session (where everybody gathers) instead of just in one of the specialized parallel sessions and workshops.

So anyway the big triennial conference is a good window and you can check it out and gauge what is happening.

I think you will see a community that is open to change, actively looks for data that can challenge the classic models, is to some extent focused on what is generally perceived as the most promising avenues for change, and is generously supporting and rewarding today's revolutionaries.

That's just my personal view. You asked about my thoughts on the subject. But personal views like this do not make the best topics of discussion. I'd prefer to focus on what we can see happening in the field, by objective measures, than attempt to render judgment on whether it is for the best or not.
 
  • #17
Marcus, thanks. Very thoughtful, very helpful and very clear as always. I'll review the GR18 material and keep an eye out for GR19.

Frank
 
  • #18
81+ said:
Marcus, thanks. Very thoughtful, very helpful and very clear as always. I'll review the GR18 material and keep an eye out for GR19.

Frank

Thanks for the kind words, 81+! I realize that the GR18 programme may seem in large part incomprehensible. But I'm very glad you are having a direct look. As I say it is a window on the field, for better or for worse.

Whenever you go to primary sources, you take a burden off one, such as myself, who wishes to report. Because I then run less risk of misleading. You already have one foot on solid ground and there is less you need to take on faith.

Since you are having a direct eyeball look at the GR18 line-up, I will be happy to help by pointing out some details, which you can easily check by looking at the programme.

1. At that meeting the two leaders of Loop Quantum Cosmology were honored. Abhay Ashtekar was elected president of the GRG. Martin Bojowald shared the Xanthopoulos prize, and chaired a workshop on quantum cosmology.

2. The person he shared the prize with was Thomas Thiemann, one of the foremost Loop Quantum Gravity people: an early (mid-1990s) member of the community.

3. One of the plenary speakers was Renate Loll, whose team at Utrecht has developed a new mathematical model of the spacetime continuum which is naturally chaotic at microscopic (i.e. Planckian) scale and gradually smooths out at larger scale, to appear regular 4D. The path integral of the universe (matter free case, but with cosmo constant) has been shown to have the correct geometry, agreeing with classic GR. The Loll quantum universe is probably the closest any approach has gotten so far to a new model of the spacetime continuum. I urge you to check out the Loll SciAm link in my sig.

4. Another plenary speaker was Laurent Freidel, lead member of an international collaboration developing another mathematical model of spacetime---the spinfoam. Freidel also has a good shot at finding the continuum that will replace the classic GR model. He also has represented spacetime by a path integral. Spacetime as a quantum path thru a range of possible geometries, starting with some initial spatial shape and ending up at another.

Loll and Freidel, who were singled out by the GR18 organizers early in 2007, have proven to be good bets. Both their groups have been exceptionally productive in the year or more since the GR18 conference was held.

I would guess that they will both be featured as plenary speakers at GR19. We'll see.
==================================

At the same time there has been a rapid growth (almost an explosion) in the number of researchers and research papers dealing with nonsingular cosmology---typically some kind of bounce cosmology. You would not be able to see that from GR18 because it is more recent, but we can tell that by using the arxiv, Spires, or Harvard keyword search tools.

So what we are seeing seems to be a fast-moving situation---perhaps it could be called a quantum cosmology revolution in the field of conventional cosmology. There is no telling for sure how it will turn out. There will be new observations, from the Planck satellite and possibly also the Herschel, and a possibility of testing. Some possibility. I am not certain how much. In any case it's become fascinating to watch.
 

FAQ: Modelling density as a function of time?

How do you determine the density of a substance over time?

In order to determine the density of a substance over time, you will need to collect data at different time intervals and measure the mass and volume of the substance. Then, using the formula density = mass/volume, you can calculate the density at each time point and plot it on a graph to see how it changes over time.
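A minimal sketch of that procedure, with hypothetical measurements (the numbers below are made up purely for illustration):

```python
# Hypothetical (time_s, mass_kg, volume_m3) measurements of a substance
samples = [(0.0, 1.000, 1.00e-3),
           (10.0, 1.000, 1.02e-3),
           (20.0, 1.000, 1.05e-3)]

# density = mass / volume at each time point
densities = [(t, m / v) for t, m, v in samples]
for t, rho in densities:
    print(f"t = {t:5.1f} s   density = {rho:7.1f} kg/m^3")
```

Plotting the (t, density) pairs then shows the trend over time, here a slow decrease as the measured volume grows.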

Can you model density as a function of time for any type of substance?

Yes, density can be modeled as a function of time for any type of substance, as long as you have accurate measurements of its mass and volume at different time points. However, the behavior of density over time may vary depending on the physical and chemical properties of the substance.

What factors can affect the density of a substance over time?

The density of a substance can be affected by various factors, such as changes in temperature, pressure, or composition. For example, as a substance undergoes a chemical reaction, its density may change over time due to the formation of new molecules or the release of gases.

How can modeling density as a function of time be useful in scientific research?

Modeling density as a function of time can be useful in scientific research as it allows us to track and understand changes in a substance's physical and chemical properties over time. This can help in studying reaction kinetics, identifying phase transitions, and predicting the behavior of substances under different conditions.

Are there any limitations to modeling density as a function of time?

One limitation of modeling density as a function of time is that it assumes a constant volume. In reality, the volume of a substance can change over time due to various factors, which can affect the accuracy of the model. Additionally, the model may not be applicable to substances that exhibit non-linear behavior or undergo rapid changes in density.
