Effort to get us all on the same page (balloon analogy)

  • Thread starter marcus
  • Tags: Analogy
In summary, the balloon analogy teaches us that stationary points exist in space, distances between them increase at a regular percentage rate, and points in our 3D reality are at rest wrt the CMB.
  • #281


marcus said:
@Ray,
...The Hubble expansion rate has decreased sharply in the past which is why we can see such a lot of stuff that we know is receding faster than light...

marcus. I get it! I've read the entire posting twice and the concept has become clear. Thank you.

The question I'm currently grappling with is, "What is the balloon? What is space?"

I get that spacetime is stretchy, bendable, compressible. In regions of higher mass time moves slower, so conversely in the regions between galaxies time moves faster. So what is space? Is the space inside an atom the same as the rarefied space between galaxy clusters?

I've seen it theorized that on a scale many times smaller than subatomic particles, space is granular, like twisted-up dimensions tied into a knot and arranged on a grid. Is the VACUUM nothingness of space really something?
 
  • #282


Marcus -
EXCELLENT explanation of expansion. It does, however, leave me with the same pertinent questions raised by RayYates:

Unless the dispersion of the elemental particles that physicists deem to comprise the observable cosmos is a local event, then the entire cosmos, itself, is expanding.

Space 'exists' - if it did not we would all be set ablaze by Sol. Just because our technology seems unable to determine its composition doesn't mean it doesn't exist - and an infinite expanse of space sans fundamental particles would require no less justification than an infinite expanse of fundamental particles sans space.

If the cosmos (everything that exists everywhere) is expanding then either existing space must be increasing in volume (which would lead us to believe it is decreasing in density) or new space is being manufactured (conjured into existence, if you will).

How would you address this issue?
 
  • #283


Isn't another way the balloon analogy helps to demonstrate how spacetime is different from three-dimensional space? Most people picture the big bang as an expanding balloon, except with all kinds of stuff spread within it. If that were the case, then our universe would be unbounded - there would always be empty space beyond any position specified which could be filled later. But if someone can get his mind around the idea that three-dimensional space is represented by the 2D surface of the balloon, it makes sense that moving or shining a light in any direction will still be within the limits of the spacetime of our universe.

Question: Although at this point I assume that a beam of light sent out could never theoretically make the circuit and return, because of the extreme expansion of the universe, weren't there times when a beam of photons could have circled around the smaller spacetime at that point and returned to (non-human) sender?
 
  • #284


Farahday said:
...If the cosmos (everything that exists everywhere) is expanding then either existing space must be increasing in volume (which would lead us to believe it is decreasing in density) or new space is being manufactured (conjured into existence, if you will)...

You put your finger on the point exactly.
 
  • #285


Farahday said:
...
If the cosmos (everything that exists everywhere) is expanding then either existing space must be increasing in volume (which would lead us to believe it is decreasing in density) or new space is being manufactured (conjured into existence, if you will).

How would you address this issue?
The standard view is that density is decreasing. The "steady state" idea of constant density went out of style by around 1960-1970, anyway a long time ago.

Cosmologists pretty much all accept 1915 Gen Rel as the currently best most reliable equation for how geometry/gravity evolves and is influenced by matter. Virtually all research is based on the 1915 Gen Rel equation.

According to that picture there are distances between real physical stuff, events etc. The network of distances (angles areas etc) is geometry. But distances are not made of anything, they are RELATIONS, not material substance.

In 1915 Einstein put it concisely: Dadurch verlieren Zeit und Raum den letzten Rest von physikalischer Realität. (Thereby time and space lose the last vestige of physical reality.)

The geometric relations among things are not a physical substance. "Space" is a word which does not refer to a material. It refers to a bunch of geometric relationships.

So it's misleading to talk about it being "manufactured".

And of course density declines as physical stuff gets farther apart.

CCWilson said:
... But if someone can get his mind around the idea that three dimensional space is represented by the 2D surface of the balloon, it makes sense that moving or shining a light in any direction will still be within the limits of the spacetime of our universe.

Question: Although at this point I assume that a beam of light sent out could never theoretically make the circuit and return, because of the extreme expanision of the universe, weren't there times when a beam of photons could have circled around the smaller spacetime at that point and returned to (non-human) sender?

You got it! Shining a light in any direction within our 3D world is like the 2D creatures shining a light in any direction along the curved 2D (infinitely thin) world they live in. It stays within the defined limits.

You asked a good question. Assuming the finite volume sphere-like model, could a light beam ever have gotten around, made the full circuit? The standard answer is NO. At least after the first fraction of a second :biggrin: I'm not sure about the first few instants. There are different scenarios. But apart from some very early business I can only speculate about, expansion has ALWAYS been too rapid for that to have happened. If something had made expansion pause long enough, sometime in the past, it could have happened. But it didn't. Or if expansion had been much slower than we think it was. But the standard reconstruction of expansion history implies that it was always outpacing the ability of a flash of light to make the full circuit.

It has been calculated what the maximum distance some photons, a flash of light, could now be from the sender, if the flash is sent at start of expansion or as close to then as you like.
So the flash has been traveling for the whole 13.7 billion year history of expansion. (Today some cosmo models go back before start of expansion into a contraction phase, but we aren't including that, just the usual 13.7 billion year expansion age.)

That maximum distance is called the PARTICLE HORIZON and it is calculated to be about 46 billion lightyears. The farthest a flash of light can have gotten (with the help of expansion) in the whole 13.7 billion year history is only 46 billion lightyears. We're fairly sure now that the circumference of the entire U, if it isn't actually infinite, is considerably bigger than that, by over a factor of 10.

There was something in a NASA report from the WMAP mission about this, by a bunch of authors: Komatsu et al. I can get the link if you want. Maybe somebody else has something more recent, I'm not entirely sure and would be happy to be corrected if there's some better information.
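For anyone who wants to check the 46-billion-lightyear figure, here is a rough stdlib-only Python sketch (my own, not from the Komatsu et al. report; it ignores radiation, which trims the true answer slightly, so it lands a little high):

```python
import math

# Matter fraction, Lambda fraction, and Hubble rate used elsewhere in this thread.
OMEGA_M, OMEGA_L, H0 = 0.272, 0.728, 70.4
C_KM_S = 299792.458
HUBBLE_RADIUS_GLY = C_KM_S / H0 * 3.2616e6 / 1e9   # c/H0, about 13.9 Gly

def E(z):
    """Dimensionless Hubble rate H(z)/H0, matter + Lambda only (no radiation)."""
    return math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

# Comoving particle horizon = (c/H0) * integral_0^infinity dz / E(z),
# evaluated with the trapezoid rule on a log-spaced grid of redshifts.
zs = [10 ** (-3 + 11 * i / 20000) for i in range(20001)]  # z from 1e-3 to 1e8
integral = zs[0]                  # the sliver from z=0 to 1e-3, where 1/E ~ 1
for a, b in zip(zs, zs[1:]):
    integral += 0.5 * (1 / E(a) + 1 / E(b)) * (b - a)

horizon_gly = HUBBLE_RADIUS_GLY * integral
print(round(horizon_gly, 1))      # ~47.8 Gly with these ingredients
```

Including radiation in E(z) pulls this down to the ~46 Gly value quoted in the WMAP papers.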
 
  • #286


I'm sure this is a silly question, Marcus, but if indeed shortly after the big bang photons and particles could return to sender, because initial expansion hadn't been fast enough to disallow it, is it possible that this would have caused some sort of chain reaction, and could that have been the cause of the hyperinflationary period that is supposed to have occurred?
 
  • #287
To recap the main content of this thread, the balloon analogy is to help understand and imagine the GEOMETRY of expansion. Not the physics. It helps to watch the animation carefully
http://www.astro.ucla.edu/~wright/Balloon2.html
notice the galaxies stay in the same place while getting farther apart.
The photons always move at the same speed, say a centimeter per second depending on the size of the image on your computer screen.
In the analogy, all existence is concentrated on the infinitely thin 2D sphere.
The distance between two galaxies can be increasing faster than light (faster than one cm per second) and yet a photon may be able to get from one to the other. You can see this kind of thing happen even before you understand conceptually how it happens. And the analogous thing happens in the real 3D universe. (The distances to most of the galaxies which we observe today are increasing >c and were increasing >c when they emitted the light which we are now getting from them.) Watch the animation to get an intuitive feel for these things, which may at first seem paradoxical.
========

Since the balloon thread is about the geometry of Hubble law expansion I will say a bit about Hubble law. If you are a beginner you should experiment with one of the online cosmology calculators, like this one:
http://www.einsteins-theory-of-relativity-4engineers.com/cosmocalc_2010.htm
Put various redshifts in and study the results. Redshift z=1 for nearby galaxies, z=9 for the most distant galaxies confirmed so far, 1090 or thereabouts for the ancient light (the CMB or background).
Put sample redshifts in and find the corresponding distances and the rates those distances are increasing. It will also tell what the Hubble rate was in the past, back when the light we are now seeing was emitted.

The most recent official figure for the current Hubble rate itself is 70.4 km/s per Mpc. You are encouraged to learn to calculate with it. For example, as exercise paste this into the google search window (which doubles as a calculator) and press return:

70.4 km/s per Mpc in percent per million years

Google calculator knows how to express a rate of change as a percent per million years.

You will get 1/139 of one percent per million years.
It will actually say "0.00719973364 percent per million years"
but 0.0072 is very nearly the same as 1/139

That is the percentage rate that distances (beyond the immediate neighborhood of our group of galaxies) are currently growing. According to standard cosmo picture the rate is destined to continue declining approaching about 1/160 in the limit.

1/139 percent per million years is the same as
a millionth of a percent in 139 years.
So if you want to picture how rapidly distances in our universe are expanding think of a distance, and think of waiting 139 years, and then finding that it has increased by a millionth of a percent.
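If you want to check this conversion without the google calculator, here is a short Python sketch (using standard values for the kilometres in a megaparsec and the seconds in a year):

```python
# Convert the Hubble rate from km/s per Mpc to percent per million years,
# reproducing the "1/139 of one percent" figure quoted above.
KM_PER_MPC = 3.0857e19        # kilometres in one megaparsec
SEC_PER_MYR = 1e6 * 3.1557e7  # seconds in one million (Julian) years

H0 = 70.4                     # km/s per Mpc, the value used in this thread
H0_per_sec = H0 / KM_PER_MPC  # fractional growth rate per second
H0_pct_per_Myr = H0_per_sec * SEC_PER_MYR * 100

print(round(H0_pct_per_Myr, 6))      # 0.0072 percent per million years
print(round(1 / H0_pct_per_Myr, 1))  # 138.9 -- the "1/139" figure
```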
===================

The so-called HUBBLE RADIUS is the distance which, today, is increasing at exactly the speed of light. It is 13.9 billion lightyears. It's just the same number as 1/139 but flipped, with the decimal point moved.
Saying that according to the standard model 1/139 will go down to around 1/160 in the long run is the same as saying that the Hubble radius, which is now 13.9, will increase to 16 billion lightyears in the long term. But these are glacially slow changes, really unimaginably slow. So we think of the Hubble rate as constant, for the time being. (It has been much larger in the past, though.)
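The "flip" is easy to verify; a minimal Python sketch:

```python
# Hubble radius = c / H0: flipping 70.4 km/s per Mpc gives ~13.9 billion ly.
C_KM_S = 299792.458    # speed of light in km/s
LY_PER_MPC = 3.2616e6  # lightyears in one megaparsec

H0 = 70.4                         # km/s per Mpc
hubble_radius_mpc = C_KM_S / H0   # the distance growing at exactly c
hubble_radius_gly = hubble_radius_mpc * LY_PER_MPC / 1e9

print(round(hubble_radius_gly, 1))  # 13.9 (billion lightyears)
```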
===================
There are a lot of questions to ask about the Hubble rate. Notice that it is a fractional rate of distance increase, not an absolute rate. How can the fact that it has been and will be declining be compatible with the talk one hears about "acceleration"?

Well, fractional or percent rate is not the same as absolute rate. If you take a given distance and plot the CURVE OF WHAT IT WILL BE IN THE FUTURE you will find the curve has increasing slope. That's true even though at every future time the PERCENTAGE INCREASE will be getting less and less. Percentage increase is not the same as slope. One can be steadily decreasing while the other increases. Your bank savings account can grow by an increasing absolute dollar amount each year even though the bank is gradually reducing the percentage interest they give you, because the PRINCIPAL is larger each year. Just how it is, no contradiction.
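The savings-account point can be made concrete with a toy loop (the balance and interest figures here are invented purely for illustration):

```python
# A declining percentage rate is compatible with growing absolute gains:
# the balance (the principal) grows faster than the bank trims the rate.
balance = 1000.0
rate = 0.05              # 5% interest, trimmed a little each year
gains = []
for year in range(10):
    gain = balance * rate
    gains.append(gain)   # absolute dollar gain this year
    balance += gain
    rate *= 0.99         # the percentage rate steadily declines

# every year's absolute gain still exceeds the previous year's
print(all(later > earlier for earlier, later in zip(gains, gains[1:])))  # True
```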
===================

There are still lots of questions about Hubble expansion law. It involves distances now and the rate they are increasing now. Likewise at some earlier moment in time, or later. How is that "now" instant defined? How are distances defined? (Imagine stopping the expansion process everywhere at once, at a definite instant to make it possible to measure distances without them changing while you measure them. How is "everywhere at once" defined?) We use the idea of observers at CMB rest. There's an FAQ about that in the FAQ section.
====================

Still lots of questions that can and should be asked. But I hope that some readers will try using the google calculator, watch the balloon animation observantly, and experiment with one of the online redshift calculators. Jorrie's for instance:
http://www.einsteins-theory-of-relativity-4engineers.com/cosmocalc_2010.htm
 
Last edited:
  • #288
CCWilson said:
I'm sure this is a silly question, Marcus, but if indeed shortly after the big bang photons and particles could return to sender, because initial expansion hadn't been fast enough to disallow it, is it possible that this would have caused some sort of chain reaction, and could that have been the cause of the hyperinflationary period that is supposed to have occurred?

I'm not sure it is a silly question. I'm not an expert on the very brief inflation era that is widely supposed to have occurred. It's still speculative and there are a variety of scenarios.
One type of model that is gaining attention involves a BOUNCE. In the main model of this type you get a brief period of faster-than-exponential growth of distances. Normally what is called "inflation" by cosmologists is exponential and slightly slower growth.

a(t) = e^(Ht), with H either steady or slowly declining.

In stark contrast to this, in so-called Loop cosmology you get this but with H increasing very rapidly to extremely high (Planck scale) values, and because this is faster than the usual exponential growth called inflation it is called "superinflation" by Loop cosmology researchers. I don't remember hearing the term "hyperinflation" in cosmology.

For me it is completely speculative what conditions could have been like and what could have been happening at such extremely high densities. In Loop cosmology, according to their equation model, gravity becomes repulsive at near Planck density, which is what causes the bounce. It is a quantum gravity effect. Quantum nature doesn't like to be pinned down too tightly, so it resists extremely dense compression.

One option is not to try to understand the very beginning of expansion but only start thinking about it a few blinks after it started. Wait until there is more agreement among the real experts before trying to understand. Sorry to be so unhelpful.

BTW CCWilson, I'm curious to know. Have you done any of these things?
Watched the wright balloon animation
http://www.astro.ucla.edu/~wright/Balloon2.html
Experimented with the cosmology calculator
http://www.einsteins-theory-of-relativity-4engineers.com/cosmocalc_2010.htm
Read the LineweaverDavis SciAm article
http://www.mso.anu.edu.au/~charley/papers/LineweaverDavisSciAm.pdf
I'd be curious to know any impressions, what you may have learned etc.
 
Last edited:
  • #289


Marcus, I had previously done the calculator and animation.

The problem with the cosmology calculator is that the terms - the omegas and the Hubble constant and even some of the calculated results - were unfamiliar or unclear to me, so it wasn't that helpful. Would be much improved for laypeople with instructions and definitions.

The balloon animation was kind of interesting but not compelling. I'm not sure it added to my understanding, but I already had a fair grasp of it, having ruminated on the balloon analogy before, which was tremendously helpful; I'd have been lost in trying to understand universe expansion without it. One problem for me with the animation's redshift changes was that we are not outside the sphere watching the whole thing; our two-dimensional selves are presumably somewhere on the balloon skin, in which case the redshift for us should not be the same for all the galaxies and other sources of photons, right?

I just now went through the "Misconceptions about the Big Bang" and thought it was great. I learned some new things. One is that the redshift is not Doppler but related to expansion of space, which sort of makes sense. Question: When Hubble noticed the redshift, from which he deduced that the universe was expanding, was he aware that it was not, strictly speaking, the Doppler effect? In fact, was he aware at first that it indicated expansion of spacetime, or did he think it meant that galaxies were moving away from each other within a three-dimensional universe?

Also, it made clearer the concept of how some galaxies can be moving away from us faster than the speed of light. The idea that we can see some of those galaxies moving away from us faster than the speed of light, I'm still working on that one.

These ideas are a lot for anybody to get their mind around, especially laymen. But the more you read such articles and think about them, the more sense they make. Are there people who can actually think in four dimensions, or do even you smarties have to rely on balloon analogies and such, and are resigned to doing the mathematical calculations without having a clear visualization in your minds?
 
  • #290


Thanks for your reply!
CCWilson said:
Marcus, I had previously done the calculator and animation.

The problem with the cosmology calculator is that the terms - the omegas and the Hubble constant and even some of the calculated results - were unfamiliar or unclear to me, so it wasn't that helpful. Would be much improved for laypeople with instructions and definitions. ... I just now went through the "Misconceptions about the Big Bang" and thought it was great... Also, it made clearer the concept of how some galaxies can be moving away from us faster than the speed of light. The idea that we can see some of those galaxies moving away from us faster than the speed of light, I'm still working on that one...

I agree that Jorrie's calculator could be much improved, by what you say and also by editing. There's too much there. You have to train yourself to look just at what you want to know.

There's a much simpler calculator, by a university prof in Iowa, that only gives the very basic stuff, so less confusing. Also a lot fewer decimal places--she gives distances in billions of LY instead of millions, and rounds off to just a few digits. It's more user friendly.
Google "cosmos calculator".

When you go there you first have to type in three numbers: the present matter fraction (.27) the cosmological constant equivalent (.73) and the current Hubble parameter (70.4). It makes you aware how important those three numbers are---the results depend on them!
The age of the universe now and when the light was emitted, the distances, the recession speeds all depend on the values of these parameters.

Jorrie saves one the trouble of typing in those three numbers. But maybe Prof. Morgan's calculator is better pedagogically because it makes you type them in---and then gives you only the simplest basic output, in numbers of only 3 digits or so. Here's the link in case you or anyone might be interested:
http://www.uni.edu/morgans/ajjar/Cosmology/cosmos.html

Oh, Prof Morgan does give a couple of things to ignore: "distance modulus" and "luminosity distance". A general purpose tool always tends to have features you need mentally to filter out, nothing is perfect :biggrin:.

And Prof. Morgan has a short paragraph of explanations and directions right there under the calculator. Pedagogically I don't see how it could be better. It's just inconvenient when you are in a hurry because you have to type in .27, .73, 70.4 before you can calculate.
 
Last edited:
  • #291


CCWilson said:
...
I just now went through the "Misconceptions about the Big Bang" and thought it was great...

Also, it made clearer the concept of how some galaxies can be moving away from us faster than the speed of light. The idea that we can see some of those galaxies moving away from us faster than the speed of light, I'm still working on that one...

That's not too hard to understand when you appreciate how much the Hubble rate has declined over the years. And it is still declining though not as rapidly as it did in earlier times.
Prof. Morgan's calculator gives the Hubble rate at times in the past, so you can see this.

That means that for any given size of distance, like 1 megaparsec (3.26 million LY), the rate at which distances of that size increase has gotten less and less over time.

Suppose a galaxy is at some distance that is increasing at rate 2c and it emits a photon in our direction. The photon is trying to get to us but it keeps receding at rate c (= 2c - c). However the galaxy is receding at rate 2c, so after a while the photon is a lot closer to us than the galaxy!
And by that time that might be close enough, because the recession rates of various size distances keep declining. At any given range it keeps getting easier.
===============

The way to make that argument mathematically clear is to define a distance threshold called Hubble radius which at any given time t is the size of distances that are increasing exactly at rate c. If a photon gets within that range it will begin to make progress, because distances less than that range increase slower than c.

But the Hubble radius is c/H(t) ---just by simple algebra from the Hubble law v = H(t) D.
Set the distance growth rate v equal to c and solve for D, in the equation c = H(t) D.

Since H(t) has been decreasing throughout history, the threshold range c/H(t) has been INCREASING. It has been REACHING OUT to photons trying to get to us.

So we are seeing light from a lot of galaxies which themselves have been receding faster than c all during the time the light has been traveling on its way to us.
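Here is a toy numerical version of that argument (my own illustration, using the simplest matter-dominated model H(t) = 2/(3t) in units where c = 1, not the full standard expansion history):

```python
# A photon is emitted toward us from a galaxy receding at 2c.  Its proper
# distance D obeys dD/dt = H(t) * D - c: carried away by expansion at H*D,
# moving toward us at c.  With H declining, the Hubble radius c/H grows
# and eventually "reaches out" to the photon, which then arrives (D -> 0).
def photon_distance_history(t_emit=1.0, dt=1e-4, t_max=20.0):
    H = lambda t: 2.0 / (3.0 * t)  # matter-dominated toy model
    D = 2.0 / H(t_emit)            # start at twice the Hubble radius: recession 2c
    t, D_max = t_emit, D
    while D > 0 and t < t_max:
        D += (H(t) * D - 1.0) * dt  # forward-Euler step, with c = 1
        t += dt
        D_max = max(D_max, D)
    return t, D_max

t_arrive, D_max = photon_distance_history()
print(t_arrive)     # close to 8.0: the exact arrival time in this model is t = 8
print(D_max > 3.0)  # True: the photon first lost ground before turning around
```

In this toy model the Hubble radius c/H = 3ct/2 grows at 3c/2, faster than light, which is why it overtakes the struggling photon; in the real universe the growth is slower, but the same qualitative story holds for light from many of the galaxies we see.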
 
Last edited:
  • #292


I'm thinking a tabulation like this might be useful.
Code:
 standard model using 0.272, 0.728, and 70.4 (the % is per million years)
time(Gyr)    z      H-then   H-then(%)
   0      0.000       70.4   1/139 
   1      0.076       72.7   1/134
   2      0.161       75.6   1/129
   3      0.256       79.2   1/123
   4      0.365       83.9   1/117
   5      0.492       89.9   1/109
   6      0.642       97.9   1/100
   7      0.824      108.6   1/90  
   8      1.054      123.7   1/79
   9      1.355      145.7   1/67
  10      1.778      180.4   1/54
  11      2.436      241.5   1/40
  12      3.659      374.3   1/26
  13      7.190      863.7   1/11
You know that the present Hubble rate is put at 70.4 km/s per Mpc which means distances between stationary observers increase 1/139 percent per million years. And the Hubble radius (a kind of threshold of safety within which distances are expanding slower than c) is currently 13.9 billion LY.
So by analogy you can see how the Hubble rate has been greater in the past, and has been declining, while the Hubble radius (reciprocally) has been increasing. That means reaching out farther to struggling photons and welcoming them inclusively into safe water where the current is not so strong.

So you can see that 4 billion years ago distances were increasing 1/117 percent per million years, and the Hubble radius (safe-harbor threshold) was 11.7 billion LY.
In the intervening time, in other words, the Hubble radius has extended farther out from 11.7 billion LY to 13.9 billion LY.

That extension of the threshold simply reflects the fact that the Hubble expansion rate has declined from 1/117 percent to 1/139 percent per million years.

(It is not like the physical expansion of a distance between two stationary observers, the kind of expansion described and governed by Hubble law. It's more like gradual revision of a criterion of admittance: a convenient scale we calculate from other parameters in the system.)
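The H-then(%) column can be reproduced from the H-then column with the same km/s-per-Mpc conversion used earlier in the thread; a quick Python check:

```python
# Each Hubble rate in km/s per Mpc converts to "1/x of a percent per Myr".
KM_PER_MPC = 3.0857e19        # kilometres in one megaparsec
SEC_PER_MYR = 1e6 * 3.1557e7  # seconds in one million years

def one_over_pct_per_myr(h_kms_per_mpc):
    pct = h_kms_per_mpc / KM_PER_MPC * SEC_PER_MYR * 100
    return round(1 / pct)

for h in (70.4, 83.9, 145.7, 863.7):
    print(h, "-> 1/%d" % one_over_pct_per_myr(h))
# 70.4 -> 1/139,  83.9 -> 1/117,  145.7 -> 1/67,  863.7 -> 1/11
```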
 
Last edited:
  • #293


I think I get the Hubble radius concept. As long as space between observer and galaxy isn't expanding faster than the speed of light, a photon will succeed in reaching the observer. Exactly at the Hubble radius, that photon would remain suspended at the exact same distance (maybe that's not quite the right word) from the observer, but if the Hubble radius increases, would slowly start making headway. Is that correct?
 
  • #294


CCWilson said:
I think I get the Hubble radius concept. As long as space between observer and galaxy isn't expanding faster than the speed of light, a photon will succeed in reaching the observer. Exactly at the Hubble radius, that photon would remain suspended at the exact same distance from the observer, but if the Hubble radius increases, would slowly start making headway.

Yes! You get it exactly and express it clearly in just a few words. I will use that visual way of putting it next time I need to explain this, if you don't beat me to it. :biggrin:

It will hang there right on the edge, and then (when the radius increases) it will slowly start to make headway.
 
  • #295


marcus said:
The standard view is that density is decreasing. The "steady state" idea of constant density went out of style by around 1960-1970, anyway a long time ago.

Cosmologists pretty much all accept 1915 Gen Rel as the currently best most reliable equation for how geometry/gravity evolves and is influenced by matter. Virtually all research is based on the 1915 Gen Rel equation.

According to that picture there are distances between real physical stuff, events etc. The network of distances (angles areas etc) is geometry. But distances are not made of anything, they are RELATIONS, not material substance.

In 1915 Einstein put it concisely: Dadurch verlieren Zeit und Raum den letzten Rest von physikalischer Realität. (Thereby time and space lose the last vestige of physical reality.)

The geometric relations among things are not a physical substance. "Space" is a word which does not refer to a material. It refers to a bunch of geometric relationships.

So it's misleading to talk about it being "manufactured".

And of course density declines as physical stuff gets farther apart.
Of course if Einstein was wrong about space not being a substance (ethereal vs material), then GR is flawed - and so are any other theories using GR as a basis.

It would be like a ballistics engineer who dismisses air as 'nothing'. His calculations over short distances would be accurate within "tolerable limits", but the farther the distance, the less accurate his figures would become. He, too, might chalk the errors up to some speculative unknown factor(s) affecting the empirical circumstances.
 
Last edited:
  • #296


marcus said:
...According to that picture there are distances between real physical stuff, events etc. The network of distances (angles areas etc) is geometry. But distances are not made of anything, they are RELATIONS, not material substance.

In 1915 Einstein put it concisely: Dadurch verlieren Zeit und Raum den letzten Rest von physikalischer Realität. (Thereby time and space lose the last vestige of physical reality.)

The geometric relations among things are not a physical substance. "Space" is a word which does not refer to a material. It refers to a bunch of geometric relationships.

So it's misleading to talk about it being "manufactured".

And of course density declines as physical stuff gets farther apart...

If space is geometry, where does dark energy come from? I had the impression it came from space itself - or is that just speculation?
 
  • #297


It's not clear that the cosmological constant is an "energy" in any reasonable sense of the word. The evidence so far is of a nonzero curvature constant that occurs naturally in the Einstein 1915 GR equation.

As you suggest might be the case, a lot of the talk one heard following the discovery of acceleration was "speculative".

My impression is that the speculative hubbub has died down quite a bit in the past 3 years or so. There is now a greater tendency to simply consider that the 1915 law of gravity has TWO constants, Newton's G and Einstein's Lambda.

Less tendency now to speculate about some mysterious "energy". Ten years ago people were excitedly talking about "phantom" energy and "quintessence" and "Big Rip", not just in the popular media but in the scientific literature.

More data came in in the past 10 years and it was all consistent with the simple idea that Einstein 1915 law of gravity (GR) has two constants: G and Λ. Nature was behaving consistently as if there was this constant Λ which was just a constant and not doing anything funny.

I think to some extent it is simply a matter of taste and personal inclination. If you LIKE to think of space as being filled with some kind of mysterious "energy field" that causes the curvature constant Λ, then that's cool. You should believe what you want. It has not been proved or disproved.

Many people are skeptical of that, however. Since there is no scientific reason to believe it so far, until there is evidence the simplest thing seems to be to stay with the original GR idea. Two constants appear naturally in the theory and for many years most of us thought one of them was zero, but it turned out not to be.

Here's a presentation of the skeptical viewpoint:
http://arxiv.org/abs/1002.3966/
Why all these prejudices against a constant?
 
  • #298


CCWilson said:
I think I get the Hubble radius concept. As long as space between observer and galaxy isn't expanding faster than the speed of light, a photon will succeed in reaching the observer. Exactly at the Hubble radius, that photon would remain suspended at the exact same distance (maybe that's not quite the right word) from the observer, but if the Hubble radius increases, would slowly start making headway. Is that correct?

This is good. I want to take this a step farther and focus on the LIMITING value of the Hubble radius. It's slated to continue increasing and ultimately approach an upper bound of about 16 billion lightyears.

Reciprocally, the Hubble (fractional growth) rate is slated to decline from 1/139 to about 1/160 if you express it as a percent per million years.

What this means is that there is a COSMIC EVENT HORIZON out there at around 16 billion LY in the sense that news can never reach us from an event that occurs today in a galaxy at that distance. The photons trying to get to us, from that event, are forever beyond the reach of our Hubble radius because it can never extend more than 16 Gly.

I would like to call that limiting Hubble rate by the name H∞. We know that H(t) declines with time and the standard notation for the PRESENT value is H0. So it seems natural to denote the eventual asymptotic value in the far future by H∞.

Then we have a nice easy-to-remember equation relating Einstein's cosmological constant Λ to that H∞. Assuming a universe that is spatially flat, or nearly so, as by all accounts it seems to be, we have:

Λc²/3 = H∞²

Morally speaking, the c² and the 1/3 factor are just accidental features of how Einstein originally defined Λ and put it in his GR equation. Morally, I would say, the cosmological constant simply is H∞².

Remember that H∞ is a number per unit time (a fractional growth rate). And Einstein defined the constant Λ to be a number per unit area (a type of curvature unit).
So multiplying Λ by c² changes number per (length)² into number per (time)², which is (number per unit time)², in other words the SQUARE of a fractional growth rate.

So the units match.
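To make the unit bookkeeping concrete, here is a minimal Python sketch that turns the limiting Hubble rate (called H∞ here) into a value of Λ via Λ = 3H∞²/c². The inputs (70.4 km/s per Mpc and 0.728 for the dark-energy fraction) are the estimates used later in this thread; the constant names are my own.

```python
import math

# Lambda = 3 * H_inf^2 / c^2, the flat-space relation quoted above.
C = 299792458.0      # speed of light, m/s
MPC = 3.0857e22      # one megaparsec in meters

H0 = 70.4 * 1000.0 / MPC        # present Hubble rate, in 1/s
H_inf = math.sqrt(0.728) * H0   # limiting rate: sqrt(Omega_Lambda) * H0

Lam = 3.0 * H_inf ** 2 / C ** 2  # cosmological constant, in 1/m^2

print(Lam)  # a reciprocal area, roughly 1.3e-52 per square meter
```

Note the units: dividing a squared rate (1/s²) by c² (m²/s²) leaves 1/m², the reciprocal area the post describes.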

How best to write H∞? It's a bit longwinded to say 1/160 of one percent per million years. How about 1/16 ppb per year?

I want a format that communicates well to newcomers to the forum. The present H₀ is about 1/14 ppb per year.
"Distances between stationary observers are increasing by about 1/14 parts per billion per year."
Actually closer to 1/13.9, but 1/14 is close enough.
I like this format because it has the Hubble time (13.9 billion years) and Hubble radius (13.9 billion lightyears) built into it.

So then this important H∞ (an important constant of nature, which is morally a form of the cosmological constant) becomes
H∞ = 1/16 ppb per yr = 1/16 of a part per billion per year.

It is the lowest the fractional growth rate of distance is ever destined to get, according to today's best understanding.

See what you think. Would you prefer scientific notation, with a 10⁻⁹ or 10⁻¹⁰?
Can you figure out a better way to say it that can communicate to newcomers?
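For anyone who wants to check the arithmetic behind these formats, here is a small Python sketch (my own function and variable names, the thread's numbers) converting a rate in km/s per Mpc into ppb per year and into a Hubble radius in Gly:

```python
# Translate a Hubble rate in km/s per Mpc into (a) parts per billion
# per year and (b) the matching Hubble radius c/H in billions of
# lightyears. 70.4 km/s/Mpc is the thread's working estimate.
MPC = 3.0857e22      # meters per megaparsec
YEAR = 3.156e7       # seconds per year
LY = 9.4607e15       # meters per lightyear
C = 299792458.0      # speed of light, m/s

def hubble_formats(h_kms_mpc):
    h_per_sec = h_kms_mpc * 1000.0 / MPC        # fractional growth per second
    ppb_per_year = h_per_sec * YEAR * 1e9       # parts per billion per year
    radius_gly = C / h_per_sec / LY / 1e9       # Hubble radius c/H, in Gly
    return ppb_per_year, radius_gly

ppb, r = hubble_formats(70.4)
print(round(ppb, 3), round(r, 1))   # about 0.072 ppb/yr and 13.9 Gly
```

Flipping 0.072 ppb per year gives 1/13.9 ppb per year, and the same 13.9 reappears as the Hubble radius in Gly, which is the "two quantities for the price of one" point made below.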
 
Last edited:
  • #299


Marcus, it seems to me that 1/16 parts per billion per year is such a tiny percentage that using 1/16 rather than 0.0625 doesn't help anyone visualize it. So I'd prefer 0.0625 parts per billion per year, or whatever it really is. That seems more in keeping with scientific use. So you could say that the Hubble/cosmological "constant" - the percentage rate of increase of distance between two distant points - will decrease from the current 0.071 ppb per year to 0.0625 ppb per year, at which point it will truly remain constant.

Please keep in mind that I struggle to figure out these concepts and many years have passed since any math courses, so my preference may not be everybody else's.

Is the reason we believe that the Hubble constant will eventually slow down as calculated that we think the curvature of the universe is flat, with those calculations flowing from that assumption?
 
  • #300


You are probably right, esp about respecting conventional notation. I like the idea that the Hubble radius (that threshold of admission for photons trying to get to us) is so important and one can just flip the Hubble constant and get it

H₀ = 1/13.9 ppb per year ≈ 0.072 ppb per year
Hubble radius (now) = 13.9 billion lightyears.

You get to remember two quantities for the price of remembering one. But it does jar a little to write 1/13.9.

I will calculate the limiting value of the Cosmic Event Horizon and of the Hubble radius and try writing it in the style you suggest. Let's use the current estimates of 70.4 km/s per Mpc and 0.728 (for the dark energy fraction).

Put this into the google search window:
1/(sqrt(0.728)*70.4 km/s per Mpc)
16.279 billion years

So, without doing much round-off, what we get for the longterm Hubble radius is
16.279 billion lightyears.

And what we get for the longterm Hubble rate is 1/16.279 or, written as you suggest:
H∞ = 0.0614 ppb per year.

As I say, you are probably right. But for a while, at least, to see how it goes, I will keep trying to think of it as
Radius = 16 billion lightyears
H∞ = 1/16 ppb per year,
also let's keep the option of adding another digit of accuracy---e.g. say 16.3 and 1/16.3
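The same computation in plain Python, for anyone without the google calculator handy (a sketch using standard metric conversions; variable names are mine):

```python
import math

# The limiting Hubble time is 1 / (sqrt(0.728) * 70.4 km/s per Mpc),
# expressed in billions of years.
MPC = 3.0857e22      # meters per megaparsec
YEAR = 3.156e7       # seconds per year

h0 = 70.4 * 1000.0 / MPC            # present rate, 1/s
h_inf = math.sqrt(0.728) * h0       # limiting rate, 1/s

hubble_time_gyr = 1.0 / h_inf / YEAR / 1e9
print(round(hubble_time_gyr, 2))    # about 16.28
```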
 
Last edited:
  • #301


There is a form of the Friedmann equation (for spatially flat or nearly flat universe) which goes like this:

H² - H∞² = (8πG/3)ρm

At any given time H(t) is going to be bigger than its eventual value, by an amount such that their SQUARES differ by something proportional to the current matter density (radiation, dark and ordinary matter combined).

The constant (8πG/3) we can't do anything about, it stems from the original Einstein GR equation. Basically the equation says the denser the matter load, the more rapid expansion must be to maintain balance and keep things on the level. The more thinly matter is spread, on the other hand, the closer H can come down to its eventual longterm value.

If you solve that for the critical mass density that just balances the current expansion rate, the lefthand side comes out
.272*(70.4 km/s per Mpc)^2 (Put that in google window)
Then if you divide both sides by (8pi*G) you get
.272*(70.4 km/s per Mpc)^2/(8pi*G/3)
But that is expressed in kilograms per cubic meter and it is such a tiny mass density that it is hard to remember, so let's use the ENERGY DENSITY EQUIVALENT of the mass density and multiply by c².
Then the critical matter density comes out
.272*(70.4 km/s per Mpc)^2/(8pi*G/3)*c^2 (Paste that into the window.)

What you get is 0.2276 nanojoules per cubic meter, or as the google calculator likes to say it: 0.2276 nanopascals. Used as a measure of energy density, one pascal = one joule per m³.

So as not to overstate the precision, we could say 0.23 nanopascal for the critical matter density.
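Here is the same critical-density arithmetic as a short Python check (G and c are standard values; the 0.272 and 70.4 come from the post above, and the names are mine):

```python
import math

# rho = 0.272 * H0^2 / (8*pi*G/3), then multiply by c^2 to get an
# energy density in joules per cubic meter (numerically, pascals).
MPC = 3.0857e22          # meters per megaparsec
G = 6.674e-11            # Newton's constant, m^3 kg^-1 s^-2
C = 299792458.0          # speed of light, m/s

h0 = 70.4 * 1000.0 / MPC                           # 1/s
rho_kg = 0.272 * h0 ** 2 / (8 * math.pi * G / 3)   # kg per m^3
energy_density = rho_kg * C ** 2                   # J per m^3

print(round(energy_density * 1e9, 3))  # about 0.23 nanojoules per m^3
```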
 
Last edited:
  • #302


marcus said:
...So as not to overstate the precision, we could say 0.23 nanopascal for the critical matter density.

If I follow correctly then this is a calculation of the average matter density in the universe. Is this based on H₀, H∞ or something else?

If I may regress for a moment to an earlier post. I've read the link you provided http://arxiv.org/abs/1002.3966/ and think I follow. I also have come to understand that the energy of empty space (aka dark energy) does not come from space, but rather is the residual (background) energy of every cosmic event that has ever occurred... and so space is geometry (it's not being created) and the energy does not come from space, but rather empty space is not truly empty.

It's like the ripples on a flat calm pond. No matter how calm, on a small enough scale there are always ripples of energy passing over the surface, but the ripples are not "caused" by the pond.

Back to your calculation, if this is the "average" matter density (I assume the "maximum" matter/energy density occurred at the moment of the big bang), what would the minimum matter/energy density currently be?
 
  • #303


RayYates said:
If I follow correctly then this is a calculation of the average matter density in the universe. Is this based on H₀, H∞ or something else?

If I may regress for a moment to an earlier post. I've read the link you provided http://arxiv.org/abs/1002.3966/ and think I follow. ...

Hi Ray, I'm glad you read the "Why all these prejudices against a constant?" article!
I'll reply after a few minutes (coffee break :biggrin:) to your "Is [the calculated matter density] based on...?" question. But first, to quote a key passage:
===1002.3966===
The most general low-energy second order action for the gravitational field, invariant under the relevant symmetry (diffeomorphisms) is
S[g] = (1/16πG)∫(R[g] − 2λ)√g,
[they label this equation (5)]
which leads to (1). It depends on two constants, the Newton constant G and the cosmological constant λ, and there is no physical reason for discarding the second term.
From the point of view of classical general relativity, the presence of the cosmological term is natural and a vanishing value for λ would be more puzzling than a finite value: the theory naturally depends on two constants; the fact that some old textbooks only stress one (G) is only due to the fact that the effects of the second (λ) had not been observed yet.
==endquote==
I'll try to explicate this passage, which you just read. Saying it in different words may help make it more understandable.
Another way to say it is that it is wrong and misleading to use the words "dark energy". What we are looking at is simply a constant of nature, like Newton's G, which occurs naturally in the current Law of Gravity. The Law of Gravity or Einstein GR equation is what they refer to as equation (1) in the above quote.

It follows from equation (5) which is shown in the quote. I'm told that Roger Penrose has recently been pointing out that the fundamental meaning of ENERGY is "ability to do work" and that this constant λ is not able to do work. So why call it an "energy"?

In the Einstein GR equation and equation (5) it is a reciprocal AREA: "one over length squared", or equivalently the square of a reciprocal length. Since time and length are so closely related you could also think of it as the square of a reciprocal time.
The square of a number-per-unit-time quantity.

So one conclusion from the position presented in the article might be that rather than a fictitious "energy" the better way to think of the cosmo constant Lambda is as
the square of a fractional growth quantity, a number per unit time.
 
  • #304


You asked about where the calculated density comes from. I derived it in post #301.
The density is normally denoted by the Greek letter rho (ρ) and it is of ordinary matter+dark matter+radiant energy all combined in one. At present the contribution from radiation is negligible so we call it simply "matter density".

The basic equation of cosmology is called the Friedmann equation, and it is derived from Einstein GR equation after making some simplifying assumptions. Cosmo is a mathematical science so it is all about equations, not about VERBAL explanations. So to understand where the density comes from we have to look at the Friedmann equation.

Assuming spatial flatness, or near-flatness, this takes a rather simple form:

H² - H∞² = (8πG/3)ρ

Here rho is the density I was talking about. And H is a number-per-unit-time which is today's fractional growth rate of distance. And H∞ is another number-per-unit-time which is the fractional growth rate of distance in the far future, which the universe is heading towards. Its square is the same as the cosmo constant Lambda except for a factor of c²/3. So we can treat it as a look-alike or stand-in for λ.

Now H and H∞ are things that cosmologists infer from measurement. Both are small fractions of a percent growth per million years. They are, respectively, estimated to be 1/139 and 1/163 of one percent per million years.

So you could say that the critical density ρ needed for perfect flatness is based on both H and H∞.

But you could also say that all THREE quantities are based on the millions of datapoints of observation that have accumulated. Because estimates of all three are adjusted to FIT THE DATA. In a mathematical science you adjust the parameters of the model to fit the data. The Friedmann equation model is a simple version of Einstein GR which has been checked in many different situations (solar system, neutron stars, precision satellites with clocks or gyroscopes, galaxy counts, microwave background etc.)

So for the time being we trust the Friedmann equation and we adjust all the parameters together to get the best fit. Thanks for the interesting question!
 
Last edited:
  • #305


marcus said:
Hi Ray, I'm glad you read the "Why all these prejudices against a constant?"

I get that it's impractical to describe complex mathematical concepts without using math, but as a layman, most of this is over my head and I tend to focus on non-math sections and conclusions. I read pages 6 and 7 several times.

...But to claim that dark energy represents a profound mystery, is, in our opinion, nonsense. "Dark energy" is just a catch name for the observed acceleration of the universe, which is a phenomenon well described by currently accepted theories, and predicted by these theories, whose intensity is determined by a fundamental constant, now being measured. The measure of the acceleration only determines the value of a constant that was not previously measured. We have only discovered that a constant that so far (strangely) appeared to be vanishing, in fact is not vanishing. Our universe is full of mystery, but there is no mystery here.

To claim that "the greatest mystery of humanity today is the prospect that 75% of the universe is made up of a substance known as 'dark energy' about which we have almost no knowledge at all" is indefensible.

So here's what I've learned. λ has not been precisely calculated (yet). But it has been shown that λ ≠ 0 and that GR predicts the expansion's acceleration without any notion of "Dark Energy". That the QFT calculations for λ are 120 orders of magnitude greater than what is observed; an indication that everything in QFT is not yet understood.
 
  • #306


RayYates said:
... it has been shown that λ ≠ 0 and that GR predicts the expansion's acceleration without any notion of "Dark Energy"...

You got it! GR predicts accelerated growth of distances simply on the basis of the positive cosmological constant λ.

As you say, explaining it does not require any notion of "Dark Energy" :biggrin:

(It's not clear there is any connection with flat-space QFT and its funny prediction about "vacuum energy", off by a factor of 10¹²⁰, so we can simply omit mention of QFT.)

λ has not been precisely calculated (yet).

It has been rather precisely measured. That is what one does with a constant of nature (like the electron charge, or the Newton G, or λ). One measures it, often by fitting a curve to data showing how two quantities are related.
 
Last edited:
  • #307
One thing to note is that the cosmological constant in the EFE is the same thing as a constant negative pressure vacuum energy. If you solve the EFE for the stress energy tensor, you'll notice that the cosmological constant contributes to the total energy.

It still doesn't change the fact that QFT predicts this vacuum energy should be much, much larger than the observed value.
 
  • #308


But as it actually appears in the Einstein GR equation or "EFE" λ is not an energy and not a pressure. It is the reciprocal of length squared (a measure of curvature).
One can talk about the amount of energy you would need to create that curvature if it were not already there. As far as we know that is a fictitious energy.

My point is that thinking about λ in terms of pressure or energy is a bad idea. It involves dragging a curvature term on the lefthand side of the equation over onto the righthand side and multiplying it by stuff to turn it into a fictitious energy and a negative pressure.

This is explained more clearly and at greater length in the article we were just quoting. You might want to take a look at it!

http://arxiv.org/abs/1002.3966/
Why all these prejudices against a constant?
__

For the more math-inclined, Cosmology is a mathy science so explanation/understanding involves fitting data to equation-models. But the models are relatively simple! I'm sure lots of layfolk diffident about their math can nevertheless understand cosmology basics just fine!

The basic equation of cosmology is called the Friedmann equation, and is derived from Einstein GR equation after making simplifying assumptions.
Ever since very early times, matter has vastly outweighed radiation. We can handle other conditions but it makes a later equation slightly more complicated. For simplicity let's assume matter dominates and also that space is nearly flat. We can put the Friedmann equation in a particularly simple form:

H² - H∞² = (8πG/3)ρ

Here rho is the current density of matter (dark+ordinary) and the small contribution from radiation. We could write ρ(t) to show the time dependence. H is a number-per-unit-time which is the current fractional growth rate of distance. And H∞ is a constant number-per-unit-time which is the fractional growth rate of distance in the far future, which H(t) is heading towards. Its square is the same as the cosmo constant Lambda except for a factor of c²/3. So we can treat it as a look-alike or stand-in for λ.

Measuring H∞² is really the same as measuring λ. Whatever you get for the former, you just multiply by 3 and divide by c² and presto, that is your value of λ.

That's because by definition H∞² = λc²/3.
So measuring one is equivalent to measuring the other.

Let's take an imaginary example--forgetting realistic numbers. We can see that the equation is of the form x² - C = y, where C is some constant to be determined by plotting lots of (x,y) datapoints. You look back in the past and estimate what the density (y) was and also what the expansion rate (x) was at particular times in the past. You do that with lots of cases (in reality with supernovae in lots of galaxies). So you get a curve. If C, the constant, equals zero, then the curve is simply x² = y.
If C = 1, then the curve will look different: x² - 1 = y, so you can tell the difference and in this way decide what the constant C is.
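The imaginary example can be played out in a few lines of Python. This is a toy fit, not real supernova data: we fabricate points along y = x² - C with noise and recover C by least squares.

```python
import random

# Toy version of the fitting idea above: data follow y = x^2 - C and we
# recover the constant C from noisy points. Numbers purely illustrative.
random.seed(1)
C_TRUE = 1.0

xs = [0.5 + 0.1 * i for i in range(30)]
ys = [x * x - C_TRUE + random.gauss(0.0, 0.05) for x in xs]

# For the model y = x^2 - C, the least-squares estimate of C is just
# the mean of (x^2 - y) over the datapoints.
c_est = sum(x * x - y for x, y in zip(xs, ys)) / len(xs)

print(round(c_est, 2))  # close to 1.0
```

The real fits involve more parameters and much more careful statistics, but the principle is the same: the value of the constant is whatever makes the curve pass through the data.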
 
Last edited:
  • #309
Thanks for the link!

I still don't understand how the cosmological constant is a 'natural' part of the EFE. They show the modified Einstein-Hilbert action (Eq. 5) to prove their point, but the original Einstein-Hilbert action for the gravitational field doesn't contain a [itex] \Lambda [/itex]. You have to put it in by hand (That's just how I've understood it, correct me if I'm wrong).

The point I was making is that the cosmological constant is physically equivalent to a negative pressure vacuum energy. That's why the cosmological constant is measured in units of energy. However, QFT does predict - it necessitates - some kind of vacuum energy. Even if the number is vastly incorrect, the concept flows directly from quantum mechanics. So, if there was no vacuum energy, that would be extremely odd. I just assume there is a cutoff at which current particle physics no longer apply, where whatever higher energy physics that exist there fix the problem. I'm aware that that's a cop out.

Their argument against a QFT vacuum energy is that even assuming a finite cutoff, the value is still too high. I don't have the expertise to respond to that.
 
  • #310


Let's take another look at our basic expansion-rate equation
H² - H∞² = (8πG/3)ρ

Just using Freshman calculus we can differentiate it, and some nice things happen. The constant term drops out and we just have:

2HH' = (8πG/3)ρ'

But density ρ is essentially just some mass M divided by an expanding volume proportional to the cube of the scalefactor: a³
(M/a³)' = -3(M/a⁴)a' = -3ρ(a'/a) = -3ρH
Because by definition H = a'/a

2HH' = (8πG/3)(-3ρH) = -8πGρH, and we can cancel 2H to get:

H' = -4πGρ

I've highlighted that because it comes in a few lines later. Again by definition H = a'/a so we can approach H' from another direction:
H' = (a'/a)' = a"/a - (a'/a)² = a"/a - H²
It's great how much of the first 2 or 3 weeks of a beginning calculus course comes into play: chain rule, product rule, (1/xⁿ)'...

Now the Friedmann equation tells us we can replace H² by H∞² + (8πG/3)ρ. So we have
H' = a"/a - H² = a"/a - H∞² - (8πG/3)ρ = -4πGρ

Now we group geometry on the left and matter on the right, as usual, and get:
a"/a - H∞² = (8πG/3)ρ - 4πGρ = -(4πG/3)ρ
Here we used the arithmetic that 8/3 - 4 = -4/3

This is the so-called "second Friedmann equation" in the matter-dominated case where pressure is neglected.
a"/a - H∞² = -(4πG/3)ρ

We can make another application of our basic Friedmann equation to replace
(4πG/3)ρ by (H² - H∞²)/2
a"/a = H∞² - (4πG/3)ρ = H∞² - (H² - H∞²)/2
= (3H∞² - H²)/2

This will tell us the time in history when the INFLECTION occurred. When the distance growth curve slope stopped declining and began to increase. This is the moment when a" = 0. It marks when actual acceleration of distance growth began---i.e. when a" became positive.
To find that time all we need to do is find when
H² = 3H∞²
since then their difference will be zero, making a" = 0. That means
H = √3 H∞ = √3/163 percent per million years = 1/94 percent per million years.
That happened a little less than 7 billion years ago. In other words when expansion was a bit less than 7 billion years old. You can see that from the table. The 1/94 fits right in between 6 billion years ago and 7 billion years ago. In between 1/100 and 1/90.

Code:
 standard model using 0.272, 0.728, and 70.4 (the % is per million years)
timeGyr   z        H-then(km/s/Mpc)   H-then(% per Myr)
   0      0.000       70.4   1/139 
   1      0.076       72.7   1/134
   2      0.161       75.6   1/129
   3      0.256       79.2   1/123
   4      0.365       83.9   1/117
   5      0.492       89.9   1/109
   6      0.642       97.9   1/100
   7      0.824      108.6   1/90  
   8      1.054      123.7   1/79
   9      1.355      145.7   1/67
  10      1.778      180.4   1/54
  11      2.436      241.5   1/40
  12      3.659      374.3   1/26
  13      7.190      863.7   1/11
The present Hubble rate is put at 70.4 km/s per Mpc which means distances between stationary observers increase 1/139 percent per million years. And the Hubble radius (a kind of threshold within which distances are expanding slower than c) is currently 13.9 billion LY.
So by analogy you can see how the Hubble rate has been declining, while the Hubble radius (reciprocally) has extended out farther and farther.
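As a cross-check on the inflection result, here is a short Python sketch that solves H² = 3H∞² for redshift, using H(z)² = H₀²(ΩΛ + Ωm(1+z)³) with the table's parameters (variable names are mine):

```python
# Acceleration begins where a'' = 0, i.e. H^2 = 3 * H_inf^2.
# With H(z)^2 = H0^2 * (Omega_L + Omega_m * (1+z)^3) and
# H_inf^2 = Omega_L * H0^2, the condition reduces to
# (1+z)^3 = 2 * Omega_L / Omega_m.
OMEGA_M = 0.272
OMEGA_L = 0.728

z_inflect = (2.0 * OMEGA_L / OMEGA_M) ** (1.0 / 3.0) - 1.0
h_ratio = (3.0 * OMEGA_L) ** 0.5     # H at the inflection, in units of H0

print(round(z_inflect, 2))           # about 0.75
print(round(h_ratio * 70.4, 1))      # about 104 km/s per Mpc
```

A rate near 104 km/s per Mpc sits between the table's 97.9 (6 billion years ago) and 108.6 (7 billion years ago) rows, matching the "a little less than 7 billion years ago" conclusion.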
 
Last edited:
  • #311


Mark M said:
Thanks for the link!
I still don't understand how the cosmological constant is a 'natural' part of the EFE. They show the modified Einstein-Hilbert action (Eq. 5) to prove their point, but the original Einstein-Hilbert action for the gravitational field doesn't contain a [itex] \Lambda [/itex]. You have to put it in by hand (That's just how I've understood it, correct me if I'm wrong).
...
I've seen Steven Weinberg reason about "naturalness" this way in another situation. It's highbrow physics, but very common. When you write down a theory you are supposed to include ALL THE TERMS ALLOWED BY THE SYMMETRIES of the theory. It is almost a ritual mantra.
The symmetries (whatever they are for that particular theory) determine what terms "belong" and which do not.

In the case of GR the symmetries are the DIFFEOMORPHISMS (all the invertible smooth maps of the manifold). It is a large powerful group and it shows its power by excluding all terms except G and λ terms. Einstein called diffeo-invariance by the name "General Covariance". He had decided to make a theory that was "general covariant" and so the action had to be what you see and the EFE arising from it had to be what you see. There isn't anything put in by hand.

So the λ was always LATENT in the theory even though it may have been omitted in the very first publications. My guess is that it was seen to belong even at the start but not much talked about. Somebody else who knows the history better should clarify this. Presumably DeSitter needed λ to get his DeSitter space (1917) solution, which has no matter but does have a positive λ. And Levi-Civita came up with the same solution at just the same time. My guess is they both must have known the constant was there and that they were not putting anything in "by hand". Just my guess, could be wrong.

I think Bianchi and Rovelli discuss the λ naturalness, on the basis of diffeo-invariance. Maybe we should check back and see exactly what they say.

http://arxiv.org/abs/1002.3966/
Why all these prejudices against a constant?
 
Last edited:
  • #312
Thanks for that correction. I hadn't known that the cosmological constant emerged from a symmetry of GR.

Still, what of the QFT vacuum energy? If it exists, it certainly makes a contribution to the acceleration of the universe. So, if it does exist, then it seems simpler to say that it is the cause of the acceleration. The only way I see a way around that is to say that QFT either doesn't predict vacuum energy, or it's very much misunderstood. Since the first scenario (as far as I know) can't be true, we would need to go with the second. And I don't think that's very desirable, considering the success of QFT.
 
  • #313


I'll expand the earlier table and recap the easy calculus derivations from before.
The present Hubble rate is put at 70.4 km/s per Mpc which means distances between stationary observers increase 1/139 percent per million years. And the Hubble radius (a kind of threshold within which distances are expanding slower than c) is currently 13.9 billion LY.
So by analogy you can see how the Hubble rate has been declining, while the Hubble radius (reciprocally) has extended out farther and farther.

Code:
 standard model using 0.272, 0.728, and 70.4 (the % is per million years)
time(Gyr)   z    H-then(km/s/Mpc)   H(% per Myr)  Hub-radius(Gly)
   0     0.000     70.4   1/139      13.9
   1     0.076     72.7   1/134      13.4
   2     0.161     75.6   1/129      12.9
   3     0.256     79.2   1/123      12.3
   4     0.365     83.9   1/117      11.7
   5     0.492     89.9   1/109      10.9
   6     0.642     97.9   1/100      10.0
   7     0.824    108.6   1/90        9.0 
   8     1.054    123.7   1/79        7.9
   9     1.355    145.7   1/67        6.7
  10     1.778    180.4   1/54        5.4
  11     2.436    241.5   1/40        4.0
  12     3.659    374.3   1/26        2.6
  13     7.190    863.7   1/11        1.1

By definition H = a'/a, the fractional rate of increase of the scalefactor.

We'll use ρ to stand for the combined mass density of dark matter, ordinary matter and radiation. In the early universe radiation played a dominant role but for most of expansion history the density has been matter-dominated, with radiation making only a very small contribution to the total. Because of this, ρ goes as the reciprocal of volume. It's equal to some constant M divided by the cube of the scalefactor: M/a³.
Differentiating, we get an important formula for the change in density, namely ρ'.
ρ' = (M/a³)' = -3(M/a⁴)a' = -3ρ(a'/a) = -3ρH
The last step is by definition of H, which equals a'/a

Next comes the Friedmann equation conditioned on spatial flatness.
H² - H∞² = (8πG/3)ρ
Differentiating, the constant term drops out.
2HH' = (8πG/3)ρ'
Then we use our formula for the density change:
2HH' = (8πG/3)(-3ρH) = - 8πGρH, and we can cancel 2H to get the change in H, namely H':

H' = - 4πGρ

I've highlighted that because it gets used a few lines later. Again by definition H = a'/a so we can differentiate that by the quotient rule and find the change in H by another route:
H' = (a'/a)' = a"/a - (a'/a)² = a"/a - H²

Now the Friedmann equation tells us we can replace H² by H∞² + (8πG/3)ρ. So we have
H' = a"/a - H² = a"/a - H∞² - (8πG/3)ρ = -4πGρ

We group geometry on the left and matter on the right, as usual, and get:
a"/a - H∞² = (8πG/3)ρ - 4πGρ = -(4πG/3)ρ
Here we used the arithmetic that 8/3 - 4 = -4/3

This is the so-called "second Friedmann equation" in the matter-dominated case where radiation pressure is neglected.
a"/a - H∞² = -(4πG/3)ρ
In the early universe where light contributes largely to the overall density a radiation pressure term would be included and, instead of just ρ in the second Friedmann equation, we would have ρ+3p.

Now using the second Friedmann equation we would like to discover the time in history when the INFLECTION occurred. When the distance growth curve slope stopped declining and began to increase. This is the moment when a" = 0. It marks when actual acceleration of distance growth began---i.e. when a" became positive.

We can use the MAIN Friedmann equation to replace
(4πG/3)ρ by (H² - H∞²)/2 in the second equation.
a"/a = H∞² - (4πG/3)ρ = H∞² - (H² - H∞²)/2
= (3H∞² - H²)/2

Now to find the inflection time, all we need to do is find when it was that
H² = 3H∞²
since then their difference will be zero, making a" = 0. That means H = √3 H∞ = √3/163 percent per million years = 1/94 percent per million years. As one sees from the table, that happened a little less than 7 billion years ago. In other words when expansion was a bit less than 7 billion years old. The 1/94 fits right in between 6 billion years ago and 7 billion years ago. In between 1/100 and 1/90.
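To tie the derivation back to the table, this Python sketch recomputes the H-then column from the z column using H(z) = H₀·sqrt(ΩΛ + Ωm(1+z)³), which reproduces the tabulated values to within the rounding of z (function name is mine):

```python
# Recompute "H-then" (km/s per Mpc) from redshift z, with the same
# parameters as the table: Omega_m = 0.272, Omega_L = 0.728, H0 = 70.4.
def h_then(z, h0=70.4, om=0.272, ol=0.728):
    return h0 * (ol + om * (1.0 + z) ** 3) ** 0.5

for z in (0.0, 0.824, 1.778, 7.190):
    # matches the table column (70.4, 108.6, 180.4, 863.7) to within
    # the rounding of the tabulated redshifts
    print(z, round(h_then(z), 1))
```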
 
Last edited:
  • #314


Is there a non-mathematical definition of λ?

I know π is the relationship between a circle's diameter and its circumference; c is the speed of light. Those are definitions I can get my head around.

I googled Definition: cosmological constant and got "an arbitrary constant in the equations of general relativity theory" --- worse than useless! Other definitions were contradictory, with many referencing dark energy.

Anyone care to give it a try?
 
  • #315


RayYates said:
Is there a non mathimatical definition of λ ?
I know π is the relationship between a circle's diameter and its circumference; c is the speed of light.
...

Heh heh, you won't believe me, will you? :biggrin: One of the main points in my recent posts has been to try to give you just exactly that. A definition of λ which is intuitively meaningful---something you can visualize and hang some concrete meaning on.

FACT: our universe has this pattern of expanding distances, by a small fraction of a percent per million years.

Get to know your universe: the rate is 1/139 of one percent expansion per million years.

That's a fact of life, your life, my life and the life of the Aliens on Planet Gizmo.

There is another important quantity that is a basic feature of the universe, which everybody should know (Aliens included :biggrin:)

FACT: The 1/139% rate is not steady but is tending towards 1/163 of a percent per million years.

What is λ? Probably the most immediate handle on it is what you get when you multiply it by c² and divide by 3.

You get the SQUARE of that 1/163 percent rate.

On Planet Gizmo they probably don't use exactly the same λ that our Albert wrote; they probably use a constant χ which is the same as λc²/3.

Because the relativity equation their Albert wrote had a c² and a 3 in it. So to compensate, to make their equation give the same answers, they have to adjust the constant and make it χ.

They also don't use π = 3.14, they use a constant called something else which is 6.28. It is the ratio of the circumference to the RADIUS. That's what they use as a constant, instead of the circumference to the DIAMETER like we do. And they probably have different sexual practices as well. Or barbecue differently.

A constant is just as good if you multiply it by 2, or divide it by 3, you just have to adapt the formulas so as to compensate.

Morally the cosmological constant Λ is the same as that 1/163 of a percent growth rate which our universe will eventually get to. A kind of trend inherent in its 4d geometry. A rate of distance increase that our geometry thinks is "just right", no more and no less. (And on this planet we do not yet know WHY it is this particular 1/163 size and not some other size. QG (4d quantum geometry) may eventually explain the size of that 1/163 rate, since it is a feature built into the geometry and QG looks at the quantum foundations of geometry.)
 
Last edited: