Hubble tension -- any resolution?

In summary, "Hubble tension" refers to the discrepancy between different measurements of the Hubble constant, which describes the rate of expansion of the universe. The tension arises from conflicting values obtained through observations of the cosmic microwave background radiation and direct measurements using supernovae and other local distance indicators. Various potential resolutions have been proposed, including new physics beyond the standard model of cosmology, systematic errors in measurements, or the influence of dark energy. Ongoing research aims to clarify these inconsistencies, but a definitive resolution remains elusive.
  • #36
Mordred said:
Well, I for one have always hated the "new physics required" trend you see in a large number of studies whenever some tension shows up. It seems to be a very common declaration, particularly (but not only) in pop media.
Mordred said:
Typically, in the last few decades, when some tension shows up in a study, the problem gets resolved via calibration fine-tuning or some other correction within the systematic error margins.
Mordred said:
Our models are extremely successful and robust, with huge supportive bodies of evidence that are strongly interconnected among numerous related physics theories. Knowing this, I typically approach these findings with the frame of mind that new physics isn't usually the answer.

However, that's just me.
I hear you, and agree with you strongly in the area of high energy physics and in describing highly relativistic systems like black holes and white dwarf-black hole binary systems.

In the case of cosmology, while there is something to be said for not leaping off to new physics without trying to give existing models a try, it is also true that the LambdaCDM model is not nearly as complete as the Standard Model of Particle Physics, for example.

The CDM part of the LambdaCDM model is basically a placeholder with a quite general description of dark matter's properties, but a lot of the specifics not worked out.

Also, while it would be one thing if the Hubble tension were the only issue with the LambdaCDM model (and it is still, just barely, possible that the Hubble tension could be resolved with improved measurements), the LambdaCDM model has dozens of independently measured tensions with astronomy observations, something that has been explored at length in other PF threads.

The unmodified LambdaCDM model remains the paradigm mostly because it is easy to work with and has few parameters (just six in the most basic version), and because no consensus has emerged around any one alternative to it.
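(For reference, and with the caveat that parameterizations vary slightly between analyses, the six base parameters usually quoted are the baryon density ##\Omega_b h^2##, the cold dark matter density ##\Omega_c h^2##, the acoustic angular scale ##\theta_*##, the optical depth to reionization ##\tau##, and the amplitude and tilt of the primordial spectrum, ##A_s## and ##n_s##.)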
 
  • Like
Likes nnunn and Mordred
  • #37
Mordred, regarding your Higgs-related research: would a non-constant distribution of that condensate of weak hypercharge be worth exploring? Global symmetry-breaking, leading to global homogeneity, simplifies the standard model for particle physicists, but might it be another of those "too great assumptions"?

The electroweak processes involving the Higgs would likely be washed out via inflationary processes, though one may get signatures in the CMB. So the electroweak process itself involving the Higgs wouldn't be the cause of the Hubble tension.

However, and this is specifically if the cosmological constant itself involves the Higgs field as the cause of Lambda, then as the universe expands you should see a reduction in the energy density term for Lambda, just as for every other SM particle. So far, no evidence of a varying Lambda term has been conclusive.
Where this becomes important is the effective equation of state in scalar-field modelling.

https://en.m.wikipedia.org/wiki/Equation_of_state_(cosmology)

See the last equation on that page: if it yields a value other than w = -1, then the cosmological constant varies with time. However, all research and measurements so far agree strongly with w = -1.
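For reference, a minimal sketch of that relation: for a homogeneous scalar field ##\phi## with potential ##V(\phi)##, the effective equation of state is
$$w=\frac{\tfrac{1}{2}\dot{\phi}^2-V(\phi)}{\tfrac{1}{2}\dot{\phi}^2+V(\phi)}$$
When the kinetic term ##\tfrac{1}{2}\dot{\phi}^2## is negligible compared to ##V(\phi)##, this reduces to ##w\approx -1##, i.e. constant-Lambda behaviour; any appreciable kinetic contribution would show up as a time-varying ##w##.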
 
  • #38
ohwilleke said:
In the case of cosmology, while there is something to be said for not leaping off to new physics without trying to give existing models a try, it is also true that the LambdaCDM model is not nearly as complete as the Standard Model of Particle Physics, for example.

The CDM part of the LambdaCDM model is basically a placeholder with a quite general description of dark matter's properties, but a lot of the specifics not worked out.

Also, while it would be one thing if the Hubble tension were the only issue with the LambdaCDM model (and it is still, just barely, possible that the Hubble tension could be resolved with improved measurements), the LambdaCDM model has dozens of independently measured tensions with astronomy observations, something that has been explored at length in other PF threads.

The unmodified LambdaCDM model remains the paradigm mostly because it is easy to work with and has few parameters (just six in the most basic version), and because no consensus has emerged around any one alternative to it.

Agreed on everything you have above. We share the same way of thinking on this.
 
  • Like
Likes ohwilleke
  • #39
Hi Jaime, I wrote:
nnunn said:
part of this tension may be related to the fact that calibration of distance ladders is done within the local supercluster (Laniakea). Any unaccounted-for bulk flows, non-homogeneities, or non-isotropies beyond this back yard would affect the validity of such a local yardstick.
You replied:
Jaime Rudas said:
Internally, galaxy clusters are neither homogeneous nor isotropic, so I don't understand how these inhomogeneities and anisotropies can affect the calibration of the distance ladder.

Indeed! But by "back yard" and "yardstick", I was thinking of Laniakea.

So the "bulk flows, non-homogeneities, or non-isotropies" I had in mind would be on a larger scale, that is to say, external to our local supercluster. Sorry for not being clear.
 
  • #40
ohwilleke said:
The unmodified LambdaCDM model remains the paradigm mostly because it is easy to work with and has few parameters (just six in the most basic version), and because no consensus has emerged around any one alternative to it.
In my opinion, the ΛCDM model remains the paradigm because it is, by far, the model that best fits the observations, and no alternative has emerged that even remotely comes close in this regard.
 
  • #41
nnunn said:
Indeed! But by "back yard" and "yardstick", I was thinking of Laniakea.

So the "bulk flows, non-homogeneities, or non-isotropies" I had in mind would be on a larger scale, that is to say, external to our local supercluster. Sorry for not being clear.
But the calibration of the distance ladder does not depend on the presence or absence of inhomogeneities or anisotropies at any scale.
 
  • #42
To add some clarity to Jaime Rudas' post: the papers I linked in post #30 describe calibration procedures within our Milky Way. Those calibration procedures set specific filters with regard to the Leavitt law, in essence fine-tuning the metallicity detection of different Cepheids in combination with parallax distance measures from the Gaia parallax database.

In cosmology, a commonly used tool is the luminosity-distance relation, so fine-tuning this relation helps fine-tune further measurements where parallax becomes impractical.
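As a rough illustration of what that calibration step does (a minimal sketch in Python; the parallaxes, magnitudes, and periods below are placeholders, not values from the linked papers, and real analyses add metallicity and reddening terms on top of this):

```python
import numpy as np

# Hypothetical Milky Way calibrator Cepheids with Gaia parallaxes (placeholders).
parallax_mas = np.array([1.25, 0.8, 0.5])   # parallax in milliarcseconds
m_apparent   = np.array([6.10, 6.43, 6.49]) # mean apparent magnitudes
period_days  = np.array([5.5, 10.0, 25.0])  # pulsation periods

# Parallax -> distance (pc); the distance modulus then gives absolute magnitudes.
d_pc  = 1000.0 / parallax_mas
M_abs = m_apparent - 5.0 * np.log10(d_pc / 10.0)

# Fit the Leavitt (period-luminosity) law: M = a * log10(P) + b.
a, b = np.polyfit(np.log10(period_days), M_abs, 1)

# Apply the calibrated law to a Cepheid too distant for parallax (placeholders).
P_target, m_target = 14.0, 24.5
M_target = a * np.log10(P_target) + b
d_target_pc = 10.0 ** ((m_target - M_target + 5.0) / 5.0)
print(f"M = {a:.2f} log10(P) {b:+.2f};  d = {d_target_pc / 1e6:.1f} Mpc")
```

With these placeholder inputs the fit recovers a slope near -2.4 (roughly the real Leavitt slope) and puts the target Cepheid at about 6 Mpc, illustrating how the parallax-anchored rung propagates outward.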
 
  • Like
Likes Jaime Rudas
  • #43
Mordred said:
To add some clarity to Jaime Rudas' post: the papers I linked in post #30 describe calibration procedures within our Milky Way. Those calibration procedures set specific filters with regard to the Leavitt law, in essence fine-tuning the metallicity detection of different Cepheids in combination with parallax distance measures from the Gaia parallax database.

In cosmology, a commonly used tool is the luminosity-distance relation, so fine-tuning this relation helps fine-tune further measurements where parallax becomes impractical.
Yes, the three most important steps of the distance ladder are parallax, Leavitt's law, and SNe Ia. The calibration of these steps does not depend on inhomogeneities or anisotropies.
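To make the chaining of those steps explicit, a minimal sketch (illustrative numbers only; the SN Ia absolute magnitude is roughly the literature's Cepheid-calibrated value, and the supernova itself is hypothetical):

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

# Step 3 zero point: SN Ia absolute magnitude, itself calibrated via
# Cepheids (step 2), which are calibrated via parallax (step 1).
M_SN = -19.25        # approximate literature value, for illustration

# A hypothetical SN Ia well out in the Hubble flow:
m_sn, z = 15.41, 0.02   # apparent peak magnitude and redshift (placeholders)

mu = m_sn - M_SN                              # distance modulus
d_mpc = 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6    # parsecs -> Mpc
H0 = C_KM_S * z / d_mpc                       # low-z approximation: cz = H0 * d
print(f"d = {d_mpc:.1f} Mpc, H0 = {H0:.1f} km/s/Mpc")  # ~ 70 km/s/Mpc
```

Note that the calibration chain (parallax → Cepheids → SNe Ia) involves only magnitudes and distances; large-scale bulk flows would enter through the measured redshift, not through the calibration itself.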
 
  • #44
Jaime Rudas said:
In my opinion, the ΛCDM model remains the paradigm because it is, by far, the model that best fits the observations, and no alternative has emerged that even remotely comes close in this regard.
Does ΛCDM really fit the observations well?

(I won't litter this comment with citations for all of them but can easily produce them if desired.)

1. Galaxy formation occurs much sooner than predicted.
2. The predicted value of the growth index in the ΛCDM Model, that measures the growth of large scale structure, is in strong (4.2 sigma) tension with observations, given the model's measured parameters.
3. CDM predicts fewer galaxy clusters than are observed.
4. There are too many colliding clusters, and when they collide they are, on average, colliding at too-high relative velocities.
5. Void galaxies are observed to have larger mean-distances from each other at any given void size than predicted by ΛCDM.
6. Voids between galaxies are more empty than they should be, and do not contain the population of galaxies expected in ΛCDM. See also the KBC void.
7. The gravitational lensing of subhalos in galaxy clusters has recently been observed to be much more compact and less "puffy" than CDM would predict.
8. The 21cm background predictions of the theory are strongly in conflict with the EDGES data.


9. The Hubble tension. Many analyses of the data prefer a non-constant amount of dark energy over cosmological history.
10. The σ8/S8/fσ8 tension.
11. Increasing evidence that the universe is not homogeneous and isotropic.
12. ΛCDM provides no insight into the "cosmic coincidence" problem.
13. CDM gets the halo mass function (i.e. aggregate statistical distribution of galaxy types and shapes) wrong.
14. CDM doesn't adequately explain galaxies with no apparent dark matter and has no means of predicting where they will be found.
15. KiDS evidence of less clumpy large-scale structure than predicted.
16. CDM should predict NFW shaped dark matter halos almost universally, but observations show that NFW shaped dark matter halos are rare.
17. CDM predicts cuspy central regions of dark matter halos, which are not observed. Physically motivated feedback models have failed to explain this observation.
18. CDM predicts more satellite galaxies than are observed.
19. CDM fails to predict that satellite galaxies strongly tend to be in the plane of the galaxy.
20. The observed satellite galaxies of satellite dwarf galaxies are one hundred times brighter than ΛCDM simulations suggest they should be.
21. Flat rotation curves in spiral galaxies are observed to extend to about a million parsecs, but CDM predicts that they should fall off much sooner (at most in the tens or hundreds of thousands of parsecs).

22. CDM fails to explain why the baryonic Tully-Fisher relationship holds so tightly over so many orders of magnitude, or why there is a similarly tight scaling law with a different slope in galaxy clusters.
23. The well-known scaling laws among the structural properties of the dark and luminous matter in disc systems are too complex to have arisen from two inert components that merely share the same gravitational field, as CDM proposes.
24. CDM failed to predict in advance that low surface brightness galaxies appear to be dark matter dominated.
25. CDM erroneously predicted X-ray emissions in low surface brightness galaxies that are not observed.
26. CDM fails to predict the relationship between DM proportion in a galaxy and galaxy shape in elliptical galaxies.
27. CDM doesn't predict the relationship between bulge mass and number of satellite galaxies.
28. CDM predicts that too few metal-poor globular clusters are formed.
29. CDM does not explain why globular clusters, which are predicted and observed to have little dark matter, nonetheless show non-Keplerian dynamics.
30. We do not observe in galaxy systems the Chandrasekhar dynamical friction we would expect to see if CDM were as proposed.
31. CDM greatly underestimates the proportion of disk galaxies that have very thin disks.
32. CDM doesn't explain why thick spiral galaxies have more inferred dark matter than thin ones.
33. CDM doesn't predict the absence of inferred dark matter effects in gravitationally bound systems that are within moderately strong gravitational fields of a larger gravitationally bound system.
34. Compact objects (e.g., neutron stars) should show equation-of-state impacts from dark matter absorbed by them, at rates predicted from the estimated dark matter density, that are not observed.
35. ΛCDM extended minimally to include neutrinos is overconstrained in light of the latest DESI data, with a best fit to negative neutrino mass, or at least a sum of neutrino masses far lower than the minimum inferred from neutrino oscillations.
36. CDM does not itself predict the observation from lensing data that dark matter phenomena appear to be wave-like.
37. Extensive and varied searches for dark matter particle candidates have ruled out a huge swath of the parameter space for these particles while finding no affirmative evidence of any such particles. These searches include direct dark matter detection experiments, micro-lensing, LHC searches, searches for sterile neutrinos, searches for axion interactions, comparisons of the inferred mean velocity of dark matter due to the amount of observed structure in galaxies with thermal freeze out scenarios, searches for signatures of dark matter annihilation, etc.
38. CDM is not the only theory that can accurately produce the observed cosmic background radiation pattern (e.g., at least three gravity based approaches to dark matter phenomena have done so).
39. The angular momentum problem: In CDM, during galaxy formation, the baryons sink to the centers of their dark matter halos. A persistent idea is that they spin up as they do so (like a figure skater pulling her arms in), ultimately establishing a rotationally supported equilibrium in which the galaxy disk is around ten or twenty times smaller than the dark matter halo that birthed it, depending on the initial spin of the halo. This simple picture has never really worked. In CDM simulations, in which baryonic and dark matter particles interact, there is a net transfer of angular momentum from the baryonic disk to the dark halo that results in simulated disks being much too small.
40. The missing baryons challenge: The cosmic fraction of baryons (the ratio of normal matter to total matter) is well known (16 ± 1%; see the quick check after this list). One might reasonably expect individual CDM halos to be in possession of this universal baryon fraction: the sum of the stars and gas in a galaxy should be 16% of the total, mostly dark mass. However, most objects fall well short of this mark, the only exception being the most massive clusters of galaxies. So where are all the baryons?
41. CDM halos tend to over-stabilize low surface density disks against the formation of bars and spirals. You need a lot of dark matter to explain the rotation curve, but not too much, in order to allow for spiral structure. This tension has not been successfully reconciled.
42. Many of the possible CDM particle scenarios disturb the well-established evidence of Big Bang Nucleosynthesis.
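A quick numerical check of the baryon fraction quoted in point 40, using the widely quoted (approximate) Planck 2018 density parameters:

```python
# Approximate Planck 2018 physical densities (Omega * h^2):
omega_b_h2 = 0.0224   # baryons
omega_m_h2 = 0.1430   # total matter (baryons + cold dark matter)

f_b = omega_b_h2 / omega_m_h2
print(f"cosmic baryon fraction = {f_b:.1%}")   # ~15.7%, i.e. the ~16% figure
```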

This list isn't comprehensive, but it is more complete than most (compare, e.g., the list from Wikipedia).

Three and a half dozen strong tensions or outright conflicts between ΛCDM predictions and observations doesn't sound like a great fit to observations to me.

In fairness, the original ΛCDM model formulated in the late 1990s wasn't intended to be perfect. It was a first approximation that was focused mostly on cosmology observations and was not unduly concerned with galaxy- and cluster-scale phenomena. The scientists who devised it knew perfectly well that they were ignoring factors (like neutrinos) that were present and had some effect, but were negligible relative to the precision of astronomy observations available at the time, which were much more crude than recent observations - e.g., they didn't have the JWST, DESI, 21cm measurements, gravitational wave detectors, or decent neutrino telescopes. It wasn't supposed to be the final, be-all-and-end-all theory; it has had a good run and has probably lasted longer as the paradigm than originally expected.

And, again, competing paradigms are like duels to be the head of the tribe. Until you have a particular competitor that is clearly superior enough to displace the leader of the pack, it stays in the lead by default, even if its flaws are myriad. I'm not necessarily saying that the competitor has arrived.

But I'm also saying that a paradigm with so many conflicts with observations is vulnerable and has reduced credibility. So it shouldn't be taken as seriously as something like the Standard Model of Particle Physics, which has only a handful of recent and relatively minor tensions that currently remain unresolved after half a century of rigorous efforts to poke holes in it.

To circle back to the original question of the Hubble tension: all of these discrepancies, even if they merely require tweaks to the ΛCDM model rather than wholesale abandonment of it, add credence to the possibility that the ΛCDM model details used to predict the early-time Hubble constant from the Cosmic Microwave Background radiation with high precision could cause the early-time Hubble constant value to be underestimated. This is so even assuming that the Planck CMB measurements are correct and were correctly inserted into the status quo ΛCDM model. Such details could easily have produced an early-time Hubble constant determination from the CMB that is 2-6% too low.
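For scale, a back-of-the-envelope check of that claim using the commonly quoted central values (Planck 2018's 67.4 ± 0.5 km/s/Mpc and a SH0ES-style local value of roughly 73.0 ± 1.0 km/s/Mpc):

```python
import math

h0_early, err_early = 67.4, 0.5   # CMB + LambdaCDM (Planck 2018)
h0_late,  err_late  = 73.0, 1.0   # local distance ladder (approximate)

gap   = h0_late / h0_early - 1.0
sigma = (h0_late - h0_early) / math.sqrt(err_early**2 + err_late**2)
print(f"gap = {gap:.1%}, naive tension = {sigma:.1f} sigma")  # ~8.3%, ~5.0 sigma

# A 2-6% upward shift of the early-time value would close much,
# though not quite all, of that gap.
```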
 
  • Informative
Likes nnunn
  • #45
Meh. Them's the kind of issues with the model as with the 'Earth is round' one. Both fail once you get down to the nitty-gritty, and want to know where all the kinks and ridges and bulges come from. But it's pretty clearly a generally-correct background on which to build, and whatever model eventually supersedes LCDM will have to resemble it at less granular scales.
 
  • Skeptical
  • Like
Likes ohwilleke and Jaime Rudas
  • #46
ohwilleke said:
Does ΛCDM really fit the observations well?
Well, I didn't say that it fits the observations well, but that it is the model that best fits the observations. Do you know of any model that fits the observations better than ΛCDM?
 
  • Like
Likes nnunn
  • #47
ohwilleke said:
Does ΛCDM really fit the observations well?
Fairly substantial list. One detail often missed is that ΛCDM evolves as new research findings become conclusive; it's rather adaptive in that regard. For example, prior to WMAP you had a rather large list of possible universe geometries that had viability. Now any complex geometries, such as variations of the Klein bottle, are constrained.
Consider this example

Encyclopaedia Inflationaris


https://arxiv.org/abs/1303.3787

This is a comprehensive list of different inflation scenarios. If one were to look over this list, could anyone state "this inflation theory is ΛCDM"? Or would one consider them all to be possible options under ΛCDM?

Take for example this line from the above introduction.

"namely the slow-roll single field models with minimal kinetic terms. "

This description is what got singled out as the best-favored fit from the first Planck dataset.

Another decent example being the following

"The Cosmic Energy Inventory"
https://arxiv.org/pdf/astro-ph/0406095v2

If one were to look through the values given in this article, one would find that many of them have since been replaced with better estimates.

As ΛCDM is adaptive, it's likely to stick around for quite some time, albeit continually adapting and improving as research findings become available.
With regard to the Hubble tension, I would be extremely surprised if ΛCDM could not adapt to the new findings, whatever they may be.
As someone who has actively watched cosmology research develop over the past 35 years or so, it has often amazed me how adaptive ΛCDM is.
 
  • Like
Likes nnunn
  • #48
Mordred said:
For example, prior to WMAP you had a rather large list of possible universe geometries that had viability. Now any complex geometries, such as variations of the Klein bottle, are constrained.
As I understand it, the WMAP observations don't constrain the existence of a complex topology for the universe, but rather its size. That is, if, for example, the universe has a flat 3-torus topology, its size would have to be several times that of the observable universe.
 
  • #49
It did apply limited constraints, as you described, that were further tightened in the first Planck dataset. Lol, that was quite a few years ago, so my memory of the events could very well be a little sketchy on all the details from the WMAP results. Though I do recall all the space.com forum arguments (back when that forum existed) debating complex geometries under WMAP.
 
  • #50
Mordred said:
It did apply limited constraints, as you described, that were further tightened in the first Planck dataset.
What kind of constraints are you referring to?
 
  • #51
I don't know if the technical papers are still available; they likely are. However:

"The newly-released WMAP data are now sufficiently sensitive to test dark energy, providing important new information with no reliance on previous supernovae results. The combination of WMAP and other data** limits the extent to which dark energy deviates from Einstein's cosmological constant. The simplest model (a flat universe with a cosmological constant) fits the data remarkably well. The new data constrain the dark energy to be within 14% of the expected value for a cosmological constant, while the geometry must be flat to better than 1%. The simplest model: a flat universe with a cosmological constant, fits the data remarkably well.

In more technical terms, for a flat universe, the dark energy "equation of state" parameter is -1.1 � 0.14, consistent with the cosmological constant (value of -1). If the dark energy is a cosmological constant, then these data constrain the curvature parameter to be within -0.77% and +0.31%, consistent with a flat universe (value of 0)."

https://map.gsfc.nasa.gov/news/7yr_release.html
 
  • #52
Mordred said:
I don't know if the technical papers are still available; they likely are. However:

"[...]
In more technical terms, for a flat universe, the dark energy "equation of state" parameter is -1.1 ± 0.14, consistent with the cosmological constant (value of -1). If the dark energy is a cosmological constant, then these data constrain the curvature parameter to be within -0.77% and +0.31%, consistent with a flat universe (value of 0)."

https://map.gsfc.nasa.gov/news/7yr_release.html
But that isn't related to complex topology constraints, which is what was mentioned in post #47.
 
  • #53
I realize that it doesn't specifically state that in that link. You're asking me for a reference from over a decade ago that may or may not be available. I'm currently searching for the paper I recall reading; if I can find it, I will post it.
 
  • #54
Well, I couldn't locate the study I recall, and as this is off-topic for this thread, I'm not going to waste too much time on it.
However, we can readily list a few constraints:

1) inflation as the cause of the flatness
2) a non-varying Lambda term
3) little to no indication of topological deformation, such as a Möbius twist
4) lack of mirroring or reflections
5) strong agreement with homogeneity and isotropy (the cosmological principle, encoded in the FLRW line element)
##d\tau^2=g_{\mu\nu}dx^\mu dx^\nu=dt^2-a^2(t)\left[\frac{dr^2}{1-kr^2}+r^2d\theta^2+r^2\sin^2\theta\,d\varphi^2\right]##

6) closeness to the critical density.

However this is once again off topic for this thread.
 
  • #55
Mordred said:
3) little to no indication of topological deformation, such as a Möbius twist
4) lack of mirroring or reflections
5) strong agreement with homogeneity and isotropy.
In my view, these are the only ones that, in some way, constrain the possibility of a complex topology, but, as I mentioned in post #48, these constraints refer more to the size of the universe than to its topology.
 
  • #56
Well, one can argue it both ways, and while viable geometries are an interesting topic, it's not really related to the Hubble constant problem.

The more I look into the P-L relations, as well as Tully-Fisher, the more I've been leaning towards the problem involving calibration errors in the different methods.

I've been studying the methods and related papers using this as a reference listing.

https://pdg.lbl.gov/1999/hubblerpp.pdf

I plan on examining the oft-mentioned Freedman paper(s) shortly.
The uncertainties that link gives definitely bear looking into for each method. If calibration is the issue, then this is something that is likely going to take years to resolve (calibration research takes time).
 
  • Like
Likes nnunn
  • #57
Jaime Rudas said:
It should be noted that Freedman's results have not yet been published.
A first preprint in this regard has already been posted on arXiv.
 
  • Like
Likes ohwilleke
  • #58
Thanks. The paper is interesting, but it's more a proof of methodology for reducing error margins using the different independent measurements mentioned in the paper. Though it mentions it's in good agreement with the predictions made using the Cepheid Leavitt law, which we already mentioned as needing good calibration.
The JWST paper you linked shows that the range over which that law has been practical can be extended by the methods in the paper to 10 Mpc, assuming I understood the paper accurately.
 
  • #59
Mordred said:
The JWST paper you linked shows that the range over which that law has been practical can be extended by the methods in the paper to 10 Mpc.
The range of Leavitt's law was expanded to 10 Mpc by the characteristics of James Webb, not by the methods described in the paper.

It seems to me that an important conclusion of the paper is that, by rectifying the color-magnitude diagram of the Tip of the Red Giant Branch, it is possible to reduce the uncertainty of the measurement of its magnitude by a third, which makes this method a good distance indicator.
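For anyone following along, a minimal sketch of how a TRGB magnitude becomes a distance; the zero point below is the F090W calibration quoted later in this thread (post #68), and the apparent magnitude is a placeholder:

```python
# TRGB distance: mu = m_TRGB - M_TRGB, then d = 10^((mu + 5)/5) parsecs.
M_TRGB = -4.362   # metal-poor TRGB in F090W (value quoted in post #68)
m_trgb = 24.8     # hypothetical measured TRGB apparent magnitude

mu = m_trgb - M_TRGB
d_mpc = 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6
print(f"mu = {mu:.2f} mag -> d = {d_mpc:.1f} Mpc")  # ~6.8 Mpc

# Rectifying the color-magnitude diagram sharpens the TRGB edge, which
# shrinks the uncertainty on m_trgb and hence on the distance.
```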
 
  • #60
Agreed, however the methods of the paper are useful when it comes to calibration of JWST characteristics. Which evidently we both agree on, lol.
 
  • #61
Mordred said:
Agreed, however the methods of the paper are useful when it comes to calibration of JWST characteristics.
I don't see how the methods of the paper can be useful in calibrating the JWST characteristics. Could you give an example?
 
  • #62
Any method using Cepheids, stars including TRGB stars requires a good understanding of their luminosities, particularly on distance measures.

A decent article covering the factors involved for TRGB can be found here:

https://arxiv.org/abs/2002.01550

Several of the luminosity relations will differ from Cepheids, as their metallicities vary.
You might note that the above paper also references Freedman's paper from 2019.
 
  • #63
Mordred said:
Any method using Cepheids, stars including TRGB stars requires a good understanding of their luminosities, particularly on distance measures.
Cepheids and TRGB are two very different types of stars.
 
  • #65
Mordred said:
Yes I know
If you know, why do you imply that the TRGB are Cepheids?
 
  • #66
Where did I do that?
 
  • #67
  • #68
Really?

"However, with any standard candle, it is first necessary to provide an absolute reference. Here we use Cycle 1 data to provide an absolute calibration in the F090W filter. F090W is most similar to the F814W filter commonly used for TRGB measurements with HST, which had been adopted by the community due to minimal dependence from the underlying metallicities and ages of stars. The imaging we use was taken in the outskirts of NGC 4258, which has a direct geometrical distance measurement from the Keplerian motion of its water megamaser. Utilizing several measurement techniques, we find MF090WTRGB = -4.362 ± 0.033 (stat) ± 0.045 (sys) mag (Vega) for the metal-poor TRGB. We also perform measurements of the TRGB in two Type Ia supernova hosts, NGC 1559, and NGC 5584. We find good agreement between our TRGB distances and previous distance determinations to these galaxies from Cepheids (Δ = 0.01 ± 0.06 mag), with these differences being too small to explain the Hubble tension (∼0.17 mag). As a final bonus, we showcase the serendipitous discovery of a faint dwarf galaxy near NGC 5584."
 
  • #69
Mordred said:
Where did I do that?
In post #62:

Mordred said:
Any method using Cepheids, stars including TRGB stars requires a good understanding of their luminosities, particularly on distance measures.
 
  • #70
My apologies, that was poorly worded; I can see where the confusion crept in.
 