I appreciate Sabine Hossenfelder's latest video, and thought I would assemble some references. There's a mix of particle physics and cosmology topics; in general, the cosmological coincidences she mentions are mainstream research topics, whereas the particle physics coincidences are not.
1) Proton/electron mass ratio equals 6π^5
This is the coincidence on her list that is most decisively dismissed as just a coincidence, because these two particles obtain their masses in different ways (the electron from the Higgs mechanism, the proton from quarks and gluons).
But back in 1951, when this was first reported in a paper by Friedrich Lenz that is one of the shortest of all time, they didn't know about either of those things.
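For scale, here is how good the numerology actually is, using the CODATA value of the mass ratio (a quick check, nothing more):

```python
from math import pi

mass_ratio = 1836.15267343   # CODATA proton/electron mass ratio
lenz = 6 * pi**5             # Lenz's 1951 expression

print(lenz)                   # ~1836.118
print(mass_ratio / lenz - 1)  # ~1.9e-5: agreement to about 2 parts in 100,000
```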
The best-known attempt to explain it (still very obscure) is due to Armand Wyler, who was also explaining the fine-structure constant with a somewhat more complicated formula. Wyler's work was published in the journal of the French Academy of Sciences, and actually got him invited to Princeton's IAS for a while. I cannot find his main paper "Les groupes des potentiels de Coulomb et de Yukawa" ("The groups of the Coulomb and Yukawa potentials") anywhere online, but his unpublished notes have survived. He obtained his formulas from quotient spaces associated with the conformal group, an extension of the Poincaré group of special relativity. Using the conformal group arguably made him ahead of his time, but no one could find a physical interpretation of his calculations.
That all happened around 1971. In the 1970s the standard model came together, including QCD and the Higgs mechanism, and the associated view that the masses of electron and proton derive from unrelated effects. Fifty years further on, one might say that the standard model parameters all derive from the moduli of a single string geometry, and so maybe this formula has a very slim chance of not being a coincidence. But that would require some new kind of stringy implementation of the standard model. So, like the mainstream, I place this first coincidence at the bottom of the list in terms of plausibility.
2) Koide formula
The subject of hundreds of comments in this sub-forum, Koide's formula first appeared as a corollary of a preon (sub-quark) model that he worked on. But as mentioned, we now see the mass of the electron and other fundamental fermions as being due to the Higgs mechanism, and Koide later switched to Higgs-based explanations.
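For reference, the formula itself says that for the charged leptons,

Q = (m_e + m_μ + m_τ) / (√m_e + √m_μ + √m_τ)² = 2/3.

A quick numerical check with PDG masses (the tau mass carries the largest uncertainty):

```python
from math import sqrt

# Charged-lepton pole masses in MeV (PDG values)
m_e, m_mu, m_tau = 0.51099895, 105.6583755, 1776.86

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2
print(Q)   # ~0.666661, agreeing with 2/3 to about one part in 10^5
```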
From a contemporary view, this is a relationship among couplings to the Higgs field (yukawa couplings, or "yukawas" for short). Relationships among yukawas are a common thing in mainstream model-building, but this one involves square roots of yukawas, which is highly unusual, though still not impossible. What's fatal for the Koide formula, in the eyes of many particle physicists, is that it is a relationship among "pole masses" at different scales, whereas the symmetry mechanisms that can produce relations among yukawas pertain to "running masses" at the same scale.
Koide's compatriot Yukinari Sumino did propose a mechanism whereby a symmetry could lead to an exact relation among pole masses, but Koide himself is the only person who has built on Sumino's ideas. Koide is also the author of the vast majority of the actual field-theoretic models that could explain the original relationship. (Offhand, the only exception I can think of is due to Ernest Ma.)
However, numerous generalizations of the Koide formula have been proposed, e.g. extending it to other triplets of particles. The most comprehensive such extension is due to @arivero, and is discussed in the thread on "Koide sum rules". One of its ingredients is an intriguing refinement of the original formula due to @CarlB, in which the formula is obtained by taking the trace of a circulant matrix. This refined formula has two parameters, a mass scale and a phase angle (see the sketch below). The significance of the phase angle is discussed in some papers by Piotr Zenczykowski.
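As I understand that refinement, the square roots of the three masses appear as the eigenvalues of a 3×3 circulant matrix, √m_n = μ(1 + √2 cos(δ + 2πn/3)) for n = 0, 1, 2, and the Koide relation then holds automatically for any mass scale μ and phase δ. A sketch, with rounded fitted values (these reproduce the lepton masses to roughly 0.1%):

```python
from math import sqrt, cos, pi

mu_scale = 17.7156   # overall mass scale, in sqrt(MeV) (rounded fit)
delta    = 0.22195   # phase angle, numerically close to 2/9

# sqrt(m_n) = mu * (1 + sqrt(2)*cos(delta + 2*pi*n/3)), n = 0, 1, 2
masses = [(mu_scale * (1 + sqrt(2) * cos(delta + 2*pi*n/3)))**2
          for n in range(3)]
print(sorted(masses))   # ~[0.51, 105.7, 1777.0] MeV: electron, muon, tau

# The Koide ratio is exactly 2/3 for ANY mu and delta: the three
# cosines sum to zero and their squares sum to 3/2
Q = sum(masses) / sum(sqrt(m) for m in masses)**2
print(Q)                # 0.666666...
```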
This particular coincidence is much more amenable than the previous one to explanation in terms of physics as we understand it.
3) Cosmological constant is geometric mean of Planck mass and mass of the universe
This is the first of the cosmological coincidences mentioned in the video. I have a much weaker sense of what is a superior or even a reasonable explanation where these are concerned (and much less to say about them). But I can say that all of them have been the subject of more-or-less mainstream research.
This one apparently originates with Hsu and Zee 2004, although it was anticipated by Cohen, Kaplan, and Nelson in 1998. In both cases, the idea is that the dark energy is in some way due to a quantum gravitational effect that involves both extreme infrared physics (cosmological scale) and extreme ultraviolet physics (Planck scale). For example, it might reflect a specific coupling between each local region and everything else within its cosmological horizon.
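For concreteness, here is the version of the numerology I'm familiar with, in natural units: the dark energy scale ρ_Λ^(1/4) ~ 2 meV is roughly the geometric mean of the Planck mass (~10^28 eV) and the Hubble scale (~10^-33 eV), with the Hubble scale standing in for the "mass of the universe" side of the relation. A rough order-of-magnitude check (my reading of the claim, not necessarily the video's exact formulation):

```python
from math import sqrt

# Rough scales in eV, natural units (hbar = c = 1)
m_planck = 1.22e28    # Planck mass, ~1.22e19 GeV
h0       = 1.5e-33    # Hubble parameter today, as an energy
rho_de4  = 2.3e-3     # (dark energy density)**(1/4), ~2.3 meV

print(sqrt(m_planck * h0))   # ~4e-3 eV, same order as rho_de4
```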
In this regard, I will mention an unusual concept of "Newtonian dark energy" that I ran across, which would derive the effect from the mass of the entire universe.
4) MOND acceleration constant proportional to square root of cosmological constant
The MOND version of modified gravity, another popular topic here, originates with Mordehai Milgrom, and this particular coincidence was spotted by Milgrom himself right at the beginning.
In his Scholarpedia review article on MOND, he proposes two interpretations. It could mean that the MOND deviation from Newtonian gravity is because "the state of the Universe at large strongly enters local dynamics of small systems" (e.g. as discussed regarding coincidence #3 in this list); or it could mean that "the same fundamental parameter enters both cosmology, as a cosmological constant, and local dynamics", as a critical acceleration.
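Numerically, the coincidence is that Milgrom's acceleration constant a₀ ≈ 1.2e-10 m/s² is of the same order as c²√Λ, and as cH₀, up to factors of order 2π. A rough check:

```python
from math import sqrt, pi

c   = 2.998e8     # speed of light, m/s
Lam = 1.1e-52     # cosmological constant, 1/m^2
H0  = 2.2e-18     # Hubble parameter, 1/s
a0  = 1.2e-10     # Milgrom's constant, m/s^2

print(c**2 * sqrt(Lam))    # ~9.4e-10 m/s^2
print(c * H0)              # ~6.6e-10 m/s^2
print(c * H0 / (2 * pi))   # ~1.05e-10 m/s^2, close to a0
```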
5) Dark matter density is currently same order of magnitude as dark energy density
This one is due to Paul Steinhardt, the co-inventor (and now major critic) of cosmic inflation. The point is that dark energy appears to remain almost constant for billions of years at a time, whereas the dark matter density should dilute with the expansion of the universe.
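To see why this is surprising, compare how the two densities scale with the cosmic scale factor a: matter dilutes as a⁻³ while a cosmological constant does not dilute at all, so their ratio is of order one only in a narrow window around the present. A sketch using roughly the current density fractions:

```python
# Density parameters today (a = 1), roughly Planck-2018 values
omega_dm, omega_de = 0.26, 0.69

for a in [0.01, 0.1, 1.0, 10.0]:
    rho_dm = omega_dm * a**-3   # dark matter dilutes with expansion
    rho_de = omega_de           # cosmological constant stays fixed
    print(f"a = {a:5}: dm/de = {rho_dm / rho_de:.1e}")
# dm/de ~ 4e5 at a = 0.01, ~0.4 today, ~4e-4 at a = 10
```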
I find Steinhardt's own exposition (in "A Quintessential Introduction to Dark Energy") interesting. He regards it as an addition to the original cosmological constant problem - which is, why is the dark energy there, and why is it so small. The usual view is that dark energy is vacuum energy; a common view, starting with Weinberg and implemented within string theory by Bousso and Polchinski, is that the small value is anthropically determined (any bigger and the universe would fly apart before galaxies, or even atoms, could form).
On this view, dark matter and dark energy are quite unrelated - dark matter is just one kind of particle, dark energy is the combined vacuum energy of all the fields - so the first part of this coincidence problem is, why would they ever exhibit a similar energy density at all. But there's a second part, which in the literature is called the "why now" problem - why are they about the same, in this cosmic epoch of atoms, humans, and galaxies, rather than in one of the fleeting epochs of the very early universe, or the drawn-out epochs of the far future?
Steinhardt acknowledges that it sounds like the anthropic principle could be at work. But he would prefer a non-anthropic explanation. So first, he adopts what is probably the second most common theory of dark energy, that it is due to a new scalar field, conventionally called a "quintessence" field. Then he promotes a particular kind of quintessence, kinetic quintessence or "k-essence", which tracks the radiation density of the universe, until the cosmic epoch of "matter domination", which is when the universe has cooled and diluted enough for galaxies to form. At that point, k-essence begins to behave like the dark energy that we observe. Thus k-essence answers "why now" in a non-anthropic way: "Cosmic acceleration and human evolution are both linked to the onset of matter-domination."
Alternatively, one may model dark energy as quintessence, but settle for an anthropic explanation of "why now". As an example of this, I like "A New Perspective on Cosmic Coincidence Problems" (Arkani-Hamed et al, 2000). This is a relatively obscure paper, but I like it because it is a maximal example - they are explaining the "triple coincidence" of radiation, matter, and dark energy densities in this epoch, and even a fivefold coincidence if you include neutrino and baryon energy densities too.
6) Flatness of spacetime after the big bang
This is normally called the flatness problem, but Hossenfelder calls it the curvature problem. Either way, the problem is that the large-scale curvature of the universe is empirically negligible in this epoch; but space-time dynamics should amplify curvature with time, so (running the clock backwards) the deviations from flatness in the early universe must have been extremely small. What was the cause of those extremely flat initial conditions?
The conventional answer to that question is, inflation. It's there in the title of Alan Guth's 1981 paper introducing the inflationary universe: "A possible solution to the horizon and flatness problems". I believe Guth's idea is that, rather than looking for an answer in the mysteries of primordial quantum gravity, exponentially accelerated expansion can remove the need for finetuning, since the dynamical period is extremely short. All you need is a Planck-size patch of flatness somewhere in your initial conditions, and then boom, inflation can blow it up to astronomical size, before it has much of a chance to curve.
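A back-of-the-envelope version of the fine-tuning, under textbook assumptions (I'm ignoring the matter era and other refinements): the curvature term scales as |Ω − 1| ∝ 1/(aH)², which grows like a² during radiation domination, while N e-folds of inflation (H roughly constant, a ∝ e^(Ht)) suppress it by e^(−2N):

```python
from math import exp

# |Omega - 1| grows roughly like a^2 during radiation domination
omega_dev_today = 0.01    # rough observational bound today
a_early = 1e-28           # scale factor at a very early epoch (very rough)

print(omega_dev_today * a_early**2)   # ~1e-58 flatness needed without inflation

# N e-folds of inflation shrink |Omega - 1| by e^(-2N)
N = 60
print(exp(-2 * N))   # ~8e-53: enough even starting from |Omega - 1| ~ 1
```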
There is a debate about whether inflation removes the need for finetuning or not. I haven't examined the arguments, so I'll just mention a bit of history. The flatness problem was introduced by Robert Dicke in a 1969 lecture, and received very little attention for ten years - but Alan Guth was in the audience when Dicke presented it again in 1978. It's an example of how thinking about a problem that others don't even recognize as a problem can pay off in the long run.
In the article by Steinhardt that I quoted above, he describes the cosmological constant coincidence problem (#5 in Hossenfelder's list) as a generalization of the flatness problem (#6). I don't quite see it, but thought I'd mention it.
7) Metastability of the standard model vacuum
Finally we're back to particle physics, although this is a particle physics topic with cosmic implications. The key idea is that of true vacuum versus false vacuum in quantum field theory. The true vacuum should be the ground state, the lowest-energy state; no particles present, but possibly a nonzero "vacuum expectation value" (VEV) in some of the fields. This phenomenon of a nonzero VEV in a vacuum state is put to use in the Higgs mechanism, in order to break symmetries and add mass to massless particles... But one may also have a vacuum like this which is nonetheless not the lowest-energy state - a different selection of VEVs may have lower energy. There is therefore a finite probability for a small patch of space to tunnel from one configuration of field VEVs to another, and once it does this, energy minimization will favor the spread of the new vacuum state.
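A toy illustration of the false/true vacuum idea (not the actual standard model potential): a double-well potential with a small tilt has two local minima, and the higher one is a metastable "false vacuum", separated from the true vacuum by a barrier it can only cross by tunneling.

```python
import numpy as np

# Toy potential: a double well plus a small linear tilt
def V(phi):
    return 0.25 * (phi**2 - 1.0)**2 + 0.1 * phi

phi = np.linspace(-2, 2, 100001)
v = V(phi)

# Find the two local minima numerically
is_min = (v[1:-1] < v[:-2]) & (v[1:-1] < v[2:])
for m in phi[1:-1][is_min]:
    print(f"phi = {m:+.3f}, V = {V(m):+.4f}")
# phi ~ -1.05 (V ~ -0.10) is the true vacuum; phi ~ +0.95 (V ~ +0.10)
# is the false vacuum, stable against small fluctuations but not tunneling
```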
This scenario of vacuum decay has achieved some pop-science notoriety in recent decades, as "the ultimate ecological catastrophe". Guth's original model of inflation ends with an inflationary false vacuum decaying into the stabler vacuum of the current universe. (Bousso and Polchinski's mechanism for producing a small cosmological constant also involves a series of similar transitions, altering the brane flux in the compact extra dimensions of string theory, as the early universe expands.)
At some point in the 1980s, physicists also began to study whether the simple potential of the standard model's electroweak sector (where the real-world Higgs mechanism is at work), or slight extensions of it, might also develop false vacua. Initially this was a way to exclude certain beyond-standard-model possibilities: e.g. new generations of massive fermions could be ruled out if they implied that our current vacuum could not have survived this long.
But by the early 21st century, as the range of empirically possible values for the mass of the Higgs boson grew narrower, and with the mass of the very heavy top quark known since 1995, the possibility that we might actually be living in a very long-lived false vacuum (one with a half-life of astronomical duration) became more and more realistic. And after 2012, when the Higgs boson had finally been observed and its mass determined, it appeared that the parameters of the standard model in our universe are such that our vacuum is right on the edge of instability. It is either a true vacuum parametrically very close to being a false vacuum, or vice versa.
I recommend "Investigating the near-criticality of the Higgs boson" (Buttazzo et al, 2013) as the main technical reference on this topic.
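To illustrate what near-criticality means quantitatively, here is a rough one-loop renormalization-group sketch, with couplings at the top mass taken (rounded) from that paper's fits. At one loop the Higgs quartic coupling runs negative around 10^9-10^10 GeV; the full two-loop calculation puts the instability scale somewhat higher, around 10^10-10^11 GeV:

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-loop SM beta functions for (lambda, y_t, g3, g2, g1),
# with t = ln(mu/m_t) and g1 in SM (non-GUT) normalization
def betas(t, y):
    lam, yt, g3, g2, g1 = y
    k = 1 / (16 * np.pi**2)
    b_lam = k * (24*lam**2 - 6*yt**4
                 + (3/8)*(2*g2**4 + (g2**2 + g1**2)**2)
                 + lam*(12*yt**2 - 9*g2**2 - 3*g1**2))
    b_yt  = k * yt * (4.5*yt**2 - 8*g3**2 - 2.25*g2**2 - (17/12)*g1**2)
    b_g3  = k * (-7) * g3**3
    b_g2  = k * (-19/6) * g2**3
    b_g1  = k * (41/6) * g1**3
    return [b_lam, b_yt, b_g3, b_g2, b_g1]

# Approximate MS-bar couplings at mu = m_t ~ 173 GeV
y0 = [0.127, 0.937, 1.167, 0.648, 0.359]

sol = solve_ivp(betas, [0, 40], y0, dense_output=True, rtol=1e-8)
for t in np.linspace(0, 40, 9):
    mu, lam = 173 * np.exp(t), sol.sol(t)[0]
    print(f"mu = {mu:9.2e} GeV   lambda = {lam:+.4f}")
# lambda crosses zero on the way to the Planck scale, signalling that
# the electroweak vacuum is not the absolute minimum of the potential
```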