"The 7 Strangest Coincidences in the Laws of Nature" (S. Hossenfelder)

In summary, "The 7 Strangest Coincidences in the Laws of Nature" by S. Hossenfelder explores seven remarkable coincidences in physics that demonstrate the fine-tuning of the universe. These coincidences highlight the unlikely values and relationships between fundamental constants and forces, suggesting that the conditions for life and the structure of the universe are intricately connected. Hossenfelder examines phenomena such as the strengths of gravity and electromagnetism, the mass of elementary particles, and the cosmic microwave background radiation, raising questions about their origins and implications for our understanding of reality. Through these examples, the work emphasizes the mystery surrounding the laws of nature and the ongoing quest for deeper explanations.
  • #1
mitchell porter


I appreciate Sabine Hossenfelder's latest video and thought I would assemble some references. There's a mix of particle physics and cosmology topics. In general, the cosmological coincidences she mentions are mainstream research topics, whereas the particle physics coincidences are not.

1) Proton/electron mass ratio equals ##6\pi^5##

This is the coincidence on her list that is most decisively dismissed as just a coincidence, because these two particles obtain their masses in different ways (the electron from the Higgs mechanism, the proton from quarks and gluons).
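For concreteness, here is the claimed relation, with the measured (CODATA/PDG) ratio alongside it; the two numbers agree to about 2 parts in 100,000, which is far outside the experimental uncertainty:

$$\frac{m_p}{m_e} \overset{?}{=} 6\pi^5 \approx 1836.1181\,, \qquad \left(\frac{m_p}{m_e}\right)_{\rm measured} \approx 1836.1527\,.$$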

But back in 1951, when this was first reported in a paper by Friedrich Lenz that is one of the shortest of all time, neither of those things was known.

The best-known attempt to explain it (still very obscure) is due to Armand Wyler, who was also explaining the fine-structure constant with a somewhat more complicated formula. Wyler's work was published in the journal of the French Academy of Sciences, and actually got him invited to Princeton's IAS for a while. I cannot find his main paper "Les groupes des potentiels de Coulomb et de Yukawa" anywhere online, but his unpublished notes have survived. He obtained his formulas from quotient spaces associated with the conformal group, an extension of the Poincaré group of special relativity. Using the conformal group arguably made him ahead of his time, but no one could find a physical interpretation of his calculations.

That all happened around 1971. In the 1970s the standard model came together, including QCD and the Higgs mechanism, and the associated view that the masses of electron and proton derive from unrelated effects. Fifty years further on, one might say that the standard model parameters all derive from the moduli of a single string geometry, and so maybe this formula has a very slim chance of not being a coincidence. But that would require some new kind of stringy implementation of the standard model. So, like the mainstream, I place this first coincidence at the bottom of the list, in terms of plausibility.

2) Koide formula

The subject of hundreds of comments in this sub-forum, Koide's formula first appeared as a corollary of a preon (sub-quark) model that Koide worked on. But as mentioned, we now see the masses of the electron and the other fundamental fermions as being due to the Higgs mechanism, and Koide later switched to Higgs-based explanations.
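For reference, the relation itself, plus a quick numerical check in Python with current PDG central values for the charged-lepton masses (a minimal sketch: the mass values are hard-coded inputs and their uncertainties are ignored; the ratio comes out within about one part in 10^5 of 2/3):

$$Q \equiv \frac{m_e + m_\mu + m_\tau}{\left(\sqrt{m_e}+\sqrt{m_\mu}+\sqrt{m_\tau}\right)^2} = \frac{2}{3}$$

[CODE]
# Quick numerical check of Koide's relation with PDG central values (MeV).
# The masses are treated as fixed inputs; experimental uncertainties are ignored here.
from math import sqrt

m_e, m_mu, m_tau = 0.51099895, 105.6583755, 1776.86

Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2
print(f"Q = {Q:.6f}   (Koide value 2/3 = {2/3:.6f})")
[/CODE]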

From a contemporary viewpoint, this is a relationship among couplings to the Higgs field (yukawa couplings, or "yukawas" for short). Relationships among yukawas are a common thing in mainstream model-building, but this one involves square roots of yukawas; highly unusual, but still not impossible. What's fatal for the Koide formula, in the eyes of many particle physicists, is that it is a relationship among "pole masses" at different scales, whereas the symmetry mechanisms that can produce relations among yukawas pertain to "running masses" at the same scale.

Koide's compatriot Yukinari Sumino did propose a mechanism whereby a symmetry could lead to an exact relation among pole masses, but Koide himself is the only person who has built on Sumino's ideas. Koide is also the author of the vast majority of the actual field-theoretic models that could explain the original relationship. (Offhand, the only exception I can think of is due to Ernest Ma.)

However, numerous generalizations of the Koide formula have been proposed, e.g. extending it to other triplets of particles. The most comprehensive such extension is due to @arivero, and is discussed in the thread on "Koide sum rules". One of its ingredients is an intriguing refinement of the original formula due to @CarlB, in which the formula is obtained by taking the trace of a circulant matrix. This refined formula has two parameters, a mass scale and a phase angle. The significance of the phase angle is discussed in some papers by Piotr Zenczykowski.
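As a concrete illustration of that two-parameter refinement, here is a sketch (Python) of the parametrization usually associated with it, in which the square roots of the charged-lepton masses follow the eigenvalue pattern of a 3x3 circulant matrix, ##\sqrt{m_k} = \mu\,(1 + \sqrt{2}\cos(\delta + 2\pi k/3))##. The numerical values of the mass scale and phase angle below are approximate fitted values quoted in the literature, used here only for illustration; the Koide ratio of 2/3 comes out automatically, because the three cosines sum to zero:

[CODE]
# Sketch of the circulant-matrix parametrization of the charged-lepton masses:
#   sqrt(m_k) = mu * (1 + sqrt(2) * cos(delta + 2*pi*k/3)),  k = 0, 1, 2
# mu and delta are approximate fitted values (assumptions, for illustration only).
from math import sqrt, cos, pi

mu = 17.716      # MeV^(1/2), overall mass scale
delta = 0.2222   # phase angle in radians (close to 2/9)

masses = [(mu * (1 + sqrt(2) * cos(delta + 2 * pi * k / 3)))**2 for k in range(3)]
print("masses (MeV):", [round(m, 3) for m in masses])   # ~tau, ~electron, ~muon

# The Koide ratio is exactly 2/3 for any mu and delta (the three cosines sum to zero).
Q = sum(masses) / sum(sqrt(m) for m in masses)**2
print("Koide ratio:", Q)
[/CODE]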

This particular coincidence is much more amenable to explanation in terms of physics as we understand it than the previous one.

3) Cosmological constant is geometric mean of Planck mass and mass of the universe

This is the first of the cosmological coincidences mentioned in the video. I have a much weaker sense of what is a superior or even a reasonable explanation where these are concerned (and much less to say about them). But I can say that all of them have been the subject of more-or-less mainstream research.

This one apparently originates with Hsu and Zee 2004, although it was anticipated by Cohen, Kaplan, Nelson 1998. In both cases, the idea is that the dark energy is in some way due to a quantum gravitational effect that involves both extreme infrared physics (cosmological scale) and extreme ultraviolet physics (Planck scale). For example, that it reflects a specific coupling between each local region, and everything else within its cosmological horizon.
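One common quantitative version of this coincidence, as an order-of-magnitude sketch in natural units (using the Hubble scale as the cosmological input; the numbers below are standard round figures, not precision values, and are my own hard-coded assumptions): the fourth root of the observed dark-energy density is within a factor of a few of the geometric mean of the Planck mass and the Hubble constant.

[CODE]
# Order-of-magnitude check (natural units, all energies in eV).
# Assumed round numbers: Planck mass ~1.22e28 eV, Hubble constant H0 ~1.4e-33 eV,
# observed dark-energy scale rho_Lambda^(1/4) ~ 2.3e-3 eV.
from math import sqrt

M_planck = 1.22e28     # eV
H0       = 1.4e-33     # eV (~67 km/s/Mpc)
de_scale = 2.3e-3      # eV (fourth root of the observed dark-energy density)

geometric_mean = sqrt(M_planck * H0)
print(f"sqrt(M_Planck * H0)        ~ {geometric_mean:.1e} eV")   # ~4e-3 eV
print(f"observed dark-energy scale ~ {de_scale:.1e} eV")          # same order of magnitude
[/CODE]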

In this regard, I will mention an unusual concept of "Newtonian dark energy" that I ran across, which would derive the dark energy from the mass of the entire universe.

4) MOND acceleration constant proportional to square root of cosmological constant

The MOND version of modified gravity, another popular topic here, originates with Mordehai Milgrom, and this particular coincidence was spotted by Milgrom himself right at the beginning.

In his Scholarpedia review article on MOND, he proposes two interpretations. It could mean that the MOND deviation from Newtonian gravity is because "the state of the Universe at large strongly enters local dynamics of small systems" (e.g. as discussed regarding coincidence #3 in this list); or it could mean that "the same fundamental parameter enters both cosmology, as a cosmological constant, and local dynamics", as a critical acceleration.
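To put rough numbers on the coincidence, here is a minimal sketch (Python) with standard approximate values; Milgrom's observation is usually quoted in the form ##2\pi a_0 \approx cH_0 \approx c^2\sqrt{\Lambda/3}##, and the inputs below are round figures I have assumed, not precision data:

[CODE]
# Rough numerical check of the MOND coincidence a0 ~ c^2 sqrt(Lambda/3) / (2*pi), in SI units.
from math import sqrt, pi

c   = 3.0e8       # m/s
Lam = 1.1e-52     # 1/m^2, cosmological constant
a0  = 1.2e-10     # m/s^2, MOND acceleration scale

print(f"c^2 * sqrt(Lambda/3) / (2*pi) = {c**2 * sqrt(Lam/3) / (2*pi):.2e} m/s^2")
print(f"Milgrom's a0                  = {a0:.2e} m/s^2")
[/CODE]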

5) Dark matter density is currently same order of magnitude as dark energy density

This one is due to Paul Steinhardt, the co-inventor (and now major critic) of cosmic inflation. The point is that dark energy appears to remain almost constant for billions of years at a time, whereas the dark matter density should dilute with the expansion of the universe.

I find Steinhardt's own exposition (in "A Quintessential Introduction to Dark Energy") interesting. He regards it as an addition to the original cosmological constant problem - which is, why is the dark energy there, and why is it so small. The usual view is that dark energy is vacuum energy; a common view, starting with Weinberg and implemented within string theory by Bousso and Polchinski, is that the small value is anthropically determined (any bigger and the universe would fly apart before galaxies, or even atoms, could form).

On this view, dark matter and dark energy are quite unrelated - dark matter is just one kind of particle, dark energy is the combined vacuum energy of all the fields - so the first part of this coincidence problem is, why would they ever exhibit a similar energy density at all. But there's a second part, which in the literature is called the "why now" problem - why are they about the same, in this cosmic epoch of atoms, humans, and galaxies, rather than in one of the fleeting epochs of the very early universe, or the drawn-out epochs of the far future?
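To see how special "now" is, here is a toy illustration (Python, flat ΛCDM with rough present-day density fractions assumed as inputs; a is the scale factor, a = 1 today). The dark-matter density scales as a^-3 while the dark-energy density stays constant, so their ratio sweeps through many orders of magnitude and is of order one only in a narrow window around the present epoch:

[CODE]
# Toy illustration of the "why now" problem: ratio of dark-matter to dark-energy density
# as a function of scale factor a (a = 1 today). Rough present-day fractions assumed.
Omega_dm, Omega_de = 0.26, 0.69

for a in [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]:
    ratio = (Omega_dm * a**-3) / Omega_de
    print(f"a = {a:7.3f}   rho_dm / rho_de = {ratio:.3e}")
[/CODE]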

Steinhardt acknowledges that it sounds like the anthropic principle could be at work. But he would prefer a non-anthropic explanation. So first, he adopts what is probably the second most common theory of dark energy, that it is due to a new scalar field, conventionally called a "quintessence" field. Then he promotes a particular kind of quintessence, kinetic quintessence or "k-essence", which tracks the radiation density of the universe, until the cosmic epoch of "matter domination", which is when the universe has cooled and diluted enough for galaxies to form. At that point, k-essence begins to behave like the dark energy that we observe. Thus k-essence answers "why now" in a non-anthropic way: "Cosmic acceleration and human evolution are both linked to the onset of matter-domination."

Alternatively, one may model dark energy as quintessence, but settle for an anthropic explanation of "why now". As an example of this, I like "A New Perspective on Cosmic Coincidence Problems" (Arkani-Hamed et al, 2000). This is a relatively obscure paper, but I like it because it is a maximal example - they are explaining the "triple coincidence" of radiation, matter, and dark energy densities in this epoch, and even a fivefold coincidence if you include neutrino and baryon energy densities too.

6) Flatness of spacetime after the big bang

This is normally called the flatness problem, but Hossenfelder calls it the curvature problem. Either way, the problem is that the large-scale curvature of the universe is empirically negligible in this epoch; but space-time dynamics should amplify curvature with time, so (running the clock backwards) the deviations from flatness in the early universe must have been extremely small. What was the cause of those extremely flat initial conditions?
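The usual way to quantify this is a back-of-the-envelope estimate (sketched below in Python, with rough assumed epochs): ##|\Omega - 1| \propto 1/(aH)^2##, which grows roughly like ##a^2## during radiation domination and like ##a## during matter domination, so for the curvature contribution to be at most ~1% today it had to be fantastically small at early times.

[CODE]
# Back-of-envelope flatness arithmetic: |Omega - 1| is proportional to 1/(aH)^2,
# growing ~a^2 during radiation domination and ~a during matter domination.
# Rough assumed numbers: matter-radiation equality at a_eq ~ 3e-4, BBN at a ~ 3e-9,
# the Planck epoch at a ~ 1e-32, and |Omega - 1| < 0.01 today (dark energy ignored).
curv_today = 0.01
a_eq = 3e-4

def curvature_at(a):
    """Scale |Omega - 1| back from today to scale factor a."""
    if a >= a_eq:                              # matter era: |Omega - 1| ~ a
        return curv_today * a
    # radiation era: an extra factor (a/a_eq)^2 on top of the matter-era growth
    return curv_today * a_eq * (a / a_eq)**2

for label, a in [("matter-radiation equality", 3e-4), ("BBN", 3e-9), ("Planck epoch", 1e-32)]:
    print(f"|Omega - 1| at {label:25s} had to be < {curvature_at(a):.1e}")
[/CODE]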

The conventional answer to that question is, inflation. It's there in the title of Alan Guth's 1981 paper introducing the inflationary universe: "A possible solution to the horizon and flatness problems". I believe Guth's idea is that, rather than looking for an answer in the mysteries of primordial quantum gravity, exponentially accelerated expansion can remove the need for finetuning, since the dynamical period is extremely short. All you need is a Planck-size patch of flatness somewhere in your initial conditions, and then boom, inflation can blow it up to astronomical size, before it has much of a chance to curve.

There is a debate about whether inflation removes the need for finetuning or not. I haven't examined the arguments, so I'll just mention a bit of history. The flatness problem was introduced in a 1969 lecture by Robert Dicke, and received very little attention for ten years - but one of the attendees was Alan Guth. It's an example of how thinking about a problem, that others don't even recognize as a problem, can pay off in the long run.

In the article by Steinhardt that I quoted above, he describes the cosmological constant coincidence problem (#5 in Hossenfelder's list) as a generalization of the flatness problem (#6). I don't quite see it, but thought I'd mention it.

7) Metastability of the standard model vacuum

Finally we're back to particle physics, although this is a particle physics topic with cosmic implications. The key idea is that of true vacuum versus false vacuum in quantum field theory. The true vacuum should be the ground state, the lowest-energy state: no particles present, but possibly a nonzero "vacuum expectation value" (VEV) in some of the fields. This phenomenon of a nonzero VEV in a vacuum state is put to use in the Higgs mechanism, in order to break symmetries and add mass to massless particles... But one may also have a vacuum like this which is nonetheless not the lowest-energy state - a different selection of VEVs may have lower energy. There is therefore a finite probability for a small patch of space to tunnel from one configuration of field VEVs to another, and once it does this, energy minimization will favor the spread of the new vacuum state.

This scenario of vacuum decay has achieved some pop-science notoriety in recent decades, as "the ultimate ecological catastrophe". Guth's original model of inflation ends with an inflationary false vacuum decaying into the stabler vacuum of the current universe. (Bousso and Polchinski's mechanism for producing a small cosmological constant also involves a series of similar transitions, altering the brane flux in the compact extra dimensions of string theory, as the early universe expands.)

At some point in the 1980s, physicists also began to study whether the simple potential of the standard model's electroweak sector (where the real-world Higgs mechanism is at work), or slight extensions of it, might also develop false vacua. Initially this was a way to exclude certain beyond-standard-model possibilities - e.g. new generations of massive fermions could be ruled out if they would have made our current vacuum too short-lived to have survived this long.

But by the early 21st century, as the range of empirically possible values for the mass of the Higgs boson grew narrower, and with the mass of the very heavy top quark known since 1995, the possibility that we might actually be living in a very long-lived false vacuum (one with a half-life of astronomical duration) became more and more realistic. And after 2012, when the Higgs boson had finally been observed and its mass determined, it appeared that the parameters of the standard model in our universe are such that our vacuum is right on the edge of instability: it is either a true vacuum parametrically very close to being a false vacuum, or vice versa.

I recommend "Investigating the near-criticality of the Higgs boson" (2013) as the main technical reference on this topic.
 
  • #2
Oops, I posted that a bit prematurely. Fortunately I'm almost done...

This is the last of Sabine Hossenfelder's 7 coincidences, and it may well be the most important discovery of the Large Hadron Collider.

The usual interpretation of the data concerning the Higgs is that it is a problem for the idea of "naturalness". It was believed that a light Higgs would be made heavy by heavy virtual particles, unless there was a new symmetry that made those virtual corrections cancel out. This should involve new particles detectable at the LHC. No such particles have been seen, and so opinion has turned against the idea that the Higgs is "natural". Instead, it's believed that something must be finetuning its mass, perhaps the anthropic principle. One sometimes sees papers modeling the anthropic finetuning of the Higgs and the cosmological constant together.

This scenario, while logically possible, does not use all the available clues. In particular, it does not seem to use the near-criticality of the standard model vacuum very much. The Higgs boson mass was in fact predicted in 2009 (by Shaposhnikov and Wetterich) using a less-popular model of quantum gravity (asymptotic safety). Shouldn't one be interested, for example, in whether the kind of ultraviolet boundary conditions they employed, can be created by some form of self-organized criticality? There are a few papers in that direction, the general idea is mentioned in the 2013 paper above, but it deserves a lot more attention.

In conclusion, I thank Sabine Hossenfelder for putting together such an inspiring collection of clues to what really lies beyond the standard model.
 
  • #3
mitchell porter said:
Oops, I posted that a bit prematurely. Fortunately I'm almost done...

This is the last of Sabine Hossenfelder's 7 coincidences, and it may well be the most important discovery of the Large Hadron Collider.

The usual interpretation of the data concerning the Higgs is that it is a problem for the idea of "naturalness". It was believed that a light Higgs would be made heavy by heavy virtual particles, unless there was a new symmetry that made those virtual corrections cancel out. This should involve new particles detectable at the LHC. No such particles have been seen, and so opinion has turned against the idea that the Higgs is "natural". Instead, it's believed that something must be finetuning its mass, perhaps the anthropic principle. One sometimes sees papers modeling the anthropic finetuning of the Higgs and the cosmological constant together.

This scenario, while logically possible, does not use all the available clues. In particular, it does not seem to use the near-criticality of the standard model vacuum very much. The Higgs boson mass was in fact predicted in 2009 (by Shaposhnikov and Wetterich) using a less-popular model of quantum gravity (asymptotic safety). Shouldn't one be interested, for example, in whether the kind of ultraviolet boundary conditions they employed, can be created by some form of self-organized criticality? There are a few papers in that direction, the general idea is mentioned in the 2013 paper above, but it deserves a lot more attention.

In conclusion, I thank Sabine Hossenfelder for putting together such an inspiring collection of clues to what really lies beyond the standard model.
Regarding the fine-tuning issue.
I remember reading the Hebrew translation of "The Whole Shebang", where he wrote that the universe might be tuned by a mad professor... I'll try to get to the book. It's the popular book on cosmology by Timothy Ferris.
 
  • #4
I know that most of these are a stretch, but when one says that [##\sqrt{M_{\text{Planck}}R_{\text{Universe}}}=\Lambda##] the radius of the universe is not expected to be constant, right? Is it arguing for a non-constant ##\Lambda##?
 
  • #5
A nice insightful treatment with good references.

With regard to the first "coincidence":

The hypothesized value of the proton-electron mass ratio is 1836.11810871168872.

The Particle Data Group value of the electron mass is: [screenshot of the PDG listing]

The Particle Data Group value of the proton mass is: [screenshot of the PDG listing]

The empirically measured proton-electron mass ratio is:
1836.152673425258506

So, the observed value exceeds the hypothesized value by 0.034564713569786.

The difference between the observed value and the hypothesized value is about five orders of magnitude greater than the uncertainty in the measured value of the proton-electron mass ratio and the hypothesized value is exact. So, this isn't really even a coincidence. It is just a "near miss" that is definitively ruled out in the form originally suggested.
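Here is a quick script (Python) to check that arithmetic; the measured ratio and its uncertainty are the CODATA figures, hard-coded here as inputs:

[CODE]
# Check of the 6*pi^5 "coincidence" arithmetic.
# Measured proton-electron mass ratio and its uncertainty: CODATA values, hard-coded.
from math import pi

ratio_measured, ratio_uncertainty = 1836.15267343, 1.1e-7
ratio_hypothesized = 6 * pi**5

diff = ratio_measured - ratio_hypothesized
print(f"6*pi^5             = {ratio_hypothesized:.8f}")
print(f"measured ratio     = {ratio_measured:.8f}")
print(f"difference         = {diff:.8f}")
print(f"difference / sigma = {diff / ratio_uncertainty:.1e}")   # ~3e5, i.e. about five orders of magnitude
[/CODE]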

Of course, that is with the pole mass of the electron. Maybe there is some energy scale at which the running of the electron mass produces the hypothesized value, but unless that scale is very close to the proton mass, that's pretty meaningless.

The running of the electron mass is at least in the right direction: it gets smaller at higher energy scales, which increases the proton-electron mass ratio, and the necessary tweak is only on the order of 0.00188% (vs. a 4.8% reduction at the roughly 90 GeV Z boson mass scale). So the running does help a little, but since it is roughly proportional to the log of the energy scale, it would probably overshoot by far at 938 MeV.
 
  • #6
ohwilleke said:
A nice insightful treatment with good references.

With regard to the first "coincidence":

The hypothesized value of the proton-electron mass ratio is 1836.11810871168872.

The Particle Data Group value of the electron mass is: [attachment: PDG listing]
The Particle Data Group value of the proton mass is: [attachment: PDG listing]
The empirically measured proton-electron mass ratio is:
1836.152673425258506

So, the observed value exceeds the hypothesized value by 0.034564713569786.

The difference between the observed value and the hypothesized value is about five orders of magnitude greater than the uncertainty in the measured value of the proton-electron mass ratio and the hypothesized value is exact. So, this isn't really even a coincidence. It is just a "near miss" that is definitively ruled out in the form originally suggested.

Of course, that is with the pole mass of the electron. Maybe there is some energy scale at which the running of the electron mass produces the hypothesized value, but unless that scale is very close to the proton mass, that's pretty meaningless.
I think that's the whole point. There are some deviations, but maybe those deviations won't be there at higher energy, or are due to some broken symmetry. However, numerology has been tried several times in the history of physics, even by Dirac, and it has rarely (if ever) produced anything meaningful.
 
  • #7
pines-demon said:
I think that's the whole point. There are some deviations, but maybe those deviations won't be there at higher energy, or are due to some broken symmetry. However, numerology has been tried several times in the history of physics, even by Dirac, and it has rarely (if ever) produced anything meaningful.
I don't know. A lot of coincidences involve quantities that are not very precisely known, where it is impossible to tell whether the predicted value holds or not.

For example, quantities related to dark energy and the cosmological constant, involved in coincidences #3, #4 and #5, are known to a precision of only about one part in 36 (a bit better than ± 3%), which leaves much more room for empirically unresolvable conjecture than the proton-electron mass ratio, where both masses are measured ultra-precisely.

In contrast, Koide's rule remains consistent to within less than 1 sigma with the latest experimental values of the charged lepton masses, and is actually a better fit to the measured values than it was when it was proposed 43 years ago, despite not having been modified or refined since then.

The latest tau lepton mass from the Belle II Collaboration, "Measurement of the τ-lepton mass with the Belle II experiment", arXiv:2305.19116 (May 30, 2023), is 1777.09 ± 0.136 MeV/c^2. This is a big improvement over the previous Belle II tau lepton mass measurement of 1777.28 ± 0.82 MeV/c^2 from August of 2020. This will eventually pull up the Particle Data Group value, which is currently 1776.86 ± 0.12 MeV/c^2. The new measurement is consistent with the Particle Data Group value at the 1.62 sigma level. The Particle Data Group value considers only measurements from 2014 and earlier. The new inverse-error-weighted PDG value should be about 1776.97 ± 0.11. The Koide's rule prediction is 1776.968 94 ± 0.000 07 MeV/c^2.

The Koide's rule prediction is consistent with the new measurement at the 0.89 sigma level and is consistent with the PDG value at the 0.91 sigma level. The Belle II result pulls the global average closer to the Koide's rule prediction, based upon a formula stated in 1981, and the latest Belle II measurement is also closer to the Koide's rule prediction than its previous, less precise measurement from August of 2020.
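Here is a short script (Python) that reproduces those numbers; the electron and muon masses are PDG central values, hard-coded, and the Koide prediction for the tau mass is obtained by solving the relation numerically (input uncertainties ignored, which is why no error bar is attached to the prediction here):

[CODE]
# Reproduce the Koide-rule tau-mass prediction and the quoted sigma comparisons.
from math import sqrt

m_e, m_mu = 0.51099895, 105.6583755   # PDG central values, MeV

def koide_Q(m_tau):
    return (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2

# Solve koide_Q(m_tau) = 2/3 by bisection; Q is increasing in m_tau on this interval.
lo, hi = 1000.0, 3000.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if koide_Q(mid) < 2/3:
        lo = mid
    else:
        hi = mid
m_tau_koide = 0.5 * (lo + hi)
print(f"Koide prediction for m_tau: {m_tau_koide:.3f} MeV")   # ~1776.97 MeV

for label, value, err in [("Belle II 2023", 1777.09, 0.136), ("PDG average", 1776.86, 0.12)]:
    print(f"{label}: {abs(value - m_tau_koide) / err:.2f} sigma from the Koide prediction")
[/CODE]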

Whether or not the original motivation for Koide's rule is correct (and there is pretty strong observational evidence against preons at this point, relative to what we had in 1981), the phenomenological rule seems quite solid, and there are multiple plausible ways one could explain it in a manner harmonious with existing Standard Model physics. The electron mass and muon mass are known very precisely, which is why the uncertainty in the Koide-rule-predicted tau lepton mass is so small. The global average measured value of the tau lepton mass has a precision of about one part in 10,000, which isn't as good as the proton mass and electron mass, but is much better than the cosmological constant.

The really tantalizing piece of Koide's rule is that a generalization of it to the quark masses comes pretty close as a first order estimate, and a pretty simple tweak to that generalization can get even closer and suggests a plausible physical mechanism for both the generalized rule for quarks and the original rule for charged leptons (basically dynamical balancing of Higgs Yukawas via W boson interactions). At that point it starts to get interesting as a hint of a potential breakthrough waiting to be made.
 
  • #8
Regarding #6, flatness, the extent to which the observable universe is flat is the subject of a pre-print released today. The best fit curvature measurement exceeds zero by about half of the uncertainty in the measurement, i.e. it is consistent with zero at roughly the 0.5 sigma level. The preprint and its abstract are as follows:

Javier de Cruz Perez, Chan-Gyung Park, Bharat Ratra "Updated observational constraints on spatially-flat and non-flat ΛCDM and XCDM cosmological models" arXiv:2404.19194

"We study 6 LCDM models, with 4 allowing for non-flat geometry and 3 allowing for a non-unity lensing consistency parameter AL. We also study 6 XCDM models with a dynamical dark energy density X-fluid with equation of state w. For the non-flat models we use two different primordial power spectra, Planck P(q) and new P(q). These models are tested against: Planck 2018 CMB power spectra (P18) and lensing potential power spectrum (lensing), and an updated compilation of BAO, SNIa, H(z), and fσ8 data [non-CMB data]. P18 data favor closed geometry for the LCDM and XCDM models and w<−1 (phantom-like dark energy) for the XCDM models while non-CMB data favor open geometry for the LCDM models and closed geometry and w>−1 (quintessence-like dark energy) for the XCDM models. When P18 and non-CMB data are jointly analyzed there is weak evidence for open geometry and moderate evidence for quintessence-like dark energy. Regardless of data used, AL>1 is always favored. The XCDM model constraints obtained from CMB data and from non-CMB data are incompatible, ruling out the 3 AL=1 XCDM models at >3σ. In the 9 models not ruled out, for the P18+lensing+non-CMB data set we find little deviation from flat geometry and moderate deviation from w=−1. In all 6 non-flat models (not ruled out), open geometry is mildly favored, and in all 3 XCDM+AL models (not ruled out) quintessence-like dark energy is moderately favored (by at most 1.6σ). In the AL=1 non-flat LCDM cases, we find for P18+lensing+non-CMB data Ωk=0.0009±0.0017 [0.0008±0.0017] for the Planck [new] P(q) model, favoring open geometry. The flat LCDM model remains the simplest (largely) observationally-consistent cosmological model. Our cosmological parameter constraints obtained for the flat LCDM model (and other models) are the most restrictive results to date (Abridged)."

This said, it is hard for me to see why a flat universe is viewed as "surprising" or a "coincidence". It pretty much follows from the fact that mass-energy distributions aren't terribly bumpy and that there are huge amounts of virtually empty space. Non-zero but small spatial curvature would be my null hypothesis coming to the issue cold, and knowing nothing about what observations actually show. And, the observational constraints on spatial curvature still aren't even all that terribly tight, even with new more refined observations.

Also, for what it is worth, Sabine Hossenfelder is herself pretty critical of cosmological inflation as a theory, and in particular, of the conventional wisdom justifications for its necessity, which she argues are greatly overstated.
 
  • #9
Re "7) Metastability of the standard model vacuum"

At the outset, it is worth noting that Sabine Hossenfelder is such a great critic of the concept of "naturalness" and the "hierarchy problem" in physics that she wrote an entire book-length diatribe against it, "Lost in Math" (2018).

It is also important to note what the situation in academia looked like, among physicists trying to predict the Higgs boson mass, right before it was first measured in 2012. The HEP phenomenology community had about as many predictions on the table as a March Madness bracket pool. They covered an immense range of values, at various levels of uncertainty, with almost every plausible value that wasn't already ruled out covered, usually with multiple proposed justifications at a given value.

Somebody, inevitably, had to reach the right result, whether or not any of them had sound reasoning for their proposals. The measured value of the Higgs boson mass, incidentally, which is 125.25 ± 0.17 GeV according to the Particle Data Group, was, and still is, at odds with the value preferred by a global electroweak physics fit, by roughly 25%. Several of the predictions were correct, and their justifications had nothing to do with each other.

Also, the metastability prediction flows from an extrapolation of the running of the Higgs vev and Higgs boson mass (equivalently, of the Higgs self-coupling) all the way up to roughly the GUT scale, something that is vastly in excess of anything we can test in an experiment or astronomical observation, even in the distant, distant future.

Now, the equations for the running of Standard Model constants, including the Higgs boson mass and Higgs vev, are in principle possible to determine exactly, directly from theory, without resort to any experimental measurements, and they have been worked out to a high enough order that they're pretty precise.

But, essentially everything in the Standard Model impacts the running of every physical constant in the Standard Model. So, if there is even a single particle missing from the Standard Model, the beta function for the running of the Higgs vev will be wrong, and at close to the GUT scale, it will be maximally wrong.

There is at least one particle, moreover, that is almost certainly missing: the graviton. Shaposhnikov and Wetterich have estimated that a vanilla graviton, if it does exist, would materially tweak the running of the Higgs vev in a way that is not negligible near the GUT scale. But no one else predicting metastability has rigorously considered the impact of this modification.

Likewise, if there are BSM dark matter particles, or there is a new particle involved in giving rise to neutrino masses, that would also tweak the metastability conclusion.

Sabine Hossenfelder and others have also observed, contrary to the inane hypothesis of "naturalness" (that Nature prefers dimensionless physical constants with values of the order of unity), that Nature actually seems to like physical constants that are right on the brink of throwing everything out of whack, and that are often very far from being of order unity (e.g. the weak force coupling constant, the gravitational coupling constant, and the cosmological constant). So the reality that the Higgs boson mass has the lowest value that doesn't actually lead to the collapse of the vacuum in the universe should be utterly unsurprising; it is exactly how Nature acts in other situations.
 
  • #10
Three comments:

(1) It has been known that m(p)/m(e) ≠ 6π^5 for 50 years. Does Dr. Hossenfelder know this? She should. Does she care? Apparently not.

(2) BF(π→eν)/BF(π→μν) = 1.2345 × 10^-4. How do you explain that?

(3) I once stayed at a hotel room where the room number was the same as the mass of the Λ baryon. Coincidence? I think not!
 
  • #11
Vanadium 50 said:
It has been known that m(p)/m(e) ≠ 6π^5 for 50 years. Does Dr. Hossenfelder know this? She should. Does she care? Apparently not.

It is like people who still insist that ##\alpha=1/137##.
 
  • #12
pines-demon said:
I know that most of these are a stretch, but when one says that [##\sqrt{M_{\text{Planck}}R_{\text{Universe}}}=\Lambda##] the radius of the universe is not expected to be constant, right? Is it arguing for a non-constant ##\Lambda##?
Yes.
 
  • #13
pines-demon said:
It is like people who still insist that ##\alpha=1/137##.
The symbol '=' means different things to different people. :cool:
 
  • #14
billtodd said:
The symbol '=' means different things to different people. :cool:
"It depends on what the meaning of 'is' is." -- Bill Clinton.
 
  • #15
Hornbein said:
"It depends on what the meaning of 'is' is." -- Bill Clinton.
'is' is just is, isn't it? :oldbiggrin: or should I have said ain't it?
 
  • #16
This is a rather esoteric list. If physics can’t explain why the three quarks in a proton have, in addition to color charges, just the right electric charge to bind an electron to the nucleus, explaining these other problems is a pipe dream.
 
  • #17
Quarker said:
This is a rather esoteric list. If physics can’t explain why the three quarks in a proton have, in addition to color charges, just the right electric charge to bind an electron to the nucleus, explaining these other problems is a pipe dream.
Huh. I never thought of that.
 
  • #18
Quarker said:
just the right electric charge to bind an electron to the nucleus
The electron would be bound by any positive charge.

A very good question, though, is "why are atoms neutral?" That is, why do the quarks and leptons have charges that are integral multiples/fractions of each other? This is the question that GUTs try to answer.

But I imagine Dr. Hossenfelder would rather eat ground glass than say anything good about research in that direction.
 
  • #19
Quarker said:
This is a rather esoteric list. If physics can’t explain why the three quarks in a proton have, in addition to color charges, just the right electric charge to bind an electron to the nucleus, explaining these other problems is a pipe dream.
"Why" questions are bad physics questions, as with everything in this list.
 
  • #20
Maybe physics just isn’t asking the right why questions.
 
  • #21
Yeah, yeah. The physicists are all doing it wrong. Not the first time we've heard this.
 
  • #22
Vanadium 50 said:
Yeah, yeah. The physicists are all doing it wrong. Not the first time we've heard this.
I never said physicists are doing it wrong, just that physics seems to limit itself to the same questions.
 
  • #23
Quarker said:
I never said physicists are doing it wrong, just that physics seems to limit itself to the same questions.
Physics is limited by physics. We can explore what we can explore because it is physically accessible.
 
  • #24
I like to call it the Motl-Lenz formula, because Lubos Motl claimed on his blog that he discovered it independently, while playing with a calculator in secondary school.
 
  • #25
pines-demon said:
Physics is limited by physics. We can explore what we can explore because it is physically accessible.
No argument here, but this list shows that there’s a good chance the universe isn’t mathematically coherent. Maybe a hodgepodge of unrelated equations really is the most accurate description of the physically accessible universe. If so, how did such a random collection of particles emerge to interact so intricately with one another, and with space-time itself? A rhetorical question, of course.
 
  • #26
Quarker said:
this list shows that there’s a good chance the universe isn’t mathematically coherent.
This list shows numerology; it does not say anything about physics or about the universe's mathematical consistency.
 
  • #27
Vanadium 50 said:
But I imagine Dr. Hossenfelder would rather eat ground glass than say anything good about research in that direction.

Maybe she will say something good when her views drop significantly. For me she lost her reliability/credibility even before I knew who she was.
 
  • #28
Vanadium 50 said:
A very good question, though, is "why are atoms neutral?" That is, why do the quarks and leptons have charges that are integral multiples/fractions of each other? This is the question that GUTs try to answer.
You can get that from the compactness of U(1) though. Or from anomaly cancellation if you include gravity, I think. I'd say that, given the standard model, the case for GUTs is (1) a generation arises very naturally from an SU(5) or SO(10) multiplet (2) the near-unification of coupling constants at high scales.

The anomaly cancellation when gravity is included, makes me wonder if there's some "swampland" reason why fermions might form GUT multiplets in a theory of quantum gravity, even without actual unification.
 
  • #29
mitchell porter said:
You can get that from the compactness of U(1) though. Or from anomaly cancellation if you include gravity, I think. I'd say that, given the standard model, the case for GUTs is (1) a generation arises very naturally from an SU(5) or SO(10) multiplet (2) the near-unification of coupling constants at high scales.

The anomaly cancellation when gravity is included, makes me wonder if there's some "swampland" reason why fermions might form GUT multiplets in a theory of quantum gravity, even without actual unification.
Not only charge-neutral atoms, but matter and antimatter particles too. Charge neutrality seems to be very important to the universe. Maybe the universe is trying to tell us something.
 
  • #30
"Why are atoms neutral?"
"If they weren't, our theories wouldn't work / would be ugly."

I don't think this is a very good answer. I don't think atoms care.
 
  • #31
Vanadium 50 said:
"Why are atoms neutral?"
"If they weren't, our theories wouldn't work / would be ugly."

I don't think this is a very good answer. I don't think atoms care.
Maybe, for whatever reason, charge neutrality is so important that the universe itself is charge neutral.
 
  • #32
"Why are atoms neutral?" I wonder if we could run a thread on it, or revive some old one. I always had thought that it was just anomaly cancelation, but I do not remember now any discussion including alternatives (say bosons on SU(N) instead of SU(4), more that two types of quarks in the same generation; not chiral forces...)
 
  • #33
Vanadium 50 said:
Three comments:

(1) It has been known that m(p)/m(e) ≠ 6π^5 for 50 years. Does Dr. Hossenfelder know this? She should. Does she care? Apparently not.

(2) BF(π→eν)/BF(π→μν) = 1.2345 × 10^-4. How do you explain that?

(3) I once stayed at a hotel room where the room number was the same as the mass of the Λ baryon. Coincidence? I think not!
The gematria of Genesis 1:1 reveals its value as 2701, whose prime factorization is 37×73, two mirror prime numbers containing the trinity and divine wholeness, whose ordinals 12 and 21 are also mirror numbers.

Grtz grtz God
 
  • #34
arivero said:
"Why are atoms neutral?" I wonder if we could run a thread on it,
Nothing's stopping you. You might think carefully about how best to pose it, to get what you're after and not a chaotic scrum.
weirdoguy said:
For me she lost her reliability/credibility
What did it for me was her insistence that people who disagree with her are dishonest and/or secretly agree with her. The fact that her business model is to take money from crackpots to tell them that the establishment is being mean to them is secondary.

Further, while she is the first to crow about being a theoretical physicist, her publication record is...um...less strong than many others'. She would argue (and has argued) that this is proof that the community is interested in the wrong things. I would say that is not the only possibility.
 
  • #35
Today I saw a license plate labeled KTF-2065. What are the odds of that happening?
 