Why is the hierarchy problem a problem?

In summary, I think the hierarchy problem is a problem because typically the renormalized values of parameters are close to their fundamental values, but this is not the case for the weak-gravity ratio.
  • #1
Smattering
As I am not sure which forum is most appropriate for this question, I am posting it here:

In another thread I came across a link to the Wikipedia article on the hierarchy problem:

https://en.wikipedia.org/wiki/Hierarchy_problem

Unfortunately, after reading the article several times, I am still not sure what the core of the problem actually is, and why it is a problem.

For one thing, there seem to be several different definitions of the problem that might be equivalent, but apparently I am lacking the required knowledge to understand why:

1. Why is the weak force ##10^{32}## times stronger than gravity?
2. Why is the fundamental value of some physical parameter vastly different from its effective value after renormalization?
3. Why is the Higgs boson so much lighter than the Planck mass?

Regarding 1: Why is this considered a problem?
Regarding 2: Renormalization is necessary in QFT, right? So to what QFT is this actually referring?
Regarding 3: What has the Planck mass to do with this?

Can someone help me get a better idea of what this is all about?

Robert
 
  • #2
Hi Smattering:

I have been interested to see what the experts would say about this, so I am probably as disappointed as you are that there have been no answers so far. I will give some thoughts about the questions in the hope that my foolish ideas might provoke a smart answer.

In general, "WHY" questions about physics are frequently unanswerable. To go along with the "Hierarchy Problem" how about the following:
1. Why is EM force about ##10^{36}## times stronger than gravity? (A rough numerical check follows this list.)
2. Why is EM force about ##10^{4}## times stronger than the weak force?
3. Why doesn't either (1) or (2) deserve a problem name like the "Hierarchy Problem"? That is, why is the "Hierarchy Problem" more of a problem than (1) or (2)?
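As a rough illustration of where a number like the one in (1) comes from, here is a back-of-the-envelope sketch of my own (not from the thread), comparing the Coulomb and gravitational forces between two protons with approximate constants; only the order of magnitude matters, and the exact exponent depends on which particles you compare:

```python
# Rough check of (1): ratio of the electrostatic to the gravitational force
# between two protons. Only the order of magnitude matters here.
k_e = 8.988e9    # Coulomb constant, N m^2 C^-2
G   = 6.674e-11  # Newtonian gravitational constant, N m^2 kg^-2
e   = 1.602e-19  # elementary charge, C
m_p = 1.673e-27  # proton mass, kg

ratio = (k_e * e**2) / (G * m_p**2)  # the 1/r^2 factors cancel
print(f"Coulomb / gravity ~ {ratio:.1e}")  # ~1.2e36
```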

BTW, I did not understand your
Regarding 2: Renormalization is necessary in QFT, right? So to what QFT is this actually referring?​

Regards,
Buzz
 
  • #3
I agree that why questions tend to be unphysical when referring to why nature behaves in a certain way. But in this case the why question does not refer to nature, but rather to the physicists who feel that there is a hierarchy problem. Physicists are people, and unlike nature, people are supposed to have motives.

Edit: Initially, I thought you were referring to my own why question from the thread title. But after re-reading your post, I now think that you were referring to the definition of the hierarchy problem, which is itself a why question. Regarding this, I agree. I do not understand either how a why question about the values of some natural constants can even be physically meaningful.
 
  • #4
Maybe this will get more responses now that it's been moved to the particle physics forum.
 
  • #5
Buzz Bloom said:
BTW, I did not understand your
Regarding 2: Renormalization is necessary in QFT, right? So to what QFT is this actually referring?​

Hi Buzz,

According to the Wikipedia article, there is some physical parameter whose fundamental value is several orders of magnitude higher than its effective value after renormalization. I thought that this parameter must have something to do with gravitation, but renormalization is closely related to quantum field theory, and there is no generally accepted QFT of gravity.
 
  • #6
Hi Smattering:

Thanks for your answer. I confess that my too-quick look carelessly missed the discussion of renormalization in the Wikipedia article.

Now that I actually tried to read the article, I found it way over my head, especially
such quantum corrections are usually power-law divergent, which means that the shortest-distance physics are most important.​
I think I now get that the "Hierarchy Problem" is a problem because
Typically the renormalized values of parameters are close to their fundamental values​
and for the weak-gravity ratio this is not the case, and apparently no one has an acceptable explanation for this anomaly.

Regards,
Buzz
 
  • #7
jtbell said:
Maybe this will get more responses now that it's been moved to the particle physics forum.
Unfortunately that doesn't create an alert.

All those hierarchy problems are "just" things that look odd. Sure, a parameter can be exactly 1.000000000000000000000344, and the theory works, but without a deeper theory that predicts this value it looks odd. If the parameter does not have to be 1, and can be anything, why is it so close to 1 but not exactly 1? It is expected that some "more fundamental" theory will lead to some explanation of factors like that.
 
  • #8
mfb said:
Unfortunately that doesn't create an alert.

All those hierarchy problems are "just" things that look odd. Sure, a parameter can be exactly 1.000000000000000000000344, and the theory works, but without a deeper theory that predicts this value it looks odd. If the parameter does not have to be 1, and can be anything, why is it so close to 1 but not exactly 1? It is expected that some "more fundamental" theory will lead to some explanation of factors like that.

Hm ... but this sounds a bit like the layman's argument that winning a lottery with the numbers "1 2 3 4 5 6" is less likely than winning it with more random-looking numbers.

Having spent quite some time on statistical pattern recognition at university, I can certainly understand that fine-tuning can be a problem due to the risk of overfitting your model to the existing observations such that it will not generalize well to new observations. But fine-tuning (as I understand the term) does not refer to parameter values differing in magnitude. Rather, it means that very small changes to a parameter's value lead to huge differences in the result.
 
  • #9
Smattering said:
Hm ... but this sounds a bit like the layman's argument that winning a lottery with the numbers "1 2 3 4 5 6" is less likely than winning it with more random-looking numbers.
It is not, but if there is only one drawing ever and it gives 1 2 3 4 5 6 in that order, it is still a surprising result. It makes you wonder if the drawing was truly random or if someone simply coded "give me the smallest number not yet drawn" and ran that to generate the numbers.

But fine-tuning (as I understand the term) does not refer to parameter values differing in magnitude. Rather, it means that very small changes to a parameter's value lead to huge differences in the result.
That is related to the hierarchy problem(s). Changing the parameter 1.000000000000000000000344 to 1.000000000000000000000484 (made-up numbers) could have a huge effect.
 
  • #10
mfb said:
It is not, but if there is only one drawing ever and it gives 1 2 3 4 5 6 in that order, it is still a surprising result. It makes you wonder if the drawing was truly random or if someone simply coded "give me the smallest number not yet drawn" and ran that to generate the numbers.

Having *your* numbers drawn in a lottery is always a surprising result, isn't it? If you told me to bet on "23 41 17 34 3 8", I would find it equally surprising if these numbers were drawn in exactly the sequence you predicted.

That is related to the hierarchy problem(s). Changing the parameter 1.000000000000000000000344 to 1.000000000000000000000484 (made-up numbers) could have a huge effect.

I can understand why this would be an issue. But the Wikipedia article implied to me that the hierarchy problem is not so much about the fine tuning of single parameters, but rather the differing value scales of two or more parameters.
 
  • #11
Smattering said:
Having *your* numbers drawn in a lottery is always a surprising result, isn't it? If you told me to bet on "23 41 17 34 3 8", I would find it equally surprising if these numbers were drawn in exactly the sequence you predicted.
If you are the one running the lottery (something that makes you unique - we have only one universe to observe) it would be surprising if you win your own lottery, independently of the numbers. Sure, it can happen by chance, but manipulation is certainly a relevant alternative hypothesis.

I can understand why this would be an issue. But the Wikipedia article implied to me that the hierarchy problem is not so much about the fine tuning of single parameters, but rather the differing value scales of two or more parameters.
Those concepts are related. If a mass value can be anything from 0 to the Planck scale, and has to be subtracted from a value that should be around the Planck scale, it is surprising if the difference is orders of magnitude below the Planck scale. That's the factor 1.00000000000000000000463 I mentioned earlier (again, random digits).
 
  • #12
mfb said:
Those concepts are related. If a mass value can be anything from 0 to the Planck scale, and has to be subtracted from a value that should be around the Planck scale, it is surprising if the difference is orders of magnitude below the Planck scale. That's the factor 1.00000000000000000000463 I mentioned earlier (again, random digits).

Sorry, but I do not understand what you are referring to. Can you please explain this in more detail?
 
  • #13
The Higgs mass is the sum (or difference, depending on sign conventions) of two unrelated terms:
- its bare mass, which can take any value
- radiative corrections, which (in the absence of new physics below the Planck scale) should be of the order of the Planck mass

The Higgs is 17 orders of magnitude lighter than the Planck mass, so in the Standard Model the two terms have to be very close together to produce such a huge gap between the Planck scale and the Higgs mass.

Supersymmetry and other models lead to smaller radiative corrections, so the necessary amount of fine-tuning goes down.
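To put a rough number on how delicate that cancellation is, here is a toy sketch of my own (order-of-magnitude values only, not an actual Standard Model calculation):

```python
# Toy estimate of the fine-tuning: the bare term and the radiative correction
# are both of order the Planck scale (squared), yet must cancel down to
# roughly (125 GeV)^2. That is ~17 orders of magnitude in the mass itself,
# ~34 orders of magnitude in the mass squared.
m_higgs  = 125.0    # GeV, observed Higgs mass
m_planck = 1.22e19  # GeV, Planck mass

# Fractional precision to which the two Planck-scale (GeV^2) terms must agree:
tuning = m_higgs**2 / m_planck**2
print(f"relative tuning ~ {tuning:.0e}")  # ~1e-34
```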
 
  • #14
Maybe it's worth putting some numbers in (courtesy of Michael Dine): m(H)^2 = 36,127,890,984,789,307,394,520,932,878,928,933,023 - 36,127,890,984,789,307,394,520,932,878,928,917,398 GeV^2.

It is entirely possible that the two numbers come from completely unrelated processes and their closeness is purely coincidental. Just like it's possible to walk into a room and find all the pencils perfectly balanced on their points. But does that seem likely to you?
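Just to spell out the arithmetic in those numbers, a small check of my own (exact integers, since ordinary floats cannot resolve a cancellation this deep):

```python
# The two ~Planck-scale terms quoted above, in GeV^2.
bare_term      = 36_127_890_984_789_307_394_520_932_878_928_933_023
radiative_term = 36_127_890_984_789_307_394_520_932_878_928_917_398

m_h_squared = bare_term - radiative_term  # 15625 GeV^2
print(m_h_squared, m_h_squared ** 0.5)    # 15625 125.0 -> the observed Higgs mass in GeV
```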
 
  • #15
I have to say that I am not at all impressed with the presumptuous premise of the hierarchy problem and a number of other "problems" of modern physics such as the problem of matter-antimatter asymmetry in the universe, the problem that the cosmological constant has the value that it does, and the "problem" that the strong force Lagrangian doesn't have a CP violating term even though a generalized version of the equation has a very obvious place to put one. Nature is what it is and there is no particular reason that its fundamental constants should have any particular value, which is what "fundamental" means.

If a physical constant value present in Nature looks unnatural, in my mind this is evidence that you're looking at the situation in the wrong way. But it isn't necessarily a hint that you need to devise new laws of Nature that make physical constant values seem "natural" by hand.

Supersymmetry is a particularly brute-force solution to the hierarchy problem; the problem could probably be addressed instead with additional laws of Nature (e.g. the sum of the squares of the masses of the fundamental fermions equals the sum of the squares of the masses of the fundamental bosons, which is true experimentally to within all applicable margins of error) that are more subtle and do not require a host of new particles that have not been observed.
 
  • #16
I tried to find an example from the history of physics where an apparent "mysterious" fine tuning of free parameters of a theory was resolved by a newer theory, but I'm not that good with history of science. Anyone?
 
  • #17
  • #18
Not physics, but evolution explained why so many different species exist, all "fine-tuned" to their specific environment, and all sortable into groups of very similar species.

If you think of atomic energy levels as independent free parameters, then quantum mechanics explained their relation (e.g. 1/n^2 for hydrogen-like atoms).
We don't see them as independent parameters today as we found a theory predicting fixed relations between them.

The orbits of the planets all follow Kepler's laws, which could look like fine-tuning, but Newton's theory of gravity gave a simple explanation for them.
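For the hydrogen example, a minimal sketch using the standard Bohr/Rydberg formula (values rounded): what might otherwise look like a list of independently "tuned" numbers all follows from one relation.

```python
# Hydrogen-like energy levels: a single formula, E_n = -13.6 eV * Z^2 / n^2,
# fixes what might otherwise look like many independent parameters.
RYDBERG_EV = 13.6  # approximate hydrogen ground-state binding energy, eV

def energy_level(n, z=1):
    return -RYDBERG_EV * z**2 / n**2

print([round(energy_level(n), 2) for n in range(1, 5)])
# [-13.6, -3.4, -1.51, -0.85]
```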
 
  • #19
mfb said:
If you think of atomic energy levels as independent free parameters, then quantum mechanics explained their relation (e.g. 1/n^2 for hydrogen-like atoms).
We don't see them as independent parameters today as we found a theory predicting fixed relations between them.

This is not quite the type of example I was looking for. There was no "deferent and epicycle" atomic theory predicting atomic energy levels before we got our current one.
 
  • #20
mfb said:
Not physics, but evolution explained why so many different species exist, all "fine-tuned" to their specific environment, and all sortable into groups of very similar species.

But the explanation that evolution can offer here is not any sophisticated mechanism, but simply selection bias.
 
  • #21
nikkkom said:
This is not quite the type of example I was looking for. There was no "deferent and epicycle" atomic theory predicting atomic energy levels before we got our current one.
That is exactly the point! The Standard Model does not make a prediction for the bare Higgs mass, which has to take its fine-tuned value in this model.
Smattering said:
But the explanation that evolution can offer here is not any sophisticated mechanism, but simply selection bias.
An easy explanation for something that looked mysterious before. In other words, a good theory.
 
  • #22
Sorry, but I still do not get the point.

mfb said:
The Higgs mass is the sum (or difference, depending on sign conventions) of two unrelated terms:
- its bare mass, which can take any value
- radiative corrections, which (in the absence of new physics below the Planck scale) should be of the order of the Planck mass

Does this justify any expectation about the relative values that these two terms should have?

The Higgs is 17 orders of magnitude lighter than the Planck mass, so in the Standard Model the two terms have to be very close together to produce such a huge gap between the Planck scale and the Higgs mass.

Why would it seem more surprising if they were close together than if they were far away from each other?

Supersymmetry and other models lead to smaller radiative corrections, so the necessary amount of fine-tuning goes down.

I cannot agree so far. Even if the radiative corrections were smaller, the other term still would have to match exactly.
 
  • #23
Vanadium 50 said:
Maybe it's worth putting some numbers in (courtesy of Michael Dine): m(H)^2 = 36,127,890,984,789,307,394,520,932,878,928,933,023 - 36,127,890,984,789,307,394,520,932,878,928,917,398 GeV^2.

It is entirely possible that the two numbers come from completely unrelated processes and their closeness is purely coincidental.

I once came across some crackpot site where the site owner discovered that if you perform some kind of mathematical operation on a specific fundamental constant (unfortunately, I do not remember which one anymore), you get extremely close to the value of another (seemingly unrelated) fundamental constant. And then he built a complete crackpot theory on that surprising result.

Just like it's possible to walk into a room and find all the pencils are pefectly balanced on their points.

I cannot see any commonality between this example and the situation described above.
 
  • #24
Smattering said:
Does this justify any expection about the relative values that these two terms should have?
No.
Smattering said:
Why would it seem more surprising if they were close together than if they were far away from each other?
See Vanadium's numbers. There are ##10^{17}## times as many numbers far away from each other as numbers close together.
Smattering said:
I cannot agree so far. Even if the radiative corrections were smaller, the other term still would have to match exactly.
Yes, but you need less fine-tuning. It is reasonable to add two 4-digit numbers (one of them negative) and get a 3-digit result. That is not very unlikely.
It is surprising that adding two 19-digit numbers gives a 3-digit number.
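That counting argument can be made concrete with a quick toy simulation of my own (the hit rate is so tiny that a million trials essentially never produce one):

```python
# Toy Monte Carlo: how often does the difference of two random 19-digit
# numbers come out smaller than 1000?  Expected rate ~2e-16, so with a
# million trials the count is essentially always zero.
import random

trials, hits = 1_000_000, 0
for _ in range(trials):
    a = random.randrange(10**18, 10**19)
    b = random.randrange(10**18, 10**19)
    if abs(a - b) < 1000:
        hits += 1

print(hits, "out of", trials)
```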

Smattering said:
I once came across some crackpot site where the site owner discovered that if you perform some kind of mathematical operation on a specific fundamental constant (unfortunately, I do not remember which one anymore), you get extremely close to the value of another (seemingly unrelated) fundamental constant. And then he built a complete crackpot theory on that surprising result.
It was not an agreement with a precision of 17 digits, and with unrelated constants you have thousands of possible ways to combine them (even more if you ignore units). The Higgs has a single combination that happens to match with a precision of 17 digits in the Standard Model.

And I repeat, because I don't think this point has come across clearly: the SM works fine with that. There is no fundamental problem with such a fine-tuning. It just does not look very natural.
 
  • #25
Vanadium 50 said:
Maybe it's worth putting some numbers in (courtesy of Michael Dine): m(H)^2 = 36,127,890,984,789,307,394,520,932,878,928,933,023 - 36,127,890,984,789,307,394,520,932,878,928,917,398 GeV^2.
Hi Vanadium:

Can you cite a reference for Michael Dine's result that you quoted? If not, can you identify for me the individual variables whose values differ to give the square of the Higgs mass? I gather from an mfb quote that they might be
- its bare mass, which can take any value
- radiative corrections, which (in the absence of new physics below the Planck scale) should be of the order of the Planck mass​
If this is correct, can you explain (at a summary level) how the values for these two variables are derived?

Regards,
Buzz
 
  • #26
Michael gave it in a talk somewhere; I copied it down from there. But the idea is that the first term is the Higgs bare mass and the second term is the radiative corrections to the mass. Neither is calculable today; all we know is the rough size and the difference between those numbers.
 
  • #27
Vanadium 50 said:
Neither is calculable today
Hi Vanadium:

Does this mean that the values were once calculable, but now they aren't? If so, can you explain why that might be so? If not, please clarify.

Regards,
Buzz
 
  • #28
A new (yet unformulated) theory might make it possible to calculate them in the future. That is the "today" aspect.
 
  • #29
mfb said:
A new (yet unformulated) theory might make it possible to calculate them in the future. That is the "today" aspect.
Hi mfb:

If that is the case, where did Michael Dine get his numbers? Were they just made up to make a point about how the Higgs mass seems to have a "magical" quality?

Regards,
Buzz
 
  • #30
All those digits? Sure. We know the magnitude of the number, but not the precise value.
Googling the number directly leads to Michael's talk.
 
  • #31
mfb said:
All those digits? Sure. We know the magnitude of the number, but not the precise value.
Googling the number directly leads to Michael's talk
Hi mfb:

Thanks for the link and the Google hint. Both PDF files look interesting and difficult. It will no doubt take me a while to digest whatever I can get out of them.

Regards,
Buzz
 
  • #32
Does anybody know of a reference that gives a list of fine-tuning examples in science in general? :)
 
  • #33
The hierarchy problem is a fine-tuning problem within the Wilsonian framework, in which the QFTs we use are effective field theories. If they are not effective field theories, but correct and complete quantum field theories, then there is no hierarchy problem.

http://quantumfrontiers.com/2013/06/18/we-are-all-wilsonians-now/

"Wilson’s mastery of quantum field theory led him to another crucial insight in the 1970s which has profoundly influenced physics in the decades since — he denigrated elementary scalar fields as unnatural. I learned about this powerful idea from an inspiring 1979 paper not by Wilson, but by Lenny Susskind. That paper includes a telltale acknowledgment: “I would like to thank K. Wilson for explaining the reasons why scalar fields require unnatural adjustments of bare constants.”

Susskind, channeling Wilson, clearly explains a glaring flaw in the standard model of particle physics — ensuring that the Higgs boson mass is much lighter than the Planck (i.e., cutoff) scale requires an exquisitely careful tuning of the theory’s bare parameters. Susskind proposed to banish the Higgs boson in favor of Technicolor, a new strong interaction responsible for breaking the electroweak gauge symmetry, an idea I found compelling at the time. Technicolor fell into disfavor because it turned out to be hard to build fully realistic models, but Wilson’s complaint about elementary scalars continued to drive the quest for new physics beyond the standard model, and in particular bolstered the hope that low-energy supersymmetry (which eases the fine tuning problem) will be discovered at the Large Hadron Collider. Both dark energy (another fine tuning problem) and the absence so far of new physics beyond the Higgs boson at the LHC are prompting some soul searching about whether naturalness is really a reliable criterion for evaluating success in physical theories. Could Wilson have steered us wrong?"
 
  • #35
A handwavy way to think about it is that if the theories we have are not the final theory, then fine-tuning of our crummy, wrong theory is indicating something about the high-energy theory that is peeping through to low energy. This is why fine-tuning is often argued to indicate new physics.
 