Why are free parameters bad for a theory?

In summary, free parameters are those that cannot be predicted by the theory and need to be measured and put in the theory 'by hand'.
  • #1
Floyd_13
It is often said that one of the drawbacks of the standard model is that it has many free parameters. My question is two-fold:
  1. What exactly is a free parameter? My understanding is that the free parameters of a model/theory are the ones that cannot be predicted by the theory and need to be measured and put in the theory 'by hand' so to speak. Are all constants of nature free parameters then? Also, can you give me an example of a non-free parameter in a theory?
  2. Why is it bad for a theory to have free parameters? Couldn't it be that some quantities in nature such as the mass of the electron just 'happen' to have a certain value that cannot be predicted by a theory?
 
  • #2
1. You almost answered this yourself. A free parameter is a parameter in the model that cannot be predicted from the other parameters of the model and that a priori could take any of a number of values (often along a continuous spectrum).

2. It may not be "bad" per se, but the more free parameters a model has, the less predictive it becomes. John von Neumann is often quoted as saying "give me four parameters and I will fit you an elephant". The sentiment being that if you have enough free parameters, it becomes easy to fit almost any data.
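As a minimal numerical sketch of that sentiment (my own illustration, not something from this thread): a model with as many adjustable parameters as data points can reproduce any data set exactly, so the quality of the fit by itself tells you nothing.

Python:
import numpy as np

# Five arbitrary "data" points with no underlying law at all.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 5)
y = rng.normal(size=5)

# A model with five free parameters (a degree-4 polynomial)
# fits these five points exactly, whatever they happen to be.
coeffs = np.polyfit(x, y, deg=4)
residuals = y - np.polyval(coeffs, x)
print(np.max(np.abs(residuals)))  # essentially zero, up to floating-point error

The fit is perfect by construction, yet the fitted polynomial has no predictive value for a sixth point.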
 
  • #3
Floyd_13 said:
Also, can you give me an example of a non-free parameter in a theory?
Everything you can derive from other things. As an example, the mass of the electron or positron is a free parameter in the Standard Model, but their mass ratio has to be exactly 1. That 1 is not a free parameter because particles and antiparticles need to have the same mass in QFT. Often there is some ambiguity in what exactly you call the free parameter. You could e.g. say "the electron mass is a free parameter, the positron mass is derived" or call the positron mass a free parameter and the electron mass derived. Doesn't change the fact that there is one free parameter here.
 
  • #4
This raises the question: do we have BSM theories that have only one free parameter in them?
I am assuming you have to have at least one free parameter.
If such theories exist, how successful have they been?
 
  • #5
No. Reducing everything to a single parameter - or even the more modest goal of "fewer than the SM" - would be an amazing success. Generally BSM models come with extra parameters.
 
  • #6
mfb said:
Everything you can derive from other things. As an example, the mass of the electron or positron is a free parameter in the Standard Model, but their mass ratio has to be exactly 1. That 1 is not a free parameter because particles and antiparticles need to have the same mass in QFT. Often there is some ambiguity in what exactly you call the free parameter. You could e.g. say "the electron mass is a free parameter, the positron mass is derived" or call the positron mass a free parameter and the electron mass derived. Doesn't change the fact that there is one free parameter here.
I like to distinguish between experimentally measured parameters that cannot be determined from theory, and theoretically assumed parameters that can be confirmed by experiment.

The number of degrees of freedom in the theory is somewhat less than the full number of experimentally measured parameters, because some experimentally measured parameters have functional relationships to each other: for example, the W boson mass, the Z boson mass, the electromagnetic coupling constant, and the weak force coupling constant contain only three degrees of freedom even though all four are experimentally measured, because of the relationship sketched below.
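To make that electroweak example concrete, the tree-level relations (a standard textbook statement, not something spelled out in this thread) are
$$m_W = \tfrac{1}{2} g v, \qquad m_Z = \tfrac{1}{2}\sqrt{g^2 + g'^2}\,v, \qquad e = \frac{g g'}{\sqrt{g^2 + g'^2}} = g\sin\theta_W,$$
which imply ##m_W = m_Z\cos\theta_W##, so the four measured quantities (W mass, Z mass, electromagnetic coupling, weak coupling) depend on only the three underlying parameters ##g##, ##g'## and ##v##.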

In the Standard Model, the 14-15 non-zero fundamental particle masses (depending upon whether the lightest neutrino is massless) are examples of the former. The electromagnetic charge of up-type quarks relative to the charge of the electron is an example of the latter, even though it isn't a derived quantity and is just put into the model as an axiom.

The derived quantities of the Standard Model include: all properties (including mass, decay rates, CP violation, electric dipole moment, magnetic moment, and parton distribution functions) of composite particles bound by the strong force made from quarks and gluons, the Higgs vacuum expectation value, the decay rates and branching fractions of all fundamental particles that decay, the electric dipole moments and magnetic moments of all of the fundamental and composite particles, the running of the Standard Model parameters with energy scale given a value at a particular energy scale, and the phase diagram of strong-force-bound matter (e.g. the point at which quark-gluon plasma arises).

Floyd_13 said:
Why is it bad for a theory to have free parameters? Couldn't it be that some quantities in nature such as the mass of the electron just 'happen' to have a certain value that cannot be predicted by a theory?

@Orodruin has a couple of good points.

Another is that when parameters are derived from and functionally related to each other, you can use the most precisely measured of the related parameters to determine the values of the others more precisely than you could by measuring each of them directly. And even the most optimistic physicists usually assume that whatever theory ultimately becomes the most fundamental one that physicists ever devise will still have at least one, and maybe a few, free parameters.

The preference for fewer parameters also reflects the philosophically reductionist objective of fundamental physics, which seeks to explain the broadest range of phenomena with the most minimal set of theories and parameters. Basically, that is the mission statement of fundamental physics. Physicists would be thrilled to have a one-parameter model of high energy physics, even if that parameter had an uncertainty of 5%. Engineers would prefer the status quo.

Also, in the case of the Standard Model, it isn't just that there are lots of free parameters. It is also the case that the couple of dozen parameters that we do have maddeningly seem to have some sort of pattern, but we can't quite figure out what it is yet. There are some decent "near miss" explanations for their experimentally measured values that seem to provide proof of principle that they could be amenable to being reduced to far fewer parameters if we just knew how.
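One frequently cited example of such a near miss (my illustration; the thread itself doesn't name one) is the empirical Koide relation among the charged lepton pole masses,
$$\frac{m_e + m_\mu + m_\tau}{\left(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}\right)^2} \approx \frac{2}{3},$$
which holds to high precision with the measured values but has no accepted derivation from the Standard Model.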

One of the reasons that we haven't made more progress on the front of figuring out these relationships is that the data points we are trying to relate aren't known terribly precisely, even though some quantities derived from these fundamental parameters in the Standard Model (like the proton mass) are known to exquisite accuracy. All of the following fundamental parameters of the Standard Model have only been measured to uncertainties of more than 0.1%:

* All six quark masses.
* All three neutrino masses.
* The W boson mass.
* The Higgs boson mass.
* The four parameters of the CKM matrix.
* The four parameters of the PMNS matrix.
* The strong force coupling constant.

The only Standard Model parameters we know to less uncertainty than 0.1% are:

* The electromagnetic and weak force coupling constants.
* The electron, muon, and tau lepton masses.
* The Z boson mass.

Also, Planck's constant and the speed of light (if you count those as Standard Model parameters).

As a result, any theory that tries to relate these values is either clearly false, or is supported by only highly inconclusive evidence.

Similarly, in general relativity, we know Newton's G to less uncertainty than 0.1%, but not the cosmological constant or Hubble's constant.

As a result, there are lots of circumstances where having more experimentally measured parameters, rather than using a reductionist approach, is preferable, including almost all practical engineering applications.

For example, if you want to do an experiment or have a process that depends upon the precise value of the proton mass, you would be much better off using the best available experimentally measured value, which has an uncertainty of about 1 part per 4.5 trillion, than the theoretically determined value from first principles in the Standard Model, which has an uncertainty only a bit better than 1%, even though we are very comfortable that the proton mass is entirely derived from the free parameters and equations of the Standard Model.

Similarly, for practical chemistry applications, you are generally better off using the myriad direct experimental measurements found in the CRC Handbook of Chemistry and Physics for the quantity that you care about than trying to derive the properties of atoms and molecules from first principles, even though we are very confident that doing so is possible in theory.

In the cases where experiments are more precise than theoretical calculations from first principles, it is usually because the math needed to make the calculations is intrinsically very cumbersome, while the effort and skill needed to do the experimental measurement is comparatively modest.

Relying on lots of independently measured experimental data points is also more robust. A flaw in one measurement doesn't throw off everything else you do that is unrelated to that data point. In contrast, a slight error in a core free parameter of the Standard Model inserts systematic error into everything derived from that free parameter.

This happens often in the Standard Model, because in almost all Standard Model calculations, everything that could conceivably happen enters into the calculation at some level of precision, so almost all Standard Model parameters have some impact on the final value, however slight.

For example, to make highly precise calculations of the muon anomalous magnetic moment, one portion of the calculations involves the light quark masses and the strong force coupling constant, and this is the part of the calculation where the lion's share of the uncertainty in the theoretical calculation comes from, even though these make only a tiny contribution to the overall value. You can get a very good approximation from the highly precisely known Standard Model parameters when doing the electromagnetic and weak force components of the calculation that account for the overwhelming share of the total result, but to be competitive with experimental measurements that are available, the strong force driven contributions matter too.
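The bookkeeping behind that uncertainty budget is just the usual first-order error propagation: if a derived quantity ##f## depends on parameters ##p_i## with independent uncertainties ##\delta p_i##, then
$$\delta f \approx \sqrt{\sum_i \left(\frac{\partial f}{\partial p_i}\right)^2 (\delta p_i)^2},$$
so a parameter that contributes only a small piece of ##f## can still dominate ##\delta f## if its own relative uncertainty is large, as the strong force inputs do for the muon anomalous magnetic moment.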
 
  • #7
A free parameter must be dimensionless. Only mass ratios are free, not individual masses.
c is not a free parameter, but a definition of the ratio of the metre to the second.
alpha is a free parameter.
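For reference, the alpha in question is the fine-structure constant,
$$\alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137.036},$$
a dimensionless combination of dimensioned constants.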
 
  • #8
Meir Achuz said:
A free parameter must be dimensionless. Only mass ratios are free, not individual masses.
c is not a free parameter, but a definition of the ratio of the metre to the second.
alpha is a free parameter.
A free parameter does not have to be dimensionless:
A free parameter is a variable in a mathematical model which cannot be predicted precisely or constrained by the model and must be estimated experimentally or theoretically. A mathematical model, theory, or conjecture is more likely to be right and less likely to be the product of wishful thinking if it relies on few free parameters and is consistent with large amounts of data.
For example, in the Standard Model, while the mass ratios are dimensionless, if you express the Standard Model fundamental particle masses in that form, you still need a mass scale for the Standard Model as a whole, which does have a dimension of mass, to fully describe them, or you have to express them as ratios to a dimensionful constant such as the Planck mass.
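As a sketch of that bookkeeping (tree-level Standard Model relations): each charged fermion mass is
$$m_f = \frac{y_f\, v}{\sqrt{2}},$$
so the dimensionless Yukawa couplings ##y_f## fix all the mass ratios, while the Higgs vacuum expectation value ##v \approx 246\ \text{GeV}## supplies the single overall dimension of mass.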

Some theories in physics (like general relativity) don't even have any dimensionless physical constants, although you could define related parameters that are dimensionless in a variety of ways using the Planck mass and Planck time (which are built from combinations of other experimentally measured fundamental constants).
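For instance, the Planck mass and Planck time mentioned here are the combinations
$$m_P = \sqrt{\frac{\hbar c}{G}} \approx 1.22 \times 10^{19}\ \text{GeV}/c^2, \qquad t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4 \times 10^{-44}\ \text{s},$$
built entirely from measured dimensioned constants.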

There is a special subcategory of dimensionless physical constants (in the Standard Model, these include the coupling constants and the ratios of the fundamental particle masses):
In physics, a dimensionless physical constant is a physical constant that is dimensionless, i.e. a pure number having no units attached and having a numerical value that is independent of whatever system of units may be used.

For example, if one considers one particular airfoil, the Reynolds number value of the laminar–turbulent transition is one relevant dimensionless physical constant of the problem. However, it is strictly related to the particular problem: for example, it is related to the airfoil being considered and also to the type of fluid in which it moves.

On the other hand, the term fundamental physical constant is used to refer to some universal dimensionless constants. Perhaps the best-known example is the fine-structure constant, α, which has an approximate value of 1⁄137.036.  The correct use of the term fundamental physical constant should be restricted to the dimensionless universal physical constants that currently cannot be derived from any other source. This precise definition is the one that will be followed here.

However, the term fundamental physical constant has been sometimes used to refer to certain universal dimensioned physical constants, such as the speed of light c, vacuum permittivity ε0, Planck constant h, and the gravitational constant G, that appear in the most basic theories of physics. NIST and CODATA sometimes used the term in this way in the past.
But a mere parameter of a model need not be a dimensionless fundamental physical constant, and as the examples quoted above illustrate, many core constants of physics are not dimensionless.

Also, describing dimensionless physical constants as "dimensionless" is honestly not something to be too hung up on, because many superficially dimensionless physical constants still have some implicit dimension "hidden in the footnotes" so to speak.

For example, since the masses of the fundamental particles in the Standard Model run with energy scale, a ratio of masses implicitly includes a usually unstated footnote regarding the energy scale at which the masses being compared are measured.

Similarly, the "dimensionless" physical constant of the Standard Model that is the strong force coupling constant is subject to the same limitation. The most commonly cited value of the strong force coupling constant, 0.118(1), is its value at the Z boson mass energy scale, but at other energy scales it is different (in a manner that rises to a peak value and then falls again).

Likewise, the fine structure constant's value at the Z boson mass energy scale is about 1/127 rather than 1/137.

There is no truly "dimensionless" physical constant in the Standard Model for which there is not an implicit energy scale footnote that has a dimension in some sort of units. All Standard Model physical constants run with energy scale.
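For reference, the leading-order perturbative running of the strong coupling (valid well above the hadronic scale) has the familiar form
$$\alpha_s(Q^2) = \frac{\alpha_s(m_Z^2)}{1 + \dfrac{\alpha_s(m_Z^2)}{12\pi}\,(33 - 2 n_f)\ln\dfrac{Q^2}{m_Z^2}},$$
with ##n_f## the number of active quark flavours, which is why a quoted value of ##\alpha_s## only means something together with the scale (here ##m_Z##) at which it is defined.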

The numerical value of the speed of light is used to define the meter in the current version of the SI system of units, but that doesn't mean that it isn't an experimentally measured free parameter of the Standard Model, general relativity, and special relativity. If it had a different physical value (regardless of the numerical value we assign to it in arbitrary human-made units determined by committee in a political process), the world would behave differently. You need to know its dimensioned value to do physics.

Since 1983 (with the wording updated in 2019), the meter has been internationally defined as the length of the path traveled by light in vacuum during a time interval of 1/299 792 458 of a second, where the second is defined in terms of the caesium frequency ∆ν (which, in turn, was determined by calibrating it to the previous definition of 86,400 seconds in a mean Earth day as exactly as was possible at the time it was defined).

The current definition of the meter was the product of ultraprecise measurements as of 1983 that were compared to a physical exemplar, which was the previous definition (if the meter had been defined in terms of the speed of light earlier, the speed of light would have been defined to be a round 300,000,000 meters per second, the meter would be about 0.1% shorter, and physics students everywhere would have been thankful, but back-compatibility demanded adherence to the physical exemplar already used to make ultraprecise measurements in 1983). The physical exemplar, in turn, was established with state-of-the-art 18th century science based upon the distance from the north pole to the equator:
The metre was initially defined as one ten-millionth of the distance on the Earth's surface from the north pole to the equator, on a line passing through Paris. Expeditions from 1792 to 1799 determined this length by measuring the distance from Dunkirk to Barcelona, with an accuracy of about 0.02%.
The numerical value of the speed of light isn't an axiom of general relativity, special relativity, or the Standard Model, all of which utilize this parameter. It is a property of Nature which we measure, either directly, or indirectly using other measurements which are functionally related to it.

Aesthetically, there is something nice about defining physical constants in a way that is independent of particular arbitrary human defined units, as this is a more "universal" expression of them. But lack of any dependence on scale (which is what a truly dimensionless physical constant would have as a property) may simply be inconsistent with the nature of what is being described by a physical constant. Nature may be scale dependent.
 
  • #9
Scale invariance is immediately anomalously broken in QFT (trace anomaly).
 

FAQ: Why are free parameters bad for a theory?

Why do free parameters make a theory less reliable?

Free parameters in a theory are the variables that can be adjusted to fit the data. Because the theory does not fix their values, it can be tuned to match many possible outcomes, which makes a good fit weaker evidence that the theory accurately explains the phenomenon it is trying to describe.

How do free parameters affect the predictive power of a theory?

Free parameters can greatly reduce the predictive power of a theory. Because they can be adjusted to fit any data, the theory may reproduce past data accurately yet fail to predict new data, since the good fit reflects the flexibility of the parameters rather than a genuine constraint imposed by the theory.

Can too many free parameters in a theory lead to overfitting?

Yes, having too many free parameters in a theory can lead to overfitting. Overfitting occurs when a model is overly complex and fits the training data too closely, resulting in poor performance when applied to new data. This can happen when there are too many free parameters that are adjusted to fit the training data perfectly, but do not accurately represent the underlying relationship between the variables.

How do free parameters affect the falsifiability of a theory?

Falsifiability refers to the ability of a theory to be proven false through empirical evidence. Free parameters can make a theory less falsifiable as they can be adjusted to fit any outcome, making it difficult to disprove the theory. This can lead to the theory being considered unscientific as it cannot be tested or disproven.

Are there any benefits to having free parameters in a theory?

While free parameters can have negative impacts on the reliability, predictive power, and falsifiability of a theory, they can also have some benefits. Free parameters allow for flexibility in a theory and can be useful in exploring different possibilities and hypotheses. They can also be helpful in adjusting a theory to fit new data that may not have been considered initially.
