How do the collider experiments measure the mass of the W boson?

In summary, collider experiments measure the mass of the W boson by analyzing the behavior of particles produced during high-energy collisions, such as those at the Large Hadron Collider (LHC). By observing the decay products of W bosons and measuring their energy and momentum, physicists can apply the principles of conservation of energy and momentum to accurately calculate the mass of the W boson. Additionally, the experimental results are compared to theoretical predictions from the Standard Model of particle physics to ensure consistency and precision.
  • #1
Vanadium 50
TL;DR Summary
How do the collider experiments measure the mass of the W boson?
A few months ago, there was a discussion on the W mass. It unfortunately degenerated into posters attacking the honesty of the researchers. A pity, because we never got into the issues involved in making a sub-100-ppm measurement.

The first problem is that the decay is W to lepton + neutrino. The lepton can be measured, but the neutrino cannot. The best you can do is measure the "missing transverse energy" - the energy/momentum carried by the neutrino perpendicular to the beam. This doesn't work in the longitudinal direction, because that's where the unmeasured beam remnants go.
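To make the missing-transverse-energy idea concrete, here is a minimal Python sketch with invented inputs (not any experiment's code): the missing transverse momentum is just minus the vector sum of everything that was measured in the plane perpendicular to the beam.

```python
import math

# (pT in GeV, azimuthal angle phi in radians) of all reconstructed visible objects;
# the values are invented purely for illustration
visible = [(35.2, 0.40), (21.7, 2.90), (8.3, -1.10)]

# vector sum of the visible transverse momenta
px = sum(pt * math.cos(phi) for pt, phi in visible)
py = sum(pt * math.sin(phi) for pt, phi in visible)

# the neutrino (plus mismeasurement) shows up as the negative of that sum
met = math.hypot(px, py)
met_phi = math.atan2(-py, -px)
print(f"missing transverse energy = {met:.1f} GeV at phi = {met_phi:.2f}")
```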

So now you have three observables: the lepton momentum spectrum, the missing energy spectrum, and something called the "transverse mass", which is the invariant mass computed ignoring the z direction. All three depend on the W mass, and they are all correlated. I believe every experiment has chosen to publish the transverse-mass fit as its main result, sometimes quoting the other two as well.
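For reference, the transverse mass that gets fit is the standard combination of the lepton transverse momentum, the missing transverse energy, and the azimuthal angle between them:

```latex
m_T \;=\; \sqrt{\,2\, p_T^{\ell}\, E_T^{\mathrm{miss}}\,\bigl(1 - \cos\Delta\phi_{\ell,\,\mathrm{miss}}\bigr)}
```

Its distribution falls off sharply at the W mass (the Jacobian edge), which is what gives the fit its sensitivity.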

Which gets us to the zeroth problem: what exactly is being measured? The number of interest is actually the electroweak component of the W mass, i.e. neglecting a small QCD contribution. So far as I know, nobody does this. Instead they quote the pole mass, which is a parameter in the theory. Given a pole mass and a known proton structure, one can predict the transverse mass distribution. So in principle this is easy - run off a bunch of predictions for various pole masses, and see which one best fits the data.
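The "run off a bunch of predictions and see which fits best" step is a template fit. Here is a toy Python sketch of just that logic (the template shape below is a placeholder Gaussian with invented parameters; a real analysis uses fully simulated Jacobian-edge templates and a proper likelihood):

```python
import numpy as np

edges = np.linspace(60.0, 100.0, 41)                     # toy m_T bins, GeV
centers = 0.5 * (edges[:-1] + edges[1:])

def template(m_w):
    """Placeholder m_T shape whose position tracks the assumed pole mass."""
    shape = np.exp(-0.5 * ((centers - 0.92 * m_w) / 6.0) ** 2)
    return shape / shape.sum()

rng = np.random.default_rng(3)
n_events = 500_000
data = rng.poisson(template(80.40) * n_events)           # pretend this is the data

# chi-square scan over assumed pole masses; the minimum is the measurement
masses = np.arange(80.0, 80.8, 0.02)
chi2 = [np.sum((data - template(m) * n_events) ** 2 / (template(m) * n_events))
        for m in masses]
print(f"best-fit pole mass: {masses[np.argmin(chi2)]:.2f} GeV")
```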

To keep from being influenced by the result, the method is "blinded". One does not know which pole mass corresponds to which prediction until the end of the analysis. This is to protect against human nature - if you expect 80, look at all the problems you can think of, and get 80, you pat yourself on the back and move on. But if you see 60, you go back and look some more. That's human nature, and blinding protects against that.
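One way to picture that kind of blinding, as a sketch of the idea only and not any experiment's actual procedure: the analyst fits against templates whose pole-mass labels have been scrambled, and the mapping back to real masses is opened only at the end.

```python
import random

# pole masses used to generate the templates (GeV); values purely illustrative
true_masses = [80.30, 80.35, 80.40, 80.45, 80.50]
labels = [f"template_{i}" for i in range(len(true_masses))]

# the secret mapping; nothing about it is recorded where the analysts can see it
secret = dict(zip(labels, random.sample(true_masses, k=len(true_masses))))

# the whole analysis is done knowing only which anonymous label fits the data best...
best_label = "template_2"

# ...and only at "box opening" is the mapping revealed to learn the mass
print("unblinded pole mass:", secret[best_label], "GeV")
```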

Now let's consider the W -> mu nu channel. Muons are measured by their momentum, which is determined from the curvature of the track in a magnetic field. You might think we know where the position detectors are and what the magnetic field is, but what we actually know is where the detector elements were when we built them and what the magnetic field was when we could measure it - before we put the detector in. So we need to figure out where everything is. You might think that we can look at long tracks, remove one hit, refit, and look at the distribution of positions for the hit we removed (i.e. if the residuals aren't centered, the sensor is in the wrong place). The problem with this is that there are so-called weak modes that do not bias the residuals, for example an overall scale factor.
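For orientation, the momentum scale the alignment ultimately has to get right comes from the curvature of the track in the solenoidal field; for a singly charged particle,

```latex
p_T\,[\mathrm{GeV}/c] \;\approx\; 0.3\, B\,[\mathrm{T}]\, R\,[\mathrm{m}],
\qquad \text{i.e.} \qquad \frac{1}{p_T} \;\propto\; \kappa = \frac{1}{R},
```

so a weak mode that rescales all measured curvatures by a common factor shifts every momentum by that same factor while leaving the hit residuals essentially unchanged.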

So after we do this (and we can see things like the effect of gravity on the detector) we take particles of known mass, like the J/psi, Upsilon and Z, and adjust until they end up in the right place no matter where in the detector they are. One might discover, for example, a false curvature - a twist in the detector between the endplates that biases 1/p in one direction or the other.
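A toy Python sketch of the resonance-calibration idea (the reconstructed peak values are invented, and a real calibration derives corrections as a function of momentum, detector region and charge rather than a single number):

```python
import numpy as np

# (reconstructed dimuon peak, known mass) in GeV for J/psi, Upsilon(1S) and Z;
# the reconstructed values are invented for illustration
peaks = np.array([
    (3.0940, 3.0969),
    (9.4520, 9.4603),
    (91.110, 91.188),
])
reco, known = peaks[:, 0], peaks[:, 1]

# least-squares common momentum-scale factor s minimizing sum (known - s*reco)^2
scale = np.sum(reco * known) / np.sum(reco ** 2)
print(f"momentum-scale correction: {scale:.5f}")
print("corrected peaks [GeV]:", np.round(reco * scale, 4))
```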

Once we're done with the muons, we look at the electrons. We measure electrons by their energy. In principle E/p = 1 (the electron mass makes almost no difference), but the real distribution has a width and tails because of resolution and energy loss of the electron. If we know how much material we have, this is predictable, and because we have other in situ measurements of that (worth a post in and of itself), the adjustments to the material model are minimal or absent.
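A toy sketch of forming the E/p distribution (all numbers, including the size and direction of the tail, are invented; the real comparison is against a full simulation of the detector material):

```python
import numpy as np

rng = np.random.default_rng(7)
p = rng.uniform(30.0, 60.0, 200_000)                 # tracker momenta, GeV (toy)
e = p * rng.normal(1.0, 0.02, p.size)                # calorimeter resolution (toy)
lossy = rng.random(p.size) < 0.15                    # toy energy-loss tail; its true
e[lossy] *= rng.uniform(0.70, 1.00, lossy.sum())     # shape depends on the material model

eop = e / p
core = eop[(eop > 0.95) & (eop < 1.05)]
# a core sitting away from 1 would point to an energy-scale or material problem
print(f"E/p core mean = {core.mean():.4f}")
```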

Now that we know the behavior of the tracker and the calorimeter, we can infer the resolution of missing energy (from neutrinos). However, this is degraded by two factors: "pileup" and "underlying event". Pileup refers to the fact that there are additional interactions - several dozen - which add energy and momentum to the system and degrade the missing energy resolution. There are multiple handles on this, such as looking to see what the distribution looks like without W's to check that it is understood. One can also check by looking at the mass vs. number of pileup events to ensure it is flat. One needs to be careful how one blinds to ensure this check is possible without giving away the W mass.
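The mass-versus-pileup flatness check might look like this in outline (toy numbers, and in practice the check has to be arranged so it doesn't unblind the result):

```python
import numpy as np

# toy (mean number of pileup vertices, fitted mass in GeV, uncertainty in GeV) per bin
bins = np.array([
    (10, 80.41, 0.02),
    (20, 80.39, 0.02),
    (30, 80.42, 0.02),
    (40, 80.40, 0.03),
    (50, 80.38, 0.03),
])
n, m, dm = bins[:, 0], bins[:, 1], bins[:, 2]

# weighted straight-line fit m = a + b*n; "flat" means b consistent with zero
coeffs, cov = np.polyfit(n, m, 1, w=1.0 / dm, cov=True)
slope, slope_err = coeffs[0], np.sqrt(cov[0, 0])
print(f"slope = {slope:+.4f} ± {slope_err:.4f} GeV per pileup vertex")
```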

Underlying event is harder. There are some semi-empirical models, which do have other distributions that can be checked. It's called "tuning" but is better described as "rejecting those models and parameters that make wrong predictions".
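"Tuning as rejection" in sketch form, with invented observables and an invented threshold, just to show the logic:

```python
import numpy as np

# toy control distribution (something the W analysis itself does not use)
control_data = np.array([1.00, 0.85, 0.60, 0.40])
control_err = np.array([0.03, 0.03, 0.03, 0.03])

# predictions of the same observable from different underlying-event parameter sets
candidate_tunes = {
    "tune_A": np.array([1.02, 0.83, 0.61, 0.41]),
    "tune_B": np.array([0.90, 0.95, 0.70, 0.30]),
}

for name, pred in candidate_tunes.items():
    chi2_ndf = np.sum(((control_data - pred) / control_err) ** 2) / control_data.size
    verdict = "kept" if chi2_ndf < 2.0 else "rejected"
    print(f"{name}: chi2/ndf = {chi2_ndf:.1f} -> {verdict}")
```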

Only then can the transverse mass distribution be made and compared with predictions and then the box be opened.

Reality is even more complicated than this. But I wanted to make it clear that this is a hard measurement to do, that it involves a lot of work by a lot of people, and that it's not as simple as saying "we want to see 80 and we get 80".
 
  • #2
Vanadium 50 said:
Which gets us to the zeroth problem: what exactly is being measured? The number of interest is actually the electroweak component of the W mass, i.e. neglecting a small QCD contribution. So far as I know, nobody does this. Instead they quote the pole mass, which is a parameter in the theory. Given a pole mass and a known proton structure, one can predict the transverse mass distribution. So in principle this is easy - run off a bunch of predictions for various pole masses, and see which one best fits the data.
It's very wise to aim at the pole mass from a theoretical point of view, because this is indeed the only meaningful gauge-invariant parameter characterizing the spectral function of the W boson.

From the theoretical point of view the crux is of course how well the proton structure is known.
 
  • #3
vanhees71 said:
how well the proton structure is known.
Um..."that depends".

The papers give estimates of the uncertainties, but there are subtleties. The valence distributions are known better than the sea distributions, which favors the Tevatron. The derivatives of the densities are often smaller at the LHC, which favors it. RHIC is the worst of both worlds, so while they see W's, it's better for them to do the measurement the other way: take the W mass as input and use that to infer sea quark densities.

At the Tevatron, one of the best measurements for constraining the effect of proton structure on the mass is the W asymmetry, so there is a substantial effort to measure it and use it to maximum advantage.
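For reference, the W charge asymmetry as a function of rapidity is

```latex
A_W(y) \;=\; \frac{d\sigma(W^{+})/dy \;-\; d\sigma(W^{-})/dy}
              {d\sigma(W^{+})/dy \;+\; d\sigma(W^{-})/dy},
```

which at the Tevatron is driven largely by the ratio of the u and d valence densities.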

At the LHC things are worse. The s-quark and c-quark distributions are less well known (and there is still some controversy about the strange sea), and they contribute more than at lower energies. So there is a lot of work that needs to be done measuring proton structure. The silver lining is that these effects impact the W+ and W- differently, so if m(W+) ≠ m(W-), you know you did something wrong.
 
  • #4
As a PS, there is an argument in a slightly different context from Witek Krasny that one can constrain proton-density effects by running at different energies. Z production at 8 TeV probes parton densities very similar to W production at 7 TeV. So there are perhaps ways to constrain this that have not yet been applied.
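The numerology behind that argument: at central rapidity a boson of mass M produced at center-of-mass energy √s probes parton momentum fractions of roughly x ≈ M/√s, so

```latex
x_{Z,\;8\,\mathrm{TeV}} \simeq \frac{91.2}{8000} \approx 1.14\times10^{-2},
\qquad
x_{W,\;7\,\mathrm{TeV}} \simeq \frac{80.4}{7000} \approx 1.15\times10^{-2},
```

i.e. essentially the same region of the parton densities.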
 
  • #5
I am a little surprised that people are not beating me up on the details, like "there's no way you can know the magnetic field to five decimal places!"
 

FAQ: How do the collider experiments measure the mass of the W boson?

How do collider experiments measure the mass of the W boson?

Collider experiments measure the mass of the W boson by analyzing the particles produced when W bosons decay. The mass is inferred from the energy and momentum of the decay products, typically leptons and neutrinos, using conservation laws and sophisticated detector technology.

What role do detectors play in measuring the W boson mass?

Detectors play a crucial role by precisely tracking and measuring the energy and momentum of particles resulting from W boson decays. High-resolution detectors, such as calorimeters and tracking systems, are essential for accurately reconstructing the W boson's properties from its decay products.

Why is it challenging to measure the W boson mass accurately?

Accurately measuring the W boson mass is challenging due to the need for precise calibration of the detectors, the complex nature of the decay processes, and the need to account for various sources of background noise and systematic uncertainties. Achieving high precision requires meticulous analysis and calibration.

What techniques are used to reduce uncertainties in W boson mass measurements?

Techniques to reduce uncertainties include using large datasets to minimize statistical errors, employing advanced algorithms for particle reconstruction, and cross-checking results with different decay channels. Systematic uncertainties are addressed through rigorous calibration of the detectors and thorough understanding of the experimental setup.

How do collider experiments ensure the reliability of their W boson mass measurements?

Reliability is ensured through multiple layers of verification, including cross-validation with theoretical predictions, consistency checks with previous measurements, and peer review. Collaborations between different experimental groups and independent analysis by various teams also contribute to the robustness of the results.
