TL;DR Summary: How do the collider experiments measure the mass of the W boson?
A few months ago, there was a discussion on the W mass. It unfortunately degenerated, with posters attacking the honesty of the researchers. A pity, because we never got into the issues involved in making a sub-100-ppm measurement.
The first problem is that the decay is W to lepton + neutrino. The lepton can be measured, but the neutrino cannot. The best you can do is measure the "missing transverse energy" - the energy/momentum carried by the neutrino perpendicular to the beam. This doesn't work in the longitudinal direction, because that's where the unmeasured beam remnants go.
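To make this concrete, here is a minimal sketch (my own illustration, not any experiment's code) of how missing transverse energy is computed: the negative vector sum of the transverse momenta of everything you did reconstruct.

```python
import numpy as np

def missing_transverse_energy(px, py):
    """MET: minus the vector sum of the transverse momenta of all
    reconstructed objects; whatever is missing is attributed to the
    neutrino (plus resolution effects)."""
    met_x = -np.sum(px)
    met_y = -np.sum(py)
    return np.hypot(met_x, met_y), np.arctan2(met_y, met_x)

# A lone 40 GeV lepton and nothing else implies 40 GeV of MET
# pointing back-to-back with it.
met, phi = missing_transverse_energy(np.array([40.0]), np.array([0.0]))
print(met, phi)  # 40.0, pi
```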
So now you have three observables dependent on the mass: the lepton momentum spectrum, the missing-energy spectrum, and something called the "transverse mass", which is the invariant mass ignoring the z direction. All three depend on the W mass, and they are all correlated. I believe every experiment has chosen to publish the transverse mass as its main result, sometimes quoting the other two as well.
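For the curious, the transverse mass has a closed form: m_T = sqrt(2 pT(lepton) MET (1 - cos dphi)), with dphi the azimuthal angle between the lepton and the missing energy. A toy implementation (mine, not an experiment's):

```python
import numpy as np

def transverse_mass(pt_lep, met, dphi):
    """m_T = sqrt(2 * pT(lepton) * MET * (1 - cos(dphi))): the
    invariant mass of the lepton-neutrino pair with the longitudinal
    components ignored (massless approximation)."""
    return np.sqrt(2.0 * pt_lep * met * (1.0 - np.cos(dphi)))

# A back-to-back 40 GeV lepton and 40 GeV of MET give m_T = 80 GeV,
# right at the kinematic edge.
print(transverse_mass(40.0, 40.0, np.pi))  # 80.0
```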
Which gets us to the zeroth problem: what exactly is being measured? The number of interest is actually the electroweak component of the W mass, i.e. neglecting a small QCD contribution. So far as I know, nobody does this. Instead they quote the pole mass, which is a parameter in the theory. Given a pole mass and a known proton structure, one can predict the transverse mass distribution. So in principle this is easy - run off a bunch of predictions for various pole masses, and see which one best fits the data.
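In pseudocode, the template-fit idea looks something like this (a sketch under the assumption of a simple chi-square comparison; real fits are likelihood-based and far more careful):

```python
import numpy as np

def best_pole_mass(data_hist, templates):
    """Pick the pole-mass hypothesis whose predicted m_T histogram
    best matches the data. `templates` maps a hypothesized pole mass
    to a predicted histogram with the same binning as `data_hist`."""
    def chi2(pred):
        err2 = np.maximum(data_hist, 1.0)  # crude Poisson errors
        return np.sum((data_hist - pred) ** 2 / err2)
    return min(templates, key=lambda m: chi2(templates[m]))
```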
To keep from being influenced by the result, the method is "blinded". One does not know which pole mass corresponds to which prediction until the end of the analysis. This is to protect against human nature: if you expect 80, look at all the problems you can think of, and get 80, you pat yourself on the back and move on. But if you see 60, you go back and look some more. That's human nature, and blinding protects against it.
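One common way to implement this (an illustration of the idea, not necessarily what any W-mass analysis actually does) is a secret additive offset: differential checks still work, since a constant shift cancels in comparisons, but the central value stays hidden.

```python
import numpy as np

class BlindedMass:
    """Toy blinding: a fixed, secret offset is added to every fitted
    mass. Differential checks (e.g. mass vs. pileup) are unaffected,
    but nobody sees the true central value until unblinding."""
    def __init__(self, seed):
        rng = np.random.default_rng(seed)
        self._offset = rng.uniform(-0.1, 0.1)  # GeV; hidden until the end

    def blind(self, fitted_mass):
        return fitted_mass + self._offset

    def unblind(self, blinded_mass):
        return blinded_mass - self._offset
```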
Now let's consider the W -> mu nu channel. Muons are measured by their momentum, which is determined from the curvature of the track in a magnetic field. You might think we know where the position detectors are and what the magnetic field is, but what we actually know is where the detector elements were when we built them, and what the magnetic field was when we could measure it - before we put the detector in. So we need to figure out where everything is. You might think we can look at long tracks, remove one hit, refit, and examine the distribution of residuals for the hit we removed (i.e. if they aren't centered, the sensor is in the wrong place). The problem is that there are so-called "weak modes" that do not bias the residuals - for example, an overall scale factor.
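The curvature-to-momentum relation itself is simple - pT [GeV] = 0.3 B[T] R[m] for a unit-charge track - which is exactly why weak modes are dangerous: anything that coherently scales the apparent radius scales every momentum with it. A quick illustration:

```python
def pt_from_curvature(radius_m, b_field_t):
    """pT [GeV] = 0.3 * B [T] * R [m] for a singly charged track in a
    solenoid. A misjudged field or a coherent distortion of the radius
    scales every pT by the same factor - a classic weak mode."""
    return 0.3 * b_field_t * radius_m

# A track curving with radius 1.67 m in a 2 T field has pT of ~1 GeV.
print(pt_from_curvature(1.67, 2.0))  # ~1.0
```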
So after we do this (and we can see things like the effect of gravity on the detector), we take particles of known mass, like the J/psi, Upsilon and Z, and adjust until their mass peaks end up in the right place no matter where in the detector they are. One might discover, for example, a false curvature - a twist in the detector between the endplates that biases 1/p in one direction or the other.
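Schematically (my toy version; the real calibrations are done in fine bins of angle and charge), fixing the momentum scale with a known resonance is just demanding that the peak land at the known mass:

```python
# Known resonance masses in GeV (PDG values, rounded)
KNOWN_MASS = {"jpsi": 3.0969, "upsilon": 9.4604, "z": 91.1876}

def momentum_scale(measured_peak, resonance):
    """If the reconstructed dimuon peak sits at `measured_peak`, the
    multiplicative momentum correction is known/measured, since the
    dimuon mass scales linearly with the muon momentum scale."""
    return KNOWN_MASS[resonance] / measured_peak

# A Z peak reconstructed at 91.30 GeV means pulling momenta down ~0.1%.
print(momentum_scale(91.30, "z"))  # ~0.9988
```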
Once we're done with the muons, we look at the electrons. We measure electrons by their energy. In principle E/p = 1 (the electron mass makes almost no difference), but the real distribution has a width and tails because of resolution and energy loss of the electron. If we know how much material we have, this is predictable, and because we have other in situ measurements of that (worth a post in and of itself), the adjustments to the material model are minimal or absent.
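A toy version of the E/p idea (again my sketch, with the median standing in for a proper peak fit that would be robust against the radiative tails):

```python
import numpy as np

def energy_scale_from_eop(energies, momenta):
    """For electrons E/p should peak at 1, so the peak position of the
    measured ratio gives the calorimeter energy-scale correction. The
    median is a crude, tail-robust stand-in for a real peak fit."""
    eop = np.asarray(energies) / np.asarray(momenta)
    return 1.0 / np.median(eop)

# If E/p peaks at ~1.02, energies need to be scaled down by ~2%.
print(energy_scale_from_eop([40.8, 41.0, 40.6], [40.0, 40.0, 40.0]))
```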
Now that we know the behavior of the tracker and the calorimeter, we can infer the resolution on the missing energy (from neutrinos). However, this is degraded by two factors: "pileup" and the "underlying event". Pileup refers to the additional interactions - several dozen - which add energy and momentum to the event and degrade the missing-energy resolution. There are multiple handles on this, such as looking at what the distribution looks like in events without W's to check that it is understood. One can also check that the fitted mass vs. the number of pileup interactions is flat. One needs to be careful how one blinds to ensure this check is possible without giving away the W mass.
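That flatness check is easy to sketch: fit a straight line to the (blinded) mass versus the pileup count and ask whether the slope is consistent with zero. A constant blinding offset drops out of the slope, which is the point.

```python
import numpy as np

def pileup_slope(n_pileup, fitted_mass):
    """Fit mass vs. number of pileup interactions with a straight
    line; a slope consistent with zero says pileup is under control.
    Works on blinded masses too: a constant offset shifts the
    intercept, not the slope."""
    slope, _intercept = np.polyfit(n_pileup, fitted_mass, 1)
    return slope
```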
Underlying event is harder. There are some semi-empirical models, which do have other distributions that can be checked. It's called "tuning" but is better described as "rejecting those models and parameters that make wrong predictions".
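That "rejection" reading of tuning can be written down directly (a sketch assuming a simple chi-square compatibility test on one control distribution):

```python
import numpy as np
from scipy.stats import chi2 as chi2_dist

def surviving_tunes(control_data, predictions, p_threshold=0.05):
    """Keep only the parameter points whose predicted control
    distribution is statistically compatible with the data.
    `predictions` maps a parameter point (e.g. a tuple) to a
    histogram with the same binning as `control_data`."""
    err2 = np.maximum(control_data, 1.0)  # crude Poisson errors
    survivors = []
    for params, pred in predictions.items():
        chi2 = np.sum((control_data - pred) ** 2 / err2)
        if chi2_dist.sf(chi2, df=len(control_data)) > p_threshold:
            survivors.append(params)
    return survivors
```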
Only then can the transverse mass distribution be built, compared with the predictions, and the box opened.
Reality is even more complicated than this. But I wanted to make it clear that this is a hard measurement, that a lot of work by a lot of people is involved, and that it's not as simple as saying "we want to see 80 and we get 80".