Normalized differential cross section

In summary, the thread discusses how to plot a normalized differential cross section, where the y-axis shows the number of events per bin and the x-axis an observable. The first question concerns using the number of events per bin and the bin width to compute the differential cross section; the second concerns dividing by the total cross section to normalize the y-axis. The thread also explains that the statistical uncertainty is scaled by the same factor as the central value, and that it differs for different luminosities.
  • #1
Anava1001
TL;DR Summary
How to plot a normalized differential cross section from a histogram of the number of events per bin.
Hello,

I know that this question might be a bit silly but I am confused about plotting a normalized differential cross section. Suppose that I have a histogram with the x-axis representing some observable X and the y-axis the number of events per bin. I want the y-axis to show the normalized cross section, i.e., $$\frac{1}{\sigma}\frac{d\sigma}{dX}$$

First question: To obtain the differential cross section ##\frac{d\sigma}{dX}## I use ##N=\sigma L##, so ##\frac{\Delta N}{L\Delta X} = \frac{\Delta\sigma}{\Delta X}##, where ##\Delta N## is the number of events per bin, ##\Delta X## is the bin width, and L the integrated luminosity. Is this approach correct? If so, should I scale the full histogram by a factor of ##\frac{1}{L \Delta X}## or should I scale each bin by this factor?

Second question: To normalize, I have to divide by the total cross section, i.e., multiply by ##\frac{L}{N}##, where N is the total number of events (the integral of the histogram). Is this right? If so, the luminosity just cancels out; how can the bin error then be scaled to the given luminosity?

I hope these questions make sense. Thanks in advance.
 
  • #3
Charles Link said:
If the y-axis has ## \frac{d \sigma}{ \sigma dX} ##, with ## N=L \sigma ##, the y-axis then becomes ## \frac{\Delta N}{N \, \Delta X} ##.

See also https://www.physicsforums.com/threa...a-rutherfords-experiment.965947/#post-6131309 for some additional info that you might find of interest.
So the histogram gets scaled by a factor of ##\frac{1}{N \Delta X}##. What happened to the statistical uncertainty, though? I know the bin error should equal the square root of the bin content, but this does not depend on the luminosity; is that ok? I thought the statistical uncertainty would be different for different luminosities.
 
  • #4
Scale the uncertainty with the same number as the central value. The relative uncertainty doesn't depend on fixed prefactors.

A larger luminosity leads to a smaller relative uncertainty, which leads to a smaller uncertainty in the differential cross section.
 
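The recipe from the answers above (scale the bin contents by ##\frac{1}{N \Delta X}##, and scale the ##\sqrt{N_s}## bin errors by the same factor) can be sketched in a few lines. A minimal sketch, using hypothetical bin contents and edges that are not from the thread:

```python
import numpy as np

# Hypothetical bin contents (events per bin) and bin edges for the observable X
counts = np.array([120.0, 340.0, 510.0, 280.0, 90.0])
edges = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])

widths = np.diff(edges)   # bin widths Delta X
N = counts.sum()          # total number of events (integral of the histogram)

# Normalized differential cross section per bin: (1/sigma) dsigma/dX = N_s / (N * Delta X)
y = counts / (N * widths)

# Poisson bin error sqrt(N_s), scaled by the same factor 1/(N * Delta X)
yerr = np.sqrt(counts) / (N * widths)

# Sanity check: the normalized distribution integrates to one
print(np.isclose((y * widths).sum(), 1.0))  # True
```

Note that the luminosity never appears explicitly: it cancels in the central values, and it is the absolute size of `counts` (which grows with luminosity) that shrinks the relative errors.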
  • #5
mfb said:
Scale the uncertainty with the same number as the central value. The relative uncertainty doesn't depend on fixed prefactors.

A larger luminosity leads to a smaller relative uncertainty, which leads to a smaller uncertainty in the differential cross section.
Ok. I understand that the statistical uncertainty goes like ##\frac{S}{\sqrt{N_s}}##, where ##S## is the central value and ##N_s## the bin content, so by error propagation the uncertainty gets scaled by the same factor as the histogram. What is still unclear to me are statements like "uncertainty is statistical and scaled to some luminosity (4 ##fb^{-1}##, for example)".

Just to be clear; let's say that I have N events for some particular process. These events correspond to some luminosity L (provided I know the cross section of the process). Now, I don't want to see the uncertainty for that particular luminosity. Instead, I want to see for another L'. How is this done?

Again, I know this is probably a silly question, but I'm really confused.
 
  • #6
The number ## N_s=\Delta N ## that winds up in each bin of width ## \Delta X ## (a bin is often a section of solid angle, in which case it is designated ## \Delta \Omega ##) in a scattering experiment can be considered the number of successes of a binomial-type variable, where each scattered particle is a trial, with a probability ## p_s ## and ## q_s=1-p_s ## for each bin; ## p_s ## is normally different for each bin ## s ##.
For the binomial distribution, the mean of ##N_s ## is ##\bar{N}_s=Np_s ## and variance ## \sigma_{N_s}^2=N p_s q_s ##, where this ## \sigma_{N_s} ##, (the statistical standard deviation), has a totally different meaning from the cross-section ## \sigma ##, which is a target area. I don't know if this helps to solve what puzzles you, but it might.
 
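The binomial bookkeeping above can be checked numerically. A small sketch, with ##N## and ##p_s## chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

N, p_s = 10_000, 0.03                 # trials and a hypothetical bin probability
draws = rng.binomial(N, p_s, size=200_000)

# Binomial mean N*p_s and standard deviation sqrt(N*p_s*q_s); for small p_s
# this standard deviation is close to the Poisson value sqrt(N_s) with N_s ~ N*p_s
mean_gap = abs(draws.mean() - N * p_s)                       # ~0, mean near 300
std_gap = abs(draws.std() - np.sqrt(N * p_s * (1 - p_s)))    # ~0, std near 17.06
```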
  • #7
A follow-on to the above: let ## L ## be the number of incident particles per unit area in the beam, integrated over time (the integrated luminosity).
Then ## \bar{N}_s=p_s N=L (\frac{d \sigma}{d X})(\Delta X_s) ## , where ## \frac{d \sigma}{d X}=(dN/dX)/(dN/d \sigma) ##, (evaluated at position ## s ##),
and with ## N=L \sigma ##,
we get ## p_s=(1/\sigma)(\frac{d \sigma}{d X}) (\Delta X_s) ##.
It looks like the experimental ## p_s/\Delta X_s ## is what you are wanting on your y-axis.
Experimental ## p_s=N_s/N ##. With ## \sigma_{N_s}=\sqrt{N p_s(1-p_s)} \approx \sqrt{N_s} ##, we have the uncertainty ## \Delta p_s \approx \sqrt{N_s}/N \approx \sqrt{p_s/N} ##. With a little algebra, (## N=\sigma L ##), we can see how changing ## L ## will affect the uncertainty in ## p_s/\Delta X_s ##.
##\Delta (p_s/ \Delta X_s) \approx \sqrt{(1/\sigma)(\frac{d \sigma}{d X})/(N \Delta X_s)} ##, and
with ## N=\sigma L ##, the uncertainty is proportional to ## 1/\sqrt{L} ##.

Note: Your y-axis has ## p_s/ \Delta X_s=N_s/(N \, \Delta X_s) ##.
 
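The ## 1/\sqrt{L} ## behavior derived above can be verified directly: scale the expected bin contents by a luminosity factor and compare the uncertainties on ## p_s/\Delta X_s ##. A sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical expected bin contents N_s at a reference luminosity L
counts_L = np.array([120.0, 340.0, 510.0, 280.0, 90.0])
width = 0.5            # common bin width Delta X_s
k = 100.0              # luminosity scale factor, L' = k * L

def norm_uncertainty(counts, width):
    """Uncertainty on p_s / Delta X_s = N_s / (N * Delta X_s), using sqrt(N_s)."""
    N = counts.sum()
    return np.sqrt(counts) / (N * width)

err_L = norm_uncertainty(counts_L, width)
err_kL = norm_uncertainty(k * counts_L, width)

# Central values N_s/(N*Delta X_s) are unchanged by k, but the
# uncertainties shrink by exactly 1/sqrt(k)
print(np.allclose(err_kL, err_L / np.sqrt(k)))  # True
```

Increasing the luminosity by a factor of 100 therefore leaves the plotted distribution where it is and shrinks every error bar by a factor of 10, which matches the discussion below.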
  • #8
Charles Link said:
It looks like the experimental ## p_s/\Delta X_s ## is what you are wanting on your y-axis. [...] With ## N=\sigma L ##, the uncertainty is proportional to ## 1/\sqrt{L} ##.
Alright, I think I got it now. But just to make sure, scaling the uncertainty to some luminosity means changing N, which will then change ##p_s/\Delta X_s##. Is that how I should interpret it?
 
  • #9
Changing ## N ## will not change ## p_s/\Delta X_s ##, but it will change the uncertainty ## \Delta (p_s/\Delta X_s) ##. (You will get a more precise experimental ## p_s/ \Delta X_s ##, with a larger ## N ##, but the actual theoretical value is unchanged).

Increasing ## N ## by a factor of 100 will lower the uncertainty ## \Delta (p_s/\Delta X_s) ## by a factor of 10. Since ## N=\sigma L ##, increasing ## N ## by a factor of 100 is the same as increasing ## L ## by a factor of 100.

This should be readily apparent if you followed the last couple of calculations, where it was shown that the uncertainty is proportional to ## 1/\sqrt{L} ##.
 
  • #10
Charles Link said:
Changing ## N ## will not change ## p_s/\Delta X_s ##, but it will change the uncertainty ## \Delta (p_s/\Delta X_s) ##. [...]
Alright. It is clear now. Thanks!
 

FAQ: Normalized differential cross section

What is a normalized differential cross section?

A normalized differential cross section, ##\frac{1}{\sigma}\frac{d\sigma}{dX}##, describes how the probability of a particle interacting with a target is distributed over an observable X, such as an energy or a scattering angle. Dividing by the total cross section removes the dependence on the overall interaction rate, so that the distribution integrates to one.

How is a normalized differential cross section calculated?

A normalized differential cross section is calculated by dividing the differential cross section ##\frac{d\sigma}{dX}##, which describes the interaction rate at a specific value of the observable, by the total cross section ##\sigma##, which describes the rate integrated over the full range of the observable. The result is a probability density in the observable, whose integral over the full range equals one.

What is the significance of a normalized differential cross section?

A normalized differential cross section allows for comparisons between different experiments and different targets, as it takes into account the size and number of particles in the target. It also provides insight into the underlying physics of particle interactions.

How is a normalized differential cross section used in research?

A normalized differential cross section is used in research to study the properties of particles and their interactions with targets. It can also be used to test theoretical predictions and models of particle interactions.

Can a normalized differential cross section be measured experimentally?

Yes, a normalized differential cross section can be measured experimentally by analyzing the number of particles that interact with a target at different energies and angles. This data can then be used to calculate the normalized differential cross section.
