Confused about error propagation

In summary, "Confused about error propagation" addresses the challenges and principles associated with calculating how uncertainties in measurements affect the results of computations. It explains the difference between absolute and relative errors, the methods for combining uncertainties in different mathematical operations, and highlights the importance of maintaining clarity and consistency in reporting error estimates. The article aims to demystify the process and provide practical guidance for accurate error propagation in scientific work.
  • #1
kelly0303
Hello! I am confused about the results I am getting for an apparently simple situation. I have 2 measurements (counts), call them ##S_+## and ##S_-##. Based on these I build an asymmetry defined as:

$$A = \frac{S_+-S_-}{S_++S_-}$$

The parameter I need to extract experimentally, call it ##x##, behaves like ##\frac{dx}{x} = \frac{dA}{A}##, where ##dx## and ##dA## are the uncertainties on ##x## and ##A## (ignore systematic uncertainties for now). ##x## is fixed (given by the physics process I am studying) and let's say I want to extract ##x## with ##10\%## relative uncertainty, i.e. ##\frac{dx}{x} = \frac{dA}{A} = \frac{1}{10}##. I have 2 situations (I will give the actual numbers I get). In the first one I have:

$$S_+ = 0.0484 N$$
$$S_- = 0.0324 N$$
where ##N## is the number of initial events and ##S_+## and ##S_-## are the events I am actually measuring. In the second case I have:

$$S_+ = 0.0085 N$$
$$S_- = 0.0027 N$$

Using the formula above, I get ##A_1 = 0.198## in the first case and ##A_2 = 0.519## in the second. If I do an error propagation, I end up with the formula:

$$dA = \frac{2}{(S_++S_-)^2}\sqrt{S_+S_-^2+S_+^2S_-}$$
from which I get ##dA_1 = \frac{3.448}{\sqrt{N}}## and ##dA_2 = \frac{8.083}{\sqrt{N}}##. So I get ##\frac{dA_1}{A_1} = \frac{17.4}{\sqrt{N}}## and ##\frac{dA_2}{A_2} = \frac{15.6}{\sqrt{N}}##, which means that in the first case I need about ##N_1 = 30276## events and in the second case I need ##N_2 = 24336## events. But this doesn't make sense to me. For a fixed ##N##, the number of events I am actually measuring in the first case is about an order of magnitude bigger than in the second case. Given that I am only looking at the statistical uncertainty, I would expect to need ~100 times more events in the second case to reach the same uncertainty on the parameter of interest, i.e. ##N_2 \sim 100N_1##. What am I doing wrong? Shouldn't I be using that error propagation on ##A##? What should I do such that the uncertainty on ##x## reflects the fact that in the second case I have much lower statistics? Thank you!
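A minimal Python sketch of this calculation (assuming ##\sigma_{S_\pm} = \sqrt{S_\pm}##; the helper name `required_N` is just for illustration):

```python
import numpy as np

def required_N(f_plus, f_minus, target=0.1):
    """f_plus, f_minus are the fractions S+/N and S-/N; assumes sigma_S = sqrt(S)."""
    A = (f_plus - f_minus) / (f_plus + f_minus)
    # dA = 2/(S+ + S-)^2 * sqrt(S+ S-^2 + S+^2 S-) with S = f*N  =>  dA = c/sqrt(N)
    c = 2.0 * np.sqrt(f_plus * f_minus * (f_plus + f_minus)) / (f_plus + f_minus) ** 2
    N = (c / (target * A)) ** 2          # solve c/sqrt(N) = target * A
    return A, c, N

for label, fp, fm in [("case 1", 0.0484, 0.0324), ("case 2", 0.0085, 0.0027)]:
    A, c, N = required_N(fp, fm)
    print(f"{label}: A = {A:.3f}, dA = {c:.3f}/sqrt(N), N for 10% ~ {N:.0f}")
```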
 
  • #2
If ##S = k \cdot N## then ##\sigma_S = k\sqrt{N}##. It is not ##\sqrt{k \cdot N}##.

I think you missed this.
 
  • #3
gleem said:
If ##S = k \cdot N## then ##\sigma_S = k\sqrt{N}##. It is not ##\sqrt{k \cdot N}##.

I think you missed this.
Thank you for this. I am not sure I get that. For example, in the first case, for ##N = 30276## I have ##S_+ = 1465##. Shouldn't the uncertainty be given by the events I am actually measuring, i.e. ##\sqrt{S_+} = 38##? Just to clarify a bit, in my case ##N## is well known (i.e. there is no uncertainty on ##N##). The coefficient ##k## in this case is actually a binomial probability: for each of the ##N## events, I have probability ##k## of getting a count. On average I have ##k\cdot N## detected events.
 
  • #4
kelly0303 said:
Just to clarify a bit, in my case N is well known (i.e. there is no uncertainty on N)
I don't understand. How is N determined, if it is a number of events, such that you get the same number when you repeat the experiment?

Edit: Also how is the coefficient of N determined?
 
  • #5
gleem said:
I don't understand. How is N determined, if it is a number of events, such that you get the same number when you repeat the experiment?

Edit: Also how is the coefficient of N determined?
I am sorry for the confusion. I will try to give a bit more detail (I should have done this from the start). We have a two-level quantum system (an atom in this case), which we prepare in a given state. After a fixed time we measure the probability of the system being in the other state. This transition can be described by a binomial distribution with probability ##p<<1## (in practice this is done by detecting an ion after a given amount of time). We do this one atom at a time: we prepare an atom in the initial state (this is done with basically 100% efficiency), then wait, then try to detect an ion (if the transition didn't happen we detect nothing). We repeat this ##N## times, so ##N## is exactly known (i.e. there is no uncertainty associated with ##N##), as it is given simply by how many times we repeat this initial state preparation. Then we do the measurement and extract ##S_+## as the number of detected events. We then change some experimental parameters, redo the experiment another ##N## times, and define ##S_-## as the number of detected events in that configuration. Depending on the experimental setup, we can increase or decrease the values of ##S_+## and ##S_-##. From this point, I do the analysis in the original post.

Just for completeness, I have the following formula:

$$S_{\pm} = N(a^2 \pm ax)$$
where ##a## is an experimental parameter, and we can assume ##a>>x##.
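A quick Monte Carlo sketch of this setup (Python; the particular values of ##a## and ##x## below are not given in the thread -- they are back-solved from the counts in the first post, purely for illustration):

```python
import numpy as np

# N preparations per configuration, detection probabilities p = a^2 +/- a*x,
# counts S+ and S- binomially distributed.
rng = np.random.default_rng(0)
N, a, x = 30_000, 0.201, 0.0398            # gives p+ ~ 0.0484, p- ~ 0.0324
p_plus, p_minus = a**2 + a * x, a**2 - a * x

S_plus = rng.binomial(N, p_plus, size=10_000)     # 10,000 pseudo-experiments
S_minus = rng.binomial(N, p_minus, size=10_000)
A = (S_plus - S_minus) / (S_plus + S_minus)

print(f"mean A = {A.mean():.3f}, scatter of A = {A.std():.4f}")
print(f"propagated dA (formula from the first post) = {3.449 / np.sqrt(N):.4f}")
```

The scatter of the simulated asymmetries agrees with the propagated ##dA## up to the small binomial-vs-Poisson difference (a few percent here).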
 
  • #6
So you want the uncertainty in S/N?
 
  • #7
gleem said:
So you want the uncertainty in S/N?
What I need is the uncertainty on x.
 
  • #8
How do you define x?
 
  • #9
gleem said:
How do you define x?
I added a formula to my post above
 
  • #10
a is fixed?
 
  • #11
gleem said:
a is fixed?
For a given experimental run (i.e. in order to perform the N measurements), yes (and we can assume it doesn't have any uncertainty associated with it).
 
  • #12
So ##x = (S/N - a^2)/a##

I would say that

##\sigma_x = \sigma_S/(aN)##
 
  • #13
gleem said:
So ##x = (S/N - a^2)/a##

I would say that

##\sigma_x = \sigma_S/(aN)##
What would ##\sigma_S## be in this case?
 
  • #14
##S## is the number of detected events with a binomial distribution, thus ##\sigma_S = \sqrt{S}##.
 
  • #15
Got to go, will be back in about an hour.
 
  • #16
gleem said:
##S## is the number of detected events with a binomial distribution, thus ##\sigma_S = \sqrt{S}##.
Thanks a lot for the help, and no worries! So doing what you suggested, given that ##a>>x##, we can assume ##S = Na^2##, so ##\sigma_S = \sqrt{N}a##. Then, using the formula you provided, we get ##\sigma_x = \sigma_S/(aN) = \sqrt{N}a/(aN) = \frac{1}{\sqrt{N}}##. This is basically consistent with what I am getting, but I am still not sure I understand conceptually why. By increasing ##a## we can detect more events (for fixed ##N##), so I would expect a reduced uncertainty for higher values of ##a## (still with ##1>a>>x##). Why doesn't the final formula for the uncertainty on ##x## depend on ##a## at all?
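One way to look at this numerically (a sketch, not from the thread): with ##x = aA## we have ##dx = a\,dA##, and plugging ##S_\pm = N(a^2 \pm ax)## into the propagation formula from the first post gives ##dx \approx 1/\sqrt{2N}## whenever ##a >> x##. Increasing ##a## does raise the counts and shrink ##dA##, but it shrinks the asymmetry ##A = x/a## by the same factor, so ##dx## stays essentially the same:

```python
import numpy as np

# Keep N and x fixed, vary a, and propagate dA from the asymmetry formula to dx = a*dA.
def dx_of_a(a, x, N):
    Sp, Sm = N * (a**2 + a * x), N * (a**2 - a * x)
    dA = 2.0 / (Sp + Sm)**2 * np.sqrt(Sp * Sm * (Sp + Sm))
    return a * dA

N, x = 30_000, 0.01
for a in (0.05, 0.1, 0.2, 0.4):
    print(f"a = {a:.2f}: dx = {dx_of_a(a, x, N):.5f}   (1/sqrt(2N) = {1/np.sqrt(2*N):.5f})")
```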
 
  • #17
Hi,

I reproduced your results (with difficulty -- as I'm somewhat rusty).

I think it's just a numerical issue. Observe that, with ##\eta=S_-/S_+##, we have $$A = {1-\eta \over 1+\eta}.$$ Such a two-step calculation (via ##\eta##) can be done for the error propagation as well -- with, of course, the same result. It shows that ##\Delta \eta/\eta## is indeed quite big in situation 2 (see below). But then, in the propagation to ##dA\over A##, that is mitigated drastically by the value of ##\eta##. As a numerical example, let me take ##N = 30000## and calculate the errors in ##\eta## and ##A##:

Situation 1:
##S_+ = 1452 ## and -- assuming Poisson statistics -- ##\ \ \Delta S_+ = \sqrt{1452} =38 ##
##S_- = \ \ 972\pm 31 ##

Situation 2:
##S_+ = \ \ 255\pm 16##
##S_- = \ \ \ \ 81\pm 9 ##

Your one-step yields ##dA_1=0.020, \ \ dA_2 = 0.047## i.e. relative errors 10% and 9%, respectively.

For the two-step I get ##\eta_1 = 0.669 \pm 0.028, \ \ \eta_2 = 0.318 \pm 0.041##, so relative errors of 4% and 13%! (as our intuition expected)

But then, with $$ {dA\over d\eta}={2\over (1+\eta)^2} \Rightarrow {dA\over A} = {2\eta\over 1-\eta^2} {d\eta\over \eta}$$ the relative errors in A are the same 10% and 9% as above.

Note that, for clarity, I show errors with too much accuracy -- the error in the error usually doesn't justify more than one digit accuracy (unless the first digit is a 1)
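(A short Python sketch of the same two-step calculation, for ##N = 30000##:)

```python
import numpy as np

N = 30_000
for f_plus, f_minus in [(0.0484, 0.0324), (0.0085, 0.0027)]:
    Sp, Sm = f_plus * N, f_minus * N
    dSp, dSm = np.sqrt(Sp), np.sqrt(Sm)                   # Poisson errors on the counts

    eta = Sm / Sp
    deta = eta * np.sqrt((dSp / Sp)**2 + (dSm / Sm)**2)   # relative errors in quadrature

    A = (1 - eta) / (1 + eta)
    dA = 2 / (1 + eta)**2 * deta                          # |dA/d eta| = 2/(1+eta)^2
    print(f"eta = {eta:.3f} +- {deta:.3f} ({deta/eta:.0%}),  A = {A:.3f} +- {dA:.3f} ({dA/A:.0%})")
```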

 
  • #18
Be careful: ##\sigma_S \neq a\sqrt{N}##

##S \cong aNx##

So ##\sigma_S = aN\sigma_x##
 
  • #19
gleem said:
Be careful: ##\sigma_S \neq a\sqrt{N}##

##S \cong aNx##

So ##\sigma_S = aN\sigma_x##
Sorry I got lost. You said ##\sigma_S = \sqrt{S}## (which makes sense). Also we have that ##S = N(a^2+ax)## and ##a>>x##, so shouldn't ##S \cong Na^2 ## (as ##a^2>>ax##) and thus ##\sigma_S = a\sqrt{N}##?
 
  • #20
##S = N(a^2 + ax)##

##N## and ##a## are constants.

So an incremental change in ##S## is ##\Delta S = aN\,\Delta x##, from which we may take ##\sigma_S = aN\,\sigma_x##.
 
  • #21
It just occurred to me that the "A" you defined is independent of the number of trials N and the parameter x that you seek, or am I missing something?
 
  • #22
gleem said:
##S = N(a^2 + ax)##

##N## and ##a## are constants.

So an incremental change in ##S## is ##\Delta S = aN\,\Delta x##, from which we may take ##\sigma_S = aN\,\sigma_x##.
I agree with this. What I don't understand is why ##S\cong aNx##?

gleem said:
It just occurred to me that the "A" you defined is independent of the number of trials N and the parameter x that you seek, or am I missing something?
So "A" is obtained experimentally as described above. But the formula it has does involve x:

$$A = \frac{S_+-S_-}{S_++S_-} = \frac{N(a^2+ax)-N(a^2-ax)}{N(a^2+ax)+N(a^2-ax)} = \frac{2ax}{2a^2}=\frac{x}{a}$$
So by measuring A and the associated uncertainty I get x and its uncertainty. And yes, A doesn't depend on ##N##, but the uncertainty on A depends on ##N##.
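(The algebra can be checked symbolically, e.g. with a minimal sympy sketch:)

```python
import sympy as sp

N, a, x = sp.symbols("N a x", positive=True)
S_plus, S_minus = N * (a**2 + a * x), N * (a**2 - a * x)
A = (S_plus - S_minus) / (S_plus + S_minus)
print(sp.simplify(A))        # -> x/a
```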
 
  • #23
In my post above, I inadvertently assumed ##a<<x## so that ##a^2## could be ignored; my error, sorry for dragging this out. But even if ##a>>x## you may still have a problem.

##A## depends on ##a## and ##x##, where ##x<<a##, and ##dA = dx/a##. ##A## is a small number.

Your data are the ##S##'s: ##S_\pm = (a^2 \pm ax)N##, so ##dS = aN\,dx##, or ##dx = dS/(aN)##

So ##dA = dS/(a^2N)##

Therefore ##dA/A = dS_\pm/(S_\pm - a^2N)##, which you want to equal 0.1.

##dS_\pm = \sigma_{S_\pm}##

##N = (S_\pm - 10\sigma_{S_\pm})/a^2##

Please fill in the missing steps yourself to see if I did not make an error.

Does this work for your data?
 

FAQ: Confused about error propagation

What is error propagation?

Error propagation refers to the process of determining the uncertainty of an output quantity based on the uncertainties in the input quantities and the mathematical operations that relate them. It helps in understanding how measurement errors affect the final result.

Why is error propagation important?

Error propagation is crucial because it allows scientists and engineers to quantify the reliability of their results. By understanding how errors combine and affect the final outcome, one can make informed decisions about the precision and accuracy of measurements and calculations.

How do you propagate errors in addition and subtraction?

For addition and subtraction, the uncertainties of the quantities involved are combined in quadrature, i.e. using the square root of the sum of the squares of the individual uncertainties (assuming the uncertainties are independent). Mathematically, if \(z = x + y\) or \(z = x - y\), then the uncertainty in \(z\) is given by \(\sqrt{(\Delta x)^2 + (\Delta y)^2}\), where \(\Delta x\) and \(\Delta y\) are the uncertainties in \(x\) and \(y\), respectively.

How do you propagate errors in multiplication and division?

For multiplication and division, the relative (or fractional) uncertainties of the quantities are combined in quadrature (again assuming independent uncertainties). If \(z = x \cdot y\) or \(z = \frac{x}{y}\), the relative uncertainty in \(z\) is given by \(\sqrt{\left(\frac{\Delta x}{x}\right)^2 + \left(\frac{\Delta y}{y}\right)^2}\), where \(\Delta x\) and \(\Delta y\) are the uncertainties in \(x\) and \(y\), respectively.
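A minimal Python illustration of both rules, with made-up numbers:

```python
import math

# Assuming independent uncertainties; the values below are invented for the example.
x, dx = 10.0, 0.3
y, dy = 4.0, 0.2

dz_sum = math.hypot(dx, dy)                 # absolute uncertainty of x + y or x - y
rel_dz_prod = math.hypot(dx / x, dy / y)    # relative uncertainty of x*y or x/y

print(f"x + y = {x + y} +- {dz_sum:.2f}")
print(f"x * y = {x * y} +- {x * y * rel_dz_prod:.2f}  ({rel_dz_prod:.1%} relative)")
```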

What is the difference between absolute and relative uncertainty?

Absolute uncertainty is the actual amount of uncertainty in a measurement, expressed with the same units as the measurement itself. Relative uncertainty, on the other hand, is the ratio of the absolute uncertainty to the measured value, often expressed as a percentage. Relative uncertainty provides a sense of the size of the uncertainty in relation to the value of the measurement.
