ChrisVer (Gold Member)
This could as well go in the statistics section, but I am looking at it from a particle physics point of view...
Why does adding systematic uncertainties worsen the expected upper limit on the signal strength?
I am trying to find where the flaw enters the following logic:
0. The model most analyses use is the following likelihood:
[itex]L( N_{obs} \,|\, b(\theta ) + \mu s(\theta ) ) = P(N_{obs} \,|\, b(\theta ) + \mu s(\theta ) )\, U(\mu) \prod_i Gaus(\theta_i \,|\, 0,1)[/itex]
Where [itex]N_{obs}[/itex] is the number of observed events, [itex]b/s[/itex] are the expected background/signal events, [itex]\theta_i[/itex] are the different nuisance parameters, and [itex]\mu[/itex] is called the signal strength. In a Bayesian approach one also has to feed in a prior distribution for the signal strength parameter, which is the [itex]U(\mu)[/itex]; let's take it to be uniform. [itex]P(x|n)[/itex] is the Poisson probability to get x observed events given an expectation of n, and [itex]Gaus[/itex] is a way to represent the variation of the nuisance parameters (given that you have symmetric errors).
1. To get the expected limits, one sets [itex]N_{obs}=N_{exp}=b[/itex].
2. Having done that, one can start varying the background+signal uncertainties [itex]\theta_{stat}[/itex] (+[itex]\theta_{sys}[/itex]) [these uncertainties don't affect the signal and the background in the same way].
3. For each varied result, one finds the [itex]\mu[/itex] for which [itex]b' + \mu s' = N_{obs}[/itex].
4. Doing that many times, you get a distribution for [itex]\mu[/itex] (after marginalizing over the uncertainties), which is called the posterior pdf.
5. From the [itex]\mu[/itex] distribution, take the 95% quantile point.
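(To spell out what steps 0 and 4-5 amount to: the marginalized posterior is [itex]p(\mu \,|\, N_{obs}) \propto U(\mu) \int P(N_{obs} \,|\, b(\theta ) + \mu s(\theta )) \prod_i Gaus(\theta_i \,|\, 0,1) \, d\theta[/itex], and [itex]\mu_{.95}[/itex] is the point where [itex]\int_0^{\mu_{.95}} p(\mu \,|\, N_{obs}) \, d\mu = 0.95[/itex].)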
Now, for some reason, adding nuisance parameters (such as [itex]\theta_{sys}[/itex] on top of the statistical ones) moves [itex]\mu_{.95}[/itex] higher.
Is that because the uncertainties are not the same for the background and the signal?
Intuitively I can see how, e.g., subtracting a background with larger uncertainties from the observed result gives a less clear picture of how much signal you can allow in the game... but I don't see where this fits into the above logic.
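To make the question concrete, here is a minimal toy sketch of steps 1-5 in Python (all numbers are made up, and for simplicity I put a single Gaussian systematic on the background normalization only, so the background and the signal are deliberately not affected in the same way):

[code]
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

b0, s0 = 100.0, 10.0     # made-up nominal background / signal yields
n_obs = int(b0)          # step 1: N_obs = N_exp = b
mu_grid = np.linspace(0.0, 5.0, 501)
dmu = mu_grid[1] - mu_grid[0]

def mu95(sigma_sys, n_samples=20000):
    """95% quantile of the marginalized posterior for mu (uniform prior).

    A single nuisance parameter theta ~ Gaus(0,1) scales the background,
    b(theta) = b0 * (1 + sigma_sys * theta); sigma_sys = 0 is the
    statistics-only case.
    """
    theta = rng.standard_normal(n_samples)
    b = np.clip(b0 * (1.0 + sigma_sys * theta), 1e-9, None)
    # steps 2-4: average the Poisson likelihood over theta for each mu
    like = np.array([poisson.pmf(n_obs, b + mu * s0).mean() for mu in mu_grid])
    post = like / (like.sum() * dmu)   # normalize; the uniform U(mu) cancels
    cdf = np.cumsum(post) * dmu        # step 5: 95% quantile of the posterior
    return mu_grid[np.searchsorted(cdf, 0.95)]

for sig in (0.0, 0.10, 0.20):
    print(f"sigma_sys = {sig:.2f} -> mu_95 = {mu95(sig):.2f}")
[/code]

At least in this toy, growing sigma_sys flattens the marginalized likelihood in [itex]\mu[/itex] and [itex]\mu_{.95}[/itex] moves up, even though [itex]N_{obs}[/itex] never changes.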