How can ISO 11929 norm help combine Poisson errors in low counting statistics?

To test such a hypothesis, you calculate the probability of obtaining the observed sample data under the assumption that the null hypothesis is true; this is known as the p-value. If the p-value is small, typically less than 0.05, the null hypothesis is rejected and you conclude that there is a significant difference between the two samples. In summary, the thread discusses hypothesis testing and parameter estimation for low-count (Poisson) data: how to combine asymmetrical Poisson errors, and what calculations can demonstrate a difference between two samples. The suggested approaches are to compare the means and standard deviations of the samples and, for counting measurements with a background, to consult the ISO 11929 norm.
  • #1
nbryan5
I have low counting stats and need to subtract background, account for efficiency, and divide by volume. How do I combine the asymmetrical (Poisson) errors?
 
  • #2
Exactly what kind of analysis are you looking at performing? Are you planning to test a hypothesis? Are you going to make a confidence interval for some parameter?
 
  • #3
I want to eventually test the hypothesis that one sample is greater than the controls, and whether two samples are different from each other. That part I am OK with, but I have to show all of my calculations for how I can mathematically prove the values are different.

I have small counts in 60 fields of view on a scope, and I was propagating error using Gaussian error propagation, which I now know is wrong. But what do I do with these asymmetrical error bars when I want to know sample (+/- error) minus control (+/- error)?
 
  • #4
I suggest you have another try at stating your question, unless you are writing to someone on the forum who already knows what kind of experiment you are doing.
 
  • #5
How do I subtract a Poisson background from a Poisson sample and propagate the error associated with each?
 
  • #6
nbryan5 said:
How do I subtract a Poisson background from a Poisson sample and propagate the error associated with each?

That isn't a description of an experiment. As far as I know, it isn't a description of a specific problem in statistics.
 
  • #7
I am counting the number of particles in 60 fields of view on a scope. I count three pieces of a filter for a sample and three pieces of a filter for a control. All of my counts in 60 fields of view are <50 and Poisson distributed.
 
  • #8
nbryan5 said:
I am counting the number of particles in 60 fields of view on a scope. I count three pieces of a filter for a sample and three pieces of a filter for a control. All of my counts in 60 fields of view are <50 and Poisson distributed.

Estimation and "proving a difference" are technically two different statistical tasks. Statistics doesn't actually "prove" a difference. There are statistical procedures that make a decision about whether a difference between two situations exists, but these procedures are not proofs; they provide evidence, not mathematical proof.

With respect to the task of estimation, are you trying to estimate the parameters of a Poisson distribution that would account for the difference between the counts on the control filters and the counts on the non-control filters?

With respect to the task of giving evidence for a difference (a task called hypothesis testing), how many different situations are there? Are all the non-controls from the same general situation (e.g. from the livers of rats treated with drug X), or are they from different situations (e.g. some from the livers of rats treated with drug X and some from the livers of rats treated with drug Y)?
 
  • #9
As others have hinted, you need to specify what you are trying to test in terms of parameters (this is what estimators do: they model the parameters with random variables, and you use those to make inferences), and you also need to state your assumptions and the kind of data you have.

If you are testing a difference of means, then you will basically be testing something like [itex]H_0: \lambda_1 = \lambda_2[/itex] (equivalently [itex]\lambda_1 - \lambda_2 = 0[/itex]) against [itex]H_1: \lambda_1 > \lambda_2[/itex], [itex]H_1: \lambda_1 \neq \lambda_2[/itex], or some other alternative.

To use a normal distribution for the mean you need a large sample size. If you are not confident that yours is large enough, then you need to derive the distribution of your estimator of [itex]\lambda_1 - \lambda_2[/itex], get an interval (using, say, the likelihood ratio test), and use that to test the hypothesis.

You can do this kind of thing in SAS or R. R is free and open source, and if you've done any statistical or mathematical programming it will be fairly straightforward; you can find it by searching for "R project".
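If the sample size is too small for the normal approximation, one standard exact alternative is the conditional ("binomial") test for two Poisson counts: under [itex]H_0: \lambda_1 = \lambda_2[/itex], the sample count given the total is binomially distributed. A minimal Python sketch under that assumption (the counts are made-up numbers for illustration; with equal exposures the binomial probability is 0.5):

```python
from math import comb

def poisson_two_sample_pvalue(n1, n2, t1=1.0, t2=1.0):
    """One-sided p-value for H1: lambda1 > lambda2.

    Under H0 lambda1 == lambda2, n1 given n = n1 + n2 is
    Binomial(n, p) with p = t1 / (t1 + t2), so the p-value
    is the binomial tail probability P(X >= n1).
    """
    n = n1 + n2
    p = t1 / (t1 + t2)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n1, n + 1))

# Hypothetical totals summed over the 60 fields of view:
print(poisson_two_sample_pvalue(43, 27))  # sample vs. control
```

With equal exposures this reduces to asking whether the split of the total count between sample and control is consistent with a fair coin, which avoids any normal approximation for small counts.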
 
  • #10
What everyone is asking, from a wholly different perspective: you seem to have an XY problem here. You did X and you think Y will solve it. The problem is that you are focusing on Y, assuming it will fix things, when we all need to get back to X and start there. Please tell us precisely what you did, what hypothesis you want to test, and, importantly, why. There are lots of smart folks here; it is virtually a given that one of them can help.

http://mywiki.wooledge.org/XyProblem
 
  • #11
nbryan5 said:
I want to eventually test the hypothesis that one sample is greater than the controls, and whether two samples are different from each other. That part I am OK with, but I have to show all of my calculations for how I can mathematically prove the values are different.

I have small counts in 60 fields of view on a scope, and I was propagating error using Gaussian error propagation, which I now know is wrong. But what do I do with these asymmetrical error bars when I want to know sample (+/- error) minus control (+/- error)?
The standard way of testing for significant difference is:
  1. Calculate the mean and standard deviation of both your samples. Call them m1, m2, s1 and s2. Assume that m1 is the mean of the sample you are interested in.
  2. Then calculate the mean and standard deviation of the total data set (both samples merged). Call them M and S
  3. State the null hypothesis: There is no significant difference
  4. Then calculate [itex]\frac{M - m_1}{S}[/itex]. This tells you how many standard deviations your sample mean is from the merged mean
  5. From that number, you can calculate the probability of the null hypothesis being true.
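The steps above can be sketched in Python with the standard library; the per-field counts below are made-up numbers for illustration (see the next post for a caveat about interpreting step 5):

```python
import statistics

# Hypothetical per-field particle counts (assumed numbers, 10 fields shown)
sample  = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]
control = [1, 2, 0, 3, 1, 2, 1, 0, 2, 1]

# Step 1: mean and standard deviation of each sample
m1, s1 = statistics.mean(sample), statistics.stdev(sample)
m2, s2 = statistics.mean(control), statistics.stdev(control)

# Step 2: mean and standard deviation of the merged data set
merged = sample + control
M, S = statistics.mean(merged), statistics.stdev(merged)

# Step 4: distance of the sample mean from the merged mean, in units of S
z = (M - m1) / S
print(m1, m2, z)
```

A negative z here means the sample mean sits above the merged mean; the magnitude is what gets compared against a normal (or other) reference distribution.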
 
  • #12
Svein said:
From that number, you can calculate the probability of the null hypothesis being true.
You can't calculate the probability that the null hypothesis is true.

You can only assume the null hypothesis is true and calculate the probability that a number computed from the data is in some subset of the real numbers.
 
  • #13
Stephen Tashi said:
You can't calculate the probability that the null hypothesis is true.

Sorry, sloppy formulation. I was going to be more specific, but I suddenly remembered that the data are assumed to follow a Poisson distribution - and I did not quite remember how to deal with that.
 
  • #14
You could have a look at the ISO 11929 norm "Determination of the characteristic limits (decision threshold, detection limit and limits of the confidence interval) for measurements of ionizing radiation -- Fundamentals and application". It treats more or less exactly the situation you are describing.
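In the simplest counting case (gross counts minus background counts, ignoring efficiency and volume factors), the ISO 11929 characteristic limits reduce to a decision threshold and an iteratively solved detection limit for the net count rate. A rough Python sketch of that simplified case follows; the counts and times are made-up numbers, and the default k values correspond to roughly 5% error probabilities:

```python
from math import sqrt

def decision_threshold(n0, t0, tg, k_alpha=1.645):
    """Decision threshold y* for a net count rate: gross counting
    time tg, background counts n0 observed in time t0.
    (Simplified: background rate estimated as n0 / t0.)"""
    # Uncertainty of the net rate assuming the true net rate is zero
    u0 = sqrt((n0 / t0) * (1.0 / tg + 1.0 / t0))
    return k_alpha * u0

def detection_limit(n0, t0, tg, k_alpha=1.645, k_beta=1.645, iterations=20):
    """Detection limit y#: smallest true net rate reliably detected,
    found by fixed-point iteration of y# = y* + k_beta * u(y#)."""
    y_star = decision_threshold(n0, t0, tg, k_alpha)
    y = y_star
    for _ in range(iterations):
        u_y = sqrt(y / tg + (n0 / t0) * (1.0 / tg + 1.0 / t0))
        y = y_star + k_beta * u_y
    return y

# Hypothetical: 12 background counts in 600 s, gross measurement also 600 s
print(decision_threshold(12, 600.0, 600.0))
print(detection_limit(12, 600.0, 600.0))
```

A measured net rate below the decision threshold is treated as "not detected"; the detection limit is what you quote as the sensitivity of the procedure.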
 

FAQ: How can ISO 11929 norm help combine Poisson errors in low counting statistics?

What is the concept of combining Poisson error?

The concept of combining Poisson error is a statistical method used to combine multiple independent measurements of a Poisson-distributed variable into a single estimate with a more precise measurement and a smaller margin of error. This is particularly useful in situations where a single measurement may not be enough to accurately represent the true value of the variable.

Why is combining Poisson error important in scientific research?

Combining Poisson error is important in scientific research because it allows for a more accurate and precise estimation of a variable's true value. This is especially useful in experiments or studies where the measurement of the variable may be prone to random error, and combining multiple measurements can help reduce the impact of this error on the overall results.

What are the assumptions of combining Poisson error?

The assumptions behind combining Poisson error are that the individual measurements are independent of each other, that the variable being measured follows a Poisson distribution, and that the measurements are made under comparable conditions (the same underlying rate per unit exposure). If these assumptions are not met, the combined result may not be accurate.

How does combining Poisson error work?

Combining Poisson error works by pooling the raw data: add up the individual counts and the corresponding exposures (counting times, fields of view, volumes). The combined rate estimate is the total count divided by the total exposure, and because a sum of independent Poisson counts is itself Poisson, its standard error is the square root of the total count divided by the total exposure. This yields an estimate with a smaller relative error than any of the individual measurements.
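As a small illustration of this pooling (the counts and exposures below are made-up numbers, e.g. three filter pieces each scanned over 60 fields of view):

```python
from math import sqrt

# Hypothetical independent counts and their exposures
counts    = [18, 25, 21]        # raw particle counts from three filter pieces
exposures = [60.0, 60.0, 60.0]  # fields of view counted per piece

# Pool raw counts and exposures; the total count is still Poisson,
# so its standard error is sqrt(total counts).
total_n = sum(counts)
total_t = sum(exposures)
rate = total_n / total_t           # combined rate per field of view
se = sqrt(total_n) / total_t       # standard error of the combined rate
print(rate, se)
```

Note that pooling raw counts is not the same as averaging the three individual rates weighted by their own errors; for Poisson data the pooled-count estimate is the one with the correct (and smaller) uncertainty.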

In what situations should combining Poisson error be used?

Combining Poisson error should be used in situations where there are multiple independent measurements of a Poisson-distributed variable and a more precise estimate is desired. This can be in various fields of research, such as biology, physics, and social sciences, where the measurement of a variable can be challenging and prone to random error.
