Combining probability distribution functions

In summary, the conversation revolves around comparing different measurement methods and combining the probability distributions of the error components of each method so that the overall uncertainties can be compared. Further clarification is needed on how these error components should be combined and on what "uncertainty" means in mathematical terms. Monte-Carlo simulation and the ISO guidelines are mentioned in relation to this topic.
  • #1
hermano
Hi,

I'm comparing different measurement methods. For each method I listed and derived an equation for every error component and calculated its probability distribution using the Monte-Carlo method (calculating each error 300,000 times, assuming a normal distribution of the input variable). However, the outcome of a Monte-Carlo simulation is a probability distribution for each error component under study. I want to combine these separate probability distribution functions per error component, for each measurement method, into an overall probability distribution function so that I can compare the uncertainty of each measurement method. How can I do this? Does anybody have a good reference?
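A minimal sketch of the per-component Monte-Carlo step described above, in Python/NumPy. The error expressions, variable names, and numerical widths are made-up placeholders for illustration, not the poster's actual equations:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N = 300_000  # number of Monte-Carlo draws per error component

# Hypothetical normally distributed input variables (placeholder widths).
temperature = rng.normal(loc=20.0, scale=0.5, size=N)   # degrees C
reading     = rng.normal(loc=0.0, scale=0.03, size=N)   # mm, reading error

# Hypothetical analytical expressions for two error components of a
# 1000 mm bar measured with a steel ruler (alpha = thermal expansion coeff.).
alpha = 11.5e-6                                          # 1/K, assumed value
err_thermal    = 1000.0 * alpha * (temperature - 20.0)   # mm
err_resolution = reading                                 # mm

# Each component now has its own empirical probability distribution:
for name, err in [("thermal", err_thermal), ("resolution", err_resolution)]:
    print(f"{name}: mean = {err.mean():.4g} mm, std = {err.std(ddof=1):.4g} mm")
```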
 
  • #2
hermano said:
Hi,

However, the outcome of an Monte-Carlo simulation is a probability distribution for each error component under study.

What do you mean by "error component"? Are you talking about the components of a vector?

I want to combine these separate probability distribution functions per error component for each measurement methods to come to an overall probability distribution function

What do you mean by "combine"? Are the "components" added together like vectors? - or like scalars? - or are they inputs to some non-linear scalar valued function?

such that I can compare the uncertainty of each measurement method. How can I do this? Anybody a good reference?

Does "uncertainty" mean the standard deviation of the measurement? If you simulated the distribution of some errors by Monte-Carlo, why didn't you also simulate the "combination" of these errors?
 
  • #3
Stephen Tashi said:
What do you mean by "error component"? Are you talking about the components of a vector?

By "error component" I mean the error source. For example, you measure the length of a bar. Then there are different error components/sources (or uncertainty components) which contribute to the total measurement uncertainty, such as the limited resolution of your ruler or the thermal expansion of the ruler under the influence of temperature.

What do you mean by "combine"? Are the "components" added together like vectors? - or like scalars? - or are they inputs to some non-linear scalar valued function?

No, each error component is calculated 100,000 times using a Monte-Carlo simulation, assuming a normal probability distribution for each error (before doing this, an analytical expression is derived for each error component and the width 'a' of the error interval is given). This gives a vector of 100,000 error values per error component. From these 100,000 values I can calculate the mean error, standard deviation, uncertainty, etc. My question is how I can combine these various standard deviations or uncertainties from the different error components into a global (total) uncertainty.
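If the error components are independent and simply add to form the total error (an assumption the following posts probe), the combination can be done directly on the Monte-Carlo sample vectors. A sketch with placeholder components and arbitrary widths:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
N = 100_000  # Monte-Carlo draws per error component

# Placeholder sample vectors for two error components (normally distributed,
# widths chosen arbitrarily for illustration).
err_a = rng.normal(0.0, 0.03, size=N)   # e.g. resolution error, mm
err_b = rng.normal(0.0, 0.05, size=N)   # e.g. thermal error, mm

# If the components are independent and additive, the total error samples
# are just the element-wise sum of the component samples.
err_total = err_a + err_b

# Standard deviation of the combined distribution, estimated two ways:
std_mc  = err_total.std(ddof=1)                            # from the summed samples
std_rss = np.sqrt(err_a.var(ddof=1) + err_b.var(ddof=1))   # root-sum-of-squares

print(f"Monte-Carlo std of sum : {std_mc:.4f} mm")
print(f"Root-sum-of-squares    : {std_rss:.4f} mm")  # agrees for independent errors
```

The root-sum-of-squares line is the special case of adding independent variances; the Monte-Carlo sum works whether or not the combined distribution is normal.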

Does "uncertainty" mean the standard deviation of the measurement? If you simulated the distribution of some errors by Monte-Carlo, why didn't you also simulate the "combination" of these errors?

No, uncertainty is not the standard deviation. You can calculate the uncertainty from the standard deviation, but it is not the same thing.
As for simulating the combination: the errors are independent, and while I have an analytical expression for each error separately, I have no expression for the combination of all the errors together.
 
  • #4
You didn't explain how the error "components" are to be combined. The example of the ruler suggests that they are added.

And you didn't define what you mean by "uncertainty".
 
  • #5
Stephen Tashi said:
You didn't explain how the error "components" are to be combined. The example of the ruler suggests that they are added.

And you didn't define what you mean by "uncertainty".

I want to calculate the total uncertainty. So I think you have to add them, but I am not really sure as they are independent. But that is the question of my whole problem. How do I have to "combine" the probability distributions from the various error components?

Uncertainty is the component of a reported value that characterizes the range of values within which the true value is asserted to lie. An uncertainty estimate should address error from all possible effects (both systematic and random) and, therefore, usually is the most appropriate means of expressing the accuracy of results. This is consistent with ISO guidelines.
 
  • #6
hermano said:
I want to calculate the total uncertainty. So I think you have to add them, but I am not really sure as they are independent.

I think you mean "whether they are independent".

Since you can't describe how the errors "combine", perhaps you should state the details of the actual problem so that someone can interpret it from that perspective.

But that is the question of my whole problem. How do I have to "combine" the probability distributions from the various error components?

If you can estimate the covariance of the errors, you can estimate the standard deviation of their sum, even if the errors are dependent.
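Spelled out, that is the standard variance identity for a sum (background, not stated in the thread):

$$\operatorname{Var}(X+Y) = \operatorname{Var}(X) + \operatorname{Var}(Y) + 2\operatorname{Cov}(X,Y), \qquad \sigma_{X+Y} = \sqrt{\sigma_X^2 + \sigma_Y^2 + 2\,\sigma_{XY}}$$

For independent errors the covariance term vanishes and the standard deviations add in quadrature.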

Uncertainty is the component of a reported value that characterizes the range of values within which the true value is asserted to lie. An uncertainty estimate should address error from all possible effects (both systematic and random) and, therefore, usually is the most appropriate means of expressing the accuracy of results. This is consistent with ISO guidelines.

That may be fine for ISO guidelines, but it doesn't define "uncertainty" in mathematical terms. You stated that uncertainty can be calculated from the standard deviation of the distribution of a measurement but you didn't specify how it would be calculated. Is "uncertainty" supposed to be some kind of "confidence interval"?
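For background (not something either poster states explicitly): in the ISO/GUM framework the reported uncertainty is tied to the standard deviation through a combined standard uncertainty and a coverage factor,

$$u_c = \sqrt{\sum_i u_i^2} \quad \text{(independent components)}, \qquad U = k\,u_c,$$

with $k \approx 2$ giving a coverage interval of roughly 95 % when the combined distribution is approximately normal.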
 

FAQ: Combining probability distribution functions

What is the purpose of combining probability distribution functions?

The purpose of combining probability distribution functions is to create a new probability distribution that reflects the likelihood of an event occurring based on multiple variables or factors.

What types of probability distribution functions can be combined?

Any type of probability distribution function can be combined, including normal, binomial, Poisson, and exponential distributions.

How is the combined probability distribution function calculated?

For independent variables, the combined (joint) probability distribution is obtained by taking the product of the individual probability distribution functions, i.e. by multiplying the probabilities of each outcome for every combination of variables. If the quantity of interest is a function of the variables, such as their sum, its distribution follows from the joint distribution; for a sum of independent variables the density is the convolution of the individual densities.
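As a concrete, illustrative example of that last point: the density of a sum of independent errors can be obtained by numerically convolving the discretized component densities. A NumPy sketch with arbitrary placeholder widths:

```python
import numpy as np

# Discretize two independent normal error densities on a common grid.
dx = 0.001
x  = np.arange(-1.0, 1.0 + dx, dx)
pdf_a = np.exp(-0.5 * (x / 0.03) ** 2) / (0.03 * np.sqrt(2 * np.pi))
pdf_b = np.exp(-0.5 * (x / 0.05) ** 2) / (0.05 * np.sqrt(2 * np.pi))

# The density of the sum of two independent variables is the convolution
# of their densities (numerically: discrete convolution scaled by dx).
pdf_sum = np.convolve(pdf_a, pdf_b) * dx
x_sum   = 2 * x[0] + np.arange(len(pdf_sum)) * dx   # grid of the convolved density

# Sanity check: the variances should add (0.03**2 + 0.05**2).
mean_sum = np.sum(x_sum * pdf_sum) * dx
var_sum  = np.sum((x_sum - mean_sum) ** 2 * pdf_sum) * dx
print(f"std of sum = {np.sqrt(var_sum):.4f}  (expected {np.hypot(0.03, 0.05):.4f})")
```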

What is the difference between combining probability distribution functions and adding probabilities?

Combining probability distribution functions takes into account the joint behaviour of multiple variables, whereas simply adding probabilities treats each variable in isolation and is only valid for mutually exclusive outcomes. Working with the combined distribution is therefore a more accurate way of predicting the likelihood of an event that depends on several variables.

Can combining probability distribution functions be used in real-life situations?

Yes, combining probability distribution functions can be used in many real-life situations where multiple variables affect the probability of an event occurring. This can include predicting the outcome of medical treatments, financial investments, and weather forecasts.
