Multiply Probabilities vs. Sum of the Squares

In summary: if two events are independent, you multiply their probabilities to get the probability that both occur. The sum of the squares of the standard deviations is a different tool, used to calculate the variance of the sum of uncorrelated random variables. Summing the squares of the probabilities themselves is not a standard operation, as it can produce a number greater than 1, which is not a valid probability.
  • #1
jaydnul
Hi! I'm getting confused by these two things. If I have two uncorrelated probabilistic events and I want to know the probability of seeing them both land beyond 3.3 sigma (for example), do I multiply the probabilities, 0.001 * 0.001, or do I take the sum of the squares, sqrt(0.001^2 + 0.001^2)? I assume it is the former, but can you explain in what context we would use the sum of the squares instead?
 
  • #2
jaydnul said:
If I have two uncorrelated probabilistic events and I want to know the probability of seeing them both land beyond 3.3 sigma (for example), do I multiply the probabilities, 0.001 * 0.001, or do I take the sum of the squares, sqrt(0.001^2 + 0.001^2)?
Uncorrelated and independent are different things. If they are independent, then you multiply the probabilities. Correlation only captures one specific kind of dependence (linear dependence), so variables can be completely dependent and still uncorrelated. A classic example: if X is standard normal and Y = X^2, then Y is fully determined by X, yet X and Y have zero correlation.
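A quick numerical check of both claims (a minimal Python sketch using NumPy; the 0.3 event probability is an arbitrary choice so that a modest sample size suffices, since joint events at 0.001 * 0.001 would be far too rare to see):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two independent events, each with probability 0.3:
# the joint frequency should match the product 0.3 * 0.3 = 0.09.
a = rng.random(n) < 0.3
b = rng.random(n) < 0.3
print("P(A and B) ~", np.mean(a & b))           # ~0.09

# Dependent but uncorrelated: Y = X**2 is fully determined by X,
# yet the sample correlation between X and Y is near zero.
x = rng.standard_normal(n)
y = x**2
print("corr(X, Y) ~", np.corrcoef(x, y)[0, 1])  # ~0.0
```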

jaydnul said:
explain in what context we would use the sum of the squares instead?
I don't know of a context where you use the sum of the squares of the probabilities. That could easily lead to a number greater than 1 (e.g. 0.8^2 + 0.8^2 = 1.28), which couldn't be a probability.

Often you use the sum of the squares of the standard deviations, for example to calculate the variance of the sum of two uncorrelated random variables: Var(X + Y) = Var(X) + Var(Y), so the standard deviation of the sum is sqrt(sd(X)^2 + sd(Y)^2).
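A quick check of that identity (a minimal Python sketch with NumPy; the particular normal distributions are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Two independent (hence uncorrelated) random variables.
x = rng.normal(loc=0.0, scale=2.0, size=n)  # sd = 2
y = rng.normal(loc=5.0, scale=3.0, size=n)  # sd = 3

# Var(X + Y) should be 2^2 + 3^2 = 13,
# so sd(X + Y) should be sqrt(13) ~ 3.606.
print("Var(X + Y) ~", np.var(x + y))
print("sd(X + Y)  ~", np.std(x + y))
```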
 
  • #3
Ok thanks that makes sense!
 

FAQ: Multiply Probabilities vs. Sum of the Squares

What is the difference between multiplying probabilities and summing the squares of probabilities?

Multiplying probabilities gives the joint probability that two independent events both occur. Summing squares, by contrast, is not something you normally do with probabilities at all: it shows up in statistics when combining standard deviations (the variance of a sum of uncorrelated random variables is the sum of the individual variances) or when aggregating errors in squared-error metrics.

When should I use the product of probabilities?

You should use the product of probabilities when you are dealing with independent events and you want to determine the likelihood of both events occurring simultaneously. For example, if you want to know the probability of flipping two heads in a row with a fair coin, you would multiply the probability of flipping one head (0.5) by itself: 0.5 * 0.5 = 0.25.
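That product can be sanity-checked by simulation (a minimal Python sketch; two fair coin flips per trial):

```python
import random

random.seed(42)
trials = 100_000

# Count trials in which both independent fair-coin flips come up heads.
both_heads = sum(
    random.random() < 0.5 and random.random() < 0.5
    for _ in range(trials)
)
print(both_heads / trials)  # ~0.25 = 0.5 * 0.5
```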

Can you give an example of a sum of squares in statistics?

One common example is the variance of a discrete random variable. Note that it is not the probabilities themselves that are squared: if outcomes x1, x2, ..., xn occur with probabilities p1, p2, ..., pn, and μ = Σ(pi * xi) is the mean, then the variance is the probability-weighted sum of squared deviations from the mean, Var(X) = Σ(pi * (xi - μ)^2).
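For a concrete instance (a minimal Python sketch using a fair six-sided die, chosen purely for illustration):

```python
# Fair six-sided die: outcomes 1..6, each with probability 1/6.
outcomes = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

# Mean: mu = sum(p_i * x_i)
mu = sum(p * x for p, x in zip(probs, outcomes))

# Variance: probability-weighted sum of squared deviations from the mean.
var = sum(p * (x - mu) ** 2 for p, x in zip(probs, outcomes))

print(mu)   # 3.5
print(var)  # ~2.9167 (= 35/12)
```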

Why is multiplying probabilities important in statistics?

Multiplying probabilities is crucial in statistics because it allows us to determine the likelihood of multiple independent events occurring together. This concept is foundational in various statistical methods, including hypothesis testing, Bayesian inference, and constructing probability models.

How does a sum of squares relate to error measurement?

Sums of squares underlie error metrics like the Mean Squared Error (MSE) and Root Mean Squared Error (RMSE), which quantify the difference between predicted and observed values. Here it is the errors that are squared, not probabilities: by squaring each error, averaging, and (for RMSE) taking the square root, we obtain a single value that represents the overall error magnitude and gives insight into the accuracy of a predictive model.
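A minimal Python sketch of those two metrics (the observed and predicted arrays are made-up numbers for illustration):

```python
import math

# Hypothetical observed values and model predictions.
observed  = [3.0, 5.0, 2.5, 7.0]
predicted = [2.8, 5.4, 2.1, 6.5]

# MSE: mean of the squared errors (predicted minus observed).
mse = sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed)

# RMSE: square root of the MSE, in the same units as the data.
rmse = math.sqrt(mse)

print("MSE  =", mse)
print("RMSE =", rmse)
```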
