What Is the Difference Between Expected Value Calculation Methods?

In summary, expected values can be calculated by different methods depending on the distribution involved. For a general discrete probability distribution, the expected value is the sum of each outcome multiplied by its probability. For a binomial distribution, the expected value simplifies to the number of trials multiplied by the probability of success, E(X) = np. This shortcut is derived from the same summation procedure, so a binomial expected value can also be found from a table of outcomes; the formula simply saves work. Use the general sum for an arbitrary distribution and the np shortcut when the variable is binomial.
  • #1
Peter G.
Hi,

I have to work with Expected Values and I am extremely confused over the following:

In the part of my book that covers probability distributions, I calculate the expected value like this:

Let's say we toss a coin twice. We can get 0 heads, 1 head, or 2 heads.

I then draw a probability distribution table and the expected value is the sum of the product of the number of heads and their respective probabilities.
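For example, assuming a fair coin, the table would be P(X = 0) = 1/4, P(X = 1) = 1/2, P(X = 2) = 1/4, which gives

[tex]E(X)=0\cdot\tfrac{1}{4}+1\cdot\tfrac{1}{2}+2\cdot\tfrac{1}{4}=1[/tex]

so on average we expect one head.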

When I get to the part about binomial distributions, in order to get the expected value all I have to do is multiply n by p, where n is the number of trials and p is the probability of success.

What is the difference between the two methods? When should I use each?

Thanks!
 
  • #2
Peter G. said:
What is the difference between the two methods? When should I use each?

The equation for the EV of a binomial distribution is derived from the exact same procedure that you have described (sum the products of the outcomes with their respective probabilities), i.e. if X ~ Bin(n, p), then:

[tex]E(X)=\sum_{x=0}^{n}xP(X=x)[/tex]

So for any n, you could in fact just draw up a table of outcomes and proceed the way you originally did; however, that will be a lot more work for large n. It is probably a good exercise to try to derive the equation E(X) = np from the equation I have posted above, so you can see for yourself that the two methods are identical.
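If you want a quick numerical sanity check before working through the algebra, a few lines of Python (just a rough sketch; the function name is my own) show that the two methods agree:

[code]
from math import comb

def expected_value_by_table(n, p):
    """Sum x * P(X = x) over x = 0..n for X ~ Bin(n, p)."""
    return sum(x * comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1))

n, p = 2, 0.5  # two tosses of a fair coin
print(expected_value_by_table(n, p))  # 1.0
print(n * p)                          # 1.0 -- same answer, far less work for large n
[/code]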

EDIT:
Have a look at this:

http://amath.colorado.edu/courses/4570/2007fall/HandOuts/binexp.pdf

It will show you the derivation. Also keep in mind that this only works for binomial random variables.
 

FAQ: What Is the Difference Between Expected Value Calculation Methods?

What is the definition of expected value in statistics?

The expected value in statistics is the sum of all possible outcomes of a random variable, weighted by their respective probabilities. It represents the average value that would be obtained if the experiment were repeated a very large number of times.

How is expected value calculated?

To calculate expected value, you multiply each possible outcome by its respective probability and then sum up these values. The resulting value is the expected value.
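In symbols, for a discrete random variable X that takes values x_i with probabilities p_i:

[tex]E(X)=\sum_{i} x_i\, p_i[/tex]

For example, one roll of a fair six-sided die has expected value (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5.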

Why is expected value important in statistics?

Expected value is important in statistics because it allows us to make predictions about the future based on the likelihood of different outcomes. It is also used to measure the risk and uncertainty associated with a particular event or decision.

What is the difference between expected value and mean?

Expected value and mean are often used interchangeably, but there is a subtle difference between the two. Expected value is a theoretical quantity that represents the average value over an unlimited number of trials, while the sample mean is the average actually observed in a finite number of trials.

Can expected value be negative?

Yes, expected value can be negative. This can happen if the possible outcomes of a random variable include negative numbers and their corresponding probabilities are high enough to outweigh the positive outcomes. In such cases, the expected value can be a negative number.
