The expectation value of superimposed probability functions

In summary, the conversation discusses how to find the resulting expectation value when a second probability function is superimposed on an initial probability function. The suggested approaches include averaging the two expectation values or using the convolution of the two probability functions. The conversation also delves into the concept of Linearity of Expectations and how it can simplify the calculation. Ultimately, the individual is seeking clarification on how to find the average of two expectation values.
  • #1
redtree
I apologize for the simplicity of the question (NOT homework). This is a statistical question (not necessarily a quantum mechanical one).

If I have an initial probability function with an associated expected value, and then a second probability function is superimposed on the initial one, how do I find the resulting expectation value? Do I simply average the two expectation values (and if so, how do I weight them)? Would I use the convolution of the two probability functions to calculate the combined probability density function, and then use that to calculate the resulting expected value? Or something else?
 
  • #2
What exactly do you mean by "superimposed"?
Added? Average them, with the weight coming from the total integral of the functions.
Convolved? Add the expectation values.
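The two cases above can be checked numerically. Below is a minimal Python sketch (the two normal distributions are arbitrary examples chosen for illustration, not from the thread): an equal-weight mixture of two densities has the average of the two means, while the sum of two independent variables has the sum of the means.

```python
import random

random.seed(0)
N = 100_000

# Example distributions (assumed for illustration):
# X ~ Normal(1, 1) so E[X] = 1;  Y ~ Normal(4, 2) so E[Y] = 4.
xs = [random.gauss(1, 1) for _ in range(N)]
ys = [random.gauss(4, 2) for _ in range(N)]

# "Added" (equal-weight mixture): draw from X or Y with probability 1/2 each.
mixture = [x if random.random() < 0.5 else y for x, y in zip(xs, ys)]
mean_mixture = sum(mixture) / N  # ≈ 0.5*1 + 0.5*4 = 2.5

# "Convolved" (sum of independent variables): Z = X + Y.
sums = [x + y for x, y in zip(xs, ys)]
mean_sum = sum(sums) / N  # ≈ 1 + 4 = 5

print(mean_mixture, mean_sum)
```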
 
  • #3
This is what I mean, though I am not sure if it is correct:

Given, where ##\vec{X}## and ##\vec{Y}## are independent variables:

\begin{equation}
\begin{split}
\vec{Z}&=\vec{X}+\vec{Y}
\end{split}
\end{equation}Where the probability density function of ##\vec{X}## is given by ##f_{\vec{X}}(\vec{X})## and the probability density function of ##\vec{Y}## is given by ##f_{\vec{Y}}(\vec{Y})##, such that:

\begin{equation}
\begin{split}
f_{\vec{Z}}(\vec{Z})&=\int_{-\infty}^{\infty} d\vec{X} \int_{-\infty}^{\vec{Y}} d\vec{Y} f(\vec{X},\vec{Y})
\end{split}
\end{equation}Where:

\begin{equation}
\begin{split}
\int_{-\infty}^{\infty} d\vec{X} \int_{-\infty}^{\vec{Y}} d\vec{Y} f(\vec{X},\vec{Y})&=\int_{-\infty}^{\infty} d\vec{X} f_{\vec{Y}}(\vec{Y})f_{\vec{X}}(\vec{X})
\end{split}
\end{equation}Given ##\vec{Y}=\vec{Z}-\vec{X}##:

\begin{equation}
\begin{split}
\int_{-\infty}^{\infty} d\vec{X} f_{\vec{Y}}(\vec{Y})f_{\vec{X}}(\vec{X})&=\int_{-\infty}^{\infty} d\vec{X} f_{\vec{Y}}(\vec{Z}-\vec{X})f_{\vec{X}}(\vec{X})
\end{split}
\end{equation}Which is the convolution ##(f_{\vec{X}}*f_{\vec{Y}})##, such that:

\begin{equation}
\begin{split}
f_{\vec{Z}}(\vec{Z})&=(f_{\vec{X}}*f_{\vec{Y}})(\vec{Z})
\end{split}
\end{equation}
Given:

\begin{equation}
\begin{split}
E_{\vec{Z}}(\vec{Z})&=\int_{-\infty}^{\infty} d\vec{Z} f_{\vec{Z}}(\vec{Z}) \vec{Z}
\end{split}
\end{equation}Such that:

\begin{equation}
\begin{split}
E_{\vec{Z}}(\vec{Z})&=\int_{-\infty}^{\infty} d\vec{Z} (f_{\vec{X}}*f_{\vec{Y}})(\vec{Z}) \vec{Z}
\end{split}
\end{equation}
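The convolution result can be verified on a small discrete example (the two PMFs below are arbitrary illustrations): convolving the two mass functions gives the distribution of the sum, and its mean equals the sum of the individual means.

```python
# Discrete check that the distribution of Z = X + Y is the convolution
# f_X * f_Y, and that its mean is E[X] + E[Y].
fx = {0: 0.2, 1: 0.5, 2: 0.3}   # example PMF of X, E[X] = 1.1
fy = {0: 0.6, 1: 0.4}           # example PMF of Y, E[Y] = 0.4

# Convolution: f_Z(z) = sum over x of f_X(x) * f_Y(z - x)
fz = {}
for x, px in fx.items():
    for y, py in fy.items():
        fz[x + y] = fz.get(x + y, 0.0) + px * py

def mean(f):
    return sum(v * p for v, p in f.items())

print(mean(fx), mean(fy), mean(fz))  # mean(fz) == mean(fx) + mean(fy)
```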
 
  • #4
redtree said:
This is what I mean, though I am not sure if it is correct:

Given, where ##\vec{X}## and ##\vec{Y}## are independent variables:

\begin{equation}
\begin{split}
\vec{Z}&=\vec{X}+\vec{Y}
\end{split}
\end{equation}
Then E(Z)=E(X)+E(Y). This is a well-known result (and it doesn't even need independence). What is new?
 
  • #5
redtree said:
Would I use the convolution of the two probability functions to calculate the combined probability density function and then use that to calculate the resulting expected value?

This is a technically valid approach, but it involves more work than needed.

redtree said:
Given, where ##\vec{X}## and ##\vec{Y}## are independent variables:

\begin{equation}
\begin{split}
\vec{Z}&=\vec{X}+\vec{Y}
\end{split}
\end{equation}

Note: this right here is a convolution. But rather than going through the weeds of the underlying calculations, you can make use of the Linearity of Expectations and find

##E[Z] = E[ X + Y] = E[X] + E[Y]##

redtree said:
This is what I mean, though I am not sure if it is correct:

Where the probability density function of ##\vec{X}## is given by ##f_{\vec{X}}(\vec{X})## and the probability density function of ##\vec{Y}## is given by ##f_{\vec{Y}}(\vec{Y})##, such that:

\begin{equation}
\begin{split}
f_{\vec{Z}}(\vec{Z})&=\int_{-\infty}^{\infty} d\vec{X} \int_{-\infty}^{\vec{Y}} d\vec{Y} f(\vec{X},\vec{Y})
\end{split}
\end{equation}

This isn't a convolution... I'm not really sure what it is. The expression below is the convolution of the two densities, written out in integral form. But again, using linearity of expectations makes your life a lot easier.

##f_Z(z) = \int_{-\infty}^{\infty} f_X(x) f_Y(z-x) dx##

## \int_{-\infty}^{\infty} z f_Z(z)\, dz = E[Z] = E[ X + Y] = E[X] + E[Y]##
 
  • #6
Nothing new. I just want to make sure I'm understanding correctly.

In this context, am I correct in the following:

If I want to find the average of two expectation values, such that:

\begin{equation}
\begin{split}
E_{\vec{Z}}(\vec{Z})&=\text{Avg}\left(E_{\vec{X}}(\vec{X}),E_{\vec{Y}}(\vec{Y}) \right)
\end{split}
\end{equation}Do I need to perform a weighted average, or can I just assume the following?:

\begin{equation}
\begin{split}
E_{\vec{Z}}(\vec{Z})&=\frac{E_{\vec{X}}(\vec{X})+E_{\vec{Y}}(\vec{Y})}{2}
\end{split}
\end{equation}Such that:

\begin{equation}
\begin{split}
\vec{Z}&=\frac{\vec{X}+\vec{Y}}{2}
\end{split}
\end{equation}
 
  • #7
If your goal is to define ##Z## such that

##Z := \frac{1}{2} \big(X + Y\big) = \frac{1}{2} X + \frac{1}{2} Y##

then take the expectation of both sides and see

##E[Z] =E[\frac{1}{2} X + \frac{1}{2} Y] =\frac{1}{2} E[X] + \frac{1}{2} E[Y]##

It really is that simple. But you need to be very clear on what your goal is, and define Z accordingly. (The wording in this thread has not been so clear.)
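As a quick numeric check of the step above (the uniform distributions here are arbitrary example choices, not from the thread):

```python
import random

random.seed(1)
N = 100_000

# Example: X ~ Uniform(0, 2) so E[X] = 1;  Y ~ Uniform(0, 6) so E[Y] = 3.
xs = [random.uniform(0, 2) for _ in range(N)]
ys = [random.uniform(0, 6) for _ in range(N)]

# Z := (X + Y) / 2, so by linearity E[Z] = (E[X] + E[Y]) / 2 = 2.
zs = [(x + y) / 2 for x, y in zip(xs, ys)]
mean_z = sum(zs) / N
print(mean_z)  # ≈ 2
```

Note that ##\frac{1}{2}(X+Y)## and the equal-weight *mixture* of ##X## and ##Y## have the same mean, but they are different random variables with different distributions; which one you want depends on what "average" means for your problem.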
 
  • #8
I want to make sure I understand correctly how to find the average of two expectation values. My sense was exactly what you just stated, but I wasn't sure.
 
  • #9
By the way, you had a question about the equation:

\begin{equation}
\begin{split}
f_{\vec{Z}}(\vec{Z})&=\int_{-\infty}^{\infty} d\vec{X} \int_{-\infty}^{\vec{Y}} d\vec{Y} f(\vec{X},\vec{Y})
\end{split}
\end{equation}

I got that from the following source: http://statweb.stanford.edu/~susan/courses/s116/node114.html
 
  • #10
Probably should have written it as follows:

\begin{equation}
\begin{split}
f_{\vec{Z}}(\vec{Z})&=\int_{-\infty}^{\infty} d\vec{Y} \int_{-\infty}^{\vec{Y}} d\vec{X} f(\vec{X},\vec{Y})
\end{split}
\end{equation}
 
  • #11
There are still a lot of problems with that equation as you've written it. (Notice that line 1 of the Stanford link is basically how I wrote it.)

You have the ##d## applied to a vector-valued random variable. Ignoring the vector notation: capital letters denote the random variable itself, yet you've applied the ##d## to those random variables and used a random variable as the limit of your inner integral. Lower-case letters denote specific values the random variable can take on, as in ##X(\omega) = x##, or in the more common shorthand, ##X = x##.

I.e. what you wrote is materially different from what's in that link, and from what I can tell it's not well defined.
 
  • #12
Thanks for the notational correction.
 

FAQ: The expectation value of superimposed probability functions

1. What is the definition of "expectation value" in relation to probability functions?

The expectation value of a probability distribution is the sum of the products of each possible outcome and its corresponding probability (for a continuous distribution, the integral ##\int x f(x)\, dx##). It represents the average value that would be obtained if the experiment were repeated many times.

2. Can you provide an example of calculating the expectation value for superimposed probability functions?

For example, if we have two normalized probability density functions, P(x) and Q(x), an equally weighted superposition is ½(P(x) + Q(x)), and its expectation value is ∫x·½(P(x) + Q(x))dx. By linearity of the integral, this is the average of the two individual expectation values; more generally, with weights w₁ and w₂ satisfying w₁ + w₂ = 1, the result is the weighted average w₁E_P + w₂E_Q.
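This calculation can be checked numerically on a grid; the sketch below uses two example normal densities (all specific choices here are illustrative):

```python
import math

# Grid over [-10, 10] with spacing dx, wide enough that both
# example densities have negligible mass outside it.
dx = 0.001
grid = [i * dx for i in range(-10_000, 10_001)]

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

p = [normal_pdf(x, -1.0, 1.0) for x in grid]   # expectation -1
q = [normal_pdf(x, 3.0, 1.0) for x in grid]    # expectation 3

# Equal-weight superposition, renormalized by the factor 1/2.
mix = [0.5 * (a + b) for a, b in zip(p, q)]
e_mix = sum(x * m * dx for x, m in zip(grid, mix))
print(e_mix)  # ≈ 0.5*(-1) + 0.5*3 = 1.0
```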

3. How does superimposing two probability functions affect the overall expectation value?

Superimposing two probability functions changes the overall expectation value to a weighted average of the individual expectation values, with weights given by how much each function contributes to the (renormalized) superposition. The resulting expectation value therefore always lies between the two individual expectation values.

4. Is the expectation value of superimposed probability functions always a valid measure of central tendency?

No, the expectation value may not always be a good summary of superimposed probability functions. It captures only the average, not the spread or shape of the data: an equal mixture of two well-separated peaks, for instance, has its mean in the low-probability gap between them.

5. How is the concept of expectation value used in practical applications?

The concept of expectation value is commonly used in statistics, finance, and other fields to calculate the expected outcome of a random variable. It can also be used to compare different scenarios or make predictions based on past data. For example, in finance, the expected return on an investment can be calculated using the expectation value of its potential outcomes.
