Covariance of $Y_i=\max \{X_i,X_{i+1}\}$ for i.i.d. random variables

In summary, we discussed the distribution, expected value, and covariance of the random variables $Y_i=\max \{X_i,X_{i+1}\}$, $i=1,\ldots , n-1$, which are derived from independent and identically distributed random variables $X_1, \ldots , X_n$ with $P(X_i=-1)=P(X_i=1)=\frac{1}{2}$. We determined that the distribution of $Y_i$ is given by $P(Y_i=1)=\frac{3}{4}$ and $P(Y_i=-1)=\frac{1}{4}$, that the expected value of $Y_i$ is $\frac{1}{2}$, and that $\text{Cov}(Y_i,Y_j)=\frac{1}{4}$ when $j=i+1$, while it is $0$ when $Y_i$ and $Y_j$ share no $X_k$.
  • #1
mathmari
Hey! :giggle:

Let $X_1, \ldots , X_n$ be independent, identically distributed random variables with $$P(X_i=-1)=P(X_i=1)=\frac{1}{2}$$
We consider the random variables $Y_i=\max \{X_i,X_{i+1}\}$, $i=1,\ldots , n-1$.
(a) Determine the distribution of $Y_i$, $i=1,\ldots , n-1$.
(b) Calculate the expected value of $Y_i$, $i=1,\ldots , n-1$.
(c) Calculate the covariance of $Y_i$ and $Y_j$, i.e. $\text{Cov}(Y_i, Y_j)=E(Y_iY_j)-E(Y_i)E(Y_j)$, $i,j=1,\ldots , n-1$.

For (a) we have:
The possible outcomes for $(X_i, X_{i+1})$, each with probability $\frac{1}{4}$, are:
\begin{equation*}(-1,-1), (1,-1), (-1,1), (1,1)\end{equation*}
Taking the maximum of the two values in each case, we get $1$ in three cases and $-1$ in one case.
So we get \begin{equation*}P(Y_i = 1) = \frac{3}{4}, \ P(Y_i=-1) = \frac{1}{4}\end{equation*}
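As a quick sanity check (an illustrative Python sketch, not part of the original working), enumerating the four equally likely pairs reproduces these probabilities:

```python
from itertools import product

# All equally likely outcomes of (X_i, X_{i+1}).
pairs = list(product([-1, 1], repeat=2))

# Y_i = max(X_i, X_{i+1}); count how often each value occurs.
p_plus = sum(max(a, b) == 1 for a, b in pairs) / len(pairs)
p_minus = sum(max(a, b) == -1 for a, b in pairs) / len(pairs)

print(p_plus, p_minus)  # 0.75 0.25, i.e. 3/4 and 1/4
```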

For (b):
The expected value of $Y_i$ is \begin{equation*}E[Y_i]=1\cdot P(Y_i = 1)+(-1)\cdot P(Y_i=-1)=\frac{3}{4}-\frac{1}{4}=\frac{2}{4}=\frac{1}{2}\end{equation*}
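The same value falls out of the distribution found in (a); a minimal check (illustrative only):

```python
# Expected value from the distribution in (a): values weighted by their probabilities.
distribution = {1: 3/4, -1: 1/4}
expected_Y = sum(value * prob for value, prob in distribution.items())
print(expected_Y)  # 0.5, i.e. 1/2
```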

For (c):
The covariance of $Y_i$ and $Y_j$ is \begin{equation*}\text{Cov}(Y_i, Y_j)=E(Y_iY_j)-E(Y_i)E(Y_j)=E(Y_iY_j)-\frac{1}{2}\cdot \frac{1}{2}=E(Y_iY_j)-\frac{1}{4}\end{equation*}
When $j>i+1$ then $Y_i$ and $Y_j$ are independent and so the covariance is $0$.
How can we calculate the covariance when $j=i+1$? :unsure:
 
  • #2
mathmari said:
For (c) :
The covariance of $Y_i$ and $Y_j$ is \begin{equation*}\text{Cov}(Y_i, Y_j)=E(Y_iY_j)-E(Y_i)E(Y_j)=E(Y_iY_j)-\frac{1}{2}\cdot \frac{1}{2}=E(Y_iY_j)-\frac{1}{4}\end{equation*}
When $j>i+1$ then $Y_i$ and $Y_j$ are independent and so the covariance is $0$.
How can we calculate the covariance when $j=i+1$?
Hey mathmari!

We can substitute the definitions of $Y_i$ and $Y_j$ and expand to the possible cases, just like we did in (a), can't we? 🤔
 
  • #3
Klaas van Aarsen said:
We can substitute the definitions of $Y_i$ and $Y_j$ and expand to the possible cases, just like we did in (a), can't we? 🤔

We have that $Y_i=\max \{X_i, X_{i+1}\}$ and $Y_{i+1}=\max \{X_{i+1}, X_{i+2}\}$.

We have the possible values:
$(X_i, X_{i+1},X_{i+2})\in \{(-1,-1,-1),(-1,-1,1),(-1,1,-1),(1,-1,-1),(-1,1,1),(1,-1,1),(1,1,-1),(1,1,1) \} $

Right? What do we have to calculate now using this information? :unsure:
 
  • #4
We can find the possible combinations of $Y_i$ and $Y_j$ and their probabilities, can't we? 🤔
 
  • #5
Klaas van Aarsen said:
We can find the possible combinations of $Y_i$ and $Y_j$ and their probabilities, can't we? 🤔

Do we have to calculate first the product of $Y_i$ and $Y_j$? :unsure:
 
  • #6
mathmari said:
Do we have to calculate first the product of $Y_i$ and $Y_j$? :unsure:

We have
\begin{equation*}(X_i, X_{i+1},X_{i+2})\in \{(-1,-1,-1),(-1,-1,1),(-1,1,-1),(1,-1,-1),(-1,1,1),(1,-1,1),(1,1,-1),(1,1,1) \}\end{equation*}
Then, case by case, the corresponding products are \begin{equation*}Y_iY_{i+1}:\quad (-1)\cdot (-1),\ (-1)\cdot 1,\ 1\cdot 1,\ 1\cdot (-1),\ 1\cdot 1,\ 1\cdot 1,\ 1\cdot 1,\ 1\cdot 1 \ =\ 1,\ -1,\ 1,\ -1,\ 1,\ 1,\ 1,\ 1\end{equation*}
So $P(Y_iY_{i+1}=-1)=\frac{2}{8}$ and $P(Y_iY_{i+1}=1)=\frac{6}{8}$.

The expected value of $Y_iY_{i+1}$ is \begin{equation*}E[Y_iY_{i+1}]=1\cdot P(Y_iY_{i+1} = 1)+(-1)\cdot P(Y_iY_{i+1}=-1)=\frac{6}{8}-\frac{2}{8}=\frac{4}{8}=\frac{1}{2}\end{equation*}
So the covariance of $Y_i$ and $Y_j$ is \begin{equation*}\text{Cov}(Y_i, Y_j)=E(Y_iY_j)-\frac{1}{4}=\frac{1}{2}-\frac{1}{4}=\frac{1}{4}\end{equation*}
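For completeness, a small brute-force enumeration (an illustrative sketch with made-up helper names, not part of the original working) reproduces $E[Y_iY_{i+1}]=\frac{1}{2}$ and the covariance $\frac{1}{4}$ for $j=i+1$, and also confirms the covariance is $0$ when the two maxima share no $X_k$:

```python
from itertools import product

def cov_adjacent():
    # Y_i and Y_{i+1} share X_{i+1}: enumerate (X_i, X_{i+1}, X_{i+2}).
    triples = list(product([-1, 1], repeat=3))
    e_prod = sum(max(a, b) * max(b, c) for a, b, c in triples) / len(triples)
    return e_prod - 0.5 * 0.5  # E[Y_i Y_{i+1}] - E[Y_i] E[Y_{i+1}]

def cov_disjoint():
    # j > i+1: the two maxima use disjoint X's, so enumerate four independent values.
    quads = list(product([-1, 1], repeat=4))
    e_prod = sum(max(a, b) * max(c, d) for a, b, c, d in quads) / len(quads)
    return e_prod - 0.5 * 0.5

print(cov_adjacent())  # 0.25, i.e. 1/4
print(cov_disjoint())  # 0.0
```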

:unsure:
 
  • #7
Correct - if $j=i+1$. (Nod)
 
  • #8
Klaas van Aarsen said:
Correct - if $j=i+1$. (Nod)

In the other case it is equal to $0$, right? :unsure:
 
  • #9
mathmari said:
In the other case it is equal to $0$, right?
Yes. (Nod)
 
  • #10
Klaas van Aarsen said:
Yes. (Nod)

Great! Thank you! (Sun)
 
