Confusion about a random process

In summary, the thread discusses a random process X(t) and asks for the value of a for which it is a wide-sense stationary process. The mean and autocorrelation function are calculated, leading to confusion about the definition of the autocorrelation function and the use of the same random variable \alpha at two time instants. The resolution is to use two independent random variables \alpha_1 and \alpha_2.
  • #1
ait.abd
Question already asked on http://math.stackexchange.com/quest...m-process?noredirect=1#comment2661260_1310194, but I couldn't get an answer there, so I am reposting it here.
------------------------------------------------------------------------------------------------------------------------------------------
Let [itex]X(t)[/itex] be a random process such that:
$$
X(t) = \begin{cases}
t & \text{with probability } \frac{1}{2} \\
2-at & \text{with probability } \frac{1}{2} \\
\end{cases},
$$
where [itex]a[/itex] is a constant.
I need to find the value of [itex]a[/itex] for which [itex]X(t)[/itex] is a wide-sense stationary process. I have made the following definition of the random process:
$$
\begin{equation}
X(t) = \alpha t + (1-\alpha)(2-at),
\end{equation}
$$
where [itex]\alpha[/itex] is a Bernoulli random variable with [itex]p=q=0.5[/itex].

For mean, we have
$$
E[X(t)] = \frac{t + 2-at}{2} = 1 + \frac{(1-a)t}{2},
$$
which is constant in [itex]t[/itex] only when [itex]a=1[/itex].
For the autocorrelation function,
$$
\begin{align*}
R_X(t_1,t_2) &= E[X(t_1)X(t_2)]\\
&=E[(\alpha t_1 + (1-\alpha)(2-at_1))\times(\alpha t_2 + (1-\alpha)(2-at_2))]\\
&=E[\alpha^2 t_1 t_2 + (1-\alpha)^2(2-at_1)(2-at_2)]\\
&=\frac{t_1 t_2}{2} + \frac{4-2a(t_1+t_2)+a^2 t_1 t_2}{2}\\
\end{align*}
$$
(The cross terms vanish because [itex]\alpha(1-\alpha)=0[/itex] for a Bernoulli random variable, and [itex]E[\alpha^2]=E[(1-\alpha)^2]=\tfrac{1}{2}[/itex].) As the above equation makes clear, there is no value of [itex]a[/itex] for which the autocorrelation function is a function of the time difference [itex]t_2-t_1[/itex] alone.
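This conclusion can be sanity-checked numerically. Below is a small Monte Carlo sketch in Python of the shared-[itex]\alpha[/itex] formulation; the particular values of [itex]a[/itex], [itex]t_1[/itex], [itex]t_2[/itex] are arbitrary illustrative choices, not part of the problem:

```python
# Monte Carlo check of R_X(t1, t2) when both time samples share the
# SAME Bernoulli alpha. Sketch only; a, t1, t2 are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
a, n = 1.0, 1_000_000

def R_shared(t1, t2):
    alpha = rng.integers(0, 2, size=n)              # Bernoulli(0.5) draws
    x1 = alpha * t1 + (1 - alpha) * (2 - a * t1)    # X(t1), shared alpha
    x2 = alpha * t2 + (1 - alpha) * (2 - a * t2)    # X(t2), same alpha
    return (x1 * x2).mean()

def R_closed(t1, t2):
    # (t1*t2 + (2 - a*t1)(2 - a*t2)) / 2, expanded
    return (t1 * t2 + 4 - 2 * a * (t1 + t2) + a**2 * t1 * t2) / 2

# Same time difference t2 - t1 = 1, but different absolute times:
print(R_shared(0.0, 1.0), R_closed(0.0, 1.0))   # ~1.0
print(R_shared(2.0, 3.0), R_closed(2.0, 3.0))   # ~3.0
```

Two pairs with the same time difference give different values, confirming that under the shared-[itex]\alpha[/itex] model the autocorrelation depends on the absolute times, not only on [itex]t_2-t_1[/itex].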

My confusion starts from the way the autocorrelation function is defined as above in so many textbooks. The definition shows that we sample the ensemble at two time instants [itex]t_1[/itex] and [itex]t_2[/itex] to get two random variables. The two random variables [itex]X(t_1)[/itex] and [itex]X(t_2)[/itex] are (possibly different) functions of the same random variable [itex]\alpha[/itex]. My question is: why do we need to take the same random variable [itex]\alpha[/itex] at both time instants? It is like saying that if we know [itex]X(t_1)[/itex], we can figure out [itex]X(t_2)[/itex] right away. Shouldn't we instead take [itex]\alpha_1[/itex] at [itex]t_1[/itex] and [itex]\alpha_2[/itex] at [itex]t_2[/itex], where [itex]\alpha_1[/itex] and [itex]\alpha_2[/itex] are both Bernoulli with [itex]p=0.5[/itex]?

I can describe the confusion like the following as well. When we sample the random process at two time instants, we get two random variables [itex]A = X(t_1)[/itex] and [itex]B = X(t_2)[/itex], where
$$
A = \begin{cases}
t_1 & \text{ with probability 0.5} \\
2-at_1 & \text{ with probability 0.5} \\
\end{cases}
$$
and
$$
B = \begin{cases}
t_2 & \text{ with probability 0.5} \\
2-at_2 & \text{ with probability 0.5} \\
\end{cases}.
$$
To calculate [itex]E[AB][/itex], we need to take into account the cases where [itex]A=t_1[/itex] and [itex]B=2-at_2[/itex], and where [itex]A=2-at_1[/itex] and [itex]B=t_2[/itex]. Neither of these cases appears in the calculation of the ensemble autocorrelation function [itex]R_X(t_1,t_2)[/itex]. Why do these two cases enter when I use the formulation in terms of [itex]A[/itex] and [itex]B[/itex], but not when I calculate [itex]R_X(t_1, t_2)[/itex] using the formulation with [itex]\alpha[/itex] from the start of the problem?
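The two-case versus four-case discrepancy can be made concrete in a few lines of Python. This is only a sketch; the specific values of [itex]a[/itex], [itex]t_1[/itex], [itex]t_2[/itex] are arbitrary:

```python
# Enumerate the outcomes of E[AB] under the two sampling models.
# Sketch; a, t1, t2 are arbitrary illustrative values.
a, t1, t2 = 1.0, 2.0, 3.0
vA = [t1, 2 - a * t1]      # possible values of A = X(t1)
vB = [t2, 2 - a * t2]      # possible values of B = X(t2)

# Independent coins: all four (A, B) pairs, each with probability 1/4.
E_AB_indep = sum(x * y for x in vA for y in vB) / 4

# Shared coin: only the "diagonal" pairs (t1, t2) and (2-a*t1, 2-a*t2),
# each with probability 1/2; the two cross cases never occur.
E_AB_shared = (vA[0] * vB[0] + vA[1] * vB[1]) / 2

print(E_AB_indep, E_AB_shared)   # 1.0 3.0 (the two models disagree)
```

The two expectations differ because the shared-coin model assigns probability zero to the off-diagonal pairs, while the independent-coin model gives each of the four pairs probability 1/4.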
 
  • #2
Your covariance is incorrect. You have to subtract the product of the means.

[itex]\operatorname{Cov}(X(t_1),X(t_2))=E(X(t_1)X(t_2))-E(X(t_1))E(X(t_2))[/itex]
 
  • #3
ait.abd said:
Shouldn't we instead take [itex]\alpha_1[/itex] at [itex]t_1[/itex] and [itex]\alpha_2[/itex] at [itex]t_2[/itex], where [itex]\alpha_1[/itex] and [itex]\alpha_2[/itex] are both Bernoulli with [itex]p=0.5[/itex]?

Yes, that is the correct way to look at it.

The normalized autocovariance [itex] \rho(\tau) = \frac{ E[( X_{t} - \mu)( X_{t + \tau} - \mu)]} {\sigma^2} [/itex] does not assert that the random variables [itex] X_{t} [/itex] and [itex] X_{t + \tau} [/itex] are functions of the same random variable [itex] \alpha [/itex]. In this problem, using two independent random variables [itex] \alpha_1, \alpha_2 [/itex] is correct and does not contradict that formula.
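Under the independent-coin reading, [itex]X(t_1)[/itex] and [itex]X(t_2)[/itex] are independent for [itex]t_1 \neq t_2[/itex], so the correlation factorizes into the product of the means. A short sketch (with arbitrary illustrative values of [itex]a[/itex], [itex]t_1[/itex], [itex]t_2[/itex]):

```python
# With independent alpha_1, alpha_2, X(t1) and X(t2) are independent
# for t1 != t2, so E[X(t1)X(t2)] = E[X(t1)] E[X(t2)]. Sketch only.
def mean_X(t, a):
    # E[X(t)] = (t + (2 - a*t)) / 2
    return (t + (2 - a * t)) / 2

def corr_indep(t1, t2, a):
    # All four value pairs occur with probability 1/4 each.
    vA = [t1, 2 - a * t1]
    vB = [t2, 2 - a * t2]
    return sum(x * y for x in vA for y in vB) / 4

a, t1, t2 = 1.0, 2.0, 5.0
assert abs(corr_indep(t1, t2, a) - mean_X(t1, a) * mean_X(t2, a)) < 1e-12
print(mean_X(t1, a), mean_X(t2, a), corr_indep(t1, t2, a))   # 1.0 1.0 1.0
```

Note that with [itex]a=1[/itex] the mean is constant ([itex]E[X(t)]=1[/itex] for every [itex]t[/itex]), which is the first requirement for wide-sense stationarity.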
 

FAQ: Confusion about a random process

1. What is a random process?

A random process (also called a stochastic process) is a collection of random variables indexed by time. At each time instant the process takes a value determined by a random outcome, so a single realization traces out one of many possible sample paths, and no single path can be predicted in advance.

2. How do you determine if a process is truly random?

Determining if a process is truly random can be challenging, as there is no definitive way to prove randomness. However, there are statistical tests that can be performed to measure the level of randomness in a process. These tests look for patterns or correlations in the data and can provide evidence for or against randomness.

3. Can a random process produce the same outcome multiple times?

Yes, a random process can produce the same outcome multiple times. Just because a process is random does not mean that it cannot produce repetitive patterns or outcomes. In fact, it is possible for a random process to produce the same outcome infinitely many times, although this is highly unlikely.

4. Are there any applications of random processes in science?

Yes, random processes have numerous applications in science. They are used in fields such as statistics, physics, biology, and computer science. For example, random processes are used to model the behavior of complex systems, simulate natural phenomena, and generate random numbers for statistical analysis.

5. Can we control or manipulate a random process?

No, we cannot control or manipulate a truly random process; by definition its outcomes cannot be steered toward a desired result. However, we can create pseudo-random processes using deterministic algorithms that mimic the behavior of random processes. These processes can be controlled and reproduced to some extent, for example by fixing the generator's seed.
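For instance, seeding Python's standard-library pseudo-random generator makes the "random" sequence fully reproducible, which is the sense in which a pseudo-random process can be controlled (a minimal sketch):

```python
# A pseudo-random sequence is reproducible: fixing the seed "controls"
# the process. Minimal sketch using Python's standard-library PRNG.
import random

random.seed(42)
first = [random.random() for _ in range(3)]

random.seed(42)          # same seed -> identical sequence
second = [random.random() for _ in range(3)]

print(first == second)   # True
```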
