Exploring the Expectation of a Random Process with Unequal Probabilities

In summary: enumerating all ##2^{10}## equally likely paths gives ##EX_{10}\approx 10.81##. For the long-run behaviour, an argument based on a subsequence that skips the steps below 1 bounds the asymptotic mean of the process between 2.3 and 2.55, and Monte Carlo simulation appears to confirm this.
  • #1
dman12
Moved from a technical forum, so homework template missing
Hello,

So I was asked this question the other day and don't really know how to go about it:

"Define a random process by:

##X_{n+1} = X_n + 1## with probability 1/2
##X_{n+1} = 1/X_n## with probability 1/2

Given that X0= 100, find the expectation of X10 "

I've only ever really met random processes like this in the context of equal-step random walks, where you can use characteristic functions. How would I go about solving this problem?

Any help would be very much appreciated!
 
  • #2
Maybe we could compute the average of the next step instead of building the whole tree? We could also see the limiting average as a fixed point of the recurrence applied to the average.
 
Last edited:
  • #3
In other words, we write $$x_{n+1}=\tfrac12\left(x_n+1+\frac{1}{x_n}\right)$$

The fixed point is not exact for finite n; computing n = 10 exactly would need a computer program, but this method gives the average at infinity as $$\phi=\frac{1+\sqrt{5}}{2}\approx 1.62$$

The average at ten using a program gave me 1.59.
 
  • #4
jk22 said:
In other words, we write $$x_{n+1}=\tfrac12\left(x_n+1+\frac{1}{x_n}\right)$$

The fixed point is not exact for finite n; computing n = 10 exactly would need a computer program, but this method gives the average at infinity as $$\phi=\frac{1+\sqrt{5}}{2}\approx 1.62$$

The average at ten using a program gave me 1.59.

No, that first equation you wrote is false:
[tex] E \left( \frac{1}{X_n} \right) \neq \frac{1}{E X_n} [/tex]
For instance,
[tex] X_1 = \begin{cases} 101 & \Pr = 1/2 \\
\displaystyle \frac{1}{100} & \Pr = 1/2
\end{cases}
[/tex]
Thus,
[tex] E X_1 = \frac{1}{2} \left( 101 + \frac{1}{100} \right) = 50.50500000 [/tex]
so
[tex] \frac{1}{E X_1} \doteq 0.01980001980 [/tex]
but
[tex] E \left( \frac{1}{X_1} \right) = \frac{1}{2} \left( \frac{1}{101} + 100 \right) \doteq 50.00495050
[/tex]

Just for interest's sake: what was the nature of the program you used?

When I do it in Maple 11 I get the following ##E X_n## values:
[tex] \begin{array}{rl}
n & EX_n \\
2 & 50.75497525 \\
3 & 38.62872549 \\
4 & 32.59731395 \\
5 & 26.56027159 \\
6 & 22.03256118 \\
7 & 18.26083028 \\
8 & 15.24344673 \\
9 & 12.79228905 \\
10 & 10.81267707 \\
\end{array}
[/tex]

Basically, starting from a sequence ##V_n## of ##X_n## values (which may have repeated values, so each entry has probability ##2^{-n}##), I form two new sequences ##M_{n+1} = [V_n(1)+1, \ldots ,V_n(2^n)+1]## and ##N_{n+1} = [1/V_n(1) , \ldots, 1/V_n(2^n) ]##, then get a new sequence ##V_{n+1}## of ##X_{n+1}## values by concatenating ##M_{n+1}## and ##N_{n+1}##, then sorting the entries from smallest to largest. The sequence ##V_{n+1}## has ##2^{n+1}## entries, but many entries are duplicated, so each individual entry has probability ##2^{-n-1}##. Of course, the sorting step is unnecessary, but makes the results easier to read.

Maple did all the computations of the ##V_n## in exact rational arithmetic, so roundoff errors are not an issue. However, I let the ##V_n(i)## values be converted to floating-point numbers in the computation of ##EX_n## because the exact rational value of ##EX_n## becomes a ratio of two huge integers when ##n## gets close to 10. In fact, the exact value of ##EX_{10}## is
[tex] \begin{array}{lcl} E X_{10} &= & \displaystyle\frac{I_{10}}{J_{10}}, \; \text{where}\\
I_{10} &=& 1469579070445432944325748152877976451732541041634060024\\ &&653347052338236193574669483177095563513720147511715879 \\
J_{10} &=& 1359126015084446126651435240624997445643066808400449966906136188164565\\
&&14601326679253516313161435366438748160
\end{array}
[/tex]
 
Last edited:
  • #5
That relation is not the recurrence satisfied by the average; it is the recurrence applied to the average, so it is not intended to be exact.

I wrote a small C program (with an error, see the next post):

C:
#include<stdio.h>

double avg=0.0;

void tree(double val, int depth)
{
   if(depth<10)
   {
     /* bug: val is overwritten here, so the second call
        inverts val+1 rather than the original val */
     val=val+1.0;
     tree(val, depth+1);

     val=1.0/val;
     tree(val, depth+1);
   }
   else
     avg+=val;
}

int main(void)
{  
   double val0=100.0;

   tree(val0,0);

   printf("Avg : %lf\n",avg/1024.0);
}
 
Last edited:
  • #6
I discovered an error in the code; the corrected version gives an average near 10.8.
C:
#include<stdio.h>

double avg=0.0;

void tree(double val, int depth)
{
   if(depth<10)
   {
     tree(val+1.0, depth+1);
     tree(1.0/val, depth+1);
   }
   else
     avg+=val;
}

int main(void)
{  
   double val0=100.0;

   tree(val0,0);

   printf("Avg : %lf\n",avg/1024.0);
}

------------------
It's by making mistakes that I learn.
 
Last edited:
  • #7
We obtain different results, so I surely made a mistake. The program builds a tree starting from 100, applying both recurrence relations down to depth 10, and then averages the leaf values.
 
  • #8
I obtain different values (rounded):

##EX_7 \approx## 18...
##EX_8 \approx## 15...
##EX_9 \approx## 12...
 
  • #9
jk22 said:
We obtain different results, so I surely made a mistake. The program builds a tree starting from 100, applying both recurrence relations down to depth 10, and then averages the leaf values.

Actually, we obtain the same results, or close to it. I had made an error in the recursions for ##n = 7## to ##n = 10##, so those results for ##EX_n## were erroneous. They have now been corrected in the post, and I get ##E X_{10} \approx 10.81##, as do you.
 
Last edited:
  • #10
The question now is: if we go to infinity, does it converge to ##\phi##?
 
  • #11
jk22 said:
The question now is: if we go to infinity, does it converge to ##\phi##?
I can show that asymptotically the mean value lies between 2.3 and 3.
 
  • #12
I'm interested in the proof you have.
 
  • #13
jk22 said:
I'm interested in the proof you have.
Whenever the sequence drops below 1, the next step takes it back above 1. So I considered a subsequence which skips the <1 steps. Sometimes that means we get two consecutive terms the same, since the sequence might follow the invert rule twice, so I skip the repeats too. This gets me to a sequence
##y_{n+1}=y_n+1## with probability 2/3;
##y_{n+1}=1/y_n+1## with probability 1/3.
Let the mean of this sequence be ##\mu## and that of its inverse be ##\nu##. We obtain ##\mu=3+\nu##.
Since each term is greater than 1, ##1>\nu\ge 1/\mu##. Hence ##4>\mu>3.3##.
Relating this back to the original sequence, its mean is ##\frac 23\mu+\frac 13\nu##, which equals ##\mu-1##.
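Spelling out the step from ##\nu\ge 1/\mu## to the numerical bounds: since ##\mu=3+\nu##,
$$\mu \ge 3+\frac{1}{\mu} \;\Rightarrow\; \mu^2-3\mu-1\ge 0 \;\Rightarrow\; \mu\ge\frac{3+\sqrt{13}}{2}\approx 3.30,$$
while ##\nu<1## gives ##\mu<4##.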
 
Last edited:
  • #14
I don't understand: how do you get $$\mu=3+\nu$$?
 
  • #15
jk22 said:
I don't understand: how do you get $$\mu=3+\nu$$?
##\mu=E(y_{n+1})=\frac 23 E(y_n+1)+\frac 13 E(1/y_n+1)=\frac 23(\mu+1)+\frac 13(\nu+1)## etc.
 
  • #16
It is indeed a deep result.
 
  • #17
jk22 said:
It is indeed a deep result.
I wasn't happy with the upper bound, so I tried comparing the y sequence with the sequence
##z_{n+1}=z_n+1## with probability 2/3; ##z_{n+1}=1## with probability 1/3.
In a sense which I believe can be made rigorous, this is everywhere less than the y sequence; at least in the sense that E(z)<E(y) and, more importantly, E(1/z)>E(1/y). Since E(1/z)=##\ln(3)/2##<0.55, the average of the original sequence lies between 2.3 and 2.55.
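For the record, the value ##E(1/z)=\ln(3)/2## follows from the limiting distribution of the z sequence: z restarts at 1 with probability 1/3 at each step, so ##P(z=k)=\frac13\left(\frac23\right)^{k-1}## for ##k\ge 1##, and
$$E\left(\frac1z\right)=\frac13\sum_{k=1}^\infty \frac1k\left(\frac23\right)^{k-1}=\frac12\sum_{k=1}^\infty \frac1k\left(\frac23\right)^{k}=-\frac12\ln\left(1-\frac23\right)=\frac{\ln 3}{2}\approx 0.549.$$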

Update: I generated some 2000-long y sequences starting at 3. The mean of a sequence rarely fell outside the range 3.3 to 3.55.
 
Last edited:
