The infinite limits of the probability transition matrix for a Markov chain

In summary, the conversation concerns how to establish whether the powers of the transition matrix P of a Markov chain with state space {1, 2, 3, 4} have limits. One suggestion is to use a little pure mathematics: notice the pattern of zero and non-zero entries in the powers of the matrix and analyse whether the non-zero values converge, since the question asks for a logical argument rather than a calculator computation of high matrix powers. It is observed that the sequence of matrix powers consists of two alternating subsequences with non-zero entries in complementary positions, and the task is to determine whether these subsequences converge.
  • #1
Joe1998
Consider a Markov chain with state space {1, 2, 3, 4} and transition matrix P given below:

[Images in the original post: the transition matrix ##P## and the problem statement, parts (a)-(d); part (d) asks whether ##\lim_{n\to\infty} P^n##, ##\lim_{n\to\infty} P^{2n}## and ##\lim_{n\to\infty} P^{2n+1}## exist.]

Now, I have already figured out the solutions for parts a, b, and c. However, I don't know how to go about solving part d. The question says we can't use higher powers of matrices to justify our answer, so what else can we use to prove that the limit of that transition matrix exists?
Any help would be appreciated, thanks.
 
  • #2
Joe1998 said:
However, I don't know how to go about solving part d. The question says we can't use higher powers of matrices to justify our answer, so what else can we use to prove that the limit of that transition matrix exists?
You could try a little pure mathematics.
 
  • #3
PeroK said:
You could try a little pure mathematics.
Can you please explain a bit more and give me some clearer guidance on how to do it?
Thanks
 
  • #4
For i) you could notice that:
$$
\begin{bmatrix}
0 & x & x & 0 \\
x & 0 & 0 & x \\
x & 0 & 0 & x \\
0 & x & x & 0
\end{bmatrix}^2
=
\begin{bmatrix}
x & 0 & 0 & x \\
0 & x & x & 0 \\
0 & x & x & 0 \\
x & 0 & 0 & x
\end{bmatrix}
$$where ##x## just stands for a non-zero entry. That should give you a clue.
 
  • #5
PeroK said:
For i) you could notice that:
$$
\begin{bmatrix}
0 & x & x & 0 \\
x & 0 & 0 & x \\
x & 0 & 0 & x \\
0 & x & x & 0
\end{bmatrix}^2
=
\begin{bmatrix}
x & 0 & 0 & x \\
0 & x & x & 0 \\
0 & x & x & 0 \\
x & 0 & 0 & x
\end{bmatrix}
$$where ##x## just stands for a non-zero entry. That should give you a clue.
Thanks for that. However, the question says we can't use higher powers of matrices to justify our answer, so how can I use your method in this case?
 
  • #6
Joe1998 said:
Thanks for that. However, the question says we can't use higher powers of matrices to justify our answer, so how can I use your method in this case?
It wasn't a method, it was a clue. You were supposed to take the next step and realize that:
$$
P = \begin{bmatrix}
0 & x & x & 0 \\
x & 0 & 0 & x \\
x & 0 & 0 & x \\
0 & x & x & 0
\end{bmatrix}$$$$
P^2 =
\begin{bmatrix}
x & 0 & 0 & x \\
0 & x & x & 0 \\
0 & x & x & 0 \\
x & 0 & 0 & x
\end{bmatrix}$$$$
P^3 =
\begin{bmatrix}
0 & x & x & 0 \\
x & 0 & 0 & x \\
x & 0 & 0 & x \\
0 & x & x & 0
\end{bmatrix}
$$where ##x## just stands for a non-zero entry.

Then it depends what you know about limits. If ##\lim_{n \to \infty} P^n## exists, then each entry of the matrix must converge to some number. But each entry alternates between ##0## and non-zero values, so the only possible limit for ##P^n## is the zero matrix.

That's most of part i). You just need to show whether all the non-zero values converge to zero or not.

For parts ii) and iii), each matrix in the sequence is obtained from the previous one by multiplying by ##P^2##. So you end up with two subsequences, each with its non-zero entries in the same positions, and you have to work out whether they converge or not.

In other words, you should have identified that the sequence ##P^n## consists of two alternating subsequences of matrices with entries in complementary positions.

The question, IMO, was indicating that you needed to provide that sort of logic, not just put the matrices into an engine and crunch the numbers.
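
As a sanity check (not a proof), one could verify this pattern numerically. The sketch below uses hypothetical placeholder values for ##P##, since the actual entries are only given in the problem image; only the positions of the zeros matter here.

```python
import numpy as np

# Placeholder transition matrix with the zero pattern described above
# (each row sums to 1); the real entries are in the problem image.
P = np.array([
    [0.0, 0.4, 0.6, 0.0],
    [0.3, 0.0, 0.0, 0.7],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.2, 0.8, 0.0],
])

Pn = np.eye(4)
for n in range(1, 7):
    Pn = Pn @ P
    # The support (which entries are non-zero) alternates between the
    # pattern of P at odd powers and the pattern of P^2 at even powers.
    print(f"n = {n}:\n{(Pn > 1e-12).astype(int)}\n")
```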
 
  • #7
PeroK said:
It wasn't a method, it was a clue. You were supposed to take the next step and realize that:
$$
P = \begin{bmatrix}
0 & x & x & 0 \\
x & 0 & 0 & x \\
x & 0 & 0 & x \\
0 & x & x & 0
\end{bmatrix}$$$$
P^2 =
\begin{bmatrix}
x & 0 & 0 & x \\
0 & x & x & 0 \\
0 & x & x & 0 \\
x & 0 & 0 & x
\end{bmatrix}$$$$
P^3 =
\begin{bmatrix}
0 & x & x & 0 \\
x & 0 & 0 & x \\
x & 0 & 0 & x \\
0 & x & x & 0
\end{bmatrix}
$$where ##x## just stands for a non-zero entry.

Then it depends what you know about limits. If ##\lim_{n \to \infty} P^n## exists, then each entry of the matrix must converge to some number. But each entry alternates between ##0## and non-zero values, so the only possible limit for ##P^n## is the zero matrix.

That's most of part i). You just need to show whether all the non-zero values converge to zero or not.

For parts ii) and iii), each matrix in the sequence is obtained from the previous one by multiplying by ##P^2##. So you end up with two subsequences, each with its non-zero entries in the same positions, and you have to work out whether they converge or not.

In other words, you should have identified that the sequence ##P^n## consists of two alternating subsequences of matrices with entries in complementary positions.

The question, IMO, was indicating that you needed to provide that sort of logic, not just put the matrices into an engine and crunch the numbers.
Oh I see, thanks for that. So using your logic, then for parts ii and iii we will always be alternating between zero and non-zero numbers in the higher powers of the matrix, in the form of either P or P^2, right? So in this case, none of the limits actually exist, unless P^n, P^(2n) or P^(2n+1) is the zero matrix, is that correct?
Thanks
 
  • #8
Joe1998 said:
Oh I see, thanks for that. So using your logic, then for parts ii and iii we will always be alternating between zero and non-zero numbers in the higher powers of the matrix, in the form of either P or P^2, right? So in this case, none of the limits actually exist, unless P^n, P^(2n) or P^(2n+1) is the zero matrix, is that correct?
Completely wrong!
 
  • #9
PeroK said:
Completely wrong!
Why?
 
  • #10
Joe1998 said:
Why?
Let's just focus on ##P_{11}##, the top left entry. That element of ##P^n## is a sequence of numbers:
$$P^n_{11} = 0, x_1, 0, x_2, 0, x_3 \dots$$Part i) asks whether that sequence has a limit. The only possible limit is ##0##. But you might suspect that ##\lim_{n \to \infty} x_n \ne 0##, i.e. that the limit does not exist; either way, you still have to prove it.

For part ii) you have:
$$P^{2n}_{11} = x_1, x_2, x_3 \dots$$That's the same question as above.

For part iii) you have:
$$P^{2n+1}_{11} = 0, 0, 0 \dots$$That clearly converges to zero, but ##P^{2n+1}_{12}## etc. will be the problem to solve.
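
A minimal numerical illustration of these three subsequences, again using hypothetical placeholder entries for ##P## that merely respect the stated zero pattern:

```python
import numpy as np

# Placeholder entries with the stated zero pattern (rows sum to 1);
# the real matrix is given as an image in the problem.
P = np.array([
    [0.0, 0.4, 0.6, 0.0],
    [0.3, 0.0, 0.0, 0.7],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.2, 0.8, 0.0],
])

Pn = P.copy()
for n in range(1, 11):
    # Track the (1,1) and (1,2) entries of P^n (0-based indices [0,0], [0,1]).
    print(f"n={n:2d}  P^n[1,1]={Pn[0, 0]:.4f}  P^n[1,2]={Pn[0, 1]:.4f}")
    Pn = Pn @ P
# P^n[1,1] runs 0, x_1, 0, x_2, ... while P^n[1,2] runs x, 0, x, 0, ...,
# so only the even-power and odd-power subsequences can converge separately.
```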
 
  • #11
PS I don't know what the answer is because I haven't done the work yet!
 
  • #12
Hmmm, I get it now. But if I somehow figure out that the limit does converge, then how would I know what it converges to, given that it is just alternating between 0 and 1 randomly? Also, if one of the elements of the matrix diverges to infinity, then that simply means the limit does not exist anymore (even if all of the other elements of the matrix converge to a certain value), correct?
 
  • #13
Joe1998 said:
Hmmm, I get it now. But if I somehow figure out that the limit does converge, then how would I know what it converges to, given that it is just alternating between 0 and 1 randomly? Also, if one of the elements of the matrix diverges to infinity, then that simply means the limit does not exist anymore (even if all of the other elements of the matrix converge to a certain value), correct?
You need to do some work to get an expression for these matrix entries. There's nothing random here. Nor is anything going to diverge to infinity.
 
  • #14
PeroK said:
You need to do some work to get an expression for these matrix entries. There's nothing random here. Nor is anything going to diverge to infinity.
So if there is nothing random and nothing diverges to infinity, does that mean the limits for parts i, ii and iii do exist?
 
  • #15
Joe1998 said:
So if there is nothing random and nothing diverges to infinity, does that mean the limits for parts i, ii and iii do exist?
I don't know. You must put pen to paper.
 
  • #16
What I can guess is this: ##P^{2n}## and ##P^{2n+1}## probably both converge, and neither limit can be the zero matrix. That means that ##P^n## does not converge.

The main task is to analyse ##P^{2n}## and try to figure out what, if anything, it converges to. Everything else follows from that.
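
A rough numerical sketch of that task, using hypothetical placeholder entries for ##P## (the real ones are only in the problem image): set ##Q = P^2## and watch whether ##Q^n## settles down.

```python
import numpy as np

# Placeholder entries with the correct zero pattern; rows sum to 1.
P = np.array([
    [0.0, 0.4, 0.6, 0.0],
    [0.3, 0.0, 0.0, 0.7],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.2, 0.8, 0.0],
])

Q = P @ P
for n in (1, 5, 20, 50):
    print(f"Q^{n}:\n{np.linalg.matrix_power(Q, n).round(4)}\n")
# If the printed matrices stop changing, P^(2n) appears to converge to that
# matrix; the actual proof still has to be done by hand, e.g. by finding the
# eigenvalues of Q.
```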
 
  • #17
PeroK said:
I don't know. You must put pen to paper.

PeroK said:
What I can guess is this: ##P^{2n}## and ##P^{2n+1}## probably both converge, and neither limit can be the zero matrix. That means that ##P^n## does not converge.

The main task is to analyse ##P^{2n}## and try to figure out what, if anything, it converges to. Everything else follows from that.
I have been able to figure out that P^(2n) converges and what it converges to, but then how can we have a situation where P^(2n) and P^(2n+1) converge but P^n does not converge?
 
  • #18
Joe1998 said:
I have been able to figure out that P^(2n) converges and what it converges to,
I'm impressed by that!
Joe1998 said:
but then how can we have a situation where P^(2n) and P^(2n+1) converge but P^n does not converge?
You have a sequence comprising alternating convergent subsequences. The subsequences both converge to their respective limits; but the main sequence cannot itself be convergent. Compare the sequence:
$$s_n = 1, 2, 1, 2 \dots$$That sequence does not converge, but it comprises two alternating convergent sequences.

In your example, you have oscillatory behaviour for large ##n##: ##P^n## alternates between the terms of the two subsequences ##P^{2n}## and ##P^{2n+1}##.
 
  • #19
PeroK said:
I'm impressed by that!

You have a sequence comprising alternating convergent subsequences. The subsequences both converge to their respective limits; but the main sequence cannot itself be convergent. Compare the sequence:
$$s_n = 1, 2, 1, 2 \dots$$That sequence does not converge, but it comprises two alternating convergent sequences.

In your example, you have oscillatory behaviour for large ##n##: ##P^n## alternates between the terms of the two subsequences ##P^{2n}## and ##P^{2n+1}##.
Oh I see! Thanks for this great explanation. So now I have pretty much knocked out both P^n and P^(2n), but I am not able to figure out P^(2n+1). Any suggestions? Thanks.
 
  • #20
Joe1998 said:
Oh I see! Thanks for this great explanation. So now I have pretty much knocked out both P^n and P^(2n), but I am not able to figure out P^(2n+1). Any suggestions? Thanks.
That's the easy bit. ##P^{2n+1} = PP^{2n}##.
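
In code terms (same hypothetical placeholder matrix as in the earlier sketches), once a numerical stand-in for the even-power limit is available, the odd-power limit is just one more multiplication by ##P##:

```python
import numpy as np

# Placeholder entries with the stated zero pattern; rows sum to 1.
P = np.array([
    [0.0, 0.4, 0.6, 0.0],
    [0.3, 0.0, 0.0, 0.7],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.2, 0.8, 0.0],
])

even_limit = np.linalg.matrix_power(P @ P, 50)  # numerically close to lim P^(2n)
odd_limit = P @ even_limit                      # lim P^(2n+1) = P * lim P^(2n)
print(even_limit.round(4))
print(odd_limit.round(4))
```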
 
  • #21
PeroK said:
That's the easy bit. ##P^{2n+1} = PP^{2n}##.
Oh ahahahahaha, that's what happens when I try doing this problem late at night; my brain is already dead... anyway, thank you very much, dude!
 
  • #22
PeroK said:
You have a sequence comprising alternating convergent subsequences. The subsequences both converge to their respective limits; but the main sequence cannot itself be convergent.
Er, it can if both subsequences converge to the same limit (example: ## S = 1, -1, \frac 1 2, -\frac 1 2, \frac 1 3, -\frac 1 3 \dots ##). Is this the case here?
 
  • #23
pbuk said:
Er, it can if both subsequences converge to the same limit (example: ## S = 1, -1, \frac 1 2, -\frac 1 2, \frac 1 3, -\frac 1 3 \dots ##). Is this the case here?
We've already established that the alternating matrices in the sequence have their non-zero values in different entries, so the matrices cannot be equal. Unless the limits are the zero matrix. This has all been covered above.
 
  • #24
PeroK said:
Unless the limits are the zero matrix.
Indeed.
 
  • #25
pbuk said:
Indeed.
Post #6:
PeroK said:
It wasn't a method, it was a clue. You were supposed to take the next step and realize that:
$$
P = \begin{bmatrix}
0 & x & x & 0 \\
x & 0 & 0 & x \\
x & 0 & 0 & x \\
0 & x & x & 0
\end{bmatrix}$$$$
P^2 =
\begin{bmatrix}
x & 0 & 0 & x \\
0 & x & x & 0 \\
0 & x & x & 0 \\
x & 0 & 0 & x
\end{bmatrix}$$$$
P^3 =
\begin{bmatrix}
0 & x & x & 0 \\
x & 0 & 0 & x \\
x & 0 & 0 & x \\
0 & x & x & 0
\end{bmatrix}
$$where ##x## just stands for a non-zero entry.

Then it depends what you know about limits. If ##\lim_{n \to \infty} P^n## exists, then each entry of the matrix must converge to some number. But each entry alternates between ##0## and non-zero values, so the only possible limit for ##P^n## is the zero matrix.

That's most of part i). You just need to show whether all the non-zero values converge to zero or not.

For parts ii) and iii), each matrix in the sequence is obtained from the previous one by multiplying by ##P^2##. So you end up with two subsequences, each with its non-zero entries in the same positions, and you have to work out whether they converge or not.

In other words, you should have identified that the sequence ##P^n## consists of two alternating subsequences of matrices with entries in complementary positions.

The question, IMO, was indicating that you needed to provide that sort of logic, not just put the matrices into an engine and crunch the numbers.
 

FAQ: The infinite limits of the probability transition matrix for a Markov chain

What is a Markov chain?

A Markov chain is a mathematical model that describes a sequence of events where the probability of transitioning from one state to another depends only on the current state and not on any previous states.

What is a probability transition matrix?

A probability transition matrix is a square matrix that represents the probabilities of transitioning from one state to another in a Markov chain. Each row and each column corresponds to a state, the entry in row i and column j is the probability of moving from state i to state j in one step, and each row sums to 1.
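
For example, a hypothetical two-state chain (say, states "sunny" and "rainy") could have a transition matrix like the sketch below; each row sums to 1 because the chain must move to some state.

```python
import numpy as np

# A hypothetical 2-state transition matrix: rows index the current state,
# columns the next state, and each row sums to 1.
P = np.array([
    [0.9, 0.1],  # sunny -> sunny, sunny -> rainy
    [0.5, 0.5],  # rainy -> sunny, rainy -> rainy
])
assert np.allclose(P.sum(axis=1), 1.0)
```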

What are the infinite limits of the probability transition matrix for Markov chain?

The infinite limits of the probability transition matrix refer to the long-term behaviour of a Markov chain. For many chains (for example, irreducible aperiodic ones), the entries of P^n converge as the number of transitions increases to the steady-state probabilities, which represent the long-term probabilities of being in each state. For periodic chains, such as the one discussed in this thread, P^n need not converge at all, and only subsequences such as P^(2n) and P^(2n+1) may have limits.

How can the infinite limits of the probability transition matrix be calculated?

When the limit exists (for example, for an irreducible, aperiodic chain), it can be calculated by finding the left eigenvector of P for the eigenvalue 1 (that is, a vector π with πP = π) and normalising it so its entries sum to 1. This vector gives the steady-state probabilities of the Markov chain.
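
A minimal sketch of that calculation, assuming the hypothetical two-state matrix from the example above:

```python
import numpy as np

# Hypothetical 2-state transition matrix (same as in the previous example).
P = np.array([
    [0.9, 0.1],
    [0.5, 0.5],
])

eigvals, eigvecs = np.linalg.eig(P.T)  # right eigenvectors of P^T = left eigenvectors of P
k = np.argmin(np.abs(eigvals - 1.0))   # pick the eigenvalue closest to 1
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()                     # normalise to a probability vector
print(pi)                              # ~ [0.8333, 0.1667]
print(pi @ P)                          # unchanged by one more step: pi P = pi
```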

What is the significance of the infinite limits of the probability transition matrix?

The infinite limits of the probability transition matrix are important in understanding the long-term behavior of a Markov chain. They can be used to predict the future states of the system and make decisions based on the probabilities of being in different states. They also provide insights into the stability and convergence of the Markov chain.
