Markov Chains - Finding a Transition Matrix for Probabilities

In summary, this thread explains how to construct the transition matrix for the corresponding probabilities.
  • #1
Umar
Hi! I have a question regarding making the transition matrix for the corresponding probabilities. The main problem I have is figuring out how to represent the probabilities given in the question in the transition matrix, for example when one transition is 7 times more likely than another. Any help would be appreciated.

[Attachment: Screenshot (25).png (the problem statement)]

  • #2
I like Serena

Hi Umar! ;)

If I'm not mistaken, $a_{31}$ is the probability that the quantum state shifts from state 1 to state 3? (Wondering)

Then $a_{21}$ is the probability of the transition from state 1 to state 2.
So if the latter is 7 times more likely than the former, that implies that $a_{21} = 7 a_{31}$.

The given statements result in:
$$A = \begin{bmatrix}
& a_{12} & a_{13} \\
7a_{31} & & a_{13} \\
a_{31} & 4a_{12} &
\end{bmatrix}$$
When we combine it with the fact that every row sums to $1$, we have a system of 3 equations and 3 unknowns...
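For reference, writing that system out explicitly (taking the diagonal entries to be $0$, as discussed later in the thread, and reading the constraints row by row) gives:
$$a_{12} + a_{13} = 1, \qquad 7a_{31} + a_{13} = 1, \qquad a_{31} + 4a_{12} = 1.$$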
 
  • #3
Umar

Thanks for the reply! Your explanation really helped me understand how to construct the transition matrix. When I solved the system for $a_{31}$, $a_{32}$ and $a_{33}$, I got 1/29, 28/29 and 0 respectively. When I put this into my assignment website, it only marked 1/3 of it correct... which I think was for getting the 0 right. But I don't see what mistake I could have made, because plugging those values into the rows gives me the expected sum of 1...
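For reference, a quick check of that arithmetic under the row-sum reading: row 3 gives $a_{12} = (1 - a_{31})/4$, row 1 then gives $a_{13} = 1 - a_{12} = (3 + a_{31})/4$, and substituting into row 2,
$$7a_{31} + \frac{3 + a_{31}}{4} = 1 \quad\Longrightarrow\quad 29a_{31} = 1 \quad\Longrightarrow\quad a_{31} = \frac{1}{29},$$
so that $a_{32} = 4a_{12} = \frac{28}{29}$ and $a_{33} = 0$, matching the numbers above. The issue, as the next post explains, lies with the row-sum reading itself.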
 
  • #4
I like Serena

Rereading the problem statement, I think we can deduce the following:
$$a_{11}=a_{22}=a_{33}=0$$
since it is given that the particle makes a transition to a different state at every step.
(And otherwise we wouldn't be able to solve the system, since we'd have 6 unknowns.)

Additionally, I think it's the columns that have to add up to $1$ rather than the rows.
That's because if it's given that the particle is in state 1, then afterwards it's either in state 2 ($a_{21}$) or state 3 ($a_{31}$).
So together those should yield a probability of 1.

Finally, are we talking probability amplitudes or actual probabilities? (Wondering)
If they are probability amplitudes, perhaps we have to square them before comparing or adding?
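A minimal numerical sketch of this column-sum reading, taking the off-diagonal entries exactly as written in post #2 (including the repeated $a_{13}$ in row 2, which may well be $a_{23}$ in the original problem) and a zero diagonal:

Code:
import numpy as np

# Unknowns x = (a31, a12, a13); requiring each column of A to sum to 1:
#   column 1: 7*a31 + a31 = 1
#   column 2: a12 + 4*a12 = 1
#   column 3: a13 + a13   = 1
M = np.array([[8.0, 0.0, 0.0],
              [0.0, 5.0, 0.0],
              [0.0, 0.0, 2.0]])
a31, a12, a13 = np.linalg.solve(M, np.ones(3))

A = np.array([[0.0,     a12,     a13],
              [7 * a31, 0.0,     a13],
              [a31,     4 * a12, 0.0]])
print(A)              # candidate transition matrix
print(A.sum(axis=0))  # each column should sum to 1

The system decouples column by column, so it is just as easy to solve by hand; the code only makes the setup explicit.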
 
  • #5
Umar

Oh okay, I guess you're right about the columns having to add up to 1. I'll try that and report back what I get.
 
  • #6
Umar

Hi! Just wanted to let you know that it worked, thanks for your help!
 

FAQ: Markov Chains - Finding a Transition Matrix for Probabilities

1. What is a Markov Chain?

A Markov Chain is a mathematical model that describes the probabilities of transitioning from one state to another over a sequence of steps. It is based on the principle that the probability of moving to the next state depends only on the current state and not on any previous states.
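In symbols, the Markov property states that
$$P(X_{n+1} = j \mid X_n = i,\, X_{n-1}, \ldots, X_0) = P(X_{n+1} = j \mid X_n = i).$$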

2. How do you calculate the transition matrix for a Markov Chain?

Each entry of the transition matrix gives the probability of moving from one state to another. When the matrix is estimated from observed data, an entry is computed by dividing the number of transitions from one state to another by the total number of transitions out of that state. When, as in the thread above, relationships between the probabilities are given instead, the entries are found by combining those relationships with the requirement that the transition probabilities out of each state sum to 1.
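As an illustration of the counting approach, a small sketch with a made-up state sequence (the sequence and state labels are purely hypothetical):

Code:
import numpy as np

# Hypothetical observed sequence of states, labelled 0, 1, 2.
sequence = [0, 1, 2, 1, 0, 2, 2, 1, 0, 1, 2, 0]

n_states = 3
counts = np.zeros((n_states, n_states))
for current, nxt in zip(sequence[:-1], sequence[1:]):
    counts[current, nxt] += 1        # count each observed transition

# Divide each row by the total number of transitions out of that state.
P = counts / counts.sum(axis=1, keepdims=True)
print(P)               # estimated transition matrix
print(P.sum(axis=1))   # each row sums to 1

Note that this sketch uses the row-stochastic convention (each row sums to 1), whereas the thread above works with the transposed, column-stochastic convention.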

3. What is the importance of finding the transition matrix in a Markov Chain?

The transition matrix is important because it allows us to predict the future behavior of a system based on its current state. It also helps us analyze the system's stability and long-term behavior.
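For instance, the long-term behavior can be examined by applying the matrix repeatedly. A sketch using the column-stochastic matrix worked out above (under the assumptions noted after post #4):

Code:
import numpy as np

# Column-stochastic matrix from the thread (entries as solved earlier).
A = np.array([[0.0, 1/5, 1/2],
              [7/8, 0.0, 1/2],
              [1/8, 4/5, 0.0]])

p0 = np.array([1.0, 0.0, 0.0])               # start in state 1 with certainty
p_long = np.linalg.matrix_power(A, 50) @ p0  # distribution after 50 steps
print(p_long)  # approximately the long-run distribution over the three states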

4. Can the transition matrix change over time in a Markov Chain?

Yes. If the transition probabilities change over time, the chain is called time-inhomogeneous. This can happen if the system being modeled experiences changes, such as new inputs or external influences.

5. How can Markov Chains be applied in real-life situations?

Markov Chains can be applied in various real-life situations, such as predicting stock market trends, weather forecasting, and analyzing customer behavior in businesses. They are also used in natural language processing and speech recognition.
