Calculating Transition Probabilities & Expected Values of a Markov Bus Chain

If you can show that the probability distribution of ##X_n## converges to a limiting distribution as ##n \to \infty##, then you can get the expected value of the limiting distribution, which will then be the expected value starting from any initial state.
  • #1
O_o

Homework Statement


-A bus is moving along an infinite route and has stops at n = 0, 1, 2, 3, ...
-The bus can hold up to B people
-The number of people waiting at stop n, Y_n, is distributed Poisson(10) and is independent of the numbers waiting at all other stops.
-At any given stop each passenger has probability p of getting off and the passengers are mutually independent.
-Assume that after the passengers get off at stop n, everyone waiting at that stop gets on, as long as there are spots remaining.
[tex] \{X_n; n \geq 0\} [/tex] is a Markov chain representing the number of passengers on the bus after it leaves stop n.

1) What are the transition probabilities, [tex]p_{ij}[/tex]?
2) What is [tex] E[X_n] [/tex] for n = 0, 1, 2, ..., 20 if p = 0.4 and B = 20?

Homework Equations

The Attempt at a Solution


(1)
[tex]X_n = \min\{X_{n-1} - L_n + Y_n, B\}[/tex]
where [tex]L_n \sim \text{Binomial}(X_{n-1}, p)[/tex] is the number of people who get off at stop n.

The transition probabilities need to be broken up into cases. For i > 0 and j < i: [tex] p_{i,j} = \sum_{k=i-j}^i {i \choose k} p^k(1-p)^{i-k} \left( \frac{10^{j-i+k} e^{-10}}{(j-i+k)!}\right)[/tex]

for i ≥ 0 and i ≤ j < B (this case also covers i = 0): [tex]
p_{i,j} = \sum_{k=0}^i {i \choose k} p^k(1-p)^{i-k} \left( \frac{10^{j-i+k} e^{-10}}{(j-i+k)!}\right)[/tex]

for i ≥ 0 and j = B: [tex]
p_{i,j} = \sum_{k=0}^i {i \choose k} p^k(1-p)^{i-k} \left(\sum_{h=B-i+k}^{\infty}\frac{10^{h} e^{-10}}{h!}\right)[/tex]
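As a numerical sanity check (mine, not part of the assignment), here is a quick Python sketch of these transition probabilities, plugging in the part-(2) values p = 0.4 and B = 20 with Poisson mean 10:

```python
# A sanity-check sketch: build the transition matrix numerically with
# p = 0.4, B = 20, Poisson mean 10, then verify each row sums to 1.
from math import comb, exp, factorial

B, p, lam = 20, 0.4, 10.0

def pois_pmf(m):
    # P(Y = m) for Y ~ Poisson(lam); 0 for negative m
    return lam**m * exp(-lam) / factorial(m) if m >= 0 else 0.0

def pois_tail(m):
    # P(Y >= m) = 1 - P(Y <= m - 1)
    return 1.0 - sum(pois_pmf(h) for h in range(m)) if m > 0 else 1.0

def p_ij(i, j):
    # condition on k = number of passengers getting off, k ~ Binomial(i, p)
    total = 0.0
    for k in range(i + 1):
        binom = comb(i, k) * p**k * (1 - p)**(i - k)
        if j < B:
            total += binom * pois_pmf(j - i + k)   # need exactly j - i + k arrivals
        else:
            total += binom * pois_tail(B - i + k)  # bus fills: at least B - i + k arrivals
    return total

P = [[p_ij(i, j) for j in range(B + 1)] for i in range(B + 1)]
assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)  # rows are probability distributions
```

Note that the two j < B cases collapse into one here, because the Poisson pmf is taken to be zero at negative arguments.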
(2)
I feel reasonably okay about question (1), although maybe it's completely wrong. My main concern is part (2). If I want to use [tex]X_n = \min\{X_{n-1} - L_n + Y_n, B\} [/tex] to find the expected value, I get [tex] E[X_n] =\min\{E[X_{n-1}] - E[L_n] + E[Y_n], B\}[/tex]

which seems like it should be doable, except [tex]L_n \sim \text{Binomial}(X_{n-1}, p)[/tex], so the parameter is a random variable. Am I allowed to say [tex]E[L_n] = E[X_{n-1}]\,p[/tex]?
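For what it's worth, this is the conditioning argument I was tentatively using (please check it):

[tex] E[L_n] = E\big[\,E[L_n \mid X_{n-1}]\,\big] = E[\,p\,X_{n-1}\,] = p\,E[X_{n-1}] [/tex]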

We haven't learned anything about random variables appearing in sum bounds, so I'm a bit clueless. Thank you.
 
  • #2

I guess in part (2) you are supposed to use the numerical values of B and p to get a numerical ##21 \times 21## one-step transition matrix ##P = \left( p_{i,j} \right) ##. Then, if you are given a starting state ##X_0 \in \{ 0, \ldots, 20 \}##, say ##X_0 = i_0##, the distribution of ##X_n## is just row ##i_0## of the matrix ##P^n##, so the expected value is readily computable. This is more easily obtained by starting with initial state probability (row) vector ##v_0 = (0,0, \ldots,1,0 \ldots, 0)##, where the 1 is in position ##i_0##. Then the row vector ##v_n## of state probabilities at time ##n## is given by ##v_n = v_{n-1} P##, so we can get ##v_1, v_2, \ldots, v_{20}## recursively, without the need to compute the whole matrix ##P^n## for each ##n##. In general, the values of ##EX_n = \sum_{k=0}^{20} k v_n(k)## will depend on the starting state ##i_0##.
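For illustration, a self-contained sketch of this recursion. It assumes the bus leaves stop 0 empty (##X_0 = 0##); the problem statement does not actually fix ##X_0##, so other starting states give different values:

```python
# Illustrative sketch: compute E[X_n] for n = 0..20 via v_n = v_{n-1} P.
# Assumes X_0 = 0 (bus leaves stop 0 empty) -- an assumption, since the
# problem does not specify the starting state.
from math import comb, exp, factorial

B, p, lam = 20, 0.4, 10.0
pmf = [lam**m * exp(-lam) / factorial(m) for m in range(60)]  # Poisson(10), truncated

def f(m):
    return pmf[m] if 0 <= m < len(pmf) else 0.0  # P(Y = m)

def tail(m):
    return 1.0 - sum(pmf[:max(m, 0)])            # P(Y >= m)

# one-step transition matrix, as derived in the thread
P = [[sum(comb(i, k) * p**k * (1 - p)**(i - k) *
          (f(j - i + k) if j < B else tail(B - i + k))
          for k in range(i + 1))
      for j in range(B + 1)] for i in range(B + 1)]

v = [0.0] * (B + 1)
v[0] = 1.0                                       # X_0 = 0 with probability 1
expectations = []
for n in range(21):
    expectations.append(sum(k * v[k] for k in range(B + 1)))  # E[X_n]
    v = [sum(v[i] * P[i][j] for i in range(B + 1)) for j in range(B + 1)]
```

Each pass through the loop multiplies the row vector by P once, which is much cheaper than forming the full matrix power ##P^n## for every n.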
 

FAQ: Calculating Transition Probabilities & Expected Values of a Markov Bus Chain

1. What is a Markov Bus Chain?

A Markov bus chain is a model of a bus system as a Markov chain, a process in which the probability of transitioning to a new state depends only on the current state and not on any previous states.

2. How do you calculate transition probabilities in a Markov Bus Chain?

To calculate transition probabilities in a Markov bus chain, construct a transition matrix whose entries give the probabilities of moving from one state to another. For a model such as the one above, these entries follow from the model's dynamics; when estimating them from observed data instead, divide the number of transitions from one state to another by the total number of transitions out of the current state.
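As a minimal sketch of the counting approach, using a made-up observed state sequence:

```python
# Hypothetical illustration: estimate a transition matrix from an observed
# state sequence by counting transitions (the sequence below is made up).
from collections import Counter

seq = [0, 1, 1, 2, 1, 0, 1, 2, 2, 1]
counts = Counter(zip(seq, seq[1:]))              # (from, to) -> count
states = sorted(set(seq))
P = {i: {j: 0.0 for j in states} for i in states}
for i in states:
    out = sum(counts[(i, j)] for j in states)    # total transitions out of i
    for j in states:
        if out:
            P[i][j] = counts[(i, j)] / out       # empirical transition probability
```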

3. What are expected values in a Markov Bus Chain?

Expected values in a Markov bus chain refer to the average outcome of the system over a given period of time. They combine the transition probabilities with the value attached to each state (here, the number of passengers on the bus) to compute the overall expected value of the system.

4. How do you use Markov Bus Chains in real-world applications?

Markov Bus Chains have various real-world applications, such as predicting stock prices, analyzing customer behavior in marketing, and modeling the spread of diseases. They can also be used in natural language processing and speech recognition, as well as in weather forecasting and predicting traffic patterns.

5. What are the limitations of using Markov Bus Chains?

One limitation of using Markov bus chains is the Markov (memoryless) assumption: the future depends only on the current state, which may not hold in real-world scenarios. Additionally, the accuracy of predictions depends on the quality of the data or model used to construct the transition matrix. Markov bus chains also do not account for external factors that may influence the system, which can lead to inaccurate results.
