What is the proportion of time the runner runs barefooted using a Markov Chain?

  • #1
ElijahRockers
Homework Statement


A certain laid-back runner owns three pairs of shoes, which he keeps by either the front
or back doors of his house. Each morning he is equally likely to leave through the front
or back door, and then after his run, he is equally likely to enter through the front or
back door. He takes off his shoes and leaves them at whatever door he enters. If there are
no shoes at the door he leaves from, then he runs barefooted. We want to determine the
proportion of time that the runner runs barefooted, in the long run (pun intended). Solve
this problem using a Markov Chain (specify the states, give the transition probabilities,
and determine the required steady-state probability). Hint: You only need to keep track
of the number of pairs of shoes at one of the doors.

The Attempt at a Solution


Not sure how to set this one up, not much experience with Markov chains.

I've set up a chain with 4 states: 0, 1, 2, 3, each state representing the number of pairs of shoes at the front door, with transition matrix:

$$ P =
\left( \begin{array}{cccc}
.75 & .25 & 0 & 0 \\
.25 & .5 & .25 & 0 \\
0 & .25 & .5 & .25 \\
0 & 0 & .25 & .75\end{array} \right)$$

I don't know if this is helpful though, because what I'm really looking for is a probability that's sort of buried in the .75 probabilities, e.g. the probability that there are three pairs of shoes at the front door and he leaves through the back door. That probability is .5, but there's also a probability of .25 that he leaves through the front door and returns through the front door, returning us to state 3.

Am I setting up the chain incorrectly? Basically, 50% of those 75% transition probabilities account for him running barefoot, but I don't see any way to include that in the transition matrix.
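As a sanity check, the matrix can be derived directly from the door rules by enumerating the four equally likely (exit, entry) door combinations; a quick sketch (my own, assuming NumPy, with states counting pairs at the front door as above):

```python
import numpy as np

# States 0..3 = number of pairs of shoes at the front door.
# Each morning the exit door and the entry door are front/back with prob 1/2 each,
# so the four (exit, entry) combinations each have probability 1/4.
P = np.zeros((4, 4))
for i in range(4):
    for exit_front in (True, False):
        for enter_front in (True, False):
            # He has shoes only if the door he leaves through has a pair.
            has_shoes = (i > 0) if exit_front else (i < 3)
            j = i
            if has_shoes:
                if exit_front:
                    j -= 1  # a pair leaves the front door
                if enter_front:
                    j += 1  # a pair is dropped at the front door
            P[i, j] += 0.25

print(P)
```

Running this reproduces the matrix written above, so at least the transition probabilities check out.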
 
  • #2
Hi, your matrix looks alright to me. You can recheck it by looking, in each state, at what happens in each case: front-to-back (front shoe count drops by 1, unless already 0), back-to-front (up 1, unless already 3), and the other cases (no change).
 
  • #3
wabbit said:
Hi, your matrix looks alright to me. You can recheck it by looking, in each state, at what happens in each case: front-to-back (front shoe count drops by 1, unless already 0), back-to-front (up 1, unless already 3), and the other cases (no change).

Thanks for replying. Yeah, I get that the transition probabilities are correct, but I'm not sure how to extract the barefoot-running part of the question. If I could somehow represent that in the transition matrix, then I could find the steady-state behavior and find the answer.
 
  • #4
I'm not sure I get your point; the barefoot case is already included. That's why, starting from state 0 and applying the front-to-back transition, you know what happens (nothing, as it turns out). Without the barefoot rule, that case would just be left undefined.

Ooooh, OK, I missed that question, sorry. Give me a minute.

Never mind, I thought you were asked to determine the whole steady-state distribution. Anyway, here's what I'd do:
He runs barefoot when
- starting from state 0 and leaving through the front door, or
- starting from state 3 and leaving through the back door.
So what you need are the steady-state probabilities of states 0 and 3 (and 1 and 2 while you're at it). Your first task, then, is to compute that steady-state distribution. Do you know how to do that?
(You don't need to worry about bare feet at this point)
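That steady-state computation can be sketched numerically, if it helps; a minimal sketch (assuming NumPy) that solves ##\pi P = \pi## together with ##\sum_i \pi_i = 1##:

```python
import numpy as np

P = np.array([[.75, .25, 0, 0],
              [.25, .5, .25, 0],
              [0, .25, .5, .25],
              [0, 0, .25, .75]])

# Stationary distribution: solve pi @ P = pi together with sum(pi) = 1.
# Stack the (singular) balance equations with the normalization row and
# solve the overdetermined system by least squares.
A = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.array([0., 0., 0., 0., 1.])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print(pi)
```

Because every column of this particular ##P## also sums to 1 (it is doubly stochastic), the uniform distribution satisfies ##\pi P = \pi## exactly, and that is what the solver returns.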
 
Last edited:
  • #5
ElijahRockers said:
Am I setting up the chain incorrectly? Basically, 50% of those 75% transition probabilities account for him running barefoot, but I don't see any way to include that in the transition matrix.

I think your matrix ##P## is OK, so I will deal mostly with other aspects of your question.

If ##q(n) = (q_0(n), q_1(n),q_2(n),q_3(n))## (a row-vector) is the vector of state-occupancy probabilities at time ##n = 0,1,2, \ldots##, then we have
[tex] q(n+1) = q(n) \cdot P [/tex]
Note: this assumes the usual (but not universal) convention that ##p_{ij}## is the 1-step probability of going from state ##i## to state ##j##, so that the rows of ##P## sum to 1. Occasionally one sees the opposite convention, where the columns sum to 1 instead. (In your case it does not really matter, since ##P## is symmetric and therefore equal to its own transpose.)

If we let ##n## denote the "leaving time", he runs barefoot with probability 1/2 in states 0 and 3, so the probability ##b(n)## that he runs barefoot at time ##n## is
[tex] b(n) = \frac{1}{2} q_0(n) + \frac{1}{2} q_3(n) [/tex]
Can you find the long-run average of the ##b(n)##, or the limiting value ##b(\infty) = \lim_{n \to \infty} b(n)##?
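Numerically, the recursion above converges quickly; a quick sketch (the initial distribution is an arbitrary assumption):

```python
import numpy as np

P = np.array([[.75, .25, 0, 0],
              [.25, .5, .25, 0],
              [0, .25, .5, .25],
              [0, 0, .25, .75]])

# Assume, say, that all three pairs start at the front door.
q = np.array([0., 0., 0., 1.])
for _ in range(200):  # iterate q(n+1) = q(n) . P
    q = q @ P

b_inf = 0.5 * q[0] + 0.5 * q[3]  # b(n) for large n
print(b_inf)
```

The chain is irreducible and aperiodic (it has self-loops), so ##q(n)## converges to the stationary distribution regardless of the starting state, and ##b(n)## converges with it.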

You could also try to incorporate the bare/shod running conditions by redefining "state" to mean "##j## pairs of shoes were at the front door and I am now running shod" or "##j## pairs of shoes were at the front door and I am now running barefoot". If we label the states as (0,s), (1,s), (2,s), (3,s), (0,b), (1,b), (2,b), (3,b) [s = shod, b = bare], then we have an 8-state Markov chain. However, you can easily recognize that states (1,b) and (2,b) do not really exist (because if there were 1 or 2 pairs of shoes at the front door he would not be running barefoot, no matter which door he used to go on his run). In other words, states (1,b) and (2,b) can never be occupied: we cannot start in either of them, and we can never reach them from any other state, so we might as well throw them out. Therefore, there are really just 6 states: (0,s), (1,s), (2,s), (3,s), (0,b), (3,b). You can work out the 6×6 1-step transition probability matrix, and then get the long-run barefoot proportion from the long-run probabilities of being in states (0,b) or (3,b).
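That 6-state construction can be sketched in code as well (the state ordering and labels below are my own choice, not anything canonical):

```python
import numpy as np

# State order: (0,s), (1,s), (2,s), (3,s), (0,b), (3,b)  ->  indices 0..5.
# Transitions depend only on the current front-door count, not on the tag.
counts = [0, 1, 2, 3, 0, 3]
index = {(0, 's'): 0, (1, 's'): 1, (2, 's'): 2, (3, 's'): 3,
         (0, 'b'): 4, (3, 'b'): 5}  # (1,b) and (2,b) are unreachable, omitted

P6 = np.zeros((6, 6))
for s, c in enumerate(counts):
    for exit_front in (True, False):
        for enter_front in (True, False):
            has_shoes = (c > 0) if exit_front else (c < 3)
            nc = c
            if has_shoes:
                if exit_front:
                    nc -= 1  # pair taken from the front door
                if enter_front:
                    nc += 1  # pair dropped at the front door
            tag = 's' if has_shoes else 'b'
            P6[s, index[(nc, tag)]] += 0.25

# Stationary distribution via a large matrix power (chain is aperiodic).
pi = np.linalg.matrix_power(P6, 200)[0]
print(pi[4] + pi[5])  # long-run barefoot proportion
```

The barefoot proportion then falls out as the total stationary probability of the two "b" states, matching what the 4-state chain gives.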
 
Last edited:
  • #6
Ray Vickson said:
I think your matrix ##P## is OK, so I will deal mostly with other aspects of your question. [...]
Thanks, I think this will help. I tried making a state diagram of shod and bare earlier, but I switched to the shoe-count space because I thought it would be easier. I'll try it that way instead when I get home.
 
  • #7
Elijah: don't give up on getting the steady-state distribution from your matrix. It's not difficult.
Hint: Ray included the equation for ##q(n)##. What happens when ##n## gets large?
 
  • #8
wabbit said:
Ray included the equation for q(n).
Yes, but if you notice the symmetry, it's very easy to get the steady-state distribution. It immediately comes down to only one unknown.
 
  • #9
wabbit said:
Elijah: don't give up on getting the steady-state distribution from your matrix. It's not difficult.
Hint: Ray included the equation for ##q(n)##. What happens when ##n## gets large?

Solved. I used the 4-state matrix, found the long-run occupancy probabilities of states 0 and 3, and multiplied each by a half. Thanks, y'all. :)
 
  • #10
Well done sir! :)
 

FAQ: What is the proportion of time the runner runs barefooted using a Markov Chain?

1. What is a Markov Chain?

A Markov Chain is a mathematical model describing a sequence of random events in which the probability of each next state depends only on the current state, not on the full history. It is used to model systems with random, time-dependent behavior.

2. How do I set up a Markov Chain?

To set up a Markov Chain, you will need to identify the states of the system, determine the transition probabilities between states, and create a transition matrix. This matrix will be used to calculate the probability of transitioning from one state to another.

3. What is a transition matrix?

A transition matrix is a square matrix that shows the probabilities of transitioning from one state to another in a Markov Chain. Each row represents the current state, and each column represents the next state. The values in the matrix represent the probability of transitioning from the current state to the next state.
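A minimal, made-up two-state illustration (the state labels and numbers here are hypothetical, not from the thread above):

```python
# Hypothetical 2-state chain: state 0 = "sunny", state 1 = "rainy".
# Rows are the current state, columns are the next state.
P = [[0.9, 0.1],   # from sunny: 90% stay sunny, 10% turn rainy
     [0.5, 0.5]]   # from rainy: 50/50 either way

# P[0][1] is the probability of going from state 0 to state 1 in one step.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)  # each row sums to 1
```

The row-sum check is a useful habit: any valid transition matrix under this convention must have every row summing to exactly 1.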

4. How do I interpret the results of a Markov Chain?

The steady-state results of a Markov Chain can be interpreted as the long-run proportion of time that the system spends in each state. This can be used to predict the future behavior of the system and to identify patterns or trends.

5. Can a Markov Chain be used for any type of system?

A Markov Chain can be used to model a wide range of systems, including biological, physical, and social systems. However, it is important to note that a Markov Chain assumes the Markov property (the next state depends only on the current state) and, in the time-homogeneous case, that the transition probabilities remain constant over time.
