Question on Discrete Parameter Markov Chains

In summary, the probability of return to some state in a Markov chain at time n can be expressed in terms of the probabilities of return to that state at time n - k and the probabilities of first return at time k. The formula follows from enumerating the ways a return at time n can occur: time n may be the first return, or the chain may return at time 1 and then again at time n, or return at time 2 and then again at time n, and so on.
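In the usual notation, writing $p_{ii}^{(n)}$ for the probability of a return to state $i$ at time $n$ (with $p_{ii}^{(0)} = 1$) and $f_{ii}^{(k)}$ for the probability that the first return occurs at time $k$, this decomposition reads

$$p_{ii}^{(n)} = \sum_{k=1}^{n} f_{ii}^{(k)} \, p_{ii}^{(n-k)}, \qquad n \ge 1.$$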
  • #1
AD
I am required to find a formula expressing the probability of return to some state in a Markov chain at time n in terms of the probability of return to that state at time n - k and the probability of first return at time k. I cannot find this in my notes, and I have tried looking at several online resources. Can anyone help me?
 
  • #2
It is no longer necessary for you to answer this question as I have just discovered the answer elsewhere.
 
  • #3
You could just derive it. How many ways can you return at time n?

One way is to have time n be the first time you return.
A second way is to return at time 1, and then have n be the next time you return.
Yet another way is to return at time 2, and then have n be the next time you return...
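Summing those cases gives exactly the recursion asked for in the original post. A quick numerical sanity check of the identity p_ii(n) = sum_k f_ii(k) p_ii(n-k) is easy to write in Python; the 3-state transition matrix and the choice of state below are made up purely for illustration:

import numpy as np

# Hypothetical 3-state chain, used only to illustrate the identity;
# any stochastic matrix would do.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
i = 0      # the state whose returns we track
N = 12     # horizon for the check

# p[n] = probability of being back in state i after n steps (any return)
p = [np.linalg.matrix_power(P, n)[i, i] for n in range(N + 1)]

# f[n] = probability that the FIRST return to i happens at step n:
# zeroing column i forbids visits to i during the first n - 1 steps.
T = P.copy()
T[:, i] = 0.0
f = [0.0] + [(np.linalg.matrix_power(T, n - 1) @ P)[i, i] for n in range(1, N + 1)]

# Verify p_n = sum_{k=1}^{n} f_k * p_{n-k}
for n in range(1, N + 1):
    rhs = sum(f[k] * p[n - k] for k in range(1, n + 1))
    assert abs(p[n] - rhs) < 1e-10, (n, p[n], rhs)
print("first-return decomposition verified up to n =", N)

The first-return probabilities are computed directly by blocking visits to state i (the zeroed column), so the check does not simply restate the recursion it is verifying.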
 

FAQ: Question on Discrete Parameter Markov Chains

What is a discrete parameter Markov chain?

A discrete parameter Markov chain is a stochastic process in which the state changes at discrete time steps and the distribution of the next state depends only on the current state (the Markov property). It is specified by a finite or countable set of states and a matrix of one-step transition probabilities between those states.
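As a concrete (made-up) example in Python, a finite discrete-time chain is fully described by a transition matrix whose rows sum to one, and one step of the process is a single draw from the row of the current state:

import numpy as np

# Hypothetical two-state weather chain: state 0 = sunny, state 1 = rainy
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

rng = np.random.default_rng(0)
state = 0
path = [state]
for _ in range(10):
    # the next state depends only on the current state (Markov property)
    state = rng.choice(2, p=P[state])
    path.append(state)
print(path)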

How is a discrete parameter Markov chain different from a continuous parameter Markov chain?

The main difference is that in a discrete parameter Markov chain the state can only change at discrete time steps (n = 0, 1, 2, ...), while in a continuous parameter Markov chain the state can change at any instant. Consequently, a discrete parameter chain is specified by one-step transition probabilities, whereas a continuous parameter chain is usually specified by transition rates, since transitions occur after waiting times of arbitrary (real-valued) length.

What are the applications of discrete parameter Markov chains?

Discrete parameter Markov chains have a wide range of applications, including in the fields of biology, finance, physics, and computer science. They can be used to model processes such as genetic inheritance, stock market fluctuations, particle movement, and network traffic.

What is the difference between a first-order and higher-order Markov chain?

A first-order Markov chain is a discrete parameter Markov chain in which the probability of transitioning to the next state depends only on the current state. In a higher-order Markov chain the transition probability depends on the current state together with a fixed number of previous states; an order-m chain looks back m steps, and it can always be recast as a first-order chain on an enlarged state space of m-tuples.
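As a small sketch (the states and probabilities below are invented for illustration), a second-order chain keeps the last two states as its "memory", and the same model can be rewritten as a first-order chain whose states are pairs:

# Hypothetical second-order chain on states 'A' and 'B':
# the next-state distribution depends on the last TWO states.
second_order = {
    ('A', 'A'): {'A': 0.7, 'B': 0.3},
    ('A', 'B'): {'A': 0.4, 'B': 0.6},
    ('B', 'A'): {'A': 0.2, 'B': 0.8},
    ('B', 'B'): {'A': 0.5, 'B': 0.5},
}

# The same model as a FIRST-order chain on the enlarged state space of pairs:
# the pair (x, y) moves to (y, z) with probability second_order[(x, y)][z].
first_order = {
    (x, y): {(y, z): p for z, p in dist.items()}
    for (x, y), dist in second_order.items()
}
print(first_order[('A', 'B')])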

How is the stability of a discrete parameter Markov chain determined?

The stability of a discrete parameter Markov chain is determined by the long-term behavior of the system. A Markov chain is considered stable if, over time, the probabilities of being in each state converge to a steady-state (stationary) distribution; for a finite chain this is guaranteed when the chain is irreducible and aperiodic. The stationary distribution can be found using techniques such as eigenvalue analysis (it is a left eigenvector of the transition matrix with eigenvalue 1) or simulation.
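A minimal sketch of the eigenvalue approach, reusing the made-up two-state matrix from the earlier example: the steady-state distribution is the left eigenvector of the transition matrix for eigenvalue 1, normalised to sum to one.

import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Solve pi P = pi by finding the eigenvector of P^T for the eigenvalue closest to 1
vals, vecs = np.linalg.eig(P.T)
v = vecs[:, np.argmin(np.abs(vals - 1))].real
pi = v / v.sum()
print(pi)        # steady-state distribution, here [0.8333..., 0.1666...]
print(pi @ P)    # multiplying by P gives pi back, confirming stationarity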
