How often do you return to the origin in a random walk?

  • #1
redphoton
How often do you return AND CROSS the origin in a random walk?

Is there a probability distribution for the number of times you can expect the lead to change in a coin-tossing contest between two players (heads vs. tails)?

What is the most likely number of changes in the lead?
 
  • #2
redphoton said:
Is there a probability distribution for the number of times you can expect the lead to change in a coin-tossing contest between two players (heads vs. tails)?

What is the most likely number of changes in the lead?

Welcome to the PF. What is the context of the question? Is it for school work?
 
  • #3
No, this is not for school. I just found the random walk article on the Wolfram MathWorld website and this question occurred to me. Can you help me? Thanks.
 
  • #4
Probability is not one of my strong points, unfortunately. But it looks like you asked the question in the right place -- hopefully you'll get some good feedback in the next day or so.
 
  • #5
This is the random walk on the integers with probability 1/2 of stepping each way? Then you cross the origin infinitely many times (with probability 1).
 
  • #6
g_edgar said:
This is the random walk on the integers with probability 1/2 of stepping each way? Then you cross the origin infinitely many times (with probability 1).

I think the question is:

Given a random walk starting at the origin and moving left or right with probability 1/2, what is the expected number (extension: what is the distribution) of crossings of the origin after N steps?

A crossing in this context is moving from a positive number, through any number of nonnegative numbers, to a negative number, or from a negative number, through any number of nonpositive numbers, to a positive number.
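For small N you can check this definition directly by brute force. Here is a minimal Python sketch (my own, not from the thread; the function names are just illustrative) that enumerates all 2^N paths and averages the number of crossings under exactly the definition above.

Code:
# A crossing: the walk is on one side of 0, possibly touches 0 a few times,
# and then shows up on the other side.
from itertools import product

def count_crossings(steps):
    """Count sign changes of the running sum of a +/-1 step sequence."""
    pos, last_sign, crossings = 0, 0, 0
    for s in steps:
        pos += s
        if pos == 0:
            continue                  # at the origin: sign undecided, keep the old one
        sign = 1 if pos > 0 else -1
        if last_sign != 0 and sign != last_sign:
            crossings += 1
        last_sign = sign
    return crossings

def expected_crossings_bruteforce(n):
    """Average number of crossings over all 2^n equally likely paths."""
    return sum(count_crossings(p) for p in product((+1, -1), repeat=n)) / 2 ** n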
 
Last edited:
  • #7
First we need to define what it means to cross the origin. Let us say that we have crossed the origin if we are at position +1 and get two tails (-1 each) in a row, or if we are at position -1 and get two heads (+1 each) in a row.

Let f(n,k) be the number of ways to cross the origin in n steps given that you start k spaces to the right of it (k may be negative). A path that crosses the origin twice is counted twice. That is, if P(n,k) is the set of all paths of length n starting at position k, and g(p) is the number of times path p crosses the origin, then
[tex]f(n,k) = \sum_{p \in P(n,k)} g(p)[/tex].
We seek to find f(n,0).

Now, if we have zero or one moves left there is no way to cross the origin:
f(0,k) = f(1,k) = 0

Otherwise, if we have at least 2 moves left, split each path according to its first two tosses (heads = +1, tails = -1); what remains is a path of length n-2 starting at the new position:
P(n,k) = {'hh' + p | p is in P(n-2,k+2)} union {'tt' + p | p is in P(n-2,k-2)} union {'ht' + p | p is in P(n-2,k)} union {'th' + p | p is in P(n-2,k)}. Call these four subsets A,B,C,D.

One caveat: this split accounts for every crossing only if no crossing can straddle the first pair of tosses, i.e. only if the walk cannot be standing at ±1 after a single toss. That is guaranteed whenever k is odd, so assume for now that k is odd.

C and D each contribute f(n-2,k) to the total of f(n,k), so together they contribute 2f(n-2,k). If k is not 1 or -1, then the first two tosses cannot cross the origin, so A contributes f(n-2,k+2) and B contributes f(n-2,k-2). So if k is not 1 or -1, we have
f(n,k) = f(n-2,k-2) + 2f(n-2,k) + f(n-2,k+2)

If k = 1, then the pair 'tt' crosses the origin, so each element of B is such that g('tt' + p) = 1 + g(p). All the strings together in this set contribute
[tex]\sum_{p \in P(n-2,-1)} (1 + g(p)) = 2^{n-2} + f(n-2,k-2)[/tex] towards the total of f(n,k). A similar thing holds for A (the pair 'hh') when k = -1. So we have:
f(n,k) = 2^(n-2) + f(n-2,k-2) + 2f(n-2,k) + f(n-2,k+2)
when k = 1 or k = -1.

In summary, for odd k:
f(0,k) = f(1,k) = 0
--if n > 1 and|k| != 1:
f(n,k) = f(n-2,k-2) + 2f(n-2,k) + f(n-2,k+2)
--if n > 1 and |k| = 1:
f(n,k) = 2^(n-2) + f(n-2,k-2) + 2f(n-2,k) + f(n-2,k+2)

The walk in the original question starts at k = 0, which is even. From an even position the very first toss cannot begin a crossing (we are not standing at ±1), so for even k we can peel off a single toss instead:
f(n,k) = f(n-1,k-1) + f(n-1,k+1),
and in particular f(n,0) = 2 f(n-1,1) by symmetry. Together with the odd-k recursion above, this is enough information to calculate f(n,k) for any n and k. To get the expected number of crossings, simply take f(n,k)/2^n.
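Here is a minimal Python sketch of that recursion (my own code, not from the thread; the names f and expected_crossings are just illustrative), using memoization so that larger n are cheap.

Code:
# f(n, k): total number of origin crossings, summed over all 2^n equally
# likely +/-1 paths of length n that start at position k.
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n, k):
    if n <= 1:
        return 0                      # no room left for a crossing
    if k % 2 == 0:
        # Even start: the first toss cannot begin a crossing, so peel one toss.
        return f(n - 1, k - 1) + f(n - 1, k + 1)
    # Odd start: peel the first two tosses.
    total = f(n - 2, k - 2) + 2 * f(n - 2, k) + f(n - 2, k + 2)
    if abs(k) == 1:
        total += 2 ** (n - 2)         # the pair that crosses adds one crossing per continuation
    return total

def expected_crossings(n, k=0):
    """Expected number of origin crossings in n fair tosses starting at k."""
    return f(n, k) / 2 ** n

For example, expected_crossings(3) gives 0.25: of the eight length-3 paths from the origin, only 'htt' and 'thh' contain a crossing.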
 
  • #8
I will offer a very visual, heuristic explanation. If you want to follow it, then you might want to draw what I describe. I have offered one crude ascii-art diagram to help you understand what I'm talking about. I hope that the drawing is straightforward.

I'm assuming that you want to consider the difference between the number of accumulated heads, which I will call nH, and the number of accumulated tails, which I will call nT. Then, for instance, given nH-nT≡Δn>0 after ti tosses, what is the probability that Δn<0 after tf-ti more tosses?

You can consider a 2-dimensional discrete space, where one dimension is the toss number, t, and the other dimension is the accumulated difference of the results, Δn. The support of the distribution is shaped like a triangular lattice whose lattice sites make a diamond pattern. If you know that Δn=Δni at t=ti with 100% probability, and you want to find the probability that Δn=Δnf at t=tf satisfies the condition sgn(Δnf)=-sgn(Δni), then the relevant distribution is supported on the finite triangular lattice with vertices at (ti,Δni), (tf,Δni+tf-ti), and (tf,Δni-tf+ti). In particular, consider the line of lattice points at t=tf. The probability that you are interested in is the sum of the probabilities of every point on this line such that sgn(Δnf)=-sgn(Δni). This condition can be represented by drawing the line Δn=0 and then considering every lattice point on the line t=tf that is on the side of Δn=0 opposite from the starting point, (ti,Δni).

This all sounds complicated, but a simple diagram should make it quite clear. I will offer one that I hope gives you the idea. The only relevant features of the diagram are the top row, the bottom apex, and the vertical line; I drew the entire diagram for clarity.
Code:
                |
+   +   +   +   0   -   -
  +   +   +   + | -   -
    +   +   +   0   -
      +   +   + | - 
        +   +   0
          +   + |
            +   |
                |
Bottom tip + represents (ti,Δni)=(ti,2). The top row of ++++0-- represents the possible outcomes after tf-ti=6 more tosses. I have used +, 0, and - for the lattice points to emphasize where the accumulated value, Δn, is either positive, zero, or negative, respectively. The vertical line of vertical bars is the Δn=0 line. So, your relevant probability is the probability of the two - points on the right end of the top row. Caution: the probability of these two points is not simply 2/7!

You can determine the probability of each relevant lattice point in two ways:

1) If you know the probability distribution of the walk starting at t=0 with Δn=0, then simply shift everything by redefining t→t-ti and Δn→Δn-Δni, and then add up the probabilities of the relevant lattice points.

2) Since you are specifying a fair coin, you can determine the probability of each value of Δn at t=tf by counting the number of allowed paths from (ti,Δni) to (tf,Δn), and then dividing by the total number of allowed paths from (ti,Δni) to any lattice point at t=tf.

A third way of reading the probability off the diagram occurred to me, although it is only approximate.
3) You can count the number of diagonals between nearest neighbors that connect to a - on top, and then divide by the total number of diagonals between nearest neighbors. This makes full use of the diagram as drawn and is quick for t <~ 10, but it is only a rough estimate: every path is equally likely, while different diagonals are used by different numbers of paths, so counting edges is not the same as counting paths.

For the diagram above, the diagonal count gives 8/42 ≈ 19%, whereas counting paths (method 2) gives the exact answer, (C(6,0) + C(6,1))/2^6 = 7/64 ≈ 11%. That is, if the coin tosses are two counts in your favor, then after six more tosses there is about an 11% probability that you will be behind.
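As a cross-check, here is a tiny Python sketch of method 2 (my own code; prob_opposite_sign is an illustrative name), counting paths with binomial coefficients instead of counting diagonals.

Code:
# Probability that the running difference dn = nH - nT has the opposite sign
# after m more fair tosses, given that it currently equals dn_i.
from math import comb

def prob_opposite_sign(dn_i, m):
    favourable = 0
    for heads in range(m + 1):
        dn_f = dn_i + heads - (m - heads)   # final difference after m more tosses
        if dn_f * dn_i < 0:                 # strictly the opposite sign; 0 does not count
            favourable += comb(m, heads)    # orderings with this many heads
    return favourable / 2 ** m

print(prob_opposite_sign(2, 6))   # 0.109375, i.e. 7/64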
 
Last edited:
  • #9
The technique I described only works for computing the expected value, not the exact distribution. To compute the exact distribution, add a third argument to f, so we have f(n,k,j), which is the number of paths of n steps starting from position k that cross the origin exactly j times. Using the same reasoning as in my other post (again treating odd and even starting positions separately):
f(0,k,0) = 1 and f(1,k,0) = 2 (the empty path and both one-step paths have no crossings)
f(0,k,j) = f(1,k,j) = 0 (if j > 0)
--if n > 1 and k is odd:
f(n,k,j) = f(n-2,k-2,j) + 2f(n-2,k,j) + f(n-2,k+2,j) (if |k| != 1)
f(n,k,j) = f(n-2,k-2,j-1) + 2f(n-2,k,j) + f(n-2,k+2,j) (if k = 1)
f(n,k,j) = f(n-2,k-2,j) + 2f(n-2,k,j) + f(n-2,k+2,j-1) (if k = -1)
--if n ≥ 1 and k is even (including the starting position k = 0), peel off a single toss:
f(n,k,j) = f(n-1,k-1,j) + f(n-1,k+1,j)
and to get the probability, divide by 2^n as before.
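A short Python sketch of this three-argument recursion (again my own code, with illustrative names), giving the full distribution of the number of crossings:

Code:
# f3(n, k, j): number of +/-1 paths of length n starting at position k
# that cross the origin exactly j times.
from functools import lru_cache

@lru_cache(maxsize=None)
def f3(n, k, j):
    if j < 0:
        return 0
    if n == 0:
        return 1 if j == 0 else 0     # the empty path has no crossings
    if n == 1:
        return 2 if j == 0 else 0     # both one-step paths have no crossings
    if k % 2 == 0:
        # Even start: peel a single toss; it cannot be part of a crossing.
        return f3(n - 1, k - 1, j) + f3(n - 1, k + 1, j)
    if k == 1:
        # The pair 'tt' crosses and sends the walk to -1, using up one crossing.
        return f3(n - 2, -1, j - 1) + 2 * f3(n - 2, 1, j) + f3(n - 2, 3, j)
    if k == -1:
        # Symmetrically, the pair 'hh' crosses from -1.
        return f3(n - 2, -3, j) + 2 * f3(n - 2, -1, j) + f3(n - 2, 1, j - 1)
    # |k| >= 3 and odd: no pair starting here can cross.
    return f3(n - 2, k - 2, j) + 2 * f3(n - 2, k, j) + f3(n - 2, k + 2, j)

def crossing_distribution(n, k=0):
    """P(exactly j crossings in n tosses starting at k), for j = 0 .. n//2."""
    return [f3(n, k, j) / 2 ** n for j in range(n // 2 + 1)]

For example, crossing_distribution(3) gives [0.75, 0.25]: exactly two of the eight length-3 paths ('htt' and 'thh') contain a crossing.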
 

FAQ: How often do you return to the origin in a random walk?

How is "return to the origin" defined in a random walk?

In a random walk, "return to the origin" refers to the event that, after some number of steps, the walker is back at its starting point, the origin.

What is the probability of returning to the origin in a random walk?

The answer depends on the dimension (this is Pólya's theorem). For the simple symmetric walk in one or two dimensions, the walker returns to the origin with probability 1, and in fact infinitely often. In three or more dimensions the probability of ever returning is strictly less than 1 (about 0.34 in three dimensions), so the walker eventually wanders off for good.
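A rough Monte Carlo sketch (purely illustrative, my own code and parameter choices) of how the chance of at least one return within a fixed number of steps depends on the dimension:

Code:
# Estimate P(the walk revisits the origin at least once within n_steps)
# for the simple symmetric random walk in dimensions 1, 2 and 3.
import random

def returns_to_origin(dim, n_steps, rng):
    pos = [0] * dim
    for _ in range(n_steps):
        axis = rng.randrange(dim)          # pick a coordinate direction
        pos[axis] += rng.choice((-1, 1))   # step +1 or -1 along it
        if all(c == 0 for c in pos):
            return True
    return False

def estimate_return_probability(dim, n_steps=1000, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(returns_to_origin(dim, n_steps, rng) for _ in range(trials)) / trials

for d in (1, 2, 3):
    print(d, estimate_return_probability(d))

The estimates for d = 1 and 2 approach 1 as n_steps grows, while the d = 3 estimate stays well below 1.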

Does the shape of the walk affect the likelihood of returning to the origin?

Confining the walk to a bounded region (for example with reflecting boundaries) guarantees that it keeps returning to the origin, whatever the region's shape; the shape and size of the region mainly affect how long a typical return takes rather than whether it happens.

Can the frequency of return to the origin be calculated analytically?

Yes. For the one-dimensional walk the probability of being back at the origin after 2k steps is C(2k, k)/2^(2k), which behaves like 1/√(πk) for large k, so the expected number of returns within n steps grows like √(2n/π). In higher dimensions analogous formulas exist, but the calculations are more involved and often done numerically.
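For the one-dimensional case, here is a short sketch (my own, for illustration) that evaluates the exact return probability C(2k, k)/2^(2k) and sums it to get the expected number of returns, alongside the √(2n/π) growth rate mentioned above.

Code:
# Exact probability of being back at the origin after t steps of a 1D fair walk,
# and the expected number of such returns during the first n steps.
from math import comb, pi, sqrt

def prob_at_origin(t):
    """P(S_t = 0) = C(t, t/2) / 2^t for even t, and 0 for odd t."""
    if t % 2:
        return 0.0
    return comb(t, t // 2) / 2 ** t

def expected_returns(n):
    """Expected number of visits to the origin during steps 1..n."""
    return sum(prob_at_origin(t) for t in range(1, n + 1))

n = 10_000
print(expected_returns(n), sqrt(2 * n / pi))   # these agree to leading order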

How does the frequency of return to the origin change with different parameters in a random walk?

The probability of finding the walker at the origin on a given step decreases as the number of steps grows (like 1/√n in one dimension), but in one and two dimensions the total number of returns still grows without bound. Increasing the dimension makes returns rarer: in three or more dimensions the walk returns only finitely many times.
