Average Expected Stopping Time for Two Consecutive Heads Flips

In summary, the conversation discusses the average time it takes to get two heads in a row when flipping an unbiased coin every second. One approach defines P(k) as the probability of first getting two heads after k seconds and computes the average as the sum 0*P(0) + 1*P(1) + 2*P(2) + ..., which works out to 6 seconds. Another approach calculates the probability of not getting two heads in successive tosses and treats the "average" as the first time that probability falls to 0.5 or less; the thread then debates whether this break-even point (the median) is a valid stand-in for the expectation, which is properly the probability-weighted sum of the values the waiting time can take.
  • #1
cappadonza
27
0
This is not a homework question; it was an interview question.

Question:

Suppose we keep flipping an unbiased coin every second. How long do you have to wait, on average, before you get two heads in a row?

There is a similar question here: http://www.wilmott.com/messageview.cfm?catid=26&threadid=16483 but I don't follow their arguments. Hoping someone could help me out.
 
  • #2
Let's call P(k) the probability of getting those two heads right after k seconds.
So, the average time is
0*P(0) + 1*P(1) + 2*P(2) + ... + n*P(n) + ...
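To see this sum numerically, here is a quick sketch in Python. It relies on a standard fact that is not stated in this thread: for a fair coin, the number of length-n sequences whose first HH ends exactly at toss n is the Fibonacci number F(n-1), with F(1) = F(2) = 1, so P(first HH at toss n) = F(n-1)/2^n.

```python
# Numerically evaluate the weighted sum: expected toss count for the
# first run of two heads.  P(first HH at toss n) = F(n-1) / 2**n, where
# F is the Fibonacci sequence with F(1) = F(2) = 1 (standard identity,
# assumed here rather than derived in the thread).
fib_prev, fib_next = 1, 1      # F(1), F(2)
expected = 0.0
for n in range(2, 200):        # terms beyond n = 200 are negligible
    expected += n * fib_prev / 2**n
    fib_prev, fib_next = fib_next, fib_prev + fib_next
print(expected)                # converges to 6.0
```

The first few terms already suggest the pattern: 2·(1/4) + 3·(1/8) + 4·(2/16) + 5·(3/32) + ..., and the full sum is 6.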
 
  • #3
Thanks, I understand that. Suppose the probability of heads is 0.5; apparently the answer is 6 seconds. But how does
0*P(0) + 1*P(1) + 2*P(2) + ... + n*P(n) + ... = 6? That's what I don't understand yet.
 
  • #4
For an answer see thread "number of rolls of a die to get a particular #"
Marc
 
  • #5
The probability of not getting two heads in the first two tosses is 3/4. Consider the average time as the time (or number of tosses) it takes to get a probability of 0.5 or less.
 
  • #6
Thanks all for your replies. I understand the idea, but I'm still not sure how the average turns out to be 6. I think I am getting it; I need to think it over and see what I come up with.
 
  • #7
cappadonza said:
Thanks all for your replies. I understand the idea, but I'm still not sure how the average turns out to be 6. I think I am getting it; I need to think it over and see what I come up with.

P(2 heads in a row) = 1 - (0.75)^3 ≈ 0.58. That's three possible pairs (1,2; 3,4; 5,6), totaling 6 tosses or six seconds. However, if you consider possible pairs 1,2; 2,3; 3,4 you should be able, on average, to get two in a row in four tosses. Which do you think is correct?
 
  • #8
SW VandeCarr said:
The probability of not getting two heads in the first two tosses is 3/4. Consider the average time as the time (or number of tosses) it takes to get a probability of 0.5 or less.

I don't understand that reasoning. Wouldn't that give the median time (approximately), and not the average?
 
  • #9
techmologist said:
I don't understand that reasoning. Wouldn't that give the median time (approximately), and not the average?

If you ran an experiment, what would the expectation be for the number of tosses before a pair of heads? I called it the "average" and I would expect the median would approach the mean for multiple fair trials. Do you disagree?
 
  • #10
SW VandeCarr said:
If you ran an experiment, what would the expectation be for the number of tosses before a pair of heads? I called it the "average" and I would expect the median would approach the mean for multiple fair trials. Do you disagree?

It might, but it might not. I don't know for this particular problem. Some waiting times have distributions that are so weird that the median isn't anywhere near the mean. The waiting time until (# of heads - # of tails) = 1 has infinite mean, but I'm pretty sure the median is finite.

EDIT: I guess the median in my example would be 1, since the probability of starting out with a toss of heads is 1/2. Or is it 1.5? I forget exactly how median is defined for discrete random variables.
 
  • #11
techmologist said:
It might, but it might not. I don't know for this particular problem. Some waiting times have distributions that are so weird that the median isn't anywhere near the mean. The waiting time until (# of heads - # of tails) = 1 has infinite mean, but I'm pretty sure the median is finite.

EDIT: I guess the median in my example would be 1, since the probability of starting out with a toss of heads is 1/2. Or is it 1.5? I forget exactly how median is defined for discrete random variables.

The parameter is the mean number of tosses to obtain the first run of two heads (success). The usual method of calculating the probability of success over successive identical independent binomial trials (S or F) is to multiply the probabilities of failure and subtract the product from one: P(S) = 1 - P(F)^k, where k is the number of trials. I took the expectation to be the lowest value of k where P(S) = 0.5 or more. Note the unit trial is two tosses, not one.
 
  • #12
SW VandeCarr said:
I took the expectation to be lowest value of k where P(S) = 0.5 or more. ...

That's the part I don't get. That isn't the definition of expectation. Why do you take it to be that?

EDIT: Specifically, I think the idea that the expectation and the median will be the same needs justification. It isn't true in general.
 
  • #13
techmologist said:
That's the part I don't get. That isn't the definition of expectation. Why do you take it to be that?

EDIT: Specifically, I think the idea that the expectation and the median will be the same needs justification. It isn't true in general.

The expectation can simply be the ponderated (weighted) sum of values X can take. The point where the probability of success exceeds that of failure seems reasonable for a binomial process. What would you choose as the expected value, p=1? It's actually not the mean, because the actual mean is 4.8125 in a fitted model, or 6 tosses if we take discrete trials to be non-overlapping sets of two coin flips (and that's the real question: 1,2; 3,4; 5,6 or 1,2; 2,3; 3,4).

http://www.aiaccess.net/English/Glossaries/GlosMod/e_gm_expectation.htm
 
  • #14
The article you link to defines expectation as the "ponderated" or weighted sum (values weighted by their respective probabilities). That is not what you are describing. "Average", "mean", "expectation", and "expected value" are all interchangeable. That's what I assumed the OP meant when he said "on average". In a distribution where the mean is infinite, the median might be a better "typical value" statistic. But here the mean exists. You can find it by conditioning on when the first tail comes up. If the random variable T is the waiting time for k heads in a row, and the random variable U is the waiting time for the first tail to appear, then you can find the expected value of T like this:

E[T] = E[E[T|U]] = k*Pr(U>k) + Pr(U=1)*(E[T]+1) + Pr(U=2)*(E[T]+2) + ... + Pr(U=k)*(E[T]+k)

since E[T|U=j] = E[T]+j for j < k+1. If you do this for k=2, two heads in a row, you get E[T] = 6.
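A quick Monte Carlo sketch in Python (a sanity check, not part of the derivation) agrees with the conditioning argument; the helper function and trial count are my own choices:

```python
import random

def average_wait_for_run(k, trials=200_000, seed=42):
    """Estimate the mean number of fair-coin tosses until k heads in a row."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        run = tosses = 0
        while run < k:                 # keep tossing until the run reaches k
            tosses += 1
            run = run + 1 if rng.random() < 0.5 else 0   # heads extends, tails resets
        total += tosses
    return total / trials

print(average_wait_for_run(2))   # ~6.0: two heads in a row
print(average_wait_for_run(3))   # ~14.0: three heads in a row
```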
 
  • #15
techmologist said:
The article you link to defines expectation as the "ponderated" or weighted sum (values weighted by their respective probabilities). That is not what you are describing. "Average", "mean", "expectation", and "expected value" are all interchangeable. That's what I assumed the OP meant when he said "on average". In a distribution where the mean is infinite, the median might be a better "typical value" statistic. But here the mean exists. You can find it by conditioning on when the first tail comes up. If the random variable T is the waiting time for k heads in a row, and the random variable U is the waiting time for the first tail to appear, then you can find the expected value of T like this:

E[T] = E[E[T|U]] = k*Pr(U>k) + Pr(U=1)*(E[T]+1) + Pr(U=2)*(E[T]+2) + ... + Pr(U=k)*(E[T]+k)

since E[T|U=j] = E[T]+j. If you do this for k=2, two heads in a row, you get E[T] = 6.

And that's exactly what I got in terms of tosses.(see post 7)
 
  • #16
SW VandeCarr said:
And that's exactly what I got in terms of tosses.

Yes, but how could you know beforehand that the median and mean, two different statistics, would be the same? Consider the problem of how many rolls it takes to roll a five, say, on a single die. The break-even point is between 3 and 4 rolls. 3 rolls gives you P<0.5 for success, while 4 rolls gives you P>0.5 for success. But the mean number of rolls is 6.
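A few lines of Python make the gap concrete; the only ingredient is P(success within n rolls) = 1 - (5/6)^n:

```python
# Break-even point (median) vs mean for the number of die rolls to get a five.
p = 1 / 6                 # chance of rolling a five on any single roll
q = 1 - p
print(1 - q**3)           # ~0.4213: still under 1/2 after 3 rolls
print(1 - q**4)           # ~0.5177: crosses 1/2 on the 4th roll
print(1 / p)              # 6.0: the mean number of rolls
```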
 
  • #17
techmologist said:
Yes, but how could you know beforehand that the median and mean, two different statistics, would be the same? Consider the problem of how many rolls it takes to roll a five, say, on a single die. The break-even point is between 3 and 4 rolls. 3 rolls gives you P<0.5 for success, while 4 rolls gives you P>0.5 for success. But the mean number of rolls is 6.

I got a calculated break-even point from x·ln(0.75) = ln(0.5); x = -0.693/-0.288 = 2.40625 trials = 4.8125 rolls. This is the mean of a fitted pdf, but the discrete distribution mean is 6 rolls, with the trials consisting of non-overlapping sets of two rolls.

Why is the mean number of rolls (to get a given face) for a die 6?
 
  • #18
SW VandeCarr said:
I got a calculated break-even point from x·ln(0.75) = ln(0.5); x = -0.693/-0.288 = 2.40625 trials = 4.8125 rolls. This is the mean of a fitted pdf, but the discrete distribution mean is 6 rolls, with the trials consisting of non-overlapping sets of two rolls.

I don't think your method of using non-overlapping sets of two rolls is valid for this problem. You have to allow for the possibility that the first run of two heads occurs on tosses 2 and 3, for instance, and not one of {1,2}, {3,4}, or {5,6}. I think it is an accident your method gives 6 as the median, while the correct answer for the mean is also 6. You might try your method for three heads in a row and see what answer it gives. The correct mean for that is 14 tosses.

Also, consider the simpler problem of how many tosses it takes to get one head in a row. Obviously one toss of the coin gives a chance of 1/2 of getting a head immediately, so the median is 1. But the mean number of tosses to get one head is 2 (see below).

Why is the mean number of rolls (to get a given face) for a die 6?

When you have a Bernoulli process in which the probability of success on each trial is p, the expected waiting time T until the first success is 1/p. The random variable T has a geometric distribution. For a coin, p = 1/2 and the mean waiting time is 2. For a die, p = 1/6, so the mean waiting time is 6. To show this, let q = 1-p be the probability of failure on each trial:

E[T] = p(1 + 2q + 3q^2 + ... + nq^(n-1) + ...) = p/(1-q)^2 = p/p^2 = 1/p

If we go back to your method of considering non-overlapping pairs of coin tosses, the probability of success there is p = 1/4, as you have pointed out. So the mean number of trials needed to get a success is 4 trials, for a total of 8 tosses. The answer of the OP's problem is smaller because your method leaves out possible ways of getting success.
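As a cross-check, the standard recurrence m_k = 2·m_(k-1) + 2 (not derived in this thread; it comes from first reaching k-1 heads in m_(k-1) expected tosses, then tossing once more, which either completes the run or forces a restart) reproduces these means in a few lines of Python:

```python
def mean_tosses_for_run(k):
    """Expected fair-coin tosses until the first run of k heads,
    via the recurrence m_k = 2*m_(k-1) + 2 with m_0 = 0."""
    m = 0
    for _ in range(k):
        m = 2 * m + 2
    return m

print([mean_tosses_for_run(k) for k in (1, 2, 3)])   # [2, 6, 14]
```

This matches the values discussed above: 2 tosses for one head, 6 for two heads in a row, and 14 for three in a row.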
 
  • #19
techmologist said:
back to your method of considering non-overlapping pairs of coin tosses, the probability of success there is p = 1/4, as you have pointed out. So the mean number of trials needed to get a success is 4 trials, for a total of 8 tosses. The answer of the OP's problem is smaller because your method leaves out possible ways of getting success.

That doesn't make sense. If my method leaves out ways of getting success, it should get a result larger than 8, not smaller than 8.

I do think that overlapping sets should be considered. So consider 12, 23, 34. That's three sets, the same number of tosses and sets as before.

I don't understand your method of calculation. You're thinking in terms of waiting times and I'm thinking in terms of a series of random trials. You don't add probabilities in this situation. You subtract the product of the probabilities of failure from one to get the probability of success. P(S)=1-P(F)^k where k is the number of trials with a constant probability of failure.
 
  • #20
If you use the mean r(p/1-p), where r = 1st success in this context (r=1), the break-even point and the mean number of trials are the same for two heads in a row (3). For three consecutive heads, the mean number of trials is 0.875/0.125 = 7 trials. However, the break-even point P(S) > 0.5 is 6 trials. That's the point where half the players will succeed. This is more intuitive to me, and I think to most people. The negative binomial distribution has an infinite tail, and the mean will be affected by outliers to the point where it's less meaningful.

BTW, when a trial consists of three tosses, the number of tosses to break even is 18 and the mean is 21.
 
  • #21
SW VandeCarr said:
That doesn't make sense. If my method leaves out ways of getting success, it should get a result larger than 8, not smaller than 8.

I'm not sure what you are saying here. If your method (as I understand it) leaves out possible ways of success, that means it should result in an answer that is too large. We are looking for the number of tosses it takes to get the first success (two heads in a row).

I do think that overlapping sets should be considered. So consider 12, 23, 34. That's three sets, the same number of tosses and sets as before.

I may have misunderstood your calculation. It seemed like you were considering non-overlapping (and therefore independent) blocks of two tosses each so that their probabilities of failure could be multiplied together. That resulted in calculation of the break even point being between 2 and 3 trials. You rounded up to 3, for a total of 6 tosses. Is that not right?

I don't understand your method of calculation. You're thinking in terms of waiting times and I'm thinking in terms of a series of random trials. You don't add probabilities in this situation. You subtract the product of the probabilities of failure from one to get the probability of success. P(S)=1-P(F)^k where k is the number of trials with a constant probability of failure.


I am thinking in terms of waiting times (stopping time) because that is what the thread is about. From Post #1:

cappadonza said:
suppose we keep flipping an unbiased coin every second, how long do you have to wait on average before you get two heads in a row.

Your method of multiplying failure probabilities together cannot be applied to trials that overlap, because they are not independent. The event "heads on tosses 2 and 3" is not independent of the event "heads on tosses 3 and 4", so you can't just multiply the probabilities together. That's why I assumed you were only considering non-overlapping trials.

SW VandeCarr said:
If you use the mean r(p/1-p), where r = 1st success in this context, the break-even point and the mean number of trials are the same for two heads in a row (3).

Where did that formula for the mean, p/(1-p), come from? Did you intend 1/p?

However, the break-even point P(S) > 0.5 is 6 trials. That's the point where half the players will succeed. This is more intuitive to me, and I think to most people. The distribution has an infinite tail, and the mean will be affected by outliers to the point where it's less meaningful.

I actually agree with you that the break even point is the more interesting and intuitive parameter. It is also more readily applied to gambling situations. But the original poster specifically asked for the expected value (mean, average), not the break even point. Since the mean actually exists (is finite), why not give the answer asked for?
 
  • #22
techmologist said:
I'm not sure what you are saying here. If your method (as I understand it) leaves out possible ways of success, that means it should result in an answer that is too large. We are looking for the number of tosses it takes to get the first success (two heads in a row).

Are you playing games here? Go back and read what I quoted. You're talking about (4 trials, 8 tosses) and my result was 3 trials, 6 tosses.
I may have misunderstood your calculation. It seemed like you were considering non-overlapping (and therefore independent) blocks of two tosses each so that their probabilities of failure could be multiplied together. That resulted in calculation of the break even point being between 2 and 3 trials. You rounded up to 3, for a total of 6 tosses. Is that not right?

Every toss is independent and every trial is independent: 123, 234, 345, 456, 567, 678, 789. I showed it doesn't matter how you number them. The same probability applies at any point in the sequence. There's no preferred point where a run of 3 need begin.

Yes, I rounded up to three for the break even point.

I am thinking in terms of waiting times (stopping time) because that is what the thread is about. From Post #1:

It is only because the OP introduced a one-flip-per-second add-on. It's a sequence of random trials which follow a negative binomial distribution to the first success.


Your method of multiplying failure probabilities together cannot be applied to trials that overlap, because they are not independent. The event "heads on tosses 2 and 3" is not independent of the event "heads on tosses 3 and 4". So you can't just multiply the probabilities together. That's why I assumed you were only considering non-overlapping trials.

See my response above. I'm calculating P(S) up to any point in the sequence. You need at least two tosses (one trial) to get a success of 2 heads in a row. In fact, the trials can start anywhere. You can have up to 7 successful "runs of three heads" trials in 21 tosses. They don't overlap. To talk about overlapping trials is meaningless except to document where the run occurred. Talking about non-overlapping and overlapping successful trial possibilities is just a way to think about the problem post hoc.

r(p/1-p) is the mean of the negative binomial distribution.

No need to respond to me. I'm finished here.
 

FAQ: Average Expected Stopping Time for Two Consecutive Heads Flips

What is the meaning of "Average Expected Stopping Time"?

The Average Expected Stopping Time refers to the number of coin flips expected to occur before a specific outcome is reached. In this case, it is the number of flips expected before two consecutive heads appear.

How is the Average Expected Stopping Time calculated?

The Average Expected Stopping Time is calculated by taking the sum of the products of the number of flips and the probability of that number of flips occurring, for all possible outcomes. This is also known as the mean or expected value.

Why is the Average Expected Stopping Time important?

The Average Expected Stopping Time is important because it can give insight into the probability of a certain outcome occurring. It can also be used to make decisions in situations where multiple outcomes are possible.

What factors can influence the Average Expected Stopping Time?

The main factor that can influence the Average Expected Stopping Time for two consecutive heads flips is the probability of getting a heads on each flip. Other factors may include the number of flips allowed and any external factors that may affect the coin flips, such as wind or temperature.

How can the Average Expected Stopping Time be applied in real-life situations?

The Average Expected Stopping Time can be applied in various real-life situations, such as in gambling or decision-making processes. It can also be used in statistical analysis and simulations to predict the likelihood of certain outcomes. Additionally, it can be used in game theory to determine optimal strategies.
