Coin toss with changing probability

In summary: the process can be modeled as a Markov chain; the stationary distribution of its transition matrix (worked out below) gives the exact long-run probability of heads, about 0.704, in agreement with the simulated value of roughly 0.703.
  • #1
jk22
Hello, I'm trying to solve the following problem: a coin is tossed with probability 1/2 of giving heads, but if it lands heads the probability of heads rises to 0.9. If it gives tails, or if the number of heads in a row is greater than or equal to 5, the probability falls back to 0.5.

I wrote code that runs 100,000,000 trials and found that the probability of heads
C:
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

/* Uniform pseudo-random number in [0,1).
   Assumes RAND_MAX >= 2^24 (true with glibc, where RAND_MAX = 2^31 - 1). */
double rnd()
{
    int maxv = 1 << 24;
    int val = rand() % maxv;
    return (double)val / (double)maxv;
}

int main(void)
{
    int iter = 100000000;   /* number of tosses */
    int numpile = 0;        /* total number of heads */
    double p = .5;          /* current probability of heads */
    int pileserie = 0;      /* length of the current run of heads */

    srand(time(0));

    for (int i = 0; i < iter; i++)
    {
        if (rnd() < p)            /* toss gives heads */
        {
            numpile++;
            pileserie++;
            if (pileserie >= 5)   /* 5 or more heads in a row: back to a fair coin */
                p = .5;
            else                  /* fewer than 5 heads in a row: heads favoured */
                p = .9;
        }
        else                      /* toss gives tails: reset the run and the coin */
        {
            pileserie = 0;
            p = .5;
        }
    }

    printf("%lf\n", (double)numpile / (double)iter);
    return 0;
}
was near 0.703.

I would like to calculate this probability exactly. I started with an empirical equation $$p(h)=\underbrace{(0.9-(0.9-0.5)e^{-5a})}_{p_1-(p_1-p_0)e^{-an}}\,p(h)+p_0\,p(t)$$

With $$p(h)+p(t)=1.$$
I don't understand how to find an exact formula for the probabilities from the problem statement, since the parameter ##a## above would have to be fitted to the numerical results.
 
Last edited:
  • Like
Likes lurflurf
  • #2
You are dealing with a discrete-time ergodic (irreducible) Markov chain with 5 states and an associated transition matrix. Suppose the states are labeled I = initial state, h1 = 1 head, ..., h4 = 4 heads. You can make a transition matrix of the probabilities of transitioning from one state to another and study the results. Let T designate the transition matrix. You can find the "stationary distribution", d, of T such that dT = d (see http://www.mast.queensu.ca/~stat455/lecturenotes/set3.pdf ). That is what you want. For an example of solving for the stationary distribution using MATLAB, see http://stackoverflow.com/questions/...ov-chain-stationary-distribution-solving-eqns
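If you would rather experiment in C (as in post #1) than in MATLAB, a rough way to approximate d numerically is power iteration: start from any distribution and repeatedly multiply it by T until it stops changing. This is only a sketch; the 2-state example matrix, the state count N, and the fixed iteration count are placeholders to be swapped for the transition matrix of the actual problem.
C:
#include <stdio.h>
#include <string.h>

#define N 2  /* number of states; replace with the size of your chain */

/* One step of d <- d*T, where T is row-stochastic (row i = probabilities out of state i). */
void step(double T[N][N], double d[N])
{
    double next[N] = {0};
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            next[j] += d[i] * T[i][j];
    memcpy(d, next, sizeof next);
}

int main(void)
{
    /* Illustrative 2-state chain, NOT the one from this thread. */
    double T[N][N] = { {0.9, 0.1},
                       {0.5, 0.5} };
    double d[N] = {1.0, 0.0};   /* any starting distribution works for an ergodic chain */

    for (int k = 0; k < 10000; k++)   /* iterate until d*T is (numerically) equal to d */
        step(T, d);

    for (int j = 0; j < N; j++)
        printf("state %d: %f\n", j, d[j]);
    return 0;
}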
 
  • #3
I just read a bit about Markov chains, but I don't understand two things:

1) Isn't this a two-state chain (heads and tails)?
2) Is this a process with memory? I read on Wikipedia that Markov processes are supposed to be memoryless.
 
  • #4
jk22 said:
I just read a bit about Markov chains, but I don't understand two things:

1) Isn't this a two-state chain (heads and tails)?
2) Is this a process with memory? I read on Wikipedia that Markov processes are supposed to be memoryless.
In this problem, you can embed the memory into different states. One state, h0, can be the initial state with equal H & T probabilities. A second state, h1, can be after the first head. A third state, h2, can be after two heads in a row; h3 after 3 heads in a row. The last state, h4, is after 4 heads in a row. If the system is in state hn (n = 0..3) and a head is tossed, it transitions to state hn+1. If the system is in state h4 and a head is tossed, it transitions to state h0. In any state, if a tail is tossed, it transitions to state h0. These states include all the past history that is needed to determine the transition probabilities. So there is no memory needed other than what is in the definition of the state. It is a Markov process.

In each state, it is easy to specify the probabilities of transitioning to the other states. That will give you the transition matrix.
 
  • #5
So is it: $$\begin{array}{cccccc} .5 & .5 & 0 & 0 & 0 & 0\\ .1 & 0 & .9 & 0 & 0 & 0\\ .1 & 0 & 0 & .9 & 0 & 0\\ .1 & 0 & 0 & 0 & .9 & 0\\ .1 & 0 & 0 & 0 & 0 & .9\\ .5 & 0 & 0 & 0 & 0 & .5\end{array}$$ for the column vector (t, h1, h2, h3, h4, h5)?

Or is the last line .5 .5 0 0 0 0 ?

Then the invariant density would be the eigenvector for eigenvalue 1, and ##p(h)=p(h_1)+p(h_2)+p(h_3)+p(h_4)+p(h_5)##.
 
  • Like
Likes lurflurf
  • #6
jk22 said:
So is it: $$\begin{array}{cccccc} .5 & .5 & 0 & 0 & 0 & 0\\ .1 & 0 & .9 & 0 & 0 & 0\\ .1 & 0 & 0 & .9 & 0 & 0\\ .1 & 0 & 0 & 0 & .9 & 0\\ .1 & 0 & 0 & 0 & 0 & .9\\ .5 & 0 & 0 & 0 & 0 & .5\end{array}$$ for the column vector (t, h1, h2, h3, h4, h5)?
That is close to what I was thinking, but I am afraid that I was a little sloppy in my thinking. What you have might be better. You need to be able to get the probability that the last toss was heads by totaling some state probabilities. So I think that your addition of the last row for 5 heads (or more) in a row might be needed. That is a state that I had not listed.
What happens after a 5th head, when the probability returns to 0.5? This transition matrix leaves the chain in the same state, so the next head will not raise the probability to 0.9. Whether that is right depends on the problem statement: after 5 heads, should a sixth head have probability 0.9 or 0.5?
Or is the last line .5 .5 0 0 0 0 ?
So this would make a 5th head go to state h1, as though there were just 1 head in a row. That depends on the problem statement. Should 6 heads in a row be treated exactly like 1 head in a row or like 4 heads in a row?
 
  • #7
After the fifth head the probabilities are .5 for head or tail, then for the sixth head again .5 for head or tail, and so on: it stays at .5 as soon as you reach 5 heads in a row.

So it's not the same as coming back to h1, so I suppose my first table was right.
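For reference, here is a sketch of the calculation this leads to, reading the first table as "row ##i## = transition probabilities out of state ##i##" and writing out the condition ##dT = d## from post #2 component by component, with ##d = (d_t, d_{h1}, \dots, d_{h5})##:

$$d_{h1} = 0.5\,d_t,\quad d_{h2} = 0.9\,d_{h1},\quad d_{h3} = 0.9\,d_{h2},\quad d_{h4} = 0.9\,d_{h3},\quad d_{h5} = 0.9\,d_{h4} + 0.5\,d_{h5}.$$

Taking ##d_t = 1## as the unnormalized weight of the tail state gives ##d_{h1} = 0.5##, ##d_{h2} = 0.45##, ##d_{h3} = 0.405##, ##d_{h4} = 0.3645##, ##d_{h5} = 1.8 \cdot 0.3645 = 0.6561##, so

$$p(h) = \frac{0.5 + 0.45 + 0.405 + 0.3645 + 0.6561}{1 + 0.5 + 0.45 + 0.405 + 0.3645 + 0.6561} = \frac{2.3756}{3.3756} \approx 0.7038,$$

in line with the simulated value of about 0.703.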
 
  • Like
Likes FactChecker
  • #8
One more question: is there a theorem showing that the eigenvalue 1 exists and that the elements of its eigenvector all have the same sign?
 
  • #9
Yes. In the reference above ( http://www.mast.queensu.ca/~stat455/lecturenotes/set3.pdf ) at the bottom of page 121 it says "We will show that for an irreducible Markov chain, a stationary distribution exists if and only if all states are positive recurrent, and in this case the stationary distribution is unique."
 
  • Like
Likes jk22
  • #10
So we can model processes with this kind of memory with a Markov chain. But could we model the following:

A fair coin is tossed, but:

- If the two previous tosses were the same: the probability of that outcome is .9
- Else: the probability is .5?

I think it's not possible, since the probability is path dependent.
 
  • #11
jk22 said:
So we can model processes with this kind of memory with a Markov chain. But could we model the following:

A fair coin is tossed, but:

- If the two previous tosses were the same: the probability of that outcome is .9
I assume you mean the probability of a third toss of the same type, and not of two more of the same.
- Else: the probability is .5?
Probability of what? The same type as the prior?
I think it's not possible, since the probability is path dependent.
The "path" information seems easy to define as a state. Only the prior two tosses are significant, so there are not many states.

Although I have not analyzed them mathematically as Markov processes, there are state transition diagrams that are huge compared to these examples. They can have hundreds of states that encapsulate a complicated series of historical events.
 
  • #12
FactChecker said:
I assume you mean the probability of a third toss of the same type, and not of two more of the same. Probability of what? The same type as the prior?

I mean something like

H1->H2 .5
H1->T2 .5
H2->H3 .9
H2->T3 .1
T1->H2 .5
T1->T2 .5
...

But in fact it does not work, since it depends on the series: for example, in T1->H2->T3 every step has probability .5, but in H1->H2->T3 the steps have probabilities .5 and .1.

We could also imagine a random generator giving H or T where, as soon as one outcome occurs, it repeats with probability .9. Then the long-term probability would be 1/2. However, the average length of a run of identical results would be 10, compared to 2 when the probability is .5.
 
Last edited:
  • #13
You can use states that are defined by the prior two results: SHH, SHT, STH, STT. With those states you can make a state transition matrix of the probabilities of transitioning between the states on the next coin toss. It is a Markov chain.
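For instance, under one reading of the rule in post #10 (if the last two tosses were the same, the next toss repeats that value with probability .9, otherwise the coin is fair), and labelling each state by the last two outcomes in the order HH, HT, TH, TT, the transition matrix (rows = from, columns = to) would be $$\begin{array}{cccc} .9 & .1 & 0 & 0\\ 0 & 0 & .5 & .5\\ .5 & .5 & 0 & 0\\ 0 & 0 & .1 & .9\end{array}$$ By the symmetry between heads and tails, its stationary distribution gives ##p(H)=1/2##, even though runs of identical outcomes are longer than for a fair coin.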
 
  • #14
Here's an alternative. Consider each sequence as a game that ends with a tail.

A fraction 0.5 of games have one play: a single tail (T).

A fraction 0.5(0.1) of games have two plays: HT.

A fraction 0.5(0.9)(0.1) of games have three plays: HHT.

Etc.

Note that after 5 heads the game becomes 50-50 again.

This gives a sequence of probabilities for the length of a game, where each game has exactly one tail and a variable number of heads.

From this you can calculate the expected number of tails per game (which is 1) and the expected number of heads per game, which I get to be about 2.38.

This gives a probability of just over 0.70 that any particular toss is a head.
 
  • Like
Likes jk22 and FactChecker
  • #15
To generalise: If the temporary probability of a head is ##a## then the expected number of heads per game is:

##E(H) = \frac12 [\frac{1-a^5}{1-a} + a^4]##

And the probability of a head is

##p(H) = \frac{E(H)}{E(H)+1}##

With ##a=0.9## this gives ##p(H) = 0.704##
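Spelling out the arithmetic with ##a=0.9##:

$$E(H) = \frac12\left[\frac{1-0.9^5}{1-0.9} + 0.9^4\right] = \frac12\left[4.0951 + 0.6561\right] = 2.3756,$$

$$p(H) = \frac{2.3756}{2.3756+1} \approx 0.7038,$$

which matches both the simulation in post #1 (about 0.703) and the stationary distribution of the transition matrix from post #5.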
 
  • #16
If one considers that the probability of remaining the same were ##a## for both heads and tails, then the run lengths would increase, but in the long run both probabilities would be 1/2?
 
  • #17
jk22 said:
If one considers that the probability of remaining the same were ##a## for both heads and tails, then the run lengths would increase, but in the long run both probabilities would be 1/2?

If the game is symmetric in heads and tails, then the probability of a head must be 1/2 in the long run.

A harder problem would be: after ##1## to ##n## heads in a row, the probability of a head is ##a##, and after ##1## to ##m## tails in a row the probability of a tail is ##b##.

In your original problem ##n =4, a=0.9, m=1, b = 0.5##.
 

