Can a Mathematician Prove a Coin is Biased?

In summary: If by "figure" you mean "prove with absolute certainty," then you are right; you can never prove anything empirically with absolute certainty. The more difficult case is a coin that is only slightly biased. For any specified degree of bias there is a number of tosses that will reveal it with a specified probability of being correct. You want this formula; a statistician can derive it for you.
  • #1
musicgold
304
19
Hi,

I am struggling with the following question:
How many tosses does it take for a mathematician to figure that a coin is biased?

I think that it is impossible. Here is my logic. Assume that there is a biased coin, which always comes up heads, and that the mathematician is testing the coin. She has tossed it 1000 times and got heads every time. She still can't conclude that the coin is biased, because this outcome is possible (though with a very low probability) with a fair coin.

Is my logic correct?

Thanks.
 
  • #2
Depends. What degree of certainty do you want? What probability of a false positive would you be willing to accept? If you want absolute metaphysical certitude, then probability is not the branch of mathematics for you.
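To make that trade-off concrete, here is a minimal sketch (Python; the function name and the always-heads setup are my illustrations, not from the thread) of how many straight heads it takes before a fair coin could produce the run only with probability below a chosen false-positive rate:

```python
import math

def tosses_for_all_heads_test(alpha: float) -> int:
    """Smallest n such that a fair coin produces n heads in a row
    with probability at most alpha (the accepted false-positive rate)."""
    # P(n heads in a row | fair) = 0.5**n <= alpha  =>  n >= log2(1/alpha)
    return math.ceil(math.log2(1.0 / alpha))

# Demanding a one-in-a-million false-positive rate takes only 20 straight heads;
# a 5% rate takes just 5.
print(tosses_for_all_heads_test(1e-6))  # 20
print(tosses_for_all_heads_test(0.05))  # 5
```

Note that alpha never reaches zero: absolute certainty would require infinitely many tosses.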
 
  • #3
musicgold said:
She still can't conclude that the coin is biased, because this outcome is possible (though with a very low probability) with a fair coin.

She could form an opinion, but she couldn't offer any mathematical proof that her opinion was correct.

A statistician could do a "hypothesis test" and "reject" the hypothesis that the coin was fair, but this is simply a procedure, not a mathematical proof.
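A sketch of that procedure (Python, stdlib only; the 80-toss numbers are made up for illustration): the one-sided p-value is the binomial tail probability computed under the fair-coin null hypothesis.

```python
from math import comb

def p_value_heads(n: int, k: int) -> float:
    """One-sided p-value: probability of k or more heads in n tosses
    of a fair coin (the null hypothesis)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 60 heads in 80 tosses: "reject" the fairness hypothesis at the 5% level.
p = p_value_heads(80, 60)
print(p < 0.05)  # True
```

Rejecting the hypothesis is a decision rule, not a proof: the same data could, with small probability, have come from a fair coin.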

If she knew or assumed more information about the coin, she might be able to make a statement about the probability that the coin was fair.
 
  • #4
Stephen Tashi said:
If she knew or assumed more information about the coin, she might be able to make a statement about the probability that the coin was fair.

Can you give an example of this?
 
  • #5
musicgold said:
Can you give an example of this?

For example, assume the coin is drawn from a population of coins and that the distribution of the parameter p that gives the probability of a coin landing heads is uniformly distributed over the interval [0,1].

Or as another example, assume there were two coins, one with a probability of 0.99 of landing heads and another with a probability of 0.5 of landing heads. Assume the coin used in the experiment was picked at random from these two coins.

Those types of assumptions permit drawing conclusions about whether the coin used in the experiment was a fair coin. It is in the nature of probability theory that if you wish to say something about the probability that the coin is fair given the experimental data, you must set up a scenario in which there is something probabilistic about whether the coin is fair.

This general approach is called a Bayesian approach and the assumption (or knowledge) about how the coin is probabilistically selected is called a "prior distribution". People criticize this approach for being "subjective", but nothing can be said about the probability that the coin is fair given the experimental data unless "prior" information is given. Without prior information, asking the probability that the coin is fair is like asking to find the sides and angles of a triangle given one side and one angle. There simply isn't enough given information to do it.
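The two-coin example above can be worked out directly with Bayes' theorem; a minimal sketch (Python; the function and parameter names are my own):

```python
def posterior_fair(n_heads: int, n_tails: int,
                   p_biased: float = 0.99, prior_fair: float = 0.5) -> float:
    """Posterior probability that the tossed coin is the fair one, under the
    two-coin prior from the post: one coin with P(heads) = p_biased, one fair,
    each picked with the stated prior probability."""
    like_fair = 0.5 ** (n_heads + n_tails) * prior_fair
    like_biased = p_biased ** n_heads * (1 - p_biased) ** n_tails * (1 - prior_fair)
    return like_fair / (like_fair + like_biased)

# Ten heads in a row: the 0.99-coin becomes overwhelmingly more probable.
print(posterior_fair(10, 0))  # about 0.001
```

With no data the posterior equals the prior (0.5); the experimental data only updates whatever prior was assumed, which is exactly why a prior is indispensable here.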
 
  • #6
musicgold said:
Hi,

I am struggling with the following question:
How many tosses does it take for a mathematician to figure that a coin is biased?

I think that it is impossible. Here is my logic. Assume that there is a biased coin, which always comes up heads, and that the mathematician is testing the coin. She has tossed it 1000 times and got heads every time. She still can't conclude that the coin is biased, because this outcome is possible (though with a very low probability) with a fair coin.

Is my logic correct?

Thanks.

If by "figure" you mean "prove with absolute certainty" then you are right. You can never prove anything empirically with absolute certainty.
 
  • #7
It's quite easy to see this in the extreme case.

Suppose you had a coin that came up heads every time. After 32 tosses, you'd know the coin was 100% biased, with the probability of being incorrect around 1 in 4 billion (a fair coin gives 32 straight heads with probability 2^-32, roughly 1 in 4.3 billion).

The more difficult case is when the coin is only slightly biased. For any specified degree of bias there is a number of tosses that will reveal it with a specified probability of being correct.

You want this formula. I don't know it but I know that a statistician can derive it for you.
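One standard version of that formula uses the normal approximation to the binomial; a rough sketch (Python; the hard-coded z-quantiles and the function name are my additions, and the formula is the textbook one-sided sample-size approximation, not something derived in this thread):

```python
from math import sqrt, ceil

# Standard normal upper-tail quantiles for two common levels (hard-coded
# so the sketch stays stdlib-only).
Z = {0.05: 1.6449, 0.01: 2.3263}

def tosses_needed(delta: float, alpha: float = 0.05, beta: float = 0.05) -> int:
    """Approximate number of tosses to detect a bias of p = 0.5 + delta with
    false-positive rate alpha and false-negative rate beta (one-sided test,
    normal approximation to the binomial)."""
    z_a, z_b = Z[alpha], Z[beta]
    p = 0.5 + delta
    n = ((z_a * 0.5 + z_b * sqrt(p * (1 - p))) / delta) ** 2
    return ceil(n)

print(tosses_needed(0.01))  # tens of thousands of tosses for a 1% bias
print(tosses_needed(0.10))  # a few hundred for a 10% bias
```

The 1/delta^2 dependence is the key point: halving the bias you want to detect roughly quadruples the number of tosses required.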
 
  • #8
Antiphon said:
Suppose you had a coin that came up heads every time. After 32 tosses, you'd know the coin was 100% biased, with the probability of being incorrect around 1 in 4 billion (a fair coin gives 32 straight heads with probability 2^-32, roughly 1 in 4.3 billion).

That's not correct (with a probability of 1.0). I think you're making a common misinterpretation of a "confidence interval". However, if we assume some prior distribution on the fairness of the coin, we can probably get that conclusion by using a Bayesian "credible interval".
 
  • #9
Stephen Tashi said:
That's not correct (with a probability of 1.0). I think you're making a common misinterpretation of a "confidence interval". However, if we assume some prior distribution on the fairness of the coin, we can probably get that conclusion by using a Bayesian "credible interval".

I agree; 1 doesn't equal 1 - 1/4,294,967,296. The latter does, however, equal 1.0 to the two significant figures you gave.

I'm not referring to confidence intervals. But if I were, what would be the misunderstanding?

Since you seem to know some statistics, why don't you derive the answer?
 
  • #10
Antiphon said:
I agree; 1 doesn't equal 1 - 1/4,294,967,296. The latter does equal 1.0 however if you understand how to use the two significant figures you gave.
Were you using one of the prior distributions that I proposed? If so, I apologize. I didn't realize that.

I'm not referring to confidence intervals. But if I were, what would be the misunderstanding?

Without assuming any prior distribution on the probability of success, it is possible to calculate a result of this form:

Let p(h) be the unknown probability that the coin lands heads.
Let N be the number of tosses
Let f be the observed fraction of tosses that are heads
Let epsilon > 0 be a given number.

From that and the assumption of independent tosses, it is possible to calculate:

P =the probability that f is within plus or minus epsilon of p(h)

A common misunderstanding of this result is to take the fraction of heads f0 that is observed in one particular group of N tosses (such as 32/32 = 1) and assert that P is the probability that f0 is within plus or minus epsilon of p(h). The value P refers to a statistical property of the distribution of the random variable f, not to a property of one particular value it may take.
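That distinction shows up clearly in simulation: P describes how often the random f lands near p(h) across many repeated experiments, not anything about a single observed f0. A sketch (Python; the parameter values are illustrative choices of mine):

```python
import random

random.seed(0)

def coverage(p_heads: float, n: int, eps: float, trials: int = 10000) -> float:
    """Fraction of repeated n-toss experiments in which the observed
    frequency f falls within eps of the true p_heads.  This estimates P,
    a property of the distribution of f, not of any one observed value."""
    hits = 0
    for _ in range(trials):
        f = sum(random.random() < p_heads for _ in range(n)) / n
        hits += abs(f - p_heads) <= eps
    return hits / trials

# For a fair coin, 100 tosses, eps = 0.1: f lands within 0.1 of 0.5
# in roughly 96% of experiments.
print(coverage(0.5, 100, 0.1))
```

Any single experiment's f0 either is or isn't within eps of p(h); the 96% figure belongs to the ensemble of experiments, which is exactly the misunderstanding the post describes.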
 
  • #11
...and another common misinterpretation in statistics is to misinterpret a "p-value" computed on the basis of assuming a "null hypothesis".

If we assume the following:
The coin is fair.
Let the total number of tosses be N.
Let N0 be a given number of heads.

On the assumption of independent tosses, it is possible to compute a result of the form

P = the probability that the observed number of heads is equal to or greater than N0.

A common misinterpretation of P is that (1.0 - P) is the probability that the coin is not fair by being biased toward heads. However, P is computed on the basis of assuming (with certainty) that the coin is fair. So no valid calculation based on that assumption can produce information about the coin not being fair.
 
  • #12
Stephen Tashi said:
...and another common misinterpretation in statistics is to misinterpret a "p-value" computed on the basis of assuming a "null hypothesis".

If we assume the following:
The coin is fair.
Let the total number of tosses be N.
Let N0 be a given number of heads.

On the assumption of independent tosses, it is possible to compute a result of the form

P = the probability that the observed number of heads is equal to or greater than N0.

A common misinterpretation of P is that (1.0 - P) is the probability that the coin is not fair by being biased toward heads. However, P is computed on the basis of assuming (with certainty) that the coin is fair. So no valid calculation based on that assumption can produce information about the coin not being fair.


I'm not sure if that's completely accurate in this case. The null is that the coin is fair, so the p value would be the probability of falsely rejecting that the coin is fair. I would consider that information about the coin not being fair.
 
  • #13
Stata said:
The null is that the coin is fair, so the p value would be the probability of falsely rejecting that the coin is fair.

I agree.

I would consider that information about the coin not being fair.

It's information about the performance of the test when it tests a fair coin. It isn't information about a particular coin.

You can't calculate a probability (different than 1.0) that the coin is fair by beginning with the assumption that the coin is fair. You need a prior distribution for the fairness of the coin in order to do that.
 
  • #14
You can't settle whether the coin is biased just by looking at the heads : tails ratio over M tosses: the ratio is what the experiment produces, and any deviation from the expected 50:50 split could still have arisen by chance. To find out whether the coin really is biased, you can also examine it physically, independently of the toss results: its mass distribution and other physical properties. If the coin is biased, the cause may be directly observable. For example, if one face is coated with a thick layer of metallic paint while the other has only a light coat, the coin will obviously favor the heavier side. A mathematician working only from heads/tails ratios would never uncover that physical explanation; the ratios alone won't reveal it.
 
  • #15
One thing that should be pointed out is that measuring one attribute of a process does not mean you have measured the entire process, and a lot of people make this mistake.

The other thing that has been pointed out concerns Type I and Type II errors: as long as your test has a positive size and an appropriate power, then even if you disregard the above, you still have these errors to deal with statistically and probabilistically.
 
  • #16
A useful thought experiment is to imagine applying your fairness test to quadrillions of coins. You'd be sure to reject some wrongly. Besides, all real coins are unfair; it's just a question of how unfair.
 

FAQ: Can a Mathematician Prove a Coin is Biased?

1. Can a mathematician prove that a coin is biased?

Not with absolute certainty. A mathematician can use probability theory and statistical analysis to gather strong evidence that a coin is biased. This involves conducting experiments, gathering data, and calculating how probable the observed outcomes would be for a fair coin; any conclusion drawn this way carries some probability of error.

2. How can a mathematician prove the bias of a coin?

A mathematician can prove the bias of a coin by comparing the observed outcomes of multiple coin tosses to the expected outcomes based on the assumption of a fair coin. If there is a significant difference between the two, it can indicate bias.

3. Is it possible for a coin to appear biased even though it is fair?

Yes, it is possible for a coin to appear biased due to chance. For example, if you toss a coin 10 times and get 8 heads, it may seem like the coin is biased towards heads. However, this can happen with a fair coin as well and does not necessarily indicate bias.
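The 8-heads-in-10 figure can be checked exactly (Python, stdlib only):

```python
from math import comb

# Probability that a fair coin shows 8 or more heads in 10 tosses.
p = sum(comb(10, k) for k in range(8, 11)) / 2 ** 10
print(p)  # 56/1024 = 0.0546875
```

About a 5.5% chance per 10-toss run, so such streaks from fair coins are entirely ordinary.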

4. Can a biased coin always be identified by a mathematician?

No, there is always a chance that a biased coin may go undetected. This is because probability and statistics can only provide a probability or confidence level of the coin being biased, and not a definitive answer.

5. How can I know if a coin is biased without using math?

Strictly speaking, you can't avoid math entirely, since any reliable check involves at least counting. You can toss the coin a large number of times, record the outcomes, and look for a clear departure from the roughly even split a fair coin should produce, or examine the coin physically for uneven weighting. Neither informal method is as reliable as a proper statistical analysis of the data.
