The Sum of Geometric Series from Probability Theory
Here I present a simple (but to the best of my knowledge, new) derivation of the formula for the sum of the infinite geometric series. The derivation is based on basic probability theory.
Suppose that you play a game (e.g. lottery or roulette) for which the probability of winning is ##p\neq 0##. And suppose that you decide to repeat the game until you win, after which you stop playing. The probability that you will win on the 1st try is ##p_1=p##. The probability that you will win on the 2nd try is the probability that you will first lose once and then win, so ##p_2=(1-p)p##. Similarly, the probability that you will win on the 3rd try is the probability that you will first lose twice and then win, so ##p_3=(1-p)^2p##. Hence the probability that you will eventually win is
$$P=p_1+p_2+p_3+\cdots = p + (1-p)p+(1-p)^2p + \cdots$$
which can be written as
$$P=p\sum_{k=0}^{\infty}(1-p)^k = (1-q)\sum_{k=0}^{\infty}q^k $$
where ##q\equiv 1-p##. But it is certain that sooner or later you will win, that is, from the theory of probability it is clear that ##P=1##. Hence
$$\sum_{k=0}^{\infty}q^k = \frac{1}{1-q}$$
This result is nothing but the formula for the sum of the geometric series, which I derived here from the theory of probability.
Note that the formula is not valid for ##q>1##, which has an interpretation in probability theory. The quantity ##q=1-p## is the probability of not winning the game, and it does not make sense to have a probability ##q## larger than 1.
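As a quick sanity check (my addition, not part of the derivation above), one can truncate the series numerically for a few values of ##p## and watch the total approach 1:

```python
# Truncate the series p + (1-p)p + (1-p)^2 p + ...; for any fixed
# 0 < p < 1 the partial sums approach 1, which is exactly the
# geometric-series formula with q = 1 - p.
for p in (0.5, 0.25, 0.01):
    q = 1 - p
    total = sum(p * q ** k for k in range(10_000))
    print(p, total)   # each total is ~1.0
```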
Say what?
If the probability of winning one play is 1/4, then the probability of not winning after ##k## plays is ##(1-1/4)^k##, and ##\lim_{k\to\infty}(3/4)^k = 0##. I.e., the probability of NOT winning after infinitely many plays is 0. You will eventually win regardless of ##p##, assuming ##0<p<1##.
The fact that p = 1/4, or some other positive number less than 1, is immaterial. Good paper!
If the win-probabilities ##p_n## change with the round number ##n##, you may not be guaranteed an ultimate win. For instance, if ##p_n = 1/2^n##, the probability of losing the first ##N## games is
$$P(N) = \prod_{n=1}^N \left( 1 - \frac{1}{2^n} \right),$$ and the probability of never winning is ##P^* = \lim_{N \to \infty} P(N)##. This limit is between 0.2887880492 and 0.2887881410, so the probability of a win is ##1-P^* \doteq 0.71121##.
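A quick numerical check of that bound (my sketch, not from the comment itself):

```python
import numpy as np

# Partial product P(N) = prod_{n=1}^N (1 - 1/2^n) for the win
# probabilities p_n = 1/2^n; it converges very quickly in N.
N = 100
q = 1.0 - 0.5 ** np.arange(1, N + 1)   # factors 1 - 1/2^n, n = 1..N
P_star = np.prod(q)

print(P_star)      # ~0.2887880951, inside the quoted bounds
print(1 - P_star)  # ~0.7112119, the probability of eventually winning
```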
I think the point is that if the odds are CHANGING, then you have a nonzero chance of losing, even with infinitely many trials. For example, if the first time you play the probability of winning is 1/4, and the next time it's 1/8, and the next time it's 1/16, etc., then your probability of winning over infinitely many trials is:
$$\frac14 + \frac34\cdot\frac18 + \frac34\cdot\frac78\cdot\frac{1}{16} + \cdots$$
which is… I don't know what it is, but it adds up to something less than 1/2.
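Numerically the series appears to converge to about 0.4224, which is indeed below 1/2; here is a small sketch (my addition) summing it:

```python
# Sum the series 1/4 + 3/4*1/8 + 3/4*7/8*1/16 + ... with p_k = 1/2^(k+1).
total, survive = 0.0, 1.0   # survive = probability of losing rounds 1..k-1
for k in range(1, 60):
    p_k = 0.5 ** (k + 1)
    total += survive * p_k
    survive *= 1 - p_k

print(total)   # ~0.42242, indeed less than 1/2
```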
Not all series sum to 1. It's true that for fixed ##p## you must eventually win, but if the probability of winning on the ##k##th play reduces quickly, then you may not eventually win and the sum may be less than ##1##.
If the probability of winning one play is 1/4 then the probability of not winning after k plays is (1- 1/4)[SUP]k[/SUP] and lim k→∞ (3/4)[SUP]k[/SUP] = 0. I.e. the probability of NOT winning after ∞ plays is 0. You will eventually win regardless of p, assuming 0<p<1..
The fact that p = 1/4 or some other smaller but finite positive number < 1 is immaterial.
Good paper!
Of course, but the point is that the formula is valid if ##x_k## does not go to ##0## in the limit ##k\rightarrow\infty##. An even more important point is that it is not easy to see that without using probability theory. If one wants to put it more precisely, the formula is valid if there is ##\epsilon>0## such that ##\epsilon<x_k\leq 1## for all ##k##.
One can do it with algebra instead of probability. For finite ##n## let
$$S = x_1 + (1-x_1)x_2 + (1-x_1)(1-x_2)x_3 + \cdots + (1-x_1)(1-x_2)\cdots(1-x_n)x_{n+1}.$$
We have
$$1-S = (1-x_1)(1-x_2)\cdots(1-x_{n+1}).$$
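The identity telescopes: each new term removes a fraction ##x_k## of whatever ##1-S## remains. A quick numeric sanity check (my sketch) with random ##x_k##:

```python
import random

# Check the telescoping identity 1 - S = (1-x_1)...(1-x_{n+1})
# for random x_k in (0, 1).
random.seed(0)
x = [random.random() for _ in range(8)]   # x_1 .. x_{n+1}

S, survive = 0.0, 1.0
for xk in x:
    S += survive * xk       # add the term (1-x_1)...(1-x_{k-1}) * x_k
    survive *= 1 - xk       # survive becomes (1-x_1)...(1-x_k)

print(abs((1 - S) - survive))   # ~1e-16: the identity holds
```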
That is a slick trick, which I think would be useful for teaching someone how to derive the geometric series formula.
Should you state at the beginning that the probability ##p > 0##, rather than saying ##p\neq 0##? This goes along with the ##q < 1## constraint at the end.
If you say ##p\neq 0##, it could suggest that ##p## is less than zero, and with a negative "probability" you can no longer conclude that you will eventually win.
To put a finer point on this: you have a sequence of independent events (coin tosses) with a goal in mind that they keep occurring until the first success, and they are inhomogeneous. In slightly more general settings you can get in trouble with this line of thinking, because stochastic processes can be defective (i.e. the CDF doesn't tend to one, aka the complementary CDF doesn't tend to zero).
– – – –
For this particular problem, with probability of success on trial ##i## given by ##p_i##, the complementary CDF after trial ##n## is given by
##\prod_{i=1}^n (1-p_i) = \prod_{i=1}^n q_i##
Passing to the limit as ##n\to \infty##, we have the standard analytic fact that for ##p_i \in [0,1)##
##0 < \prod_{i=1}^\infty (1-p_i)##
iff ##\sum_{i=1}^\infty p_i < \infty##
In this case, since ##p_i## is bounded below by a positive constant, the series diverges, which implies that the complementary CDF tends to zero and thus the probability distribution sums to one.
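To illustrate that dichotomy numerically (my sketch, with two hypothetical choices of ##p_i##):

```python
# prod(1 - p_i) stays positive iff sum(p_i) converges.  Start the
# products at i = 2 so that no single factor (1 - p_i) is zero.
N = 100_000

prod_div = 1.0    # p_i = 1/i:   sum diverges  -> product tends to 0
prod_conv = 1.0   # p_i = 1/i^2: sum converges -> product stays positive
for i in range(2, N + 1):
    prod_div *= 1 - 1 / i
    prod_conv *= 1 - 1 / i ** 2

print(prod_div)    # ~1/N -> 0: eventual success is certain
print(prod_conv)   # ~(N+1)/(2N) -> 1/2: positive chance of no success
```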
– – – –
For a slightly different look: in effect you're using the standard (countable-state) renewal matrix for the age of a stochastic process
$$\left[\begin{matrix}
p_1 & q_1 & 0 & 0 & 0 & 0 & \dots\\
p_2 & 0 & q_2 & 0 & 0 & 0 & \dots\\
p_3 & 0 & 0 & q_3 & 0 & 0 & \dots\\
p_4 & 0 & 0 & 0 & q_4 & 0 & \dots\\
p_5 & 0 & 0 & 0 & 0 & q_5 & \dots\\
p_6 & 0 & 0 & 0 & 0 & 0 & \dots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{matrix}\right]$$
and trying to verify that state zero is not transient.
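A rough numerical companion (my construction, not spelled out in the comment): truncate the age chain to ##N## states, add an absorbing "win" state, and power the one-step matrix; the absorbed mass approximates the probability of ever winning.

```python
import numpy as np

# Truncated age chain: state i means "i losses so far".  From state i the
# chain moves to the absorbing "win" state with prob p_{i+1}, otherwise to
# age i+1; the last age self-loops as a crude truncation.
def eventual_win_prob(p, N=60, steps=200):
    M = np.zeros((N + 1, N + 1))
    for i in range(N):
        M[i, N] = p(i + 1)                 # win on trial i+1
        if i + 1 < N:
            M[i, i + 1] = 1 - p(i + 1)     # survive to the next trial
        else:
            M[i, i] = 1 - p(i + 1)         # truncation at age N
    M[N, N] = 1.0                          # "win" is absorbing
    start = np.zeros(N + 1)
    start[0] = 1.0
    return (start @ np.linalg.matrix_power(M, steps))[N]

print(eventual_win_prob(lambda n: 0.25))       # ~1.0: win is certain
print(eventual_win_prob(lambda n: 2.0 ** -n))  # ~0.711: defective, win not certain
```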
When I first saw the previous post, I assumed you meant ##x_k \rightarrow 0##. Certainly, if ##x_k## is bounded below, then it's a really clever result.
@Math_QED your objection makes sense from the point of view of mathematical rigor. But mathematical rigor is not the point of my "derivation". The point is a practical trick that allows you to find a shortcut in finding mathematical identities. This becomes particularly interesting if one generalizes my trick to something more complicated. For instance, suppose that ##p## is not constant but has a different value ##x_k## in the ##k##th play of the game. Then my trick gives an immediate identity
$$1=x_1+(1-x_1)x_2+(1-x_1)(1-x_2)x_3+\cdots$$
Try to prove this identity by more conventional methods that do not depend on probability theory. It's not so easy. Probability theory allows you to see that it is true immediately, without any hard work. I find it amazing.
It's true that for fixed ##p## you must eventually win, but if the probability of winning on the ##k##th play reduces quickly, then you may not eventually win and the sum may be less than ##1##. For example:
If we let ##x_1 = \frac14## and ##x_k = \frac{1}{4^k(1-x_1)(1-x_2)\cdots(1-x_{k-1})}##, then:
##x_1 + (1-x_1)x_2 + (1-x_1)(1-x_2)x_3 + \cdots = \frac14 + \frac{1}{16} + \frac{1}{64} + \cdots = \frac13##
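A numeric check of this counterexample (my sketch): the ##x_k## are chosen so that the ##k##th term ##(1-x_1)\cdots(1-x_{k-1})x_k## equals ##1/4^k##, and the series sums to 1/3.

```python
# Build PeroK's x_k on the fly so each term of the series is 1/4^k.
total, survive = 0.0, 1.0    # survive = (1-x_1)...(1-x_{k-1})
for k in range(1, 40):
    x_k = 0.25 ** k / survive    # forces the k-th term to be 1/4^k
    total += survive * x_k
    survive *= 1 - x_k

print(total, 1 / 3)   # both ~0.333333: winning is not certain here
```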
I think there is some circular reasoning here. Before you can calculate probabilities, there should be a fixed probability measure, no? That is, you start with a given probability space ##(\Omega, \mathcal{F}, \mathbb{P})## where ##\mathbb{P}## is a probability measure. In particular, we must have ##\mathbb{P}(\Omega) = 1##. But at the end of your 'proof', you use this.