FLINE: Solving Infinity Conundrum Problems

  • Thread starter Oldfart
  • Tags: Infinity
In summary, the conversation discussed a random walk driven by coin flips, whether the walker must drift arbitrarily far from the road's centerline, and what "probability 1" actually means. It was also mentioned that for a real, physical coin the probability of landing on either heads or tails is near one but not exactly one, since the coin could land on its edge.
  • #1
Oldfart
I hope that this is the right place to post this.

If I walk down an infinite yellow brick road, flipping a fair coin with each step, and taking a step to the left if tails, a step to the right if heads, it would seem possible that I could end up a trillion miles from the road's centerline. (a) But is it certain that this must occur? (b) Is it possible that I could end up an infinite distance from the road's centerline? and (c) Is it possible that after an infinite walk, I would never have exceeded one step to the right or left of the centerline?

I seem to have severe problems with infinity...

OF
 
  • #2
You can calculate the probability that, after N steps, you are, say, between a and b steps away from the centerline. If N is large, the central limit theorem says that your position is approximately a Gaussian random variable with mean value zero and root mean square [tex]\sqrt{N}[/tex].
For example, after one million steps, there's a probability of about 68% that you are less than a thousand steps away from the centerline. To have a comparably high probability of being a trillion miles away, you would need to walk for about (a trillion)² miles... good luck, Forrest Gump!
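A quick numerical check of this (a minimal sketch, not from the thread; assumes NumPy is available):

[code]
# Sketch: sample many million-step walks and check the sqrt(N) spread
# and the ~68% figure quoted above.
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000        # coin flips per walk
walkers = 100_000    # independent walkers

# Final displacement = (#heads - #tails) = 2*(#heads) - N.
heads = rng.binomial(N, 0.5, size=walkers)
final = 2 * heads - N

print("RMS displacement:", final.std())                           # close to sqrt(N) = 1000
print("fraction within sqrt(N):", (np.abs(final) < 1000).mean())  # close to 0.68
[/code]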
 
  • #3
First, I have to ask... why are you taking steps right and left instead of down and up the road :P

More seriously.

a) There is no certainty that you will ever make any distance from the centerline. Consider the (highly unlikely) event that your flips alternate forever, heads then tails, each flip the opposite of the last one. In the limit, you have averaged no distance and never strayed more than one step from the centerline.

b) In the (equally unlikely) event that you always flip heads (or always flip tails), in the limit, you will have traveled an infinite amount of distance. So this, too, is a possibility.

And c) is obviously possible as I described in a).

Now, calculating the probabilities of these things happening can be tricky, but save that for the homework :)

If you are having trouble understanding it, you might think of the problem this way instead.

Replace the coin flips with a sequence of -1's and +1's. The net displacement is the infinite sum of this sequence, and the infinite sum is just the limit of the partial sums (with the possibility that the limit diverges).
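As a minimal sketch of that reformulation (not from the thread; assumes NumPy):

[code]
# Sketch: flips become a sequence of -1's and +1's; the displacement after
# n steps is the n-th partial sum of that sequence.
import numpy as np

rng = np.random.default_rng(1)
flips = rng.integers(0, 2, size=1000)   # 0 = tails, 1 = heads
steps = 2 * flips - 1                   # map to -1 / +1
partial_sums = np.cumsum(steps)         # displacement after each step

print("final displacement:", partial_sums[-1])
# Question (c) asks whether the walk can stay within one step forever:
print("ever strayed beyond one step:", bool((np.abs(partial_sums) > 1).any()))
[/code]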
 
  • #4
In the simple one dimensional random walk (integer steps with equal prob. in either direction), the probability that any point will be reached eventually is one. This is also true for two dimensions, but not for three or more.
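A simulation can only hint at this, not prove it, but here is a small sketch (not from the thread; assumes NumPy): the fraction of walks that manage to reach a fixed point within a step budget keeps creeping toward 1 as the budget grows.

[code]
# Sketch: estimate the chance of reaching the point +5 within T steps.
import numpy as np

rng = np.random.default_rng(2)
target, trials = 5, 1000

for T in (100, 1_000, 10_000):
    steps = 2 * rng.integers(0, 2, size=(trials, T)) - 1
    paths = np.cumsum(steps, axis=1)
    hit = (paths == target).any(axis=1)
    print(f"T = {T:>6}: fraction of walks reaching +{target}: {hit.mean():.3f}")
[/code]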
 
  • #5
This statement
mathman said:
In the simple one dimensional random walk (integer steps with equal prob. in either direction), the probability that any point will be reached eventually is one.
is related to the following problem:

[tex]p(n,k)=\frac{1}{2}\big[p(n-1,k-1)+p(n-1,k+1)\big]\qquad\textrm{for}\quad n>0,\,\forall k[/tex]

with initial conditions

[tex]p(0,k)=\delta(k)\qquad\forall k[/tex]

with n and k integers. Does anyone know a general method for solving this kind of equation (difference equations in more than one variable)?
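For this particular recurrence the closed form can be written down directly (it is the binomial distribution), which at least suggests one general tool, the generating function over k:

[tex]p(n,k)=\frac{1}{2^n}\binom{n}{\frac{n+k}{2}}\quad\textrm{if }n+k\textrm{ is even and }|k|\le n,\qquad p(n,k)=0\quad\textrm{otherwise,}[/tex]

which satisfies the recursion by Pascal's rule. More generally, multiplying the equation by [tex]x^k[/tex] and summing over k gives

[tex]G_n(x)=\sum_k p(n,k)\,x^k=\left(\frac{x+x^{-1}}{2}\right)G_{n-1}(x)=\left(\frac{x+x^{-1}}{2}\right)^n,[/tex]

and reading off the coefficients recovers the formula above.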
 
  • #6
mathman said:
In the simple one dimensional random walk (integer steps with equal prob. in either direction), the probability that any point will be reached eventually is one.

Thanks for the replies!

Does the above quote directly conflict with Tac-Tic's "There is no certainty that you will make any distance"?

Perhaps I'm just having a problem with mathspeak, don't know but need help!

OF
 
  • #7


According to Alex Bellos, author of Here's Looking at Euclid (p. 234), if the coin is tossed a great many times, the most likely number of times the walker will cross the starting point is zero. The next most likely number is one, then two, then three, and so on.

This is against what I first thought: fair coin, 50% probability. You would think that you would spend equal time on both sides of the starting point. Not true.
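A rough simulation (a sketch, not from the book or the thread; assumes NumPy) agrees with this counterintuitive ordering:

[code]
# Sketch: count sign changes (crossings of the starting point) in many
# 100-step walks; the counts for 0, 1, 2, ... crossings typically come
# out in decreasing order.
import numpy as np

rng = np.random.default_rng(3)
N, trials = 100, 100_000

steps = 2 * rng.integers(0, 2, size=(trials, N)) - 1
paths = np.cumsum(steps, axis=1)

# A crossing: the walk sits at 0 and then keeps moving in the same
# direction it arrived with, so it passes from one side to the other.
at_zero = paths[:, 1:-1] == 0
keeps_going = steps[:, 1:-1] == steps[:, 2:]
crossings = (at_zero & keeps_going).sum(axis=1)

print("walks with 0,1,2,3,4 crossings:", np.bincount(crossings)[:5])
[/code]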
 
  • #8
Does the above quote directly conflict with Tac-Tic's "There is no certainty that you will make any distance"?

It doesn't conflict. Probability 1 doesn't mean certainty.
 
  • #9
mathman said:
It doesn't conflict. Probability 1 doesn't mean certainty.

OK, thanks, I was sort of afraid of something like that. Dang mathspeak...

Can someone briefly explain to me what a probability of 1 conveys? (I thought it was like the probability of a flipped coin landing either heads or tails is 1, or 100%.)

Thanks, OF
 
  • #10
IMO, it's just like how 0.999 is basically 1 in most applications, but isn't 100% certainty.

Probabilistic calculations almost never ensure certainty. What they do is describe data according to (usually) a Gaussian or some other distribution, under which even events so rare as to appear impossible (5 sigma or greater) can still occur, because of a phenomenon called a fat tail.
 
  • #11
OK...

I knew that I shouldn't have come in here...

But as I sneak back out, could anyone tell me what the probability is of a flipped coin landing either heads or tails?

Thanks, OF
 
  • #12
Near one, but not perfect.

If you're talking about an actual physical coin, the odds of landing on either side aren't perfectly even, btw, as it is unlikely that the coin is perfectly balanced...

but in a thought experiment the odds are exactly 50/50, because the flip is just binary logic, 0 or 1, which has a wide range of uses even outside of computer science. :)

I'm sure that someone with more reading on probability could give you better answers; I personally haven't studied probability at all, and am simply using logic as a tool to answer your questions.
 
  • #13
G037H3 said:
Near one, but not perfect.

If you're talking about an actual physical coin, the odds of landing on either side aren't perfectly even, btw, as it is unlikely that the coin is perfectly balanced...

Thanks for trying to help, G03!

What causes the "...not perfect."? Lands on edge?

And won't an unbalanced coin still always land either heads or tails?

Thanks, OF
 
  • #14
Yeah, the coin could land on its edge, probably with less than a 1 in 10^6 (1 in a million) chance, but it's always possible.

But like I said, the problems/models with coin flipping are hypothetical, where it is a virtual coin, and the odds of either side are exactly 50/50.

It's a pretty basic example of the contrast between a mathematical ideal and reality. The odds of the coin not landing on a side are so low that it is safe to ignore them and still get accurate results from the binary model the coin represents. But my point was just that you have to keep in mind that what you're dealing with isn't a real physical coin, so you shouldn't think of it that way. It's just an intuitive crutch to help you understand 50/50 probability.
 
  • #15
I think that the OP might enjoy reading the Wikipedia page on http://en.wikipedia.org/wiki/Almost_surely .
 
  • #16
The_Duck said:
I think that the OP might enjoy reading the Wikipedia page on http://en.wikipedia.org/wiki/Almost_surely .

Thanks for the link, Duck! Very informative!

OF
 
  • #17
Oldfart said:
Can someone briefly explain to me what a probability of 1 conveys? (I thought it was like the probability of a flipped coin landing either heads or tails is 1, or 100%.)
A probability measure evaluated on an event gives the value 1.

I assume, however, you are interested not in what it means mathematically, but how it might be interpreted into the real world. Frequentism is common -- if we repeat the same experiment indefinitely, the proportion of times the event occurred converges to 1.

(philosophical issues brushed aside)

In such an interpretation, a particular event can have probability 1 so long as it fails sufficiently infrequently, e.g. if it failed the first time but occurred every time after that.
 
  • #18
Hurkyl said:
A probability measure evaluated on an event gives the value 1.

I assume, however, you are interested not in what it means mathematically, but how it might be interpreted into the real world. Frequentism is common -- if we repeat the same experiment indefinitely, the proportion of times the event occurred converges to 1.

(philosophical issues brushed aside)

In such an interpretation, a particular event can have probability 1 so long as it fails sufficiently infrequently, e.g. if it failed the first time but occurred every time after that.

Can you give us a practical example (with coins, dice, etc.) of this? Note that the example given by Mathman doesn't apply here, because he simply says that the probability that his particular event will EVENTUALLY occur is one, so you can never say a particular realization of the experiment failed. In other words: can you give an example of an event that has p = 1 but can also fail to occur? (With discrete random variables, so we can rule out problems with zero-measure sets.)
 
  • #19
A simple illustration of the difference between probability one and certainty is to consider an infinite sequence of (fair) coin flips. The probability that the fraction of heads converges to 1/2 is one (law of large numbers), but it is not certain. It is possible that all the flips could turn up heads, but the probability of that is zero.
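A tiny numerical illustration of the first half of that statement (a sketch, not from the thread; assumes NumPy):

[code]
# Sketch: the running fraction of heads hugs 1/2 as the number of flips
# grows, even though an unbroken run of heads is never impossible --
# it just has probability (1/2)^n, which tends to 0.
import numpy as np

rng = np.random.default_rng(4)
flips = rng.integers(0, 2, size=1_000_000)   # 1 = heads, 0 = tails
running_fraction = np.cumsum(flips) / np.arange(1, flips.size + 1)

for n in (10, 1_000, 100_000, 1_000_000):
    print(f"fraction of heads after {n:>9} flips: {running_fraction[n - 1]:.4f}")
[/code]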
 
  • #20
Petr Mugver said:
In other words: can you give an example of an event that has p = 1 but can also fail to occur? (With discrete random variables, so we can rule out problems with zero-measure sets.)
The simplest example I've come up with in the past is that, for flipping a particular coin, you have probability 1 of not getting the first heads.
 
  • #21
Hurkyl said:
The simplest example I've come up with in the past is that, for flipping a particular coin, you have probability 1 of not getting the first heads.

Can you say this in other words please? (Sorry, maybe it's my English, but I don't understand it.)
 
  • #22
Petr Mugver said:
Can you say this in other words please? (Sorry, maybe it's my English, but I don't understand it.)
The experiment is to flip a coin.

One event is "the first heads ever flipped on this coin". This is a probability zero event. Its negation -- "tails, or a heads that isn't the first one flipped on this coin" -- is a probability one event.

Despite being probability one, it will almost surely fail exactly one time.
 
  • #23
Hurkyl said:
The experiment is to flip a coin.

One event is "the first heads ever flipped on this coin". This is a probability zero event. Its negation -- "tails, or a heads that isn't the first one flipped on this coin" -- is a probability one event.

Despite being probability one, it will almost surely fail exactly one time.

Sorry, but I still don't understand exactly. (I feel a bit stupid right now...) Tell me if I got it right: you flip a coin many times, and you stop the experiment the first time heads comes out?
 
  • #24
Petr Mugver said:
Sorry, but I still don't understand exactly. (I feel a bit stupid right now...) Tell me if I got it right: you flip a coin many times, and you stop the experiment the first time heads comes out?
The frequentist interpretation of probabilities is that if you repeat the same experiment forever, the proportion of successes converges to the probability of success.

The experiment is "flip a coin". Failure is the event "the first heads of the coin". As the number of experiments increases, the number of failures will be at most 1, so the proportion of successes does converge to 1.
 
  • #25
I take it, then, that you could have said "the first trillion heads of the coin" and the probability would still be 1.

OF
 
  • #26
I also had some trouble understanding what Hurkyl meant, but I understand it now:

You toss a coin repeatedly.
X is the number of tosses you have done when your toss turns up heads for the first time.

X could be 1: heads, ...
X could be 3: tail, tail, heads, ...
X could be 7: tail, tail, tail, tail, tail, tail, heads, ...
X could be any number

If you toss the coin 1000 times, the probability of not having tossed heads yet is (1/2)^1000, since every single one of the 1000 tosses would have to come up tails.

If you toss the coin more often this probability gets smaller and smaller, and in the limit of tossing the coin an infinite number of times it goes to 0. Thus the probability of eventually tossing a first head equals 1 (if you toss an infinite number of times), even though the all-tails sequence is still possible.

The example of "Throwing a dart" on http://en.wikipedia.org/wiki/Almost_surely is also a very clear example, I think.
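For reference, the exact distribution of X (a sketch of the calculation, consistent with the post above):

[tex]P(X=n)=\left(\frac{1}{2}\right)^{n},\qquad\sum_{n=1}^{\infty}P(X=n)=1,\qquad P(\textrm{no heads in the first }N\textrm{ tosses})=\left(\frac{1}{2}\right)^{N}\to 0.[/tex]

So heads almost surely appears eventually, even though the all-tails sequence is never ruled out.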
 
  • #27
Petr Mugver said:
Can you give us a practical example (with coins, dice, etc.) of this? Note that the example given by Mathman doesn't apply here, because he simply says that the probability that his particular event will EVENTUALLY occur is one, so you can never say a particular realization of the experiment failed. In other words: can you give an example of an event that has p = 1 but can also fail to occur? (With discrete random variables, so we can rule out problems with zero-measure sets.)

Pretend that we are all some exact height, i.e. that height varies over a continuum rather than taking discrete values, and also imagine that we could somehow find out this height exactly.

Then the probability of you being any height except exactly 6 feet tall is 1 since the probability of you being exactly 6 feet is 0 (think about all the other values, really close to 6 feet that you could have been, even if you are around 6 feet tall). But this would work for any number, including the height which you actually are.

So you see that the "probability of you being any height except exactly 6 feet tall" is 1, yet it doesn't mean that it is certain. In the same way the "probability of you being exactly 6 feet tall" is 0, yet it is still possible to be that height.

All this "maths talk" does have well defined and understandable definitions, but sometimes you just have to work a bit to unravel them =D Hope that helps a little.
 

