What's the meaning of "random" in Mathematics?

  • #36
Buzz Bloom said:
Hi fbs:

This seems like a very strange question to me. If you say x ∈ X, you know something about x and X. Presumably you would know that x is a random variable if someone you believe to be knowledgeable tells you it is. What that knowledgeable person needs to establish is that the process for obtaining values of x is a random process. So the randomness of a variable is determined by whether the process that produces its values is random.

I am guessing you have some uncertainty about what it means for a process to be random. A random process is a process for which it is impossible by any means to know in advance what a particular value will be. This is the distinction between a random process and a pseudo-random process. If the process is pseudo-random, and you know the nature of this process and its initial conditions, in principle you can calculate the next value it will generate.

I hope this is helpful.

Regards,
Buzz

Appreciate it. I was stuck on the fact that logic is completely deterministic. If you have propositions A, B, C that are either true or false, then you'll always get other propositions D, E, F that are true or false as a consequence. No changes, ever. So if ##f(x) = x^2## and ##x = 2##, then always ##f(x) = 4##. If so, then how could a "random" value ever be the result of a logical sequence of true/false propositions?

The Cox formulation untied that knot for me, through the abstract concept called "plausibility", which isn't mathematically defined -- it's an axiom. From what I understood, you don't have to define the process of rolling a die; you just have to assume that plausibility(rolled-a-3) exists and lies in ##[0,1]##. Similarly, with Cox you don't need to define a process through which a ghostly hand will "choose" a fruit from a bag of fruits -- what's the hand? what's choosing? There's no need for that; you just assume that picked-an-orange exists and that plausibility(picked-an-orange) = plausibility(picked-an-apple) = plausibility(picked-a-lemon), and you get all kinds of useful calculations from that. There's no violation of the determinism of logic that way.
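For instance, assuming the three picks are mutually exclusive and exhaustive (my toy reading of it, not Cox's own words), normalization and the sum rule already pin the numbers down:

$$3p = 1 \;\Rightarrow\; p(\text{picked-an-orange}) = \tfrac{1}{3}, \qquad p(\text{orange} \vee \text{apple}) = \tfrac{1}{3} + \tfrac{1}{3} = \tfrac{2}{3}.$$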

I'm probably murdering poor Cox here, but that's how I untangled that knot, in my mind :biggrin:
 
  • #37
Ray Vickson said:
The late physicist E. T. Jaynes wrote a provocative book "Probability Theory: the Logic of Science", Cambridge University Press, 2003, in which he essentially rejects the very idea of "randomness". That's right, a large probability book by somebody who does not believe in randomness! For Jaynes (and several others---maybe mostly physicists), probability is associated with a "degree of plausibility". He shows that using some reasonable axioms about how plausibilities combine, you can end up with multiplication laws like P(A & B) = P(A) P(B|A), etc.

Yay! Gotta love Cox & Jaynes! Hooray to them! I somehow suspect that Kolmogorov and Cox/Jaynes are equivalent, as (I suspect) they come to the same conclusions through (I suspect) different axiomatic processes, but I did find Cox infinitely easier to grasp.
 
  • #38
fresh_42 said:
You might want to look for the alternative. It requires Cauchy sequences and equivalence classes. At least those are useful anyway, whereas Dedekind cuts are good for just this one purpose. Google "real numbers as Cauchy limits" or so.

Imo the better approach. It introduces the student to sequences, something fundamental in analysis.
 
  • #39
fresh_42 said:
You're right. My only excuse is: far too many COBOL and RPG switches ...
Yes, 1+1=0 is not true in a Boolean algebra. But it is true in the field ##\mathbb Z_2##.
 
  • #40
fbs7 said:
my mind screws were more in place with the idea that "random" is an interpretation thing
I suggest you hold on to that idea. The meaning of 'random' in the everyday world is a philosophical issue. There have been countless millions of words written in philosophical journals and the like about whether the universe is 'random', but few of them make sense because the definition of 'random' is not specified with sufficient clarity.

Even in mathematics there is no definition of 'random' on its own. The word is only used in conjunction with another word, usually 'variable'. We have 'random variables' and 'stochastic processes', which are precisely defined terms, but there is no freestanding adjective 'random' in probability theory.
 
  • Like
Likes fresh_42
  • #41
fbs7 said:
Yay! Gotta love Cox & Jaynes! Hooray to them! I somehow suspect that Kolmogorov and Cox/Jaynes are equivalent, as (I suspect) they come to the same conclusions through (I suspect) different axiomatic processes, but I did find Cox infinitely easier to grasp.

I think that Cox/Jaynes is "equivalent" to probability as done in volume I of Feller---and that is saying a lot. However, some "deeper" modern results (seem to) need the "measure-theoretic" apparatus, so would essentially be rejected by Jaynes. Certainly, the two approaches would no longer be equivalent in any easily-described sense.

Admittedly, some treatments in the modern way of doing things look like they might just be re-statements of results done in the old-fashioned way, but the resulting statements are more cumbersome in the old way. For example, Feller, Vol. I, proves the so-called Strong Law of Large Numbers without using any measure theory or other more abstract methods. However, the result looks less appealing than the modern statement. The modern statement would amount to ##P(\lim_{n \to \infty} \bar{X}_n = \mu) = 1.## In pre-measure language the same result would say: "For every ##\epsilon > 0##, with probability 1 only finitely many of the events ##|\bar{X}_n - \mu| > \epsilon## occur." How much nicer the first way of saying it is than the second.
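As a quick numerical illustration of the modern statement (a minimal sketch; it illustrates, it proves nothing), here is the sample mean of fair-die rolls settling toward ##\mu = 3.5##:

Code:
import random

rng = random.Random(0)              # seeded so the run is reproducible
total, n = 0.0, 0
for target in (10, 1_000, 100_000):
    while n < target:
        total += rng.randint(1, 6)  # one roll of a fair die
        n += 1
    print(n, total / n)             # sample mean creeping toward mu = 3.5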
 
  • #42
Ray Vickson said:
I think that Cox/Jaynes is "equivalent" to probability as done in volume I of Feller---and that is saying a lot.

I've long suspected something like this... though I've only read part of Jaynes -- I couldn't shake the feeling while reading him that he was repackaging old ideas as new ones, using Feller as some kind of straw man to attack. There are some good ideas in Jaynes, but I am leery of polemics these days. You're probably the one person on PF who has cited Feller even more than I have, so this seems satisfying.
 
  • #43
I don't think that anybody here has come close to the crux of the problem, so let me try to steer the discussion in a different direction.

Are the digits of ##\pi## random?

Intuitively they are not because there is a deterministic algorithm that determines them, and yet all general tests of randomness suggest that they are random. It is this kind of problem that still seems to lack a satisfactory mathematical and/or philosophical solution.
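To make "general tests of randomness" concrete, here is a minimal sketch of the simplest such test, a digit-frequency (chi-square) check on the first 1000 decimals, assuming the mpmath library is available:

Code:
from mpmath import mp

mp.dps = 1002                        # working precision in decimal places
digits = str(mp.pi)[2:1002]          # drop "3.", keep the first 1000 decimals

counts = [digits.count(str(d)) for d in range(10)]
expected = len(digits) / 10
chi2 = sum((c - expected) ** 2 / expected for c in counts)
print(chi2)   # for uniform digits this is typically near 9 (9 degrees of freedom)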
 
  • Like
Likes Auto-Didact
  • #44
fbs7 said:
Oh... randomness is interpretation...

The intuitive idea behind probability is that an event can have "tendencies" to occur in different possible ways, but only occurs in one of those ways. Yes, this intuitive idea is NOT implemented in the mathematical axioms of probability theory. So when people apply mathematical probability theory to situations and reason about various outcomes being "possible", but only one outcome being "actual", they are making an interpretation of probability theory that is not present in its mathematical formulation.

So a "random" variable is really just another variable, just like "time" is just a variable without anything different than say a "mass" variable.

No. There is a saying: "A random variable is not random and it is not a variable".

As mentioned above, the mathematical assumptions used in defining a "random variable" do not treat the concept of "random" in the intuitive and common-language meaning of the word.

A "random variable" is not a variable in the same sense that a symbol representing time or mass is a variable nor is it a "variable" in the sense used in mathematical logic or in computer languages. The mathematical properties that define a random variable are stated in terms of functions called distributions. Of course, the definition of a function may contain variables (e.g. f(x) = 4x ). But the same function can be defined using different symbols. (e.g. f(x) = 4x and f(w) = 4w are definitions of the same function).

The intuitive idea of a "random variable" is that it is defined by a distribution function that can be used to compute "the fraction of times that particular sets of values of the random variable will occur". The logical problem with that interpretation is that "will occur" is a definite guarantee of something happening. Such a definite guarantee contradicts the intuitive notion of "probability", which is a concept we apply when there are no definite guarantees. In mathematical probability theory, the distribution function can be used to calculate the probability of particular sets of values - without saying what physical interpretation we assign to "probability". (i.e. There is no mathematical definition of "probability" in terms of a "tendency" or "fraction of times" for an "actual" occurrence.)

Mathematical probability theory is essentially circular. Given certain distributions, it tells us how to compute other distributions. Given various probabilities, we can compute other probabilities. There is no breaking down of the concept of "probability" into more detailed concepts.

( It's amusing and understandable that the terminology used in mathematical statistics (e.g. "significance", "confidence", "uncertainty") strongly suggests that mathematical "probability" must have a specific interpretation along the lines of a "tendency" or "fraction of times". Applications of statistics were made before the invention of modern mathematical probability theory, so it developed such terminology on the basis of applications before the foundations of the subject were properly organized. )

Demystifier said:
I don't think that anybody here has come close to the crux of the problem,

That depends on how you define "the problem".

If the problem is to state the content of mainstream mathematical probability theory (i.e. measure theory), various posters have done this.

If the problem is to find an alternative mathematical theory that implements the intuitive notion of "randomness", then your question hints that such an approach can be founded on notions of computational complexity.
 
  • Like
Likes Auto-Didact
  • #45
@Demystifier
The original question was how a mathematician deals with "random". That is not as deep a question, because a mathematician only needs to know that randomness is being assumed -- not that the assumption is physically correct. So the mathematician only has to know whether we are assuming that the digits of ##\pi## are random or not.
I think that your question is different, since it is asking whether we should accept an assumption that the digits of ##\pi## are random. Suppose there is no statistical test that proves beyond a reasonable doubt that they are not random. I would still only consider them pseudo-random, just like the output of any other pseudo-random number generator algorithm. A person as great as Einstein was able to retain the belief that there was no true randomness in the universe until his disagreement with Bohr became famous. When people as brilliant as they are disagree, I will stay out of the debate.
 
  • Like
Likes Klystron and fresh_42
  • #46
Demystifier said:
I don't think that anybody here has come close to the crux of the problem, so let me try to steer the discussion in a different direction.

Are the digits of ##\pi## random?

Intuitively they are not because there is a deterministic algorithm that determines them, and yet all general tests of randomness suggest that they are random. It is this kind of problem that still seems to lack a satisfactory mathematical and/or philosophical solution.
I'm not that surprised by that... these "randomness" tests also pass pseudo-random numbers, even though those are fully deterministic... I'd suspect that these "randomness" tests are incomplete, and pass as random some sequences that are deterministic.
 
  • #47
fbs7 said:
I'm not that surprised by that... these "randomness" tests also pass pseudo-random numbers, even though those are fully deterministic... I'd suspect that these "randomness" tests are incomplete, and pass as random some sequences that are deterministic.
I would like to point out that any truly random sequence can be turned into a deterministic one just by recording the random numbers and replaying them deterministically. Therefore, it is not possible for a statistical test of the numbers alone to distinguish between the two. There are large tables of "random" numbers available and in use.
 
  • #48
FactChecker said:
I would like to point out that any truly random sequence can be turned into a deterministic one just by recording the random numbers and replaying them deterministically. Therefore, it is not possible for a statistical test of the numbers alone to distinguish between the two. There are large tables of "random" numbers available and in use.
I suppose that one could change the question to ask whether a process generates random numbers. In that case, repeating the process of replaying the table of numbers would produce the exact same sequence and be easily identified as deterministic.
 
  • Like
Likes Buzz Bloom
  • #49
fbs7 said:
I'm not that surprised by that... these "randomness" tests also pass pseudo-random numbers, even though those are fully deterministic... I'd suspect that these "randomness" tests are incomplete, and pass as random some sequences that are deterministic.
One can go down the road of Kolmogorov randomness. I am no expert on it.

Roughly speaking, if you take this approach, the information content of a string is the length of the shortest computer program that can produce the string as output. In the context of infinite strings (such as the digits of pi) one has to get a little fancier and talk about a program that can generate the output stream from some input stream. If no program can produce output bytes at better than a 1-to-1 ratio to input bytes, then the output stream is "random".

The gotcha with this approach is that "finding the shortest program" is not computable in general. Kolmogorov randomness can be defined, but it cannot always be determined.

Edit: If you have an output stream produced by a loaded die then the Kolmogorov definition will (with probability 1) match the Shannon notion of information content in the stream.
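To attach numbers to that Edit, here is a small sketch of the Shannon entropy in question (the die weights below are made up for illustration):

Code:
from math import log2

def entropy(p):                     # Shannon entropy in bits per symbol
    return -sum(q * log2(q) for q in p if q > 0)

fair = [1 / 6] * 6
loaded = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]
print(entropy(fair))                # ~2.585 bits per roll
print(entropy(loaded))              # ~2.16 bits: the loaded die's output compresses better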
 
Last edited:
  • #50
FactChecker said:
I suppose that one could change the question to ask whether a process generates random numbers. In that case, repeating the process of replaying the table of numbers would produce the exact same sequence and be easily identified as deterministic.
We could say that a process generates sequences of random numbers if there are no statistical tests on repetitions of the process which would identify it as deterministic beyond a reasonable doubt. Then we could call any output sequence of the process random. That opens several cans of worms, one of which is that "no statistical tests" is poorly defined, and it is conceivable that there will always be some statistical test that a given process will fail.
 
  • #51
fbs7 said:
I'm not that surprised by that... these "randomness" tests also pass pseudo-random numbers, even though those are fully deterministic... I'd suspect that these "randomness" tests are incomplete, and pass as random some sequences that are deterministic.
Hi fbs:

A randomness test gives a value that estimates how closely a particular pseudo-random number sequence (e.g., the digits of pi) approximates a truly random one. It is not intended to be complete.

Regards,
Buzz
 
Last edited:
  • #52
fbs7 said:
Appreciate it. I was stuck on the fact that logic is completely deterministic.
Hi fbs:

A deterministic process is by definition non-random. It is possible to voice logical statements about randomness, but such statements are not processes for generating random values. They may, however, be statements describing requirements that a process must satisfy in order to produce random values. For example, it is logical to say, with respect to QM, that measuring the spin of a sequence of non-entangled particles produces a sequence of random values, each of which is either "up" or "down".

Regards,
Buzz
 
  • #53
I don't know if this is too basic, but a variable is random if its outcome cannot be predicted with certainty. At best we can describe the distribution of the outcomes.
 
  • #54
WWGD said:
I don't know if this is too basic, but a variable is random if its outcome cannot be predicted with certainty. At best we can describe the distribution of the outcomes.
That certainly has advantages. For one thing, it is simple and as usable as any other definition. For another, it allows one to discuss the role that lack of available information plays in the inability to predict. One can reasonably think of randomness and probabilities in terms of the ability to guess and predict, given the information available. The question of whether a result is really physically random is then no longer the issue -- it becomes a question of whether enough information is known to make the prediction. That second question is easier to agree on.
 
  • #55
FactChecker said:
That certainly has advantages. For one thing, it is simple and as usable as any other definition. For another, it allows one to discuss the role that lack of available information plays in the inability to predict. One can reasonably think of randomness and probabilities in terms of the ability to guess and predict, given the information available. The question of whether a result is really physically random is then no longer the issue -- it becomes a question of whether enough information is known to make the prediction. That second question is easier to agree on.
Well, true, I am doing a good amount of assuming/black-boxing. We may need to conduct tests on whether the variable (and not a single output) is random. But at least these are some guideposts/goalposts, and, yes, it gets pretty tricky: what information, if any, would be needed to do a better approximation or estimation of the output? We may also have trouble with (somewhat) pathological cases like Cauchy variables with infinite variance.

I can, though, think of genuinely random variables when, e.g., using a pendulum oscillating between (more than one) magnets and seeing where it settles.
 
  • #56
Wow, many exquisite insights!

Let me ask you mathematicians this question: a pseudo-random sequence will repeat, even if only after an absurd amount of time. But the digits of π do not repeat.

We know that neither of them is truly random, given that the n-th number can be exactly calculated, although both "seem" random up to some measure.

But, given the fact that the digits of π do not repeat, does that make that sequence a bit more "random-ny" than a pseudo-random generator, if there's such a thing? I fear I'm butchering proper mathematic-inglish again! :biggrin:
 
  • #57
fbs7 said:
Wow, many exquisite insights!

Let me ask you mathematicians this question: a pseudo-random sequence will repeat, even if only after an absurd amount of time.
That is a property of some specific methods of generating pseudo-random numbers. It is not a consequence of "pseudo-random" alone. Even very good methods may fail sophisticated statistical tests, but they are adequate for most uses.
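For instance, here is a toy sketch of the classic repeating kind, a linear congruential generator with deliberately tiny constants so the cycle is visible (real generators use huge moduli):

Code:
def lcg(seed, a=5, c=1, m=16):      # x -> (a*x + c) mod m
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=3)
print([next(gen) for _ in range(20)])
# each of the 16 residues appears once, then the cycle starts over: period m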
 
  • #58
fbs7 said:
But, given the fact that the digits of π do not repeat, does that make that sequence a bit more "random-ny" than a pseudo-random generator, if there's such a thing? I fear I'm butchering proper mathematic-inglish again!
Hi fbs:

I am guessing that your question is whether it is possible for a randomness test method to determine whether one pseudo-random number generator is better than another. I am also guessing that the answer is yes, and I am pretty confident this second guess is correct.

You should keep in mind that pseudo-random number generators are used for specific purposes. It is desired that the pseudo-random number generator used will give a result adequately close to what would be obtained using a true random number generator, e.g., something based on QM. If you know theoretically what statistical result to expect from a sequence of random numbers used for a specific purpose, e.g., Monte-Carlo calculations, then the results of performing several Monte-Carlo runs with different pseudo-random number generators can be compared to determine which is best for that particular purpose.
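Here is a minimal sketch of that kind of comparison, with Python's seeded default generator standing in (the target, ##\pi##, is known exactly, so different generators can be scored against it):

Code:
import random

def mc_pi(rng, n=100_000):          # quarter-circle Monte-Carlo estimate of pi
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4 * hits / n

rng = random.Random(12345)          # swap in a different generator to compare
print(mc_pi(rng))                   # should land near 3.14159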

Regards,
Buzz
 
  • #59
fbs7 said:
Wow, many exquisite insights!

Let me ask you mathematicians this question: a pseudo-random sequence will repeat, even if only after an absurd amount of time. But the digits of π do not repeat.

We know that neither of them is truly random, given that the n-th number can be exactly calculated, although both "seem" random up to some measure.

But, given the fact that the digits of π do not repeat, does that make that sequence a bit more "random-ny" than a pseudo-random generator, if there's such a thing? I fear I'm butchering proper mathematic-inglish again! :biggrin:
A pseudo-random sequence of numbers is just a sequence that has no readily apparent pattern, but is generated by a finite algorithm that takes no external inputs after the generation of the first M values (M a fixed positive integer). Since the digits of pi are generated by an algorithm whose calculation takes no external inputs, they are a pseudo-random sequence.

So far as I am aware, it is not the case that a pseudo-random sequence must eventually repeat. It is just that non-repeating sequences become slower to compute the longer they continue, so the ones used by computers tend to be repeating, for efficiency reasons. One way that I think will make a sequence non-repeating is as follows:

First, a definition: we say that a sequence of numbers, which may be finite or infinite, is 'repeating with lag L' if, for any j that is an index of the sequence such that j+L is also an index, the j-th and the (j+L)th element are equal.

The algorithm will use a simple, repeating, pseudo-random generator as its base. It generates positive integers.

Say we have already generated the first n numbers. Then for every k from 2 to (n+1), let m(n,k) be the quotient from dividing (n+1) by k, and let r(n,k) be the quotient from dividing k by 2. For each k, there will be zero or one number such that, if that number is chosen, the last r(n,k)·m(n,k) of the first (n+1) sequence elements are repeating with lag m(n,k). Such a number is a 'forbidden number', as it creates a temporary repeat in a significant tail of the sequence thus far. Let F(n) be the set of all forbidden numbers at the step for generating the (n+1)th number.

Then generate a new number h(n+1) from the simple generator. Now choose as the (n+1)th number the closest positive integer to h(n+1) that is not in F(n).

I believe such a sequence will never repeat. But it will become progressively slower, because the number of checks that need to be made for each number increases as the sequence progresses.

But the sequence is pseudo-random because it satisfies the definition.
 
Last edited:
  • Like
Likes Klystron
  • #60
Hmm... true point! I'm a programmer, so I should have seen that too; appreciate you pointing it out.

Another example of a generator that will (eventually, and very very very slowly) not repeat is this one, for say 8-bit integers (or whatever size integers we choose):

Code:
import zlib

def randomness(a):                     # crude proxy: compressed length (one choice)
    return len(zlib.compress(bytes(a)))

N, target = 32, 40                     # sequence length, desired-target
a = bytearray(N)                       # initialize a[i] = 0 for i in 1..N
while randomness(a) <= target:         # repeat ... until randomness > target
    for i in range(N):                 # for each 8-bit integer in the sequence
        for j in range(8):             # for each bit
            before = randomness(a)
            a[i] ^= 1 << j             # invert bit j of a[i]
            if randomness(a) < before: # new sequence is less random:
                a[i] ^= 1 << j         # revert the bit back

that algorithm (a brute-force search, actually) will eventually generate a non-repeating, whatever-length sequence that will pass whatever randomness criteria we set... at a cost of taking ages to process... hmm... actually this is probably the slowest random number generator ever... I should get a prize or something for that :biggrin:

hmm... actually I'm a bad programmer... I should check for the thing hitting a hole, where no one-bit change increases randomness; if one checks for valleys and holes and backtracks, the thing gets even slower...
 
  • #61
fbs7 said:
Another example of a generator that will (eventually, and very very very slowly) not repeat is this one, for say 8-bit integers (or whatever size integers we choose):

Code:
import zlib

def randomness(a):                     # crude proxy: compressed length (one choice)
    return len(zlib.compress(bytes(a)))

N, target = 32, 40                     # sequence length, desired-target
a = bytearray(N)                       # initialize a[i] = 0 for i in 1..N
while randomness(a) <= target:         # repeat ... until randomness > target
    for i in range(N):                 # for each 8-bit integer in the sequence
        for j in range(8):             # for each bit
            before = randomness(a)
            a[i] ^= 1 << j             # invert bit j of a[i]
            if randomness(a) < before: # new sequence is less random:
                a[i] ^= 1 << j         # revert the bit back

that algorithm (a brute-force search, actually) will eventually generate a non-repeating, whatever-length sequence that will pass whatever randomness criteria we set... at a cost of taking ages to process... hmm... actually this is probably the slowest random number generator ever... I should get a prize or something for that :biggrin:
It is notoriously difficult to design an algorithm for a random number generator. Anything that one thinks will be random is very likely to display patterns that are clear to see when looked at in the right way. Your program looks like an example of that. A classic book which contains a lot on the subject is Knuth, "The Art of Computer Programming, Volume 2: Seminumerical Algorithms".
 
  • Like
Likes Klystron and Auto-Didact
  • #62
FactChecker said:
It is notoriously difficult to design an algorithm for a random number generator. Anything that one thinks will be random is very likely to display patterns that are clear to see when looked at in the right way. Your program looks like an example of that. A classic book which contains a lot on the subject is Knuth, "The Art of Computer Programming, Volume 2: Seminumerical Algorithms".
As an example of the notorious difficulty, all of the most touted algorithms in the 1981 edition of this book are now considered inferior and obsolete, as randomness testing improved and uses became more sensitive to multidimensional correlations (which plagued the most touted algorithms of the 1981 edition). I do not know if there is a more recent edition; the 1981 is the only one I have.

However, I still strongly recommend it as the only book I know of that develops a robust theory of pseudo-randomness in a sequence.
 
  • Like
Likes FactChecker and Auto-Didact
  • #63
There are entire texts now on random number generators that, I assume, are more modern and comprehensive. Unfortunately, I am not up to date on any new developments.
 
  • #64
WWGD said:
I can, though, think of genuinely random variables when, e.g., using a pendulum oscillating between (more than one) magnets and seeing where it settles.
This screams deterministic chaos to me, i.e. completely non-random, but merely unpredictable for all practical purposes (FAPP) due to limited measurement precision, firstly for practical reasons and secondly because of the uncertainty principle in QM.

Another popular example of deterministic chaos in which the pattern seems random is the Lorenz water wheel, where it quickly becomes impossible to predict which way the wheel will turn or how it began turning, yet its complete dynamics is fully described by an attractor in state space.
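Here is a minimal sketch of the same phenomenon, using the logistic map rather than the Lorenz wheel only because it fits in a few lines: two nearly identical starting points diverge completely even though the rule is fully deterministic.

Code:
r = 4.0                          # chaotic regime of the logistic map x -> r*x*(1-x)
x, y = 0.4, 0.4000001            # two almost identical initial conditions
for _ in range(40):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
print(x, y, abs(x - y))          # after 40 steps the trajectories are unrelated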
 
  • #65
PAllen said:
As an example of the notorious difficulty, all of the most touted algorithms in the 1981 edition of this book are now considered inferior and obsolete, as randomness testing improved and uses became more sensitive to multidimensional correlations (which plagued the most touted algorithms of the 1981 edition). I do not know if there is a more recent edition; the 1981 is the only one I have.

However, I still strongly recommend it as the only book I know of that develops a robust theory of pseudo-randomness in a sequence.
The algorithms of that time would show high autocorrelations at particular large lags. Box-Jenkins time-series analysis, implemented on computers, easily identified weaknesses that would otherwise not have been noticed. Those algorithms were perfectly adequate for most uses in simulation, but not if very sophisticated statistical analysis of the results was to be done. I felt that the weaknesses were noticed in the academic world but usually did not matter in real-world applications.
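For the curious, a minimal sketch of a lag-##k## autocorrelation check of the kind described (a good generator should give values near 0):

Code:
import random

def autocorr(xs, k):                # sample autocorrelation at lag k
    n, mean = len(xs), sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / n
    return sum((xs[i] - mean) * (xs[i + k] - mean)
               for i in range(n - k)) / ((n - k) * var)

rng = random.Random(1)
xs = [rng.random() for _ in range(10_000)]
print(autocorr(xs, 1), autocorr(xs, 100))   # both should be close to 0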
 
  • #66
FactChecker said:
The algorithms of that time would show high autocorrelations at particular large lags. Box-Jenkins time-series analysis, implemented on computers, easily identified weaknesses that would otherwise not have been noticed. Those algorithms were perfectly adequate for most uses in simulation, but not if very sophisticated statistical analysis of the results was to be done. I felt that the weaknesses were noticed in the academic world but usually did not matter in real-world applications.
For most applications, yes. However, a former physics professor of mine bumped hard into the fundamental limits of linear congruential generators doing Monte Carlo simulation in a high-dimensional phase space.
 
  • Like
Likes Klystron and FactChecker
  • #67
Years ago when I was a newbie programmer, a senior programmer told me the story of one such pseudo random generator program.

It was a new and better algorithm that looked totally random according to initial tests.

However, it wasn't until the random sequence elements were grouped into triplets and plotted in 3D that it was discovered that they all fell on the same plane.
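That story is reminiscent of the classic RANDU generator, so here is a hypothetical reconstruction (a sketch; not necessarily the program in the story):

Code:
def randu(seed, n):                 # IBM's RANDU: x -> 65539*x mod 2**31
    x, out = seed, []
    for _ in range(n):
        x = (65539 * x) % 2**31
        out.append(x / 2**31)
    return out

u = randu(1, 3000)
triples = list(zip(u[0::3], u[1::3], u[2::3]))
# plotted in 3D, these triples fall on just 15 parallel planes, because
# x[k+2] = 6*x[k+1] - 9*x[k] (mod 2**31) for this multiplier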
 
  • Like
Likes FactChecker and Auto-Didact
  • #68
I'm surprised that no one on this forum has mentioned that there are in fact different definitions of randomness, depending on the context in which the word is used.

For example, there is a notion of statistical randomness. A numeric sequence is said to be "statistically random" when it contains no recognizable patterns or regularities. Note that a statistically random sequence doesn't necessarily imply "true" randomness in the sense of objective unpredictability, since a deterministic process (such as the calculation of the digits of π) can generate a statistically random sequence (thus the utility of pseudorandom number generators).

There is another way in which randomness is defined in mathematics. The field of algorithmic information theory studies, among other topics, what constitutes a random sequence. The central idea is that a string of bits is random if and only if it is shorter than any computer program that can produce that string (called Kolmogorov randomness, after the Russian mathematician Andrey Kolmogorov, who developed this definition; it was also developed independently by the American computer scientist Ray Solomonoff and the American mathematician Gregory Chaitin).
 
Last edited:
  • Like
Likes FactChecker
  • #69
Wow! That is awesome! Information theory at its finest!

If you cannot compress a string (by calculating it with a program) then the string is random. Mind-blowing! So "abababababababababababababab" is not random, and neither is "abcdefghijklmnopqrstuvwxyz". I'm starting to like this Kolmogorov guy a little more!

Or not... hmm... say I give you this string: "azi843ad8$@$%po#$cd2904". If I could write a tiny program that calculated it, then I'd know it's not random. But if I can't write that program, how could I prove it's a random string? I'd have to prove that nobody, ever, anywhere in the history of the Universe, could write a small program that, by doing adds/multiplications/etc., would calculate that string. Not liking Kolmogorov any more... he seems to have the habit of frying other people's brains!
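A crude, runnable way to play with the idea (a sketch only: zlib is a stand-in, and any fixed compressor gives merely an upper bound on the true complexity):

Code:
import zlib

def compressed_size(s):             # length of the zlib-compressed string
    return len(zlib.compress(s.encode(), 9))

for s in ("ab" * 14, "abcdefghijklmnopqrstuvwxyz", "azi843ad8$@$%po#$cd2904"):
    print(len(s), compressed_size(s), s)
# "abab..." shrinks a lot; note zlib misses the alphabet's pattern entirely,
# a reminder that no fixed compressor computes true Kolmogorov complexity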
 
  • #70
fbs7 said:
Wow! That is awesome! Information theory at its finest!

If you cannot compress a string (by calculating it with a program) then the string is random. Mind-blowing! So "abababababababababababababab" is not random, and neither is "abcdefghijklmnopqrstuvwxyz". I'm starting to like this Kolmogorov guy a little more!

Or not... hmm... say I give you this string: "azi843ad8$@$%po#$cd2904". If I could write a tiny program that calculated it, then I'd know it's not random. But if I can't write that program, how could I prove it's a random string? I'd have to prove that nobody, ever, anywhere in the history of the Universe, could write a small program that, by doing adds/multiplications/etc., would calculate that string. Not liking Kolmogorov any more... he seems to have the habit of frying other people's brains!
It seems that the rule "any program producing the string must be at least as large as the string itself" works well as a definition, but proving such a thing for a given sequence is hard.
 
