Is there a definition of randomness?

In summary, the concept of "randomness" does not have a single formal definition and is often approached philosophically rather than mathematically or scientifically. While there are formal definitions for objects used in probability theory, such as random variables and stochastic processes, these do not fully capture the meaning of "randomness". Additionally, the concept of "normal" sequences of digits has some similarities to the folk notion of "randomness", but it too falls short of providing a clear definition. Ultimately, the predictability of a process depends on the theory being used, and there is no way to determine whether there is an unknown, more sophisticated theory that can predict outcomes that currently seem random. The idea of "randomness" is therefore closely tied to the limits of what a given theory can predict.
  • #71
andrewkirk said:
It's entry-level Popper. All theories are temporary stop-gaps, awaiting falsification and replacement by a more comprehensive and accurate theory.

Popper was a philosopher. And, philosophy doesn't have the last word on physics. Maybe Popper was wrong? That said, I think "comprehensive" and "accurate" are key words. Whatever replaces QM would have QM at its core. The probabilities would remain.

There is a quotation in one of the Feynman Messenger lectures where he says:

"A philosopher once said: 'it is necessary for the very existence of science that the same conditions always produce the same results'."

To which Feynman added "well, they don't! And, yet the science goes on in spite of it."

If Popper is your last word on QM, then I'll take Feynman as my last word.
 
  • #72
PeroK said:
Popper was a philosopher. And, philosophy doesn't have the last word on physics. Maybe Popper was wrong?
Actually Popper made a foray into QM, unfortunately it turned out wrong. In spite of that I have great respect for Popper.
As an example of something cool, he showed that the usual shibboleth, that tests confirming a hypothesis lend greater credence to the hypothesis, is false.
 
  • #73
PeroK said:
philosophy doesn't have the last word on physics.
I agree. But philosophers are generally pretty good at philosophy, and your statements in this thread have been philosophy, not science: they consist of claims about the interpretation of quantum mechanics, which, being unfalsifiable, lie in the realm of metaphysics, a branch of philosophy. If you want to turn your metaphysical assertions into science, you'll have to offer a precise, testable definition of 'random', which, after all, is what this thread is about.
 
  • #74
andrewkirk said:
I agree. But philosophers are generally pretty good at philosophy, and your statements in this thread have been philosophy, not science: they consist of claims about the interpretation of quantum mechanics, which, being unfalsifiable, lie in the realm of metaphysics, a branch of philosophy. If you want to turn your metaphysical assertions into science, you'll have to offer a precise, testable definition of 'random', which, after all, is what this thread is about.

Talk about pot, kettle, black!
 
  • #75
PeroK said:
Talk about pot, kettle, black!
You are making the positive claim. I am merely observing there is no evidence to support your claim. Hence the onus of proof is entirely on you.
 
  • #76
andrewkirk said:
You are making the positive claim. I am merely saying there is no evidence to support your claim. Hence the onus of proof is entirely on you.

What, no evidence? There is no evidence for probabilities in QM?

Honestly, I don't see how quoting the Stern-Gerlach experiment is philosophy. Your blog on M-theory is, on the other hand, pure philosophy.

Nothing can be proved scientifically. And, one thing above all else that definitely can't be proved is that there is an omniscient M-Theory out there that predicts everything.

Anyway. Enough said.
 
  • #77
entropy1 said:
Is there a definition of "random(ness)"? Is it defined?
Definition of random: A sequence of heads and tails obtained by flipping a coin* from the top of the Eiffel Tower.

* For specificity we should use the standard platinum-iridium metric coin stored in the same chamber as the Platonic Kilogram.
The Eiffel Tower was chosen due to its proximity to the Platonic Coin.

N.B. Some say a Pt-Ir Stern/Gerlach apparatus would also suffice.
 
  • #78
PeroK said:
What, no evidence? There is no evidence for probabilities in QM?
What does 'evidence for probabilities' mean? 'Probabilities' is a word, not a proposition. Evidence is something that supports a proposition, not a word. You might as well ask what the evidence is for blue, or for three.

The reason you need to resort to such vague statements is that you cannot form a proposition for what you think you are trying to argue, because propositions must be couched only in defined terms, and you have no definition of 'random'. I have pointed this out more than once, and you have failed to supply one. Without such a definition, claims that the universe is random, or that QM says it is, are not even wrong.
 
  • #79
andrewkirk said:
The reason you need to resort to such vague statements is that you cannot form a proposition for what you think you are trying to argue, because propositions must be couched only in defined terms, and you have no definition of 'random'. I have pointed this out more than once, and you have failed to supply one. Without such a definition, claims that the universe is random, or that QM says it is, are not even wrong.
I believe I understand what you are saying, and you have made a reasonable point.

QM says measurements are random variables. As an example we can take the case of post #58 by @PeroK, which QM says will lead to an i.i.d. sequence of two-valued uniform random variables.

Now we go to the lab and use an S/G apparatus (which physicists believe is what is being modeled by QM) to generate a sequence of ups and downs, and ask if that sequence is random. Now you say: what does random mean? Good question.

Well, some may answer by saying the sequence passes a battery of randomness tests. But I personally find that inadequate for several reasons, one being that the bits of pi pass the tests yet are the output of a program.

My first step in answering your question is: would you accept the following?
Zafa Pi said:
Definition of random: A sequence of heads and tails obtained by flipping a coin* from the top of the Eiffel Tower.
 
  • #81
OCR said:
I was going to get there, but do you have any idea how much a platinum-iridium S/G apparatus would cost?
 
  • #82
Zafa Pi said:
...do you have any idea how much a platinum-iridium S/G apparatus would cost?
N.B. Only a random guess...
 
  • #83
Zafa Pi said:
My first step in answering your question is: would you accept the following?

Definition of random: A sequence of heads and tails obtained by flipping a coin* from the top of the Eiffel Tower.
It's a nice suggestion. The difficulty with it is that at most it could be an example of a single outcome from a stochastic ('random') process. Although in everyday life we use collections of examples to serve as pragmatic definitions, that is only ever an approximation, and the further removed we get from everyday objects and concepts, the more ineffective such approximations become. In order to define what it means for a physical process to be random, we need to specify a set of properties that are necessary and sufficient conditions for a process to be random. Setting the condition to be the above would then mean that any process that did not involve coins and the Eiffel Tower would not be random, be it a sequence of S/G measurements, Geiger counter readings, or anything else.

You might be interested in this Fourmilabs link, at which one can order and then download a sequence of measurements by a Geiger counter, which gives you the same sort of thing as a SG setup, but at much lower cost.

The question remains, though: what does Fourmilabs (the supplier of the numeric sequences) mean by 'random'? My preferred definition, as per the aforelinked essay, is:

'a process P is random with respect to a theory T iff, for any time t, there is no set of physical measurements that can be made prior to t that enables exact, certain prediction of outcomes of P at times later than t'

Under that (epistemological) definition, the radioactive process that generates the Fourmilabs sequence is random with respect to T = Quantum Mechanics, as is a SG process. And the sequences of 'random numbers' generated by computer programs like R, Matlab or Excel (which are called 'pseudorandom number generators' when people are speaking precisely) are not. But that tells us nothing about whether there may be some more powerful theory T2, with respect to which the Fourmilabs or SG processes are not random.
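For readers who prefer symbols, here is one way to sketch that condition (the notation is introduced only for this restatement: ##\mathcal{M}_{<t}## is the set of measurement results obtainable before time ##t##, and ##o_P(t')## is a proposition fixing the outcome of ##P## at time ##t'##):

##\text{Random}(P, T) \;\Longleftrightarrow\; \forall t \;\neg\exists M \subseteq \mathcal{M}_{<t} \;\exists t' > t :\; T \cup M \vdash o_P(t')##

That is, no collection of earlier measurements, combined with the theory ##T##, deduces any later outcome of ##P## with certainty.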
 
  • #84
andrewkirk said:
It's a nice suggestion. The difficulty with it is that at most it could be an example of a single outcome from a stochastic ('random') process. Although in everyday life we use collections of examples to serve as pragmatic definitions, that is only ever an approximation, and the further removed we get from everyday objects and concepts, the more ineffective such approximations become. In order to define what it means for a physical process to be random, we need to specify a set of properties that are necessary and sufficient conditions for a process to be random. Setting the condition to be the above would then mean that any process that did not involve coins and the Eiffel Tower would not be random, be it a sequence of S/G measurements, Geiger counter readings, or anything else.

You might be interested in this Fourmilabs link, at which one can order and then download a sequence of measurements by a Geiger counter, which gives you the same sort of thing as a SG setup, but at much lower cost.

The question remains, though: what does Fourmilabs (the supplier of the numeric sequences) mean by 'random'? My preferred definition, as per the aforelinked essay, is:

'a process P is random with respect to a theory T iff, for any time t, there is no set of physical measurements that can be made prior to t that enables exact, certain prediction of outcomes of P at times later than t'

Under that (epistemological) definition, the radioactive process that generates the Fourmilabs sequence is random with respect to T = Quantum Mechanics, as is a SG process. And the sequences of 'random numbers' generated by computer programs like R, Matlab or Excel (which are called 'pseudorandom number generators' when people are speaking precisely) are not. But that tells us nothing about whether there may be some more powerful theory T2, with respect to which the Fourmilabs or SG processes are not random.

That is a workable definition. Still, how do you define a theory? Is it a collection of predictive algorithms, general rules of inference, etc.? Is it fixed, or does it allow add-ons? Does it allow for infinitely many inference rules? Does it have a specific measure for detecting fit between what the theory predicts and what is observed?
 
  • #85
WWGD said:
That is a workable definition. Still, how do you define a theory? Is it a collection of predictive algorithms, general rules of inference, etc.? Is it fixed, or does it allow add-ons? Does it allow for infinitely many inference rules? Does it have a specific measure for detecting fit between what the theory predicts and what is observed?
Physics, as opposed to math, is loaded with undefined terms. You mention 'theory', and there are also 'measurement' and 'random'. The OP was concerned about random, so sticking with that, it seems to several of us (@PeroK, @andrewkirk, and me) that random is contextual.

I suggest that a random sequence is one generated by certain specified physical processes, for example coin flipping. More generally, lab measurements that correspond to QM measurements that are random variables, e.g. measuring electron spin at 90° on electrons prepared spin-up at 0°.
Both of the above examples have been found to satisfy randomness checks for two-valued uniform sequences, but they are not generated by any algorithm.

How about:
Random := the value(s) produced by an objective* physical* process that, when repeated, yields a sequence that passes randomness tests.
* objective means repeatable by others. * physical means non-algorithmic, like coin flips.

Now note this is not a math definition. Is it any more vague than defining length?
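For concreteness, the sort of "randomness tests" referred to above can be as simple as the following sketch in Python (a frequency test and a runs test in the style of the NIST suite; the choice of tests and of a pass threshold is illustrative only):

import math
import random

def monobit_test(bits):
    # Frequency test: for a fair two-valued sequence the normalised excess of
    # ones over zeros is approximately standard normal; return a p-value.
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

def runs_test(bits):
    # Runs test: compare the number of maximal blocks of identical symbols
    # with the number expected for i.i.d. fair bits; return a p-value.
    n = len(bits)
    pi = sum(bits) / n
    if pi in (0.0, 1.0):
        return 0.0                      # constant sequence: fails outright
    runs = 1 + sum(bits[i] != bits[i - 1] for i in range(1, n))
    num = abs(runs - 2 * n * pi * (1 - pi))
    den = 2 * math.sqrt(2 * n) * pi * (1 - pi)
    return math.erfc(num / den)

flips = [random.getrandbits(1) for _ in range(10_000)]   # stand-in for coin flips or S/G outcomes
constant = [1] * 10_000                                   # "the same digit over and over"

for name, seq in (("simulated flips", flips), ("constant", constant)):
    print(name, monobit_test(seq), runs_test(seq))        # p-values near 0 mean "fails the test"

A sequence "passes" if its p-values stay above a chosen significance level, say 0.01.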
 
  • #86
Zafa Pi said:
Physics, as opposed to math, is loaded with undefined terms. You mention 'theory', and there are also 'measurement' and 'random'.

I would see things as follows. I'll use "circle" as an analogy:

Circle: mathematical concept with precise mathematical definition. In general, a perfect circle cannot be proved to exist in nature, even though a mathematical model predicts it.

Random variable: mathematical concept with precise mathematical definition. In general, a random variable cannot be proved to exist in nature, even if a theory predicts it.

However, to say that a particle executes a circular orbit is not absurd. Neither is it absurd to say that a measurable in QM is a random variable. You can never prove or measure either precisely, but that's not the point.

The question vis-a-vis the Stern-Gerlach experiment is whether QM will ever be replaced by a theory that will predict in advance the direction each electron will take. There are three possible answers:

Definitely not (QM may be replaced but not by something that gets rid of the probabilities)
Possibly (maybe the universe is deterministic after all and maybe it isn't)
Definitely (the universe is definitely fully deterministic and QM must be replaced at some stage by a theory that gets rid of the probabilities)

The answer of "definitely", which is assumed by some as an a priori property of a fundamentally deterministic universe, is not correct. There is no way to be certain that the universe is fundamentally deterministic. There is no way to be certain that the probabilities apparently observed in the Stern-Gerlach experiment and predicted by QM will ever go away.
 
  • #87
WWGD said:
That is a workable definition. Still, how do you define a theory? Is it a collection of predictive algorithms, general rules of inference, etc.? Is it fixed, or does it allow add-ons? Does it allow for infinitely many inference rules? Does it have a specific measure for detecting fit between what the theory predicts and what is observed?
Good questions. After some reflection, my suggestions are:

Let's adopt the formal definition of 'theory' that is used in First Order Predicate Logic (FOPL), which is that a theory is a set of propositions, where a proposition is a statement that is true or false.

We then define a 'Physical Theory' to be a set of propositions T that is the closure, under the operation of deduction, of a set of propositions G, such that every proposition in G is of the form "probability of B, given A, is p" (call a proposition of this type a predictive proposition), where A and B are both 'constructions' of physical observations, where the set of 'constructions' is the closure of the set of observations under the operations of conjunction (AND) and disjunction (OR).

The B propositions are the predicted physical observations and the A propositions are those on which the predictions are based.

A 'physical observation' is a proposition of the form

'result of measurement M <operand> x'​

where <operand> is <, = or > and x is a real number.

We could say T is 'a deterministic theory' if all probabilities p in propositions in G are either 0 or 1.
Is it fixed or does it allow add-ons?
T is determined by G. If we add or subtract anything from G, then the modified generating set G' generates (via closure over deduction) a theory T' that is different from T, unless the added or removed propositions were redundant.
Does it allow for infinitely many inference rules?
Inference rules are what generate G. Suppose we have a set R of inference rules, which is just a set of propositions, that generates a set G of predictive propositions via closure under deduction, subject to the requirement that any deduced propositions be predictive. It is the set R of inference rules that we usually think of as a physical theory, as in the postulates of QM or of GR, but here we reserve the term 'theory' for T, the set of all propositions deducible from R, for consistency with the usual terminology of FOPL.

With that meaning of 'inference rules', there is no compelling reason to require R to be finite. If we want the theory to be comprehensible by finite beings like humans, we would have to require it to be finite, but I don't feel the need to apply that restriction, and I don't think it materially influences the issues under discussion here.

Note however, that if we don't require R to be finite, the M-Law, which is the infinite set of observations of every particle in the universe ever, qualifies as a theory.
Does it have a specific measure for detecting fit between what the theory predicts and what is observed?
The measures could be as follows:

- if the theory contains a proposition of the form ##P(B|A)=1## (respectively ##P(B|A)=0##) and ##A## is observed to occur and ##B## is observed not to occur (respectively, ##B## is observed to occur), then the theory has been falsified and must be discarded.

- if the theory contains a proposition of the form ##P(B|A)=p## and ##A## is observed to occur and ##B## is observed not to occur, then the 'degree of doubt' in the theory raised by the observations is ##p##.
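A minimal Python sketch of that bookkeeping, assuming a toy encoding in which each predictive proposition in G is stored as a triple (A, B, p) and observations are just sets of labels (the representation and the names are illustrative only, not part of the post):

def assess(generating_set, observed_A, observed_B):
    # Apply the two measures above: report falsification for certain
    # predictions that fail, and collect 'degrees of doubt' otherwise.
    doubts = []
    for A, B, p in generating_set:
        if A not in observed_A:
            continue                      # the premise of the prediction never occurred
        b_occurred = B in observed_B
        if p == 1 and not b_occurred:
            return "falsified", []        # a certain prediction failed
        if p == 0 and b_occurred:
            return "falsified", []        # a certainly-excluded outcome occurred
        if not b_occurred:
            doubts.append(p)              # doubt raised by the failed probabilistic prediction
    return "not falsified", doubts

# Example: one deterministic and one probabilistic proposition.
G = [("A1", "B1", 1.0), ("A2", "B2", 0.7)]
print(assess(G, observed_A={"A1", "A2"}, observed_B={"B1"}))
# -> ('not falsified', [0.7]): B2 did not occur, so a doubt of 0.7 is raised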
 
  • #88
In software engineering, a random number generator "creates" a sequence of numbers that is pseudo-random. But it is predictable if you know the seed number.
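That predictability is easy to demonstrate; here is a small sketch using Python's standard random module (a Mersenne Twister, which is deliberately not a cryptographic generator):

import random
import secrets

# Two generators started from the same seed produce identical "random" output:
# the whole sequence is determined by the seed plus the algorithm.
a = random.Random(42)
b = random.Random(42)

print([a.randint(0, 9) for _ in range(10)])
print([b.randint(0, 9) for _ in range(10)])   # identical to the line above

# Security-sensitive code therefore draws from an OS entropy source instead,
# which is not reproducible from any user-visible seed:
print(secrets.randbelow(10))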

In the quest for "secure randomness" some engineers used the idea that photon "noise" (another word for random data) from a digital video camera can be used to help create more secure encryption methods... but I have come to believe that randomness simply does not exist. It is tantamount to saying "god did it" and generally points to a lack of knowledge of a pattern. If a pattern is too complicated for you to see as order, you will call it random. With the photon/camera trick, the only thing random about it is that it is highly unlikely that you will know all of the interacting cycles of the dynamic system of light and matter being captured in the camera's view in order to properly predict when a photon will hit the camera's lens and cause the pixel bucket to pass a specific threshold for that position.

So to put it simply, randomness is just a reflection of our own ignorance of any system. In the case of the camera trick, it's simply our own ignorance of the dynamical system of light reflection and refraction with matter. There will be no true definition of randomness until we have fully defined physics and somehow have the ability to know the position, heading, charge, and velocity of every electron in the universe.
 
  • #89
Curiose said:
So to put it simply, randomness is just a reflection of our own ignorance of any system.

I'm a bit puzzled by what you say. If I gave you, say, the weekly football results, just a sequence of the goals scored in each game, home and away, in alphabetical order, are you saying that eventually you can (or could?) predict the scores?

Or, if I tossed a coin repeatedly and gave you that data, then you could eventually predict each and every toss? I don't see how you could do that.
 
  • #90
PeroK said:
I'm a bit puzzled by what you say. If I gave you, say, the weekly football results, just a sequence of the goals scored in each game, home and away, in alphabetical order, are you saying that eventually you can (or could?) predict the scores?

Or, if I tossed a coin repeatedly and gave you that data, then you could eventually predict each and every toss? I don't see how you could do that.

No, because that is not enough data about the system which is being analyzed and predicted.

What I'm saying is that if I knew the state of every air particle, the exact pressure/position of the thumb, how many atoms the thumb and the coin had in them, the exact height of the point of release, the exact distance to the ground, the exact layout of the ground or surface onto which the coin was falling, maybe down to the micron, and the environmental conditions directly surrounding and interacting with the coin as it flips through the air, and probably a number of other physical factors which it is currently impossible to measure, I could predict which side the coin would end up on when it finally came to rest. But pretty much, since I don't know those things and have no interest in developing some sort of analysis device which could, it is, for me, impossible to know, thus I would say it is "random".

The underlying issue is that I cannot know these things with the very limited sensory perception I have. You imagine watching a coin toss with your eyes, but eyes are not a very accurate measuring device. It would be possible IF I knew every factor affecting the coin, and the toss. But that would basically take me becoming some sort of coin-toss analytical genius, so it's not plausible to know, thus, I can just sum up the entire system as "random" and get on with the rest of life.

Every dynamical system has rules and a myriad of components plus their interactions over time (if talking about a "moving" dynamical system). It would only be possible to say, predict whether a kicker would make a kick or not, with 100% accuracy if you could literally measure every stride and every angle of every part of the holarchic dynamical system which is the kicker's body, leg, ball, field, etc.

It's like the difference between having a set of statistics about something, and literally being an omnipotent observer of that system in real time, having much more detailed statistics about factors which may not seem to affect the outcome but actually do. This is why I say that it is not randomness that exists, but rather a lack of observational knowledge.

You can't predict which way the wind will be blowing at a given point if you think of the wind as "wind speed and a 2D angular direction", no matter how many times you sample the wind speed at that given point. The wind is a dynamical system that is the size of the entire earth.
 
  • #91
Zafa Pi said:
Physics, as opposed to math, is loaded with undefined terms. You mention 'theory', and there are also 'measurement' and 'random'. The OP was concerned about random, so sticking with that, it seems to several of us (@PeroK, @andrewkirk, and me) that random is contextual.

I suggest that a random sequence is one generated by certain specified physical processes, for example coin flipping. More generally, lab measurements that correspond to QM measurements that are random variables, e.g. measuring electron spin at 90° on electrons prepared spin-up at 0°.
Both of the above examples have been found to satisfy randomness checks for two-valued uniform sequences, but they are not generated by any algorithm.

How about:
Random := the value(s) produced by an objective* physical* process that, when repeated, yields a sequence that passes randomness tests.
* objective means repeatable by others. * physical means non-algorithmic, like coin flips.

Now note this is not a math definition. Is it any more vague than defining length?
I understand, but it seems self-referential: defining randomness in terms of passing randomness tests. Maybe you can say that the outcome can only be determined probabilistically, over the long run. No method exists* which can predict with 100% accuracy the outcome of the experiment.

* Or perhaps: no such method can exist.
 
  • #92
Curiose said:
It's like the difference between having a set of statistics about something, and literally being an omnipotent observer of that system in real time, having much more detailed statistics about factors which may not seem to affect the outcome but actually do. This is why I say that it is not randomness that exists, but rather a lack of observational knowledge.

But, what if there is a physical, theoretical limit to observational knowledge? This could come from a) there being a limit on how accurate a measurement of position could possibly be; and/or b) a limit to the knowledge of two observables - the HUP (Heisenberg Uncertainty Principle) essentially says that the more accurately you know the position of a particle, the less accurately you know its momentum - in any case, you can't know both to an arbitrary precision; and/or c) a piece of data that is intrinsically random, such as the spin of an electron, which may be theoretically unknowable.
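For reference, the quantitative form of the HUP referred to in (b) is ##\Delta x \,\Delta p \ge \hbar/2##: the product of the position and momentum uncertainties in any state is bounded below, so both cannot be made arbitrarily small at once.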

This ties into my post above. There is no certainty that Quantum Mechanics will one day be replaced by a "deterministic" theory, where the HUP and the intrinsic probabilities disappear. And, unless that happens, there are clear limits to observational knowledge.
 
  • #93
PeroK said:
There is no way to be certain that the probabilities apparently observed in the Stern-Gerlach experiment and predicted by QM will ever go away.

"Apparently observed" is a wise choice of words. Technically, we don't observe specific probabilities. We observe actual frequencies of occurence. Can we observe the distinction between "there is some probability involved" versus "the result is always the same" for a given situation? - i.e. distinguish between certainty versus a probability very near 1?

I find it interesting to consider whether the concept of (physical) probability must always involve a notion of time - at least "time" in the sense of something that has a "before" and "after" aspect. For example, we think of a probability that a fair coin "will" land heads in a scenario where the coin does land. So there is a "collapse" of the probability to a definite outcome even in this non-quantum-mechanical model.

One can try to circumvent the time aspect by thinking about "ensembles". For example, we can think about an "ensemble" of fair coin tosses that were done in the past. However, to extract a probability model for a coin toss experiment from that point of view, we have to introduce the idea that an experiment is "chosen at random" from the ensemble. So implicitly we have the idea of the probability of heads before the experiment is chosen, and then a definite result after the experiment is chosen.
 
  • #94
WWGD said:
I understand, but it seems self-referential: defining randomness in terms of passing randomness tests.
I didn't do that. The randomness tests were included to ensure the physical process wasn't do something ridiculous like turning out the same digit over and over.
For example, if one looks at the displayed temperature (F) at Times Square at noon on June 1st, I would say the leading digit is not uniform (over the 10 digits), but I would not be surprised if the 2nd digit was (with respect to randomness tests).
 
  • #95
PeroK said:
The answer of "definitely", which is assumed by some as an a priori property of a fundamentally deterministic universe is not correct. There is no way to be certain that the universe is fundamentally deterministic. There is no way to be certain that the probabilities apparently observed in the Stern-Gerlach experiment and predicted by QM will ever go away.
You have said this before and I agreed. However, you didn't respond to my statement
Zafa Pi said:
Also the evolution of the electron state through the S/G is governed by a deterministic unitary process.
 
  • #96
Curiose said:
What I'm saying is that if I knew the state of every air particle, the exact pressure/position of the thumb, how many atoms the thumb and the coin had in them, the exact height of the point of release, the exact distance to the ground, the exact layout of the ground or surface onto which the coin was falling, maybe down to the micron, and the environmental conditions directly surrounding and interacting with the coin as it flips through the air, and probably a number of other physical factors which it is currently impossible to measure, I could predict which side the coin would end up on when it finally came to rest.
In spite of many physicists saying this, I find it nonsense. It is untestable, not even wrong, philosophical fluff. I find it on par with: if I knew enough I could predict anything.
PeroK's statement is germane,
PeroK said:
But, what if there is a physical, theoretical limit to observational knowledge?
You then go on to say,
Curiose said:
But pretty much, since I don't know those things and have no interest in developing some sort of analysis device which could, it is, for me, impossible to know, thus I would say it is "random".
I like the last 6 words. See post #85.
 
  • #97
Zafa Pi said:
I didn't do that. The randomness tests were included to ensure the physical process wasn't do something ridiculous like turning out the same digit over and over.
For example, if one looks at the displayed temperature (F) at Times Square at noon on June 1st, I would say the leading digit is not uniform (over the 10 digits), but I would not be surprised if the 2nd digit was (with respect to randomness tests).
I see, so you first define the randomness test and then use that. Ok, sorry I misread you.
 
  • #98
A truly random phenomenon must have some probability of failing a randomness test.
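For instance, a fair coin produces 20 consecutive heads with probability ##2^{-20} \approx 10^{-6}##, so any test that flags such runs will reject genuinely random output roughly once per million 20-flip blocks; more generally, a test run at significance level ##\alpha## rejects truly random input with probability ##\alpha## by construction.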
 
  • #99
Stephen Tashi said:
A truly random phenomenon must have some probability of failing a randomness test.
Although uncertain, the possibility exists; Mr Pi can appreciate simplicity to a greater degree, using his contextually more constrained version of randomness...

Defined as... Deterministic Unridiculous Randomness (DUR).

Zafa Pi said:
The randomness tests were included to ensure the physical process wasn't do[sic] [recte to] something ridiculous like turning out the same digit over and over.
 
  • #100
Stephen Tashi said:
A truly random phenomenon must have some probability of failing a randomness test.
I would use possibility rather than probability.
The definition I gave in post #85 has that built in. It's the first time I've tried to give a definition; what deficiencies do you find with it?
 
  • #101
OCR said:
Defined as... Deterministic Unridiculous Randomness (DUR)
Deterministic? Where did that come from?
 
  • #102
Zafa Pi said:
The definition I gave in post #85 has that built in. It's the first time I've tried to give a definition; what deficiencies do you find with it?

I suggest that a random sequence is one generated by certain specified physical processes, for example coin flipping. More generally, lab measurements that correspond to QM measurements that are random variables, e.g. measuring electron spin at 90° on electrons prepared spin-up at 0°.

As you said, you haven't given a mathematical definition. Until you explain how to determine which physical processes are among the "certain specified physical processes", we don't have a specific physical definition.

A definition of "randomness" that only classifies a process as "random" or "not random" isn't very useful. It would lump tossing a fair die in the same category as tossing a loaded die.
 
  • #103
Zafa Pi said:
Deterministic? Where did that come from?
It just "locks in"... Unridiculous.
So you can have randomness, with absolutely no foolishness.

Is that speculation?
 
  • #105
Zafa Pi said:
Random := the value(s) produced by an objective* physical* process that, when repeated, yields a sequence that passes randomness tests. * objective means repeatable by others. * physical means non-algorithmic, like coin flips.

It seems to me that @andrewkirk gave the most scientifically relevant answer: a process is random or not relative to a theory for predicting it.

A randomness test can at best check whether something is predictable by simple algorithmic means.

Of course, what this means is that some aspects of a sequence might be random, while other aspects are not. For example, there might be a sequence

12121212124121212121212121251212121212121212...

which is mostly an alternation between 1 and 2. So the sequence will fail a "randomness test", which will report that it's not random. But the departures of the sequence from predictability (the occasional 4 and 5 above) may themselves be random.

The other way around would apply as well. There could be a sequence that looks completely random, but is actually completely predictable. The digits of pi are an example.

If you're interested in how hard it is to predict a sequence, I think algorithmic information theory is more appropriate. Given a sequence of digits, you can characterize how random it is by how much it can be compressed algorithmically (its Kolmogorov complexity).
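Kolmogorov complexity itself isn't computable, but an off-the-shelf compressor gives a crude, practical proxy for the idea; a sketch in Python (zlib is only a rough stand-in for "the shortest program"):

import random
import zlib

def compression_ratio(data: bytes) -> float:
    # Crude compressibility proxy: compressed size divided by original size.
    return len(zlib.compress(data, 9)) / len(data)

rng = random.Random(0)
random_bytes = bytes(rng.getrandbits(8) for _ in range(100_000))
repetitive   = b"12" * 50_000
mostly_12    = bytes(rng.choice(b"45") if rng.random() < 0.01 else b"12"[i % 2]
                     for i in range(100_000))   # roughly the "1212...4...1212...5..." style sequence

print(compression_ratio(random_bytes))   # close to 1.0: essentially incompressible
print(compression_ratio(repetitive))     # tiny: highly compressible, very "non-random"
print(compression_ratio(mostly_12))      # in between: a regular pattern plus random departures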
 
