A recent question about interpretations of probability nicely clarified the role of the Kolmogorov axioms:
Dale said:
As long as any given interpretation of probability is consistent with the Kolmogorov axioms then they will be equivalent in the sense that you can use the same proofs and theorems.
[... some excursions into QM, negative probabilities, and quasiprobability distributions ...]

kered rettop said:
I should have known better than to ask on a maths forum. The question arose because a certain interpretation of quantum mechanics has been criticised for the way it uses probability. I haven't the faintest idea whether the model is consistent with the Kolmogorov axioms or even how to go about finding out. Thanks for trying though.
kered rettop said:
So, would it be correct to say that axiomatic probability theory cannot be invoked to distinguish between different interpretations (except in the sense of questioning whether a particular interpretation actually does talk about probabilities at all)?

Conclusion: the Kolmogorov axioms formalize the concept of probability. They achieve this by completely separating it from concepts like randomness, uncertainty, propensity, subjective 'degree of belief', ...
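For reference, here is a standard statement of the axioms (plain textbook material, not part of the quoted thread). A probability measure $P$ on a collection of events satisfies
1. $P(E) \ge 0$ for every event $E$,
2. $P(\Omega) = 1$ for the sure event $\Omega$,
3. $P\left(\bigcup_{i} E_i\right) = \sum_{i} P(E_i)$ for every countable collection of pairwise disjoint events $E_i$.
Nothing in these axioms mentions randomness, chance, or belief; they only constrain how the numbers $P(E)$ fit together.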
But randomness remains an important concept, even in mathematics (and computer science) itself. Take for example the open challenge to prove or disprove whether randomized algorithms are more powerful than deterministic algorithms. It is a typical example of the difficulties randomness presents for current proof techniques. And because randomized algorithms are sometimes also (wrongly?) called probabilistic algorithms, it is also a typical example of the "relevance and challenge" of distinguishing between randomness and probability.
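To illustrate what is meant by a randomized algorithm here, a minimal Python sketch of Freivalds' algorithm (a standard textbook example, chosen by me as an illustration, not taken from the discussion above): it tests whether $C = AB$ using only matrix-vector products per trial, with a small one-sided error probability, whereas the obvious deterministic check recomputes the full product. Whether such randomized advantages can always be removed ("derandomized") is exactly the kind of open question alluded to above.

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Randomized test of whether A @ B == C for n x n integer matrices.

    Each trial costs O(n^2) (three matrix-vector products) instead of the
    O(n^3) of recomputing A @ B.  If A @ B != C, a single trial detects it
    with probability at least 1/2, so the error probability after `trials`
    independent trials is at most 2**(-trials).
    """
    n = len(A)
    for _ in range(trials):
        # Pick a random 0/1 vector r.
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute B r, then A (B r), and C r.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False        # definitely A @ B != C
    return True                 # A @ B == C with high probability

# Example: a correct product passes, a corrupted one is (very likely) caught.
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
C = [[2, 3], [4, 7]]            # C = A @ B
print(freivalds_check(A, B, C))  # True
C[1][1] = 8                      # corrupt one entry
print(freivalds_check(A, B, C))  # almost surely False
```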
There are also examples where randomness was successfully conquered by current proof techniques:
Wikipedia said:
the Rado graph, Erdős–Rényi graph, or random graph is a countably infinite graph that can be constructed (with probability one) by choosing independently at random for each pair of its vertices whether to connect the vertices by an edge.
Wikipedia said:
a Chaitin constant (Chaitin omega number) or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt.

Both examples clearly manage to go beyond probability into actual randomness. Are they of the same type, or do they use different concepts of randomness? What are other examples where randomness was successfully conquered by current proof techniques? Are they of the same type(s) as my two examples?
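To make the quoted Rado-graph construction more concrete, here is a small Python sketch (my own illustration, standard library only): it samples a finite analogue of the construction by flipping an independent coin for each pair of vertices, and then checks, for one pair of finite sets, the "extension property" that characterizes the Rado graph and that the infinite random construction satisfies with probability one.

```python
import random
from itertools import combinations

def sample_random_graph(n, p=0.5, seed=None):
    """Finite analogue of the quoted construction: for each pair of
    vertices, flip an independent coin to decide whether to add an edge."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def has_extension_witness(adj, U, V):
    """Extension property for one pair (U, V): is there a vertex outside
    U and V that is adjacent to every vertex in U and to none in V?
    In the countably infinite random graph such a witness exists with
    probability one for every pair of disjoint finite sets, which pins
    the graph down up to isomorphism (it is the Rado graph)."""
    for w in adj:
        if w in U or w in V:
            continue
        if U <= adj[w] and not (V & adj[w]):
            return True
    return False

# On a reasonably large finite sample, witnesses for small U, V are easy to find.
G = sample_random_graph(200, seed=0)
print(has_extension_witness(G, U={0, 1, 2}, V={3, 4}))   # almost always True
```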