Is quantum mechanics random in nature?

In summary, the concept of randomness in quantum mechanics has long been debated among scientists, with some arguing that quantum outcomes are purely random while others propose the existence of hidden variables. The mathematical axioms of quantum mechanics make it random, but there may be a deeper underlying theory that explains its behavior. Without a specific candidate theory, the discussion of randomness in quantum mechanics remains speculative.
  • #106
Quantum theory is not deterministic. Some observables may be determined by preparing the system in a corresponding state.

You should know that a state vector that is a superposition in one basis can be an eigenstate in another. In the case of momentum and position, the relationship between the bases is a Fourier transform.
 
  • Like
Likes bhobba
  • #107
vanhees71 said:
Quantum theory is not deterministic. Some observables may be determined by preparing the system in a corresponding state. This is possible only for true eigenvalues of the self-adjoint operator, i.e., eigenvalues for which normalizable eigenvectors exist; these lie in the discrete part of the spectrum.
mikeyork said:
You should know that a state vector that is a superposition in one basis can be an eigenstate in another
It is a safe bet (if anything, an understatement) that vanhees knows this. He's stressing the "discrete part of the spectrum" because applying the same principle to the continuous spectrum, as in
In the case of momentum and position the relationship between bases is a Fourier transform.
is a bit trickier because the "eigenstates" are not physically realizable. First-year QM texts oversimplify the mathematical subtleties here, but if you google for "rigged Hilbert space" you'll get more of the story.
 
  • Like
Likes vanhees71 and bhobba
  • #108

Quantum theory is not deterministic. Some observables may be determined by preparing the system in a corresponding state. This is possible only for true eigenvalues of the self-adjoint operator, i.e., eigenvalues for which normalizable eigenvectors exist; these lie in the discrete part of the spectrum.

You should know that a state vector that is a superposition in one basis can be an eigenstate in another
It is a safe bet (if anything, an understatement) that vanhees knows this. He's stressing the "discrete part of the spectrum" because applying the same principle to the continuous spectrum, as in
In the case of momentum and position the relationship between bases is a Fourier transform.
is a bit trickier because the "eigenstates" are not physically realizable.

OK, I get that. But when you write "physically realizable", are you not confounding an underlying fundamental reality with observability? That is, the fundamental reality may be a definite eigenstate, but the information of an "observer" (either in preparing or detecting a state) is realizable only to a specific precision.
 
  • #109
Delta Kilo said:
it is surprising that some measurements are less random than they should have been according to the classical view.
Could you give a simple example of what you are talking about here?
 
  • #110
mikeyork said:
Ok. I get that. But when you write "physically realizable" are you not confounding an underlying fundamental reality with observability? That is,the fundamental reality may be a definite eigenstate, but the information of an "observer" (either in preparing or detecting a state) is realizable only to a specific precision.
Within quantum theory, generalized eigenstates, which are not in the Hilbert space (but in the dual of the nuclear space, where the unbounded self-adjoint operators are defined), do not represent physical states. This is immediately clear from the usual heuristic point of view as well, since these states are not normalizable. Take the momentum eigenstates. In the position representation they are the plane waves,
$$u_{\vec{p}}(\vec{x})=\frac{1}{(2 \pi)^{3/2}} \exp(\mathrm{i} \vec{x} \cdot \vec{p}).$$
They are obviously not normalizable, since the integral over their modulus squared is infinite. They are instead "normalized to a ##\delta## distribution":
$$\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} u_{\vec{p}'}^*(\vec{x}) u_{\vec{p}}(\vec{x})=\delta^{(3)}(\vec{p}-\vec{p}'),$$
which clearly underlines the fact that these generalized eigenfunctions are to be interpreted as distributions (in the sense of generalized functions) rather than functions.
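
To see the non-normalizability concretely, here is a minimal numerical sketch (my own illustration, using a 1D analogue of the 3D formulas above): restricted to a finite box, the "norm" of a plane wave grows without bound as the box grows, while the overlap of two different momenta stays bounded, the finite-volume shadow of the ##\delta## normalization.

```python
import numpy as np

# 1D analogue of the plane waves above: u_p(x) = exp(i p x) / sqrt(2 pi), restricted
# to a box [-L/2, L/2). The diagonal overlap grows linearly with L (no finite norm),
# while the off-diagonal overlap stays bounded: the finite-volume shadow of the
# "normalization to a delta distribution".
def overlap(p1, p2, L, n=200_000):
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = L / n
    u1 = np.exp(1j * p1 * x) / np.sqrt(2 * np.pi)
    u2 = np.exp(1j * p2 * x) / np.sqrt(2 * np.pi)
    return np.sum(np.conj(u1) * u2) * dx

for L in (10, 100, 1000):
    norm = overlap(1.0, 1.0, L).real       # grows like L / (2 pi): not normalizable
    cross = abs(overlap(1.0, 1.3, L))      # different momenta: bounded, oscillating
    print(f"L={L:5d}  <u_p|u_p>={norm:8.2f}  |<u_p'|u_p>|={cross:5.3f}")
```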
 
  • #111
If my understanding of the issue is up to date, this basically comes down to proving a negative. If you could find a (mathematically) deterministic framework that predicted QM experiments, you could rule out randomness (for which I'm assuming non-deterministic is the operating definition in this context).

Hidden variables were one attempt at demonstrating determinism (and seem to have failed); I don't know whether that rules out all possibility of determinism; my intuition is to doubt it does.
 
  • #112
Things that are more or less probable, such as the decay of a fissile atom around its measured half-life, are not the same as 'random';
there is in fact a non-randomness that makes the half-life what it is measured to be.
If events were completely random then no meaningful measurement of anything would be possible.
 
  • #113
I prefer the term "probabilistic" to "random".

The probabilities are well defined. The outcome of a single event is not.
 
  • Like
Likes bhobba
  • #115
Pythagorean said:
If you could find a (mathematically) deterministic framework that predicted QM experiments, you could rule out randomness (for which I'm assuming non-determinisic is the operating definition in this context).

It's more subtle than that.

Bohmian Mechanics (BM) is deterministic. Randomness comes from lack of knowledge - not because it's inherently random.

One of the big advantages of studying interpretations is you learn exactly what the formalism says which often is not what is at first thought.

Again, it must be emphasized that no interpretation is better than any other. This does not mean I am a proponent of BM (I am not - my view is pretty much the same as Vanhees' - but that means 4/5ths of bugger all, ie precisely nothing); it simply comes down to what appeals to my sense of 'beauty'.

Thanks
Bill
 
  • Like
Likes Pythagorean
  • #116
They are obviously not normalizable, since the integral over their modulus squared is infinite. They are instead "normalized to a ##\delta## distribution",
which clearly underlines the fact that these generalized eigenfunctions are to be interpreted as distributions (in the sense of generalized functions) rather than functions.

A limiting distribution with a unique value for which it is non-zero and a vanishing standard deviation. This would not normally be considered "random", although I see your mathematical point. Why do you consider it important to a physicist (rather than a mathematician)?

Also, my original point regarding superpositions being eigenstates in another basis still stands for discrete variables even if you consider the momentum/position example to be a bad one.
 
  • #117
mikeyork said:
Why do you consider it important to a physicist (rather than a mathematician)?

I am pretty sure Vanhees doesn't.

Rigged Hilbert Spaces are just as important to applied mathematicians as to physicists (without delving into the difference - that requires another thread), eg:
http://society.math.ntu.edu.tw/~journal/tjm/V7N4/0312_2.pdf

And that is just applied math - in pure math it has involved some of the greatest mathematicians of all time, eg Grothendieck.

Thanks
Bill
 
  • #120
Pythagorean said:
If my understanding of the issue is up to date, this basically comes down to proving a negative. If you could find a (mathematically) deterministic framework that predicted QM experiments, you could rule out randomness (for which I'm assuming non-determinisic is the operating definition in this context).

The hidden variable was one attempt at demonstrating determinism (and seems to have failed); I don't know if that rules out all possibility of determinism or not, my intuition is to doubt it does.
Of course, it doesn't rule out all deterministic models, only those that are local in the interactions (in the sense of relativistic QFT). Since there is today neither a consistent non-local theory of relativistic QT nor a convincing no-go theorem against one, it's totally open whether one day one might find a non-local deterministic theory in accordance with all the observations described today by QT.
 
  • Like
Likes Pythagorean and bhobba
  • #121
I repeat: "random" is not mathematically defined. People are bandying the term about in different ways. Is the outcome of a coin flip in a wind tunnel random?
If I hand you an extremely long sequence of 0s and 1s, how do you tell whether it is random? Is Champernowne's sequence random?
Measurements in QM are random variables (google it). The variance of a measurement is 0 iff the state being measured is an eigenvector of the measurement operator.
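
A minimal sketch of that last statement (my own illustration; the state and operator are just example choices): the Born rule turns a state and a self-adjoint operator into a random variable, and its variance is 0 exactly when the state is an eigenvector.

```python
import numpy as np

# Born-rule random variable for a 2x2 observable, here the Pauli operator Z
# with eigenvalues +1 and -1.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def measurement_stats(state, obs):
    """Mean and variance of the random variable 'measure obs in state'."""
    evals, evecs = np.linalg.eigh(obs)
    probs = np.abs(evecs.conj().T @ state) ** 2   # Born rule
    mean = np.sum(probs * evals)
    var = np.sum(probs * evals**2) - mean**2
    return mean, var

eigenstate = np.array([1.0, 0.0])             # |0>, an eigenvector of Z
superpos = np.array([1.0, 1.0]) / np.sqrt(2)  # (|0> + |1>)/sqrt(2)

print(measurement_stats(eigenstate, Z))       # (1.0, 0.0): zero variance, determined
print(measurement_stats(superpos, Z))         # (~0.0, ~1.0): genuinely dispersive
```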
 
  • #122
Well, not necessarily. Take the energy of an excited (nonrelativistically approximated) hydrogen atom, which is ##n^2##-fold degenerate. For the general energy-dispersion-free state you then have
$$\hat{\rho}_n=\sum_{l,m} P_{lm} |nlm \rangle \langle nlm|.$$
For such a state the energy of the atom is determined to be ##E_n##, and the energy's standard deviation is ##0##. Note that ##E_n## is a true eigenvalue of the Hamiltonian and thus it can be determined, but the state is not necessarily a pure state represented by an eigenstate.
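
A quick numerical check of this point (my own toy sketch, assuming a 4-fold degenerate level standing in for the hydrogen ##n=2## shell, not actual hydrogen):

```python
import numpy as np

# Toy check: a mixed state supported entirely on a degenerate eigenspace of H
# is energy-dispersion-free without being a pure eigenstate.
E_n = -3.4                                   # assumed 4-fold degenerate level (like n=2)
H = np.diag([E_n, E_n, E_n, E_n, -13.6])     # one extra level so the degeneracy matters

P = np.array([0.4, 0.3, 0.2, 0.1, 0.0])      # arbitrary P_lm over the degenerate subspace
rho = np.diag(P)                             # rho_n = sum_lm P_lm |nlm><nlm|, energy basis

mean = np.trace(rho @ H)
var = np.trace(rho @ H @ H) - mean**2
print(mean, var)                             # -3.4 and ~0 (floating point): dispersion-free
print(np.trace(rho @ rho))                   # purity 0.3 < 1: the state is mixed, not pure
```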

Anyway, this is not the main point of your criticism; that is the question about randomness. Of course, you cannot say whether a given sequence is "random". All "random numbers" produced by computers are only pseudo-random numbers, since they are calculated with an algorithm that produces sequences which look random according to some given probability distribution.

To our understanding, the probabilities in quantum theory are truly "random" in the sense that the corresponding values of observables, for which the prepared state is not dispersion-free, are "really" undetermined and "irreducibly" random, with the probabilities for a specific outcome given by Born's rule. Of course, this too can only be verified on sufficiently large ensembles with a given significance (say 5 standard deviations for a discovery in the HEP community).

The same is true for the "randomness" in classical statistical physics. Flipping a coin in a wind tunnel is in principle deterministic, because the motion of the coin is described accurately by deterministic laws (mechanics of a rigid body and aerodynamics, including their mutual interaction). If the state of the entire system were completely known (exact knowledge of the initial state is enough), you would be able to predict the outcome of the experiment. Nevertheless, we cannot control the state of the entire system so precisely that we can predict with certainty the outcome of a specific coin flip in the wind tunnel, and thus we get a "random sequence" due to the uncertainty in setting up the initial conditions of macroscopic systems. In my view there is not so much difference between the "irreducible randomness" of quantum mechanics and the "classical randomness" due to the uncontrollability of initial states of macroscopically deterministic systems.
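
For the classical side of this comparison, here is a toy sketch (mine; the logistic map stands in for the coin-plus-wind-tunnel dynamics, which it is not): a fully deterministic map amplifies an initial uncertainty of ##10^{-12}## until the coarse-grained heads/tails outcomes decorrelate completely.

```python
import numpy as np

# The logistic map x -> 4x(1-x) is fully deterministic, yet a 1e-12 difference
# in the initial condition destroys all predictive power within a few dozen steps.
def flips(x0, n=60):
    x, outcomes = x0, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        outcomes.append(1 if x > 0.5 else 0)   # coarse-grain to "heads"/"tails"
    return np.array(outcomes)

a = flips(0.123456789)
b = flips(0.123456789 + 1e-12)
diff = np.nonzero(a != b)[0]
print("first disagreement at step:", diff[0] if diff.size else None)   # typically ~35-45
print("agreement over last 20 steps:", np.mean(a[-20:] == b[-20:]))    # ~0.5, i.e. none
```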
 
  • #123
vanhees71 said:
Well, not necessarily. Take the energy of an excited (nonrelativistically approximated) hydrogen atom, which is ##n^2##-fold degenerate. For the general energy-dispersion-free state you then have
$$\hat{\rho}_n=\sum_{l,m} P_{lm} |nlm \rangle \langle nlm|.$$
For such a state the energy of the atom is determined to be ##E_n##, and the energy's standard deviation is ##0##. Note that ##E_n## is a true eigenvalue of the Hamiltonian and thus it can be determined, but the state is not necessarily a pure state represented by an eigenstate.

Anyway, this is not the main point of your criticism; that is the question about randomness. Of course, you cannot say whether a given sequence is "random". All "random numbers" produced by computers are only pseudo-random numbers, since they are calculated with an algorithm that produces sequences which look random according to some given probability distribution.

To our understanding, the probabilities in quantum theory are truly "random" in the sense that the corresponding values of observables, for which the prepared state is not dispersion-free, are "really" undetermined and "irreducibly" random, with the probabilities for a specific outcome given by Born's rule. Of course, this too can only be verified on sufficiently large ensembles with a given significance (say 5 standard deviations for a discovery in the HEP community).

The same is true for the "randomness" in classical statistical physics. Flipping a coin in a wind tunnel is in principle deterministic, because the motion of the coin is described accurately by deterministic laws (mechanics of a rigid body and aerodynamics, including their mutual interaction). If the state of the entire system were completely known (exact knowledge of the initial state is enough), you would be able to predict the outcome of the experiment. Nevertheless, we cannot control the state of the entire system so precisely that we can predict with certainty the outcome of a specific coin flip in the wind tunnel, and thus we get a "random sequence" due to the uncertainty in setting up the initial conditions of macroscopic systems. In my view there is not so much difference between the "irreducible randomness" of quantum mechanics and the "classical randomness" due to the uncontrollability of initial states of macroscopically deterministic systems.
Thanks for your response.
In your 1st paragraph I was indeed referring to pure states, but that is not necessary, since your density operator ##\hat{\rho}_n## is an "eigenvector" of the Hamiltonian.

In paragraph 2 I'm glad to see you put quotation marks around random.

In paragraph 3 the statement that probabilities are random is nonsense. The random variable ##W## that equals ##1## with probability ##1/2## and ##-1## with probability ##1/2## is exactly the same as the random variable one gets by measuring ##\sqrt{1/2}\,|0\rangle + \sqrt{1/2}\,|1\rangle## with the Pauli operator ##Z##. There is nothing random (whatever that means) about the probability ##1/2##. Now if we leave theory and step into a quantum optics lab and measure photons polarized at 45° with a polarization analyzer set at 0°, then we'll get a sequence of 1s and -1s that will look like the flips of a fair coin with 1 on one side and -1 on the other. Running statistical tests on the sequence will seem to indicate an iid sequence of ##W##s, justifying once again the validity of the theory of QM. The word "random" need not appear anywhere ("random variable" should be thought of as a single word and is a function from a probability space to ##\mathbb{R}##). If you wish to use it, be my guest, but realize that it is an undefined, intuitive, vague, and oft misleading concept.
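
For concreteness, a small Monte Carlo sketch of the quantum-optics scenario above (my own illustration of the QM prediction, not lab data): photons polarized at 45° meeting an analyzer at 0° pass with probability ##\cos^2(45^\circ) = 1/2##, and the resulting ±1 sequence is statistically indistinguishable from fair coin flips.

```python
import numpy as np

rng = np.random.default_rng(0)

# Born rule for a linear polarizer: a photon polarized at theta meets an analyzer
# at 0 degrees and passes (+1) with probability cos^2(theta), else is blocked (-1).
def measure_polarization(theta_deg, shots):
    p_pass = np.cos(np.radians(theta_deg)) ** 2
    return np.where(rng.random(shots) < p_pass, 1, -1)

photons = measure_polarization(45.0, 100_000)          # cos^2(45 deg) = 1/2
coin = np.where(rng.random(100_000) < 0.5, 1, -1)      # literal fair coin

# Same mean, variance, and lag-1 autocorrelation: indistinguishable iid +/-1 sequences.
for name, seq in (("photons", photons), ("coin", coin)):
    lag1 = np.corrcoef(seq[:-1], seq[1:])[0, 1]
    print(f"{name:8s} mean={seq.mean():+.4f} var={seq.var():.4f} lag1={lag1:+.4f}")
```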

In paragraph 4 you refer to "irreducible randomness" and "classical randomness". The latter usually means "randomness" due to "lack of knowledge" as you say (your age is random to me, but not to you). Would you say that "irreducible randomness" is "randomness" with no cause, a disproof of an omniscient God, or what? What if you knew the initial conditions of the big bang?
I like your use of the word "Assuming" in the 2nd sentence.
 
  • #124
In my naive opinion, Quantum Mechanics says that randomness is intrinsic to nature in the small. First of all, it says that quantum mechanical measurements of states are random variables. A repeated measurement of exactly the same state will generally not give the same answer but will have a probability distribution. Secondly, it says that states evolve in time according to a stochastic process.

This randomness of measurement is not because of slight uncertainties in initial conditions. Quantum mechanics says that exactly the same state when measured will produce a random variable.

Whether this theory is true or not is a metaphysical question in my opinion. The theory works famously and will be questioned only when there are experiments that it cannot explain. While I have no idea how Bohmian mechanics works, it seems that it is a different theory which may or may not explain things better than Quantum Mechanics.

To me the question of randomness is not the core question. Rather it is whether one can describe Nature causally and whether this causal explanation gives a clue to the true workings of the world. But the idea of causality may be different from the ideas that arose in Classical Physics.
 
  • #125
The usual concept of an experiment to test a probabilistic theory is to (repeatedly) make preparations and then observe an outcome, so there is a concept of time involved - at least to the extent that the concept of time involves a "before" and "after". We do the preparations before observing the result.

I'm curious whether the theories of relativity introduce any complications into this picture. If observer A thinks he made the preparations before the outcome happened, does observer B always agree?
 
  • #126
Zafa Pi said:
In paragraph 4 you refer to "irreducible randomness" and "classical randomness". The latter usually means "randomness" due to "lack of knowledge" as you say (your age is random to me, but not to you). Would you say that "irreducible randomness" is "randomness" with no cause, a disproof of an omniscient God, or what? What if you knew the initial conditions of the big bang?
I like your use of the word "Assuming" in the 2nd sentence.
I don't discuss semantics. I call things "random" in the usual common sense, as it is understood by everybody.

I also argue, within the realm of quantum theory, that the "randomness" of the outcomes of measurements of observables is "irreducible", in the usual sense in which quantum theory is understood in the minimal statistical interpretation, which is the only interpretation one needs in physics and which is not in contradiction with other fundamentals of physics, particularly the relativistic spacetime structure and its implied causality structure. The knowledge of the exact initial state of the entire universe is a contradiction in itself, since to the best of our knowledge only a tiny part of the universe is observable to us even in principle. Also, the quantum theory of gravity is not yet understood. So I won't talk about this in this anyway weird philosophical discussion, since it's hopeless to get a clear idea of what we are talking about if one discusses things which aren't even understood on a scientific level. Then an understanding in a philosophical sense is impossible and also completely useless.

Knowing the "initial state" of a quantum system, i.e., preparing the quantum system in this state at a time ##t## does not imply that its observable are all determined. QT tells you that this is impossible. The komplete knowledge of the state, i.e., the preparation of the system in a pure state implies that you know the statistical properties for the outcome of precise measurements of its observables, no more no less. So what I mean with "irreducible randomness" according to QT is exactly this notion of state within QT: The system's observable really have no determined values but with a certain probability you find certain possible values (in the spectrum of the representing self-adjoing operator) when measuring them. This is in accordance with any observations in nature so far and that's why we take QT as the best theory about the description of nature we have today.
 
  • #127
Zafa Pi said:
... you refer to "irreducible randomness" and "classical randomness". The latter usually means "randomness" due to "lack of knowledge" as you say (your age is random to me, but not to you). Would you say that "irreducible randomness" is "randomness" with no cause, a disproof of an omniscient God, or what? What if you knew the initial conditions of the big bang? ...

The question of whether there is "irreducible randomness" in QM - as I think has been pointed out already - is one of interpretation. There are nonlocal interpretations - such as Bohmian Mechanics - that assert that suitable knowledge of initial conditions (the big bang in your example) would allow one to predict the future state of any observables to any precision. So that means quantum randomness is due to lack of knowledge of initial conditions, much like the penny in the wind tunnel.

But most would say that there is no amount of knowledge of initial conditions that would allow you to know the value of non-commuting observables. As far as anyone knows, it is randomness without a cause.

So it seems as if the answer to these questions is a matter of personal choice or preference. If you then tie defining "true randomness" to the situation, then you could equate that to the "uncaused" interpretation. Then you are left with answering whether randomness due to lack of initial condition is "true randomness" - or not.
 
  • Like
Likes bhobba
  • #128
DrChinese said:
So it seems as if the answer to these questions is a matter of personal choice or preference. If you then tie defining "true randomness" to the situation, then you could equate that to the "uncaused" interpretation. Then you are left with answering whether randomness due to lack of initial condition is "true randomness" - or not.

Are you saying that even though one can model Quantum Mechanical systems deterministically, the uncertainty principle prevents any measurement that would allow predicting the future of a path?
 
  • #129
I think part of the problem is that the entire verbal language of QM was historically set up to try to sweep the elephant in the room under the carpet. I am talking about measuring apparatus. Every time "observable" is mentioned, there must be a corresponding measuring apparatus involved; otherwise the observable is not defined. Saying "the particle does not have a defined position between measurements" basically amounts to "there is no outcome reported by the measuring apparatus when no measurement has taken place" - a tautology.

It is nothing short of a miracle that the entire effect of all these complicated measuring apparatuses (apparatii?) can be described by a few simple operators. But, as far as I understand it, the operator is not "hard-coded" into the system. Instead it emerges statistically from the complex interaction of countless internal states, much like the normal distribution comes out of nowhere in the central limit theorem.
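
As an aside, that central-limit analogy is easy to see numerically (my own illustration; nothing here is specific to measuring devices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sums of many independent, decidedly non-Gaussian contributions (uniform on [0,1])
# come out looking Gaussian anyway, regardless of the microscopic details.
sums = rng.random((50_000, 200)).sum(axis=1)
z = (sums - sums.mean()) / sums.std()

# Empirical tail probabilities versus the standard normal.
for k in (1, 2, 3):
    print(f"P(|Z| > {k}) ~ {np.mean(np.abs(z) > k):.4f}")
# A true Gaussian would give roughly 0.3173, 0.0455, 0.0027.
```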

Now the initial state of the measuring apparatus is necessarily unknown. I would call it random, and I don't care if it is "true" randomness or only "apparent" randomness due to our lack of knowledge; the result is the same FAPP. Funnily enough, as soon as we try to control this initial state, the device ceases to be a measuring apparatus and becomes yet another quantum system, which then starts behaving weirdly and requires yet another measuring device to tell us what is going on with the first one (micromirrors getting momentum-entangled with photons, fullerenes going through both slits, etc.). So the randomness is unavoidable; it is inherent in the nature of a measuring apparatus.

What I'm trying to say is that there is enough randomness in our measurement devices, and in the environment in general, to explain the randomness of quantum measurement results. And, by invoking Occam's razor, there is no need to postulate inherent randomness "built in" to the foundations of QM. It should just come out by itself from unitary evolution coupled with the assumption of an environment having a large number of interacting degrees of freedom in an unknown initial state, or in other words, from decoherence. Basically, the Born rule should be derived rather than postulated.
 
  • #130
lavinia said:
Are you saying that even though one can model Quantum Mechanical systems deterministically, the uncertainty principle prevents any measurement that would allow predicting the future of a path?

I'm not a Bohmian, so I don't really accept that interpretation. Channeling others who accept Bohmian Mechanics (and I beg forgiveness if I explain poorly):

In principle, it would be possible to simultaneously predict the value of 2 non-commuting observables. However, they would be quick to say that practical considerations prevent one from placing an actual system in a state in which that could be done. As a result, the uncertainty principle emerges and there is no practical difference between theirs and non-deterministic interpretations.
 
  • Like
Likes lavinia
  • #131
Delta Kilo said:
...Now the initial state of the measuring apparatus is necessarily unknown. I would call it random, and I don't care if it is "true" randomness or only "apparent" randomness due to our lack of knowledge; the result is the same FAPP. Funnily enough, as soon as we try to control this initial state, the device ceases to be a measuring apparatus and becomes yet another quantum system, which then starts behaving weirdly and requires yet another measuring device to tell us what is going on with the first one (micromirrors getting momentum-entangled with photons, fullerenes going through both slits, etc.). So the randomness is unavoidable; it is inherent in the nature of a measuring apparatus.

What I'm trying to say is that there is enough randomness in our measurement devices, and in the environment in general, to explain the randomness of quantum measurement results.

Ah, but your premise is demonstrably false! :smile:

You cannot place 2 different quantum systems in identical states such that non-commuting observables will have identical outcomes. But you can place 2 different observers in an ("unknown") state in which they WILL yield (see) the same outcome to identical quantum measurements. Let's get specific:

We have a system consisting of 2 separated but entangled photons such that their polarization is unknown but identical (Type I PDC, for example). Observing the photons' individual polarizations by the 2 *different* observers - at the same angle - always yields the same results! Therefore, none - and I mean none - of the outcome can be attributed to the state of the observer, unless there is something mysterious being communicated from observer to observer. Obviously, it is not the interaction between the observed and the observer (as you hypothesize), else the results would differ in some trials.

If the observers contributed to the uncertainty - to the randomness - then that would show up in experiments such as above. It doesn't. Put another way: your premise seems superficially reasonable, but fails when you look closer. Randomness is not due to "noise" (or anything like that) which is part of (or local to) the observer.
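
A small Monte Carlo sketch of the standard QM prediction for this setup (my own illustration; it assumes the usual Type-I PDC state ##(|HH\rangle + |VV\rangle)/\sqrt{2}## and samples the joint Born-rule distribution): equal analyzer angles give identical outcomes on every trial, while unequal angles do not.

```python
import numpy as np

rng = np.random.default_rng(2)

# QM prediction for a Type-I PDC pair, assumed state (|HH> + |VV>)/sqrt(2).
# Photon 1 at angle a gives +/-1 with probability 1/2 each; photon 2 then behaves
# as if polarized along a (on +1) or a + 90 deg (on -1), and passes an analyzer
# at angle b with Malus-law probability cos^2 of the angle difference.
def pair_outcomes(a_deg, b_deg, shots):
    first = np.where(rng.random(shots) < 0.5, 1, -1)
    collapse = np.radians(a_deg) + np.where(first == 1, 0.0, np.pi / 2)
    p_pass = np.cos(collapse - np.radians(b_deg)) ** 2
    second = np.where(rng.random(shots) < p_pass, 1, -1)
    return first, second

for a, b in ((30, 30), (30, 60)):
    f, s = pair_outcomes(a, b, 100_000)
    print(f"a={a} b={b}: outcomes agree {np.mean(f == s):.3f}")
# Equal angles agree on 100% of trials; 30 deg apart agree cos^2(30 deg) = 75%.
```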
 
  • #132
Delta Kilo said:
What I'm trying to say is that there is enough randomness in our measurement devices, and in the environment in general, to explain the randomness of quantum measurement results.

At the risk of making a statement I have no real qualifications to be making, I don't agree that this can explain Bell's experiment.
 
  • #133
DrChinese said:
I'm not a Bohmian, so I don't really accept that interpretation. Channeling others who accept Bohmian Mechanics (and I beg forgiveness if I explain poorly):

In principle, it would be possible to simultaneously predict the value of 2 non-commuting observables. However, they would be quick to say that practical considerations prevent one from placing an actual system in a state in which that could be done. As a result, the uncertainty principle emerges and there is no practical difference between theirs and non-deterministic interpretations.

OK. So, just to understand better: would the practical considerations be similar to those Heisenberg posited, that any attempt to determine position would perturb momentum? If so, it would seem that theoretically one could never measure both.
 
  • #134
lavinia said:
OK. So, just to understand better: would the practical considerations be similar to those Heisenberg posited, that any attempt to determine position would perturb momentum? If so, it would seem that theoretically one could never measure both.

I don't think I can answer this to someone like Demystifier's satisfaction. Hopefully he or someone more qualified than I can answer this.

But I think the concept is: IF you knew the starting p and q of every particle in a closed system (i.e. the entire universe), THEN you could predict future p and q for all with no uncertainty. It is then the inability to know all starting p's and q's which ultimately leads to the uncertainty relations in the Bohmian view.
 
  • #135
rootone said:
Things that are more or less probable, such as the decay of a fissile atom around its measured half-life, are not the same as 'random'

In principle, no amount of knowledge, measurements or computational resources can predict with certainty whether or when a fission will occur.
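
A small sketch of how a sharp half-life coexists with individually unpredictable decays (my own illustration; the carbon-14 half-life is just an assumed example value): exponential waiting times with rate ##\lambda = \ln 2 / t_{1/2}## have median exactly ##t_{1/2}##.

```python
import numpy as np

rng = np.random.default_rng(3)

# Individually unpredictable decays, collectively a sharp half-life:
# exponential waiting times with rate lambda = ln(2) / t_half.
t_half = 5730.0                                     # assumed example value (C-14, years)
lam = np.log(2) / t_half

times = rng.exponential(scale=1 / lam, size=1_000_000)
print("sample median decay time:", np.median(times))               # ~5730 = t_half
print("fraction surviving past t_half:", np.mean(times > t_half))  # ~0.5
print("first five decay times:", np.round(times[:5]))              # no discernible pattern
```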
 
  • #136
DrChinese said:
Ah, but your premise is demonstrably false! :smile:
Grinkle said:
At the risk of making a statement I have no real qualifications to be making, I don't agree that this can explain Bell's experiment.
Sorry, either I was not clear enough (which is quite likely) or you are trying to read way too much into what I wrote. I'm not trying to explain the weirdness of QM as some statistical process in the measuring apparatus. Yes, I'm well aware that two spacelike-separated particles sometimes can only be described by a single non-separable wavefunction. And yes, when one of those particles interacts with the measuring apparatus, we must treat it as the whole system of two entangled particles interacting with it, even though the other particle might be light years away. It is simply impossible to write an interaction Hamiltonian involving one particle but not the other.
Exactly how this happens I'm not prepared to discuss; my gut feeling is that non-relativistic QM is ill-equipped to answer it, and I'm not yet at the level to talk about QFT, where there is no spoon and everything is different yet again.

Anyway, all I'm saying is that every time there is random output in QM there just happens to be a thermal bath conveniently located nearby, and therefore randomness in QM is an emergent phenomenon which does not need to be hardwired into the theory at the fundamental level.
 
  • #137
Delta Kilo said:
Anyway, all I'm saying is that every time there is random output in QM there just happens to be a thermal bath conveniently located nearby, and therefore randomness in QM is an emergent phenomenon which does not need to be hardwired into the theory at the fundamental level.

OK, but it is hardwired into the mathematical formalism of QM. That fact seems to me enough to answer the question "Is quantum mechanics random in nature?" (which is the thread title - just sayin'). Clearly that fact does not preclude the possibility that some more fundamental theory with some other mathematical formalism but without the baked-in randomness could also exist. It will necessarily be either non-local or non-EPR-realistic, but it need not have baked-in randomness.

So far, so good... But until we have a candidate theory to consider, "so far" isn't very far at all.
 
  • #138
DrChinese said:
The question of whether there is "irreducible randomness" in QM - as I think has been pointed out already - is one of interpretation. There are nonlocal interpretations - such as Bohmian Mechanics - that assert that suitable knowledge of initial conditions (the big bang in your example) would allow one to predict the future state of any observables to any precision. So that means quantum randomness is due to lack of knowledge of initial conditions, much like the penny in the wind tunnel.

But most would say that there is no amount of knowledge of initial conditions that would allow you to know the value of non-commuting observables. As far as anyone knows, it is randomness without a cause.

So it seems as if the answer to these questions is a matter of personal choice or preference. If you then tie defining "true randomness" to the situation, then you could equate that to the "uncaused" interpretation. Then you are left with answering whether randomness due to lack of initial condition is "true randomness" - or not.
By perusing (in the skimming sense) the posts of this thread, it appears your 2nd paragraph is valid while your 1st sentence is rarely confirmed. Here's a little dialogue:
A: I'm flipping this coin in this wind tunnel and getting random results.
B: They're not really random, it's just that you don't know the initial conditions.
A: There are no initial conditions.
B: Of course there are you just don't know them.
A: Nobody knows them or can find a way of knowing them because they don't exist.
B: Yes they do. It's your lack of knowledge.
A: No they don't. It's pure "Random Without a Cause" (not to be confused with the James Dean movie).
C: God knows the initial conditions.
D: Hold on, there is no God.
C: Yes there is.
D: No there ain't ...

DrChinese, I'll bet you 2 bucks that my coin in the wind tunnel is just as random as the measurement of a pound of identically (tee hee) prepared photons exiting a polarization analyzer.
People in this thread bandy the term "random" about like the Jesuits did "God": no one defines it and everyone thinks they know what it is, yet they disagree. The term "random" is no more necessary to QM than God was to Laplace. But, by God, don't let me rain on the random parade.
 
  • #139
DrChinese said:
The question of whether there is "irreducible randomness" in QM - as I think has been pointed out already - is one of interpretation.

:smile::smile::smile::smile::smile::smile::smile::smile::smile:

Again, I mentioned it before but will repeat for emphasis: one of the main reasons for studying QM interpretations is to disentangle what the formalism is really saying - it's sometimes not obvious at first brush.

Thanks
Bill
 
  • #140
The problem with quantum theory is that there is a physics part, used to explain objective observations of nature, and a plethora of so-called "interpretations" which try somehow to extend the philosophical ideas about it beyond the natural sciences. There's no point for a physicist to get involved in this, because it's beyond the methodology of physics. Which of these additional elements of interpretation you follow is a question of personal belief (for some even a religion). It is irrelevant for physics.

The purely physical part of QT, together with very fundamental assumptions about the relativistic space-time structure and locality of interactions, which is the basis of its unprecedented success in explaining comprehensively all known facts about nature, tells us that there is an irreducible randomness in nature. The complete determination of the state does not determine the values of all observables characterizing the described system. Any additions to the minimal statistical interpretation are just philosophical speculation with no scientific empirical basis so far.

As was stressed before, that doesn't rule out that one day one finds an even more comprehensive scientific theory of nature and limits of applicability of QT, but that will very probably come not from philosophical speculation but from observations reproducibly contradicting the present theory, or from an ingenious mathematical (not philosophical!) development that solves one of the problems with "the Standard Model", like a consistent description of gravity and/or dark matter.
 
  • Like
Likes bhobba
