This is just a little note describing something that may be common knowledge, but that I was confused about myself: the relationship between local hidden variables and determinism.
For a hidden-variables explanation of the twin-pair EPR experiment, people often look to deterministic models. They assume that there is some hidden variable [itex]\lambda[/itex] and that the result of a spin measurement along axis [itex]\vec{a}[/itex] for one of the particles is a function [itex]F(\vec{a}, \lambda)[/itex]. At first glance, that doesn't seem very general, and it's not: we can certainly imagine a stochastic process in which the outcome of the measurement is not uniquely determined by the hidden variables plus the state of the detector, but is partly random. So why should people assume that the outcome is deterministic?
The first reason, the one I've given in the past, is the perfect correlations. In the twin-pair EPR experiment, if the two detectors measure spins along the same axis, they always get the same result (in the spin-1 case) or always get opposite results (in the spin-1/2 case). Such perfect correlations cannot be reproduced if there is any local nondeterminism at the detectors.
But actually, there's a much more general reason, one that is independent of the specific predictions of quantum mechanics: for any probabilistic theory, there are corresponding deterministic theories in which all probabilities are due to unknown hidden variables in the initial state. That doesn't mean there is no point in making nondeterministic theories; a nondeterministic theory might be much more elegant and plausible than the corresponding deterministic one. But purely as a logical matter, if you're trying to see whether it is possible to have a hidden-variables theory that explains a certain empirical result, you lose no generality by considering only deterministic models. If a nondeterministic local theory is possible, then so is a deterministic local theory.
This isn't particularly profound or difficult, but many people might not know it.
Here's how it works: Suppose you have a nondeterministic theory, in which the outcome of a measurement depends in a probabilistic way on the settings of your measuring device:
[itex]P(i | j)[/itex] is the probability of getting result [itex]R_i[/itex] when the device has setting [itex]S_j[/itex].
For simplicity, let's assume that the results can take on values [itex]R_0, R_1, ...[/itex]. Let me define a cumulative probability function
[itex]P_C(i | j) = P(0 | j) + P(1 | j) + \ldots + P(i | j)[/itex]
This is the probability that the result lies in the range from [itex]R_0[/itex] to [itex]R_i[/itex].
Now, let's introduce a function [itex]F(j,\lambda)[/itex] defined as follows:
[itex]F(j,\lambda) = R_0[/itex] if [itex]0 \leq \lambda < P_C(0 | j)[/itex]
[itex]F(j,\lambda) = R_1[/itex] if [itex]P_C(0 | j) \leq \lambda < P_C(1 | j)[/itex]
[itex]F(j,\lambda) = R_2[/itex] if [itex]P_C(1 | j) \leq \lambda < P_C(2 | j)[/itex]
and so on.
This model puts all the nondeterminism into the variable [itex]\lambda[/itex], which is assumed to be a real number between 0 and 1 with a flat probability distribution.
As I said, I'm not claiming that this is a plausible model, only that it is mathematically consistent with the probability distribution [itex]P(i | j)[/itex].
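To make the construction concrete, here is a small numerical sketch (the probabilities [itex]P(i | j)[/itex] below are made up purely for illustration). Drawing [itex]\lambda[/itex] uniformly from [0, 1) and applying the deterministic function [itex]F(j, \lambda)[/itex] reproduces the original probability distribution:

```python
import random

# Made-up example probabilities P(i | j): one row per device setting j.
P = {
    0: [0.5, 0.3, 0.2],   # P(0|0), P(1|0), P(2|0)
    1: [0.1, 0.6, 0.3],   # P(0|1), P(1|1), P(2|1)
}

def F(j, lam):
    """Deterministic outcome function: returns the index i of result R_i.

    Picks the result whose cumulative-probability interval
    [P_C(i-1 | j), P_C(i | j)) contains lambda.
    """
    cumulative = 0.0
    for i, p in enumerate(P[j]):
        cumulative += p
        if lam < cumulative:
            return i
    return len(P[j]) - 1  # guard against floating-point round-off

# With lambda uniform on [0, 1), the outcome frequencies of F(0, lambda)
# converge to P(i | 0):
counts = [0, 0, 0]
trials = 100_000
for _ in range(trials):
    lam = random.random()
    counts[F(0, lam)] += 1
print([c / trials for c in counts])  # approximately [0.5, 0.3, 0.2]
```

This is just inverse-transform sampling in reverse: any single-variable probability distribution can be re-expressed as a deterministic function of one uniformly distributed hidden variable.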
So if you are only interested in the question of whether it is possible (as opposed to plausible) to explain experimental results using a local, realistic hidden-variables model, you may as well assume that it is deterministic.