#71
SW VandeCarr
ThomasT said: Yes, it's our uncertainties that probabilities quantify. We assume an underlying determinism (for many good reasons). But we don't know the details of that underlying determinism. Hence the need for probabilistic descriptions.
ueit asks for arguments against determinism. Afaik, there aren't any -- at least no definitive ones. And, as far as I can tell from this thread, you haven't given any.
But, ueit's proposed explanation for Bell-type correlations is problematic for reasons that I've given.
So, where are we?
I think you're trying to have it both ways. First, just to be clear, we need to quantify uncertainty. Uncertainty is a function of probability: U = 4p(1-p), where the factor of 4 just scales the measure to the interval [0,1]. It's clear that uncertainty is maximal (U = 1) when p = 0.5 and zero when p = 0 or p = 1.
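To make the measure concrete, here is a minimal Python sketch (my own illustration, not part of the original post) that evaluates U(p) = 4p(1-p) at a few values and confirms it peaks at p = 0.5 and vanishes at p = 0 and p = 1:

```python
# Uncertainty measure U(p) = 4*p*(1 - p), scaled to the interval [0, 1].
def uncertainty(p: float) -> float:
    """Return the scaled uncertainty for a probability p in [0, 1]."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be a probability in [0, 1]")
    return 4.0 * p * (1.0 - p)

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p = {p:.2f}  ->  U = {uncertainty(p):.2f}")
# Output: U = 0.00 at p = 0 and p = 1, U = 1.00 at p = 0.5
```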
Now, you've already agreed that probability measures our uncertainty. What does our uncertainty have to do with nature? Someone tosses a fair coin behind a curtain. The coin is tossed, but you don't see it. For the "tosser", uncertainty is 0; for you, it's 1.
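A short sketch of the coin-behind-the-curtain point, under my own reading of it: the tosser sees the outcome and so assigns p = 1 (or 0), giving U = 0, while the hidden observer only knows the coin is fair, assigns p = 0.5, and gets U = 1. The variable and function names are illustrative, not from the post.

```python
import random

def uncertainty(p: float) -> float:
    # Same scaled measure as above: U = 4*p*(1 - p).
    return 4.0 * p * (1.0 - p)

# A fair coin is tossed behind the curtain.
outcome_is_heads = random.random() < 0.5

# The tosser sees the result, so their probability for "heads" is 1 or 0.
p_tosser = 1.0 if outcome_is_heads else 0.0
# The observer behind the curtain only knows the coin is fair.
p_observer = 0.5

print(f"tosser:   U = {uncertainty(p_tosser):.1f}")    # 0.0
print(f"observer: U = {uncertainty(p_observer):.1f}")  # 1.0
```

The same physical event carries different uncertainties for different observers, which is the point being made: the probability describes the observer's knowledge, not the coin.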
Now, our deterministic theories are not time dependent: the laws of physics are presumed to hold in the future just as in the past, and the charge of an electron does not change with time. If we have determinism (D), it's clear that any outcome of a time-dependent process is, in principle, entirely predictable. That means randomness is only a reflection of our current state of knowledge.

If you could have perfect information about some future time-dependent outcome, you would have U = 0, which corresponds to p = 1 or p = 0. This is what I meant when I said that under D, with perfect information, future events occur with a real probability of 1 (or 0). In effect, under D we don't have probabilities in nature; we only have our uncertainty about nature.