Is Free Will a Foundational Assumption in Quantum Theory?

In summary, the "free will" assumption is not a foundational assumption of QM; it is an assumption of the scientific method. The superdeterministic alternative to free will would undermine all of science.
  • #141
DarMM said:
That's the point though. In the classical case we can consider the imprint in the device as some kind of approximation of an event that occurred in the system. This is because all random variables in the classical case can be considered as functions on a space of outcomes. Thus we have some notion of the events of microsystem when no external system is present to register them.
Not really. We have measurement results (aka events) only after we specify what in the classical universe is the measurement device and how it is supposed to measure which system of interest. This is external to the Laplacian description of the universe by positions and momenta. Thus measurement theory in a classical universe needs as much externals (i.e., the Heisenberg cut) as measurement theory in a quantum universe.
The properties of the underlying probability theory are secondary to that.

DarMM said:
So quantum theory provides a stochastic description of a system-external system interaction when supplied with a choice of external system, but it is intrinsically incapable of modelling that choice of external system. Moreover this is a feature of any non-Kolmogorovian probability theory.
This statement is interpretation dependent. For example, it does not hold in the thermal interpretation, where modeling both system and external system is done inside the quantum universe.
 
Last edited:
  • #142
A. Neumaier said:
Not really. We have measurement results (aka events) only after we specify what in the classical universe is the measurement device and how it is supposed to measure which system of interest. This is external to the Laplacian description of the universe by positions and momenta. Thus measurement theory in a classical universe needs as much externals (i.e., the Heisenberg cut) as measurement theory in a quantum universe.
The properties of the underlying probability theory are secondary to that.
Do you have a reference for this?

Yes, defining the split between measured system and measuring system is required for a measurement theory. However, since all properties of the measured system form a single Boolean algebra, the device events can be seen as approximate recordings of events occurring in the system alone. In quantum theory there are no events for the system alone: you can't consider the measurement result to be an approximation of some property, even a randomly driven one, of the system alone. The idea that the underlying probability theory is irrelevant seems hard to support to me in this case. It's the underlying probability theory that drives results like contextuality, which utterly change the notion of measurement for most authors.

A. Neumaier said:
This statement is interpretation dependent. For example, it does not hold in the thermal interpretation, where modeling both system and external system is done inside the quantum universe.
Yes of course, I indicated this initially that I was speaking of QM viewed as a probability theory not the quantum state viewed as a real wave or some such representational view.
 
  • #143
DarMM said:
Do you have a reference for this?
No. But this needs no reference, as it is obvious. Indeed, classical multiparticle dynamics is chaotic but deterministic, and has no intrinsic notion of one part measuring another. So something must be added from the outside before one can talk about probability or measurement at all.
DarMM said:
Yes, defining the split between measured system and measuring system is required for a measurement theory. However, since all properties of the measured system form a single Boolean algebra, the device events can be seen as approximate recordings of events occurring in the system alone.
Not in a classical universe. Nothing about the measured system can be considered to be recorded even approximately unless you specify what recording a property of the measured system means in terms of the measuring system. One still needs to specify not only an outcome space but even what counts as an outcome; neither exists in the Laplacian universe by itself.
DarMM said:
I was speaking of QM viewed as a probability theory not the quantum state viewed as a real wave or some such representational view.
QM is not primarily a probability theory but a dynamical theory of Nature. Reducing it to noncommutative probability theory (a mathematical discipline) means abstracting from the real content, eliminating all physics. It is analogous to reducing classical mechanics to Kolmogorov probability theory. In both cases, little is left of the substance.
 
  • #144
Lynch101 said:
Apologies, this will probably seem like an incredibly basic question, but would it not be possible to predict what number you will roll in a dice game if you knew nearly all of the material information? If you were watching an impossibly high-definition video and slowed it right down, you could see the force with which the die was thrown, the angle at which it leaves the hand, and the trajectory, coupled with the relevant information about the air, the firmness of the table, etc. If all of this was known, wouldn't it be possible to predict the roll of a die?

Apologies if it is somewhat off-topic and basic.
The difficulty is not to "predict" the roll of the die after it's been thrown, but before it's been thrown!
 
  • #145
PeterDonis said:
These claims about macroscopic processes like shuffling cards, rolling dice, etc., are not necessarily true, because it's highly likely that quantum indeterminacy does not play any role in the outcome, unlike the case of, say, a Stern-Gerlach measurement of a particle's spin. It's entirely possible that, for example, the process of die rolling is insensitive enough to its exact initial conditions that an accurate enough measurement of those initial conditions could allow us to predict the result.
It depends what you mean by initial conditions. You might claim to be able to predict whether a penalty in a soccer match is scored or not. But, if you wait to see which way the ball is kicked and which way the goalkeeper is moving, I don't consider that removes the fundamental uncertainty.
 
  • #146
Lynch101 said:
If the choices of which observable to be measured had a common cause would they be correlated?
Not necessarily. For example, you could generate a sequence from the digits of π and another sequence from the digits of e.
These two sequences of digits would be uncorrelated, even though they are completely determined in advance.

PS for further insight into the difference between indeterminacy and randomness look up the definition of "normal number".
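As a concrete illustration of this point (the digit count and tolerance below are arbitrary choices, and the Pearson helper is just one reasonable way to quantify correlation), one can compute a few hundred digits of π (via Machin's formula) and of e (via the stdlib `decimal` module) and check that the two digit sequences are essentially uncorrelated, even though both are completely determined in advance:

```python
from decimal import Decimal, getcontext

DIGITS = 400  # how many digits after the decimal point to compare
getcontext().prec = DIGITS + 30

def arctan_recip(x):
    """arctan(1/x) via its alternating Taylor series, in Decimal arithmetic."""
    tol = Decimal(10) ** -(DIGITS + 20)
    power = Decimal(1) / x        # holds (1/x)^(2k+1)
    total = power                 # k = 0 term
    k = 0
    while power > tol:
        power /= x * x
        k += 1
        term = power / (2 * k + 1)
        total += -term if k % 2 else term
    return total

# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
pi = 16 * arctan_recip(5) - 4 * arctan_recip(239)
e = Decimal(1).exp()              # e, to the current context precision

pi_digits = [int(c) for c in str(pi).split(".")[1][:DIGITS]]
e_digits = [int(c) for c in str(e).split(".")[1][:DIGITS]]

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(pi_digits, e_digits)
print("pi = 3." + "".join(map(str, pi_digits[:12])) + "...")
print("e  = 2." + "".join(map(str, e_digits[:12])) + "...")
print(f"digit correlation over {DIGITS} digits: {r:+.4f}")  # near zero
```

Both sequences are fully determined, yet the sample correlation comes out of order 1/√n, just as it would for genuinely random digits.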
 
  • #147
A. Neumaier said:
No. But this needs no reference as it is obvious.
...
QM is not primarily a probability theory but a dynamical theory of Nature. Reducing it to noncommutative probability theory
The point here isn't to say that QM is nothing but noncommutative probability theory; obviously there is much more to it than that. However, it is the non-Boolean nature of the underlying probability that changes the notion of measurement in the theory, by introducing issues like contextuality. I do not consider it "obvious" that these seismic shifts in the ability to perform property assignment due to the non-Boolean structure cause no difference in the notion of measurement.

A. Neumaier said:
Nothing about the measured system can be considered to be recorded even approximately unless you specify what recording a property of the measured system means in terms of the measuring system. You still need to specify an outcome space, but even what is an outcome - neither exists in the Laplacian universe by itself.
Again this goes against most of what I've read, whereby in a classical theory one can consider there to be events in the absence of devices and measuring systems.
 
  • #148
bhobba said:
Even defining free will - well philosophers haven't solved that one.

Exactly. Nevertheless, regarding the "freedom of choice", there is an interesting point of view.

Hans Primas in "Hidden Determinism, Probability, and Time's Arrow" (Section 10: Experimental Science Requires Freedom of Action):

"At present the problem of how free will relates to physics seems to be intractable since no known physical theory deals with consciousness or free will. Fortunately, the topic at issue here is a much simpler one. It is neither our experience of personal freedom, nor the question whether the idea of freedom could be an illusion, nor whether we are responsible for our actions. The topic here is that the framework of experimental science requires a freedom of action in the material world as a constitutive presupposition. In this way “freedom” refers to actions in a material domain which are not governed by deterministic first principles of physics.

To get a clearer idea of what is essential in this argument we recall that the most consequential accomplishment by Isaac Newton was his insight that the laws of nature have to be separated from initial conditions. The initial conditions are not accounted for by first principles of physics, they are assumed to be "given". In experimental physics it is always taken for granted that the experimenter has the freedom to choose these initial conditions, and to repeat his experiment at any particular instant. To deny this freedom of action is to deny the possibility of experimental science.

In other words, we assume that the physical system under investigation is governed by strictly deterministic or probabilistic laws. On the other hand, we also have to assume that the experimentalist stands out of these natural laws. The traditional assumption of theoretical physics that the basic deterministic laws are universally and globally valid for all matter thus entails a pragmatic contradiction between theory and practice. A globally deterministic physics is impossible."

Hans Primas, "Hidden Determinism, Probability, and Time's Arrow" (PDF, via CORE)
 
  • Like
Likes Lynch101
  • #149
DarMM said:
this goes against most of what I've read where by in a classical theory one can consider there to be events in the absence of devices and measuring systems.
This is just because, as I had already mentioned, nobody ever really addressed the classical version of the quantum measurement problem. Instead it is simply assumed that the measuring device somehow acquires the measurement value, in principle with arbitrary accuracy and without backreaction to the system measured. This is a Platonic notion of measurement, looking into God's cards so to say. In this case you may say that the events are the properties of the collection of trajectories of the particles making up the system measured. You (like everyone else in the past) are treating classical mechanics exclusively from this Platonic perspective, while you consider quantum mechanics on a different footing, e.g., by allowing for microscopic descriptions of the measuring device, or even a hierarchy of these. This gives a distorted picture of the classical vs. quantum theme.

But for a system consisting of a few interacting atoms it is impossible to measure (from within a classical universe) most of these properties as if they were unobserved, as the coupling to the measuring device distorts the trajectories as in the quantum case. This proves that classical measurement is nontrivial. Realistic measurement in a classical universe would have to consider how some observable of a classical microscopically described measuring device acquires a value correlated with some property of the observed system. The outcome space is then the range of that observable - which is not necessarily that of the property considered. Thus one needs considerable extra structure...
 
  • Like
Likes akvadrako
  • #150
Lord Jestocost said:
In experimental physics it is always taken for granted that the experimenter has the freedom to choose these initial conditions, and to repeat his experiment at any particular instant. To deny this freedom of action is to deny the possibility of experimental science.
No. Part of experimental physics is to figure out which part of the initial condition can be manipulated in such a way that desired experiments are possible. Often this is the most difficult aspect of an experiment. Calibration experiments (such as various kinds of quantum tomography) are even set up to discover the initial conditions of a source through appropriate measurements.
 
  • Like
Likes Lynch101
  • #151
Lynch101 said:
If the choices of which observable to be measured had a common cause would they be correlated?
Let me try again to explain the difference between cause and correlation.

Imagine first that everything in a system is fully determined. The universe since the big bang say.

Now imagine that you have an experiment with two people involved. There are two boxes. The first person puts a prize in one of the boxes. The second person gets to open a box and try to win a prize.

Now, let's assume you, at the big bang, can predict exactly what everyone will do in this experiment. Everything to you is completely predictable. But, what you predict will be a mixture of all four possibilities. Prize in box 1, box 1 is chosen; prize in box 1, box 2 is chosen; prize in box 2, box 1 is chosen; prize in box 2, box 2 is chosen.

To you this was all totally predictable. But, it still represents "random", "uncorrelated" results. There is no correlation between the prize being in box 1 and box 1 being chosen etc.

Now, suppose we find that there is a correlation. Let's assume the prize is never won. It's not enough that the universe is fully deterministic for this to happen. There would need to be a causal chain that enforces opposite choices. But, what law of nature can enforce that for complex systems like human beings 14 billion years later? Especially if this always happens with any two people. In any place at any time.

The answer is no normal law of nature can explain that. You argue as though simple determinism could produce that result. It can't.

That's why determinism cannot explain QM correlations.
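The box game above can be sketched as a toy simulation (the seeds and sample size are arbitrary illustrative choices): two seeded pseudo-random generators stand in for the two fully deterministic processes. Every run of the script reproduces exactly the same sequences, so everything is predictable in advance, yet the two choice sequences are uncorrelated and the prize is won about half the time:

```python
import random

# Two completely deterministic processes: a seeded PRNG always
# reproduces the same sequence, so every run of this script is
# predictable in advance - a toy "Laplacian universe".
hider = random.Random(12345)     # decides which box hides the prize
chooser = random.Random(67890)   # decides which box gets opened

N = 100_000
hides = [hider.randrange(2) for _ in range(N)]
opens = [chooser.randrange(2) for _ in range(N)]

wins = sum(h == c for h, c in zip(hides, opens))

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

print(f"win rate:    {wins / N:.3f}")                # close to 0.5
print(f"correlation: {pearson(hides, opens):+.4f}")  # close to 0
# For the prize to *never* be won, the two processes would need a
# causal mechanism coordinating them; determinism alone provides none.
```

Determinism by itself gives the uncorrelated mixture of all four possibilities; enforcing "the prize is never won" would require an extra coordinating law.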
 
Last edited:
  • Like
Likes Lynch101 and DarMM
  • #152
A. Neumaier said:
This is just because, as I had already mentioned, nobody ever really addressed the classical version of the quantum measurement problem. Instead it is simply assumed that the measuring device somehow acquires the measurement value, in principle with arbitrary accuracy and without backreaction to the system measured. This is a Platonic notion of measurement, looking into God's cards so to say. In this case you may say that the events are the properties of the collection of trajectories of the particles making up the system measured. You (like everyone else in the past) are treating classical mechanics exclusively from this Platonic perspective, while you consider quantum mechanics on a different footing, e.g., by allowing for microscopic descriptions of the measuring device, or even a hierarchy of these. This gives a distorted picture of the classical vs. quantum theme.

But for a system consisting of a few interacting atoms it is impossible to measure (from within a classical universe) most of these properties as if they were unobserved, as the coupling to the measuring device distorts the trajectories as in the quantum case. This proves that classical measurement is nontrivial. Realistic measurement in a classical universe would have to consider how some observable of a classical microscopically described measuring device acquires a value correlated with some property of the observed system. The outcome space is then the range of that observable -
No, I still don't think this is quite right, and it's not just a matter of assuming an approximation of Platonic nondisturbance. Robert Spekkens and others have investigated a classical theory with fundamental disturbance, where the idealized notion of no back reaction is abandoned: so-called epistemically restricted classical theories.

We do get non-commutativity of measurements, entanglement, discord, steering, super-dense coding and many other features. We do not, however, get contextuality or non-classical correlations, because ultimately the underlying event algebra of the system is Boolean. These features indicate something beyond merely some unconsidered irremovable back reaction, and they form the core of the differences between measurement in quantum theory and in a classical theory, when each is viewed as a probability theory.
 
  • #153
DarMM said:
No, I still don't think this is quite right, and it's not just a matter of assuming an approximation of Platonic nondisturbance. Robert Spekkens and others have investigated a classical theory with fundamental disturbance, where the idealized notion of no back reaction is abandoned: so-called epistemically restricted classical theories.
https://arxiv.org/pdf/1409.5041.pdf ? I don't see there a chaotic deterministic dynamics between system and detector giving rise to stochastic measurements. Instead, both probability and what can be measured are input by hand in a purely Platonic way - i.e., axiomatically, by making assumptions.
DarMM said:
We do get non-commutativity of measurements, entanglement, discord, steering, super-dense coding and many other features. We do not, however, get contextuality or non-classical correlations, because ultimately the underlying event algebra of the system is Boolean. These features indicate something beyond merely some unconsidered irremovable back reaction, and they form the core of the differences between measurement in quantum theory and in a classical theory, when each is viewed as a probability theory.
Of course the differences between QM and CM must show up somewhere. But at least this shows the situation is not so simple.

In any case one needs extraneous structure beyond what is given by a model of Nature as such (i.e., before physicists tamper with it by installing a cut separating systems and detectors). This was my primary claim against yours.

In the classical case one can dispense with separately specifying a Boolean subalgebra because there is already one naturally intrinsically given. But in QM, decoherence also seems to specify a natural intrinsically given Boolean subalgebra (once the Heisenberg cut is made); cf. Schlosshauer's recent article.
 
  • Like
Likes akvadrako
  • #154
What then is the difference between Quantum and Classical probability in your view? That is in views where we are not taking the quantum state as representational.
 
  • #155
A. Neumaier said:
In the classical case one can dispense with separately specifying a Boolean subalgebra because there is already one naturally intrinsically given. But in QM, decoherence also seems to specify a natural intrinsically given Boolean subalgebra (once the Heisenberg cut is made); cf. Schlosshauer's recent article.
Yes, but decoherence only gives it after an appropriate device is included in the total system. Whereas in classical mechanics each system has a Boolean algebra of properties intrinsically.
 
  • #156
DarMM said:
What then is the difference between Quantum and Classical probability in your view? That is in views where we are not taking the quantum state as representational.
Quantum probability is a formal extension of classical probability obtained by dropping the commutative law in the algebraic formulation of classical probability theory (as given by Peter Whittle). Their notions of expectation are essentially the same, given by states (continuous monotone linear functionals). But the quantum notion of probability is quite different: In place of a classical measure (a mapping from the commutative algebra of measurable sets to [0,1] defining the probability, uniquely determined by the state) it has a quantum measure (a pair consisting of a state and a POVM, jointly defining the probability). Thus one needs a state and a context (given by the POVM) to define probabilities, making the concept of probability context-dependent. (Special cases are the projective quantum measures where the POVM consists of a system of orthogonal projectors. The latter are the only quantum measures you seem to consider in your discussions of the classical-quantum difference, though it is well-known that they cannot describe lots of stochastic situations in quantum experiments.)

But I sharply distinguish between these mathematical notions and quantum and classical physics. The latter are about dynamical systems giving rise to measurement questions and associated probability statements. The dynamics should give rise through an appropriately defined cut (= specification of system, detector, environment) to a definition of measurement results, their relation to the system state, and the appropriate quantum or classical measure describing the measurement statistics, including in the quantum case the proper context. The quest for achieving this constitutes the measurement problem. It is nontrivial both in the classical and in the quantum case.
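As a minimal numerical sketch of the "state plus context" point above (the specific state and POVMs below are arbitrary illustrative choices, written with plain 2x2 matrices and no external libraries): the same qubit state assigns different probability distributions p_i = Tr(ρ E_i) once different POVMs are chosen:

```python
# Probabilities in quantum theory need a state AND a context (POVM):
# p_i = Tr(rho E_i). Plain 2x2 matrices, standard library only.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def probs(rho, povm):
    """Born rule for a POVM: p_i = Tr(rho E_i)."""
    return [trace(matmul(rho, E)) for E in povm]

# State: rho = |0><0|
rho = [[1, 0], [0, 0]]

# Context 1: a sharp sigma_z measurement (a PVM, the projective special case)
povm_z = [[[1, 0], [0, 0]], [[0, 0], [0, 1]]]

# Context 2: a sharp sigma_x measurement, effects |+><+| and |-><-|
povm_x = [[[0.5, 0.5], [0.5, 0.5]], [[0.5, -0.5], [-0.5, 0.5]]]

for name, povm in [("sigma_z", povm_z), ("sigma_x", povm_x)]:
    # the effects of a POVM must sum to the identity
    total = [[sum(E[i][j] for E in povm) for j in range(2)] for i in range(2)]
    assert total == [[1, 0], [0, 1]]
    print(name, probs(rho, povm))

# prints:
# sigma_z [1, 0]
# sigma_x [0.5, 0.5]
```

The state alone fixes nothing; only the pair (state, POVM) yields a probability distribution, which is the context-dependence described above.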
 
Last edited:
  • #157
I think that the occurrence of unique measurement results is just a fundamental empirical fact, which cannot be explained by simpler facts or sophisticated theories. In theory it's just to be assumed as a postulate in both classical and quantum theory. Of course, both classical and quantum theory are dynamical theories, describing the evolution of the state of a system with time, given the dynamics (i.e., the Hamiltonian) and an initial condition. The main difference is just the notion of state, which reflects the classical-deterministic and the quantum-deterministic description with the latter being more fundamental and the former being derivable as an effective description of macroscopically relevant observables in a statistical sense.
 
  • Like
Likes PeroK and DarMM
  • #158
A. Neumaier said:
it has a quantum measure (a pair consisting of a state and a POVM, jointly defining the probability). Thus one needs a state and a context (given by the POVM) to define probabilities, making the concept of probability context-dependent
So far this seems very similar to what I was saying. You need a POVM choice (in addition to the state) to have a well defined probability model, unlike the classical case where no such choice is needed.

A. Neumaier said:
Special cases are the projective quantum measures where the POVM consists of a system of orthogonal projectors. The latter are the only quantum measures you seem to consider in your discussions of the classical-quantum difference
Where did I only consider PVMs? I spoke about POVMs from the very beginning. The only point where I mentioned PVMs is when you said Born et al didn't know about POVMs.
 
  • #159
vanhees71 said:
I think that the occurrence of unique measurement results is just a fundamental empirical fact, which cannot be explained by simpler facts or sophisticated theories.
But what makes a measurement device (considered as a quantum system) so special that one can read off from it unique measurement results - in spite of it being represented by a superposition in standard quantum measurement theory? Usual quantum systems do not behave this way, so there must be something special about measurement devices...
 
  • #160
DarMM said:
So far this seems very similar to what I was saying. You need a POVM choice (in addition to the state) to have a well defined probability model, unlike the classical case where no such choice is needed.
You were referring to ''quantum theory'' (and ''classical theory'') rather than ''quantum probability'' (and ''classical probability''), terms that have quite different meanings to me. Now I realized that with ''quantum theory viewed as a probability theory'' you just meant the purely mathematical discipline of noncommutative probability theory (46L53 in the Mathematical Subject Classification) and nothing else. In the interpretation of the latter we completely agree.

But ''quantum theory'' and ''classical theory'' are in my view dynamical theories; to apply notions of probability to them one needs additional specifications. These determine the context, and once the context is fixed, quantum probability restricts to classical probability. Thus I don't consider the specific noncommutative aspects of quantum probability a ''seismic shift'' (your #147).

Rather, the seismic shift is that one expects quantum physics to be consistent on all scales while one expects classical physics to be consistent only at macroscopic scales, obviating the quest for a microscopic description of the measurement process. Thus at present the requirements for a consistent quantum theory are far more stringent than those for a consistent classical theory. This strengthening of the requirements is the seismic shift that created the measurement problem.

If one strengthens the requirements for a consistent classical theory in the same way, one ends up with the question of how a detector subsystem of a large classical chaotic system can acquire information about a disjoint subsystem to be measured. Trying to answer this poses a classical measurement problem with essentially the same difficulties as in the quantum case. The differences in the probability calculus appear minor from this perspective.

DarMM said:
Where did I only consider PVMs. I spoke about POVMs from the very beginning. The only point where I mentioned PVMs is when you said Born et al didn't know about POVMs.
Ah yes, in post #92. Sorry; the discussion extended over a time span longer than my short term memory. Since on the classical side you always referred to the Boolean algebra I had thought without checking you assumed in the quantum case a Boolean subalgebra as well to get a classical subsetting.
 
  • Like
Likes DarMM
  • #161
Elias1960 said:
What I reference here as the interpretation of the Einstein equations would be the limit Ξ, Υ → 0 of the equations of that theory.

Ok, that makes it clearer what you are actually talking about, and it is not "Lorentz ether theory". What you are talking about is a different theory that makes different empirical predictions, but just happens to have standard GR as an approximation in an appropriate limit. Making measurements outside the domain in which that approximation is valid would test this theory against, for example, standard GR not considered as an approximation to anything else.
 
  • #162
PeroK said:
The difficulty is not to "predict" the role of the die after it's been thrown, but before it's been thrown!
It would be difficult of course, but surely if all the relevant information was known then it would be possible. The difficulty is down to determining the values of all the relevant information, no?
 
  • #163
Lynch101 said:
It would be difficult of course, but surely if all the relevant information was known then it would be possible. The difficulty is down to determining the values of all the relevant information, no?
Possibly. At the very least it becomes practically impossible if a human is involved. Also, you can have strange loops in this case. Suppose you claim to be able to predict what number I will write down next. That only works if you don't tell me. Otherwise, I can use my "free will" to write something else down.

That's also why stock market predictions are impossible if the information is made public. You get feedback loops.
 
  • Like
Likes DarMM and Lynch101
  • #164
bhobba said:
Determinism and choice are mutually contradictory.

Think carefully. Suppose I describe "free choice" this way: "free choice" just means you determine what your actions are, not anything else. That seems to capture our intuitive sense of what "free choice" is: after all, "free choice" doesn't mean you just do some random thing, it doesn't mean you choose A and then do B or C or D; it means you choose what you do. But that means your choice has to determine what you do.

But how could you even have this kind of free choice in a world that wasn't deterministic, at least at whatever level of description is relevant for "free choice"? In a world without deterministic laws, or at least laws that were deterministic to a good enough approximation at the level of your free choice, having "free choice" wouldn't matter, because any effects of your free choice would soon be overwhelmed by random fluctuations.

So the key question is really, what are "you"? What is this "you" that has to determine your actions in order for "you" to have free choice? In a deterministic universe (or at any rate one that is deterministic to a good enough approximation at the appropriate level of description), "you" are a particular set of deterministic processes that go on in a particular physical subsystem (your brain and body). As long as those processes are what determine your actions, you have free choice.

Many people object to this concept of free choice, but as the philosopher Daniel Dennett has pointed out in several of his books and many articles on the topic, this concept of free choice gives you everything about free will that's actually worth wanting. You just have to be clear on what "you" actually are.
 
  • Like
Likes Lynch101
  • #165
PeroK said:
Possibly. At the very least it becomes practically impossible if a human is involved. Also, you can have strange loops in this case. Suppose you claim to be able to predict what number I will write down next. That only works if you don't tell me. Otherwise, I can use my "free will" to write something else down.

That's also why stock market predictions are impossible if the information is made public. You get feedback loops.
Absolutely, it is impossible in a practical sense.

If I were to predict what number you were to write down though, I would also predict my telling you a number, and so the number I tell you might not necessarily be the one I predict, unless I predict that you will think that I am telling you the wrong number and write down the number I tell you, thinking that the number I tell you is the one number it is guaranteed not to be...

my head hurts...

I think Daniel Dennett had a term for that, something like 3rd, 4th, 5th order intentionality or something.
 
  • #166
Lynch101 said:
Absolutely, it is impossible in a practical sense.

If I were to predict what number you were to write down though, I would also predict my telling you a number, and so the number I tell you might not necessarily be the one I predict, unless I predict that you will think that I am telling you the wrong number and write down the number I tell you, thinking that the number I tell you is the one number it is guaranteed not to be...

my head hurts...

I think Daniel Dennett had a term for that, something like 3rd, 4th, 5th order intentionality or something.
It's not particularly relevant except to note that the behaviour of complex systems is fundamentally different from the behaviour of simple systems.
 
  • Like
Likes Lynch101
  • #167
PeroK said:
Let me try again to explain the difference between cause and correlation.

Imagine first that everything in a system is fully determined. The universe since the big bang say.

Now imagine that you have an experiment with two people involved. There are two boxes. The first person puts a prize in one of the boxes. The second person gets to open a box and try to win a prize.

Now, let's assume you, at the big bang, can predict exactly what everyone will do in this experiment. Everything to you is completely predictable. But, what you predict will be a mixture of all four possibilities. Prize in box 1, box 1 is chosen; prize in box 1, box 2 is chosen; prize in box 2, box 1 is chosen; prize in box 2, box 2 is chosen.

To you this was all totally predictable. But, it still represents "random", "uncorrelated" results. There is no correlation between the prize being in box 1 and box 1 being chosen etc.

Now, suppose we find that there is a correlation. Let's assume the prize is never won. It's not enough that the universe is fully deterministic for this to happen. There would need to be a causal chain that enforces opposite choices. But what law of nature can enforce that for complex systems like human beings 14 billion years later? Especially if this always happens with any two people, in any place, at any time.

The answer is no normal law of nature can explain that. You argue as though simple determinism could produce that result. It can't.

That's why determinism cannot explain QM correlations.
Thanks PeroK again, this is very helpful. I'm pretty sure I understand the notion of correlation on a basic level at least - the example I gave to Demystifier was the correlation between flowers blooming in summer and the number of people wearing sunglasses. It's probably more the issue of correlation as it pertains to Bell's theorem that I am unclear about.

Am I correct in saying that we wouldn't expect the outcomes of measurements made in the Bell tests to be correlated, but the results show that they are not statistically independent, and so there is a higher level of correlation than if the measurements were just random? The question then is: what is the cause of this correlation, with Bell's theorem implying that we must give up one of the following:
1) Realism
2) Locality
3) Free Will (statistical independence)

I'm not intending to argue the point that simple determinism can explain the QM correlations but in discussing the issue of Free Will (however it is interpreted) it seems to get juxtaposed with SuperDeterminism (SD). I'm wondering how SD explains the correlations. Is it by saying that they have a common cause, that has its origins at the Big Bang? Someone mentioned that SD doesn't try to explain the correlations but I'm not sure how to interpret that.

I tend to interpret the term correlation in the context of the adage "correlation does not imply causation", but I also think of it in terms of a relationship, that two "things" are related in some way. In the case of my example of correlation above, the number of flowers that bloom in summer and the number of people wearing sunglasses are correlated, but one doesn't cause the other. They do, however, share a common cause, namely the Sun's rays, so they are related in that way.
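The common-cause picture in the sunglasses example can be sketched numerically. Below is a toy model (purely illustrative; the variable names and numbers are invented, not taken from any data) where a shared "sun intensity" drives both quantities, producing a strong correlation even though neither causes the other:

```python
import random

random.seed(0)

# Shared common cause: daily sun intensity (arbitrary units).
sun = [random.uniform(0.0, 10.0) for _ in range(1000)]

# Neither quantity causes the other; both respond to the sun,
# plus independent noise.
flowers = [s + random.gauss(0.0, 1.0) for s in sun]
sunglasses = [s + random.gauss(0.0, 1.0) for s in sun]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Strong correlation despite no causal link between the two lists.
r = pearson(flowers, sunglasses)
print(round(r, 2))
```

Deleting the shared `sun` term (giving each list its own independent values) would drive the correlation to roughly zero, which is the whole content of "correlation does not imply causation".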

When I think of SD I tend to imagine a giant set of dominoes stretching all the way back to the big bang. An incredibly complicated and intricate set of dominoes which are, in practice, unpredictable but which are entirely deterministic. I imagine those dominoes falling in such a way that they always lead to the case where the person chooses the wrong box. It would appear like an enormous coincidence and defy all explanation, but it would be completely deterministic.

Is that an accurate characterisation of SD or am I missing something along the way?
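The domino picture can be caricatured in a few lines of code. This is a toy sketch, not a physical model, and every name in it is made up: a single deterministic "initial state" stands in for the conditions at the big bang, and both the prize placement and the choice are functions of that same state, rigged so the choice always misses.

```python
import hashlib

def initial_state(run: int) -> int:
    # Stand-in for "conditions at the big bang": one deterministic
    # number from which everything downstream follows.
    digest = hashlib.sha256(run.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big")

def prize_box(state: int) -> int:
    # The first person's placement: a deterministic function of the state.
    return state % 2

def chosen_box(state: int) -> int:
    # Superdeterministic rigging: the "free" choice is another function
    # of the very same state, correlated so as to always miss.
    return 1 - (state % 2)

# Every run looks like a free choice, yet the prize is never won.
wins = sum(prize_box(initial_state(n)) == chosen_box(initial_state(n))
           for n in range(10_000))
print(wins)  # 0: perfect anti-correlation from a common deterministic past
```

A deterministic universe without such rigging would instead give roughly 50% wins; the puzzle PeroK raises is what law of nature could enforce the rigging for arbitrary choosers, anywhere, at any time.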
 
  • #168
PeterDonis said:
These claims about macroscopic processes like shuffling cards, rolling dice, etc., are not necessarily true, because it's highly likely that quantum indeterminacy does not play any role in the outcome, unlike the case of, say, a Stern-Gerlach measurement of a particle's spin. It's entirely possible that, for example, the process of die rolling is insensitive enough to its exact initial conditions that an accurate enough measurement of those initial conditions could allow us to predict the result.

The only analysis I've seen of flipping a coin came to the opposite conclusion.
 
  • #169
Lynch101 said:
Is that an accurate characterisation of SD or am I missing something along the way?
That's correct.
 
  • Like
Likes Lynch101
  • #170
vanhees71 said:
I think that the occurrence of unique measurement results is just a fundamental empirical fact, which cannot be explained by simpler facts or more sophisticated theories. In the theory it simply has to be assumed as a postulate, in both classical and quantum theory.

Empirically, all we know is that each individual observer only observes unique results. The idea that other results were not observed by observers we will never come into contact with is an assumption, and it doesn't seem warranted. Indeed it seems arbitrary and unneeded.
 
Last edited:
  • #171
akvadrako said:
The only analysis I've seen of flipping a coin

Please give a reference.
 
  • #173
PeterDonis said:
In a world without deterministic laws, or at least laws that were deterministic to a good enough approximation at the level of your free choice, having "free choice" wouldn't matter, because any effects of your free choice would soon be overwhelmed by random fluctuations.
Just to say, and I really have no developed view on this, this sort of view treats "randomness" as something ontic that can overwhelm choice, whereas some people see probability as something epistemic that one autonomous object with choice has about another autonomous object with choice when they interact. You'll see some discussion about this in Fuchs's email collection in https://arxiv.org/abs/1405.2390. See for example the exchanges with Terry Rudolph and Marcus Appleby. It comes up a few other times; see the topic index.

It's a long, long read though!
 
  • #174
akvadrako said:

Their analysis of the coin flip process looks inconsistent to me; they claim the relevant fluctuations are in polypeptides, but they use the numbers for water. Using numbers for polypeptides should make ##n_Q## larger; an increase of only a factor of 10 in ##r## and ##l## is sufficient for ##n_Q > ##, if ##\Delta b## is kept the same as for water; if ##\Delta b## is decreased as would be expected for a polypeptide whose mass is two or more orders of magnitude larger than that of a water molecule (##\Delta b## goes roughly as the inverse cube root of the mass), ##n_Q## gets even larger.
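The cube-root scaling in the last step can be checked with one line of arithmetic. This uses only the mass ratio quoted in the post ("two or more orders of magnitude"); the factor is illustrative, not a re-derivation of the paper's numbers:

```python
# If Delta-b scales roughly as m**(-1/3), a polypeptide two orders of
# magnitude heavier than a water molecule has Delta-b smaller by:
mass_ratio = 100.0
shrink = mass_ratio ** (1.0 / 3.0)
print(round(shrink, 2))  # ≈ 4.64
```

A smaller ##\Delta b## feeds directly into a larger ##n_Q##, which is the direction of the inconsistency being claimed.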
 
  • Like
Likes akvadrako
  • #175
DarMM said:
this sort of views "randomness" as something ontic that can overwhelm choice

That's one alternative covered by what I said, yes: basically that the fundamental laws are not deterministic, and their non-determinism is so strong that it prevents us from controlling what we do in any meaningful way.

The other alternative is that the fundamental laws are deterministic but their dependence on the exact initial conditions is so sensitive that our inability to control the exact initial conditions means that we cannot control what we do in any meaningful way.

Neither of these alternatives seems to be true of our actual universe: we do seem to be able to control what we do in meaningful ways. My point was simply that that fact alone implies, if not fundamental determinism, at least determinism for practical purposes in the domain of our actions.
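The second alternative, deterministic laws with sensitive dependence on initial conditions, is easy to illustrate with a standard toy system. The logistic map at r = 4 is a textbook chaotic example; nothing here is specific to human action, it just shows how determinism without control of exact initial conditions fails to give control of outcomes:

```python
def logistic(x: float) -> float:
    # Fully deterministic rule; no randomness anywhere.
    return 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10  # two initial conditions differing by 10^-10
max_gap = 0.0
for _ in range(100):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

# The trajectories diverge to order unity within ~40 steps: determinism
# alone does not imply controllability without control of the exact
# initial conditions.
print(max_gap > 0.1)  # prints True
```

Our ability to act reliably therefore suggests the laws governing our actions are not only (approximately) deterministic but also sufficiently insensitive to uncontrollable details, which is the point being made above.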
 
  • Like
Likes mattt
