Is Free Will a Foundational Assumption in Quantum Theory?

In summary, the "free will" assumption is not a foundational assumption of QM; it is an assumption of the scientific method. The superdeterminist alternative to free will undermines all of science.
  • #176
Ah, I see what you mean now. The world around you has to be somewhat deterministic in the "region" into which your choices propagate; otherwise your choices would just get wiped out, or you wouldn't realistically be able to judge the consequences of anything.

That's a very interesting point: Supposing you have Free Will and aren't subject to determinism in some way, the world of objects around you has to be somewhat predictable in order for you to exercise that Free Will in any meaningful way.
 
  • #177
DarMM said:
That's correct.
Ah, I see.

I've heard people refer to the conspiratorial nature of SD and some suggest that it undermines scientific inquiry. Are there other objections to it?
 
  • #178
Lynch101 said:
Ah, I see.

I've heard people refer to the conspiratorial nature of SD and some suggest that it undermines scientific inquiry. Are there other objections to it?
It's very hard to actually construct a superdeterministic theory, since you need to be able to show the initial state has all these consequences and that it is a natural initial state in some sense without fine tuning.
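As a toy illustration of why the fine-tuning worry bites, here is a minimal Python sketch (not any published superdeterministic model): once the hidden variable is allowed to depend on both measurement settings, i.e. once measurement independence is dropped, reproducing the quantum singlet statistics is trivial. The hard part, which this toy completely dodges, is deriving that settings-outcome correlation from a natural initial state.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(a, b):
    """One run of the toy model: outcomes are drawn with full knowledge
    of both settings, tuned so that E(a, b) = -cos(a - b), the singlet
    correlation. This is exactly the step a genuine superdeterministic
    theory would have to justify from initial conditions without fine
    tuning."""
    p_same = (1.0 - np.cos(a - b)) / 2.0  # P(the two outcomes agree)
    alice = rng.choice((-1, 1))
    bob = alice if rng.random() < p_same else -alice
    return alice, bob

def E(a, b, n=100_000):
    """Monte Carlo estimate of the correlation E(a, b)."""
    return sum(x * y for x, y in (trial(a, b) for _ in range(n))) / n

# Standard CHSH settings (radians).
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(f"|S| = {abs(S):.3f}  (local-realist bound 2, quantum value ~2.828)")
```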
 
  • Like
Likes Lynch101
  • #179
DarMM said:
Supposing you have Free Will and aren't subject to determinism in some way, the world of objects around you has to be somewhat predictable in order for you to exercise that Free Will in any meaningful way.

Yes, and it has to be predictable in both directions, so to speak. Not only do you have to be able to control what you do, you have to be able to rely on what your senses tell you. In a world that is not deterministic to a good enough approximation in the relevant domain, you won't be able to do that.

And it goes even further than that. You have to be able to count on your brain processes to be reliable. That means your brain processes can't be randomly jumping around all the time; they have to be reasonably related to your sensory inputs and your action outputs. Otherwise "you" won't even be a coherent thinking thing.

When you fully unpack all this, the idea that "free will" in any meaningful sense involves not being subject to determinism begins to look pretty hopeless.
 
  • Like
Likes mattt and Lynch101
  • #180
Lynch101 said:
Would the choices of which observable is to be measured require a common cause in order to be considered correlated? Is that essentially the position superdeterminism implies/requires?
No, superdeterminism does not say that, but it also does not deny that.
 
  • Like
Likes Lynch101
  • #181
Demystifier said:
No, superdeterminism does not say that, but it also does not deny that.
It might be my interpretation of the term "correlation" as 'a mutual relationship or connection between two or more things."
Google Dictionary

I imagine two things which have such a mutual relationship or connection would display statistical interdependence or would not be statistically independent.

A common cause would constitute a common relationship or connection, but would it account for statistical interdependence?
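To put the question in probability terms: two variables are statistically independent when ##P(a,b) = P(a)P(b)## and correlated (statistically interdependent) otherwise, and a common cause ##\lambda## in Reichenbach's sense is one that screens the correlation off:
$$P(a,b \mid \lambda) = P(a \mid \lambda)\,P(b \mid \lambda).$$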
 
  • #182
bhobba said:
That is a logical absurdity. Determinism and choice are mutually contradictory. Determinism means initial conditions determine everything; the concept of choice does not exist. What we do know is that chaos exists, so in practical terms it is impossible to predict everything even if the world is deterministic. We can never know initial conditions with exact accuracy, and those inaccuracies grow to the point where all we can predict is probabilities, just as if the world were probabilistic in the first place.

I agree with this. Determinism is absurd as shown by the Strong Free Will Theorem.

1. If the Experimenter's choice were determined by some mechanism, then probability wouldn't exist at the quantum level. Every experiment supports this free choice, and there's not a shred of evidence for any mechanism that determines the choice of the Experimenter.

2. You can never reduce a quantum system to 1 state prior to measurement which is local to the observer. It's always this state or that state, a 1 and a 0. It's never reduced to a single state, so randomness is fundamental, and this is the antithesis of determinism. This means there isn't one set of initial conditions. The initial conditions of the universe wouldn't be objective but a combination of states that can be in at least 2 different states, up to 10^500 if you accept the String Theory Landscape.

3. Free will is an extension of freedom of choice. All observers, whether human or non-human, have freedom of choice. The freedom-of-choice loophole was closed in the Big Bell Test, and the difference is this: a free choice can occur when a measuring device measures which slit the particle went through in the double-slit experiment.

Free will is me deciding to go to Denny's this morning for breakfast. This is human consciousness making a free choice among a probability distribution of probable states. You can do Bayesian updating based on my history of choices to assign probabilities. Based on my history, when I go out for breakfast, maybe 40% of the time I went to Denny's, 30% of the time to Bob Evans, 20% of the time to a local diner, 4% of the time to McDonald's, 4% of the time to Burger King, and 2% of the time somewhere new.

There's no evidence that there's any mechanism outside of my consciousness that determines which restaurant I will go to when I eat out for breakfast. The most you can do is Bayesian updating based on my history of free-will choices.
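A minimal sketch of that Bayesian updating (assuming a Dirichlet-multinomial model; the visit counts are hypothetical numbers chosen to match the percentages above):

```python
import numpy as np

restaurants = ["Denny's", "Bob Evans", "local diner",
               "McDonald's", "Burger King", "somewhere new"]

# Uniform Dirichlet prior over the six options, updated with
# hypothetical visit counts matching the 40/30/20/4/4/2 percentages.
prior = np.ones(len(restaurants))
counts = np.array([40, 30, 20, 4, 4, 2])
posterior = prior + counts

# Posterior predictive probability for the next breakfast out.
pred = posterior / posterior.sum()
for name, p in zip(restaurants, pred):
    print(f"{name:14s} {p:.3f}")
```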
 
  • Like
Likes Lord Jestocost
  • #183
Demystifier said:
No, superdeterminism does not say that, but it also does not deny that.
Is it not typical in superdeterminism for the choice of observable and the state of the system to be highly correlated due to a common cause (in Reichenbach's sense) in an earlier state of the world?
 
  • #184
PeterDonis said:
When you fully unpack all this, the idea that "free will" in any meaningful sense involves not being subject to determinism begins to look pretty hopeless.
Very interesting. I'm not quite sure of this last point, but plenty of food for thought.
 
  • #185
DarMM said:
Is it not typical in superdeterminism for the choice of observable and the state of the system to be highly correlated due to a common cause (in Reichenbach's sense) in an earlier state of the world?
Well, without an explicit example of a superdeterministic theory, it's hard to tell. But it seems to me that, in principle, the fine tuning of the initial conditions can be a pure coincidence, without an actual common cause.
 
  • #186
PeterDonis said:
Their analysis of the coin flip process looks inconsistent to me; they claim the relevant fluctuations are in polypeptides, but they use the numbers for water. Using numbers for polypeptides should make ##n_Q## larger; an increase of only a factor of 10 in ##r## and ##l## is sufficient for ##n_Q > 1##, if ##\Delta b## is kept the same as for water; if ##\Delta b## is decreased as would be expected for a polypeptide whose mass is two or more orders of magnitude larger than that of a water molecule (##\Delta b## goes roughly as the inverse cube root of the mass), ##n_Q## gets even larger.

Thanks for the review. This is the only paper I've seen where they try to do it. Indeed it could depend a lot on the specifics of biology and even the psychology of how much energy the brain decides to use and many other factors. But if even one of those factors depends on amplifying a quantum event then the result will be random. It seems like getting rid of all the quantum randomness would require every system in the path to have some kind of dynamics that self-corrects to keep the outcomes in discrete peaks.
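As a rough arithmetic check of the scaling in the quote above, using only the stated ##\Delta b \propto m^{-1/3}##: for a polypeptide roughly ##10^2## times the mass of a water molecule,
$$\frac{\Delta b_{\text{poly}}}{\Delta b_{\text{water}}} \approx \left(10^{2}\right)^{-1/3} \approx \frac{1}{4.6},$$
so on top of the factor-of-10 increases in ##r## and ##l##, the smaller ##\Delta b## pushes ##n_Q## higher still, consistent with the quoted conclusion.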
 
  • #187
Demystifier said:
Well, without an explicit example of a superdeterministic theory, it's hard to tell. But it seems to me that, in principle, the fine tuning of the initial conditions can be a pure coincidence, without an actual common cause.
In 't Hooft's theory there basically is one, but I see what you mean now. It would be an incredible coincidence of course, but not logically excluded as such.
 
  • Like
Likes Demystifier
  • #188
DarMM said:
Another system constitutes the POVM selected for the system under study. Only a POVM provides a well-defined statistical model; the full algebra of projectors does not.
DarMM said:
In quantum theory viewed as a probability theory, due to the non-Boolean structure, we do not. Some device must be present to define the outcome space. [...]
So quantum theory provides a stochastic description of a system-external system interaction when supplied with a choice of external system, but it is intrinsically incapable of modelling that choice of external system.
DarMM said:
Robert Spekkens and others have investigated a classical theory with fundamental disturbance, where the idealized notion of no back-reaction is abandoned. So-called epistemically restricted classical theories.
We do get non-commutativity of measurements, entanglement, discord, steering, super-dense coding and many other features. We do not however get Contextuality and Non-classical correlations, because ultimately the underlying event algebra of the system is Boolean.
DarMM said:
So far this seems very similar to what I was saying. You need a POVM choice (in addition to the state) to have a well defined probability model, unlike the classical case where no such choice is needed.
A. Neumaier said:
Since on the classical side you always referred to the Boolean algebra, I had thought, without checking, that you assumed a Boolean subalgebra in the quantum case as well to get a classical subsetting.
Note that restricting the probabilistic framework of noncommutative probability (i.e., only state + POVM together define probabilities) to the special case of a commutative algebra does not produce noncontextual Kolmogorov probability but a contextual classical probability calculus. Even in the classical case there are POVMs that do not correspond to the Boolean probability. Thus in full generality, commutative probability must also be considered contextual, if considered on the same footing as the noncommutative version.
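A concrete toy instance of this classical contextuality (a minimal sketch with made-up detector numbers): a two-outcome "fuzzy" measurement on a classical bit whose effects are diagonal, hence commuting, yet are not projectors, so they correspond to no Boolean yes/no partition of the two classical states.

```python
import numpy as np

# Diagonal (commuting) effects of an unsharp two-outcome measurement
# on a classical bit: an 80%/70%-reliable detector (made-up numbers).
E_yes = np.diag([0.8, 0.3])
E_no = np.diag([0.2, 0.7])

assert np.allclose(E_yes + E_no, np.eye(2))   # completeness: a valid POVM
assert not np.allclose(E_yes @ E_yes, E_yes)  # not a projector, so not Boolean

rho = np.diag([0.5, 0.5])  # classical state: a fair coin
p_yes = np.trace(rho @ E_yes)
print(f"P(yes) = {p_yes:.2f}")  # 0.55: well-defined only given this POVM choice
```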
 
  • Like
Likes vanhees71 and bhobba
  • #189
Demystifier said:
Well, without an explicit example of a superdeterministic theory, it's hard to tell. But it seems to me that, in principle, the fine tuning of the initial conditions can be a pure coincidence, without an actual common cause.
Could the cause of the big bang be considered a common cause?
 
  • #190
Elias1960 said:
To justify proposals to reject such old common sense notions like the existence of space and time, one needs a serious justification. Extraordinary claims require extraordinary evidence. This extraordinary evidence would have to contain something close to impossibility theorems for theories with classical space and time. At least there should be quite obvious serious problems for any theory with classical space and time, with no plausible chance to solve them. This is certainly not the actual situation. Such theories exist, have been published, they follow a straightforward path known already by Lorentz. Simply not liking them because a curved spacetime is sort of more fascinating is not enough, but nothing better has been proposed yet against them.
With a preferred frame of reference, one would expect objects on a flying jetliner to behave as if they were in motion. But what we observe is not this: e.g., if you drop a ball onboard an airplane flying at a constant speed of 550 mph, it falls straight down as if the plane were completely stationary, and by all accounts it is stationary... wrt the ground. This is what relativity says: all frames are completely equal. This is also Einstein's famous elevator thought experiment.

How would you explain this experiment if GR were wrong? Are the ball and feather 'moving'?

 
  • #191
A. Neumaier said:
Note that restricting the probabilistic framework of noncommutative probability (i.e., only state + POVM together define probabilities) to the special case of a commutative algebra does not produce noncontextual Kolmogorov probability but a contextual classical probability calculus. Even in the classical case there are POVMs that do not correspond to the Boolean probability. Thus in full generality, commutative probability must also be considered contextual, if considered on the same footing as the noncommutative version.
Certainly, the resulting classical probability model is contextual. Streater makes similar remarks in his book "Lost Causes in and beyond Theoretical Physics" in Chapter 6.
 
  • Like
Likes bhobba
  • #192
PeterDonis said:
Neither of these alternatives seems to be true of our actual universe: we do seem to be able to control what we do in meaningful ways. My point was simply that that fact alone implies, if not fundamental determinism, at least determinism for practical purposes in the domain of our actions.

Seems to me there are regions of the world (all physics) where choice “propagates” and regions where it’s more like the first two.

I’m still not clear how one knows the difference between complexity that seems probabilistic and something truly indeterminate.

By “the world” I mean the terrain of all things that could be said to be willful or chosen, so that's sort of everything...

Does a rock have will? Does a complex molecule, RNA, a virus, a planet, an ant, a country? “Will” is distinguishable from inertia or other physics how?
 
  • #193
Sorry to mention a movie on a serious thread, but we've all seen “Ex Machina”, right? Nice little movie about “Free Will”.

It’s like... the pinnacle of Human poetry (esp if you favor minimalist style) while also the best ever oxymoron.
 
  • #194
A. Neumaier said:
But what makes a measurement device (considered as a quantum system) so special that one can read off from it unique measurement results - in spite of it being represented by a superposition in standard quantum measurement theory? Usual quantum systems do not behave this way, so there must be something special about measurement devices...
Measurement devices are nothing special. E.g., for a position measurement you merely need a photoplate, where the photons/particles leave a spot through some interaction inducing a chemical reaction, or a photomultiplier working via the photoelectric effect for photons, or something similar for particles. It's all described by the interactions of the measured object with the device's atoms/molecules. It's an empirical fact that the outcomes are as expected by the probabilistic predictions of QT: if you have prepared a single-photon state, at most one spot is blackened when the photon hits the photoplate, and on repeating the experiment with identically prepared single photons, the frequency of hits at each spot is well estimated, according to the usual statistical laws, by the probabilities (probability distributions) predicted by QT. As far as I can see there's nothing deeper to it. That's the best "explanation" (or rather "description") we have for the behavior of a single photon: we cannot predict where it hits the photoplate; we only know the probability with which it hits each spot, and the locations of the spots are resolved well enough macroscopically to allow statistical analysis. In this sense there's no logical argument forbidding the conclusion that nature is, on a fundamental level, random in exactly the way described by QT.

I don't understand why it is claimed that this minimal statistical interpretation doesn't provide "enough ontology" and why the PBR theorem should forbid this interpretation of the quantum state (as discussed in another thread in the foundations forum). Of course, the quantum state is not purely epistemic, but for the single system it formally describes a class of preparation procedures. The expected probabilistic outcome of the measurement is described using Born's rule, via the analysis of the interaction of the measurement device with the measured object, and the measurement device is constructed so as to measure more or less accurately some observable (be it in the von Neumann PV or the more general POVM sense).

The "ontology" the simply is that the "primitive phenomena" are random and can only be described by corresponding probabilistic rules, which are defined by the quantum formalism. There are state preparations, where some observable is determined, i.e., leads with probability 1 to a certain outcome and then any other observable not compatible with the prepared state is indetermined leading to certain possible outcomes with some probability <1.
 
  • #195
DarMM said:
That's correct.
Thinking more on this. If the "domino" analogy is accurate, is SD not simply the extrapolation of determinism to its logical conclusion then?
 
  • #196
EPR said:
With a preferred frame of reference, one would expect objects flying on a jetliner to behave as if they were in motion.
No, one would have to look at the equations of the particular theory in question instead of expecting whatever. If these equations are the Einstein equations in harmonic coordinates, one would expect the same as in GR. If the description of the interpretation claims that the preferred frame is hidden, then you don't even have to look at the equations, because this verbal description says exactly that one has to expect no difference to GR.
 
  • #197
Lynch101 said:
Could the cause of the big bang be considered a common cause?
It could.
 
  • #198
Demystifier said:
It could.
Not really. Ok, some details:

It could be, but only as a single common cause for everything. So an ideally homogeneous initial temperature could be explained.

But the minor differences in the distribution cannot. They have to be explained by common causes after the BB.

This is essential, because this is what forces us to accept inflation (in the technical sense of ##a''(\tau)>0##, without speculation about the mechanism that would give such a thing). Without inflation, one can compute the maximal size of inhomogeneities that can have a common cause after the BB. And we observe inhomogeneities at much greater distances.
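For reference, the maximal comoving distance over which events after the BB can share a common cause is the particle horizon
$$\chi_{\text{hor}}(t) = \int_0^t \frac{dt'}{a(t')},$$
which stays small under decelerated expansion but is blown up by an early phase with ##a''(\tau)>0##; that is why correlations observed beyond the no-inflation horizon force the conclusion above.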
 
  • #199
Elias1960 said:
No, one would have to look at the equations of the particular theory in question instead of expecting whatever. If these equations are the Einstein equations in harmonic coordinates, one would expect the same as in GR. If the description of the interpretation claims that the preferred frame is hidden, then you don't even have to look at the equations, because this verbal description says exactly that one has to expect no difference to GR.

That hidden preferred frame sounds like a hidden explanation. In the framework of relativity, all laws of physics behave the same way in whatever frame is chosen. This is what the example highlights and what is virtually impossible to explain by other means. If you have a source that explains this ubiquitous hidden preferred frame of reference and how it works, please share.
Why is it that virtually no one has been able to find this frame?
 
  • #200
vanhees71 said:
Measurement devices are nothing special. E.g., for a position measurement you merely need a photoplate,
[...]
I don't understand why it is claimed that this minimal statistical interpretation doesn't provide "enough ontology" [...] The expected probabilistic outcome of the measurement is described using Born's rule, via the analysis of the interaction of the measurement device with the measured object, and the measurement device is constructed so as to measure more or less accurately some observable
The lack of ontology consists in using the above only for the system measurement but not for the meter reading; for the latter you are content with the phenomenological description used by the experimentalists.
As long as one does this there are no problems, since the ontology declares as real only the experimental setting (preparation and measurement). This leaves state and outcome ill-defined: neither the equivalence relation nor the definition of measurement results is specified to an extent that would allow one to simulate both on the theoretical level.
 
  • #201
Lynch101 said:
Thinking more on this. If the "domino" analogy is accurate, is SD not simply the extrapolation of determinism to its logical conclusion then?
I wouldn't say it's a logical conclusion of determinism, just a special case of determinism.
 
  • #202
Jimster41 said:
“Will” is distinguishable from inertia or other physics how?

I would say the complexity of the input data that make a difference to the response, and the complexity of the possible responses. A rock can't, for example, decide it doesn't like someone based on what they posted on Facebook yesterday and post a catty rejoinder.
 
  • Like
Likes Jimster41
  • #203
EPR said:
That hidden preferred frame sounds like a hidden explanation. In the framework of relativity all laws of physics behave the same way in any whatever frame is chosen. This is what the example highlights and what is virtually impossible to explain by other means. If you have a source that explains this ubiquitous preferred hidden frame of reference and how it works, please share.
Why is it that virtually no one has been able to find this frame?
The preferred frame is a quite obvious one, the CMBR frame; it is used in cosmology all the time.

How the Einstein Equivalence Principle follows essentially from the "action equals reaction" symmetry of the Lagrange formalism is shown in

Schmelzer, I. (2012). A generalization of the Lorentz ether to gravity with general-relativistic limit, Advances in Applied Clifford Algebras 22, 1 (2012), p. 203-242, arXiv:gr-qc/0205035

The idea is simple, a few lines. Take a theory with preferred coordinates and translational symmetry in these coordinates. Take the Euler-Lagrange equations for the preferred coordinates. They will be conservation laws; this is essentially Noether's theorem. Name the fields which appear in this EMS (energy-momentum-stress) tensor the "gravitational field". Since the equations for the preferred coordinates depend only on the gravitational field, the equations for all other fields will, via "action equals reaction", not depend on the preferred coordinates either. This is the EEP.

So all that can be influenced by the preferred coordinates is the gravitational field. That means the preferred coordinates are visible in the same way as dark matter.
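Schematically (my rendering of the argument, not the paper's notation): if the Lagrangian depends on the preferred coordinates ##X^\mu## only through their derivatives, so that it is symmetric under ##X^\mu \to X^\mu + c^\mu##, their Euler-Lagrange equations
$$\partial_\nu \frac{\partial \mathcal{L}}{\partial\left(\partial_\nu X^\mu\right)} = 0$$
are continuity equations, i.e. conservation laws for the tensor ##T^\nu{}_\mu = \partial \mathcal{L}/\partial(\partial_\nu X^\mu)##, as per Noether's theorem.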
 
  • #204
Elias1960 said:
The preferred frame is a quite obvious one, the CMBR frame

This frame is not preferred by the laws of physics; the laws of physics are the same in other frames.

This frame is only "preferred" by the particular configuration of stress-energy and spacetime geometry in an FRW universe. Any spacetime with any kind of symmetry will have one or more "preferred" frames in this sense, but that is not what "preferred frame" means in discussions of foundations.
 
  • Like
Likes bhobba, weirdoguy and vanhees71
  • #205
PeterDonis said:
This frame is not preferred by the laws of physics; the laws of physics are the same in other frames.
It is not preferred by the laws of GR interpreted with the spacetime interpretation.
PeterDonis said:
This frame is only "preferred" by the particular configuration of stress-energy and spacetime geometry in an FRW universe.
Given that GR is wrong, and wrong precisely in the regime the initial conditions with this preference come from (the Big Bang), one quite plausible idea is that the correct theory has this particular choice of a preferred frame, and that the homogeneous initial conditions we now have to postulate are in fact caused by the preferred frame of that more fundamental theory.
PeterDonis said:
Any spacetime with any kind of symmetry will have one or more "preferred" frames in this sense, but that is not what "preferred frame" means in discussions of foundations.
But observation shows that we live in a universe that has that very particular symmetry which we observe on a large scale. And any discussion of foundations has to take into account that GR, as we know it today, is not the fundamental theory.
 
  • #206
Elias1960 said:
Given that GR is wrong

Are you just taking this as an assumption for the sake of argument, or claiming that it is actually true?
 
  • #207
PeterDonis said:
Are you just taking this as an assumption for the sake of argument, or claiming that it is actually true?
I'm assuming this is actually true, and proven by the singularity theorems, and the general principle that true theories will not have singularities/infinities for physical fields.
Alternatively, you can use the fact that it is a classical theory while our world is quantum as what empirically falsifies classical GR. That there are some researchers who hope that QG may be a theory in which classical GR remains valid is irrelevant; they have yet to deliver, and until they deliver such a QG, their hopes are irrelevant. Semiclassical QFT is not such an example, given that it is simply inconsistent as a theory.

And, of course, I use "true" in a strong sense: "true only approximately in some domain of applicability", in the sense in which Flat Earth theory would count as true on a soccer field, means the theory is wrong.
 
  • #208
You say this leaves state and outcome ill-defined, but you yourself give its very definition, i.e., it's the experimental setting (preparation and measurement). That's all there is from a physical point of view, since physics is about the experimental setting (of course in a wide sense of "preparation and measurement"; e.g., the observation of Jupiter's moons with a telescope by Galilei is also to be considered an "experimental setting", though Galilei of course didn't prepare Jupiter with his moons ;-)).
 
  • Like
Likes bhobba
  • #209
Elias1960 said:
I'm assuming this is actually true, and proven by the singularity theorems, and the general principle that true theories will not have singularities/infinities for physical fields.

GR only predicts singularities for solutions that have particular properties. It does not claim that the solution that describes our actual universe must have those properties. And the current models of our universe that seem to be preferred in cosmology are inflationary models, which violate the premises of the singularity theorems (the energy conditions) during the inflationary epoch, so they don't have to have an initial singularity.

Elias1960 said:
Alternatively, you can use the fact that it is a classical theory while our world is quantum as what empirically falsifies classical GR.

This seems to me to be a much better basis for saying that GR is not exactly correct. (It's worth noting, however, that there are some physicists, such as Freeman Dyson, who have speculated that we might not need a quantum theory of gravity, and that classical GR might in fact be the exactly correct theory of gravity.)
 
  • Like
Likes bhobba
  • #210
vanhees71 said:
You say this leaves state and outcome ill-defined, but you yourself give its very definition, i.e., it's the experimental setting (preparation and measurement). That's all there is from a physical point of view, since physics is about the experimental setting (of course in a wide sense of "preparation and measurement"; e.g., the observation of Jupiter's moons with a telescope by Galilei is also to be considered an "experimental setting", though Galilei of course didn't prepare Jupiter with his moons ;-)).
I claimed that it is ill-defined on the level of theory. This is unlike in classical mechanics, where one usually takes the point of view that a preparation prepares and a measurement records an in principle arbitrarily accurate approximation of the exact value of the position and momentum. Hence one has a well-defined theoretical notion of idealized preparation and measurement. In the quantum case, this is missing.
 
