Quantum mechanics is not weird, unless presented as such

In summary, quantum mechanics may seem weird because of the way it is often presented to the general public. There is a long history of this approach, since weirdness sells better, but it can be an obstacle for those trying to truly understand the subject. The paper referenced in the conversation shows that quantum mechanics can actually be derived from reasonable assumptions, making it not as weird as some may think. However, this derivation reflects one author's view and may not be the complete story. There are also other interpretations of quantum mechanics, such as the ensemble interpretation, which may not be fully satisfactory. Overall, a proper derivation of quantum mechanics must account for all aspects, including the treatment of measurement devices and of the past before measurements.
  • #351
As you know, though, the thermodynamic limit has in some cases shown noncomputability of the spectral gap in quantum many-body theory, which is even worse than nondeterminism, so it's a double-edged sword.
 
  • #352
stevendaryl said:
If you treat Brownian motion using statistical mechanics, then it's deterministic. If you analyze a dust particle suspended in a liquid, your statistical mechanics will give a probability distribution for the location of the particle as a function of time.
That makes it nondeterministic. Once probabilities are the basic quantities, one has a stochastic system. Note that in any classical stochastic system, probabilities have a deterministic dynamics, but they nevertheless describe stochastic, nondeterministic processes.

To go from the probabilities to the actual events is the classical version of collapse; cf. the companion thread. But nobody working on stochastic processes uses that weird language for it.

On the other hand, for a system in equilibrium (which involves a thermodynamic limit), quantum statistical mechanics produces the deterministic equations of equilibrium thermodynamics, where no trace is left of anything probabilistic or stochastic. This is quite unlike Brownian motion, which is about the interaction of a macroscopic fluid and a microscopic 1-particle system, restricted to the microscopic system. Stochasticity characterizes the microscopic world, but is foreign to much of the macroscopic world - even when the latter is described as a quantum system.
 
  • #353
ddd123 said:
shown noncomputability [...] which is even worse than nondeterminism
?

We already cannot compute most things about most classical systems with more than a few degrees of freedom, thus the whole discussion about theoretical limits of computability is moot.
 
  • #354
In quantum theory the probabilities are also deterministic in the sense that the statistical operator and the operators representing observables follow deterministic equations of motion. That doesn't make quantum theory a deterministic theory in the usually understood sense. Determinism means that, as within classical physics, all observables at each time have a determined value and these values change via an equation of motion which lets you know any value at any time ##t>t_0##, if you know these values at a time ##t_0##.
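For concreteness: in the Schrödinger picture the statistical operator obeys the von Neumann equation ##i\hbar\,\partial_t\hat{\rho}=[\hat{H},\hat{\rho}]##, which determines ##\hat{\rho}(t)## uniquely from ##\hat{\rho}(t_0)##. The probabilities ##\mathrm{Tr}(\hat{\rho}\hat{P}_a)## therefore evolve deterministically, even though the individual measurement outcomes do not.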
 
  • #355
A. Neumaier claims that quantum mechanics has no weirdness, despite demonstrations that objects as small as photons can share properties over more than a kilometer in Bell theorem tests. This sort of fuzzyheaded thinking has led to a "mass boson" called the Higgs which is so massive it cannot exist for a fraction of a second, despite the evidence that the Universe has existed for 13 billion years. So the physicists "cook the books" with "virtual particles", and where the claims of "magic" cannot be refuted (as in entanglement), they simply demand it be accepted without explanation. No mechanism, nothing to see here, move along now.

Quantum mechanics isn't weird, but the explanations we have historically accepted are wrong. We will discover better ones.
 
  • #356
A. Neumaier said:
That makes it nondeterministic. Once probabilities are the basic quantities, one has a stochastic system. Note that in any classical stochastic system, probabilities have a deterministic dynamics, but they nevertheless describe stochastic, nondeterministic processes.

Then I misunderstand what you mean about the thermodynamic limit of QFT being deterministic.

A. Neumaier said:
To go from the probabilities to the actual events is the classical version of collapse; cf. the companion thread. But nobody working on stochastic processes uses that weird language for it.

That's because it's pretty clear what the relationship is between the actual events and the statistical model: The actual case is one element of an ensemble of cases with the same macroscopic description. The collapse is just a matter of updating knowledge about which case we are in.

A. Neumaier said:
On the other hand, for a system in equilibrium (which involves a thermodynamic limit), quantum statistical mechanics produces the deterministic equations of equilibrium thermodynamics, where no trace is left of anything probabilistic or stochastic.

I wouldn't say that. Equilibrium thermodynamics can be interpreted probabilistically: the actual system has a probability of [itex]e^{- \beta E_j}/Z[/itex] of being in state [itex]j[/itex], where [itex]E_j[/itex] is the energy of state [itex]j[/itex], and [itex]\beta = \frac{1}{kT}[/itex], and [itex]Z[/itex] is the partition function. (Something more complicated has to be done to take into account continuum-many states in classical thermodynamics...)
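For concreteness, here is a minimal numerical sketch (in Python) of that formula, assuming nothing but a toy three-level spectrum with arbitrary numbers:

[code]
import numpy as np

# Toy three-level spectrum; units chosen so that k = 1 (all values arbitrary)
E = np.array([0.0, 1.0, 3.0])   # energies E_j
T = 2.0                          # temperature
beta = 1.0 / T                   # beta = 1/(kT) with k = 1

weights = np.exp(-beta * E)      # unnormalized Boltzmann factors e^{-beta E_j}
Z = weights.sum()                # partition function Z
p = weights / Z                  # probability of being in state j

print(Z, p, p.sum())             # p.sum() is 1 up to rounding
[/code]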

You can use the equilibrium thermodynamics to compute distributions on particle velocities, and thus to analyze the stochastic behavior of a dust particle suspended in a fluid.
 
  • #357
C Davidson said:
A. Neumaier claims that quantum mechanics has no weirdness, despite demonstrations that objects as small as photons can share properties over more than a kilometer in Bell theorem tests. This sort of fuzzyheaded thinking has led to a "mass boson" called the Higgs which is so massive it cannot exist for a fraction of a second, despite the evidence that the Universe has existed for 13 billion years. So the physicists "cook the books" with "virtual particles", and where the claims of "magic" cannot be refuted (as in entanglement), they simply demand it be accepted without explanation. No mechanism, nothing to see here, move along now.

Quantum mechanics isn't weird, but the explanations we have historically accepted are wrong. We will discover better ones.

I've been one of the ones arguing on the side of QM being weird (or at least, nonlocal), but the stuff that you're saying about the Higgs isn't really relevant to these foundational issues. There is a distinction between the Higgs "field" and the Higgs "particle". The particle is fluctuations in the field, and those fluctuations might be short-lived. But the field itself is stable over billions of years (possibly forever, though it may not be).

Anyway, I think it's important to distinguish between two different kinds of weirdness:
  1. A topic can seem baffling and weird to a novice, because it involves unfamiliar concepts, or because familiar concepts no longer apply. This is a matter of learning the subject thoroughly. Special Relativity seems bizarre to those first exposed to it, but after you become familiar with it, and understand it, much (all?) of the weirdness disappears.
  2. There can be lingering questions about the foundations of a topic, even after someone has thoroughly mastered the topic.
A. Neumaier is claiming that the only weirdness of QM is of type 1: If you understand it in the right way, then it stops being weird. I claim that there is some type 2 weirdness.

There might be unanswered foundational questions about the Higgs or the use of virtual particles in calculations, but I don't think so. I think that the weirdness there is due to lack of understanding of the (very complicated) subject. I think you're talking about type 1 weirdness.
 
  • #358
vanhees71 said:
In quantum theory the probabilities are also deterministic in the sense that the statistical operator and the operators representing observables follow deterministic equations of motion. That doesn't make quantum theory a deterministic theory in the usually understood sense. Determinism means that, as within classical physics, all observables at each time have a determined value and these values change via an equation of motion which lets you know any value at any time ##t>t_0##, if you know these values at a time ##t_0##.

So in what sense is the thermodynamic limit of QFT deterministic?
 
  • #359
stevendaryl said:
That's because it's pretty clear what the relationship is between the actual events and the statistical model: The actual case is one element of an ensemble of cases with the same macroscopic description. The collapse is just a matter of updating knowledge about which case we are in.
Yes, and in the quantum case it is the same, if you drop the word ''macroscopic''.
stevendaryl said:
I wouldn't say that. Equilibrium thermodynamics can be interpreted probabilistically: the actual system has a probability of [itex]e^{- \beta E_j}/Z[/itex] of being in state [itex]j[/itex], where [itex]E_j[/itex] is the energy of state [itex]j[/itex], and [itex]\beta = \frac{1}{kT}[/itex], and [itex]Z[/itex] is the partition function. (Something more complicated has to be done to take into account continuum-many states in classical thermodynamics...)
Equilibrium thermodynamics doesn't have the concept of a partition function. One needs statistical mechanics to relate the former to a probabilistic view of matter.
stevendaryl said:
You can use the equilibrium thermodynamics to compute distributions on particle velocities, and thus to analyze the stochastic behavior of a dust particle suspended in a fluid.
You can use statistical mechanics to do that, but not equilibrium thermodynamics, which is a 19th century classical theory that doesn't have a notion of particles. Statistical mechanics is much more versatile than thermodynamics, as one isn't limited to locally homogeneous substances.
 
  • #360
A. Neumaier said:
Yes, and in the quantum case it is the same, if you drop the word ''macroscopic''.

But that sounds like a hidden-variables theory of the type that is supposed to not exist.

A. Neumaier said:
Equilibrium thermodynamics doesn't have the concept of a partition function. One needs statistical mechanics to relate the former to a probabilistic view of matter.

Okay. I'm lumping thermodynamics and statistical mechanics together.
 
  • #361
stevendaryl said:
So in what sense is the thermodynamic limit of QFT deterministic?
In the sense that it results in 19th century classical thermodynamics. In the latter theory there are known, exact, nonrandom relations between the thermodynamic quantities, and one can predict (from a thermodynamic potential and the values of a few state variables) the results of all reversible changes with certainty. No thermodynamics textbook mentions randomness (unless it refers to an underlying microscopic picture, i.e., to statistical mechanics).
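For example, given the Gibbs free energy ##G(T,p,N)## as thermodynamic potential, the entropy and the volume follow exactly as ##S=-\partial G/\partial T## and ##V=\partial G/\partial p##, with no probabilities appearing anywhere.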
 
  • #362
stevendaryl said:
that sounds like a hidden-variables theory of the type that is supposed to not exist.
Well, I argued that it might be nonlocal hidden variables - namely all those that describe the neglected environment. No Bell-type theorem excludes this possibility, and statistical mechanics demands that these variables must be taken into account. The only open question is whether these abundant hidden variables are enough to explain everything random. My strong suspicion is that they do.
 
  • #363
A. Neumaier said:
Well, I argued that it might be nonlocal hidden variables - namely all those that describe the neglected environment. No Bell-type theorem excludes this possibility, and statistical mechanics demands that these variables must be taken into account. The only open question is whether these abundant hidden variables are enough to explain everything random. My strong suspicion is that they do.

I had to leave the discussion for a while, because I was overly busy with my paying job, so I may have missed something, but it seems to me that taking into account the environment can't possibly resolve the nondeterminism using only unitary evolution. My argument is pretty simple:

Let [itex]|\psi_U\rangle[/itex] be a state (including an electron, a Stern-Gerlach device, and the environment) which leads to measurement outcome "spin-up" for a spin measurement. Let [itex]|\psi_D\rangle[/itex] be a state which leads to measurement outcome "spin-down". Then the state [itex]|\psi_?\rangle = \alpha |\psi_U\rangle + \beta |\psi_D\rangle[/itex] would be a state that would lead to an undetermined outcome to the measurement. Maybe you can argue that there is no way to produce state [itex]|\psi_?\rangle[/itex], but it certainly exists in the Hilbert space, and it's not at all obvious to me that it would be unachievable.
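For what it's worth, the linearity underlying this argument can be checked in a toy Python model; here a random unitary merely stands in for the full (electron + device + environment) dynamics, so this only illustrates the structure of the argument, not any realistic measurement:

[code]
import numpy as np

rng = np.random.default_rng(0)
dim = 8   # toy Hilbert space for electron + device + environment

# Random unitary standing in for the full Schroedinger evolution
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)

psi_up = np.zeros(dim, dtype=complex); psi_up[0] = 1.0      # ends as "spin-up recorded"
psi_down = np.zeros(dim, dtype=complex); psi_down[1] = 1.0  # ends as "spin-down recorded"

alpha, beta = 0.6, 0.8
psi_q = alpha * psi_up + beta * psi_down   # the problematic superposition |psi_?>

# Linearity: the evolved superposition equals the superposition of the evolved states,
# i.e. it remains a superposition of the two macroscopically distinct final states.
print(np.allclose(U @ psi_q, alpha * (U @ psi_up) + beta * (U @ psi_down)))   # True
[/code]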
 
  • #364
stevendaryl said:
it seems to me that taking into account the environment can't possibly resolve the nondeterminism using only unitary evolution. My argument is pretty simple:

Let [itex]|\psi_U\rangle[/itex] be a state (including an electron, a Stern-Gerlach device, and the environment) which leads to measurement outcome "spin-up" for a spin measurement. Let [itex]|\psi_D\rangle[/itex] be a state which leads to measurement outcome "spin-down". Then the state [itex]|\psi_?\rangle = \alpha |\psi_U\rangle + \beta |\psi_D\rangle[/itex] would be a state that would lead to an undetermined outcome to the measurement. Maybe you can argue that there is no way to produce state [itex]|\psi_?\rangle[/itex], but it certainly exists in the Hilbert space, and it's not at all obvious to me that it would be unachievable.
This is a well-known argument, used already long ago by Wigner, I believe.

But it is not valid in my setting: Here, in the algebra of linear operators of some huge, universal Hilbert space, there is a unique density matrix of the universe that describes reality, and all systems that are observable are described by the projections of this universal density matrix to the algebra of linear operators of the tiny Hilbert space describing the observable system under investigation.
Most of the superpositions, while they exist in the tiny Hilbert space, have no relation to the universal density matrix, hence cannot be used to make an argument.
 
  • #365
A. Neumaier said:
This is a well-known argument, used already long ago by Wigner, I believe.

But it is not valid in my setting, where, in some huge, universal Hilbert space, there is a unique density matrix of the universe that describes reality, and all systems that are observable are projections of this universal density matrix to the tiny Hilbert space describing the microscopic system under investigation.
Most of the superpositions, while they exist in the tiny Hilbert space, have no relation to the universal density matrix, hence cannot be used to make an argument.

I think I understand your point, but it still seems like a tremendous leap. The same argument I made earlier can be lifted up to the level of the universal density matrix, I would think. Why does the universal density matrix necessarily lead to definite outcomes to all possible experiments? Is there a way to prove this for a typical density matrix, or are you assuming some kind of "fine-tuning" of the initial density matrix to ensure that it's true?

Mathematically, I think what you're saying might be something along the lines of the following:

Let [itex]\rho[/itex] be the density matrix of the universe at some time (let's pick a frame/coordinate system so that we can talk about the state at one time). Then the claim might be that there is a decomposition of [itex]\rho[/itex] into the form [itex]\rho = \sum_j p_j |\psi_j\rangle \langle \psi_j |[/itex] where [itex]\psi_j[/itex] is an orthonormal basis such that for each [itex]j[/itex], all macroscopic quantities (such as the outcomes of measurements) have definite values. I don't see why that should be the case.

(You can always write [itex]\rho = \sum_j p_j |\psi_j\rangle \langle \psi_j |[/itex], but you can't always be guaranteed that your favorite set of observables, the macroscopic values of measurement results, will be diagonal in the basis [itex]\psi_j[/itex].)
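As a toy numerical illustration of that last point (in Python, with a random 4x4 density matrix standing in for the state and an arbitrary fixed matrix standing in for the "macroscopic" observable):

[code]
import numpy as np

rng = np.random.default_rng(1)
dim = 4

# Random density matrix rho: positive semidefinite with unit trace
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Spectral decomposition rho = sum_j p_j |psi_j><psi_j|
p, psi = np.linalg.eigh(rho)      # columns of psi are the eigenvectors |psi_j>

# A "favorite" observable M; generically it is NOT diagonal in the basis {psi_j}
M = np.diag([0.0, 1.0, 2.0, 3.0]).astype(complex)
M_in_eigenbasis = psi.conj().T @ M @ psi
off_diagonal = M_in_eigenbasis - np.diag(np.diag(M_in_eigenbasis))
print(np.allclose(off_diagonal, 0.0))   # almost always False for a generic rho
[/code]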
 
  • #366
stevendaryl said:
Why does the universal density matrix necessarily lead to definite outcomes to all possible experiments? Is there a way to prove this for a typical density matrix, or are you assuming some kind of "fine-tuning" of the initial density matrix to ensure that it's true?
I only need to assume that the observed part of the universe is approximately in local equilibrium. This is amply corroborated by experiment, and provides a very strong constraint on the universal density matrix. Indeed, local equilibrium is just the assumption needed to derive fluid mechanics or elasticity theory from quantum field theory, and for more than a century we have described every macroscopic object in these terms. Thus only those density matrices qualify as typical that satisfy this experimental constraint.
In my book (see post #2 of this thread), I call the corresponding states Gibbs states.
stevendaryl said:
Let [itex]\rho[/itex] be the density matrix of the universe at some time (let's pick a frame/coordinate system so that we can talk about the state at one time). Then the claim might be that there is a decomposition of [itex]\rho[/itex] into the form [itex]\rho = \sum_j p_j |\psi_j\rangle \langle \psi_j |[/itex] where [itex]\psi_j[/itex] is an orthonormal basis such that for each [itex]j[/itex], all macroscopic quantities (such as the outcomes of measurements) have definite values. I don't see why that should be the case.
This is obviously not the case but this was not my claim. We do not need definite values but only values accurate enough to match experimental practice. This is a much less severe condition.

We all know from classical nonequilibrium thermodynamics that the macroscopic local observables are a small set of fields (in the simplest case just internal energy density and mass density). We also know from statistical mechanics in the grand canonical ensemble that these are given microscopically not by eigenvalues but by certain well-defined expectations. Under the assumption of local equilibrium, the fluctuations of the corresponding averaged quantum fields around the expectations are negligible. Thus the values of the macroscopic effective fields (obtained by corresponding small-scale averaging in the statistical coarse-graining procedure) are sharp for all practical purposes.

Mathematically, this becomes exact only in the thermodynamic limit. But for observable systems, which have finite extent, one can estimate the uncertainties through the standard fluctuation formulas of statistical mechanics. One finds that for macroscopic observations at the human length and time scale, we typically get engineering accuracy. This is the reason why engineering was already successful long before the advent of quantum mechanics.
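As a back-of-the-envelope example of such a fluctuation estimate (an ideal monatomic gas, not any particular system discussed here): the canonical formula ##\langle(\Delta E)^2\rangle = kT^2 C_V## with ##\langle E\rangle=\tfrac32 NkT## and ##C_V=\tfrac32 Nk## gives a relative uncertainty ##\Delta E/\langle E\rangle=\sqrt{2/(3N)}##:

[code]
import numpy as np

# Relative energy fluctuation of an ideal monatomic gas of N atoms:
# Delta E / <E> = sqrt(2 / (3 N)), from <(Delta E)^2> = k T^2 C_V.
for N in (1e3, 1e10, 1e20):
    print(f"N = {N:.0e}:  Delta E / <E> ~ {np.sqrt(2.0 / (3.0 * N)):.1e}")
# For N ~ 1e20 this is ~ 8e-11, far below any engineering tolerance.
[/code]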
 
  • #367
A. Neumaier said:
I only need to assume that the observed part of the universe is approximately in local equilibrium. This is amply corroborated by experiment, and provides a very strong constraint on the universal density matrix. Indeed, local equilibrium is just the assumption needed to derive fluid mechanics or elasticity theory from quantum field theory, and for more than a century we have described every macroscopic object in these terms. Thus only those density matrices qualify as typical that satisfy this experimental constraint.
In my book (see post #2 of this thread), I call the corresponding states Gibbs states.

But to me, the question is about quantum theory, not empirical observations. Does QM predict those observations?
 
  • #368
stevendaryl said:
So in what sense is the thermodynamic limit of QFT deterministic?
Don't ask me. I don't understand this claim at all.
 
  • #369
A. Neumaier said:
This is obviously not the case but this was not my claim. We do not need definite values but only values accurate enough to match experimental practice. This is a much less severe condition.

I think that's just a clarification of what I mean by "macroscopic quantities". I like your suggestion of giving coarse-grained descriptions of the mass-energy density, and field values. If the description is coarse enough, then the uncertainty principle doesn't get in the way of knowing the "macroscopic state of the universe" to that level of accuracy.
 
  • #370
Precisely the apparently "deterministic" behavior of macroscopic systems is due to a sufficiently "blurred" view of them. One way to see this is to derive semiclassical transport models from QFT: the (fully quantum) Kadanoff-Baym equations become a Boltzmann equation in the quasiparticle limit, after applying a leading-order gradient expansion.
 
  • #371
stevendaryl said:
I think that's just a clarification of what I mean by "macroscopic quantities". I like your suggestion of giving coarse-grained descriptions of the mass-energy density, and field values. If the description is coarse enough, then the uncertainty principle doesn't get in the way of knowing the "macroscopic state of the universe" to that level of accuracy.

The question is: Can the universe be in a superposition of states that have different macroscopic states? If not, why not?
 
  • #372
stevendaryl said:
But to me, the question is about quantum theory, not empirical observations. Does QM predict those observations?
Quantum theory is derived from empirical observations and organizes these into a coherent whole. Quantum field theory predicts - under the usual assumptions of statistical mechanics, which include local equilibrium - hydrodynamics and elasticity theory, and hence everything computable from it.

Of course it predicts only the general theoretical structure, since all the detail depends on the initial conditions. But it predicts in principle all material properties, and quantum chemists are doing precisely that. All these are essentially exact predictions of QFT, with errors dominated by the computational techniques available rather than by the uncertainty due to the averaging. Together with prepared or observed initial conditions it predicts the values of the macroscopic observables at later times. For example, computational fluid dynamics is an essential tool for the optimization of modern aircraft.

Local equilibrium itself is usually justified in an ad hoc way by assuming fast relaxation scales. These can probably be derived, too, but I haven't seen a derivation. But one knows when this condition is not satisfied in practice - namely when the mean free path length is too long. This happens for very dilute gases, where the Boltzmann equation must be used instead of the hydrodynamic equations (and can be derived from QFT).
 
  • #373
stevendaryl said:
The question is: Can the universe be in a superposition of states that have different macroscopic states? If not, why not?
In the view I outlined above, the universe is not in a pure state but in a Gibbs state where local equilibrium holds to a good approximation. This is not a pure state but a mixture, ##\rho=e^{-S/k}## where ##S## is an entropy operator and ##k## the Boltzmann constant.

The more precisely one wants to describe the state of the universe, the more complex is the form of ##S##. Local equilibrium means that one considers the approximation where ##S## is an integral over local fields; this leads to hydrodynamics. The next, more accurate approximation is microlocal equilibrium, where ##S## is an integral over fields depending on position and momentum; this leads to kinetic theory (the Boltzmann equation and the Kadanoff-Baym equations). Critical-point studies go selectively even beyond that, to make predictions of critical exponents.
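To give the flavor: in the simplest local-equilibrium (hydrodynamic) case, one such ansatz has the schematic form ##S/k\approx\int d^3x\,\beta(x)\bigl(\hat{h}(x)-\mu(x)\hat{n}(x)\bigr)+\mathrm{const}##, with energy density ##\hat{h}(x)##, particle density ##\hat{n}(x)##, and position-dependent intensive fields ##\beta(x)## and ##\mu(x)## playing the role of Lagrange multipliers; which fields enter depends on the level of description.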
 
  • #374
A. Neumaier said:
Well, I argued that it might be nonlocal hidden variables - namely all those that describe the neglected environment. No Bell-type theorem excludes this possibility, and statistical mechanics demands that these variables must be taken into account. The only open question is whether these abundant hidden variables are enough to explain everything random. My strong suspicion is that they do.
Interesting how you imagine these "non-local hidden variables" and their effects... In particular, are they actual variables, i.e. do they get changed by some processes? I think this is critical for distinguishing them from LHV models - because constant "variables", even if called "non-local" in some sense, can in my opinion always be modeled by local copies. Only their non-local change, or in other words spooky action at a distance, is what sets a model apart from LHV models and allows Bell violations.
 
  • #375
A. Neumaier said:
This is a well-known argument, used already long ago by Wigner, I believe.

But it is not valid in my setting: Here, in the algebra of linear operators of some huge, universal Hilbert space, there is a unique density matrix of the universe that describes reality, and all systems that are observable are described by the projections of this universal density matrix to the algebra of linear operators of the tiny Hilbert space describing the observable system under investigation.
Most of the superpositions, while they exist in the tiny Hilbert space, have no relation to the universal density matrix, hence cannot be used to make an argument.

I'd like to have your expert opinion on the Conway-Kochen theorem. http://arxiv.org/pdf/quant-ph/0604079.pdf and http://arxiv.org/pdf/0807.3286.pdf
 
  • #376
georgir said:
are they actual variables, i.e. do they get changed by some processes?
They change according to the Schroedinger equation of the universe, which determines how ##\rho(t)## depends on time. The Hamiltonian would be known if we had a common generalization of the standard model and gravitation.
 
  • #377
Hornbein said:
I'd like to have your expert opinion on the Conway-Kochen theorem. http://arxiv.org/pdf/quant-ph/0604079.pdf and http://arxiv.org/pdf/0807.3286.pdf
I don't think the paper has any relevance. The will of the experimenter is not relevant for Bell-type experiments, as all choices can be made by automatic devices. (See https://www.physicsforums.com/posts/5347224/ , especially point 9.)

In particular, the assumption made in their theorem is highly unrealistic. The choices made by an automatic device always depend on its internal state and its input, hence are in some sense determined by the information available to the device.

There is also no reason to believe that things would be different with humans, although here the definition of ''free will'' is beset with philosophical difficulties.
 
  • #378
A. Neumaier said:
I don't think the paper has any relevance. The will of the experimenter is not relevant for Bell-type experiments, as all choices can be made by automatic devices. (See https://www.physicsforums.com/posts/5347224/ , especially point 9.)

In particular, the assumption made in their theorem is highly unrealistic. The choices made by an automatic device always depend on its internal state and its input, hence are in some sense determined by the information available to the device.

There is also no reason to believe that things would be different with humans, although here the definition of ''free will'' is beset with philosophical difficulties.
Aha. So you are a superdeterminist, like 't Hooft? You are correct: the theorem does not exclude this possibility.
 
  • #380
I got a pingback on my blog from someone with a question/comment about my blog post concerning 'Wrong idea...' but I can't find the post and I don't know who asked the question. Please feel free to contact me through my blog (there's a 'contact me' option there) if you would like a reply. Thanks.
 
  • #381
rkastner said:
I got a pingback on my blog from someone with a question/comment about my blog post concerning 'Wrong idea...' but I can't find the post and I don't know who asked the question.
Off topic, but I wouldn't post email addresses on a public forum; it's inviting a spam doomsday. Today's services are filtered, but you increase the risk tenfold if not more. I may be wrong.
 
  • #382
ddd123 said:
Off topic but, I wouldn't post emails on a public forum, it's inviting spam doomsday. Today's services are filtered but you increase it tenfold if not more. I may be wrong.
Thanks, fixed it
 
  • #383
A. Neumaier said:
Here, in the algebra of linear operators of some huge, universal Hilbert space, there is a unique density matrix of the universe that describes reality, and all systems that are observable are described by the projections of this universal density matrix to the algebra of linear operators of the tiny Hilbert space describing the observable system under investigation.
Most of the superpositions, while they exist in the tiny Hilbert space, have no relation to the universal density matrix, hence cannot be used to make an argument.
Further discussion of this part (concerning reality described by a universal density matrix), if any, please in this new thread!
 
  • #384
One offshoot of this discussion (and the twin discussion of an associated experimental setting) is that I arrived at a new, improved understanding of relativistic causality. This settles (for me) all problems with causality in Bell-type theorems, and reduces the weirdness of nonlocality experiments to a problem in the psychology of knowledge. The residual weirdness is only of the same kind as the weirdness of being able to know what happens if some object falls into a classical black hole and when it will hit the singularity, although no information can escape from a black hole.

Thus the quantum case is not really different from the classical case in this respect. This throws light on the true, social role of weirdness in quantum mechanics.

People very experienced in a particular area of real life can easily trick those who don't understand the corresponding matter well enough into believing that seemingly impossible things can happen. This is true in the classical domain, amply documented by magic tricks where really weird things happen, such as rabbits being pulled out of empty hats, etc.

The art of a magician consists in studying particular potentially weird aspects of Nature and presenting them in a context that emphasizes the weirdness. Part of the art consists of remaining silent about the true reasons why things work rationally, since then the weirdness is gone, and with it the entertainment value.

The same is true in the quantum domain. Apart from being technically very well-versed experimental physicists, people like Anton Zeilinger are quantum magicians entertaining the world with well-prepared quantum weirdness. And the general public loves it! Judging by its social impact, quantum weirdness will therefore never go away as long as highly reputed scientists are willing to play this role.
 
  • #385
A. Neumaier said:
One offshoot of this discussion (and the twin discussion of an associated experimental setting) is that I arrived at a new, improved understanding of relativistic causality. This settles (for me) all problems with causality in Bell-type theorems, and reduces the weirdness of nonlocality experiments to a problem in the psychology of knowledge. The residual weirdness is only of the same kind as the weirdness of being able to know what happens if some object falls into a classical black hole and when it will hit the singularity, although no information can escape from a black hole.

Honestly, I didn't understand this argument at all. As I said in the thread, the weirdness is in the correlated results themselves. Sure, we can anticipate them due to past experiments, but how is this different from what maline was saying: "QM is not weird because it's correct"? That seems to be your argument, but then you say it isn't. I am at a loss.
 
