Nobody understands quantum physics?

In summary, Feynman's statement that nobody understands quantum mechanics is often quoted as a witty remark, but it highlights the fact that quantum mechanics is not understood in the same way as classical mechanics. Unlike classical mechanics, quantum mechanics does not assign values to all observables in the absence of measurement. However, it is still considered one of the best-understood and most rigorously tested theories ever. There have been many interpretations of quantum mechanics, and the "measurement problem" is still being debated, but for most practical applications the minimal interpretation is sufficient. It is possible that new observational facts may one day lead to a major revision of quantum theory, as happened with the development of quantum electrodynamics. Despite these debates, quantum mechanics remains extraordinarily successful in practice.
  • #211
WernerQH said:
Strange. The emission of a photon is not an event? Quantum theory does not describe this?
I think Neumaier's point is that the unitary evolution does not involve any events. It is an expectation of what happens in between events, which are just the end points.

To solve the measurement problem, we need, for example, a unitary description of the actual measurement. The problem is that this involves a classical system, and the classical background is assumed to be part of the observing context. The unitary description would have to explain how the classical detection events, which are supposedly real, can be allowed while the system is at the same time in a superposition. We need to crack the implicit duality between the measurement process and a regular physical interaction. Decoherence just solves this by saying that the classical system really isn't classical! It's just a "complex" quantum system. But this pushes the "scientific inference" out to imaginary observers at some point. This is what I objected to. In the extremes, this imaginary observer either becomes a black hole or becomes somehow asymptotic in its existence (which is useless, as it's not where we "live").

/Fredrik
 
  • Like
Likes gentzen
  • #212
WernerQH said:
Strange. The emission of a photon is not an event? Quantum theory does not describe this?
An event with an exact spacetime location would be a classical concept. Of course, you could mean something else by that word. But if I remember correctly, you previously indicated that you indeed mean such an exact spacetime location.

WernerQH said:
Do you envision the decay of a neutron as a gradual, continuous process, with the neutron slowly turning into a proton?
Why do you write "envision"? What is unclear about the concept of "unitary quantum mechanics" for you? As long as you don't explicitly measure, the probability that the neutron has turned into a proton indeed gradually increases. And if you explicitly measure, then you learn whether or not it has turned into a proton, but not when or where exactly that happened.

WernerQH said:
This doesn't seem to be mainstream physics.
I get the impression that it is more mainstream physics than what you seem to suggest instead.
 
  • #213
WernerQH said:
Strange. The emission of a photon is not an event? Quantum theory does not describe this?
There is no notion of an event in quantum theory (without an interpretation in classical terms). The whole quantum formalism proceeds without the concept of an event. The theory only describes the dynamics of photon states and S-matrices, but not the associated events in real life.

Events do not appear in quantum theory but only in quantum practice, namely when the formalism is interpreted in terms of measurement.

WernerQH said:
Do you envision the decay of a neutron as a gradual, continuous process, with the neutron slowly turning into a proton? This doesn't seem to be mainstream physics.
Mainstream quantum physics says that the wave function dynamics is a continuous process, governed by the Schrödinger equation. Even decoherence arguments assume that decay takes time.

In quantum theory, a neutron never turns into a proton, but a superposition of both dynamically changes weights. The events appear - outside the strict theory - upon interpreting the quantum results in terms of what can be observed.
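As a minimal illustration of "a superposition of both dynamically changes weights" (the two-level Hamiltonian and coupling strength below are invented toy values, not a real weak-interaction calculation), here is a sketch showing the "decayed" weight growing continuously under unitary evolution, with no jump anywhere:

```python
# Toy two-level model: |n> (undecayed) coupled to |p> (decayed) with strength g.
# Purely illustrative; not a realistic weak-interaction Hamiltonian.
import numpy as np
from scipy.linalg import expm

g = 0.1                       # assumed coupling strength (arbitrary units)
H = np.array([[0.0, g],
              [g, 0.0]])      # Hermitian Hamiltonian in the {|n>, |p>} basis

psi0 = np.array([1.0, 0.0])   # start as a pure "neutron" state

for t in [0.0, 1.0, 2.0, 5.0]:
    U = expm(-1j * H * t)           # unitary time evolution exp(-iHt), hbar = 1
    psi = U @ psi0
    p_decayed = abs(psi[1]) ** 2    # Born-rule weight of the "decayed" component
    print(f"t = {t:4.1f}   P(decayed) = {p_decayed:.3f}")
```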
 
Last edited:
  • Like
Likes physika, dextercioby, Simple question and 2 others
  • #214
For experiments on QM systems, a single event doesn't constitute a measurement. Just because my magnetic monopole detector clicked one time late at night doesn't mean that magnetic monopoles have been shown to exist by measurement. The rules of QM (and those of most reputable journals) are about measurements, which are of necessity comprised of multiple events. You have to do the statistics. This remains true even in the edge cases in which the outcome of an event has probability 1. I think this distinction between isolated events and measurements is blurred in the minds of many.
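A minimal sketch of the "do the statistics" point (the click probability and run count are made-up illustration values): a single click by itself says almost nothing, while many repetitions give an estimate with a quantifiable uncertainty.

```python
# Simulate repeated runs of a detector with some assumed click probability,
# then estimate that probability with a binomial standard error.
import numpy as np

rng = np.random.default_rng(0)
p_true = 0.07          # assumed "true" click probability per run (invented)
n_runs = 1000          # number of repetitions of the experiment

clicks = rng.random(n_runs) < p_true
p_hat = clicks.mean()
std_err = np.sqrt(p_hat * (1 - p_hat) / n_runs)   # binomial standard error

print(f"single-run outcome: {int(clicks[0])}  (tells us almost nothing)")
print(f"estimate from {n_runs} runs: p = {p_hat:.3f} +/- {std_err:.3f}")
```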
 
  • #215
Of course quantum dynamics describes the ##\beta## decay of a neutron. A neutron is not an energy eigenstate when the weak interaction is taken into account, and thus, having prepared a neutron at time ##t=0##, the unitary time evolution including the weak interaction leads to a state in which the probability to find a proton, an electron, and an electron-antineutrino is nonzero at any time ##t>0##. So indeed QT unitary time evolution describes the ##\beta## decay of a neutron.

Of course, as with any quantum state, in this situation it describes the probability to find at some given time ##t>0## a proton, electron, and electron-antineutrino instead of a neutron. It's generically impossible to know when an individual neutron decays. The survival probability for the neutron (considered to be at rest) is approximately given by the "radioactive decay law", ##P(t)=\exp(-t/\tau)##, where ##\tau## is the lifetime of the neutron (a pretty delicate quantity, but that's another story ;-)).
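A small numerical sketch of the survival law quoted above, taking the commonly quoted free-neutron lifetime of roughly 880 s purely for illustration:

```python
# Survival probability P(t) = exp(-t / tau) for a neutron at rest.
# tau ~ 880 s is the commonly quoted free-neutron lifetime (illustrative value).
import math

tau = 880.0  # seconds (approximate free-neutron lifetime)

for t in [0, 100, 880, 2000, 5000]:
    P = math.exp(-t / tau)
    print(f"t = {t:5d} s   survival probability = {P:.3f}")
```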
 
  • Like
Likes gentzen
  • #216
A. Neumaier said:
But an event is a classical concept, there are no events in unitary quantum theory. Your argumentation therefore imposes classical concepts upon quantum mechanics of macroscopic bodies.
I'm using event in the probability-theory sense. When we consider the state space of the source + detector ##\mathcal{H}_s\otimes\mathcal{H}_D##, we build a sample space of measurement outcomes ##\{\Pi_i\}## with a projective decomposition of the identity ##I = \sum_i \Pi_i##. We can then build an event algebra from all projectors of the form ##P = \sum_i\lambda_i \Pi_i##, where each ##\lambda_i## is either zero or one.
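A small numpy sketch of this construction in a toy finite-dimensional space (the dimension, state, and projectors are arbitrary choices for illustration): a projective decomposition of the identity, and an "event" built as a 0/1-weighted sum of the ##\Pi_i##, which is again a projector.

```python
# Toy event algebra on a 3-dimensional Hilbert space (dimension chosen arbitrarily).
import numpy as np

dim = 3
basis = np.eye(dim)
Pi = [np.outer(basis[i], basis[i]) for i in range(dim)]   # projective decomposition

assert np.allclose(sum(Pi), np.eye(dim))                  # I = sum_i Pi_i

# Event "outcome 0 or outcome 2": lambda = (1, 0, 1)
lam = [1, 0, 1]
P_event = sum(l * P for l, P in zip(lam, Pi))

assert np.allclose(P_event @ P_event, P_event)            # still a projector

rho = np.diag([0.5, 0.3, 0.2])                            # some density matrix
print("p(o0 or o2) =", np.trace(P_event @ rho).real)      # -> 0.7
```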
 
  • Like
Likes gentzen
  • #217
Paul Colby said:
For experiments on QM systems, a single event doesn't constitute a measurement.
...
You have to do the statistics
...
I think this distinction between isolated events and measurements is blurred in the minds of many.
The "problem" with QM as it stands is exactly that it lives only at the perfect statistical level. But real inferences invariable works with limited sampling, and limited processing capacity. This is ultimately why QM appears timeless and asymptotic.

A real agent/observer can, for multiple reasons, never attain perfect knowledge of the statistics (the distributions): there is a race condition where the subject matter changes during the inference process, and the "perfect" postprocessing or decoding needed to abduce patterns simply takes time. It may not even be possible to repeat the experiment. So the agent's decision has to be made on the available, imperfect conclusions.

This "intermediate" process is where QM doens't work. QM works where a relative unlimited or "sufficient" statistics can be compiled and where the agent is never saturated with information.

A QM experiment goes, in principle, like this: a preparation procedure must be defined and tested to make sure we are confident about the distributions coming out of it. Determining the "final state" also requires repeating this enough times that we can determine, within the desired accuracy, the "distribution" of final detections. Such an "experiment" is clearly not how a normal interaction takes place. A normal interaction happens once, and then moves on to the next one (which is typically not pulled from the same ensemble).

So while I agree that the minimal ensemble interpretation is quite correct about how QM is corroborated, that is also precisely what makes it problematic when applying it to any real scenario that is not a controlled experiment repeated 1000 times.

/Fredrik
 
  • #218
Before someone says it's just the same in classical physics...
Fra said:
So the agent's decision has to be made on the available, imperfect conclusions.
This is the difference, if after solving the measurement problem you see that we need to unify measurement and physical interactions. This imperfection (as opposed to just physicists' imperfect knowledge) should in theory give observable consequences for the agent's interactions. (On par with Bell entanglement, for example, but more involved.)

/Fredrik
 
  • #219
Morbert said:
I'm using event in the probability-theory sense. When we consider the state space of the source + detector ##\mathcal{H}_s\otimes\mathcal{H}_D##, we build a sample space of measurement outcomes ##\{\Pi_i\}## with a projective decomposition of the identity ##I = \sum_i \Pi_i##. We can then build an event algebra from all projectors of the form ##P = \sum_i\lambda_i \Pi_i##, where each ##\lambda_i## is either zero or one.
According to the explanation of event given in Wikipedia, it means that all events occur in every experiment (if all probabilities are nonzero). This is surely not the case in physical experiments. Thus either the notion of events or the notion of experiment (as used in Wikipedia) is physically irrelevant. Therefore your argument is spurious.
 
Last edited:
  • Skeptical
Likes WernerQH
  • #220
A. Neumaier said:
There is no notion of an event in quantum theory (without an interpretation in classical terms). The whole quantum formalism proceeds without the concept of an event.
So Feynman diagrams do not describe anything real? Don't they play a role in the theory? Theorists derive them using perturbation theory and asymptotic states, but is it in your view coincidental that many physicists think of them as describing real processes?
A. Neumaier said:
Events do not appear in quantum theory but only in quantum practice, namely when the formalism is interpreted in terms of measurement.
Planck was forced to think of the emission of radiation as a discontinuous process. It is ironic that you seem to suggest that the real world changes continuously.
A. Neumaier said:
Mainstream quantum physics says that the wave function dynamics is a continuous process, governed by the Schrödinger equation.
Unitary evolution according to Schrödinger's equation can't be the whole story. Microscopic events add discreteness and randomness to the picture, and there is ample evidence for them. Why do you call events a classical notion? I think that's a distortion.
 
  • #221
Paul Colby said:
For experiments on QM systems, a single event doesn't constitute a measurement.
This is a very sensible point of view. In this view, what is measured is never a single event. Indeed, assuming your statement, an induction argument shows that any number of events is not a measurement.

Thus only what is computed from a number of events is a measurement - with an uncertainty determined by the statistics. This means that the true observables (and indeed, what is reported in papers on experimental quantum physics) are probabilities and expectations, and not eigenvalues. This is precisely the point of view of my thermal interpretation.
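A minimal illustration of "computed from a number of events, with an uncertainty determined by the statistics" (the bias and sample size below are invented): what gets reported is an estimated expectation value with a standard error, not any single eigenvalue.

```python
# Estimate an expectation value from many +/-1 detection events.
import numpy as np

rng = np.random.default_rng(1)
n_events = 2000
p_plus = 0.6                    # assumed probability of outcome +1 (invented)

outcomes = np.where(rng.random(n_events) < p_plus, +1, -1)

mean = outcomes.mean()
std_err = outcomes.std(ddof=1) / np.sqrt(n_events)   # uncertainty from statistics

print(f"estimated expectation value: {mean:.3f} +/- {std_err:.3f}")
# No individual event equals this number; only the ensemble determines it.
```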

Paul Colby said:
For experiments on QM systems, a single event doesn't constitute a measurement.
But this is not the usage in quantum foundations. There, a response of the detector is regarded as a measurement of the presence of a single photon. The only question is whether the response measured a stray photon or one that was deliberately sent.
 
  • Like
Likes Simple question and gentzen
  • #222
gentzen said:
As long as you don't explicitly measure, the probability that the neutron has turned into a proton indeed gradually increases.
Yes, that's the theorist's picture. The experimentalist knows that the neutron decays even when he's not "measuring", and on a time scale shorter than microseconds.
 
  • #223
WernerQH said:
So Feynman diagrams do not describe anything real?
Only the integration over 4 real variables.
WernerQH said:
Don't they play a role in the theory?
Only as illustrations.
WernerQH said:
Theorists derive them using perturbation theory and asymptotic states, but is it in your view coincidental that many physicists think of them as describing real processes?
Yes. It is just informal imagery for formally precise integration procedures. See
https://www.physicsforums.com/insights/vacuum-fluctuation-myth/
WernerQH said:
Planck was forced to think of the emission of radiation as a discontinuous process. It is ironic that you seem to suggest that the real world changes continuously.
30 years later, quantum statistical mechanics explained the Planck spectrum without any recourse to discontinuity.
WernerQH said:
Unitary evolution according to Schrödinger's equation can't be the whole story.
Perhaps not. What else do you advocate?
WernerQH said:
Microscopic events add discreteness and randomness to the picture, and there is ample evidence for them. Why do you call events a classical notion? I think that's a distortion.
Microscopic events cause discreteness also in classical mechanics. Whenever you switch on your computer you are doing something discrete which is microscopically continuous.
 
  • Like
Likes Motore and gentzen
  • #224
WernerQH said:
Yes, that's the theorist's picture. The experimentalist knows that the neutron decays even when he's not "measuring",
because the environment does enough measuring. Anthropomorphic detectors are not needed for that, only for isolating things to be able to study them quantitatively.
WernerQH said:
and on a time scale shorter than microseconds.
Yes, but nobody has ever seen a neutron discontinuously turn into a proton.
 
  • Like
Likes gentzen
  • #225
WernerQH said:
So Feynman diagrams do not describe anything real? Don't they play a role in the theory?
A. Neumaier said:
Only as illustrations.
Haha.

A. Neumaier said:
Whenever you switch on your computer you are doing something discrete which is microscopically continuous.
I think you got this backwards.

A. Neumaier said:
Yes, but nobody has ever seen a neutron discontinuously turn into a proton.
You sound like Ernst Mach commenting on atoms.
 
  • #226
martinbn said:
I think this is a very misleading example. The development of general relativity was a result of Einstein solving physics-motivated problems with hardcore mathematics, not philosophy. The philosophy part, like the hole argument, actually slowed him down. Only when he was able to shrug off the philosophy did he make progress.
Another post I just don't get. Again, as I said, the motivation for Einstein to develop GR was not a mismatch between theory and empirical results. But from my point of view, to call the hole argument "the philosophy part" is just silly. The hole argument was Einstein's encounter with the true meaning of gauge invariance applied to spacetime and its coordinates. It showed that physically it doesn't make sense to label spacetime points as events before the metric is introduced, something which is intuitively not trivial at all. In a time when gauge invariance was not well understood, this was a deep conceptual issue of the theory. To call that "the philosophy part" is, in my view, wrong on many levels. It's physics, people. It's not "mere philosophy". We are talking about the precise meaning of the metric field, the meaning of when you can interpret spacetime points as events, and the application of gauge invariance to spacetime. How is that not physics but "the philosophy part"?

Of course, in retrospect, with hundreds of textbooks written on GR, it's easy to dismiss Einstein's hole argument and his departure from general covariance as silly. And yes, I also don't get why many philosophers of science still go on and on about the hole argument and talk about "manifold substantivalism" and what not. But to dismiss the hole argument for these reasons as "the philosophy part of GR" is, as I see it, a denial of both the physics and the history behind the development of General Relativity.

As an experiment during my PhD I tried a few times to expose string theory people to explicit physical examples of the hole argument. And you would be surprised by how many people were confused by it. I once heard a string theorist who later on worked at MIT say that he never truly understood the subtleties of the argument. Maybe I and many physicists with me are just not smart enough because we're impressed by the subtleties of this "mere philosophy", but I seriously consider the idea that many people just don't appreciate the deep physical thinking of Einstein when he considered the nature of general covariance at that time. But what I'm quite convinced of is that it's categorically wrong to put it away as "the philosophy part".

I guess we're talking about the demarcation between physics and philosophy here, but to me the labeling of it as "the philosophy part", and the implication that it was merely something to shrug off and move on from, sounds like an underappreciation of the deep conceptual structure of GR.
 
Last edited:
  • Like
Likes Simple question and gentzen
  • #227
WernerQH said:
I think you got this backwards.
Switch = toggling a binary state = discrete.
Continuous = treatment of the switching process by classical electrodynamics, which is continuous.

We see everywhere the Discrete appear as a simplified description of the Continuous.

In mainstream physics, time is always continuous. Heaviside jumps are incompatible with relativistic quantum field theory. For something truly discrete to happen, you would therefore need to explain what happens to the neutron in the tiny interval after it stops being a neutron and before it is a proton.
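One way to picture the Discrete as a coarse description of the Continuous (the switching time and sample instants below are arbitrary illustration values): a smooth switching profile looks exactly like a Heaviside jump once you no longer resolve the short switching interval.

```python
# A continuous switching function that looks like a Heaviside step whenever the
# switching time eps is much shorter than the time resolution you care about.
import math

def smooth_switch(t, eps=1e-6):
    """Continuous ramp from 0 to 1 over a time scale eps."""
    return 0.5 * (1.0 + math.tanh(t / eps))

for t in [-1.0, -1e-7, 0.0, 1e-7, 1.0]:
    print(f"t = {t:+.1e}   switch = {smooth_switch(t):.6f}")
```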
 
Last edited:
  • Like
Likes Simple question, gentzen and dextercioby
  • #228
vanhees71 said:
What do you mean by "deterministic"? ##g-2## of electrons or muons is a parameter that can be predicted by the standard model after defining all the coupling constants of this model and as such can be tested by measuring it in experiment. The measurement of the magnetic moment of particles is as "probabilistic" as it is for any other observable in QT.

The Planck distribution of course is calculated by using Born's rule. How else do you want to interpret the statistical equilibrium operator you calculate for free photons to obtain this result?
An electron is a magnetic-moment eigenstate, so the value of magnetic moment is not uncertain. That's what I mean when I say that it's deterministic, rather than probabilistic.

Planck predicted the Planck distribution 25 years before Born postulated the Born rule. Planck derived it from classical probabilistic reasoning, combined with the hypothesis that energy (associated with a given frequency) can take only discrete values. Sure, one can also derive the Planck distribution from the modern Born rule for general mixed states, but my point is that the Planck distribution can be derived even without it.
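A short numerical sketch of this kind of derivation (the frequency and temperature are illustrative choices; the constants are standard): averaging the discrete energies ##n h\nu## with classical Boltzmann weights reproduces the Planck mean energy ##h\nu/(e^{h\nu/kT}-1)##, with no Born rule anywhere.

```python
# Mean energy of a mode of frequency nu, assuming discrete energies E_n = n*h*nu
# with classical Boltzmann weights exp(-E_n / kT) -- a Planck-1900-style argument.
import math

h = 6.626e-34    # J s
k = 1.381e-23    # J / K
T = 5800.0       # K (roughly the solar surface temperature, for illustration)
nu = 5e14        # Hz (visible light, for illustration)

x = h * nu / (k * T)

# Boltzmann average over the discrete levels n = 0, 1, 2, ...
num = sum(n * h * nu * math.exp(-n * x) for n in range(200))
den = sum(math.exp(-n * x) for n in range(200))
mean_energy_discrete = num / den

mean_energy_planck = h * nu / (math.exp(x) - 1.0)   # closed-form Planck result

print(f"discrete-sum average : {mean_energy_discrete:.3e} J")
print(f"Planck closed form   : {mean_energy_planck:.3e} J")
```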
 
  • Like
Likes Simple question
  • #229
haushofer said:
but I seriously consider the idea that many people just don't appreciate the deep physical thinking of Einstein when he considered the nature of general covariance at that time. But what I'm quite convinced of is that it's categorically wrong to put it away as "the philosophy part".
I also see shades of grey here. The sort of "philosophy" relevant here is philosophy of science and physics, which I see as rightfully belonging to the foundations of the same (though not to its applications). I just don't see the obsession with drawing clear lines here. I think we can tell educated rational reasoning from random crackpottery easily enough.

We are talking about the philosophical ponderings of the fathers of many modern theories in their process of formulating new hypotheses that later turned out well; it's not like we are talking about modern crackpots or the reasoning of Confucius or Nietzsche.

/Fredrik
 
  • Like
Likes haushofer, Simple question and gentzen
  • #230
Morbert said:
I just mean the problem of indeterminism in QM is a subjective one, as opposed to an objective problem like e.g. the recovery of the Born rule in the many-worlds interpretation.
I see, but the problem of explaining definite outcomes has not much to do with determinism. The problem is not to predict which outcome will be realized; the realization of an outcome may well be random. The problem is to explain why a single outcome (rather than all outcomes at once) realizes at all.

To explain the outcome one needs a variable that describes the outcome itself, rather than the probability of an outcome. This variable may evolve either deterministically or stochastically, but the problem is that standard QM does not contain such a variable at all. Standard QM contains variables that describe probabilities of outcomes, but it does not contain variables that describe the random outcomes themselves. For example, it contains the probability amplitude ##\psi(x,t)##, but it does not contain ##x(t)##. How can one explain that the particle has a real position ##x## at time ##t## if there is no real variable ##x(t)##?
 
  • Like
Likes Simple question and Fra
  • #231
Fra said:
This "intermediate" process is where QM doesn't work.
By "not work" I assume you mean it does not provide the information about individual events that you expect or would like from a theory? Working with incomplete or finite statistics is the bane of any probabilistic theory and not itself an argument against QM, IMO. QM provides probabilities and the restriction of events to eigenvalues. Bell-type experiments really seem to limit the nature of any hypothetical additional information a theory might provide. They certainly dim hopes that one might interpret their way out.
 
  • #232
Demystifier said:
I see, but the problem of explaining definite outcomes has not much to do with determinism. The problem is not to predict which outcome will be realized; the realization of an outcome may well be random. The problem is to explain why a single outcome (rather than all outcomes at once) realizes at all.

To explain the outcome one needs a variable that describes the outcome itself, rather than the probability of an outcome. This variable may evolve either deterministically or stochastically, but the problem is that standard QM does not contain such a variable at all. The standard QM contains variables that describe probabilities of outcomes, but it does not contain variables that describe random outcomes themselves. For example, it contains the probability amplitude ##\psi(x,t)##, but it does not contain ##x(t)##. How to explain that the particle has real position ##x## at time ##t##, if there is no real variable ##x(t)##?
But a theory should describe the world the way it is, not the way we wish it were. What if it is not possible to have such variables?
 
  • #233
martinbn said:
But a theory should describe the world the way it is, not the way we wish it were. What if it is not possible to have such variables?
I agree. But the association I made was that such placeholder variables likely do exist, but it's their influence on causality that does not work "the way some wish". And these placeholders might not even have the property of being "observables" in the technical sense.

Bell's theorem only forbids a specific type of hidden variables: those that are objective.

I do not share Bohmian ideas, but on this point I see sound logical possibilities. It's the idea that some variables are subjective and only some of these form equivalence classes to restore objectivity.

/Fredrik
 
  • #234
A. Neumaier said:
According to the explanation of event given in Wikipedia, it means that all events occur in every experiment (if all probabilities are nonzero). This is surely not the case in physical experiments. Thus either the notion of events or the notion of experiment (as used in Wikipedia) is physically irrelevant. Therefore your argument is spurious.
Demystifier said:
I see, but the problem of explaining definite outcomes has not much to do with determinism. The problem is not to predict which outcome will be realized; the realization of an outcome may well be random. The problem is to explain why a single outcome (rather than all outcomes at once) realizes at all.

To explain the outcome one needs a variable that describes the outcome itself, rather than the probability of an outcome.
Given a sample space of possible outcomes ##\{o_i\}## of an experiment involving measured system ##s## and detector array ##D##, the probability of an outcome occurring in a given run is $$p(o_i) = \mathrm{tr}_{sD}(\Pi_i(t)\rho_s\otimes\rho_D)$$The probability of an event like ##o_i\lor o_j## occurring is $$p(o_i\lor o_j)=\mathrm{tr}_{sD}([\Pi_i(t)+\Pi_j(t)]\rho_s\otimes\rho_D)$$The probability of all outcomes occurring at once in a given run is $$p(o_1\land o_2\land\dots\land o_N) = \mathrm{tr}_{sD}(\Pi_1(t)\Pi_2(t)...\Pi_N(t)\rho_s\otimes\rho_D) = 0$$Where am I going wrong?
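A numerical sketch of these traces in a toy system + detector space (all dimensions, states, and projectors are arbitrary illustration choices): the single-outcome probabilities sum to one, disjunctions add, and the "all outcomes at once" probability indeed comes out zero because distinct orthogonal projectors multiply to zero.

```python
# Toy check of the trace formulas above on a small system (x) detector space.
import numpy as np

ds, dD = 2, 3
rho_s = np.diag([0.7, 0.3])          # system density matrix (invented)
rho_D = np.diag([1.0, 0.0, 0.0])     # detector in its "ready" state (invented)
rho = np.kron(rho_s, rho_D)

dim = ds * dD
basis = np.eye(dim)
Pi = [np.outer(basis[i], basis[i]) for i in range(dim)]   # decomposition of identity

p = [np.trace(P @ rho).real for P in Pi]
print("sum of single-outcome probabilities:", round(sum(p), 6))          # -> 1.0
print("p(o0 or o1) =", round(np.trace((Pi[0] + Pi[1]) @ rho).real, 6))   # adds up

# "All outcomes at once": the product of distinct orthogonal projectors vanishes.
P_all = Pi[0]
for P in Pi[1:]:
    P_all = P_all @ P
print("p(all outcomes at once) =", np.trace(P_all @ rho).real)           # -> 0.0
```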
 
Last edited:
  • #235
Morbert said:
Where am I going wrong?
You represent outcomes with projectors. For example, in the case of particle position, the projector would be something like ##|x\rangle\langle x|##. That's enough for computing the probability of position ##x##. However, the formalism you outlined talks about probabilities of position, not about position itself. A formalism that talks about position itself should have a real-valued variable ##x##, and the formalism you outlined does not have such a variable. (The operator ##\hat{x}## would not count because it's a hermitian operator, not a real-valued variable.)

Compare this formalism with classical stochastic mechanics. There one has a function ##x(t)##, which is a stochastic (not deterministic) function of ##t##. Such a quantity is missing in the quantum formalism.
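For contrast, a minimal sketch of what such a variable looks like classically (step size and trajectory length are arbitrary): a single stochastic trajectory ##x(t)## that has a definite value at every time even though its evolution is random. Standard QM supplies ##\psi(x,t)## but nothing that plays this role.

```python
# A classical stochastic trajectory x(t): a discretized Brownian motion.
# At every time step the particle *has* a definite position, even though the
# dynamics is random. Step size and length are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(42)
n_steps = 10
dt, sigma = 1.0, 0.5

x = 0.0
for step in range(n_steps):
    x += sigma * np.sqrt(dt) * rng.standard_normal()    # random increment
    print(f"t = {step + 1:2d}   x(t) = {x:+.3f}")        # definite value at each t
```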
 
  • #236
martinbn said:
What if it is not possible to have such variables?
Sure, it is a logical possibility, but I do not see any good reason to believe it. I think many physicists believe it only because they wish the standard quantum formalism to be the complete theory that does not need any additions. They want to believe that they understand everything that can be understood, so if there is something that they don't understand it's because it cannot be understood at all. Such an attitude is too dogmatic and anti-scientific for my taste. But as I said, it is a legitimate and logically consistent point of view.
 
Last edited:
  • Like
Likes physika, OCR, Simple question and 1 other person
  • #237
Demystifier said:
An electron is a magnetic-moment eigenstate, so the value of magnetic moment is not uncertain. That's what I mean when I say that it's deterministic, rather than probabilistic.

Planck predicted the Planck distribution 25 years before Born postulated the Born rule. Planck derived it from classical probabilistic reasoning, combined with the hypothesis that energy (associated with a given frequency) can take only discrete values. Sure, one can also derive the Planck distribution from the modern Born rule for general mixed states, but my point is that the Planck distribution can be derived even without it.
The magnetic moment of the electron is proportional to its spin and as such is of course an observable like any other, and thus behaves probabilistically. In fact, the magnetic moment is what's measured in the Stern-Gerlach experiment.

I know how Planck derived the black-body spectrum. That is, however, not how it's done in modern QED. In modern QED you evaluate the canonical equilibrium distribution of free photons, and to interpret the physics of the result you of course use Born's rule.
 
  • Like
Likes dextercioby
  • #238
vanhees71 said:
The magnetic moment of the electron is proportional to its spin and as such is of course an observable like any other, and thus behaves probabilistically. In fact, the magnetic moment is what's measured in the Stern-Gerlach experiment.
The absolute value of the spin of the electron is always ##1/2##; there is no uncertainty about this, so the prediction that it is ##1/2## is deterministic.
 
  • #239
Fra said:
I agree. But the association I made was that such placeholder variables likely do exist, but it's their influence on causality that does not work "the way some wish". And these placeholders might not even have the property of being "observables" in the technical sense.

Bell's theorem only forbids a specific type of hidden variables: those that are objective.

I do not share Bohmian ideas, but on this point I see sound logical possibilities. It's the idea that some variables are subjective and only some of these form equivalence classes to restore objectivity.

/Fredrik
What's a subjective variable?
 
  • #240
Hornbein said:
What's a subjective variable?
What I meant was: it's "measured" and encoded by individual observers and thus "real" for that observer. But this variable can't be copied or cloned like a classical variable. And the observer's subjective "measurement" does not qualify as a measurement of an observable as per QM or QFT. So it's a form of non-realism.

Demystifier's alternative interpretation of Bohmian HV is also similar, I think: https://arxiv.org/abs/1112.2034

Apart from that coincidence, I am not talking about Bell-type HV, of course.

/Fredrik
 
  • #241
Demystifier said:
You represent outcomes with projectors. For example, in the case of particle position, the projector would be something like ##|x\rangle\langle x|##. That's enough for computing the probability of position ##x##. However, the formalism you outlined talks about probabilities of position, not about position itself. A formalism that talks about position itself should have a real-valued variable ##x##, and the formalism you outlined does not have such a variable. (The operator ##\hat{x}## would not count because it's a hermitian operator, not a real-valued variable.)

Compare this formalism with classical stochastic mechanics. There one has a function ##x(t)##, which is a stochastic (not deterministic) function of ##t##. Such a quantity is missing in the quantum formalism.
Where we disagree is I don't think such a variable is needed "to explain why a single outcome (rather than all outcomes at once) realizes at all". We can show that, when presented with a sample space of experimental outcomes, QM rules out the possibility that all outcomes (or even more than one) will occur at once, since the probability is 0, as shown in my last post. Similarly, we can show that "no outcome occurs" also has a probability ##p(\varnothing) = \mathrm{tr}_{sD}([I_{sD}-\sum_i\Pi_i(t)] \rho_s\otimes\rho_D) = 0##.

I.e. We can interpret QM as returning probabilities for possible outcomes, without a variable corresponding to the "true outcome", and this interpretation won't suffer from problems like implying all outcomes might occur at once.
 
Last edited:
  • #242
Morbert said:
Where we disagree is I don't think such a variable is needed "to explain why a single outcome (rather than all outcomes at once) realizes at all".
A 5-year-old child asks: Mommy and Daddy, why is there no Sun during the night?
Daddy: Because during the night the probability of seeing the Sun is zero.
Mommy: Because the Earth is round and during the night the Sun is on the other side.

Both explanations are true, but which is better?
 
  • Like
  • Love
Likes lodbrok, DrChinese and Simple question
  • #243
Demystifier said:
A 5-year-old child asks: Mommy and Daddy, why is there no Sun during the night?
Daddy: Because during the night the probability of seeing the Sun is zero.
Mommy: Because the Earth is round and during the night the Sun is on the other side.

Both explanations are true, but which is better?
This is misleading. How about this: a 5-year-old wants to have a sibling. The parents say sure. The child asks: where is it now? Parent one: it doesn't exist yet. Parent two: there are exact values of the position variables, we just don't know them.
 
  • Haha
Likes WernerQH
  • #244
martinbn said:
Parent two: there are exact values of the position variables, we just don't know them.
The point, I think, would be that it's not just we who don't know; no one and no thing has been able to learn them, which is why we do not obey the Bell inequality. It might well be "lost in chaos" and thus decoupled from any inference.

I think the difference between a subatomic variable and the Sun is that it would take nothing less than a black hole to scramble, for the whole environment, the information of where the Sun went. Putting a blindfold on Daddy at night also gives zero probability of seeing the Sun, but that isn't the mechanism.

/Fredrik
 
  • #245
Demystifier said:
A 5-year-old child asks: Mommy and Daddy, why is there no Sun during the night?
Daddy: Because during the night the probability of seeing the Sun is zero.
Mommy: Because the Earth is round and during the night the Sun is on the other side.

Both explanations are true, but which is better?
I think this is where the subjectivity comes into play. Everyone would agree that for macroscopic matters like the solar system, explanations like the second are better. When it comes to microscopic matters, many people are happy to frame QM as characterising microscopic systems in terms of macroscopic tests and responses, without grounding it in some primitive ontology.

At the same time, this doesn't mean the notion of explanation is entirely surrendered. E.g. the Dad might instead explain why there is no Sun at night by talking about the way in which the solar system is "prepared" and the dynamics it obeys.
 
  • Like
Likes LittleSchwinger
