The typical and the exceptional in physics

In summary, the conversation discusses the concept of the superposition principle in quantum mechanics and its implications for macroscopic objects. While quantum mechanics places no limitation on the standard deviation of variables, it is argued that successful physics focuses on typical situations rather than exceptional ones. The use of mixed states in statistical mechanics is mentioned as a way to describe macroscopic objects, but it is noted that this already assumes a small standard deviation. The conversation concludes that while it is possible to ignore these problems, doing so is not a satisfying approach.
  • #36
stevendaryl said:
I don't know in what sense you are rejecting collapse
This is my view of collapse:

The collapse is a sudden change of the model used by an observer to reinterpret the situation when new information comes in, hence depends on when and whether the observer cares to take notice of a physical fact. This clearly cannot affect other observers and their models of the physical situation. Hence there is no nonlocal effect. Nonlocal correlations appear only when a single observer compares records of other (distant) observers' measurements. At that time the past light cone of this observer contains all the previously nonlocal information, so that locality is again not violated.

As one can see from the wording in terms of subjective information, it applies to modeling a large sample of equally prepared systems when the model is loosely interpreted to apply to a single one of these. Although this interpretation is strictly speaking forbidden (in the sense that objective probabilities for single events do not exist), it is informally meaningful in a Bayesian subjective probability interpretation.
 
  • Like
Likes vanhees71
  • #37
Ken G said:
When we idealize our systems as closed, we are left with no way to explain how they would behave.
This is no different from the classical situation, where we idealize our die-casting system to a stochastic process, and are left with no way to explain how a single die would behave.

Idealizations always introduce uncertainty, and if this uncertainty is big because there is sensitive dependence on unmodeled degrees of freedom, then only probabilities (i.e., relative frequencies for a large sample) can be predicted.
 
  • #38
A. Neumaier said:
This is my view of collapse:

The collapse is a sudden change of the model used by an observer to reinterpret the situation when new information comes in, hence depends on when and whether the observer cares to take notice of a physical fact. This clearly cannot affect other observers and their models of the physical situation. Hence there is no nonlocal effect.

It's unclear. For quantum systems, there are two aspects to measurements: (1) the choice of an observable (for spin-1/2, it's a direction to measure spin relative to), and (2) the value for that observable. Once the observable is chosen, then the subsequent update that comes from learning the value of that variable is exactly like an ordinary classical update that occurs when you learn the value of a pre-existing quantity. But the fact that in quantum mechanics, the probability only exists after the observable is chosen makes it difficult (for me) to view collapse as simply updating based on new information.
 
  • Like
Likes zonde
  • #39
A. Neumaier said:
Idealizations always introduce uncertainty, and if this uncertainty is big because there is sensitive dependence on unmodeled degrees of freedom, then only probabilities (i.e., relative frequencies for a large sample) can be predicted.

I would say that while it is correct to use open systems, it's also easy to be misled. Yes, there is a sensitive dependence on unmodeled degrees of freedom. But that is NOT what is going on with mixed states due to entanglement. Or at least, that's not all that is going on. As I said, a PURE two-component state becomes a mixed state when you trace out the degrees of freedom of one of the components. But in that case, it's just factually incorrect to ascribe the probabilities in the resulting density matrix to sensitive dependence on unmodeled degrees of freedom. The probabilities in this case don't come from ignorance about the exact details, because we started with a pure state, where we know all there is to know.
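As a minimal sketch of this point, assuming the usual two-qubit singlet state (Python/NumPy, added for illustration only): the composite state is pure, yet tracing out one component leaves a maximally mixed reduced state.

```python
import numpy as np

# Singlet state |psi> = (|01> - |10>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)                  # pure state of the two-component system

# Partial trace over the second component: rho_A[i, j] = sum_k rho[(i, k), (j, k)]
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.round(rho, 3))    # rank-1 projector: the whole is in a pure state
print(np.round(rho_A, 3))  # 0.5 * identity: the reduced state is maximally mixed
```

The reduced state is 0.5 times the identity even though nothing about the composite state was left unspecified, which is the sense in which these probabilities do not stem from ignorance of unmodeled details.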
 
  • #40
stevendaryl said:
because we started with a pure state, where we know all there is to know.
This is a severe problem for the knowledge interpretation of quantum mechanics, and only for that interpretation. This interpretation claims that a pure state gives the maximum knowledge one can have about a system, while a mixed state represents incomplete knowledge.

But this view is self-contradictory as your tracing out example shows. If you know everything about the whole system, it would imply that you know very little about the subsystem, while if you know everything about a subsystem but nothing about the remainder of the system, this cannot even be described in this model of knowledge.

Thus I reject the whole basis of your reasoning as it is conceptually unsound.

A sensible interpretation of quantum mechanics must not only assign meaning to the whole system but to all subsystems. Indeed, of complex systems we usually know a lot about subsystems but not so much about the whole system. My interpretation of a density operator satisfies this requirement (and is completely different from your conceptually inconsistent view).
 
  • #41
A. Neumaier said:
This is a severe problem for the knowledge interpretation of quantum mechanics, and only for that interpretation. This interpretation claims that a pure state gives the maximum knowledge one can have about a system, while a mixed state represents incomplete knowledge.

But this view is self-contradictory as your tracing out example shows.

Well, I consider just about all interpretations of quantum mechanics, including yours, to be in the same boat.
 
  • #42
stevendaryl said:
But the fact that in quantum mechanics, the probability only exists after the observable is chosen makes it difficult (for me) to view collapse as simply updating based on new information.
I don't understand. One knows which information came in (the measurement of a particular spin direction) and one updates the model (not the probability!) according to that information. Probability doesn't even enter!

This is just the same as is done in real-time stochastic modeling of the stock market - whenever some new data come in (whatever these data are about) the model is updated.

The only difference is that the stochastic model is in the first case a quantum-classical process (a classical stochastic process conditioned on quantum states) while in the second case it is a purely classical process.
 
  • #43
A. Neumaier said:
I don't understand. One knows which information came in (the measurement of a particular spin direction) and one updates the model (not the probability!) according to that information. Probability doesn't even enter!

Probability enters in that a measurement of one component of an entangled system updates the probabilities associated with the other component.
 
  • #44
stevendaryl said:
Probability enters in that a measurement of one component of an entangled system updates the probabilities associated with the other component.
No. Measurement of one component of an entangled system updates the state of the whole system. That's the only thing going on. As a consequence, all predictions about the whole system change, of course. Probability plays no active role in this process.
stevendaryl said:
But the fact that in quantum mechanics, the probability only exists after the observable is chosen makes it difficult (for me) to view collapse as simply updating based on new information.
Probabilities refer to predictions about relative frequencies of certain events in case that they are observed. Thus they are purely theoretical entities which always exist. The probability of casting 1 with a particular die is 1/6 even if the experimenter cannot cast this particular die.
 
  • Like
Likes vanhees71
  • #45
A. Neumaier said:
This is no different from the classical situation, where we idealize our die-casting system to a stochastic process, and are left with no way to explain how a single die would behave.
I agree, collapse in experiments on classical systems works exactly the same as the quantum case, and the epistemology of how we use probabilities is also exactly the same. So there is no epistemological problem at all-- epistemologically, "collapse" just means "connecting the predicted behavior of ensembles with the concept of individual outcomes." The problem only appears when one attempts to build a quantum ontology, because if that ontology says "all is wavefunctions that evolve unitarily", then one cannot understand how a measurement occurs that is capable of obtaining a single outcome without taking the measuring device out of the system that is evolving unitarily. So I agree that collapse is not a problem, but I don't agree that the reason it's not a problem is that systems are really open, I see it as not a problem because all formal physical theories describe the ontologies of closed systems, which we then use epistemologically by marrying them to how we process information. Thus, we should never be surprised when our formal structures fail to provide a complete ontology, because we always open systems to look at them. QT is merely the place where we realized this, something we should have known all along. So I have no problem with collapse-- I regard it as an epistemological device stemming from how we cull and correlate information.
A. Neumaier said:
Idealizations always introduce uncertainty, and if this uncertainty is big because there is sensitive dependence on unmodeled degrees of freedom, then only probabilities (i.e., relative frequencies for a large sample) can be predicted.
I completely agree-- it's all about the degrees of freedom we choose to model. We create collapse, it is part of how we do science. All we should expect the equations of physics to do for us is to give us a diagonal density matrix. The rest comes from us. The Copenhagen view is there is no quantum ontology, MWI says there is no classical ontology, Bohm says there is both a classical and quantum ontology and they are both just the same, but I say all ontology is really epistemology in a convincing disguise.
 
  • #46
Ken G said:
The Copenhagen view is there is no quantum ontology, MWI says there is no classical ontology, Bohm says there is both a classical and quantum ontology and they are both just the same, but I say all ontology is really epistemology in a convincing disguise.
Whereas I assert an ontology that smoothly combines deterministic and stochastic, classical and quantum aspects without needing variables beyond orthodox quantum mechanics. This ontology is given by my thermal interpretation. The thermal interpretation simply spells out what is the hidden secret of all shut-up-and-calculate successes. It is consistent on every level and has all properties one can reasonably ask for.

If the predicted uncertainty of the value of an observable given by quantum mechanics is small, it is a reliable prediction for a single system. The larger the uncertainty is the more independent repetitions one needs to reduce the uncertainty of the statistics to a desired level of accuracy, according to the ##N^{-1/2}## rule for the law of large numbers.
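A small simulation of this scaling, with arbitrary illustrative numbers (Python/NumPy; the parameters are not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0   # single-measurement uncertainty (arbitrary illustrative value)

for N in (10, 100, 1000, 10000):
    # Spread of the sample mean over 2000 repetitions of an N-shot experiment,
    # compared with the sigma / sqrt(N) prediction.
    means = rng.normal(0.0, sigma, size=(2000, N)).mean(axis=1)
    print(N, round(float(means.std()), 4), round(sigma / np.sqrt(N), 4))
```

The empirical spread of the mean tracks ##\sigma/\sqrt{N}##, which is the ##N^{-1/2}## rule referred to above.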

There are no interpretation problems with experiments where the position outcome depends on how an electron spin turns out, since the predicted uncertainty is then large. Neither is there an interpretation problem for macroscopic observables, since under the usual classically predictable circumstances their quantum uncertainty is tiny.

Thus I am very satisfied with this interpretation. It gives me the feeling that I really understand quantum mechanics.
 
  • #47
Ken G said:
All we should expect the equations of physics to do for us is to give us a diagonal density matrix.
They do so only under very special circumstances (quantum measurement). More usually, the density operator remains non-diagonal in any reasonable basis.

But no matter whether or not diagonal, the diagonal entries encode probabilities of interest, and the expectations computed from the full density operator encode approximate values of measurable observables.
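A toy illustration with an arbitrarily chosen non-diagonal qubit density operator (Python/NumPy; the matrix is made up for the example):

```python
import numpy as np

# A non-diagonal qubit density operator, chosen only for illustration
rho = np.array([[0.7, 0.3],
                [0.3, 0.3]])

sigma_z = np.diag([1.0, -1.0])

print(np.diag(rho))                    # diagonal entries: outcome probabilities in this basis
print(float(np.trace(rho @ sigma_z)))  # expectation <sigma_z> = Tr(rho sigma_z) = 0.4
```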
 
  • #48
stevendaryl said:
But that's a weird perspective. When it comes to the two-particle composite system, Bob and Alice know everything there is to know about this system. It's described by a pure state, which is, for quantum mechanics, the maximum amount of information you can have about a system. To say that Bob's mixed state reflects his ignorance about his particle means that he knows less about a part of a system than he knows about the whole system.
That's just one more example of the fact that a completely determined state doesn't imply that all observables are determined. In this case the single-particle spins are even maximally uncertain (in the Shannon sense of information theory). Indeed, you know everything you can know about the total spin, namely that it's 0, but you lack as much information about the single-particle spins as is possible. That's QT in its purest form :-).
Actually, I read a paper once that described entanglement in exactly these terms. For a classical composite system, the entropy of the complete system has to be at least as great as the entropy of any of the components. But for quantum mechanics, this isn't always the case. For a two-particle entangled system, the entropy of the composite system can be zero, because you know exactly what the state is. But the entropy of the components can be nonzero.
Yes, here ##S=-\mathrm{Tr}(\hat{\rho}\ln\hat{\rho})=-\mathrm{Tr}\big(|\psi \rangle \langle \psi|\ln(|\psi \rangle \langle \psi|)\big)=0## (as for any proper pure state). The knowledge is maximal concerning the total spin of the two-spin system. For Bob's particle it's ##S_B=-\mathrm{Tr}(\hat{\rho}_B\ln\hat{\rho}_B)=\ln 2##.
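A quick numerical check of these values, again for the singlet state, with a small helper function written just for this sketch (Python/NumPy):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), evaluated from the eigenvalues of rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]          # treat 0 * ln 0 as 0
    return float(-np.sum(p * np.log(p)))

# Singlet state |psi> = (|01> - |10>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)                                      # pure composite state
rho_B = np.trace(rho.reshape(2, 2, 2, 2), axis1=0, axis2=2)   # trace out particle A

print(von_neumann_entropy(rho))    # 0.0             (pure state)
print(von_neumann_entropy(rho_B))  # 0.693... = ln 2 (maximally mixed qubit)
```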
 
  • #49
A. Neumaier said:
This is a severe problem for the knowledge interpretation of quantum mechanics, and only for that interpretation. This interpretation claims that a pure state gives the maximum knowledge one can have about a system, while a mixed state represents incomplete knowledge.

But this view is self-contradictory as your tracing out example shows. If you know everything about the whole system, it would imply that you know very little about the subsystem, while if you know everything about a subsystem but nothing about the remainder of the system, this cannot even be described in this model of knowledge.

Thus I reject the whole basis of your reasoning as it is conceptually unsound.

A sensible interpretation of quantum mechanics must not only assign meaning to the whole system but to all subsystems. Indeed, of complex systems we usually know a lot about subsystems but not so much about the whole system. My interpretation of a density operator satisfies this requirement (and is completely different from your conceptually inconsistent view).
The point is that quantum theory tells you that even if you have maximal possible knowledge about a system, you don't know the values of all possible observables. That's all the example shows.
 
  • #50
https://arxiv.org/abs/1405.3483
Steven Weinberg
Quantum Mechanics Without State Vectors
In this paper, SW proposes a formulation of QM based solely on density matrices.
Does this solve the problem? How is it similar or different to the AN formulation?
TIA Jim Graber
 
  • #51
stevendaryl said:
Probability enters in that a measurement of one component of an entangled system updates the probabilities associated with the other component.
Again, I have to ask, are you suggesting that probability is a dynamical variable in a physical process?

What you are describing as collapse is a change in the Hamiltonian. There is no physical wave function. It is a way of calculating probabilities that honours the physical symmetries and contains no dynamical information.
 
  • Like
Likes vanhees71
  • #52
Mentz114 said:
Again, I have to ask, are you suggesting that probability is a dynamical variable in a physical process?

What you are describing as collapse is a change in the Hamiltonian. There is no physical wave function. It is a way of calculating probabilities that honours the physical symmetries and contains no dynamical information.

I don't know what you're calling a change in the Hamiltonian. What Hamiltonian are you talking about? In an EPR-type experiment, I can imagine a number of Hamiltonians that might be relevant, but I don't see that any of them quite fit what you said above:
  1. The Hamiltonian describing the process for creating the twin pair.
  2. The Hamiltonian governing the pair as they travel from the source to the detector. (Usually, this is treated as free-particle propagation.)
  3. The Hamiltonian governing the interaction between the particles and the detectors.
 
  • #53
The way it seems to me is that you have two possibilities:
  1. Either a measurement reveals some pre-existing property of the system being measured, or
  2. The property doesn't exist before the measurement act, and the act of measurement causes the property to have a value. (This is the claim that microscopic systems don't have properties until they are measured.)
(I guess to be complete, I should include the Many-Worlds possibility, which is that systems can simultaneously have different values in different "possible worlds", and a measurement simply determines which branch you (or the measurement device) is in.)

Option #1 seems incompatible with Bell's theorem, and option #2 seems incompatible with locality, because Alice can remotely measure a property of Bob's particle. That's no problem, if measurement is just revealing a pre-existing property (#1), but seems like a nonlocal interaction if the measurement changes the system being measured (from an indefinite value to a definite value).
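As a minimal sketch of the tension with Bell's theorem, using the standard quantum singlet correlation ##E(a,b)=-\cos(a-b)## and the usual CHSH angle choices (Python; the angles are the textbook ones, assumed here for illustration):

```python
import numpy as np

def E(a, b):
    # Quantum singlet correlation for spin measurements along angles a and b
    return -np.cos(a - b)

# Standard CHSH angle choices (radians) that maximize the quantum value
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2.828... = 2*sqrt(2), above the local-realist bound of 2
```

Any assignment of fixed ##\pm 1## outcomes to all four settings before measurement keeps ##|S|\le 2##; the quantum correlations exceed that bound, which is the content of option #1's conflict with Bell's theorem.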
 
  • #54
stevendaryl said:
The way it seems to me is that you have two possibilities:
  1. Either a measurement reveals some pre-existing property of the system being measured, or
  2. The property doesn't exist before the measurement act, and the act of measurement causes the property to have a value. (This is the claim that microscopic systems don't have properties until they are measured.)
(I guess to be complete, I should include the Many-Worlds possibility, which is that systems can simultaneously have different values in different "possible worlds", and a measurement simply determines which branch you (or the measurement device) is in.)

Option #1 seems incompatible with Bell's theorem, and option #2 seems incompatible with locality, because Alice can remotely measure a property of Bob's particle. That's no problem, if measurement is just revealing a pre-existing property (#1), but seems like a nonlocal interaction if the measurement changes the system being measured (from an indefinite value to a definite value).
I don't understand how any of this is relevant to my question - 'are you suggesting that probability is a dynamical variable in a physical process?'.
You also seem to think all physics is EPR and Bell.
You've lost me. I won't partake further in this discussion because I don't understand what you are saying. You are making too many wrong assumptions to make sense to me. :frown:
 
  • #55
stevendaryl said:
The way it seems to me is that you have two possibilities:
  1. Either a measurement reveals some pre-existing property of the system being measured, or
  2. The property doesn't exist before the measurement act, and the act of measurement causes the property to have a value. (This is the claim that microscopic systems don't have properties until they are measured.)
(I guess to be complete, I should include the Many-Worlds possibility, which is that systems can simultaneously have different values in different "possible worlds", and a measurement simply determines which branch you (or the measurement device) is in.)

Option #1 seems incompatible with Bell's theorem, and option #2 seems incompatible with locality, because Alice can remotely measure a property of Bob's particle. That's no problem, if measurement is just revealing a pre-existing property (#1), but seems like a nonlocal interaction if the measurement changes the system being measured (from an indefinite value to a definite value).
Isn't there a third alternative?
3. There is a pre-existing property of the system being measured that is altered by the act of measurement.
 
  • #56
jimgraber said:
https://arxiv.org/abs/1405.3483
Steven Weinberg
Quantum Mechanics Without State Vectors
In this paper, SW proposes a formulation of QM based solely on density matrices.
Does this solve the problem? How is it similar or different to the AN formulation?
See https://www.physicsforums.com/posts/5419800 and the subsequent discussion. The most interesting aspect is that in the ##C^*##-algebra setting for interacting quantum fields (featuring factors of type ##III_1##), pure states do not even exist! This is quite unlike the situation in quantum mechanics of finitely many degrees of freedom and for free quantum fields.
 
  • #57
stevendaryl said:
The way it seems to me is that you have two possibilities:
  1. Either a measurement reveals some pre-existing property of the system being measured, or
  2. The property doesn't exist before the measurement act, and the act of measurement causes the property to have a value. (This is the claim that microscopic systems don't have properties until they are measured.)
These might be the only possibilities if the system were isolated - but then it would be unmeasurable. In the real world, where systems are open, there is a third, and actually realized, possibility:

3. A measurement reveals some preexistent property of the universe, but due to the approximations made in delineating a specific piece of the universe as the ''system'', the revealed property (a macroscopic pointer reading) can only be very imperfectly related to a property of the single system, resulting in an only stochastic description.

If one sees how the approximations come about and the mathematics behind the approximation process (rather than only the idealized end result), this is indeed the way it happens both in classical and in quantum mechanics.
 
  • Like
Likes Mentz114
  • #58
A. Neumaier said:
They do so only under very special circumstances (quantum measurement). More usually, the density operator remains non-diagonal in any reasonable basis.
I wouldn't call it "very special circumstances" when those are the only circumstances we ever test! Everything else is demonstrably just a stepping stone to the laws of physics giving us something we can test, so that's what I mean when I say "all we can expect those laws to give us."
 
  • #59
A. Neumaier said:
Whereas I assert an ontology that smoothly combines deterministic and stochastic, classical and quantum aspects without needing variables beyond orthodox quantum mechanics. This ontology is given by my thermal interpretation.
But to me, it doesn't sound like an ontology at all-- it sounds like an epistemology only! It does sound like exactly the epistemology we actually use, so it's very much what I'm talking about-- it is not a law of physics in the conventional sense, because it does not describe an ontology, it describes what we will get if we analyze information in a given way, which is just the way we do it.
A. Neumaier said:
Thus I am very satisfied with this interpretation. It gives me the feeling that I really understand quantum mechanics.
I would say you understand how to use quantum mechanics to get it to do for you what you want it to do for you, which is to approximately predict observations. Whether you attribute the inherent uncertainty to the observation or to the system doesn't really matter, you are asserting a fundamental disconnect between the two that we could never test or pinpoint. So it sounds to me like your comfort with it comes from not attempting to create an ontology at all, it's ducking that need-- and I'm saying that's exactly the way to get comfortable with any theory. Ontologies always create discomfort unless one doesn't dig into them too deeply. But if you want to regard your epistemological formulation as an ontology instead, it seems to me it needs to address this question: why are the observations inherently approximate?
Indeed, I see that you have already answered that just above:
A. Neumaier said:
A measurement reveals some preexistent property of the universe, but due to the approximations made in delineating a specific piece of the universe as the ''system'', the revealed property (a macroscopic pointer reading) can only be very imperfectly related to a property of the single system, resulting in an only stochastic description.
I would claim that the epistemological foundations of that statement are clear, one merely cuts out the first phrase before the comma and the other parts that have no direct connection to what is actually being done by the physicist. I agree with the rest-- we choose how to correlate and bin the information at our disposal, and the way we do that generates concepts like "systems" and "properties", none of which need exist anywhere but in our heads. It is what we are doing with the information that creates the collapse, we can use the formalism to understand the generation of a diagonal density matrix in a straightforward way, and that's all it is needed for.
 
Last edited:
  • #60
Neumaier: does this old post of yours describe an aspect of your thermal interpretation, a consequence of it, or is it an addition?

A. Neumaier said:
To be able to discuss why I find the assumptions of Bell far too strong, let me distinguish two kinds of causality: extended causality and separable causality. Both kinds of causality are manifestly local Lorentz invariant and imply a signal speed bounded by the speed of light. Here a signal is defined as a dependence of measured results at one spacetime point caused by a preparation at another spacetime point.

Separable causality is what is assumed in Bell-type theorems, and is thereby excluded by the standard experiments (assuming that all other conditions used in the derivation of such theorems hold in Nature). On the other hand, extended causality is far less demanding, and therefore is not excluded by the standard arguments.

To define these two kinds of causality I use the following terminology. A point object has, at any given time in any observer's frame, properties only at a single point, namely the point in the intersection of its world line and the spacelike hyperplane orthogonal to the observer's 4-momentum at the time (in the observer frame) under discussion. An extended object has properties that, in some observer frames at some time depend on more than one space-time position. A joint property is a property that explicitly depends on more than one space-time location within the space-time region swept out by the extended object in the course of time.

Both kinds of causality agree on the causality properties of point objects (''point causality'') but differ on the causality properties of extended objects. Extended causality takes into account what was known almost from the outset of modern quantum mechanics - that quantum objects are intrinsically extended and must be treated as a whole. This is explicitly expressed in Bohr's writing (N. Bohr, On the notions of causality and complementarity, Dialectica 2 (1948), 312. Reprinted in Science, New Ser. 111 (1950), 51-54.):
(Thanks to Danu for locating this quote!)

Here are the definitions:
  • Point causality: Properties of a point object depend only on its closed past cones, and can influence only its closed future cones.
  • Extended causality: Joint properties of an extended object depend only on the union of the closed past cones of their constituent parts, and can influence only the union of the closed future cones of their constituent parts.
  • Separable causality: Joint properties of an extended object consist of the combination of properties of their constituent points.
I believe that only extended causality is realized in Nature. It can probably be derived from relativistic quantum field theory. If this is true, there is nothing acausal in Nature. In any case, causality in this weaker, much more natural form is not ruled out by current experiments.

Thanks.
 
  • #61
Here is why I don't think the thermal interpretation should count as an ontology. As I understand it, if you have a hydrogen atom making a transition at the end of the era of recombination, then it produces a photon amplitude that starts spreading out throughout the universe, with a relatively low chance of interaction over most of the surface of a sphere that by now extends to tens of billions of light years in radius. When astronomers on Earth measure the arrival of that photon, the normal view is that its wavefunction "collapses" on the Earth. It sounds like Dr. Neumaier is arguing that what we do on Earth is a position measurement that is highly approximate, so although the photon wavefunction did indeed extend over much of the visible universe, our measurement localized it in our telescope out of a kind of measurement inaccuracy that could not detect the true spatial extent of that photon.

Now, I admit that we are postulating the occurrence of a vastly unlikely individual event, that this particular photon should be detected in that tiny telescope has a truly minuscule probability, so somehow we are trading off the tiny chance of that particular photon (the indistinguishability of photons is of no significance here, the thermal interpretation can be applied similarly in a hypothetical universe where photons are distinguishable) being detected in that telescope against the vast number of possible photons that could have been detected, and this justifies an extremely unlikely hypothetical.

But a quantum ontology that blames the uncertainty on the inaccuracy of the measurement must hold that any of those photons could have been detected anywhere in the universe. Now, that's a pretty darn inaccurate position measurement! Can we really say that is an ontology, can we claim we have an ontological description that says measurements are really that inaccurate, or must any ontology worthy of the name say that those photons really could have been detected anywhere on that huge sphere because they really could have been, in some sense, at that location on that sphere-- despite the impossibility of locally constrained unitary evolution giving such localization? I don't even see how MWI handles that case, it seems like no telescope could ever be involved in a unitary evolution that decoheres a wavefunction ten billion light years away.

On the other hand, if we treat the situation epistemologically only, we can just say that the wave function is a mathematical device for determining the probability that a given telescope will detect a given photon. The photon doesn't have a location until we say how we are going to define our meaning of its location, and that involves a position measurement that is correlated against all the other information we have in the problem, such as the information that was claimed in the scenario: a given atom emits a given photon. We don't have to say what the photon's position was prior to the measurement, because we don't have an ontology, and there is not any prescription in place for giving the photon a location except at the telescope. Epistemologically, any question never asked is also never answered, that's the difference between epistemology and ontology. We live in a universe where all is information, not because information is ontology, but because physics is epistemology.
 
Last edited:
  • #63
Ken G said:
I don't even see how MWI handles that case, it seems like no telescope could ever be involved in a unitary evolution that decoheres a wavefunction ten billion light years away.
I don't see any difficulty here. Coherence or decoherence is a property of the state as a whole, which exists in Hilbert space and is inherently nonlocal in physical space. The practical effects of decoherence (loss of quantum interference, etc.) can only be felt once the various regions come into causal contact. So whether you describe the state as "decohered" or not can depend on which spacelike slice you look at; it has no physical significance until there is potential for interaction between the "branches".
In terms of the EPR setup: Alice's & Bob's measurements each "decohere" the entire state, but this is physically meaningless until the two results can be compared.
 
  • #64
I see what you're saying: MWI would allow a decohered subspace that regards the density matrix as locally diagonal, calling that a "world" by allowing that subspace to create a new normalization for that subspace in which the probability is "1", even though in the "master wavefunction" most of the density matrix is still not diagonal and attributes a minute probability to that decohered subspace. It is as though in our "world", the wavefunction is "collapsed" (in both senses of decohered and given a new probability of 1), but in most other "worlds", the wavefunction remains uncollapsed (in the broader sense of not even decohered). So MWI can handle that, because it is indeed an ontology-- albeit a bizarre one. But it seems even more bizarre to call it an ontology that holds that purely unitary evolution occurred, and it was just a very inaccurate measurement that attributes a location to a particle that is, in fact in the thermal ontology, still spread all over the universe. Maybe that's no less bizarre than MWI, since both say that we are in some sense vastly misinformed about the true state of things, but MWI seems more like an ontology in the sense that it does not sweep "off the page" the "other worlds", it holds that they still exist in the mathematical description. The thermal interpretation seems to focus so much on what we are making of our measurements that I don't really see what the claim is about the "true ontology" of that photon. (Of course I don't subscribe to the existence of a true ontology of anything, as I think that the intersection of ontology and science equals epistemology, so I regard ontology as akin to a religious belief. I do, like everyone, create a kind of mental ontology to help me picture what is going on, I just don't take it seriously.)
 
  • #65
Ken G said:
an ontology that holds that purely unitary evolution occurred, and it was just a very inaccurate measurement that attributes a location to a particle that is, in fact in the thermal ontology, still spread all over the universe.
I don't have any understanding of the "thermal interpretation", but this description certainly sounds wrong. A photon that was absorbed here will definitely not be detected anywhere else (except possibly in a different "world").
 
  • #66
Ken G said:
You're dealt a hand in cards, and it could be anything, but you pick the hand up and look at it, and now you have new information.
But the new information existed before you looked at it. The new information is discovered, not brought into existence by the act of looking.
 
  • #67
David Lewis said:
But the new information existed before you looked at it. The new information is discovered, not brought into existence by the act of looking.
Any time the density matrix is decohered, so diagonalized, everything that happens will be consistent with saying the information pre-existed, and everything that happens will be consistent with saying it was discovered. It is purely our preference of philosophy that chooses the former. Nevertheless, I agree that is a natural choice to make, my point is only that the physicist never uses that choice over the other, it is always irrelevant to the tests that are done. All we need is that there is completely successful correlation between all the measurements that follow with saying that the density matrix is diagonal. So what I'm saying is that the MWI already existed for classical mechanics, and suffered no contradictions with observations-- it was just not a popular choice of interpretation in that context.

But more to the point of the thermal interpretation, it seems to me that interpretation is saying that when you look at the cards you are dealt, and do a "measure the card's identity" operation on it, your measurement is subject to significant uncertainty. Hence you might think the card identity is quite a bit different from what it actually is, but this error propagates through all other measurements of that and all other cards, so everyone is similarly mistaken. Only when you repeat the measurement many times do you see the full range of cards that your measurement could have shown, and that is the actual uncertainty in your measurement, not a reflection of the various different states of the reality. For if it is different states of the reality, then the measurement does not have any error, and the distribution of outcomes must be inherent in the ensemble, making the result a type of "collapse" on the actual state of each card. That is if I understand the interpretation correctly, mapped from position measurements of a broad wavefunction onto card identities after a shuffle produces a diagonal density matrix.
 
  • #68
Ken G said:
Any time the density matrix is decohered... everything that happens will be consistent with saying the information pre-existed, and everything that happens will be consistent with saying it was discovered.
When a photon has two paths to get to a detector screen, whether it travels as a wave or as a particle is not information that pre-exists waiting to be discovered before observations are made.
 
  • #69
David Lewis said:
When a photon has two paths to get to a detector screen, whether it travels as a wave or as a particle is not information that pre-exists waiting to be discovered before observations are made.
The situation you describe is not treated with a diagonal density matrix, so is not relevant to my statement. But the spots made on the screen are, so they would be relevant.
 
Last edited:
  • #70
Ken G said:
Any time the density matrix is decohered, so diagonalized, everything that happens will be consistent with saying the information pre-existed, and everything that happens will be consistent with saying it was discovered.

Right. That's the frustrating (for me) thing about quantum measurements. On the one hand, whatever it is that we measure, it's as if it always had that value, and we're just discovering it. On the other hand, Bell's theorem shows that it can't be the case that every quantity that we might measure has a pre-existing value. (Or at least, it's impossible to make sense of such a thing using standard reasoning about probabilities).
 
