
The Fundamental Difference in Interpretations of Quantum Mechanics


A topic that continually comes up in discussions of quantum mechanics is the existence of many different interpretations. Not only are there different interpretations, but people often get quite emphatic about the one they favor, so that discussions of quantum mechanics can easily turn into long arguments. Sometimes this even reaches the point where proponents of a particular interpretation claim that anyone who doesn’t believe it is “idiotic”, or apply some other extreme term. This seems a bit odd given that all of the interpretations use the same theoretical machinery of quantum mechanics to make predictions, and therefore whatever differences there are between them are not experimentally testable.

In this article, I want to present what I see as a fundamental difference in interpretation that I think is at the heart of many of these disagreements and arguments.

I take no position on whether either of the interpretations I will describe is “right” or “wrong”; my purpose here is not to argue for either one but to try to explain the fundamental beliefs underlying each one to people who hold the other. If people are going to disagree about interpretations of quantum mechanics, which is likely to continue until someone figures out a way of extending quantum mechanics so that the differences in interpretations become experimentally testable, it would be nice if they could at least understand what they are disagreeing about instead of calling each other idiots. This article is an attempt to make some small contribution towards that goal.

The fundamental difference that I see is how to interpret the mathematical object that describes a quantum system. This object has various names: quantum state, state vector, wave function, etc. I will call it the “state” both for brevity and to avoid adopting any specific mathematical framework since they’re all equivalent anyway. The question is, what does the state represent? The two fundamentally different interpretations give two very different answers to this question:

(1) The state is only a tool that we use to predict the probabilities of different results for measurements we might choose to make of the system. Changes in the state represent changes in the predicted probabilities; for example, when we make a measurement and obtain a particular result, we update the state to reflect that observed result, so that our predictions of probabilities of future measurements change.

(2) The state describes the physically real state of the individual quantum system; the state allows us to predict the probabilities of different results for measurements because it describes something physically real, and measurements do physically real things to it. Changes in the state represent physically real changes in the system; for example, when we make a measurement, the state of the measured system becomes entangled with the state of the measuring device, which is a physically real change in both of them.
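
Whichever answer one favors, the predictive machinery itself is identical. As a concrete illustration of that shared machinery, here is a minimal sketch (assuming a single qubit measured in the computational basis; the illustration, not the physics, is an addition for reference):

```python
import numpy as np

# State |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
psi = np.array([0.6, 0.8j])

# Born rule: probability of outcome k is |<k|psi>|^2.
probs = np.abs(psi) ** 2
print(probs)  # [0.36 0.64]

# Update after observing outcome 1: project onto |1> and renormalize.
outcome = 1
projected = np.zeros_like(psi)
projected[outcome] = psi[outcome]
psi_after = projected / np.linalg.norm(projected)
print(psi_after)  # [0.+0.j 0.+1.j]
```

Answer #1 reads the last step as bookkeeping, an update of our predictive tool; answer #2 reads it as an effective description of a physically real change. The numbers are the same either way.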

(Some people might want to add a third answer: the state describes an ensemble of a large number of similar systems, rather than a single system. For purposes of this discussion, I am considering this to be equivalent to answer #1, because the state does not describe the physically real state of a single system, and the role of the ensemble is simply to enable a frequentist interpretation of the predicted probabilities.)

(Note: Originally, answer #1 above talked about the state as describing our knowledge of the system. However, the word “knowledge” is itself open to various interpretations, and I did not intend to limit answer #1 to just “knowledge interpretations” of quantum mechanics; I intended it to cover all interpretations that do not view the state as directly describing the physically real state of the system.)

The reason there is a fundamental problem with the interpretation of quantum mechanics is that each of the above answers, while it contains parts that seem obviously true, leads us, if we take it to its logical conclusion, to a place that doesn’t make sense. There is no choice that gives us just a set of comfortable, reasonable statements that we can easily accept as true. Picking an interpretation requires you to decide which of the obviously true things seems more compelling and which ones you are willing to give up, and/or which of the places that don’t make sense is less unpalatable to you.

For #1, the obviously true part is that we can never directly observe the state, and we can never make deterministic predictions about the results of quantum experiments. That makes it seem obvious that the state can’t be the physically real state of the system; if it were, we ought to be able to pin it down and not have to settle for merely probabilistic descriptions. But if we take that idea to its logical conclusion, it implies that quantum mechanics must be an incomplete theory; there ought to be some more complete description of the system that fills in the gaps and allows us to do better than merely probabilistic predictions. And yet nobody has ever found such a more complete description, and all indications from experiments (at least so far) are that no such description exists; the probabilistic predictions that quantum mechanics gives us really are the best we can do.

For #2, the obviously true part is that interpreting the state as physically real makes quantum mechanics work just like all the other physical theories we’ve discovered, instead of being a unique special case. The theoretical model assigns the system a state that reflects, as best we can in the model, the real physical state of the real system. But if we take this to its logical conclusion, it implies that the real world is nothing like the world that we perceive. We perceive a single classical world, but the state that QM assigns is a quantum superposition of many worlds. We perceive a single definite result for measurements, but the state that QM assigns is a quantum superposition of all possible results, entangled with all possible states of the measuring device, and of our own brains, perceiving all the different possible results.

Again, my purpose here is not to pick either one of these and try to argue for it. It is simply to observe that, as I said above, no matter which one you pick, #1 or #2, there are obvious drawbacks to the choice, which might reasonably lead other people to pick the other one instead. Neither choice is “right” or “wrong”; both are just best guesses, based on, as I said above, which particular tradeoff one chooses to make between the obviously true parts and the unpalatable parts. And we have no way of resolving any of this by experiment, so we simply have to accept that both viewpoints can reasonably coexist at the present state of our knowledge.

I realize that pointing all this out is not going to stop all arguments about interpretations of quantum mechanics. I would simply suggest that, if you find yourself in such an argument, you take a moment to step back and reflect on the above and realize that the argument has no “right” or “wrong” outcome and that the best we can do at this point is to accept that reasonable people can disagree on quantum mechanics interpretations and leave it at that.

194 replies
  1. bhobba says:
    Auto-Didact

    You somehow manage to twist and misunderstand everything I say. Nowhere did I imply that we still use the same first principles that Newton used; I said that physicists still use derivation from first principles as invented by Newton.

    And I could also say the same thing.

    Didn't you get what was being implied – we do not use the same methods as Newton because they do not work. We can't elucidate those 'first principles' you talk about, even for such a simple thing as what time is.

    The modern definition of time is – it's what a clock measures.

    Wow, what a great revelation – but as a first principle, well, it's not saying much, is it, beyond common sense – basically, things called clocks exist and they measure this thing called time. It does however have some value – it stops people trying to do what Newton tried – and failed.

    Want to know what the 'first principles' of modern classical mechanics are:

    1. The principle of least action.
    2. The principle of relativity.

    Now, if what you say is true then you should be able to state those in your 'first principles' form. I would be very interested in seeing them. BTW 1. follows from QM – but that is just by the by – I even gave a non-rigorous proof – see post 3:
    https://www.physicsforums.com/threa…fication-of-principle-of-least-action.881155/
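
    For concreteness, here is one standard textbook statement of 1. (a sketch added for reference; the standard variational form, not necessarily the form bhobba has in mind): the physical trajectory is the one that makes the action stationary, from which the Euler–Lagrange equations follow.

    ```latex
    S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\,dt, \qquad
    \delta S = 0 \;\Longrightarrow\;
    \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0
    ```

    Note that even this compact statement presupposes notions like 'Lagrangian' and 'stationary', which resist translation into plain, non-mathematical first principles.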

    You will find it doesn't matter what you do: at the end of the day you end up with very vague statements or, when looked at deeply enough, even nonsensical ones. That's why it's expressed in mathematical form, with some terms left to just common sense in how you apply them. In fact that's what Feynman was alluding to at the end of the second video you posted. Physics is not mathematics but is written in the language of math. How do you go from one to the other? Usually common sense. But if you want to go deeper then you end up in a philosophical morass that we do not discuss here.

    Thanks
    Bill

  2. Auto-Didact says:
    bhobba

    Newton – first principles – well, let's look at those, shall we:
    Absolute, true and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration: relative, apparent and common time, is some sensible and external (whether accurate or unequable) measure of duration by the means of motion, which is commonly used instead of true time …

    What utter nonsense, and there is zero doubt Feynman would agree. I know those videos you posted very well and they say nothing of the sort.

    With all due respect to Newton, of course, whose path, as Einstein said, was really the only one a man of the highest intellect could take in his time.

    But things have moved on.

    Thanks
    Bill

    You somehow manage to twist and misunderstand everything I say. Nowhere did I imply that we still use the same first principles that Newton used; I said that physicists still use derivation from first principles as invented by Newton, i.e. we still use the method, which Newton invented, of mathematically deriving physical laws from first principles. Before Newton there simply was no such approach to physics and therefore no true inkling of physical law; in this sense it can be said that Newton invented (mathematical) physics.

    In the video Feynman makes it crystal clear that the physics approach to mathematics, and how mathematics is used in physics to derive laws from principles and vice versa, is as far from formalist mathematician-type axiomatic mathematics as can be.

  3. bhobba says:
    Auto-Didact

    I feel I need to expand on this by explaining what exactly the difference is between a formal axiomatization as is customarily used in contemporary mathematics since the late 19th/early 20th century and a derivation from first principles as was invented by Newton

    Newton – first principles – well, let's look at those, shall we:
    Absolute, true and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration: relative, apparent and common time, is some sensible and external (whether accurate or unequable) measure of duration by the means of motion, which is commonly used instead of true time …

    What utter nonsense, and there is zero doubt Feynman would agree. I know those videos you posted very well and they say nothing of the sort.

    With all due respect to Newton, of course, whose path, as Einstein said, was really the only one a man of the highest intellect could take in his time.

    But things have moved on.

    Thanks
    Bill

  4. bhobba says:
    AlexCaledin

    … one of the most important features of the development and the analysis of modern physics is the experience that the concepts of natural language, vaguely defined as they are, seem to be more stable in the expansion of knowledge than the precise terms of scientific language, derived as an idealization from only limited groups of phenomena.

    To be fair, he did not write that in light of later developments, which show the exact opposite to an even greater degree than was then known. But even then they knew the work of Wigner and Noether, which showed it most definitely is NOT true. It can only be expressed in the language of math.

    If you think otherwise state Noether's theorem in plain English without resorting to technical concepts that can only be expressed mathematically. Here is the theorem: Noether's first theorem states that every differentiable symmetry of the action of a physical system has a corresponding conservation law.

    Mathematical concepts used – differentiable symmetry and action. If you can explain it in plain English – be my guest.
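
    For reference, a common point-mechanics form of the theorem (a sketch, assuming the Lagrangian itself is invariant, with no boundary term): if L(q, q̇, t) is unchanged under the infinitesimal transformation q_i → q_i + εK_i(q), then the quantity

    ```latex
    Q = \sum_i \frac{\partial L}{\partial \dot{q}_i} K_i(q), \qquad \frac{dQ}{dt} = 0
    ```

    is conserved along solutions of the equations of motion. Even this simplified statement leans on 'Lagrangian', 'derivative', and 'solution of the equations of motion', which is exactly the point being made.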

    Thanks
    Bill

  5. Auto-Didact says:
    Auto-Didact

    As for the axiomatic treatment a la Ballentine, I believe the others have answered that adequately, but I will reiterate my own viewpoint: that is a mathematical axiomatization made in a similar vein to measure-theoretic probability theory, not a physical derivation from first principles.

    @bhobba I feel I need to expand on this by explaining what exactly the difference is between a formal axiomatization, as customarily used in contemporary mathematics since the late 19th/early 20th century, and a derivation from first principles, as invented by Newton and still customarily used, practically unaltered, in physics up to this day. I will once again let Feynman do the talking, so just sit back and relax:

    I hope this exposition makes things somewhat more clear. If it doesn't, well…

  6. vanhees71 says:
    AlexCaledin

    From Physics and Philosophy by Werner Heisenberg

    … one of the most important features of the development and the analysis of modern physics is the experience that the concepts of natural language, vaguely defined as they are, seem to be more stable in the expansion of knowledge than the precise terms of scientific language, derived as an idealization from only limited groups of phenomena. This is in fact not surprising since the concepts of natural language are formed by the immediate connection with reality; they represent reality. It is true that they are not very well defined and may therefore also undergo changes in the course of the centuries, just as reality itself did, but they never lose the immediate connection with reality. On the other hand, the scientific concepts are idealizations; they are derived from experience obtained by refined experimental tools, and are precisely defined through axioms and definitions. Only through these precise definitions is it possible to connect the concepts with a mathematical scheme and to derive mathematically the infinite variety of possible phenomena in this field. But through this process of idealization and precise definition the immediate connection with reality is lost. The concepts still correspond very closely to reality in that part of nature which had been the object of the research. But the correspondence may be lost in other parts containing other groups of phenomena.

    Well, I think the opposite is true. With the refined means of the scientific effort we come closer and closer to reality. Our senses and "natural language" are optimized to survive under the specific "macroscopic" circumstances on Earth, but not necessarily to understand realms of reality which are much different in scale from the one relevant for our survival, like the microscopic scale of atoms, atomic nuclei, and subatomic/elementary particles, or the very large scale of astronomy and cosmology. It's very natural to expect that our "natural language" is unsuitable to describe, let alone in some sense understand, what's going on at these vastly different scales. As proven by evidence, the most efficient way to communicate about and to some extent understand nature on various scales is mathematics, and as with natural languages, learning a new language is never a loss but always a gain in understanding and experience.

  7. stevendaryl says:
    Lord Jestocost

    Bohr and Heisenberg were confronted with simple materialistic views that prevailed in the natural science of the nineteenth century and which were still held during the development of quantum theory by, for example, Einstein. What you call “philosophical ballast” are in the end nothing but attempts to explain to Einstein and others that the task of “Physics” is not to promote concepts of materialistic philosophy.

    What's funny (to me) about the anti-philosophy bent of so many physicists is that many of them actually do have deeply-held philosophical beliefs, but they prefer to only use the word "philosophy" to apply to philosophies that are different from their own.

  8. stevendaryl says:
    ohwilleke

    I'm not quite clear on why it is that if "we can never make deterministic predictions about the results of quantum experiments" that this implies non-reality, as opposed, for example, to a system that is chaotic (in the sense of having dynamics that are highly sensitive to slight changes in initial conditions) with sensitivity to differences in initial conditions that aren't merely hard to measure, but are inherently and theoretically impossible to measure because measurement is theoretically incapable of measuring both location and momentum at the scale relevant to the future dynamics of a particle.

    Well, Bell's theorem was the result of investigating exactly this question: How do we know that the indeterminacy of QM isn't due to some unknown dynamics that are just too complicated to extract deterministic results from? The answer is: As long as the dynamics is local (no instantaneous long-range interactions), you can't reproduce the predictions of QM this way.

    The use of the word "realism" is a little confusing and unclear. But if you look at Bell's proof, a local, realistic theory is one satisfying the following conditions:

    1. Every small region in space has a state that possibly changes with time.
    2. If you perform a measurement that involves only a small region in space, the outcome can depend only on the state of that region (or neighboring regions); it can't depend on the state in distant regions.

    Why is the word "realism" associated with these assumptions? Well, let's look at a classical example from probability to illustrate:

    Suppose you have two balls, a red ball and a black ball. You place each of them into an identical white box and close it. Then you mix up the two boxes. You give one box to Alice, and another box to Bob, and they go far apart to open their boxes. We can summarize the situation as follows:

    • The probability that Alice will find a red ball is 50%.
    • The probability that Alice will find a black ball is 50%.
    • The probability that Bob will find a red ball is 50%.
    • The probability that Bob will find a black ball is 50%.
    • The probability that they both will find a black ball is 0.
    • The probability that they both will find a red ball is 0.

    If you consider the probability distribution to be a kind of "state" of the system, then this system violates locality: The probability that Bob will find a red ball depends not only on the state of Bob and his box, but also on whether Alice has already found a red ball or a black ball. So this is a violation of condition 2 in my definition of a local realistic theory.

    However, the correct explanation for this violation is that classical probability is not a realistic theory. To say that Bob's box has a 50% probability of producing a red ball is not a statement about the box; it's a statement about Bob's knowledge of the box. A realistic theory of Bob's box would be one that describes what's really in the box, a black ball or a red ball, and not Bob's information about the box. Of course, Bob may have no way of knowing what his box's state is, but after opening his box and seeing that it contains a red ball, Bob can conclude, using a realistic theory, "The box really contained a red ball all along, I just didn't know it until I opened it."
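
    To make the contrast concrete, here is a minimal simulation of the box example (a sketch added for illustration, not part of the original post), in which the ball colors are fixed before anyone looks, i.e. a "realistic" model in the above sense:

    ```python
    import random

    trials = 100_000
    alice_red = bob_red = both_red = 0

    for _ in range(trials):
        # Realism: each box has a definite content before anyone opens it.
        alice_ball, bob_ball = random.choice([("red", "black"), ("black", "red")])
        alice_red += alice_ball == "red"
        bob_red += bob_ball == "red"
        both_red += (alice_ball == "red") and (bob_ball == "red")

    print(alice_red / trials)  # ~0.5
    print(bob_red / trials)    # ~0.5
    print(both_red / trials)   # exactly 0: the colors were fixed all along
    ```

    The perfect anticorrelation requires no influence of Alice's opening on Bob's box; it is carried entirely by the pre-existing assignment, which is exactly what "realistic" means here.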

    In a realistic theory, systems have properties that exist whether or not anyone has measured them, and measuring just reveals something about their value. (I wouldn't put it as "a measurement reveals the value of the property", because there is no need to assume that the properties are in one-to-one correspondence with measurement results. More generally, the properties influence the measurement results, but may not necessarily determine those results, nor do the results need to uniquely determine the properties).

    Bell's notion of realism is sort of the opposite of the idea that our observations create reality. Reality determines our observations, not the other way around.
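
    For comparison, a short calculation of the CHSH quantity (a sketch added for illustration, using the standard quantum singlet correlation E(a,b) = -cos(a-b)). Any theory satisfying conditions 1 and 2 above obeys |S| <= 2, while QM predicts 2*sqrt(2):

    ```python
    import numpy as np

    def E(a, b):
        # Quantum prediction for the spin-1/2 singlet correlation
        # at analyzer angles a and b.
        return -np.cos(a - b)

    # Standard CHSH angle choices.
    a, ap = 0.0, np.pi / 2
    b, bp = np.pi / 4, 3 * np.pi / 4

    S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
    print(abs(S))  # 2.828... = 2*sqrt(2), exceeding the local-realist bound of 2
    ```

    No assignment of pre-existing outcomes of the red-ball/black-ball kind can produce |S| > 2, which is the content of Bell's theorem.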

  9. Lord Jestocost says:
    vanhees71

    ….. superfluous philosophical ballast

    Bohr and Heisenberg were confronted with simple materialistic views that prevailed in the natural science of the nineteenth century and which were still held during the development of quantum theory by, for example, Einstein. What you call “philosophical ballast” are in the end nothing but attempts to explain to Einstein and others that the task of “Physics” is not to promote concepts of materialistic philosophy.

  10. lightarrow says:
    vanhees71

    Yes, and in 1926ff we learnt that this is a misleading statement. The explanation of the photoelectric effect on the level of Einstein's famous 1905 paper does not necessitate the quantization of the electromagnetic field but only of the (bound) electrons.

    https://www.physicsforums.com/insights/sins-physics-didactics/

    I didn't want to be the one to say that, and I was pretty sure you or Arnold would have corrected me :smile:
    Thanks.

    lightarrow

  11. AlexCaledin says:

    From Physics and Philosophy by Werner Heisenberg

    … one of the most important features of the development and the analysis of modern physics is the experience that the concepts of natural language, vaguely defined as they are, seem to be more stable in the expansion of knowledge than the precise terms of scientific language, derived as an idealization from only limited groups of phenomena. This is in fact not surprising since the concepts of natural language are formed by the immediate connection with reality; they represent reality. It is true that they are not very well defined and may therefore also undergo changes in the course of the centuries, just as reality itself did, but they never lose the immediate connection with reality. On the other hand, the scientific concepts are idealizations; they are derived from experience obtained by refined experimental tools, and are precisely defined through axioms and definitions. Only through these precise definitions is it possible to connect the concepts with a mathematical scheme and to derive mathematically the infinite variety of possible phenomena in this field. But through this process of idealization and precise definition the immediate connection with reality is lost. The concepts still correspond very closely to reality in that part of nature which had been the object of the research. But the correspondence may be lost in other parts containing other groups of phenomena.

  12. vanhees71 says:
    Fra

    I find the different perspectives interacting here truly entertaining.

    We can probably agree that physics is not logic, nor mathematics, nor is it philosophy. But all the ingredients are needed; this is why I think physics is so much more fun than pure math.

    I think Neumaier said this already elsewhere, but there is also a difference between progressing science, or creating sensible new hypotheses, and applying mature science to technology. It's not a coincidence that the founders of quantum theory seemed to be very philosophical, and that the people who some years later formalized and cleaned up the new ideas were less so. I think it is deeply unfair to somehow suggest that founders like Bohr or Heisenberg were somehow inferior physicists compared to those who worked out the mathematical formalism better, in an almost axiomatic manner. This is not so at all! I think all the ingredients are important. (Even if no one said the word inferior, it's easy to get almost that impression: that the hard-core guys do math, and the others do philosophy.)

    On the other hand, RARE are those people who can span the whole range! It takes quite some "dynamic" mindset, not only to understand complex mathematics, but also the importance of sound reasoning and how to create feedback between abstraction and fuzzy reality. If you narrow in too much anywhere along this scale you are unavoidably going to miss the big picture.

    As for "wild", I think for some pure theorists and philosophers even a soldering iron may be truly wild stuff! Who knows what can go wrong? You burn tables, books and fingers. Leave it to experts ;-)

    /Fredrik

    I'd say Bohr (in 1912) and Heisenberg (in 1925) were very important in discovering the new theory and giving the more mathematically oriented people the ideas to work out. To study Bohr and Heisenberg is historically very interesting, but it's not a good way to learn quantum theory, since their writings tend to clutter the physics with superfluous philosophical ballast which confuses the subject more than it helps to understand it. Of course, you need all kinds of thinkers to make progress in physics, and the philosophical or more intuitive kind like Bohr and Heisenberg is in no way inferior to the physics/math or more analytical type like Dirac or Pauli. A singular exception is Einstein, who was both: on the one hand he was very intuitive, but he also knew about the necessity of a clear analytical mathematical formulation of the theory, for which part he usually had mathematical help from his collaborators (like Großmann in the case of GR).

  13. vanhees71 says:
    lightarrow

    If the experimental-physicist tells the theoretical-physicist /how/ to measure something, the latter tells the former /what/ he is actually measuring. :-)
    Example: before 1905, experimental-physicists performing photoelectric-effect experiments were measuring "light"; after, with A. Einstein, they were measuring "photons".


    lightarrow

    Yes, and in 1926ff we learnt that this is a misleading statement. The explanation of the photoelectric effect on the level of Einstein's famous 1905 paper does not necessitate the quantization of the electromagnetic field but only of the (bound) electrons.

    https://www.physicsforums.com/insights/sins-physics-didactics/
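
    For reference, the relation at issue is Einstein's photoelectric equation for the maximum kinetic energy of an ejected electron, with ν the light frequency and W the work function (a standard relation, added here for context):

    ```latex
    E_{\max} = h\nu - W
    ```

    A semiclassical treatment, with quantized bound electrons and a classical electromagnetic field, reproduces this relation without quantizing the field, which is the point being made.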

  14. Fra says:
    Auto-Didact

    My viewpoint is that deduction and induction are a false dichotomy, for there is an excluded middle, namely Peirce's abduction. Abduction has historically gotten a bad reputation due to it actually being an example of fallacious reasoning, but even so, it seems to be an effective way of thinking; only a puritan logicist would try to insist that fallacious reasoning was outright forbidden, but I digress.

    Induction may be necessary to generalize and so generate hypotheses, but inference to the best explanation, i.e. abduction or just bluntly guessing (in perhaps a Bayesian manner), is the only way to actually select a hypothesis from a multitude of hypotheses which can then be compared to experiment; if the guessed hypothesis turns out to be false, just rinse and repeat.

    You are right, of course; there is much to elaborate here! (My focus was not on strict dichotomies or not, just a quick comment that this question belongs to a general analysis of inferences.)

    But a more elaborate treatment would risk diverging. In science, abductive reasoning is indeed the right term for "induction of a rule". The exact relation here, and the relevance to physical law and the interaction between information-processing agents, is exactly my core focus. But that whole discussion would quickly go off topic, and outside the forum rules.

    /Fredrik

  15. Auto-Didact says:
    Fra

    Just for reference, the more conventional terminology for the various kinds of inferences here is deductive vs inductive inference.

    "hard contradictions" are typically what you get in deductive logic, as this deals with propositions that are true or false.
    "soft contradictions" are more of the probabilistic kind, where you have various degrees of beliefs or support in certain propositions.

    The history of probability theory has its roots in inductive reasoning and its philosophy. The idea was that in order to make inductive reasoning rational and objective, one can simply "count evidence" and construct a formal measure of "degree of belief". That is one way of understanding the roots of probability theory. Probability theory is thus one possible mathematical model for rational inference.

    Popper was also grossly disturbed by the fact that science seemed to be an inductive process, and he wanted to "cure this" by suppressing the inductive process of how to generate a new hypothesis from a falsified theory in a rational way, and instead focusing on the deductive part: falsification. I.e., his idea was that the progress of science is effectively made at the falsification of a theory – this is also the deductive step – which Popper liked! But needless to say this analysis is poor and inappropriate.

    Obviously deductive reasoning is cleaner and easier. So where it is possible, it's not hard to see the preference. But unfortunately reality is not well described by pure deductive reasoning. Propositions corresponding to processes in nature are rarely easily characterised as true or false.

    The interesting part (IMO) is the RELATION between deductive and inductive reasoning WHEN you take into account the physical limits of the "hardware" that executes the inferences. This is exactly my personal focus, and how this relates to foundational physics and the notion of physical law, which is deductive, vs the inductive nature of "measurement", which merely "measures nature" by accounting for evidence, in an inductive way.

    But almost no people think along these lines, I've learned, so this is why I am an oddball here.

    /Fredrik

    My viewpoint is that deduction and induction are a false dichotomy, for there is an excluded middle, namely Peirce's abduction. Abduction has historically gotten a bad reputation due to it actually being an example of fallacious reasoning, but even so, it seems to be an effective way of thinking; only a puritan logicist would try to insist that fallacious reasoning was outright forbidden, but I digress.

    Induction may be necessary to generalize and so generate hypotheses, but inference to the best explanation, i.e. abduction or just bluntly guessing (in perhaps a Bayesian manner) is the only way to actually select a hypothesis from a multitude of hypotheses which can then be compared to experiment; if the guessed hypothesis turns out to be false, just rinse and repeat.

    Here is where my viewpoint diverges not just from standard philosophy of science, but also from standard philosophy of mathematics: in my view, not only is Peirce's abduction necessary to choose scientific hypotheses, abduction seems more or less at the basis of human reasoning itself. For example, if we observe a dark yellowish transparent liquid in a glass in a kitchen, one is easily tempted to conclude it is apple juice, while it actually may be any of a million other things, i.e. it is possibly any of a multitude of things. Yet our intuition based on our everyday experience will tell us that it probably is apple juice; if we for some reason doubt that, we would check it by smelling or tasting or some other means of checking, and then update our idea of what it is accordingly. (NB: contrast probability theory and possibility theory.)

    But if you think about this even more carefully, we can step back and ask if the liquid was even a liquid, if the cup was even a glass, and so on. In other words, we seem to be constantly abducing without even being aware that we are doing so, or even while mistakenly believing we are deducing; the act of merely describing things we see in the world around us in words already seems to require the use of abductive reasoning.

    Moreover, much of intuition also seems to be the product of abductive reasoning, which would imply that abduction lies at the heart of mathematical reasoning as well. There is actually a modern school of mathematics, namely symbolism, which seems to be arguing as much, although not nearly as explicitly as I am doing here (here is a review paper on mathematical symbolism). In any case, if this is actually true it would mean that the entire Fregean/Russellian logicist and Hilbertian formalist schools and programmes are hopelessly misguided; coincidentally, since @bhobba mentioned him before, Wittgenstein happened to say precisely that logicism/formalism were deeply wrong views of mathematics; after having carefully thought about this issue for years, I tend to agree with Wittgenstein on this.

  16. Fra says:
    ohwilleke

    I like that terminology. I'll have to file it away for future use.

    Just for reference, the more conventional terminology for the various kinds of inferences here is deductive vs inductive inference.

    "hard contradictions" are typically what you get in deductive logic, as this deals with propositions that are true or false.
    "soft contradictions" are more of the probabilistic kind, where you have various degrees of beliefs or support in certain propositions.

    The history of probability theory has its roots in inductive reasoning and its philosophy. The idea was that in order to make inductive reasoning rational and objective, one can simply "count evidence" and construct a formal measure of "degree of belief". That is one way of understanding the roots of probability theory. Probability theory is thus one possible mathematical model for rational inference.

    Popper was also grossly disturbed by the fact that science seemed to be an inductive process, and he wanted to "cure this" by suppressing the inductive process of how to generate a new hypothesis from a falsified theory in a rational way, and instead focusing on the deductive part: falsification. I.e., his idea was that the progress of science is effectively made at the falsification of a theory – this is also the deductive step – which Popper liked! But needless to say this analysis is poor and inappropriate.

    Obviously deductive reasoning is cleaner and easier. So where it is possible, it's not hard to see the preference. But unfortunately reality is not well described by pure deductive reasoning. Propositions corresponding to processes in nature are rarely easily characterised as true or false.

    The interesting part (IMO) is the RELATION between deductive and inductive reasoning WHEN you take into account the physical limits of the "hardware" that executes the inferences. This is exactly my personal focus, and how this relates to foundational physics and the notion of physical law, which is deductive, vs the inductive nature of "measurement", which merely "measures nature" by accounting for evidence, in an inductive way.

    But almost no people think along these lines, I've learned, so this is why I am an oddball here.

    /Fredrik

  17. Auto-Didact says:
    ohwilleke

    That is really helpful. I've never heard anyone say that quite that clearly before.

    This surprises me somewhat; I was under the impression that the linearity (or unitarity) of QM was extremely well recognized, i.e. the fact that conventional QM has absolutely nothing to do with nonlinear dynamics; this is exactly why I, for example, believe that QM can be at best a provisional theory, because almost all phenomena in nature are inherently non-linear, which is reflected in the fact that historically almost all fundamental theories in physics were special linearized limiting cases which eventually got recast into their more correct non-linear form. This trend continues to this very day; just take a look at condensed matter physics, hydrodynamics, biophysics and so on.

    bhobba

    So I have zero idea where you are getting this from – it's certainly not from textbooks that carefully examine QM. Textbooks at the beginner/intermediate level sometimes have issues – but they are fixed in the better, but unfortunately more advanced, texts

    I got this from having read several papers, systematic reviews, conference transcripts and books on the subject. I don't have much time to spare atm but will link to them later if needed.

    As for the axiomatic treatment a la Ballentine, I believe the others have answered that adequately, but I will reiterate my own viewpoint: that is a mathematical axiomatization made in a similar vein to measure theoretic probability theory, not a physical derivation from first principles.

    bhobba

    That leads to exactly the same situation as QM – a deterministic equation describing a probabilistic quantity. Is that inconsistent too? Of course not. Inconsistency – definition: If there are inconsistencies in two statements, one cannot be true if the other is true. Obviously it can be true that observations are probabilistic and the equation describing those probabilities deterministic. There is no inconsistency at all.

    Your analogy fails because what you are describing there are internal parts, i.e. a hidden-variables theory; for QM these types of theories are ruled out experimentally by various inequality theorems (Bell, Leggett), at least assuming locality, but this seems to be an irrelevant digression.

    In any case, what you go on to say here makes it clear that you are still missing my point, namely where exactly the self-inconsistency of QM arises: not merely in measurement or in the Schrodinger equation, but in full QM. The full theory is not self-consistently captured by a single mathematical viewpoint, say from within analysis or within geometry; it is instead usually analytic and sometimes stochastically discontinuous. I do not believe that Nature is actually schizophrenic in this manner, and I believe that QM displays these pathological symptoms because the physical theory is merely some linearized limiting case.

    Imagine it like this: say you have an analytic function on some time domain, into which you artificially introduce cuts and stochastic vertical translations at certain parts, and so manually introduce discontinuities. Is this new function, with artificially added discontinuities, still appropriately something akin to an analytic function? Or is there perhaps a novel kind of mathematical viewpoint/treatment needed to describe such an object, similar to how distribution theory was needed for the Dirac delta? I don't know, although I do have my own suspicions. What I do know is that insisting that 'QM as is is fully adequate by just using the ensemble interpretation; just accept it as the final theory of Nature' does not help us one bit in going beyond QM.

    Lastly, I think Penrose says quite a few sensible and extremely relevant things on exactly the topic at hand:

  18. microsansfil says:
    ohwilleke

    I'm not quite clear on why it is that if "we can never make deterministic predictions about the results of quantum experiments" that this implies non-reality

    Definition of "reality"? It's a human viewpoint … Popper's three worlds, Heisenberg's 3 regions of knowledge, …

    Jean Cavaillès

    If any physical law is merely a bet on action, the scandal of probability ceases: far from being an inadequate substitute for our power to know, it is the springboard of all scientific activity.

    It is easier to acknowledge that physical laws are gambles of action when no way of interpreting them as descriptions of an independent, detached, "primary" nature is unanimously accepted … Instead: an inventory of relations-with/within-nature


    Best regards
    Patrick

  19. Auto-Didact says:
    PeterDonis

    I don't know that my intuition means much; it's extremely difficult for most people who already know a subject in detail to imagine how people who don't have that knowledge will respond to different ways of conveying it. But FWIW, my intuition is that the "shut up and calculate" interpretation is pedagogically the best place to start, because until you understand the underlying math and predictions of QM, trying to deal with any interpretation on top of that is more likely to confuse you than to help you.

    Feynman's response to this is extremely apt, dare I say prescient:

  20. PeterDonis says:
    ohwilleke

    is there any serious research into which interpretation is most effective pedagogically?

    I don't know of any, but I'm not at all up to speed on this kind of research.

    ohwilleke

    what is your intuition on that point?

    I don't know that my intuition means much; it's extremely difficult for most people who already know a subject in detail to imagine how people who don't have that knowledge will respond to different ways of conveying it. But FWIW, my intuition is that the "shut up and calculate" interpretation is pedagogically the best place to start, because until you understand the underlying math and predictions of QM, trying to deal with any interpretation on top of that is more likely to confuse you than to help you.

  21. ohwilleke says:
    PeterDonis

    A better way of asking the question you might be trying to ask is, do people care about case 1 vs. case 2 because of the different ways the two cases suggest of looking for a more comprehensive theory of which our current QM would be a special case? The answer to that is yes; case 1 interpretations suggest different possibilities to pursue for a more comprehensive theory than case 2 interpretations do. Such a more comprehensive theory would indeed make different predictions from standard QM for some experiments. But the interpretations themselves are not the more comprehensive theories; they make the same predictions as standard QM, because they are standard QM, not some more comprehensive theory.

    That was part of the question I was trying to ask.

    Going back to one of the other issues that doesn't get into a more comprehensive theory, is there any serious research into which interpretation is most effective pedagogically? If not, what is your intuition on that point?

  22. PeterDonis says:
    ohwilleke

    even if they are indistinguishable in practice

    They aren't indistinguishable "in practice"; they're indistinguishable, period.

    A better way of asking the question you might be trying to ask is, do people care about case 1 vs. case 2 because of the different ways the two cases suggest of looking for a more comprehensive theory of which our current QM would be a special case? The answer to that is yes; case 1 interpretations suggest different possibilities to pursue for a more comprehensive theory than case 2 interpretations do. Such a more comprehensive theory would indeed make different predictions from standard QM for some experiments. But the interpretations themselves are not the more comprehensive theories; they make the same predictions as standard QM, because they are standard QM, not some more comprehensive theory.

    ohwilleke

    I certainly get the feeling that people who are debating the interpretations feel like they are arguing over more than semantics and terminology.

    Yes, they do. But unless they draw the key distinction I am drawing between interpretations of an existing theory, standard QM, and more comprehensive theories that include standard QM as a special case, they are highly likely to be talking past each other. Which is indeed one common reason why discussions of QM interpretations go nowhere.

  23. ohwilleke says:
    PeterDonis

    Also, if you find yourself thinking that case 1 and case 2 make different predictions, you are doing something wrong. The whole point of different interpretations of QM is that they all use the same underlying mathematical model to make predictions, so they all make the same predictions. If you have something that makes different predictions, it's not an interpretation of QM, it's a different theory.

    Isn't the reason that people would care about case 1 v. case 2 that, even if the two are indistinguishable in practice for the foreseeable future, or even if it is not theoretically possible to ever distinguish them, one could imagine some circumstances, either with engineering that is literally impossible in practice (along the lines of Maxwell's Demon), or (e.g. with Many Worlds) with an observer located somewhere that no one in our universe long after the Big Bang could ever see, where case 1 and case 2 would imply different things?

    Likewise, even if case 1 v. case 2 are indistinguishable in the world of SM + GR core theory, wouldn't a distinction between one and the other have implications for what kind of "new physics" would be more promising to investigate in terms of BSM hypothesis generation?

    For example, suppose the "state" of case 2 is "real" (whatever that means). Might that not suggest that brane-like formulations of more fundamental theories might make more sense to investigate than they would if case 1 is a more apt interpretation?

    I certainly get the feeling that people who are debating the interpretations feel like they are arguing over more than semantics and terminology. After all, if it really were only semantics, wouldn't the argument between the interpretations be more like the argument between drill-and-kill v. New Math ways of teaching math: i.e., about which is easier for physics students to learn and grok quickly in a way that gives them the most accurate intuition when confronted with a novel situation (something that, incidentally, is amenable to empirical determination), rather than about which is right in a philosophical way?

  24. lightarrow says:
    vanhees71

    Again, a theory book or scientific paper does not have the purpose of telling how something is measured. That's done in experimental-physics textbooks and scientific papers! Of course, if you only read theoretical-physics and math literature you can come to the deformed view about physics that everything should be mathematically defined, but physics is not mathematics. It just uses mathematics as a language to express its findings using real-world devices (including everything from our senses to the most complicated inventions of engineering, like the detectors at the LHC).

    If the experimental-physicist tells the theoretical-physicist /how/ to measure something, the latter tells the former /what/ he is actually measuring. :-)
    Example: before 1905, experimental-physicists performing photoelectric-effect experiments were measuring "light"; after, with A. Einstein, they were measuring "photons".


    lightarrow

  25. PeterDonis says:
    ohwilleke

    Suppose that we have a top quark and an ultra sensitive gravitational force sensor.

    We don't have a good theory of quantum gravity, so this is not a good example to use, since we have no actual theoretical model on which to base predictions.

    Also, if you find yourself thinking that case 1 and case 2 make different predictions, you are doing something wrong. The whole point of different interpretations of QM is that they all use the same underlying mathematical model to make predictions, so they all make the same predictions. If you have something that makes different predictions, it's not an interpretation of QM, it's a different theory.

  26. ohwilleke says:
    PeterDonis

    I thought cases 1 and 2 in the article already described that, but I'll give it another shot.

    Case 1 says the state is not real; it's just a description of our knowledge of the system, in the same sense that, for example, saying that a coin has a 50-50 chance of coming up heads or tails describes our knowledge of the system–the coin itself isn't a 50-50 mixture of anything, nor is what happens when we flip it, it's just that we don't know–we can't predict–how it is going to land, we can only describe probabilities.

    Case 2 says the state is real, in the same sense that, for example, a 3-vector describing the position of an object in Newtonian physics is real: it describes the actual position of the actual object.

    So, to give a concrete example: suppose that we have a top quark and an ultra-sensitive gravitational force sensor.

    In Case 1, the gravitational force sensor is always going to report a gravitational force consistent with a point particle at all times.

    In Case 2, the gravitational force sensor is going to report a gravitational force consistent with having the mass-energy of the top quark smeared predominantly within a volume of space that is not point-like, because the top quark is literally present at all of the places it could be when measured to a greater or lesser degree, simultaneously.

    Or have I misunderstood something?

  27. ohwilleke says:
    stevendaryl

    When you're talking about huge numbers, there is the possibility of what I would call a "soft contradiction", which is something that may be false but whose consequences you are not likely to ever face. An example from classical thermodynamics might be "Entropy always increases". You're never going to see a macroscopic violation of that claim, but our understanding of statistical mechanics tells us that it can't be literally true; there is a nonzero probability of a macroscopic system making a transition to a lower-entropy state.

    I like that terminology. I'll have to file it away for future use.

  28. ohwilleke says:
    PeterDonis

    I didn't want to overcomplicate the article, but this is a fair point: there should really be an additional qualifier that the dynamics of QM, the rules for how the quantum state changes with time, are linear, so there is no chaos–i.e., there is no sensitive dependence on initial conditions. You need nonlinear dynamics for that. So whatever is keeping us from making deterministic predictions about the results of quantum experiments, it isn't chaos due to nonlinear dynamics of the quantum state.

    That is really helpful. I've never heard anyone say that quite that clearly before.

    It's basically the same, at least to the extent that the term "reality" has a reasonably well-defined meaning at all in those discussions. The discussions about entanglement that you refer to are more relevant to my case #2: they are discussions of problems you get into if you try to interpret the quantum state in the model as modeling the physically real state of a quantum system. For case #1, entanglement and all of the phenomena connected to it are not an issue, because you don't have to believe that anything "real" happens to the system when your knowledge about it changes.

    Thanks again. That makes sense.

  29. PeterDonis says:
    ohwilleke

    I'm not quite clear on why it is that if "we can never make deterministic predictions about the results of quantum experiments" that this implies non-reality, as opposed, for example, to a system that is chaotic (in the sense of having dynamics that are highly sensitive to slight changes in initial conditions)

    I didn't want to overcomplicate the article, but this is a fair point: there should really be an additional qualifier that the dynamics of QM, the rules for how the quantum state changes with time, are linear, so there is no chaos–i.e., there is no sensitive dependence on initial conditions. You need nonlinear dynamics for that. So whatever is keeping us from making deterministic predictions about the results of quantum experiments, it isn't chaos due to nonlinear dynamics of the quantum state.
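
    A quick numerical check of that linearity (a sketch added for illustration, not part of the original comment): unitary Schrödinger evolution maps a superposition of initial states to the same superposition of the evolved states, so there is no room for the divergence of nearby trajectories that defines chaos.

    ```python
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)

    # Random Hamiltonian (Hermitian matrix) and its unitary propagator.
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    H = (A + A.conj().T) / 2
    U = expm(-1j * H)  # evolution for unit time, with hbar = 1

    psi1 = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi2 = rng.normal(size=4) + 1j * rng.normal(size=4)
    a, b = 0.6, 0.8j

    # Linearity: evolving a superposition equals superposing the evolved states.
    lhs = U @ (a * psi1 + b * psi2)
    rhs = a * (U @ psi1) + b * (U @ psi2)
    print(np.allclose(lhs, rhs))  # True
    ```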

    ohwilleke

    I am also unclear with regard to whether the "reality" that you are discussing is the same as the "reality" people are talking about in QM when they state that given quantum entanglement, you can have locality, reality, or causality, but you can't simultaneously have all three, or whether you are talking about something different.

    It's basically the same, at least to the extent that the term "reality" has a reasonably well-defined meaning at all in those discussions. The discussions about entanglement that you refer to are more relevant to my case #2: they are discussions of problems you get into if you try to interpret the quantum state in the model as modeling the physically real state of a quantum system. For case #1, entanglement and all of the phenomena connected to it are not an issue, because you don't have to believe that anything "real" happens to the system when your knowledge about it changes.

  30. ohwilleke says:

    For #1, the obviously true part is that we can never directly observe the state, and we can never make deterministic predictions about the results of quantum experiments. That makes it seem obvious that the state can’t be the physically real state of the system; if it were, we ought to be able to pin it down and not have to settle for merely probabilistic descriptions. But if we take that idea to its logical conclusion, it implies that QM must be an incomplete theory; there ought to be some more complete description of the system that fills in the gaps and allows us to do better than merely probabilistic predictions. And yet nobody has ever found such a more complete description, and all indications from experiments (at least so far) are that no such description exists; the probabilistic predictions that QM gives us really are the best we can do.

    I'm not quite clear on why it is that if "we can never make deterministic predictions about the results of quantum experiments" that this implies non-reality, as opposed, for example, to a system that is chaotic (in the sense of having dynamics that are highly sensitive to slight changes in initial conditions) with sensitivity to differences in initial conditions that aren't merely hard to measure, but are inherently and theoretically impossible to measure because measurement is theoretically incapable of measuring both location and momentum at the scale relevant to the future dynamics of a particle.

    Now, I'm not saying that there aren't other aspects of QM that make a chaotic system with real particles interpretation problematic – I'm thinking of experiments that appear to localize different properties of the same particle in different physical locations, or little tricks like off-shell virtual particles and quantum tunneling. But chaotic-but-deterministic systems can look so much like truly random systems phenomenologically (which is why people invented tools like dice, lottery ball spinners, slot machines, card decks, and roulette wheels) that you can have deterministic and stochastic conceptions of QM that are indistinguishable experimentally, at least for many purposes, but which have profoundly different theoretical implications. But, then again, maybe I'm wrong and there are easy ways to distinguish between the two scenarios.

    Also, I do hear you when you say that the question is whether the "state" is real, and not just whether particles are real, and the "state" is a much more ephemeral, ghost-like thing than a particle.

    I am also unclear with regard to whether the "reality" that you are discussing is the same as the "reality" people are talking about in QM when they state that given quantum entanglement, you can have locality, reality, or causality, but you can't simultaneously have all three, or whether you are talking about something different. To be clear, I'm not asking the more ambitious question of what "reality" means, only the less ambitious question of whether one kind of reality that is hard to define non-mathematically is the same as another kind of reality that is also hard to define non-mathematically. It could be that "reality" is instead two different concepts that happen to share the same name, both of which are hard to define non-mathematically, in which case the term is a "false friend", as they say in foreign language classes.

  31. Fra says:

    I find the different perspectives interacting here truly entertaining.

    vanhees71

    This is all solid scientific work and not wild "philosophical" speculation.

    We can probably agree that physics is not logic, nor mathematics, nor is it philosophy. But all the ingredients are needed; this is why I think physics is so much more fun than pure math.

    I think Neumaier said this already elsewhere, but there is also a difference between progressing science or creating sensible new hypotheses, and applying mature science to technology. It's not a coincidence that the founders of quantum theory seemed to be very philosophical, and that the people who some years later formalized and cleaned up the new ideas were less so. I think it is deeply unfair to somehow suggest that founders like Bohr or Heisenberg were somehow inferior physicists to those who later worked out the mathematical formalism in an almost axiomatic manner. This is not so at all! I think all the ingredients are important. (Even if no one said the word inferior, it's easy to get almost that impression: that the hard-core guys do math, and the others do philosophy.)

    On the other hand, RARE are those people who can span the whole range! It takes quite a "dynamic" mindset to not only understand complex mathematics, but also the importance of sound reasoning and how to create feedback between abstraction and fuzzy reality. If you narrow in too much anywhere along this scale, you are unavoidably going to miss the big picture.

    As for "wild", I think for some pure theorists and philosophers even a soldering iron may be truly wild stuff! Who knows what can go wrong? You burn tables, books, and fingers. Leave it to the experts ;-)

    /Fredrik

  32. bhobba says:
    AlexCaledin

    Feynman would say measurement is done when "nature knows" the outcome – is this a wild speculation?

    The quote you gave does not support what you said Feynman says. He was saying that regardless of whether you look or not, if nature decides it's up then it's up.

    Thanks
    Bill

  33. AlexCaledin says:

    – I mean just what an ordinary student must think reading Feynman's Lectures:

    You do add the amplitudes for the different indistinguishable alternatives inside the experiment, before the complete process is finished. At the end of the process you may say that you “don’t want to look at the photon.” That’s your business, but you still do not add the amplitudes. Nature does not know what you are looking at, and she behaves the way she is going to behave whether you bother to take down the data or not.
    . . .
    …You may argue, “I don’t care which atom is up.” Perhaps you don’t, but nature knows…"

    http://www.feynmanlectures.caltech.edu/III_03.html
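
    The point of the passage can be put in one line of arithmetic: for indistinguishable alternatives you add amplitudes and then square; for distinguishable ones you add the squared amplitudes. A toy sketch with invented, unnormalized amplitudes:

        # "Nature knows": interference depends on whether the alternatives
        # are distinguishable in principle, not on whether anyone records
        # the data. (Amplitude values below are made up for illustration.)
        import cmath

        a1 = 0.6 * cmath.exp(1j * 0.0)   # amplitude via path 1
        a2 = 0.6 * cmath.exp(1j * 2.0)   # amplitude via path 2

        p_indistinguishable = abs(a1 + a2) ** 2            # add amplitudes, then square
        p_distinguishable = abs(a1) ** 2 + abs(a2) ** 2    # add probabilities

        print(p_indistinguishable, p_distinguishable)  # differ by the interference term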

  34. vanhees71 says:
    A. Neumaier

    I only asserted that the bridge between theoretical physics (mathematically defined) and experimental physics (operationally defined) is philosophy, and hence open to interpretation. Only what is theoretically precise can be subject to precise arguments.

    Well, you can call that philosophy, but it's constrained by the necessity for empirical testability: you have an idea of how to theoretically describe an experiment, including the preparation of the observed system and the measurement of the quantities of interest, and then see whether the experiment agrees or disagrees with that prediction. Of course, it's not easy to analyze the errors in both experiment and theory. E.g., the claim that one might have discovered faster-than-light neutrinos at CERN could not immediately lead to giving up relativity; one had to exclude all sources of error in the experimental setup first, and indeed, after an independent control measurement in accordance with the theory and a long search, one found two defects in the time-of-flight measurement setup which finally explained the wrong findings. This is all solid scientific work and not wild "philosophical" speculation.

  35. A. Neumaier says:
    bhobba

    Yes – but it's usually pretty obvious philosophy. If you get too deep about it you are led into very murky waters and really don't get anywhere.

    It is pretty obvious in classical mechanics, but not in quantum mechanics. This is why the quantum measurement problem constitutes very murky waters, and nobody really gets anywhere before it is made pretty obvious.

  36. bhobba says:
    A. Neumaier

    I only asserted that the bridge between theoretical physics (mathematically defined) and experimental physics (operationally defined) is philosophy, and hence open to interpretation. Only what is theoretically precise can be subject to precise arguments.

    Yes – but it's usually pretty obvious philosophy. If you get too deep about it you are led into very murky waters and really don't get anywhere.

    With regard to Ballentine, he does indeed analyse such things and uses the Ensemble interpretation. I won't argue whether it's the right one, but it's pretty simple and does provide a minimalist sort of link between the formalism and application.

    It's like when you first learn the Kolmogorov axioms: how to apply them, i.e. what is meant by events etc., is picked up with a bit of experience. They were given meaning with a not-too-rigorous reference to the strong law of large numbers – which of course can't be fully detailed at the beginner level. It's fixed up later, but the alternative Bayesian view isn't usually presented until Bayesian statistics. I don't think I ever formally studied the Cox axioms – I learnt about them years later in my own reading.
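
    A throwaway sketch of that strong-law intuition (the event and its probability are invented): relative frequencies drift toward the probability as the sample grows, which is the handle beginners are given on what the axioms "mean".

        # Simulated relative frequencies converging toward the probability.
        import random

        p = 0.3
        for n in (10, 1000, 100000):
            hits = sum(random.random() < p for _ in range(n))
            print(n, hits / n)  # drifts toward 0.3 as n grows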

    Difficult interpretive issues are mostly deferred. Strangely, though, around here with beginners they seem to dominate. Interesting, isn't it?

    Thanks
    Bill

  37. A. Neumaier says:
    vanhees71

    the deformed view about physics that everything should be mathematically defined, but physics is no mathematics.

    I only asserted that the bridge between theoretical physics (mathematically defined) and experimental physics (operationally defined) is philosophy, and hence open to interpretation. Only what is theoretically precise can be subject to precise arguments.

  38. vanhees71 says:

    Again, a theory book or scientific paper does not have the purpose of telling how something is measured. That's done in experimental-physics textbooks and scientific papers! Of course, if you only read theoretical-physics and math literature you can come to the deformed view about physics that everything should be mathematically defined, but physics is not mathematics. It just uses mathematics as a language to express its findings, using real-world devices (from our senses to the most complicated inventions of engineering, like the detectors at the LHC).

  39. A. Neumaier says:
    bhobba

    Ballentine has two axioms

    1. Outcomes of observations are the eigenvalues of some operator.
    2. The Born Rule.

    But he doesn't say what it means that one subsystem (an observer) of the physical system called the Earth has measured a property of another subsystem (a particle, say). This is the gap that makes all the interpretations vague when it comes to analyzing the measurement process as a physical process, rather than as a metaphysical appearance of outcomes of observations from nowhere. It is filled by vague words about a collapse happening (when and where?), worlds splitting (why and how?), etc.

  40. vanhees71 says:
    Ian J Miller

    In my opinion, part of the reason there is such scope for interpretations is that nobody actually KNOWS what Ψ means. Either there is an actual wave or there is not, and here we have the first room for debate. If there is, how come nobody can find it, and if there is not, how come a stream of particles reproduces a diffraction pattern in the two slit experiment? No matter which option you try, somewhere there is a dead rat to swallow. As it happens, I have my own interpretation, which differs from others in two ways after you assume there is an actual wave. The first: the phase exp(2πiS/h) becomes real when S = h (or h/2) – from Euler. This is why electrons pair in an energy well, despite repelling each other. Since it becomes real at the antinode, I add the premise that the expectation values of variables can be obtained there. The second is that if there is a wave, the wave front has to arrive at the two slits at about the same time as the particle. If so, the wave must transmit energy (which waves generally do, but the dead rat here is: where is this extra energy? However, it is better than Bohm's quantum potential because it has a specific value.) The Uncertainty Principle and Exclusion Principle follow readily, as does why the electron does not radiate its way to the nucleus. The value in this, from my point of view, is that it makes the calculation of things like the chemical bond so much easier – the hydrogen molecule becomes almost mental arithmetic, although things get more complicated as the number of electrons increases. Nevertheless, the equations for Sb2 gave an energy within about 2 kJ/mol, which is not bad.

    In real-world physics there's not much scope for interpretations. QT is just used as what it is, namely a mathematical description of what's observed in nature, and it is very well known what ##\psi## (or more generally quantum states) means: it gives probabilities for the outcome of measurements on a system which is prepared in the corresponding state. The diffraction pattern is quite easily predicted by solving the Schrödinger equation. There's no "dead rat to swallow". I don't comment on your enigmatic claims about the phase etc.
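
    For concreteness, a sketch of how little machinery the two-slit prediction needs – here the familiar far-field (Fraunhofer) interference term as a stand-in for a full Schrödinger solution, with made-up values for the wavelength, slit separation, and screen distance:

        # Two-slit intensity on a distant screen: I(x) ~ cos^2(pi*d*x/(lam*L)).
        # All numbers below are illustrative assumptions, not from the thread.
        import numpy as np

        lam, d, L = 50e-12, 1e-6, 1.0      # wavelength (m), slit separation (m), screen distance (m)
        x = np.linspace(-5e-3, 5e-3, 9)    # screen positions (m)
        phase = np.pi * d * x / (lam * L)  # path-difference phase between the two slits
        intensity = np.cos(phase) ** 2     # normalized interference term

        for xi, I in zip(x, intensity):
            print(f"{xi:+.4e} m  I = {I:.3f}")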

  41. bhobba says:
    Auto-Didact

    What is mathematically inconsistent about QM is that half the formalism (the measurement process) has completely different mathematical properties than the other half (the Schrödinger equation). It's even worse than that, since the measurement process has not even actually been fully formalized. There is no other physical theory which suffers from these problems.

    I have a couple of minutes spare before going to the dentist, so I can answer some of the other issues raised.

    Remember what I said before: 'Imagine you have a coin that has a predictable mechanism inside it so its bias deterministically varies in time. You can write a deterministic equation giving the probabilities of getting heads or tails if flipped.'

    That leads to exactly the same situation as QM – a deterministic equation describing a probabilistic quantity. Is that inconsistent too? Of course not. Inconsistency – definition: if there are inconsistencies in two statements, one cannot be true if the other is true. Obviously it can be true that observations are probabilistic while the equation describing those probabilities is deterministic. There is no inconsistency at all.
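
    The coin analogy is easy to make concrete; a sketch with an invented bias mechanism (the dynamics below are my choice, only the structure matters):

        # The bias p(t) evolves by a deterministic equation, while each
        # individual flip remains irreducibly probabilistic.
        import math, random

        def bias(t):
            # deterministic "mechanism inside the coin" (invented form)
            return 0.5 + 0.4 * math.sin(0.3 * t)

        for t in range(5):
            p = bias(t)                                  # exactly predictable
            flip = "H" if random.random() < p else "T"   # not predictable
            print(t, round(p, 3), flip)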

    And the measurement process has been fully formalized – it's just that it's after decoherence has occurred. The issue is that not everyone agrees, for reasons I have mentioned before. I will not argue that they are wrong and I am correct – but saying that it has not been resolved misrepresents the situation. It just has not been resolved to everyone's satisfaction, e.g. some want a 'formalization' where interference terms don't just decay to a very small value – but are actually zero. Most would say that for all practical purposes (FAPP) it has been resolved – in fact, probably many who think definitional issues remain will likely agree it has been resolved FAPP – they just want more than FAPP. I am not going to argue whether they are right or wrong – I think everyone knows my opinion – but opinions are like bums: everyone has one, and that does not make it correct. But this is in large part a philosophical morass – is FAPP good enough? It's part of what I mean when I say that if you push it too hard you are led into a morass of issues.

    And, as posted previously, it's the same with many areas of applied math: push it too hard and you are led down a path like the one Wittgenstein was led down. He was an excellent applied mathematician studying aeronautical science. He went to do his PhD but came under the influence of Bertrand Russell and wanted to know 'why' about issues with even basic arithmetic. He then became a philosopher. His view was very strange from a scientific viewpoint – he thought it was all just convention. Was he correct? Blowed if I know; all I can say is that to me it's a very strange view.

    Thanks
    Bill

  42. bhobba says:
    Stephen Tashi

    Why is the concept of the collapse of a wave function any more (or any less) of a problem in logical consistency than other applications of probability theory where there are probabilities of various outcomes followed by only one of the outcomes happening?

    Of course it isn't – leaving aside that collapse isn't really part of QM, only of some interpretations. In fact QM is simply a generalization of ordinary probability theory:
    https://arxiv.org/abs/1402.6562

    Stephen Tashi

    Mathematical probability theory (based on measure theory) says nothing about events actually happening and has no axiom stating that it is possible to take random samples. Taking random samples (i.e. "realizing" the outcome of a random variable) is a topic for applications of probability theory, so it involves an interpretation of probability theory not explicitly given in the mathematical axioms.

    Exactly – and that's where the morass I previously spoke about comes into it. If you push it too deeply you run into unresolved philosophical issues of a formidable nature – even in ordinary probability theory, they rear their ugly heads when you apply it. But this is usually bypassed by simple reasonableness criteria, such as "for all practical purposes". An example is the issue I mentioned: since QM is a theory about observations that appear here in an assumed macro world (we will exclude strange interpretations like "consciousness creates the external world and causes collapse" and stick with what most consider reasonable and not 'mystical'), how can it explain that world? We have made a lot of progress on that, but if you really push it, issues remain, e.g. decoherence models show that interference terms very quickly decay towards zero – but usually never quite reach it – though they become so small as to be irrelevant. Some do not accept that (i.e. it must be zero to truly explain it) – which is OK – but then you are stuck, as I put it, in a very deep morass of complex issues. Physics has long made reasonableness assumptions – if you don't do that then you are unlikely to get anywhere – although occasionally you can find something very deep and powerful, such as when, in figuring out what that damnable Dirac Delta Function really is, you create distribution theory, which has wide applicability. But often you get nowhere.
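
    A toy sketch of that decay (the density-matrix element and timescale below are invented): the off-diagonal "interference" term of a 2x2 density matrix falls off exponentially, becoming negligible FAPP while never reaching exactly zero.

        # Off-diagonal coherence rho01 decaying as exp(-t/tau).
        import math

        rho01 = 0.5   # initial coherence (toy value)
        tau = 0.05    # decoherence time, arbitrary units
        for t in (0.0, 0.1, 0.5, 2.0):
            print(t, rho01 * math.exp(-t / tau))  # tiny FAPP, mathematically nonzero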

    Stephen Tashi

    Is that a good analogy, from your point of view?

    Of course.

    Thanks
    Bill

  43. bhobba says:
    Auto-Didact

    One of the goals and duties of mathematical and theoretical physicists is to be able to demonstrate the mathematical consistency of a physical theory by being able to derive the theory entirely from first principles; QM is just another physical theory and thus not an exception to this. As the theory stands today, since its conception, this full derivation is not yet possible; no other accepted physical theory suffers from this. (NB: QFT has foundational issues as well, but that's another discussion.)

    Ballentine has two axioms

    1. Outcomes of observations are the eigenvalues of some operator.
    2. The Born Rule.

    Everything is derived from that, except some things that are usually accepted in physics, e.g. that you can take the derivative of a wave-function, and the POR.
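
    For what the two axioms buy you in practice, a minimal sketch (the operator and state are toy choices, not from Ballentine): the possible outcomes are the eigenvalues of a Hermitian operator, and the Born rule gives each outcome's probability as the squared overlap of the state with the corresponding eigenvector.

        # Axiom 1: outcomes = eigenvalues of a Hermitian operator.
        # Axiom 2 (Born rule): p(a_i) = |<a_i|psi>|^2.
        import numpy as np

        A = np.array([[1.0, 1.0], [1.0, -1.0]])    # Hermitian observable (toy)
        psi = np.array([1.0, 0.0])                 # prepared state |psi>

        vals, vecs = np.linalg.eigh(A)             # possible outcomes + eigenvectors
        probs = np.abs(vecs.conj().T @ psi) ** 2   # Born-rule probabilities (sum to 1)

        for a, p in zip(vals, probs):
            print(f"outcome {a:+.3f} with probability {p:.3f}")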

    So I have zero idea where you are getting this from – it's certainly not from textbooks that carefully examine QM. Textbooks at the beginner/intermediate level sometimes have issues – but they are fixed in the better, but unfortunately more advanced, texts.

    There are other misconceptions – but that's the one that stood out to me.

    Thanks
    Bill

  44. Fra says:
    AlexCaledin

    But how can it be coherent? What happens in every local measurement is a choice between variants of the whole universe, while QM is telling about a limited experimental system.

    Posting past midnight isn't a good thing, but here is a simplified view of what I think of as an "inference interpretation", which is a highly twisted version of Peter Donis's version (1) of the interpretation.

    Coherence requires unifying unitary evolution with information updates, in the sense that the unitary description by O3 of [O1 observing O2] must have a Hamiltonian describing the internal interactions of the O1-O2 system that, as per the inside view, are information updates. The problem is that if O1 is not a classical observer, the current theory does not apply. This is conceptually incoherent.

    1) "Observer equivalence"
    A coherent theory of physical inference must somehow apply to any observers inference on its environment. Not only to classical observers, because the difference is simply a matter of complexity scale(mass?). Current experimental evidence provides NO insight into the inferences of non-classical observers(*)

    2) "Inferrability"
    An inference itself contains premises and some rule of the inference. This rule can be a deductive rule such as hamiltonian evolution, or it can be a random walk. The other premises are typicaly initial conditions or priorly prepared states. Now from the point of view of requiring that only inferrable arguments enter the inference, we end up with the conclusion that we must treat information about initial conditions, no different than information about the rules. Ie. a coherent theory should unify state and law.

    => The inference system itself is inferred, and thus evolves. We naturally reach a paradigm of evolution of physical law.

    (*) This ultimately relates to unifying the interactions. To unify forces, and to understand how the hamiltonian or lagrangian of the unified interactions look like, is the same problem as to understand how all physical interactions in the standard model can be understood as the small non-classical observers making inferences and measurements on each other.

    Once this is "clear", the task is to "reinvent" the mathematical models we need:
    My mathematical grip on this, is that i have started a reconstruction of an algorithmic style of inference, implemented as random processes guided by evolving constraints. Physical interactions will be modelled a bit like "interacting computers", where the computer hardware are associated with the structure of matter. Even the computers evolve, and if this is consistent, one should get predictions from stable coexisting structures, that match the standard model. All in a conceptually coherent way.

    Conventional models based on continuum mathematics should also correspond to steady states. In particular, certain deductive logical systems are understood as emergent "stable rules" in an a priori crazy game. In this sense we will see ALL interactions as emergent.

    The big problem here is that the complexity is so large that no computer simulation can simulate the real thing, because the computational time actually relates to time evolution and there is simply no way to "speed up time". But there is one exploit that has given me hope, and that is to look at the simplest possible observers: it would probably be doable to simulate parts of the first fractions of the big bang, for one reason – the rules from the INSIDE are expected to be almost trivially simple at the unification scale. They just LOOK complicated from the low-energy perspective. The task is huge indeed, but it's not just a philosophical mess at all! It's rather a huge task of trying to reconstruct from this picture all spacetime and matter properties.

    /Fredrik

  45. David Halliday says:

    I understand that you were primarily contrasting two (2) interpretations (alluding to something like a third, or so), but there are a whole lot more than just a few interpretations that can be "cubbyholed" into the dichotomy you presented. (Basically, there are additional choices and other "dimensions" that can differ as well.)

    However, your conclusion that "the best we can do at this point is to accept that reasonable people can disagree on QM interpretations and leave it at that" still holds firm!
