Against "interpretation" - Comments

In summary, Greg Bernhardt submitted a new blog post discussing the limitations of "interpretation" as a way to frame QM disagreements. He argued that the word "interpretation" signals that a disagreement cannot be resolved, and that it does not address the next problem of explaining why interpretation and model should be the same. He also suggested the merger of theory and model as a way to resolve the discrepancy.
  • #141
A. Neumaier said:
How would you differentiate between objective and subjective? How is measurement, or an electron, or a particle position, or an ideal gas, or - defined objectively?

That is an issue Gell-Mann and others are grappling with in trying to complete the decoherent histories program. Progress has been made, but problems remain.

Thanks
Bill
 
  • Like
Likes *now*
  • #142
bhobba said:
That is an issue Gell-Mann and others are grappling with in trying to complete the decoherent histories program. Progress has been made, but problems remain.

Thanks
Bill
Is there a good guide to open problems anywhere? I just finished Griffith's book "Consistent Histories" today and I'm eager to know more.
 
  • Like
Likes *now*
  • #143
DarMM said:
Is there a good guide to open problems anywhere? I just finished Griffith's book "Consistent Histories" today and I'm eager to know more.

I think you have to look at some of the papers eg:
https://arxiv.org/abs/1312.7454

Thanks
Bill
 
  • Like
Likes *now* and DarMM
  • #144
Dale said:
We already discussed that, didn’t we? Anything necessary to predict the outcome of an experiment is objective.
This begs the question. What is it that is necessary to predict the outcome? How do you differentiate between the necessary and the unnecessary?

An experiment tells us, for example, that when you point an unknown, very weak source of light at a photodetector, it will every now and then produce an outcome - a small photocurrent, measured in the traditional way. Nothing predicts when this will happen. Predicted is only the average number of events in dependence on the assumed properties of the incident light, in the limit of an infinitely long time - assuming the source is stationary. Photons appear nowhere, though the experimenters talk about them in a vague, subjective way that guides them to a sensible correspondence between their assumptions and the theory. All this is murky water from the point of view of the subjective/objective distinction.
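As an aside, the statistical character of such detection events can be sketched in a few lines of Python (the rate and observation time below are illustrative values, not from any actual experiment). Modelling the stationary source as a Poisson process, each individual detection time is unpredictable, while the long-run average rate converges to the assumed intensity:

```python
import random

random.seed(0)
rate = 2.0        # assumed mean detection rate (events per second); illustrative
T = 100_000.0     # total observation time

# Draw exponential waiting times between detection events, as for a
# stationary (Poisson) source.  Each individual event time is random;
# only the long-run average rate is determined by the theory.
t, n = 0.0, 0
while True:
    t += random.expovariate(rate)
    if t > T:
        break
    n += 1

print(n / T)  # empirical rate, close to `rate` for large T
```

Nothing in the simulation predicts the time of any single event, yet the average over a long run is sharply constrained, mirroring the point above.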
 
  • Like
Likes zonde, dextercioby and Auto-Didact
  • #145
Dale said:
No, you cannot predict the outcome of an experiment with only the mathematical framework.
Only in as far as the outcome involves subjective elements.

In his famous textbook [H.B. Callen. Thermodynamics and an introduction to thermostatistics, 2nd. ed., Wiley, New York, 1985.] (no quantum theory!), Callen writes on p.15:
Callen said:
Operationally, a system is in an equilibrium state if its properties are consistently described by thermodynamic theory.
At first sight, this sounds like a circular definition (and indeed Callen classifies it as such). But a closer look shows there is no circularity since the formal meaning of ''consistently described by thermodynamic theory'' is already known. The operational definition simply moves this formal meaning from the domain of theory to the domain of reality by defining when a real system deserves the designation ''is in an equilibrium state''. In particular, this definition allows one to determine experimentally whether or not a system is in equilibrium.

Nothing else is needed to relate a mathematical framework objectively to experiment.

What is ''consistent'' in the eye of a theorist or experimenter is already subjective.
 
  • Like
Likes dextercioby and Auto-Didact
  • #146
A. Neumaier said:
Nothing else is needed to relate a mathematical framework objectively to experiment.
Ok, I have a mathematical framework: ##ab=c##. Using nothing more than that framework, what is the objective relationship to experiment?
 
  • #147
Dale said:
Ok, I have a mathematical framework: ##ab=c##. Using nothing more than that framework, what is the objective relationship to experiment?
This is not the framework of a physical theory. It is just a mathematical formula.

According to Callen, if it were the mathematical framework of a physical theory, it would predict that whenever you have something behaving like a and b, the product behaves like c. That's fully objective, and, as you can see, needs a subjective interpretation of what a, b, c are in terms of reality (i.e., experiment).

A mathematical framework of a successful physical theory has concepts named (the objective interpretation part) after analogous concepts from experimental physics, in such a way that a subjective interpretation of the resulting system allows the theory to be successfully applied.

M. Jammer, Philosophy of Quantum Mechanics, Wiley, New York 1974,

gives on p.5 five axioms for quantum mechanics (essentially as today), and comments:

p.5: ''The primitive (undefined) notions are system, observable (or "physical quantity" in the terminology of von Neumann), and state.''

p.7: ''In addition to the notions of system, observable, and state, the notions of probability and measurement have been used without interpretations.''

That's the crux of the matter. Since the properties of probability and measurement are not sufficiently specified in the framework, they remain conceptually ill-defined. Therefore one cannot tell objectively whether something on the level of experiments is consistent with the framework. One needs subjective interpretation.

And indeed, Jammer says directly after the above statement:
Jammer said:
Although von Neumann used the concept of probability, in this context, in the sense of the frequency interpretation, other interpretations of quantum mechanical probability have been proposed from time to time. In fact, all major schools in the philosophy of probability, the subjectivists, the a priori objectivists, the empiricists or frequency theorists, the proponents of the inductive logic interpretation and those of the propensity interpretation, laid their claim on this notion. The different interpretations of probability in quantum mechanics may even be taken as a kind of criterion for the classification of the various interpretations of quantum mechanics. Since the adoption of such a systematic criterion would make it most difficult to present the development of the interpretations in their historical setting it will not be used as a guideline for our text.

Similar considerations apply a fortiori to the notion of measurement in quantum mechanics. This notion, however it is interpreted, must somehow combine the primitive concepts of system, observable, and state and also, through Axiom III , the concept of probability. Thus measurement, the scientist's ultimate appeal to nature, becomes in quantum mechanics the most problematic and controversial notion because of its key position.
 
  • Like
Likes dextercioby and Auto-Didact
  • #148
A. Neumaier said:
This is not the framework of a physical theory.
It is a perfectly valid mathematical framework, one of the most commonly used ones in science.

It is your claim (as I understand it) that objective science can be done with only a mathematical framework. I think that is obviously false, as shown here.
 
  • #149
Dale said:
It is a perfectly valid mathematical framework, one of the most commonly used ones in science.

It is your claim (as I understand it) that objective science can be done with only a mathematical framework. I think that is obviously false, as shown here.

You didn't show anything. I gave the experimental meaning of your framework ##ab=c##, in precisely the same way as any physical framework gets its physical meaning:

A. Neumaier said:
whenever you have something behaving like a and b, the product behaves like c.
 
  • Like
Likes dextercioby
  • #150
A. Neumaier said:
I gave the experimental meaning of your framework ##ab=c##, in precisely the same way as any physical framework gets its physical meaning:
Nonsense, you cannot do an experiment with only that “experimental meaning”. It is insufficient for applying the scientific method.

Suppose I do an experiment and measure 6 values: 1, 2, 3, 4, 5, 6. Using only the above framework and your supposed “experimental meaning” do the measurements verify or falsify the theory?
 
  • #151
Dale said:
Nonsense
Please read my whole posts and don't make ridiculous arguments with meaningless theories!

As already said, the mathematical framework of a successful physical theory must have enough of its important concepts labelled not a, b, c but with sensible concepts from the world of experimental physics, so that the subjective part of the interpretation is constrained enough to be useful.

For example, take the mathematical framework defined by ''Lines are sets of points. Any two lines intersect in a unique point. There is a unique line through any two points.'' (This defines the mathematical concept of a projective plane.) This is sufficiently constrained that every schoolboy knows without any further explanation how to apply it to experiment, and can check its empirical validity. There are some subjective interpretation questions regarding parallel lines, whose existence would be thought to falsify the theory; but the theory is salvaged by allowing, in the subjective interpretation, points at infinity. Another, more sophisticated subjective interpretation, treating lines as great circles on the sphere (indistinguishable by poor man's experimental capabilities), would be falsified, since there are multiple such lines through antipodal points.

This shows that there is room for nontrivial subjective interpretation, and that discussing its testability is significant, as it may mean progress by adding more details to the theory in a way that eliminates the undesired interpretations.
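As a side note, these axioms can be checked mechanically on a finite model. The Python sketch below uses the Fano plane, the smallest projective plane (the point labels 0-6 are of course arbitrary), and verifies both axioms by brute force:

```python
from itertools import combinations

# The Fano plane: 7 points (0..6) and 7 lines, each a set of 3 points.
lines = [frozenset(s) for s in
         [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
          {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]]
points = range(7)

# Axiom: any two distinct lines intersect in exactly one point.
assert all(len(l1 & l2) == 1 for l1, l2 in combinations(lines, 2))

# Axiom: through any two distinct points there is exactly one line.
assert all(sum(1 for l in lines if {p, q} <= l) == 1
           for p, q in combinations(points, 2))

print("both axioms hold on this model")
```

The check only shows that the axioms have a model, of course; which real-world configurations deserve to be called "points" and "lines" remains the interpretive question discussed above.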

Dale said:
you cannot do an experiment with only that “experimental meaning”. It is insufficient for applying the scientific method.
What did you expect? A mathematical framework of 4 characters is unlikely to give much information about experiment. It says no more than what I claimed.

Most theories are inconsistent with experiment, and only a few, successful ones are consistent with them. Only these are the ones the philosophy of science is about, and they typically are of textbook size!
Dale said:
Suppose I do an experiment and measure 6 values: 1, 2, 3, 4, 5, 6. Using only the above framework and your supposed “experimental meaning” do the measurements verify or falsify the theory?
They verify the theory if you measured a=2, b=3, c=6, and they falsify it if you measured a=2, b=3, c=5. Given your framework, both are admissible subjective interpretations. Your framework is too weak to constrain the subjective interpretation, so some will consider it correct, others invalid, and still others think it is incomplete and needs better foundations. The future will tell whether your new theory ##ab=c## will survive scientific practice...

Just like in the early days of quantum mechanics, where the precise content of the theory was not yet fixed, and all its (subjective since disagreeing) interpretations had successes and failure - until a sort of (but not unanimous) consensus was achieved.
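To make the dependence on the subjective interpretation concrete, here is a minimal Python sketch (the index assignments are illustrative choices, nothing more): the same six measured values verify or falsify ##ab=c## depending on which values are taken to play the roles of a, b, c.

```python
# Two subjective interpretations of the same framework ab = c, applied
# to the same six measured values.  Which values play the roles of
# a, b, c is not fixed by the framework itself.
data = [1, 2, 3, 4, 5, 6]

def consistent(a, b, c):
    """Does this assignment satisfy the framework ab = c?"""
    return a * b == c

# Interpretation 1: a, b, c are the 2nd, 3rd and 6th measured values.
print(consistent(data[1], data[2], data[5]))  # 2 * 3 == 6 -> verified

# Interpretation 2: a, b, c are the 2nd, 3rd and 5th measured values.
print(consistent(data[1], data[2], data[4]))  # 2 * 3 != 5 -> falsified
```

The framework alone cannot decide between the two interpretations, which is exactly why it is too weak to be tested.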
 
  • Like
Likes Auto-Didact and dextercioby
  • #152
A. Neumaier said:
As already said, the mathematical framework of a successful physical theory must have enough of its important concepts labelled not a, b, c but with sensible concepts from the world of experimental physics
What you are describing here is more than just the mathematical framework. That is the mathematical framework plus a mapping to experiment. This mapping to experiment is what distinguishes a scientific theory from a mathematical framework. That is the objective interpretation.

I read your posts, but you are using the words in such a strange way that reading doesn’t help. What I wrote above is directly what I got from reading it.
 
  • #153
Dale said:
What you are describing here is more than just the mathematical framework. That is the mathematical framework plus a mapping to experiment.
The names are traditionally part of the mathematical framework, not a separate interpretation. Look at any mathematical theory with some relation to ordinary life, e.g., the modern axioms for Euclidean geometry or for real numbers, or Kolmogorov's axioms for probability!

The naming provides a mapping of mathematical concepts to concepts assumed already known (i.e., to informal reality, as I use the term). This part is the objective interpretation and is independent of experiment. This is necessary for a good theory, since the relation between a mathematical framework and its physics must remain the same once the theory is mature. A mature scientific theory fixes the meaning of the terms uniquely on the mathematical level so that there can be no scientifically significant disagreement about the possible interpretation, using just Callen's criterion for deciding upon the meaning.

On the other hand, experimental art changes with time and with improving theory. We now have many more ways of measuring things than 100 years ago, and these usually need theory even to be related to the old notions. There are many thousands of experiments, and new and better ones are constantly devised - none of these experiments appears in the objective interpretation part of a theory, at best a few paradigmatic illustrations!

The theories that have fairly large and still somewhat controversial interpretation discussions are probability theory, statistical mechanics, and quantum mechanics. It is not a coincidence that precisely in these cases the naming does not suffice to pin down the concepts sufficiently to permit an unambiguous interpretation. Hence the need arose to add more interpretive stuff. Most of the extra stuff is controversial, hence the many interpretations. The distinction between subjective and objective interpretation does not help here, because people do not agree upon the meaning that should deserve the label objective!

Please reread my post #147 in this light.

Dale said:
I read your posts, but you are using the words in such a strange way that reading doesn’t help. What I wrote above is directly what I got from reading it.
Well, I had said,

A. Neumaier said:
As I said, in simple cases, the interpretation is simply calling the concepts by certain names. In the case of classical Hamiltonian mechanics, ##p## is called momentum, ##q## is called position, ##t## is called time, and everyone is supposed to know what this means, i.e., to have an associated interpretation in terms of reality.
I cannot understand how this can be misinterpreted after I had explained that for me, reality just means the connection to experiment.

Dale said:
Anything necessary to predict the outcome of an experiment is objective.
But in probability theory, statistical mechanics, and quantum mechanics, different people differ in what they consider necessary. So how can it be objective?

Dale said:
It is a perfectly valid mathematical framework, one of the most commonly used ones in science.
No. ##ab=c## is just a formula. Without placing it in a mathematical framework it does not even have an unambiguous mathematical meaning.

The mathematical framework to which it belongs could be perhaps Peano arithmetic. This contains much more, since it says what natural numbers are (in purely mathematical terms), how they are added and multiplied, and that the variables denote arbitrary natural numbers.

Then ##ab=c## gets (among many others) the following experimental meaning: whenever you have ##a## children, ##b## apples, and ##c## ways of pairing children and apples, then the product of ##a## and ##b## equals ##c##. This is testable and always found correct. (If not, one questions the counting procedure rather than the theory.)

Thus no interpretation is needed beyond the mathematical framework itself. Every child understands this.
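For what it's worth, the counting claim is easy to check mechanically; the following Python sketch (the names are chosen arbitrarily) enumerates the pairings and compares the count with the arithmetic prediction:

```python
from itertools import product

# a children and b apples: count the (child, apple) pairs directly
# by enumeration and compare with the arithmetic prediction ab = c.
children = ["Ann", "Ben"]          # a = 2
apples = ["cox", "gala", "fuji"]   # b = 3

c = len(list(product(children, apples)))  # count pairings directly
print(c, c == len(children) * len(apples))
```

Enumeration and multiplication agree, as the Peano framework demands.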
 
  • #154
A. Neumaier said:
The names are traditionally part of the mathematical framework, not a separate interpretation.
While that is true, within the mathematical framework itself the names are merely arbitrary symbols. This is why a, b, and c are perfectly valid elements of the mathematical framework of a scientific theory.

The mapping to experiment is separate from the mathematical framework itself, even when the names are highly suggestive. This becomes particularly important when different theories use the same name for different concepts. The mapping to experiment is different because the names are merely arbitrary symbols, and the same name does not force the same mapping for different theories.

A. Neumaier said:
The naming provides a mapping of mathematical concepts to concepts assumed already known
My understanding of your previous comments was that this mapping is precisely what we were calling the “objective interpretation”, not the mathematical framework. Otherwise the objective interpretation is empty. I am fine with that, but it is a change from the position I thought you were taking above.

A. Neumaier said:
So how can it be objective?
“Objective” was your word, not mine. I am not sure why you are complaining to me about your own word.

A. Neumaier said:
a children, b apples, and c ways of pairing children and apples
Again, I understood from our previous discussion that this mapping from the mathematical symbols to experimental quantities is what we were calling the objective interpretation.
 
Last edited:
  • #155
DarMM said:
This is just a difference in the use of the word "Foundations", which is sometimes used to include interpretations.

Also see the parts in bold.

"There is no debate in Foundations of probability if we ignore the guys who say otherwise and one of them lost anyway, in my view"

Seems very like the kind of thing I see in QM Foundations discussions.

"Ignore Wallace's work on the Many Worlds Interpretation it's a mix of mathematics and philosophical polemic"
(I've heard this)
"Copenhagen has been shown to be completely wrong, i.e. Bohr lost" (also heard this)

In my opinion there's a major lack of focus in your post. My comment about de Finetti had to do with the axioms used (finite vs countable additivity). The axioms selected have really nothing to do with QM interpretations.

DarMM said:
I think if I asked a bunch of subjective Bayesians I'd get a very different view of who "won" and "lost".

Jaynes is regarded as a classic by many people I've spoken to, I'm not really sure why I should ignore him.

I don't know why we're talking about best seller general audience books.

As I've already said, the books mentioned in posts 109 and 111 did not include Jaynes' book. I'm trying to be disciplined and actually keep the line of conversation coherent. Jaynes' views were said to be addressed by a different author and that is what my posts have been about.

I never asserted anything was a "best seller general audience book" and I don't think sales have much to do with anything here. I did say that the books mentioned were not math books and they were aimed at a general audience.

Bayesians are in general fine with Kolmogorov formulation of probability. I don't know what you're talking about here... it seems @atyy already addressed this.

DarMM said:
"Foundations" here includes interpretations, so "Kolmogorov vs Jaynes" for example was meant in terms of their different views on probability. There are others like Popper, Carnap. Even if you don't like the word "Foundational" being applied it doesn't really change the basic point.
I've actually read a couple of Popper books, but I don't care about what he has to say about probability; he was not mathematically sophisticated enough. I struggle to figure out why you brought up philosophers here. It's something of a red flag. If you brought up, say, the views of some mixture of Fisher, Wald, Doob, Feller and some others, that would be a very different matter.

DarMM said:
Also note that in some cases there is disagreement over which axioms should be the Foundations. Jaynes takes a very different view from Kolmogorov here, eschewing a measure theoretic foundation.

I don't know what this has to do with anything. Measures are the standard analytic glue for probability. That's the settled point. There are also non-standard analysis formulations of probability (e.g. Nelson). The book I referenced by Vovk and Shafer actually tries to redo the formulation of probability, getting rid of measure theory in favor of game theory. The mechanism is betting. It's a work in progress designed to try to get people to think in a different way.

I don't think Jaynes had a complete formulation of probability but that isn't the main problem. He's perfectly fine to read if you already know a lot about probability. Part of the problem is that people who don't know much about probability read his book and then they over-fit their understanding of probability theory to his polemic. The fact that you keep bringing him up is very worrisome in this regard.
 
  • #156
atyy said:
...Bayesians can use the Kolmogorov axioms, just interpreted differently. (And yes, interpretation is part of Foundations, but the Kolmogorov part is settled.)

I think interpretation is even settling, with de Finetti having won in principle, but in practice one uses whatever seems reasonable, or both as this cosmological constant paper did: https://arxiv.org/abs/astro-ph/9812133.

I didn't understand the italicized part. Countable additivity is typically used because it is mathematically convenient. I'm only aware of a small handful of serious probability works that use finite additivity (e.g. Dubins and Savage's book, in addition to de Finetti). I skimmed the link and don't really get your comment. When people say things like

"the ratio of densities is a special, infinitesimal value of order ##10^{−100}## in order for the two densities to coincide today. "
I infer that mathematical subtleties don't have much to do with it.

Perhaps you are referring to something else related to de Finetti?
 
  • #157
StoneTemplePython said:
I didn't understand the italicized part. Countable additivity is typically used because it is mathematically convenient. I'm only aware of a small handful of serious probability works that use finite additivity (e.g. Dubins and Savage's book, in addition to de Finetti). I skimmed the link and don't really get your comment. When people say things like

"the ratio of densities is a special, infinitesimal value of order ##10^{−100}## in order for the two densities to coincide today. "
I infer that mathematical subtleties don't have much to do with it.

Perhaps you are referring to something else related to de Finetti?

I wasn't thinking of finite additivity at all. I think modern Bayesians use Kolmogorov eg. http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf is written by a Bayesian and a frequentist (maybe I'm oversimplifying), and both accept Kolmogorov. Just that in general, Bayesian thinking is valued for its intellectual framework of coherence eg. http://mlg.eng.cam.ac.uk/mlss09/mlss_slides/Jordan_1.pdf. Also, the concept of exchangeability and the representation theorem are generally taught nowadays, at least in statistics/machine learning: https://people.eecs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture1.pdf
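As an illustration of what the representation theorem asserts, the following Python sketch (with an arbitrary two-point prior on the coin bias, purely for illustration) checks numerically that a mixture over i.i.d. Bernoulli models yields an exchangeable sequence: every reordering of a given outcome sequence gets the same probability.

```python
from itertools import permutations
from math import prod, isclose

# de Finetti-style mixture over i.i.d. Bernoulli(p) models.
# The two-point prior on the bias p is an arbitrary illustrative choice.
priors = {0.2: 0.5, 0.8: 0.5}

def seq_prob(seq):
    """Probability of a 0/1 outcome sequence under the mixture."""
    return sum(w * prod(p if x else 1 - p for x in seq)
               for p, w in priors.items())

# Exchangeability: every reordering of the outcomes is equally probable,
# i.e. the probability depends only on the number of 1s.
seq = (1, 1, 0, 1, 0)
assert all(isclose(seq_prob(s), seq_prob(seq)) for s in permutations(seq))
print(seq_prob(seq))
```

The converse direction (every exchangeable sequence arises as such a mixture) is the nontrivial content of the theorem; the sketch only illustrates the easy direction.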

Since this is a quantum thread, let's add https://arxiv.org/abs/quant-ph/0104088 as another example of de Finetti's influence.
 
Last edited:
  • #158
StoneTemplePython said:
I don't think Jaynes had a complete formulation of probability but that isn't the main problem. He's perfectly fine to read if you already know a lot about probability. Part of the problem is that people who don't know much about probability read his book and then they over-fit their understanding of probability theory to his polemic. The fact that you keep bringing him up is very worrisome in this regard.

One failure of Jaynes' relevant for a quantum thread is that he did not understand the Bell theorem https://arxiv.org/abs/quant-ph/0301059 (yeah, we might have banned him on PF as a crackpot) ...
 
Last edited:
  • #159
StoneTemplePython said:
In my opinion there's a major lack of focus in your post. My comment about de Finetti had to do with axioms used (finite vs countable addivitiy). Axioms selected has really nothing to do with QM interpretations.
Some different interpretations of QM use different axioms, so I don't see how this is true. And as with de Finetti's approach, these alternate axioms have had later advocates extend them or add something to them to recover the "standard" theory: monotone continuity for de Finetti's approach to get countable additivity, or Wallace's axioms for Everett's approach to QM.

As for the rest of your post, I don't understand what's really wrong with @A. Neumaier 's references or why discussion should be confined to them (apparently introducing any new references is "unfocused"). I'm not going to go on and on with this, it's a simple fact that there are several interpretations of probability with debate and discussion over them. The only way you seem to be getting around this is by saying anybody referenced is wrong in some way, Jaynes is "worrisome", Popper is "just a philosopher", @A. Neumaier 's references are just "general audience write ups". There simply is disagreement over the interpretation of probability theory, I don't really see why you'd debate this.

You'll even see it in textbooks with Feller criticizing Bayesians and Jaynes then criticizing Feller in turn.

Also I really don't get why referencing Jaynes is "worrisome", he's polemical and there are many topics not covered in his book and gaps in what his treatment can cover, as well as his errors in relation to Bell's theorem as @atyy said (it's probable he didn't understand Bell's work). However it's a well regarded text, so I don't see the problem with simply referencing him.

StoneTemplePython said:
Bayesians are in general fine with Kolmogorov formulation of probability
I never said they weren't.

I actually don't understand what your point of contention is.
The way I see it:
  1. Is there debate in the interpretation of probability theory? Yes.
  2. Do some of these different interpretive approaches go via different axioms? Sometimes yes.
  3. Nevertheless is there a commonly agreed axiomatic basis? Yes, Kolmogorov's (most of whose axioms in some of the other approaches become theorems).
  4. Is such debate mostly confined to a smaller, more philosophical community (at times actual philosophers) and much rarer in on-the-ground practice? Yes.
This seems very like the situation in QM to me, which is what I was saying to @bhobba
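For concreteness, the commonly agreed axiomatic basis in point 3 amounts, on a finite sample space, to a handful of directly checkable conditions. A minimal Python sketch (toy fair-die distribution, purely illustrative):

```python
# Kolmogorov's axioms on a finite sample space: nonnegativity,
# normalisation, and additivity over disjoint events.
omega = {1, 2, 3, 4, 5, 6}
p = {x: 1 / 6 for x in omega}   # toy fair-die distribution

def P(event):
    """Probability of an event (a subset of omega)."""
    return sum(p[x] for x in event)

assert all(p[x] >= 0 for x in omega)           # nonnegativity
assert abs(P(omega) - 1) < 1e-12               # normalisation
A, B = {1, 2}, {5, 6}                          # disjoint events
assert abs(P(A | B) - (P(A) + P(B))) < 1e-12   # finite additivity
print("axioms hold on this model")
```

The interpretive debates in points 1-2 are about what these numbers mean (degrees of belief, frequencies, propensities), not about whether such conditions hold.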
 
Last edited:
  • Like
Likes dextercioby
  • #160
Hmmm, Renner is one of the people who did a quantum version of the de Finetti theorem. Maybe that is enough to forgive him the Frauchiger and Renner papers :P

https://arxiv.org/abs/quant-ph/0512258
 
  • Like
Likes Auto-Didact, DarMM and Demystifier
  • #161
atyy said:
Hmmm, Renner is one of the people who did a quantum version of the de Finetti theorem. Maybe that is enough to forgive him the Frauchiger and Renner papers :P

https://arxiv.org/abs/quant-ph/0512258
If you can forgive Renner, maybe you could also forgive Ballentine for his misunderstanding of collapse, decoherence and the quantum Zeno effect? :biggrin:
 
  • Like
Likes Auto-Didact and bhobba
  • #162
Demystifier said:
If you can forgive Renner, maybe you could also forgive Ballentine for his misunderstanding of collapse, decoherence and the quantum Zeno effect? :biggrin:

I guess I forgive Renner more easily because I didn't spend much time on Frauchiger and Renner (I thought it was like perpetual motion machines), and you and DarMM sorted it out for me. OTOH I wasted so much time with Ballentine because he was rated so highly on this forum. And what good thing did Ballentine do equivalent to Renner's quantum de Finetti contribution?

BTW, I haven't forgiven Renner yet :biggrin:
 
  • Like
Likes bhobba, Demystifier and DarMM
  • #163
atyy said:
And what good thing did Ballentine do equivalent to Renner's quantum de Finetti contribution?
Apart from the wrong parts, I still think that his book is the best graduate general QM textbook that exists. And as with Renner, I always have much more respect for being non-trivially wrong than for being trivially right.
 
  • Like
Likes bhobba
  • #164
Demystifier said:
Apart from the wrong parts, I still think that his book is the best graduate general QM textbook that exists. And as with Renner, I always have much more respect for being non-trivially wrong than for being trivially right.

I guess we differ on whether they are trivially wrong or non-trivially wrong. To me it seems that both Ballentine and Frauchiger and Renner are interested in the wrong problems in quantum foundations, and never properly address the measurement problem (the only problem of real worth in quantum foundations).

http://schroedingersrat.blogspot.com/2013/11/do-not-work-in-quantum-foundations.html

Incidentally, the papers by Renner mentioned by Schroedinger's rat did address things closer to the measurement problem.
 
  • Like
Likes Auto-Didact and Demystifier
  • #165
Demystifier said:
Apart from the wrong parts, I still think that his book is the best graduate general QM textbook that exists. And as with Renner, I always have much more respect for being non-trivially wrong than for being trivially right.

I guess the Frauchiger and Renner paper is more non-trivially wrong from the Bohmian point of view (from Copenhagen their setup just seems wrong). So perhaps that's another point in favour of forgiving them - they are unconscious Bohmians :)
 
  • Like
Likes Demystifier
  • #166
  • Like
Likes bhobba, atyy and DrChinese
  • #167
Dale said:
While that is true, within the mathematical framework itself the names are merely arbitrary symbols. This is why a, b, and c are perfectly valid elements of the mathematical framework of a scientific theory.
No. There is a huge difference between a formula (which is meaningless outside of a mathematical framework) and a mathematical framework itself, which is a logical system giving a complete set of definitions and axioms within which formulas become meaningful. While the names of concepts are in principle arbitrary, once chosen, they mean the same thing throughout (unlike variables) - to the extent that one can understand math texts written in a different language by restoring the familiar wording, without knowing the language itself.

The axioms and definitions carry the complete intrinsic meaning. With Peano's system of axioms you recover everywhere in the universe, no matter which language is used, the same concept of counting, no matter how it is worded, and this is enough to reconstruct the meaning, and then apply it to reality by devising experiment to check its usefulness.

Dale said:
The mapping to experiment is separate from the mathematical framework itself, even when the names are highly suggestive.
This becomes particularly important when different theories use the same name for different concepts. The mapping to experiment is different because the names are merely arbitrary symbols, and the same name does not force the same mapping for different theories.
Most experiments are never mapped to theory by the content of a book on theoretical physics, but they are used to test these theories. This is possible precisely because the theory cannot be mapped arbitrarily to experimental physics without becoming obviously wrong. In a mature theory there is only one way to do the mapping, given the mathematical framework (with axioms, definitions, and results) - even when the names of the concepts are unfamiliar. Unlike your caricature of a mathematical framework, which means nothing at all without context.

Dale said:
Again, I understood from our previous discussion that this mapping from the mathematical symbols to experimental quantities is what we were calling the objective interpretation.
An arbitrary mapping from the mathematical framework to experimental quantities is a valid interpretation iff it satisfies Callen's criterion. In a sufficiently mature theory (such as projective geometry) there is only one such mapping (apart from universal symmetries in the mathematical framework). Thus the mathematical framework alone determines the objective interpretation in this sense, the meaning of everything, and the falsifiability of the theory. Precisely this is the reason why there are no discussions about interpretation in most good theories.

But the current theory of quantum mechanics is underspecified since it uses the undefined notions of measurement and probability in its axioms and hence leaves plenty of room for interpretation.
 
Last edited:
  • Like
Likes dextercioby and Auto-Didact
  • #168
A. Neumaier said:
No. There is a huge difference between a formula (which is meaningless outside of a mathematical framework) and a mathematical framework itself, which is a logical system giving a complete set of definitions and axioms within which formulas become meaningful.
Yes, that is a good point. I concede this. So for the previous example the framework would have to include the standard axioms of algebra and arithmetic with real numbers.

A. Neumaier said:
While the names of concepts are in principle arbitrary, once chosen, they mean the same thing throughout (unlike variables) - to the extent that one can understand math texts written in a different language by restoring the familiar wording, without knowing the language itself.
Fair enough. So a is the Daleage, b is the Neumaierian, and c is the Demystifier number. Now we have a fully specified mathematical framework, complete with names of concepts, axioms, and formulas. And yet it is impossible from this alone to determine whether an experiment validates or falsifies the theory. This is therefore a counter-example to your claim.

A. Neumaier said:
In a mature theory there is only one way to do the mapping, given the mathematical framework (with axioms, definitions, and results). This is unlike your caricature of a mathematical framework, which means nothing at all without context.
The context is not part of the framework. That should be obvious. The very meaning of "context" implies looking outside of something to see how it fits into a broader realm beyond itself. The whole purpose of the caricature was to remove the context and look only at the mathematical framework itself. From that "toy" exercise it is clear that the framework is insufficient for experimental testing.

If context is required for the mapping then the framework is insufficient by definition of “context”.

A. Neumaier said:
Thus the mathematical framework alone determines the objective interpretation in this sense
I completely reject this assertion. Certainly, the common usage of the term "theory" states that something in addition to the mathematical framework is required to make the mapping to experiment.
 
Last edited:
  • #169
atyy said:
I wasn't thinking of finite additivity at all. I think modern Bayesians use Kolmogorov eg. http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf is written by a Bayesian and a frequentist (maybe I'm oversimplifying), and both accept Kolmogorov. Just that in general, Bayesian thinking is valued for its intellectual framework of coherence eg. http://mlg.eng.cam.ac.uk/mlss09/mlss_slides/Jordan_1.pdf. Also, the concept of exchangeability and the representation theorem are generally taught nowadays, at least in statistics/machine learning: https://people.eecs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture1.pdf

This is kind of the problem... lots of ambiguous language. I inferred finite additivity many posts ago... but it seems this wasn't the right inference at all.

Exchangeability is a big umbrella but really is just specializing symmetric function theory to probability. Off the top of my head, I would have said typical use cases are really martingale theory (e.g. Doob backward martingale). But yes, graphical models and a whole host of other things can harness this. We're getting in the weeds here... a lot of big names have worked on exchangeability.

I have again lost the scent of how this is somehow related to a different kind of probability advocated by de Finetti. I have unfortunately remembered why I dislike philosophy these days.
- - - -
re: Bayes stuff... it is in some ways my preferred way of thinking about things. But people try to make it into a cult, which is unfortunate. As you've stated correctly, frequentists and Bayesians are still using the same probability theory -- they just meditate on it rather differently.
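To make the exchangeability point above concrete (a small sketch of my own, not from the posts), a de Finetti-style mixture of Bernoulli coins sharing a latent bias is exchangeable but not independent, which is exactly the situation the representation theorem characterizes:

```python
import random

random.seed(0)

def exchangeable_pair():
    # de Finetti-style mixture: draw a latent bias p, then two flips.
    # The pair (x1, x2) is exchangeable but NOT independent.
    p = random.random()              # p ~ Uniform(0, 1)
    return (random.random() < p, random.random() < p)

N = 200_000
pairs = [exchangeable_pair() for _ in range(N)]

# Marginal P(X1 = 1) = E[p] = 1/2
p1 = sum(x1 for x1, _ in pairs) / N
# Conditional P(X2 = 1 | X1 = 1) = E[p^2] / E[p] = (1/3) / (1/2) = 2/3
both = sum(x1 and x2 for x1, x2 in pairs)
p2_given_1 = both / sum(x1 for x1, _ in pairs)

print(f"P(X1=1)        ~ {p1:.3f}")     # close to 0.5
print(f"P(X2=1 | X1=1) ~ {p2_given_1:.3f}")  # close to 2/3
```

Seeing the first flip come up heads shifts belief toward a high-bias coin, so the flips are correlated even though their joint law is symmetric under permutation.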
 
  • #170
Demystifier said:
Apart from the wrong parts, I still think that his book is the best general graduate QM textbook that exists.

So do I, and many here do also. But it can polarize - a number of people here are quite critical of it. I guess it depends on how you react to the wrong bits - they are there for sure, but for some reason they do not worry me too much, probably because there are not too many and they are easy to spot and ignore. Of greater concern to me personally is Ballentine's dismissal of decoherence as important in interpretations - he thinks decoherence is an important phenomenon, just of no value as far as interpretations go:
https://core.ac.uk/download/pdf/81824935.pdf
'Decoherence theory is of no help at all in resolving Schrödinger’s cat paradox or the problem of measurement. Its role in establishing the classicality of macroscopic systems is much more limited than is often claimed.'

That however would be a thread all by itself :rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes:

Thanks
Bill
 
  • Like
Likes Auto-Didact and Demystifier
  • #171
Demystifier said:
Now suppose that someone else develops another theory T2 that makes the same measurable predictions as T1. So if T1 was a legitimate theory, then, by the same criteria, T2 is also a legitimate theory. Yet, for some reason, physicists like to say that T2 is not a theory, but only an interpretation. But how can it be that T1 is a theory and T2 is only an interpretation? It simply doesn’t make sense.
The scientific approach requires that predictions be made before the experiment, so I can see how T1 could be considered something more than T2. Say T1 is verified by experiment (T1 made its predictions before the experiment), while T2 is developed later with knowledge of the experimental results, with which it has to agree, and produces no new predictions. Then T1 is verified but T2 is not, even though they give exactly the same predictions.
And there is a good reason for the rule that predictions have to be made before the experiment - people are very good at fooling themselves.

There is another thing I would like to add concerning the discussion of this topic. A theory has to include the things needed for it to produce testable predictions. But QM as a statistical theory makes this task difficult and ambiguous. There is a lot of event-based reasoning on the experimental side before we get statistics (consider coincidence counters, for example). On one hand, QM as a statistical theory cannot replace that event-based classical reasoning, but on the other hand it overlaps with classical theories and is more correct, so in a sense it should replace it.
So it seems to me that without something we usually call an "interpretation", the connection of QM to experiments remains somewhat murky.
 
  • Like
Likes kurt101
  • #172
I can't help wondering whether, however interesting this thread is, it is metaphysics rather than physics.

If I am mistaken, could you explain why?

Regards Andrew
 
  • Like
Likes dextercioby, Mentz114 and Dale
  • #173
Dale said:
Fair enough. So a is the Daleage, b is the Neumaierian, and c is the Demystifier number. Now we have a fully specified mathematical framework, complete with names of concepts, axioms, and formulas. And yet it is impossible from this alone to determine whether an experiment validates or falsifies the theory. This is therefore a counter-example to your claim.
No. Each time I measure two numbers a and b I can apply your theory and say, ''Ah, if I interpret a as the Daleage and b as the Neumaierian then their product is the Demystifier number. Interesting'' (or boring).

This is the same as what happens when applying quantum mechanics to experiment. We deduce information about the wave function (a purely theoretical concept) by interpreting certain experimental activities as instances of the theory.

Dale said:
The context is not part of the framework.
I agree. It is also not part of the theory. Thus your example is ridiculous.

The example of projective planes shows that the framework itself, if it is good enough, contains everything needed to apply it in a context appropriate for the theory. This holds even when the naming is different. The context has its structure and the theory has its structure, and anyone used to recognizing structure will recognize the unique way to match them such that the theory applies successfully.
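As a concrete instance of a structure pinning everything down (my own sketch, not from the post): the smallest projective plane, the Fano plane, can be checked against the incidence axioms directly, and relabeling the points changes nothing:

```python
from itertools import combinations

# The Fano plane: 7 points, 7 lines, 3 points per line.
# The point labels 0..6 are arbitrary; only the incidence structure matters.
LINES = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
         {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]
POINTS = set().union(*LINES)

# Axiom 1: any two distinct points lie on exactly one common line.
for p, q in combinations(POINTS, 2):
    assert sum({p, q} <= line for line in LINES) == 1

# Axiom 2: any two distinct lines meet in exactly one point.
for l1, l2 in combinations(LINES, 2):
    assert len(l1 & l2) == 1

print("Fano plane satisfies the projective plane axioms")
```

Any renaming of the seven points yields an isomorphic structure, so recognizing the structure is enough to recover the unique matching to whatever the points "really are".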

A. Neumaier said:
An arbitrary mapping from the mathematical framework to experimental quantities is a valid interpretation iff it satisfies Callen's criterion. In a sufficiently mature theory (such as projective geometry) there is only one such mapping (apart from universal symmetries in the mathematical framework). Thus the mathematical framework alone determines the objective interpretation in this sense, the meaning of everything, and the falsifiability of the theory.
Dale said:
I completely reject this assertion. Certainly, the common usage of the term "theory" states that something in addition to the mathematical framework is required to make the mapping to experiment.
Yes, namely the experience of the experimenter. The relation between theory and experiment is far more complex than the few hints given in a book on theoretical physics. It is not the subject of such books but of books on experimental physics!
Dale said:
I also found a paper entitled "What is a scientific theory?" by Patrick Suppes from 1967 (Philosophy of Science Today) who says "The standard sketch of scientific theories-and I emphasize the word 'sketch'-runs something like the following. A scientific theory consists of two parts. One part is an abstract logical calculus ... The second part of the theory is a set of rules that assign an empirical content to the logical calculus. It is always emphasized that the first part alone is not sufficient to define a scientific theory".
As he describes this as the "standard sketch" and as this also agrees with the Wikipedia reference and my previous understanding, then I take it that your definition of theory is not that which is commonly used.
But Suppes says there:
Suppes said:
scientific theories cannot be defined in any simple or direct way in terms of other non-physical, abstract objects. [...] To none of these questions do we expect a simple and precise answer. [...] This is also true of scientific theories.
He calls your view the ''standard sketch'' meaning that this is (i) the (uninformed) usually heard opinion and (ii) a vast simplification. Then he gives his (better informed) critique of the standard sketch, which he disqualifies as ''highly schematic'' and ''relatively vague'', and refers to ''different empirical interpretations''. Thus he says that the same theory has different empirical interpretations, which therefore cannot be part of the theory!
Suppes said:
It is difficult to impose a definite pattern on the rules of empirical interpretation.
Then he talks about ''models of the theory [...] highly abstract'', which makes sense only if his view of theory is just the mathematical framework which is the meaning he then uses throughout. On p.62, he talks about ''the necessity of providing empirical interpretation of a theory''. This formulation makes sense only if one identifies ''theory = the formal part'' and treats the interpretation as separate! Then he goes on saying that the formulations in the standard sketch
Suppes said:
have their place in popular philosophical expositions of theories, but in the actual practice of testing scientific theories a more elaborate and more sophisticated formal machinery for relating a theory to data is required. [...] There is no simple procedure for giving co-ordinating definitions for a theory. It is even a bowdlerization of the facts to say that co-ordinating definitions are given to establish the proper connections between models of the theory and models of the experiment.
and then he discusses (starting p.63 bottom) the morass one enters if one wants to take your definition seriously!

So the only clean and philosophically justified conceptual division is to have
  • theory = mathematical framework (which includes suggestive names attached to the concepts)
  • interpretation = explaining the suggestive names on the basis of informal key examples from experiment.
 
Last edited:
  • Like
Likes dextercioby
  • #174
andrew s 1905 said:
I can't help wondering that, however interesting this thread is, it is metaphysics rather than physics?

If I am mistaken could you explain why.
It's about semantics, with bits from the philosophy of science. As to why: metaphysics "examines the fundamental nature of reality" [Wikipedia], but this discussion is about the borders between mathematics, theory, and interpretation (philosophy of science) and the common meaning of the last two terms (semantics).
 
  • #175
A. Neumaier said:
Each time I measure two numbers a and b I can apply your theory and say, ''Ah, if I interpret a as the Daleage and b as the Neumaierian then their product is the Demystifier number. Interesting'' (or boring).
Exactly. You have to go beyond the mathematical framework and map the experimental results to the labels.

A. Neumaier said:
I agree. It is also not part of the theory. Thus your example is ridiculous.
The part of the context that gives the mapping from the framework to experiment is part of the theory. This is precisely the point where your misuse of the term “theory” is causing problems.

A. Neumaier said:
He calls your view the ''standard sketch'' meaning that this is (i) the (uninformed) usually heard opinion and (ii) a vast simplification. Then he gives his (better informed) critique of the standard sketch,
Yes. But as far as I am aware the standard sketch remains the standard meaning of the terms and the scientific community has not adopted his “better informed” opinion. I.e. even someone who is openly antagonistic to the standard definition admits that it is in fact the standard definition.

At this point I think that further discussion is pointless (as is always the case in interpretations discussions here). You are clearly going to continue to use the non-standard terminology, and so you are going to continue to have multi-page semantic arguments due to the fact that you are using non-standard terminology. I also think that your supportive references don't actually support your position. In particular, the Wikipedia reference on interpretation uses the term "reality" instead of your term "observation/reality", and also your reference to Callen doesn't support your point since he is talking about the "theory" (mathematical framework + mapping to experiment) and not just the "mathematical framework" as you claim.

You are free to get the last word in, but I am disengaging at this point. Your whole approach uses a non-standard meaning for the word "theory" and I will not adopt your meaning and I suspect you will not adopt the standard meaning. It is the standard meaning, despite your distaste for it.
 
Last edited:
