Against “interpretation”
I am against “interpretations” of Quantum Mechanics (QM) in a sense in which John Bell [1] was against measurement in QM and Travis Norsen [2] is against realism in QM. Bell was not against doing measurements, he was against using the concept of measurement as a central concept in quantum foundations. Norsen does not think that realism does not exist, he thinks that the existence of realism is so obvious and basic that one should not even talk about it. In a similar spirit, I do not think that physicists should not study interpretations, I think that it is misleading to talk about interpretations as something different from theories. The titles “Against measurement” [1] and “Against realism” [2] were chosen by Bell and Norsen to provoke, by imitating the provocative style of Paul Feyerabend – the famous philosopher of science who was “Against method” [3]. My intentions here are provocative too.
Physicists often say that in physics we need theories that make new measurable predictions and that we don’t need interpretations that make the same measurable predictions as old theories. I think it’s nonsense. It’s nonsense to say that theories are one thing and interpretations are another. The interpretations are theories. Making a distinction between them only raises confusion. So we should ban the word “interpretation” and talk only about the theories.
Let me explain. Suppose that someone develops a theory called T1 that makes measurable predictions. And suppose that those predictions were not made by any previous theory. Then all physicists would agree that T1 is a legitimate theory. (Whether the predictions agree with experiments is not important here.)
Now suppose that someone else develops another theory T2 that makes the same measurable predictions as T1. So if T1 was a legitimate theory, then, by the same criteria, T2 is also a legitimate theory. Yet, for some reason, physicists like to say that T2 is not a theory, but only an interpretation. But how can it be that T1 is a theory and T2 is only an interpretation? It simply doesn’t make sense.
To resolve that issue, one might say that both T1 and T2 are interpretations. Fine, but then what is the theory? T1 was a legitimate theory before someone developed T2, but now T1 ceased to be a theory just because someone developed T2? It doesn’t make sense either.
Or perhaps the theory is just the set of final measurable predictions of T1 and T2, while all the other “auxiliary” elements of T1 and T2 are the “interpretation”? It doesn’t make sense either, because no theory in physics deals only with measurable predictions. All physics theories have some “auxiliary” elements that are an integral part of the theory.
Or perhaps an interpretation is a theory that emphasizes philosophical aspects? I think this is what most physicists mean by interpretation, even if they don’t want to say it explicitly. The problem with this definition is that it cannot be put into a precise form. All theories have some philosophical aspects, some theories more, some less. So exactly how much of philosophy does a theory have to have to call it an interpretation? It’s simply impossible to tell. And where exactly is the borderline between philosophy and non-philosophy? There is no such borderline.
To conclude, we can talk about a theory, we can distinguish the measurable predictions of the theory from other elements of the theory that cannot be directly measured, but it doesn’t make sense to distinguish an interpretation from a theory. There are no interpretations of QM, there are only theories.
References:
[1] J. Bell, Against measurement, https://m.tau.ac.il/~quantum/Vaidman/IQM/BellAM.pdf
[2] T. Norsen, Against “realism”, http://de.arxiv.org/abs/quant-ph/0607057
[3] P. Feyerabend, Against method, https://en.wikipedia.org/wiki/Against_Method
Theoretical physicist from Croatia
Latecomer's comment: let's take mechanics as an example. By my standards, D'Alembert's principle, Hamilton's principle, and Newton's laws all fit the category of 'theory', since even if they are about the same topic they have different math behind them.
Interpretation, however, would be something like the history of Newton's first law: originally it was a statement about the behavior of isolated bodies, then it was slowly transformed into the definition of inertial systems. The underlying math is the same; however, the translation into ordinary language became different.
In this context, the debate about the interpretation of QM would be more about the frustration caused by the conflict between ordinary language and complex math, which prevents a clear description of the principles in common language, than about different theories of the same topic – since there is no fundamentally different math for the topic…
"But Professor Henry Stapp comforts me a bit, saying that my conscious choice of the thought pattern in my brain is a genuine quantum choice . . ."

Oh, really? Poor you! (How does he know you so well??)
My conscious choices are almost never random but usually based on more or less predictable preferences. (Of course predictable only by those who know me well enough.)
I also prefer my friends to be reliable rather than acting randomly…
"In reality, we have no choice but to be content with the given one in which we live."

Ah, that's a bit sad. :smile:
But Professor Henry Stapp comforts me a bit, saying that my conscious choice of the thought pattern in my brain is a genuine quantum choice . . .
"the choice between the available variants of the universe"

In reality, we have no choice but to be content with the given one in which we live.
Looking at the abstractions here, I see strong analogies between the "theory vs. interpretations" discussion and other concepts, such as the nature of gauge symmetry, and questions of philosophy of science such as objective vs. subjective information.
I am tempted to make this characterisation:
Theory ~ equivalence class of the versions of the theory, indexed by interpretation.
The new question this induces is: how does one scientifically define the equivalence class? How do we know that the set of versions of the theory is exhausted? How much interaction and comparison is required to conclude equivalence? Is this process even physically realisable in finite time by a bounded physical system?
I.e., we can consider the equivalence class to be the "theory" and the choice of interpretation to be a gauge choice – in some contexts also described as a redundancy – of choosing an observer.
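The "equivalence class" idea above can be sketched in a few lines of code. This is my own toy illustration (all names hypothetical): treat a "theory version" as a map from experimental settings to predictions, and call two versions equivalent when their predictions agree everywhere. Note that the sketch can only check a finite list of settings, which is exactly the difficulty raised above about exhausting the class.

```python
# Toy sketch: theory versions as prediction functions; the "theory" is
# the equivalence class of versions with identical predictions.

settings = [0.0, 0.25, 0.5, 0.75, 1.0]   # a finite stand-in for "all experiments"

def version_a(s):
    return s * s          # one formulation...

def version_b(s):
    return s ** 2         # ...a differently-worded formulation, same predictions

def version_c(s):
    return 1.0 - s        # a genuinely different theory

def equivalent(t1, t2):
    """Same predictions on every tested setting => same equivalence class."""
    return all(abs(t1(s) - t2(s)) < 1e-12 for s in settings)

assert equivalent(version_a, version_b)       # "interpretations" of one theory
assert not equivalent(version_a, version_c)   # a different theory
```

In the gauge analogy, `version_a` and `version_b` differ only by a "gauge choice"; the invariant content is the prediction function they share.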
However, this raises deeper, more complicated questions, which put the focus on how objectivity (as in gauge equivalences, or gauge symmetries) is actually established, given that the process of "science" (inference) takes place INSIDE this system, not outside or external to it.
One can also connect this to an evolutionary perspective, and here it seems that different interpretations yield different expectations about future developments by natural extrapolation. Each interpretation defines its own measure of naturalness and its own extrapolations.
So I see the interpretations, in the context of evolution, as part of the required variability. This is how I have always talked about "interpretations": they make no difference and are of no survival value at the present moment, but they represent the healthy variation that sets out the research directions for the future; and there they will be discriminated.
This is why my personal view is that "interpretations of QM" become interesting only when they are taken to their full implications BEYOND the standard model.
/Fredrik
Why, naturally: QM is the only theory that tries to deal explicitly with the choice between the available variants of the universe, each variant obeying all the other theories.
"I read that and I totally disagree that this supports your view over mine, but I am not willing to argue the point further. Everyone has a built-in psychological tendency towards what is called confirmation bias, where evidence is fit into a preconceived view. I believe that you are suffering from that quite strongly, and I assume that you feel I am also suffering from the same. To me, that same passage supports my view, not yours."

Schrödinger did not have my confirmation bias. In his 1958 paper ''Might perhaps energy be a merely statistical concept?'' he confirmed my reading of Rosenfeld, although he strongly opposes its truth:
I feel induced to contradict emphatically an opinion that Professor L. Rosenfeld has recently uttered in a meeting at Bristol, to the effect that a mathematically fully developed, good and self-consistent physical theory carries its interpretation in itself, there can be no question of changing the latter, of shuffling about the concepts and formulae.
"The proceedings contain a paper by Rosenfeld (pp. 41-45) discussing the relation of theory to physical experience, expressing also the same view as I did, not @Dale's:"

I read that and I totally disagree that this supports your view over mine, but I am not willing to argue the point further. Everyone has a built-in psychological tendency towards what is called confirmation bias, where evidence is fit into a preconceived view. I believe that you are suffering from that quite strongly, and I assume that you feel I am also suffering from the same. To me, that same passage supports my view, not yours.
"Funnily enough, I always thought QM was very similar to probability theory in this regard. Although most people just apply it, there is a pretty active community of debate on foundations, e.g. frequentist vs. Kolmogorov vs. de Finetti vs. Jaynes. Famously summed up in I. J. Good's title '46656 Varieties of Bayesians' for the third chapter of his 1983 book 'Good Thinking: The Foundations of Probability and Its Applications'."

The debate is actually older than the debate on the foundations of quantum mechanics. An interesting snapshot from 1957 is given by the proceedings
S. Körner (ed.), Observation and Interpretation, Butterworths, London 1957.
It shows similarities and interrelations between the interpretation problems in quantum mechanics and in probability theory.
The proceedings contain a paper by Rosenfeld (pp.41-45) discussing the relation of theory to physical experience, expressing also the same view as I did, not @Dale's:
"The ordinary language (spiced with technical jargon for the sake of conciseness) is thus inseparably united, in a good theory, with whatever mathematical apparatus is necessary to deal with the quantitative aspects. It is only too true that, isolated from their physical context, the mathematical equations are meaningless: but if the theory is any good, the physical meaning which can be attached to them is unique."

The proceedings also contain a paper by D. Bohm on his hidden variable theory, with discussion.
"Einstein himself was a proponent of it."

But he never gave an explicit formal expression of it, I think. Weyl, by contrast, is quite explicit about it – well before the Bohr-Einstein debate.
"(Thus – @bhobba, @atyy – the ensemble interpretation starts at least with Weyl 1927, and not only with Ballentine 1980!)"

Of course. Einstein himself was a proponent of it. As I think I mentioned, it is interesting that the ensemble interpretation has come through to modern times pretty much unchanged; Copenhagen – not so well. Of course, Copenhagen has the added issue of there being all sorts of different versions. When I speak of Copenhagen I mean the version advocated by Bohr, even though he is a bit too philosophical for my taste – just me of course, it's got nothing to do with its validity – I just find such writing hard to understand. It is of course understandable – I have no doubt Einstein understood what his good friend was saying even though he disagreed with him – but I am a philosophical philistine, as my philosophy teacher was only too well aware (I took a graduate course in philosophy – actually two, but the second one I pulled out of because it really was philosophy in a historical context, and history did not interest me that much).
Thanks
Bill
"Or we can spin off a separate thread about the 'what is a theory' question so it can be discussed independently of QM interpretations."

It is a mix of QM and QM-independent stuff that is difficult to disentangle, hence it is better to leave it here. @Dale could open a new thread, however, and I'd repeat the main features of my point of view.
"It is difficult to satisfy everyone. @Dale wants to leave quantum physics out of the discussion, you want to concentrate exclusively on it."

@Dale has said that he himself has little interest in QM interpretations, yes. But for better or worse, that is what this thread is supposed to be about, since that's what the article in the OP is about. @Dale can always just not post further in this thread if the topic gets too tiresome for him. Or we can spin off a separate thread about the "what is a theory" question so it can be discussed independently of QM interpretations.
"I was talking about an example using QM, since interpretations of QM is the topic of this thread."

It is difficult to satisfy everyone. @Dale wants to leave quantum physics out of the discussion, you want to concentrate exclusively on it.
"a theory textbook won't do this except for a highly idealized experiment."

But according to Dale, a scientific theory must contain the map from theory to experiment, and surely a book on quantum theory should provide enough of the theory so that it is a scientific theory. According to you, it would not be a scientific theory in Dale's sense. Do you agree with Dale or with Suppes in this respect?
"the point I'm trying to make is that none of that work has anything to do with QM interpretations as that term is used in the article in the OP of this thread. Collapse vs. MWI, for example, does not enter into that process at all; a collapse proponent and an MWI proponent can both tell their preferred stories about what happens, unaffected by any of the work the experimenters had to do to match up the theory with the actual events in their lab."

In MWI nothing is ever predicted, unless you tell MWI which world is realized in the experiment. Thus MWI robs quantum mechanics of its predictive value.
Of course, the MWI proponents hide this by fuzzy terminology, but when you follow up on their justification of the empirical recipes you find nothing of substance.
In Bohmian mechanics, additional unobservable position variables are introduced, but it seems that these positions have no empirical content and hence give a misleading sense of ''reality''.
In the Copenhagen interpretation, nothing is predicted if you consider the solar system as a quantum system, since none of our observations are done from the outside. Of course, the Copenhagen interpretation was not intended for large systems such as the solar system, but for tiny systems under study in the 1920's and 1930's. But it showed its limitations later, and ultimately was found questionable by many. In the microscopic realm it is fully adequate. But it refuses to give a map to experiment as Dale would require it; it leaves that to classical physics, which is outside the scope of Copenhagen quantum physics (except in a correspondence limit).
The same holds for the statistical interpretation, but for different reasons: we cannot create enough independent copies of the solar system to perform adequate statistics on it. Again, for tiny systems, there are no problems with this interpretation.
Similar things can be said for any of the interpretations of quantum mechanics listed in Wikipedia.
Thus for tiny systems, shut-up-and-calculate is adequate. The mathematical framework of quantum mechanics (with highly suggestive names for the concepts) has enough structure to enforce its interpretation in the microscopic realm. This is meant in the same sense as I demonstrated it – for simplicity, both to avoid having to discuss all the stuff specific to quantum mechanics, and since Dale wanted the discussion to apply to general scientific theories – for the Peano axioms and for projective planes.
For large systems, in particular for the solar system, no current interpretation of quantum mechanics is adequate. Although quantum theory is obviously complete on this level (when gravitation is modelled semiclassically in the post-Newton approximation), the physics community simply does not know how to set up a mapping from theory (with or without interpretation) to experiment that is both logically consistent and applies to the solar system and all its subsystems. But the principles of quantum theory have been unchanged since around 1975 (with the advent of POVMs and the standard model) and are unlikely to change in the future, except perhaps with the incorporation of gravity.
This is the reason why the number of interpretations has proliferated, each new proposal being made in the hope that its fate would be better than that of the earlier ones. It also shows that the mapping from theory (with or without interpretation) to experiment cannot be part of quantum theory.
"Please give me a reference to an online article or well-known textbook that gives this unique ''mapping between the mathematical model and experiment''."

As you note, a theory textbook won't do this except for a highly idealized experiment. Obviously, as you say, an experimenter doing a real experiment has to do significant additional work to connect what the theory says to what he actually does in his lab.
However, none of this changes what I was saying. Let me try to rephrase what I was saying to show this. Suppose we are running a Stern Gerlach experiment–a real one, like the original one Stern and Gerlach did, where you are using silver atoms, not individual electrons, and you have a beam of them, not individual ones passing through the apparatus one at a time, and you vary the magnetic field and watch the beam on the detector split, as shown on the postcard that they sent to Bohr (IIRC). Obviously, as you say, a lot of work has to be done to match up what they saw in this real experiment with the theoretical model of a qubit.
But the point I'm trying to make is that none of that work has anything to do with QM interpretations as that term is used in the article in the OP of this thread. Collapse vs. MWI, for example, does not enter into that process at all; a collapse proponent and an MWI proponent can both tell their preferred stories about what happens, unaffected by any of the work the experimenters had to do to match up the theory with the actual events in their lab.
At least, that's how I see it; but perhaps, since you discuss the ensemble interpretation, the argument you are making is that experiments like Stern-Gerlach, properly interpreted, actually do rule out, say, the MWI? Or a collapse interpretation that makes claims about individual electrons (or silver atoms) instead of ensembles? If so, that certainly does not seem to be a common view among physicists.
And even that doesn't exhaust the possibilities: you can pick any integer (positive, negative, or zero) you like, and adopt it as the starting "natural number", and you will satisfy all of the axioms. In other words, there are an infinite number of possible isomorphisms to some "canonical" set of natural numbers (say the ones starting with 1, since those are the ones you say you prefer), each of the form ##x \rightarrow x + a##, with ##a## being any integer.
"In other words, while it is certainly possible to discover a mapping between a mathematical model and experience in the process of understanding the mathematical model, there is no guarantee that the mapping you discover is the only mapping. So people both using the same mathematical model can still end up making different predictions, if they have not taken steps to ensure that they're both using the same mapping as well. For example, if you use 1-based counting and I use 0-based counting, we're likely to get confused trying to match up our counts if we don't realize the difference and make appropriate adjustments."

Yes, and as I had said in my last post, exactly the same happens in classical mechanics, which can be mapped only up to a rigid motion.
"With all that said, I still come back to what I said in my previous post: none of this has anything to do with interpretations of QM, because different interpretations of QM all agree about which mapping between the mathematical model and experiment to use."

No, they don't. Please give me a reference to an online article or well-known textbook that gives this unique ''mapping between the mathematical model and experiment''. And I'll show (like Suppes did) that it says almost nothing about real experiments. Theoretical sources only say something about relations to abstract buzzwords like ''observable'' and ''measure'', about whose precise meaning the interpretations (and the experimental practice) widely differ. There are many thousands of experiments to be covered; the term ''experiment'' in your comment is something very theoretical…
"It might be helpful if you would give a concrete example. Perhaps this would qualify as one: the 'theory' (for your meaning of that term) is the standard quantum theory of a qubit. Two different possible 'mappings' are: interpreting the qubit theory as describing the spin of an electron in a Stern-Gerlach experiment; interpreting the qubit theory as describing the polarization of a photon passing through a beam splitter. Is that the sort of thing you have in mind?"

Well, this is a meta-setting in which the real world is replaced by a theoretical world, in which the qubit has two different interpretations. No, this was not what I had meant.
In your context, consider the qubit first discussed by Weyl 1927 in the context of a Stern-Gerlach-like experiment. [H. Weyl, Quantenmechanik und Gruppentheorie, Z. Phys. 46 (1927), 1-46.] The title of the first part is ''The meaning of the representation of physical quantities through Hermitian operators'' (''Bedeutung der Repräsentation von physikalischen Größen durch Hermitesche Formen''). It discusses, among other things, the paradox that the angular momentum along each of the three coordinate axes can only take the values ##\pm 1## (in units of ##\hbar/2##) – but so can the angular momentum along any other direction, which is inconsistent with the algebra. This shows the need for proper interpretation. He then introduces the ensemble interpretation (ensemble = ''Schwarm'') in pure and mixed states, and resolves the paradox in the well-known statistical way. (Thus – @bhobba, @atyy – the ensemble interpretation starts at least with Weyl 1927, and not only with Ballentine 1980!)
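Weyl's paradox can be checked numerically. The sketch below is my own illustration, not from Weyl's paper: it verifies that the spin component along every direction has eigenvalues ##\pm 1## (in units of ##\hbar/2##), while a definite state fixes only the statistics of the outcomes.

```python
import numpy as np

# Pauli matrices: spin components in units of hbar/2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

rng = np.random.default_rng(0)
for _ in range(5):
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)                # a random unit direction
    S_n = n[0]*sx + n[1]*sy + n[2]*sz     # spin component along n
    # the eigenvalues are +-1 for *every* direction n
    assert np.allclose(np.sort(np.linalg.eigvalsh(S_n)), [-1, 1])

# The statistical resolution: a state fixes only the statistics.
# For the state "up along z", the expectation of S_n is n_z, and the
# probability of the outcome +1 is (1 + n_z)/2 -- not a sharp value.
up_z = np.array([1, 0], dtype=complex)
exp_val = np.real(up_z.conj() @ S_n @ up_z)
assert np.isclose(exp_val, n[2])
```

Demanding sharp values ##\pm 1## along all axes at once is what is inconsistent with the algebra; the ensemble reading keeps only the expectation values and outcome probabilities, which the computation above reproduces.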
The map from theory to experiment is stated not as part of the theory but as ''the assumption by Goudsmit and Uhlenbeck, which has proven itself well'' [Goudsmit, S. and Uhlenbeck, G.E., 1926. Die Kopplungsmöglichkeiten der Quantenvektoren im Atom. Z. Physik A 35 (1926), 618-625.] – but for electrons, and it was formulated in purely spectroscopic terms. Weyl applies it to a Stern-Gerlach-like experiment (with electrons in place of the original silver atoms).
Why was he allowed to do that? One map from theory to experiment was given through spectroscopy, another map was given through Stern-Gerlach for silver atoms. From these, Weyl created by analogy (not by theory) a third map for electrons in the Stern-Gerlach-like experiment. Thus the map changed.
Moreover, there are many more experiments related to angular momentum, and no quantum theory book I know points out how these are connected to the theory.
Nobel prizes are given to new ways of devising useful measurements at unprecedented accuracy, but nobody ever has suggested that each time the theory needs to be amended by mapping its mathematics to these new experimental possibilities. This mapping is described instead in papers published in experimental physics journals!
This is very typical. A theory book gives informally (not as part of the theory, since different expositions of the same theory use different examples) some key experiments in a very simplified description and relates these in an exemplary manner to theory, in order to create suggestive relations between theory and experiment. These are of the same nature as the (according to @Dale ''highly suggestive'') hints to reality given in a purely mathematical theory to make it intelligible. And they have precisely the same limitations that Dale pointed out:
"The mapping to experiment is separate from the mathematical framework itself, even when the names are highly suggestive."

By the same token, the mapping to experiment is separate from the physical theory itself. No theory book gives more than highly suggestive names and pointers to experiments. The connection to real experiments must be made by the experimenter, who understands the difference between a real experiment and a symbolic toy demonstration.
"I already gave in #153 and #167 the example of the axioms for natural numbers (plus a little finite set theory for the Cartesian product), and in #151 and #173 that of projective planes."

I was talking about an example using QM, since interpretations of QM is the topic of this thread.
"the interpretation is essentially forced upon us by the structure of the theory"

Except that, as you say, there are two interpretations that are consistent with the theory (the one starting with 0, and the one starting with 1).
And even that doesn't exhaust the possibilities: you can pick any integer (positive, negative, or zero) you like, and adopt it as the starting "natural number", and you will satisfy all of the axioms. In other words, there are an infinite number of possible isomorphisms to some "canonical" set of natural numbers (say the ones starting with 1, since those are the ones you say you prefer), each of the form ##x \rightarrow x + a##, with ##a## being any integer.
In other words, while it is certainly possible to discover a mapping between a mathematical model and experience in the process of understanding the mathematical model, there is no guarantee that the mapping you discover is the only mapping. So people both using the same mathematical model can still end up making different predictions, if they have not taken steps to ensure that they're both using the same mapping as well. For example, if you use 1-based counting and I use 0-based counting, we're likely to get confused trying to match up our counts if we don't realize the difference and make appropriate adjustments.
With all that said, I still come back to what I said in my previous post: none of this has anything to do with interpretations of QM, because different interpretations of QM all agree about which mapping between the mathematical model and experiment to use. (This mapping still depends on the specific experiment: in my previous post I gave the example of the same qubit mathematical model applying to both electron spins and photon polarizations.) The different QM interpretations only disagree about what story to tell about "what is going on behind the scenes", so to speak; but those stories have nothing to do with matching up the mathematical model to experiment. So I don't see how any of what's been said about matching up the mathematical model to experiment/experience has anything to do with QM interpretations, which is the topic of this thread.
"It might be helpful if you would give a concrete example."

I already gave in #153 and #167 the example of the axioms for natural numbers (plus a little finite set theory for the Cartesian product), and in #151 and #173 that of projective planes. What they demonstrate is the key to a correct understanding. (Though it doesn't quite answer your query – please be patient!)
Let me give a complete example inspired by Onaep, a little-known Italian contemporary of François Viète (who invented the notion of variables).
"I is a rebmun. If Z is a rebmun then ZI is a rebmun. If ZI = YI then Z = Y. Never ZI = I. Every rebmun is generated in this way." (Onaep)
In 1600, Dnikeded, an ambitious student of math, sits in Prof. Onaep's class, having been told that Onaep is an authority in the field of applied algebra. He is reading the above (as part of one of Onaep's exercises) for the first time and has not the slightest idea what Onaep is talking about. He has never heard of anything called a rebmun. Determined to figure out the meaning, he plays with the statements given. Well, at least he knows that I is a rebmun. Setting Z=I he discovers that II is a rebmun. Setting Z=II he discovers that III is a rebmun. Setting Z=III he discovers that IIII is a rebmun. Setting Z=IIII he discovers that IIIII is a rebmun. This reminds him of counting. Each new rebmun is obtained by adding an I to the previous rebmun. The process goes on forever…

Remembering what he had learnt already about algebra, Dnikeded noticed that the rebmuns could be interpreted in terms of stuff he was familiar with – numbers. If he identified I with 1 then he could equate II with 2, III with 3, IIII with 4, IIIII with 5, etc. ''Ah, this is a variant of the way we count the number of beers in the pub,'' he thought, ''except that each 5th bar would be drawn vertically, a minor issue that doesn't really change things.'' But well, there were more properties: if ZI=YI then Z=Y. ''True – if my friend and I both order a beer and then have the same number of beers, we must have had the same number of beers before. Thus Onaep's theory is predictive, and things come out correctly. Let me try the next item, never ZI=I; can I falsify my interpretation?'' He tries and finds no problem with it – I is too short to be of the form ZI. Dnikeded is left with the final statement to be figured out. He thinks about what he can generate so far: 1,2,3,4,5,6,7,8,9,10,11,… a never-ending list of numbers. But neither 0 nor fractions like 2/3. Also no negative numbers. Suddenly everything makes sense. ''Ah, I finally understand: rebmuns are nothing else than the numbers I have been familiar with since childhood, before I got interested in more advanced number theory!''
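Onaep's rules can be sketched in a few lines of code (the helper names are my own, not part of the story). The two readings correspond to Dnikeded's beer counting and to the marble counting of his friend Rotnac, described below:

```python
# Onaep's "rebmuns" as unary strings of I's.

def is_rebmun(s: str) -> bool:
    """'I' is a rebmun; if Z is a rebmun, so is ZI; nothing else is."""
    return len(s) >= 1 and set(s) == {"I"}

def successor(z: str) -> str:
    return z + "I"                       # Z -> ZI

# Injectivity (if ZI = YI then Z = Y) holds because appending "I" to
# distinct strings gives distinct strings; and ZI = I is impossible,
# since ZI has length at least 2.

def dnikeded(s: str) -> int:
    return len(s)                        # I -> 1, II -> 2, ... (counting beers)

def rotnac(s: str) -> int:
    return len(s) - 1                    # I -> 0, II -> 1, ... (counting marbles)

z = "I"
for _ in range(4):
    z = successor(z)                     # z is now "IIIII"

assert is_rebmun(z)
assert dnikeded(z) == 5 and rotnac(z) == 4   # both readings fit the axioms
```

Both maps respect the successor operation, which is why neither student could find an error in the other's interpretation.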
The mapping from theory to reality/experiment that Dale was conjuring up as belonging to the theory appeared out of nowhere!
The map is not provided by the theory, but it exists (for any theory that deserves to be called scientific). The map is created/discovered in the process of understanding the meaning of a scientific theory! Initially the theory is just a formal system, but once we understand it, it is related to our own experience. When we see that it matches experience and satisfies some empirical tests, we know that we have really understood! As all students of physics know, this may be quite some time after we heard the details of the theory and checked the logical consistency of the formal stuff we try to understand. We can solve the exercises long before we have a good feeling for the theory, i.e., a good map between theory and experience.
This is the generic situation of a theory without an interpretation problem – the interpretation is essentially forced upon us by the structure of the theory, no matter how things are named. The naming only simplifies the process of understanding.
But wait… When Dnikeded compared his solution of the exercise with that of his friend Rotnac, he noticed that the latter had another way of interpreting Onaep.
He had also played with the statements in Onaep's riddle and associated them with marbles in his pocket. He linked changing Z to ZI to putting a new marble into the pocket. Starting with the empty pocket that contained no marble, he got the correspondence I=0, II=1, III=2, IIII=3, etc. Both Dnikeded and Rotnac tried to figure out who made an error and whose interpretation was defective. But they couldn't find one. So they went to Onaep, asking for his advice. Onaep declared both interpretations to be valid.
Indeed, the modern concept of natural numbers (based on the Peano axioms) exists in two forms, and the two different traditions have two different interpretations, depending on whether they call 0 a natural number (e.g., friends of C++ and set theorists) or whether they don't (e.g., friends of Matlab and everyone before Cantor). I belong to the second category and believe that 0 is an unnatural number, since I have never seen someone count 0,1,2,3…, and it took ages to discover 0 – and many more centuries to declare it natural.
The two interpretations are related by the fact that ##x\to x+1## is an isomorphism between the two. This is analogous to the interpretation of classical mechanics, which is unique only up to the choice of an orthonormal coordinate system. In the latter case, a rigid motion provides the necessary isomorphism.
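The two readings and the isomorphism between them can be written out explicitly (a sketch; the function names just follow the story):

```python
# Two valid interpretations of the same formal system, and the
# isomorphism x -> x + 1 relating them (illustrative code only).

def dnikeded(r):
    """I=1, II=2, ...: count the strokes."""
    return len(r)

def rotnac(r):
    """I=0, II=1, ...: marbles added to an initially empty pocket."""
    return len(r) - 1

# The map x -> x + 1 carries Rotnac's reading onto Dnikeded's:
for r in ["I", "II", "III", "IIII", "IIIII"]:
    assert dnikeded(r) == rotnac(r) + 1
print("both interpretations agree, up to the shift x -> x + 1")
```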
Each time someone finds a new way of testing the theory, the mapping changes!

It might be helpful if you would give a concrete example. Perhaps this would qualify as one: the "theory" (for your meaning of that term) is the standard quantum theory of a qubit. Two different possible "mappings" are: interpreting the qubit theory as describing the spin of an electron in a Stern-Gerlach experiment; interpreting the qubit theory as describing the polarization of a photon passing through a beam splitter. Is that the sort of thing you have in mind?
If it is, then I think you are using the term "interpretation" differently from the way it is used when discussing the foundations of QM (which is the meaning of "interpretation" that this thread is supposed to be discussing). QM "interpretation" has nothing to do with which particular experiment you are analyzing. It has to do with what kind of story you tell about what is happening in the experiment. In the above example, say we pick the first "interpretation" in your sense: we interpret the quantum theory of the qubit as describing the spin of an electron. Then we still have different possible QM interpretations: a collapse interpretation says the spin of the electron collapses into an eigenstate when it passes through the Stern-Gerlach device; the many worlds interpretation says the electron's spin gets entangled with its momentum so it ends up in a superposition of two states, one with "up" spin coming out of the device in one direction, the other with "down" spin coming out of the device in a different direction. But there is no way to tell experimentally which of these "interpretations" is right, and these "interpretations" have nothing to do with how you match up the math of the standard quantum theory of a qubit with experimental observations.
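The qubit example above can be made concrete in a few lines. This is a sketch of the point only (the state and labels are illustrative, not from any particular experiment): the same formal content maps to two different experimental situations.

```python
# The same qubit mathematics mapped to two different experiments
# (illustrative numbers and labels; just a sketch of the point above).

psi = [0.6, 0.8]                      # amplitudes of a normalized qubit state
probs = [a * a for a in psi]          # Born-rule outcome probabilities

# Mapping 1: electron spin in a Stern-Gerlach apparatus.
spin = dict(zip(["spin up", "spin down"], probs))

# Mapping 2: photon polarization at a polarizing beam splitter.
polarization = dict(zip(["horizontal", "vertical"], probs))

# The formal content (the numbers) is identical; only the map from the
# framework to experimental quantities differs.
print(spin)
print(polarization)
```

Neither mapping settles the QM-foundations sense of "interpretation": collapse and many worlds tell different stories about the very same numbers in either experiment.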
Exactly. You have to go beyond the mathematical framework and map the experimental results to the labels. But this mapping is not part of the theory; it is done by the experimenter who wants to use the theory. Each time someone finds a new way of testing the theory, the mapping changes! In your case, there are many possibilities to do the mapping, hence a multitude of interpretations.
The part of the context that gives the mapping from the framework to experiment is part of the theory. This is precisely the point where your misuse of the term “theory” is causing problems.

Then please expand your theory fragment (or another one of your choice) to a complete theory that we can discuss.
even someone who is openly antagonistic to the standard definition admits that it is in fact the standard definition.

No. He does not even give it the status of a definition – he admits only that it is the standard sketch, and emphasizes this weak status!
But as far as I am aware the standard sketch remains the standard meaning of the terms and the scientific community has not adopted his “better informed” opinion.

You are indeed not aware of the state of the art! Not only Suppes, your only witness among the philosophers of science, but also Wikipedia, your only other authoritative source, testify against you:
A scientific theory is an explanation of an aspect of the natural world that can be repeatedly tested and verified in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results. […] theory […] describes an explanation that has been tested and widely accepted as valid.

It explicitly separates scientific theory (''an explanation of an aspect of the natural world'') and the relation to experiment (''the scientific method'').
Wikipedia cites other authorities to support its definition; none of them requires a map between theory and experiment as part of the theory:
Theories are structures of ideas that explain and interpret facts
The formal scientific definition of theory is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence.
A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment.
The logical positivists thought of scientific theories as statements in a formal language.
The semantic view of theories, which identifies scientific theories with models rather than propositions, has replaced the received view as the dominant position in theory formulation in the philosophy of science. A model is a logical framework […] One can use language to describe a model; however, the theory is the model (or a collection of similar models), and not the description of the model. A model of the solar system, for example, might consist of abstract objects that represent the sun and the planets. These objects have associated properties, e.g., positions, velocities, and masses.

This is exactly my view, except that they have ''logical framework'' where you had suggested the term ''mathematical framework''!
Engineering practice makes a distinction between "mathematical models" and "physical models"
In physics, the term theory is generally used for a mathematical framework

Maybe our dispute comes from the fact that you are an engineer and I am a mathematician and physicist!
But note that this discussion is in a physics forum, not an engineering forum.
Each time I measure two numbers a and b I can apply your theory and say, ''Ah, if I interpret a as the Daleage and b as the Neumaierian then their product is the Demystifier number. Interesting'' (or boring).

Exactly. You have to go beyond the mathematical framework and map the experimental results to the labels.
I agree. It is also not part of the theory. Thus your example is ridiculous.

The part of the context that gives the mapping from the framework to experiment is part of the theory. This is precisely the point where your misuse of the term “theory” is causing problems.
He calls your view the ''standard sketch'' meaning that this is (i) the (uninformed) usually heard opinion and (ii) a vast simplification. Then he gives his (better informed) critique of the standard sketch,

Yes. But as far as I am aware the standard sketch remains the standard meaning of the terms and the scientific community has not adopted his “better informed” opinion. I.e. even someone who is openly antagonistic to the standard definition admits that it is in fact the standard definition.
I can't help wondering whether, however interesting this thread is, it is metaphysics rather than physics.
If I am mistaken could you explain why.

It's about semantics with bits of philosophy of science. As to why – metaphysics "examines the fundamental nature of reality" [Wikipedia], but this discussion is about the borders between mathematics, theory, and interpretation (philosophy of science) and the common meaning of the last two terms (semantics).
Fair enough. So a is the Daleage, b is the Neumaierian, and c is the Demystifier number. Now we have a fully specified mathematical framework complete with names of concepts, axioms, and formulas. And yet, it is impossible from this alone to determine if an experiment validates or falsifies the theory. This is therefore a counter-example to your claim.

No. Each time I measure two numbers a and b I can apply your theory and say, ''Ah, if I interpret a as the Daleage and b as the Neumaierian then their product is the Demystifier number. Interesting'' (or boring).
This is the same as what happens when applying quantum mechanics to experiment. We deduce information about the wave function (a purely theoretical concept) by interpreting certain experimental activities as instances of the theory.
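The toy framework under debate fits in three lines of code (a sketch; the whimsical names are the ones coined in this very exchange): the Demystifier number is the product of the Daleage and the Neumaierian, and any pair of measured numbers can be fed into it.

```python
# The toy 'theory' from this exchange: c = a * b, with the names
# Daleage (a), Neumaierian (b), and Demystifier number (c) as coined
# above. The debated point: any pair of measured numbers can be mapped
# onto the framework, so the framework alone does not single out one
# mapping to experiment.

def demystifier_number(daleage, neumaierian):
    return daleage * neumaierian

# Two entirely different experiments, 'interpreted' by the same framework:
print(demystifier_number(2.0, 3.5))    # say, two measured voltages
print(demystifier_number(9.81, 0.5))   # say, an acceleration and a time
```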
The context is not part of the framework.

I agree. It is also not part of the theory. Thus your example is ridiculous.
The example of projective planes shows that the framework itself, if it is good enough, contains everything needed to apply it in a context appropriate for the theory. This holds even when the naming is different. The context has its structure and the theory has its structure, and anyone used to recognizing structure will recognize the unique way to match them such that the theory applies successfully.
An arbitrary mapping from the mathematical framework to experimental quantities is a valid interpretation iff it satisfies Callen's criterion. In a sufficiently mature theory (such as projective geometry) there is only one such mapping (apart from universal symmetries in the mathematical framework). Thus the mathematical framework alone determines the objective interpretation in this sense, the meaning of everything, and the falsifiability of the theory.
I completely reject this assertion. Certainly, the common usage of the term "theory" states that something in addition to the mathematical framework is required to make the mapping to experiment.

Yes, namely the experience of the experimenter. The relation between theory and experiment is far more complex than the few hints given in a book on theoretical physics. It is not the subject of such books but of books on experimental physics!
I also found a paper entitled "What is a scientific theory?" by Patrick Suppes from 1967 (Philosophy of Science Today) who says "The standard sketch of scientific theories-and I emphasize the word 'sketch'-runs something like the following. A scientific theory consists of two parts. One part is an abstract logical calculus … The second part of the theory is a set of rules that assign an empirical content to the logical calculus. It is always emphasized that the first part alone is not sufficient to define a scientific theory".
As he describes this as the "standard sketch" and as this also agrees with the Wikipedia reference and my previous understanding, then I take it that your definition of theory is not that which is commonly used.

But Suppes says there:
scientific theories cannot be defined in any simple or direct way in terms of other non-physical, abstract objects. […] To none of these questions do we expect a simple and precise answer. […] This is also true of scientific theories.

He calls your view the ''standard sketch'', meaning that this is (i) the (uninformed) usually heard opinion and (ii) a vast simplification. Then he gives his (better informed) critique of the standard sketch, which he disqualifies as ''highly schematic'' and ''relatively vague'', and refers to ''different empirical interpretations''. Thus he says that the same theory has different empirical interpretations, which therefore cannot be part of the theory!
It is difficult to impose a definite pattern on the rules of empirical interpretation.

Then he talks about ''models of the theory […] highly abstract'', which makes sense only if his view of theory is just the mathematical framework, which is the meaning he then uses throughout. On p. 62, he talks about ''the necessity of providing empirical interpretation of a theory''. This formulation makes sense only if one identifies ''theory = the formal part'' and treats the interpretation as separate! Then he goes on saying that the formulations in the standard sketch
have their place in popular philosophical expositions of theories, but in the actual practice of testing scientific theories a more elaborate and more sophisticated formal machinery for relating a theory to data is required. […] There is no simple procedure for giving co-ordinating definitions for a theory. It is even a bowdlerization of the facts to say that co-ordinating definitions are given to establish the proper connections between models of the theory and models of the experiment.

And then he discusses (starting at the bottom of p. 63) the morass one enters if one wants to take your definition seriously!
So the only clean and philosophically justified conceptual division is to have
I can't help wondering whether, however interesting this thread is, it is metaphysics rather than physics.
If I am mistaken could you explain why.
Regards Andrew
Now suppose that someone else develops another theory T2 that makes the same measurable predictions as T1. So if T1 was a legitimate theory, then, by the same criteria, T2 is also a legitimate theory. Yet, for some reason, physicists like to say that T2 is not a theory, but only an interpretation. But how can it be that T1 is a theory and T2 is only an interpretation? It simply doesn’t make sense.

The scientific approach requires that a prediction is made before the experiment. So I can see a way in which T1 is considered as something more than T2. Say T1 is verified by experiment (T1 made its predictions before the experiment) but T2 is developed later, knowing the experimental results with which it has to agree, and it does not produce any new predictions. Then T1 is verified but T2 is not, even though they give exactly the same predictions.
And there is good reason for that rule that predictions have to be produced before experiment – people are very good at cheating themselves.
There is another thing I would like to add concerning the discussion of the topic. A theory has to include the things needed for it to produce testable predictions. But then QM, as a statistical theory, makes this task difficult and ambiguous. There is a lot of event-based reasoning on the experimental side before we get statistics (consider coincidence counters, for example). On the one hand, QM as a statistical theory cannot replace that event-based classical reasoning, but on the other hand it overlaps with classical theories and is more correct, so it sort of should replace it.
So to me it seems that without something we usually call "interpretation", the connection of QM to experiments remains somewhat murky.
Apart from the wrong parts, I still think that his book is the best graduate general QM textbook that exists.

So do I, and many here do also. But it can polarize – a number of people here are quite critical of it. I guess it's how you react to the wrong bits – they are there for sure but for some reason do not worry me too much – probably because there are not too many and they are easy to spot and ignore. Of greater concern to me personally is Ballentine's dismissal of decoherence as important in interpretations – he thinks decoherence is an important phenomenon, just of no value as far as interpretations go:
https://core.ac.uk/download/pdf/81824935.pdf
'Decoherence theory is of no help at all in resolving Schrödinger’s cat paradox or the problem of measurement. Its role in establishing the classicality of macroscopic systems is much more limited than is often claimed.'
That however would be a thread all by itself :rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes::rolleyes:
Thanks
Bill
I wasn't thinking of finite additivity at all. I think modern Bayesians use Kolmogorov, eg. http://www.stat.columbia.edu/~gelman/research/published/philosophy.pdf is written by a Bayesian and a frequentist (maybe I'm oversimplifying), and both accept Kolmogorov. Just that in general, Bayesian thinking is valued for its intellectual framework of coherence, eg. http://mlg.eng.cam.ac.uk/mlss09/mlss_slides/Jordan_1.pdf. Also, the concept of exchangeability and the representation theorem are generally taught nowadays, at least in statistics/machine learning: https://people.eecs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture1.pdf

This kind of thing is the problem… lots of ambiguous language. I inferred finite additivity many posts ago… but this wasn't the right inference at all, it seems.
Exchangeability is a big umbrella but really is just specializing symmetric function theory to probability. Off the top of my head, I would have said typical use cases are really martingale theory (e.g. Doob backward martingale). But yes, graphical models and a whole host of other things can harness this. We're getting in the weeds here… a lot of big names have worked on exchangeability.
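The exchangeability idea mentioned above has a simple generative sketch in the de Finetti spirit (illustrative code, not tied to any of the linked lecture notes): draw a latent bias once, then flip conditionally iid coins.

```python
import random

# A sketch of de Finetti-style exchangeability: draw a bias p once, then
# generate conditionally iid Bernoulli(p) flips. The resulting sequence
# is exchangeable -- its joint law depends only on the number of ones --
# though the flips are not unconditionally independent.

random.seed(0)

def exchangeable_sequence(n):
    p = random.random()                     # latent bias from a U(0,1) prior
    return [1 if random.random() < p else 0 for _ in range(n)]

seq = exchangeable_sequence(10)
print(seq, "ones:", sum(seq))
```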
I have again lost the scent of how this is somehow related to a different kind of probability advocated by de Finetti. I have unfortunately remembered why I dislike philosophy these days.
– – – –
re: Bayes stuff… it is in some ways my preferred way of thinking about things. But people try to make it into a cult, which is unfortunate. As you've stated correctly, frequentists and bayesians are still using the same probability theory — they just meditate on it rather differently.
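A minimal illustration of that last point (a sketch with made-up data): both camps compute within the same Kolmogorov probability model of coin flips; they differ only in what they do with it.

```python
# Frequentist and Bayesian answers from the same Kolmogorov probability
# model (a sketch, with made-up data): 7 heads in 10 flips of a coin
# with unknown bias p.
heads, n = 7, 10

# Frequentist point estimate: maximum likelihood, heads/n.
p_mle = heads / n

# Bayesian point estimate: with a uniform Beta(1,1) prior the posterior
# is Beta(heads+1, n-heads+1), whose mean is (heads+1)/(n+2).
p_bayes = (heads + 1) / (n + 2)

print(p_mle, p_bayes)   # 0.7 and roughly 0.667
```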
No. There is a huge difference between a formula (which is meaningless outside of a mathematical framework) and a mathematical framework itself, which is a logical system giving a complete set of definitions and axioms within which formulas become meaningful.

Yes, that is a good point. I concede this. So for the previous example the framework would have to include the standard axioms of arithmetic with real numbers.
While the names of concepts are in principle arbitrary, once chosen, they mean the same thing throughout (unlike variables) – to the extent that one can understand math texts written in a different language by restoring the familiar wording, without knowing the language itself.

Fair enough. So a is the Daleage, b is the Neumaierian, and c is the Demystifier number. Now we have a fully specified mathematical framework complete with names of concepts, axioms, and formulas. And yet, it is impossible from this alone to determine if an experiment validates or falsifies the theory. This is therefore a counter-example to your claim.
In a mature theory there is only one way to do the mapping, given the mathematical framework (with axioms, definitions, and results). Unlike in your caricature of a mathematical framework, which means nothing at all without context.

The context is not part of the framework. That should be obvious. The very meaning of "context" implies looking outside of something to see how it fits into a broader realm beyond itself. The whole purpose of the caricature was to remove the context and look only at the mathematical framework itself. From that "toy" exercise it is clear that the framework is insufficient for experimental testing.
Thus the mathematical framework alone determines the objective interpretation in this sense

I completely reject this assertion. Certainly, the common usage of the term "theory" states that something in addition to the mathematical framework is required to make the mapping to experiment.
While that is true, within the mathematical framework itself the names are merely arbitrary symbols. This is why a, b, and c are perfectly valid elements of the mathematical framework of a scientific theory.

No. There is a huge difference between a formula (which is meaningless outside of a mathematical framework) and a mathematical framework itself, which is a logical system giving a complete set of definitions and axioms within which formulas become meaningful. While the names of concepts are in principle arbitrary, once chosen, they mean the same thing throughout (unlike variables) – to the extent that one can understand math texts written in a different language by restoring the familiar wording, without knowing the language itself.
The axioms and definitions carry the complete intrinsic meaning. With Peano's system of axioms you recover everywhere in the universe, no matter which language is used, the same concept of counting, no matter how it is worded, and this is enough to reconstruct the meaning, and then apply it to reality by devising experiment to check its usefulness.
The mapping to experiment is separate from the mathematical framework itself, even when the names are highly suggestive.
This becomes particularly important when different theories use the same name for different concepts. The mapping to experiment is different because the names are merely arbitrary symbols, and the same name does not force the same mapping for different theories.

Most experiments are never mapped to theory by the content of a book on theoretical physics, but they are used to test these theories. This is possible precisely because the theory cannot be mapped arbitrarily to experimental physics without becoming obviously wrong. In a mature theory there is only one way to do the mapping, given the mathematical framework (with axioms, definitions, and results). Unlike in your caricature of a mathematical framework, which means nothing at all without context.
Again, I understood from our previous discussion that this mapping from the mathematical symbols to experimental quantities is what we were calling the objective interpretation.

An arbitrary mapping from the mathematical framework to experimental quantities is a valid interpretation iff it satisfies Callen's criterion. In a sufficiently mature theory (such as projective geometry) there is only one such mapping (apart from universal symmetries in the mathematical framework). Thus the mathematical framework alone determines the objective interpretation in this sense, the meaning of everything, and the falsifiability of the theory. Precisely this is the reason why there are no discussions about interpretation in most good theories.
But the current theory of quantum mechanics is underspecified since it uses the undefined notions of measurement and probability in its axioms and hence leaves plenty of room for interpretation.
http://schroedingersrat.blogspot.com/2013/11/do-not-work-in-quantum-foundations.html

Very often, people come and tell me: “Rat, you are so magnificent! Here you are, a former magician-actor-tap dancer-model-ninja-vampire-master of the universe-Madonna. How come that someone with your obvious talents and –ehem!- sexy muscular body is wasting his time in foundational research? (blink, blink)”.
:biggrin::biggrin::biggrin:
Apart from the wrong parts, I still think that his book is the best graduate general QM textbook that exists. And as with Renner, I always have much more respect for being non-trivially wrong than for being trivially right.

I guess the Frauchiger and Renner paper is more non-trivially wrong from the Bohmian point of view (from Copenhagen their setup just seems wrong). So perhaps that's another point in favour of forgiving them – they are unconscious Bohmians :)
Apart from the wrong parts, I still think that his book is the best graduate general QM textbook that exists. And as with Renner, I always have much more respect for being non-trivially wrong than for being trivially right.

I guess we differ on whether they are trivially wrong or non-trivially wrong. To me it seems that both Ballentine and Frauchiger and Renner are interested in the wrong problems in quantum foundations, and never properly address the measurement problem (the only problem of real worth in quantum foundations).
http://schroedingersrat.blogspot.com/2013/11/do-not-work-in-quantum-foundations.html
Incidentally, the papers by Renner mentioned by Schroedinger's rat did address things closer to the measurement problem.
And what good thing did Ballentine do equivalent to Renner's quantum de Finetti contribution?

Apart from the wrong parts, I still think that his book is the best graduate general QM textbook that exists. And as with Renner, I always have much more respect for being non-trivially wrong than for being trivially right.
If you can forgive Renner, maybe you could also forgive Ballentine for his misunderstanding of collapse, decoherence and the quantum Zeno? :biggrin:

I guess I forgive Renner more easily because I didn't spend much time on Frauchiger and Renner (I thought it was like perpetual motion machines), and you and DarMM sorted it out for me. OTOH I wasted so much time with Ballentine because he was rated so highly on this forum. And what good thing did Ballentine do equivalent to Renner's quantum de Finetti contribution?
BTW, I haven't forgiven Renner yet :biggrin:
Hmmm, Renner is one of the people who did a quantum version of the de Finetti theorem. Maybe that is enough to forgive him the Frauchiger and Renner papers :P
https://arxiv.org/abs/quant-ph/0512258

If you can forgive Renner, maybe you could also forgive Ballentine for his misunderstanding of collapse, decoherence and the quantum Zeno? :biggrin:
Hmmm, Renner is one of the people who did a quantum version of the de Finetti theorem. Maybe that is enough to forgive him the Frauchiger and Renner papers :P
https://arxiv.org/abs/quant-ph/0512258
In my opinion there's a major lack of focus in your post. My comment about de Finetti had to do with the axioms used (finite vs countable additivity). The axioms selected have really nothing to do with QM interpretations.

Some different interpretations of QM use different axioms, so I don't see how this is true.
As for the rest of your post, I don't understand what's really wrong with @A. Neumaier 's references or why discussion should be confined to them (apparently introducing any new references is "unfocused"). I'm not going to go on and on with this, it's a simple fact that there are several interpretations of probability with debate and discussion over them. The only way you seem to be getting around this is by saying anybody referenced is wrong in some way, Jaynes is "worrisome", Popper is "just a philosopher", @A. Neumaier 's references are just "general audience write ups". There simply is disagreement over the interpretation of probability theory, I don't really see why you'd debate this.
Also I really don't get why referencing Jaynes is "worrisome", he's polemical and there are many topics not covered in his book and gaps in what his treatment can cover, as well as his errors in relation to Bell's theorem as @atyy said (it's probable he didn't understand Bell's work). However it's a well regarded text, so I don't see the problem with simply referencing him.
I don't think Jaynes had a complete formulation of probability but that isn't the main problem. He's perfectly fine to read if you already know a lot about probability. Part of the problem is that people who don't know much about probability read his book and then they over-fit their understanding of probability theory to his polemic. The fact that you keep bringing him up is very worrisome in this regard.

One failure of Jaynes' relevant for a quantum thread is that he did not understand the Bell theorem (yeah, he would have been banned on PF as a crackpot) …
I didn't understand the italicized part. Countable additivity is typically used because it is mathematically convenient. I'm only aware of a small handful of serious probability works that use finite additivity (e.g. Dubins and Savage's book in addition to de Finetti). I skimmed the link and don't really get your comment. When people say things like
"the ratio of densities is a special, infinitesimal value of order ##10^{−100}## in order for the two densities to coincide today. "
I infer that mathematical subtleties don't have much to do with it.
Perhaps you are referring to something else related to de Finetti?

Just that in general, Bayesian thinking is valued for its intellectual framework, and used in bits and pieces in statistics/machine learning. Also, the concept of exchangeability and the representation theorem are generally taught nowadays, at least in statistics/machine learning: https://people.eecs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture1.pdf
Since this is a quantum thread, let's add https://arxiv.org/abs/quant-ph/0104088 as another example of de Finetti's influence.
…Bayesians can use the Kolmogorov axioms, just interpreted differently. (And yes, interpretation is part of Foundations, but the Kolmogorov part is settled.)
I think interpretation is even settling, with de Finetti having won in principle, but in practice one uses whatever seems reasonable, or both as this cosmological constant paper did: https://arxiv.org/abs/astro-ph/9812133.

I didn't understand the italicized part. Countable additivity is typically used because it is mathematically convenient. I'm only aware of a small handful of serious probability works that use finite additivity (e.g. Dubins and Savage's book in addition to de Finetti). I skimmed the link and don't really get your comment. When people say things like
"the ratio of densities is a special, infinitesimal value of order ##10^{−100}## in order for the two densities to coincide today. "
I infer that mathematical subtleties don't have much to do with it.
Perhaps you are referring to something else related to de Finetti?
This is just a difference in the use of the word "Foundations", which is sometimes used to include interpretations.
Also see the parts in bold.
"There is no debate in Foundations of probability if we ignore the guys who say otherwise and one of them lost anyway, in my view"
Seems very like the kind of thing I see in QM Foundations discussions.
"Ignore Wallace's work on the Many Worlds Interpretation it's a mix of mathematics and philosophical polemic" (I've heard this)
"Copenhagen has been shown to be completely wrong, i.e. Bohr lost" (also heard this)

In my opinion there's a major lack of focus in your post. My comment about de Finetti had to do with the axioms used (finite vs countable additivity). The axioms selected have really nothing to do with QM interpretations.
I think if I asked a bunch of subjective Bayesians I'd get a very different view of who "won" and "lost".
Jaynes is regarded as a classic by many people I've spoken to, I'm not really sure why I should ignore him.
I don't know why we're talking about best seller general audience books.

As I've already said, the books mentioned in posts 109 and 111 did not include Jaynes' book. I'm trying to be disciplined and actually keep the line of conversation coherent. Jaynes' views were said to be addressed by a different author and that is what my posts have been about.
I never asserted anything was a "best seller general audience book" and I don't think sales have much to do with anything here. I did say that the books mentioned were not math books and they were aimed at a general audience.
Bayesians are in general fine with Kolmogorov formulation of probability. I don't know what you're talking about here… it seems @atyy already addressed this.
"Foundations" here includes interpretations, so "Kolmogorov vs Jaynes" for example was meant in terms of their different views on probability. There are others like Popper, Carnap. Even if you don't like the word "Foundational" being applied it doesn't really change the basic point.

I've actually read a couple of Popper books, but I don't care about what he has to say about probability – he was not mathematically sophisticated enough. I struggle to figure out why you brought up philosophers here. It's something of a red flag. If you brought up, say, the views of some mixture of Fisher, Wald, Doob, Feller and some others, that would be a very different matter.
Also note that in some cases there is disagreement over which axioms should be the Foundations. Jaynes takes a very different view from Kolmogorov here, eschewing a measure theoretic foundation.

I don't know what this has to do with anything. Measures are the standard analytic glue for probability. That's the settled point. There are also non-standard analysis formulations of probability (e.g. Nelson). The book I referenced by Vovk and Shafer actually tries to redo the formulation of probability, getting rid of measure theory in favor of game theory. The mechanism is betting. It's a work in progress designed to try to get people to think in a different way.
I don't think Jaynes had a complete formulation of probability but that isn't the main problem. He's perfectly fine to read if you already know a lot about probability. Part of the problem is that people who don't know much about probability read his book and then they over-fit their understanding of probability theory to his polemic. The fact that you keep bringing him up is very worrisome in this regard.
> The naming provides a mapping of mathematical concepts to concepts assumed already known

My understanding of your previous comments was that this mapping is precisely what we were calling the “objective interpretation”, not the mathematical framework. Otherwise the objective interpretation is empty. I am fine with that, but it is a change from the position I thought you were taking above.
> So how can it be objective?

“Objective” was your word, not mine. I am not sure why you are complaining to me about your own word.
> a children, b apples, and c ways of pairing children and apples

Again, I understood from our previous discussion that this mapping from the mathematical symbols to experimental quantities is what we were calling the objective interpretation.
> What you are describing here is more than just the mathematical framework. That is the mathematical framework plus a mapping to experiment.

The names are traditionally part of the mathematical framework, not a separate interpretation. Look at any mathematical theory with some relation to ordinary life, e.g., the modern axioms for Euclidean geometry or for real numbers, or Kolmogorov's axioms for probability!
The naming provides a mapping of mathematical concepts to concepts assumed already known (i.e., to informal reality, as I use the term). This part is the objective interpretation and is independent of experiment. It is necessary for a good theory, since the relation between a mathematical framework and its physics must remain the same once the theory is mature. A mature scientific theory fixes the meaning of the terms uniquely on the mathematical level, so that there can be no scientifically significant disagreement about the possible interpretation, using just Callen's criterion for deciding upon the meaning.
On the other hand, experimental art changes with time and with improving theory. We now have many more ways of measuring things than 100 years ago, which usually need theory even to be related to the old notions. There are many thousands of experiments, and new and better ones are constantly devised; none of these experiments appears in the objective interpretation part of a theory, at best only a few paradigmatic illustrations!
The theories with fairly large and still somewhat controversial interpretation discussions are probability theory, statistical mechanics, and quantum mechanics. It is no coincidence that precisely in these cases the naming does not suffice to pin down the concepts well enough to permit an unambiguous interpretation. Hence the need arose to add more interpretive material. Most of this extra material is controversial, hence the many interpretations. The distinction between subjective and objective interpretation does not help here, because people do not agree on which meaning deserves the label "objective"!
Please reread my post #147 in this light.
> I read your posts, but you are using the words in such a strange way that reading doesn’t help. What I wrote above is directly what I got from reading it.

Well, I had said:
> As I said, in simple cases, the interpretation is simply calling the concepts by certain names. In the case of classical Hamiltonian mechanics, ##p## is called momentum, ##q## is called position, ##t## is called time, and everyone is supposed to know what this means, i.e., to have an associated interpretation in terms of reality.

I cannot understand how this can be misinterpreted after I had explained that, for me, reality just means the connection to experiment.
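To make the naming concrete, here is a minimal sketch (the harmonic oscillator, its mass, and its spring constant are my illustration, not from the thread) in which ##q##, ##p##, and ##t## keep their traditional names while Hamilton's equations are integrated numerically:

```python
def simulate_oscillator(q0, p0, m=1.0, k=1.0, dt=1e-3, steps=1000):
    """Integrate Hamilton's equations for H = p^2/(2m) + k*q^2/2 with
    symplectic Euler; q is 'position', p is 'momentum', and t ('time')
    advances in increments of dt."""
    q, p = q0, p0
    for _ in range(steps):
        p -= k * q * dt        # dp/dt = -dH/dq = -k q
        q += (p / m) * dt      # dq/dt = +dH/dp = p/m
    return q, p

q, p = simulate_oscillator(q0=1.0, p0=0.0)
# The energy H is approximately conserved along the trajectory:
print(abs((p**2 / 2 + q**2 / 2) - 0.5) < 1e-2)  # True
```

Nothing in the code distinguishes "momentum" from "position" except the labels and their roles in the equations; the names carry the interpretive content.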
> Anything necessary to predict the outcome of an experiment is objective.

But in probability theory, statistical mechanics, and quantum mechanics, different people differ in what they consider necessary. So how can it be objective?
> It is a perfectly valid mathematical framework, one of the most commonly used ones in science.

No. ##ab=c## is just a formula. Without placing it in a mathematical framework it does not even have an unambiguous mathematical meaning.
The mathematical framework to which it belongs could be perhaps Peano arithmetic. This contains much more, since it says what natural numbers are (in purely mathematical terms), how they are added and multiplied, and that the variables denote arbitrary natural numbers.
Then ##ab=c## gets (among many others) the following experimental meaning: whenever you have ##a## children, ##b## apples, and ##c## ways of pairing children and apples, then the product of ##a## and ##b## equals ##c##. This is testable and always found correct. (If not, one questions the counting procedure and not the theory.)
Thus no interpretation is needed beyond the mathematical framework itself. Every child understands this.
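The children-and-apples meaning of ##ab=c## can even be checked mechanically; a minimal sketch (the function name is mine) that enumerates the pairings and compares their count with the product:

```python
from itertools import product

def count_pairings(a, b):
    """Count the ways of pairing one of a children with one of b apples."""
    children, apples = range(a), range(b)
    return len(list(product(children, apples)))

# The experimental meaning of ab = c: the count of pairings equals the product.
for a in range(1, 6):
    for b in range(1, 6):
        assert count_pairings(a, b) == a * b
print(count_pairings(2, 3))  # 6
```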
> As already said, the mathematical framework of a successful physical theory must have enough of its important concepts labelled not a, b, c but with sensible concepts from the world of experimental physics

What you are describing here is more than just the mathematical framework. That is the mathematical framework plus a mapping to experiment. This mapping to experiment is what distinguishes a scientific theory from a mathematical framework. That is the objective interpretation.
> I read your posts, but you are using the words in such a strange way that reading doesn’t help. What I wrote above is directly what I got from reading it.
Nonsense. Please read my whole posts and don't make ridiculous arguments with meaningless theories!
As already said, the mathematical framework of a successful physical theory must have enough of its important concepts labelled not a, b, c but with sensible concepts from the world of experimental physics, so that the subjective part of the interpretation is constrained enough to be useful.
For example, take the mathematical framework defined by ''Lines are sets of points. Any two lines intersect in a unique point. There is a unique line through any two points.'' (This defines the mathematical concept of a projective plane.) This is sufficiently constrained that every schoolboy knows without any further explanation how to apply it to experiment, and can check its empirical validity. There are some subjective interpretation questions regarding parallel lines, whose existence would be thought to falsify the theory; but the theory is salvaged by allowing, in the subjective interpretation, points at infinity. Another, more sophisticated subjective interpretation, treating lines as great circles on a sphere (indistinguishable from straight lines by a poor man's experimental capabilities), would be falsified, since there are multiple such lines through antipodal points.
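The two incidence axioms can be tested directly on a concrete finite model; a minimal sketch (the function name is mine) using the standard seven-point Fano plane, the smallest projective plane:

```python
from itertools import combinations

# Points 0..6 and the standard line set of the Fano plane.
LINES = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
         {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

def is_projective_plane(points, lines):
    """Check the two incidence axioms from the framework above."""
    # Any two lines intersect in a unique point.
    lines_ok = all(len(l1 & l2) == 1 for l1, l2 in combinations(lines, 2))
    # There is a unique line through any two points.
    points_ok = all(sum(1 for l in lines if {p, q} <= l) == 1
                    for p, q in combinations(points, 2))
    return lines_ok and points_ok

print(is_projective_plane(set(range(7)), LINES))  # True
```

Dropping one line from `LINES` leaves some point pairs with no common line, so the check correctly fails for that mutilated model.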
This shows that there is room for nontrivial subjective interpretation, and that the discussion of their testability is significant, as it may mean progress: adding more details to the theory in a way that eliminates the undesired interpretations.
> you cannot do an experiment with only that “experimental meaning”. It is insufficient for applying the scientific method.

What did you expect? A mathematical framework of 4 characters is unlikely to give much information about experiment. It says no more than what I claimed.
Most theories are inconsistent with experiment; only a few successful ones are consistent with it. These are the ones the philosophy of science is about, and they are typically of textbook size!
> Suppose I do an experiment and measure 6 values: 1, 2, 3, 4, 5, 6. Using only the above framework and your supposed “experimental meaning” do the measurements verify or falsify the theory?

They verify the theory if you measured a=2, b=3, c=6, and they falsify it if you measured a=2, b=3, c=5. Given your framework, both are admissible subjective interpretations. Your framework is too weak to constrain the subjective interpretation, so some will consider the theory correct, others invalid, and still others will think it incomplete and in need of better foundations. The future will tell whether your new theory ##ab=c## will survive scientific practice….
Just as in the early days of quantum mechanics, where the precise content of the theory was not yet fixed, and all its (subjective, since mutually disagreeing) interpretations had successes and failures – until a sort of (but not unanimous) consensus was achieved.