Exploring the Meaning of Ontology: Easy for Kids, Hard for Quantum Physicists

In summary, ontology is the account of what a given theory postulates to exist in the world. It can be understood by children in terms of their everyday experiences, yet many mature physicists struggle with it. It is a point of contention between realists and anti-realists, and the term is often used sloppily by physicists. Ultimately, ontology is binary: it refers to the existence of things in the real world, rather than merely in our imagination.
  • #141
martinbn said:
Let me ask you the same. Do you then think that your engagement on the quantum foundations forum is a waste of time? Or if not, what's your motivation for that?
I will not answer you, because you will think it's nonsense anyway. :-p
 
  • #142
Demystifier said:
I will not answer you, because you will think it's nonsense anyway. :-p
You have often responded like this; what do you expect me to think?
 
  • #143
martinbn said:
You have often responded like this; what do you expect me to think?
I expect you to think that all I say is nonsense, so why bother.
 
  • #144
Demystifier said:
I expect you to think that all I say is nonsense, so why bother.
I am not the only one reading. You are refusing to give anyone an answer about what your motivation is! But anyway, I will not bother you anymore.
 
  • #145
martinbn said:
You are refusing to give anyone an answer about what your motivation is!
Nobody else asked me that.
 
  • #146
vanhees71 said:
probabilities as a way to make decisions, e.g., whether after some probability analysis you decide to gamble at the casino or not, but this has nothing to do with physics.
I think this is a superficial association; if this were all there is to it, I would agree with you. But I, and probably others, think there is a deeper way to the understanding. The gambling is an analogy; it's not supposed to be taken literally that agents "gamble". The analogy, IMO, is rather that physical interaction is a random walk: no one is making a conscious "bet", the "bets" are natural and unavoidable. But the gambling analogy is still useful as a thinking tool; it's not meant to suggest that physical interactions are like speculations.

/Fredrik
 
  • Like
Likes vanhees71
  • #147
Fra said:
The gambling is an analogy; it's not supposed to be taken ...
It is more than just an analogy: gambling and game theory were the origin of probability theory, and gambling is still the hydrogen atom of probability theory.
 
  • #148
gentzen said:
It is more than just an analogy: gambling and game theory were the origin of probability theory, and gambling is still the hydrogen atom of probability theory.
Sure, I just commented on what I was guessing was the objection that somehow "conscious agents" are involved in the gambling. That's not how I see it, just as "conscious observers" are not what CI is about. I have an evolutionary perspective, which I see as a "natural" form of gambling, or "spontaneous random walks", with traits of self-organisation. I don't have a problem with the gaming analogy myself, but just as some confuse the CI interpretation with conscious observers, I think the gaming perspective can also be seen differently.

/Fredrik
 
  • Like
Likes vanhees71
  • #149
vanhees71 said:
Of course you can use a Bayesian interpretation of probabilities as a way to make decisions, e.g., whether after some probability analysis you decide to gamble at the casino or not, but this has nothing to do with physics. In physics all you can decide is which observable(s) you want to observe and how to construct a measurement device to do so. Then you can model this setup within QT and test the probabilistic predictions against your experimental data on ensembles
If you're interested, and this doesn't really have much to do with QBism, this is handled in Bayesian theory by de Finetti's theorem, which replicates your intuition here. Basically, if you have an ensemble with members labelled ##i = 1, \ldots, n##, each with a possible outcome ##x_i## for some observation, and you assume members of the ensemble can be exchanged:
##P(x_{1}, \ldots x_{i}, \ldots, x_{j}, \ldots , x_{n}) = P(x_{1}, \ldots x_{j}, \ldots, x_{i}, \ldots , x_{n})##, i.e. it is not important that one member or another gave a particular outcome, then as ##n \rightarrow \infty## you have:
##P(x_{1}, \ldots , x_{n}) \approx \int \left[ \prod_{i=1}^{n} \rho(x_{i}) \right] P(\rho) \, d\rho##
where ##\rho(x)## is the probability distribution for the outcomes, i.e. a model, and ##P(\rho)## is the probability of that model being correct.

In plainer terms: under the assumption that you are dealing with a large ensemble of identically prepared systems, beliefs/credences about the outcomes reduce to standard hypothesis testing, where one is verifying models ##\rho(x)##.

So it's built into Bayesian theory that testing on large identical ensembles is testing scientific models, but it also allows you to do interesting things like capture errors when you have only medium-sized ensembles, i.e. when you're "halfway" between pure "gambling" (as in a casino or horse-racing for one-shot events) and the other extreme of scientific hypothesis testing on large ensembles. In this way your point about the difference between a casino and an experimental ensemble is made rigorous as two different limiting cases of the general theory of probability.
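To make the two limits concrete, here is a minimal sketch (a hypothetical coin-flip ensemble with a flat prior over candidate models; the names and numbers are illustrative, not taken from any post) of how the Bayesian posterior ##P(\rho)## over models concentrates as the exchangeable ensemble grows:

```python
import numpy as np

# Minimal sketch: candidate models rho for a binary observable are
# parametrized by p = P(x=1); a flat prior P(rho) is updated on n
# exchangeable outcomes, and the posterior narrows as n grows.
rng = np.random.default_rng(0)
models = np.linspace(0.01, 0.99, 99)        # candidate models rho
prior = np.ones_like(models) / len(models)  # flat prior P(rho)
true_p = 0.7                                # "nature's" model (illustrative)

for n in [10, 100, 1000]:
    k = (rng.random(n) < true_p).sum()      # successes among x_1..x_n
    # Likelihood of the data under each model: p^k * (1-p)^(n-k)
    log_like = k * np.log(models) + (n - k) * np.log(1 - models)
    post = prior * np.exp(log_like - log_like.max())
    post /= post.sum()
    mean = (models * post).sum()
    std = np.sqrt(((models - mean) ** 2 * post).sum())
    print(f"n={n:5d}: posterior mean={mean:.3f}, std={std:.3f}")
```

For small ##n## the posterior stays broad (the "gambling" regime); for large ##n## it collapses onto a single model, which is hypothesis testing in the usual sense.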
 
  • Like
Likes vanhees71
  • #150
vanhees71 said:
But the fundamental theory of gravity won't be found by such kinds of speculations, and I fear not even by pure theoretical-physics work. Without any guidance from observations, there's perhaps no chance to get an idea of what such a quantum theory of gravitation might look like.
It may not be quantum.
 
  • Like
Likes vanhees71
  • #151
vanhees71 said:
In physics all you can decide is which observable(s) you want to observe and how to construct a measurement device to do so. Then you can model this setup within QT and test the probabilistic predictions against your experimental data on ensembles.
Are you saying that a physicist uses QT predictions to set up an experiment that proves the QT predictions are correct?
 
  • #152
Of course, to test QT you need to use QT to set up a corresponding experiment. If you want to test one theory against another, in our context most interestingly any local deterministic hidden-variable theory à la Bell against Q(F)T, you have to use both theories to build up your experiment. It's the great merit of Bell's idea that it enables scientists rather than philosophers to decide between these two possible descriptions of Nature by observational facts rather than the prejudices of some random philosophers. The result is clearly in favor of Q(F)T rather than the prejudices of EPR (where E himself didn't like this unjustly famous paper very much).
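As a back-of-the-envelope illustration of what such an experiment decides (a sketch under textbook assumptions, not part of the post above): in the CHSH form of the Bell inequality, any local deterministic hidden-variable theory obeys ##|S| \le 2##, while QT for the singlet state, with correlation ##E(a,b) = -\cos(a-b)##, reaches ##2\sqrt{2}##:

```python
import numpy as np

# Minimal sketch: CHSH quantity S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
# for the singlet state, where QT predicts E(a,b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)  # singlet correlation for analyzer angles a, b

a, a2 = 0.0, np.pi / 2            # Alice's two settings
b, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two settings
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S), 2 * np.sqrt(2))     # both print 2.828..., violating |S| <= 2
```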
 
  • Like
Likes Lord Jestocost
  • #153
vanhees71 said:
you decide to gamble at the casino or not, but this has nothing to do with physics. In physics all you can decide is which observable(s) you want to observe and how to construct a measurement device to do so. Then you can model this setup within QT and test the probabilistic predictions against your experimental data on ensembles.
I appreciate your clean approach, but the problem I see is that the prerequisites/machinery required for constructing a classical measurement device and collecting solid ensembles of data fail for the cosmological inside perspective. I.e., the data processing resources (both in terms of memory and time) are larger than the time scale on which the environment changes.

This is why QT is fine for describing subatomic physics from the lab perspective, but not for cosmology or perhaps some unification questions. As long as we talk about interpretations of QM or QFT as a description of subatomic events, all is fine, except for a delicate fine-tuning problem. The problem is that when science is no longer descriptive but becomes an actor in the world, it relates to self-organisation, and even the scientific process contains elements of gambling, such as choosing the best abduction from the input. There is no right choice; you can only try, and the winning agent gets to persist.

/Fredrik
 
  • #154
CelHolo said:
As far as I can tell all progress has been away from classical-like theories like BM.
Not at all. In the area of foundations of quantum theory, there was only one widely accepted key success: Bell's theorem and the experimental falsification of the Bell inequalities. This progress was based on BM (Bell was at that time almost the only defender of BM).

Progress in the domain of realistic interpretations of QT, in particular Caticha's entropic dynamics, has been completely ignored.

What else is there? The SM is essentially phenomenology; all that was fundamentally new there goes back to QED. The fundamental insight was the Wilsonian understanding of the meaning of renormalization, which removed the SM's pretense of being fundamental and left SM + GR as effective field theories.
CelHolo said:
I really think all indications are that progress will continue on these lines, as we see already in QG research with black hole complementarity, dualities and so on.
I think this line is a dead end.
 
  • Like
Likes gentzen and Demystifier
  • #155
AndreiB said:
They use a Bayesian concept of probability. It's an agent's degree of belief that event X will happen.
There are two variants. Yours is the subjective one. I prefer the objective one: the belief that is rationally justified given the available information.

There are important differences between the two. Namely, if you have no information about a die, subjective probability is free to postulate something completely arbitrary, while objective probability has only the choice of 1/6 for each number, given that there is no information that distinguishes the different numbers.

In general, the state of no information requires the maximum-entropy distribution. So, based on this, one can justify both thermodynamics and (in Caticha's entropic dynamics) quantum theory.
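As a small numerical illustration of that recipe (a sketch with illustrative numbers, not from the post): with no information beyond normalization, the maximum-entropy distribution over a die is uniform, and adding a known mean tilts it into a Gibbs-like distribution:

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch: maximum-entropy distribution over a six-sided die,
# found by minimizing sum(p * log p) subject to the given constraints.
faces = np.arange(1, 7)

def neg_entropy(p):
    return np.sum(p * np.log(p))

def maxent(extra_constraints):
    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1}] + extra_constraints
    res = minimize(neg_entropy, np.full(6, 1 / 6),
                   bounds=[(1e-9, 1)] * 6, constraints=cons)
    return res.x

print(np.round(maxent([]), 3))  # no information: ~[1/6, ..., 1/6]
# Additionally known: the mean roll is 4.5 (a loaded die, illustrative)
print(np.round(maxent([{"type": "eq", "fun": lambda p: p @ faces - 4.5}]), 3))
```

With only normalization this reproduces the 1/6-each answer; the mean constraint yields an exponential tilt toward high faces, which is exactly the maximum-entropy (Gibbs) logic used in thermodynamics.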
AndreiB said:
I think the term "ontic" is not used here in the sense that the theory has some ontology. "Ontic" interpretations are those where the quantum state itself is part of the ontology.
Not necessarily. Caticha's entropic dynamics has a well-defined ontology (the configuration space), so I would call it ontic, but the wave function is not part of the ontology; it is epistemic. QBism and Copenhagen, by contrast, have no ontology.
 
  • #156
Sunil said:
Not at all. In the area of foundations of quantum theory, there was only one widely accepted key success: Bell's theorem and the experimental falsification of the Bell inequalities. This progress was based on BM (Bell was at that time almost the only defender of BM).
Bell's results and the experiments are independent of the interpretation. He may have come to it thinking in terms of BM, but the results are independent of it. As far as I can tell that is still the only useful thing about BM.
Sunil said:
Progress in the domain of realistic interpretations of QT, in particular Caticha's entropic dynamics, has been completely ignored.
Maybe there is a reason for that.
Sunil said:
What else is there? The SM is essentially phenomenology; all that was fundamentally new there goes back to QED. The fundamental insight was the Wilsonian understanding of the meaning of renormalization, which removed the SM's pretense of being fundamental and left SM + GR as effective field theories.
 
  • #157
martinbn said:
Bell's results and the experiments are independent of the interpretation. He may have come to it thinking in terms of BM, but the results are independent of it. As far as I can tell that is still the only useful thing about BM.
It's the main progress in fundamental physics. In comparison with the zero coming from other interpretations, not that bad.
martinbn said:
Maybe there is a reason for that.
No, it is simply ignorance. All this psi-ontology community proves theorems about its impossibility; I don't think so badly of them that they would continue to prove such theorems if they knew an explicit counterexample.
 
  • #159
Sunil said:
It's the main progress in fundamental physics. In comparison with the zero coming from other interpretations, not that bad.
My point is that this is a result that is independent of interpretations. It's like saying that since D. Deutsch thinks in terms of a many worlds interpretation, all of his work on quantum computing is to the credit of the many worlds interpretation. Not that bad.
Sunil said:
No, it is simply ignorance. All this psi-ontology community proves theorems about its impossibility; I don't think so badly of them that they would continue to prove such theorems if they knew an explicit counterexample.
I am sure you have done this before, but can you give the reference again? I am curious now.
 
  • #161
martinbn said:
My point is that this is a result that is independent of interpretations.
Sure, but the same can also be said about all other important results that were obtained from a standard Copenhagen/orthodox/statistical-ensemble/non-ontic point of view. So it all boils down to the question of which interpretation makes thinking easier. And of course, this question does not have a universal answer. It depends on the specific problem, but also on the personality of the physicist. It may be true that most physicists find a non-ontic way of thinking easier than an ontic one, but that does not necessarily mean that the non-ontic way of thinking is "better" or "closer to the truth".
 
  • #163
martinbn said:
My point is that this is a result that is independent of interpretations.
The main intention of the proof was to get rid of the non-locality argument against BM, by showing that all reasonable interpretations have to be non-local. That connection looks a little closer than in your example:
martinbn said:
It's like saying that since D. Deutsch thinks in terms of a many worlds interpretation, all of his work on quantum computing is to the credit of the many worlds interpretation.
But I agree with your main point, the result itself is interpretation-independent.
martinbn said:
I am sure you have done this before, but can you give the reference again? I am curious now.
Caticha, A. (2011). Entropic Dynamics, Time and Quantum Theory, J. Phys. A 44, 225303, arXiv:1005.2357

Schmelzer's unpublished variant arXiv:1906.00956 may be of interest because it addresses the conflict with the psi-ontology theorems directly. Moreover, he identifies the other degrees of freedom Y, which are left unspecified by Caticha, with the usual configuration space outside the system, so that this reduces the ontology even more, to simply the standard classical configuration space.
 
  • #164
martinbn said:
Or which part was first done with BM, before it was done with orthodox QM?
In my recent work https://arxiv.org/abs/2010.07575 I first solved the problem intuitively with BM and then translated the results into the orthodox form. But in the paper itself we presented the logic in the reverse order.
 
  • #165
Demystifier said:
Ontology is the easiest and the hardest concept in the field of quantum foundations.
...
The question for everybody: How to explain the meaning of the word "ontology" such that even a mature orthodox quantum physicist can understand it?
I don't think it's a case of some people understanding the term 'ontology' and others not. I think it's more a disagreement about what is meaningful and what is not; about what the purview of scientific inquiry is and what is not.

Something I believe everyone can agree on is that existence is self-evident. Ontology then is simply the nature of existence or the nature of that which exists.

Ontology is what was present at the big bang and the formation of stars and galaxies, before there were observers.
 
  • #166
Sunil said:
Not at all. In the area of foundations of quantum theory, there was only one widely accepted key success: Bell's theorem and the experimental falsification of the Bell inequalities. This progress was based on BM (Bell was at that time almost the only defender of BM).

Progress in the domain of realistic interpretations of QT, in particular Caticha's entropic dynamics, has been completely ignored.

What else is there? The SM is essentially phenomenology; all that was fundamentally new there goes back to QED. The fundamental insight was the Wilsonian understanding of the meaning of renormalization, which removed the SM's pretense of being fundamental and left SM + GR as effective field theories.

I think this line is a dead end.
But Bell's work is completely independent of Bohmian mechanics. It's a mathematical theorem showing that there is a difference between the probabilities predicted by QT (in any interpretation, which doesn't change the physics content, i.e., the predictions of minimally interpreted QT) and the probabilities predicted by any local deterministic hidden-variable theory.

Of course the SM is strictly based on phenomenology. It was discovered as a paradigmatic example of the interplay between theory and experiment. That's why it is so successful. Speculations of a philosophical kind have never brought much progress in our understanding of Nature.

Bell's work is another paradigmatic example of this: all the quibbles about purely philosophical issues like the EPR paper and Bohr's answer to it hadn't brought any progress until Bell found a way to formulate it as a clear-cut, scientifically decidable question, i.e., whether or not the Bell inequality is violated in the real world. As is well known, it is violated (and to an amazing confidence level!), and the predictions of QT are confirmed (at the same amazing confidence level). Indeed, that's the only progress all this philosophizing has brought for science. Admittedly it was a great one, leading to the development of all the most current "quantum technology" putting us in the midst of the "2nd quantum revolution" ;-)).
 
  • #167
CelHolo said:
As far as I can tell all progress has been away from classical-like theories like BM.
There has been progress? :oops:

/Fredrik
 
  • #168
vanhees71 said:
But Bell's work is completely independent of Bohmian mechanics. It's a mathematical theorem showing that there is a difference between the probabilities predicted by QT (in any interpretation, which doesn't change the physics content, i.e., the predictions of minimally interpreted QT) and the probabilities predicted by any local deterministic hidden-variable theory.
Nobody without an interest in the foundations of quantum theory, that is, in the philosophy you despise, would have proven such a theorem.
vanhees71 said:
Of course the SM is strictly based on phenomenology. It was discovered as a paradigmatic example of the interplay between theory and experiment. That's why it is so successful. Speculations of a philosophical kind have never brought much progress in our understanding of Nature.
It is, indeed, a paradigmatic example of phenomenological research. It works after those much more interested in philosophy have established the base (QED), and when there is some technological progress which allows one to reach better experimental results.
 
  • Like
Likes vanhees71, gentzen and Demystifier
  • #169
Sunil said:
QBism and Copenhagen, by contrast, have no ontology.
I disagree. In order for any theory to make predictions and provide explanations, it needs to postulate "something". The QM postulates speak about a "system" that evolves in agreement with Schrödinger's equation. So that system has to exist; otherwise the theory is useless. They also speak about measurements, so appropriate apparatuses need to exist as well. So the theory has an ontology. I agree, however, that this ontology is not clearly spelled out.

QBism also postulates rational agents that can have experiences, memories and so on. They also more or less postulate an external world which the agent can probe. But, just as in the case of orthodox QM, this ontology is not clearly spelled out. They refuse to say anything of substance about this external world even though they should.

I think this ambiguity in presenting the ontology helps them avoid falsification, so they are reluctant to clarify it.
 
  • #170
AndreiB said:
I disagree. In order for any theory to make predictions and provide explanations, it needs to postulate "something".
I agree that in a general, philosophical sense Copenhagen and QBism also presuppose some reality. They specify some probabilities of outcomes of measurements, and these outcomes are really outcomes, something really existing. But they don't give a description of what really exists. That means they don't have a well-defined ontology.
 
  • Like
Likes gentzen and Demystifier
  • #171
Sunil said:
I agree that in a general, philosophical sense Copenhagen and QBism also presuppose some reality. They specify some probabilities of outcomes of measurements, and these outcomes are really outcomes, something really existing. But they don't give a description of what really exists. That means they don't have a well-defined ontology.
Agreed. But they should define it in order for those interpretations to be logically coherent.
 
  • #172
Sunil said:
I think this line is a dead end.
Are you saying that AdS/CFT is a dead end? If yes, can you explain why?
 
  • #173
Sunil said:
Nobody without an interest in the foundations of quantum theory, that is, in the philosophy you despise, would have proven such a theorem.
Sure, good science can come from thinking about philosophical problems. It shows that if there are philosophical quibbles about science, the only way to solve them is to first translate the gibberish into a clear-cut scientific question decidable by experiment in a unique way. This has been done for the EPR quibbles (or rather Einstein's much better formulated quibble about "inseparability", clarifying what he really wanted to say in the EPR paper, in 1948). For me these quibbles are now solved for good. From a scientific point of view there's nothing more that can be said, because it's clearly decided in favor of Q(F)T rather than local deterministic hidden-variable theories.
 
  • #174
@Demystifier: AdS/CFT may be nice mathematics, but what else? My position is similar to the one against string theory in general: a lot of very intelligent people have worked there for many years, and what are the results?

AndreiB said:
Agreed. But they should define it in order for those interpretations to be logically coherent.
Copenhagen is quite vague about this. There is some classical part, where usual common sense works, but what is real for the quantum system is not clear at all.

vanhees71 said:
Sure, good science can come from thinking about philosophical problems. It shows that if there are philosophical quibbles about science, the only way to solve them is to first translate the gibberish into a clear-cut scientific question decidable by experiment in a unique way. This has been done for the EPR quibbles (or rather Einstein's much better formulated quibble about "inseparability", clarifying what he really wanted to say in the EPR paper, in 1948). For me these quibbles are now solved for good. From a scientific point of view there's nothing more that can be said, because it's clearly decided in favor of Q(F)T rather than local deterministic hidden-variable theories.
That's funny, given that nobody cared about QFT or deterministic hidden-variable theories (there were none to be considered). The very point, namely that nonlocality is not an argument against BM, has not been accepted even today.
 
  • Like
Likes gentzen and Demystifier
  • #175
Sunil said:
I agree that in a general, philosophical sense Copenhagen and QBism also presuppose some reality. They specify some probabilities of outcomes of measurements, and these outcomes are really outcomes, something really existing. But they don't give a description of what really exists. That means they don't have a well-defined ontology.
For me, "agent" is just a label for the abstraction of an "inside observer" that seems like a minimal and a mandatory and modest central starting point in an inference centered approach. The "agent" thus both encodoes and puts constraints in both memories and inferences.

About the question of what "substance" the agent is made of: it seems obvious that it must be made of normal matter. But the question one attempts to answer is rather HOW "normal matter" is constructed, and how it interacts with other matter, i.e. to explain and classify interactions. Here the agent notion is an abstraction only, similar to abstractions such as geometry.

In order to solve the problem of the starting point, or ontology, the only solution I have found is to release oneself from the preconception that there has to be a fundamental ontology from which all else is explained. There is a problem with that view. Instead, perhaps we can imagine emergent and evolving relations, where one can at best identify "effective ontologies". This thinking IMO unifies ontology and epistemology; neither of them is fundamental, they are rather entangled and evolving. Thus the "ontology" is similar to an initial condition. What is important is how one ontology evolves into the next.

I agree this is fuzzy, but I do not think it's going to get simpler. I do not see any reason to expect there is a fundamental ontology at all. But there ARE effective ontologies, and we need them.

/Fredrik
 