Does anyone actually use the term "ensemble interpretation"?

In summary: people are asking whether the term "ensemble interpretation" is actually used. Some think it is the interpretation advocated by Bohr; others think it is not. The Copenhagen interpretation owes its name to Bohr, but I don't think the ensemble interpretation is entirely the interpretation Bohr advocated.
  • #36


Another problem with the idea of literally repeating the same measurement enough times for the data to get "close enough" to represent an ensemble is that, in reality, the system under study might be constantly changing, and there is simply no clean way to repeat the same experiment. This is a problem for this kind of "repeat the experiment" interpretation.

Does it make sense that we cannot assign expectations to events that can't be exactly repeated in the laboratory? I personally don't think so.

If we instead look at the history, we might find that the retained interaction history of the system already contains enough information to infer this probability. This can still be given a "frequentist interpretation", though a more subtle one: you do not count simple time histories, but rather infer an effective expected count, consistent with the retained data.

This way the expectation of the future depends on the past, as we would expect from causality. And to the extent that each observer has its own "past", the expectations are observer dependent.

What is wrong with "counting the past"? That way we seem to get rid of the problem of "repeating experiments", and we can still retain a kind of frequentist interpretation.

/Fredrik
 
  • #37


Fra said:
Another problem with the idea of literally repeating the same measurement enough times for the data to get "close enough" to represent an ensemble is that, in reality, the system under study might be constantly changing, and there is simply no clean way to repeat the same experiment. This is a problem for this kind of "repeat the experiment" interpretation.
That's a possibility, but if the changes are really small they're also insignificant. If they're large but predictable, we can adjust our probability assignments for the changes, and there's still no real problem. If they're large and unpredictable, we clearly have a situation in which scientific methods just don't work.
 
  • #38


kote said:
Just to add, regarding Bohr's treatment of probability in QM, Jan Faye attributes the following view to Bohr (http://plato.stanford.edu/entries/qm-copenhagen/):

Born is probably a good source for what Bohr thought here. I don't think it's much more complicated than what it says above though, and this is probably what we've already been talking about.
He seems to be saying that QM doesn't describe reality, i.e. that it doesn't tell us what "actually happens", that it's just a set of rules that tells us how to calculate probabilities of possibilities. This is exactly how I think of the ensemble interpretation. So I still don't see a real difference between "ensemble" and "Copenhagen/orthodox", other than that the Copenhagenists choose to say that the state vector represents the properties of a system. As soon as they've said that, they seem to be doing everything as if it's the ensemble that has those properties.

There would be a real difference if it hadn't been for Bell's theorem, which Ballentine didn't seem to understand in 1970. And since Home and Whitaker didn't comment on that, I suspect that they didn't understand it either. It's possible that this (i.e. something they're wrong about) is the main reason why they think it's necessary to distinguish between ensemble interpretations and orthodox interpretations.
 
Last edited by a moderator:
  • #39


Fredrik said:
Fra said:
Another problem with the idea of literally repeating the same measurement enough times for the data to get "close enough" to represent an ensemble is that, in reality, the system under study might be constantly changing, and there is simply no clean way to repeat the same experiment. This is a problem for this kind of "repeat the experiment" interpretation.
That's a possibility, but if the changes are really small they're also insignificant. If they're large but predictable, we can adjust our probability assignments for the changes, and there's still no real problem. If they're large and unpredictable, we clearly have a situation in which scientific methods just don't work.

I disagree with this.

I think many of us here have drastically differing views on this, which is interesting. Since I sometimes seem to be one of the few representing a more extreme subjective interpretation, I'd like to contrast it with the dominant view that this is just nonsense, because I sincerely think that is a deep misinterpretation of what the subjective information view could mean.

As I see it, the purpose of expectations is to guide actions -

this appears to apply coherently to both

- the human science level, where scientific expectations (which, as we know, are constantly evolving; see the history of science) "largely" determine which new experiments to perform and which accelerators or telescopes to build or not to build. So our expectations determine our actions, regardless of whether they will be revised in the future.

- the physical level, where the action of a subsystem of our universe is "largely" determined by the subsystem's expectations of its own environment. This is not to say that the expectations are right; on the contrary, they are most often not right, which leads to mutual interactions in which both the subsystem and the environment evolve.

So the fact that we are not able to "predict" the changes in a system does not mean we can escape the situation of having to interact with that system. Neither does it suggest a breakdown of the scientific method in any reasonable way, as I see it. The process of learning, and thus science almost by definition, contains unpredictable and unexpected events, but these are things that science, and a physical observer, must be able to manage in order to "stay corroborated".

/Fredrik
 
  • #40


Fredrik,
thanks for the long and interesting response. I think I now better understand your position - though it's not what I had associated with relative frequency views of probability. I still don't completely follow you - but my response is pretty long, and I know that time is finite...

> If we take your definition of "the" relative frequency view literally, we don't even have an approximate
>probability until we have performed a large number of identical experiments, and those probabilities
> wouldn't be predictions about what will happen.

Not so. The probability of an F being a G is just defined as {number of Gs}/{number of Fs}; nothing in what I said turns on whether the Fs or Gs are actually counted, or whether any experiments are actually done. But I agree on the main point: there are different versions depending on the relevant classes (all observed Fs? All past Fs? All past and future Fs? All Fs that would occur were an infinite number of Fs to be generated...).
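A toy illustration (with numbers invented purely for the example): if the Fs are 1000 electrons passing a detector and the Gs are the 520 of them deflected upward, then on this definition

[tex]P = \frac{520}{1000} = 0.52,[/tex]

and nothing in that ratio cares whether anyone actually performed the count.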

>They would be statements about what has already happened.

Why? They are just statements about proportions. When science tells you 'All electrons have spin', you don't restrict this only to present and past electrons - future ones are meant to be included too. The same idea applies to proportion statements - nothing in the idea is meant to restrict it to past things only, and nothing in my formulation either. :)


>I think the "standard" version claims that an assignment of a probability P to a possible result of an
>experiment should be interpreted as a counterfactual statement of the form "If we were to perform this
>exact experiment infinitely many times, the number of times we've had that particular result after N
>experiments, divided by N, goes to P as N goes to infinity".

Agreed: this view is very problematic. We can't perform any experiment an infinite number of times - and I would argue that there are some experiments we can't perform even a billion times; there are also problems defining proportions for infinite sets, and others...

>I want to make it clear that I do not support this view. I'll try to explain what my view actually is. Let's
>start with the definition of probability. A probability measure is a function [itex]\mu:\Sigma\rightarrow[0,1][/itex]
>that satisfies certain conditions. (The details aren't important here. Look them up if you're interested.)
>A probability is a number assigned by a probability measure. It is just that, and nothing more. This is the
>definition of what the word means, not that counterfactual relative frequency stuff.

You can define your words however you see fit. But I'd like to know what your definition has to do with the concept - why this definition deserves our ordinary word 'probability'.

But I take it your thought is: a probability measure on a set of events (or propositions, or... is it safe to ignore the details of just what exactly probabilities are assigned to? I'll assume for present purposes that it is) is just a function from the set of all subsets of those events to the real numbers that satisfies the relevant mathematical axioms. Since these mathematical axioms do not employ the notion of probability, there is nothing circular lurking here, and so the way is open to treat this as a definition.
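For definiteness (these are just the standard Kolmogorov conditions, which I take to be the ones intended): a probability measure [itex]\mu:\Sigma\rightarrow[0,1][/itex] on a [itex]\sigma[/itex]-algebra [itex]\Sigma[/itex] of subsets of a set X satisfies

[tex]\mu(\emptyset)=0,\qquad \mu(X)=1,\qquad \mu\left(\bigcup_{i=1}^{\infty}A_i\right)=\sum_{i=1}^{\infty}\mu(A_i)[/tex]

for any countable collection of pairwise disjoint sets [itex]A_i\in\Sigma[/itex].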

But it wouldn't be a relative frequency view - for relative frequencies haven't even been mentioned, and, indeed, there's nothing in this account that links the definition to relative frequencies. This is a worry: without any link to relative frequencies, what has this got to do with the concept of probability?

So it's hard to see this as the complete story - and, indeed, you think more is needed because, so far, this has nothing to do with the "real world". I'm not sure I'd quite agree with this description - after all, this isn't quite pure mathematics; the numbers are assigned to events or propositions, and that is not purely mathematical - but I don't know if anything turns on this, so I'll probably drop it.

It's the next part of your story I didn't understand/couldn't reconstruct. What is this additional set of axioms that links the mathematics to the real world? What do the axioms do? How do they make the link? I think I have an idea how some others make the link: the subjectivist says that rationality dictates that we conditionalise on the evidence in certain ways, and that the numbers must be correlated with our degrees of confidence in certain propositions given the relevant evidence we have; the relative frequency theorist says the numbers should be tied to actual proportions; all parties tell a story about how this is connected to doing science - garnering evidence, doing experiments, changing our degrees of belief in the light of experimental results, and so on. What did you take to be happening in your story?

Another story: the right mathematical function that counts as the probability of an event is that assignment of numbers to propositions which makes it *most likely* that the world contains the relative frequencies that it does. This is not subjectivist. It ties probabilities to frequencies, but has leeway to allow for one-off cases whose probabilities are not identical to the frequency. Sounds good! What's the catch? It uses the notion 'most likely' in picking out the relevant probability function - a term which itself employs a probabilistic notion. In other words, the hope for a non-circular definition is lost, and all the issues of interpretation appear again: is 'most likely' subjective, objective, etc.?


>Originally Posted by yossell

>> Someone (call him P) who believed that probabilities were not just relative frequencies,

>I don't find phrases like this meaningful.

Not sure what you're objecting to - a subjectivist would disagree that probabilities are relative frequencies, linking them to degrees of belief instead. He may be wrong, but it seems a meaningful view.

>This guy P seems to think that all useful mathematical concepts have well-defined counterparts in the
>real world and that mathematics is just a tool to calculate them. (Why else would he be talking about
>what probability "really is"?)

Did he say that? I meant to put him forward as someone who disagreed with your own view about probability. Do you have some kind of a priori argument which demonstrates that your account is the only coherent one? I hope that P wouldn't be naive enough to think that all mathematical concepts have well-defined counterparts. And I tried to avoid this 'really is' talk because it leads to confusion - as I think you do too. I'm just not sure what your complaint with P is. He thought it was a mistake to take probabilities to be actual relative frequencies - that's all. You think this too, from what I can tell. Nothing odd here, or metaphysical, or theological, or stochastic.

>Note that neither mathematics nor science tells us what something "really is". The fact that experiments
>can't tell us anything except how accurate a theory's probability assignments are, is a huge limitation of
>science. We would certainly like to know what things "really are", but there are no methods available to
>us that can give us that information.

I think this is an argument for another day and neither I nor P is trying to push hard for the general legitimacy of 'really' questions - but, insofar as I understand the 'really', I do think we can have knowledge of what something really is, and I think we do that largely through science. I agree that we can be certain of little - only overwhelmingly sure! - but you've not been using 'really' in an epistemic way so far.

>I agree, but this sort of speculation isn't scientific. If someone has an opinion about what probability >"really is", I'm not going to care much about it until he/she has stated it in the form of a theory that
>assigns my kind of probabilities to possible results of experiments, because that's how science is done.

The different views about probability do try to connect their conception of probability with scientific method and evidence. For my part, I'm not clear how you link what you say about probability - that it's just a function meeting certain mathematical conditions - with scientific method and evidence.

But let's be clear about the argumentation - you said that science led directly to a relative frequency view of probability. So you seemed to be saying that science favoured a view. I'm not yet seeing this. Perhaps your point is just that, since we can do a lot of science without taking sides, the debate is not scientific - science is just neutral between different conceptions.

>Even if we take probability to be a primitive concept, like a continuous range of truth values between true
>and false, it seems very strange to associate it with our beliefs.

Not strange at all - on the contrary, if I thought there were a minimal conception of probability that wasn't just a piece of pure mathematics but was connected with the real physical world, I would associate it with our beliefs. We avoid the endlessness of interpretational discussions by stressing the pragmatic aspect of science: the useful information that science gives us, which guides our actions. But most of the time we have incomplete information - and everyone should at least agree that, given a probability statement, we should adjust our expectations and behaviour accordingly. That there's a ten percent chance of getting heads tells me what to expect in the long run, and I adjust my actions and my betting behaviour accordingly. It's right here, in the link to what to expect, how to behave, what to bet on, that probability statements are useful, have consequences, are needed. Indeed, for me, any account of probability that didn't link probabilities to expectations would be deficient.
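To make the betting link concrete (a toy illustration of the pragmatic point, not a formal proposal): a statement like "there's a ten percent chance of heads" prices a bet that pays 1 unit on heads and nothing otherwise at

[tex]\mathbb{E}[\text{payoff}] = 0.1\times 1 + 0.9\times 0 = 0.1[/tex]

units - so paying more than 0.1 for that bet is a losing policy in the long run.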

yossell
 
  • #41


yossell said:
> If we take your definition of "the" relative frequency view literally, we don't even have an approximate probability until we have performed a large number of identical experiments, and those probabilities wouldn't be predictions about what will happen. They would be statements about what has already happened.

Why? [...] nothing in the idea meant to restrict it to past things only - nothing in my formulation either. :)
You talked about actual experiments, and I took that to mean "experiments that have been performed".

yossell said:
But it wouldn't be a relative frequency view - for relative frequencies haven't even been mentioned
That's right. It's just a definition of a word at this point.

yossell said:
You can define your words however you see fit. But I'd like to know what your definition has to do with the concept, what this definition deserves our normal word 'probability'.
The word is appropriate because everyone agrees (regardless of their interpretation of probability) that a probability measure has the properties you want a function that assigns probabilities to have.

yossell said:
It's the next part of your story I didn't understand/couldn't reconstruct. What are these additional set of axioms that links the mathematics to the real word? What do the axioms do? How do they make the link?
They tell us how to interpret some part of mathematics as predictions about the results of experiments, by associating purely mathematical concepts with things in the real world. Note that there's no obvious connection between mathematics and the real world (it makes perfect sense to think of mathematics as just a meaningless manipulation of symbols according to a specified set of rules), so someone has to specify how to apply the mathematics to the real world. Such a specification is useless if it doesn't consist of a set of statements that meets the requirements of my definition of a theory. (My definition is the minimum requirement for statistical falsifiability.) The actual statements can't be derived from anything, so they have to be considered axioms.

Each theory is defined by a different set of axioms, so I can't just tell you what all the axioms are. My standard example is "A clock measures the proper time of the curve in spacetime that represents its motion". This is an axiom in both SR and GR. Here "proper time" is a purely mathematical concept, and "clock" is something in the real world defined by a description in plain English.
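For concreteness, the purely mathematical side of that axiom is the standard proper-time functional (a textbook formula, nothing specific to this discussion):

[tex]\tau=\int_{\lambda_0}^{\lambda_1}\sqrt{-g_{\mu\nu}\,\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda}}\;d\lambda[/tex]

(in units with c = 1 and signature (-,+,+,+)). The axiom then asserts that the number displayed by the physical clock equals this integral evaluated along the curve representing the clock's motion.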

yossell said:
>> Someone (call him P) who believed that probabilities were not just relative frequencies,

>I don't find phrases like this meaningful.

Not sure what you're objecting to
I'm objecting to the idea that it makes sense to talk about what an undefined concept is. That's what I tried to explain by talking about integrals. What is the area inside a circle? If we have only defined the area of rectangles, that region doesn't have an area. So we have to define the concept before we can even ask the question.

"What is probability?" is a meaningless question for the same reason. That's why we have to start with a definition of the word. And everyone agrees that the purely mathematical definition is appropriate. The disagreement is about what corresponds to it in the real world.

yossell said:
For my part, I'm not clear how you link what you say about probability - it's just a function that meets certain mathematical conditions - with scientific method and evidence.
...
you said that that science led directly to a relative frequency view of probability. So you seemed to be saying that science favoured a view. I'm not yet seeing this.
The mathematical definition is automatically related to relative frequencies in finite ensembles through the definition of science. A theory is by definition a set of statements that associates "probabilities" (which are purely mathematical at this point) with possible results of experiments, and the scientific method then tells us that the theory is a good one if the relative frequencies after a large but finite set of experiments agree well with the mathematical probabilities. This is the connection between mathematical probabilities and relative frequencies in finite ensembles in the real world.
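As a minimal sketch of this comparison (my own illustration, with made-up numbers; not part of any formal definition):

[code]
# Compare a theory's mathematical probability with the relative frequency
# observed in a large but finite ensemble of simulated experiments.
import random

p = 0.3        # probability the theory assigns to a particular result
N = 10_000     # size of the finite ensemble

hits = sum(1 for _ in range(N) if random.random() < p)
print(f"mathematical probability: {p}")
print(f"relative frequency after {N} experiments: {hits / N:.4f}")
[/code]

The theory counts as "good" here exactly when the two printed numbers agree well - no N→∞ limit is ever taken.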

The more I think about this, the more I think that my view isn't in the relative frequency camp at all. (I haven't changed my view, only my thoughts on how it should be classified). Philosophers might consider it an axiomatic view, but I think that would be wrong too, at least if they define the axiomatic view the way Home and Whitaker did (no connection to the real world). I think it would be appropriate to call this the scientific interpretation of probability.

yossell said:
Perhaps your point is just that, since we can do a lot of science without taking sides, the debate is not scientific. Science is just neutral between different conceptions.
Something like that. I'm saying that since we already know one thing in the real world that corresponds to mathematical probabilities, we don't need (or want) another one. But it's more than that. These "interpretations" are statements about something in the real world, but do they qualify as theories in their present form? Definitely not. So science does take a side here, and that's to dismiss all of these interpretations as unscientific.

Another point I've been trying to make is that theories are never perfectly unambiguous, since they involve operational definitions, and that because of this, the attempt to associate mathematical probabilities with relative frequencies in infinite ensembles has no advantage over my idea of using finite ensembles. The N→∞ limit is a part of the relative frequency interpretation only because these philosophers have failed to understand this.

I'll end with the two most important examples of probability measures in physics.

In classical physics, the possible states of a physical system are represented by the points in a set called "phase space". (Each point represents a value of position and momentum.) Observables are represented by functions from the phase space to the real numbers. For example, "energy" is represented by a function [itex]f_E[/itex] that takes a state s to the energy [itex]f_E(s)[/itex] that the system has when it's in state s. Now consider sets of the form [itex]f_E^{-1}(A)[/itex], where A is a subset of the real numbers. Such a set consists of all the states in which the system has an energy that's a member of A. Because of this, each such set is considered a representation of a "property" of the system, or equivalently, an "experimentally verifiable statement". We can now define a probability measure [itex]\mu_s[/itex] for each state s, on the set of all such sets (constructed from all observables of course, not just energy), by [itex]\mu_s(Z)=1[/itex] if [itex]s\in Z[/itex] and [itex]\mu_s(Z)=0[/itex] if [itex]s\notin Z[/itex].
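A concrete (illustrative) instance: for a one-dimensional harmonic oscillator the energy observable is

[tex]f_E(q,p)=\frac{p^2}{2m}+\frac{1}{2}m\omega^2q^2,[/tex]

so [itex]f_E^{-1}([0,E_0])[/itex] is the set of states with energy at most [itex]E_0[/itex] (an elliptical region of phase space), and [itex]\mu_s(f_E^{-1}([0,E_0]))=1[/itex] precisely when the state s lies in that region, i.e. when [itex]f_E(s)\leq E_0[/itex]; otherwise it's 0.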

In quantum mechanics, the (pure) states are represented by the unit rays of a Hilbert space, and experimentally verifiable statements by the subspaces of that Hilbert space. The probability measure is defined by

[tex]\mu_R(S)=\sum_{i=1}^{\dim S}|\langle s_i|\psi\rangle|^2[/tex]

where [itex]|\psi\rangle[/itex] is an arbitrary vector in the unit ray R, and the [itex]|s_i\rangle[/itex] are the members of any orthonormal basis for the subspace S.
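To make that formula concrete with the simplest possible case (a standard textbook example, not anything specific to this thread): for a spin-1/2 system with [itex]|\psi\rangle=\alpha|{\uparrow}\rangle+\beta|{\downarrow}\rangle[/itex] (where [itex]|\alpha|^2+|\beta|^2=1[/itex]) and S the one-dimensional subspace spanned by [itex]|{\uparrow}\rangle[/itex], the sum has a single term:

[tex]\mu_R(S)=|\langle\uparrow|\psi\rangle|^2=|\alpha|^2,[/tex]

which is just the Born-rule probability of finding spin up.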
 
Last edited:
  • #42


I'm putting this in a separate post since it's unrelated to the rest.
yossell said:
I think this is an argument for another day and neither I nor P is trying to push hard for the general legitimacy of 'really' questions - but, insofar as I understand the 'really', I do think we can have knowledge of what something really is, and I think we do that largely through science. I agree that we can be certain of little - only overwhelmingly sure!
I would say that the only things we can be overwhelmingly sure about in science are statements about how accurate a theory's predictions are. We can never be overwhelmingly sure that a theory is correct. In fact, we can always be sure that all theories are wrong! A statement such as "The Earth is round" can be considered correct, but only because it's too ill-defined to really qualify as a theory. The word "round" means "approximately spherical", so two people may look at the same thing and disagree about whether it's round. To turn that ill-defined statement into a theory, we would have to say something like "The Earth is spherical". This makes the predictions unambiguous, but it also makes the theory "wrong". That's why theories can't be classified as "right" or "wrong". They're all wrong. Some are just less wrong than others, and those are the ones we consider "good".
 
  • #43


Fredrik said:
I would say that the only things we can be overwhelmingly sure about in science are statements about how accurate a theory's predictions are.

All I said was that there were some things we could be overwhelmingly sure of - and even if you think it's just a theory's predictions, that's still something. I didn't say we could be overwhelmingly sure of every single thing a theory says. Just because we can't be sure of the exact number of atoms in a chair doesn't mean we can't be overwhelmingly sure that it is made of atoms. And just because we don't know the speed of light down to the nth decimal place doesn't mean we can't be overwhelmingly sure of the speed of light to an incredibly large number of decimal places. We're well aware of the margins of error in our results - that doesn't mean nothing is sure.
 
  • #44


Thanks Fredrik, I think I've got a better understanding of the kind of position you're putting forward.

Fredrik said:
You talked about actual experiments, and I took that to mean "experiments that have been performed"

I said 'event', not 'experiment'. I don't mean to belabour this, but I do try to choose my words carefully. :) And I was using 'actual' as a contrast with 'possible' - to distinguish it from the kind of counterfactual you were putting forward - not as a contrast with 'future'. But I see that this may not have been clear.

Fredrik said:
The word is appropriate because everyone agrees (regardless of their interpretation of probability) that a probability measure has the properties you want a function that assigns probabilites to have.

Thanks - I now understand.

1. Though everyone may agree that such conditions are necessary for something to deserve to be called a probability, it doesn't follow that they are sufficient. In this case, the axioms you cite are very weak - merely enough to make sure that P is a measure on a certain class. Are you saying that every measure is therefore a probability? Many measures correspond more closely to notions of length or area - the fact that a function from subsets of the real line, or of an interval of time, to numbers has these mathematical properties doesn't yet make it a probability function.

2. Not everyone agrees about the axioms of probability. As the article says, some authors drop countable additivity. Some think the real numbers are the wrong mathematical structure to map probabilities to, experimenting with sets that contain non-standard numbers; others (I think Bohm suggested this once) have played with the idea of negative probabilities.

Fredrik said:
Each theory is defined by a different set of axioms, so I can't just tell you what all the axioms are. My standard example is "A clock measures the proper time of the curve in spacetime that represents its motion". This is an axiom in both SR and GR. Here "proper time" is a purely mathematical concept, and "clock" is something in the real world defined by a description in plain English.

I see. Though the challenge would be in defining "clock" without using the notion of proper time.

Just so you know, the approach I favour to the issue of how physical quantities are connected to mathematical ones was outlined in Hilbert's Foundations of Geometry.

Fredrik said:
I'm objecting to the idea that it makes sense to talk about what an undefined concept is. That's what I tried to explain by talking about integrals. What is the area inside a circle? If we have only defined the area of rectangles, that region doesn't have an area. So we have to define the concept before we can even ask the question.
Yes, if we have only defined area for rectangles. The question is whether our concept of area is exhausted by such a definition. But we make, and always have made, judgements of comparative areas whether or not those areas are rectangular - else disputes about land would never have been resolved. And note that, if it were simply a matter of area being undefined for circles, we should be able to take the area of a circle to be anything we like - for it's just a definition. In fact, we can draw lots of little squares that are wholly within the circle, and then lots of little squares that cover the circle, giving two regions with well-defined areas; the circle has an area that is clearly greater than the first region, for it includes it, and smaller than the second, for it is included by it. This, in my view, plays a central motivating role in accepting the mathematical extension.
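Incidentally, this inner/outer-squares argument can be made completely concrete. A minimal sketch in Python (my own illustration; the grid size is chosen arbitrarily):

[code]
# Bound the area of the unit circle between the total area of grid squares
# lying wholly inside it and of grid squares covering it, mirroring the
# inner/outer-squares argument above.
n = 512                          # grid resolution on [-1, 1]^2
side = 2.0 / n
inner = outer = 0
for i in range(n):
    x0, x1 = -1 + i * side, -1 + (i + 1) * side
    for j in range(n):
        y0, y1 = -1 + j * side, -1 + (j + 1) * side
        # squared distance from the origin to the farthest corner of the square
        far = max(abs(x0), abs(x1)) ** 2 + max(abs(y0), abs(y1)) ** 2
        # squared distance to the nearest point of the square (clamp 0 into it)
        nx = min(max(0.0, x0), x1)
        ny = min(max(0.0, y0), y1)
        near = nx * nx + ny * ny
        if far <= 1.0:
            inner += 1           # square lies wholly within the circle
        if near <= 1.0:
            outer += 1           # square overlaps the circle, so helps cover it
print("lower bound on area:", inner * side * side)   # just below pi
print("upper bound on area:", outer * side * side)   # just above pi
[/code]

As the grid is refined, the two bounds squeeze together - which is exactly the sense in which the circle's area is forced on us rather than freely stipulated.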
 
  • #45


yossell said:
All I said was that there were some things we could be overwhelmingly sure of - and even if you think it's just a theory's predictions, that's still something. I didn't say we could be overwhelmingly sure of every single thing a theory said. And just because we can't be sure of the exact number of atoms in a chair, doesn't mean that we can't be overwhelmingly sure that it is made of atoms. And just because we don't know the speed of light down to the nth decimal place doesn't mean we can't be overwhelmingly sure of the speed of light to an incredibly large number of decimal places. We're well aware of the margins of error in our results - doesn't mean nothing is sure.

Science was entirely certain for a long time that the Sun revolved around the Earth :wink:. Experiments confirmed this notion every day. There is still no escaping the theory-dependence of observations and the underdetermination of (scientific) theories by evidence. http://en.wikipedia.org/wiki/Confirmation_holism#Theory-dependence_of_observations:
It is always possible to resurrect a falsified theory by claiming that only one of its underlying hypotheses is false; again, since there are an indeterminate number of such hypotheses, any theory can potentially be made compatible with any particular observation. Therefore it is in principle impossible to determine if a theory is false by reference to evidence.

QM forces us to question some closely held assumptions, which has led people to question anything and everything. When it comes to interpretations especially, I think it's important to remember that it's all about the assumptions. No matter how much you think the evidence supports your assumptions, someone can equally validly find different assumptions that also fit the evidence. If you are going to be overwhelmingly sure of something, just remember that there are equally valid theories that contradict yours :smile:.

What we are overwhelmingly sure of today will almost assuredly be viewed as incorrect by a subsequent paradigm / operating set of assumptions.

(Not that you need to be told any of this, but the conversation reminded me of it.)
 
Last edited:
  • #46


kote said:
Science was entirely certain for a long time that the sun rotated around the Earth :wink:. Experiments proved this notion every day. There is still no escaping the theory-dependence of observations and the underdetermination of (scientific) theories by evidence. http://en.wikipedia.org/wiki/Confirmation_holism#Theory-dependence_of_observations:

True, true. There's a modicum of doubt in my mind whether I'm being fooled by a Cartesian demon. Even so, I'm overwhelmingly sure I'm not. :smile:

I think the real danger is dogmatism - no matter how certain we may be of anything, it's often good to be willing to reconsider a supposition. But I do think it's wrong to infer, from the possibility of error or the lack of a hundred per cent certainty, that we don't really know what anything is really like - unless those 'really's are just a way of meaning 'with a hundred per cent certainty'. And I'm certainly not anywhere near a hundred per cent certain about the correct account of probability.

kote said:
What we are overwhelmingly sure of today will almost assuredly be viewed as incorrect by a subsequent paradigm / operating set of assumptions.

'almost assuredly'! You sound very certain... :smile:
 
  • #47


yossell said:
I think the real danger is dogmatism
Agreed. That's all I meant to point out. A lot of the zealotry over interpretations (not that I've seen any in this thread) just seems awfully silly. The quote above doesn't present any problems we didn't already have with induction.
yossell said:
'almost assuredly'! You sound very certain... :smile:
Just because science can't have certainty it doesn't follow that philosophers can't :wink:. Actually I'm not certain at all, just optimistic. I hope we can look back at QM once we've figured out something better. If we're still debating interpretations for eternity... well, that would just be unfortunate. Next paradigm please.
 
  • #48


yossell said:
Are you saying that every measure is therefore a probability?
All probability measures are measures, and all measures that satisfy [itex]\mu(X)=1[/itex] are probability measures. (The domain of a measure is a [itex]\sigma[/itex]-algebra of subsets of some set, and I call that set X here).
yossell said:
Many measures correspond more closely to notions of length or area - just because a function from subsets of the real line or an interval of time to numbers have these mathematical properties doesn't yet make it a probability function.
I'm not sure what you mean by this. If you mean that the definition of a probability measure doesn't imply that it has anything to do with something in the real world that we would like to make probabilistic statements about, then I agree. I think I made that clear in #41, if not earlier. The definition is pure mathematics, and therefore can't tell us anything about the real world. We need a theory for that.

yossell said:
2. Not everyone agrees about the axioms of probability. As the article says, some authors drop countable additivity. Some think the real numbers is the wrong mathematical structure to be mapping them too - experimenting with a set that contains non-standard numbers, others (I think Bohm suggested this once) have played with the idea of negative probabilities.
The real numbers couldn't possibly be the wrong choice. There could be another choice that gives us a definition with a wider range of applications, but that doesn't make this one wrong.

yossell said:
Just so you know, the approach I favour to the issue of how physical quantities are connected to mathematical ones was outlined in Hilbert's Foundations of Geometry.
What did Hilbert say? I don't see anything in the table of contents.
 
  • #49


Fredrik said:
All probability measures are measures, and all measures that satisfy [itex]\mu(X)=1[/itex] are probability measures. (The domain of a measure is a [itex]\sigma[/itex]-algebra of subsets of some set, and I call that set X here).

I'm not sure what you mean by this. If you mean that the definition of a probability measure doesn't imply that it has anything to do with something in the real world that we would like to make probabilistic statements about, then I agree. I think I made that clear in #41, if not earlier. The definition is pure mathematics, and therefore can't tell us anything about the real world. We need a theory for that.

Ah - it's not clear to me why any measure which satisfies [itex]\mu(X)=1[/itex] deserves to be called a probability measure. In classical theories, mass is an additive function on the set of parts of an object, mapping disjoint parts onto the sum of the numbers that the individual parts are mapped onto, and so on. If the object has unit mass, the set of all parts will be mapped onto 1. Such a measure would represent the mass of any collection of parts of the object - not a probability.

The question I thought we were discussing was whether we had something that deserved the name 'probability'. There's then the further question of whether the function tracks actual probabilities - the probability that this will decay in the next five minutes, say. That's what I took the 'physics' part of your answer to be modelling. But it may be that you meant to build this connection with relative frequencies into what a mathematical function has to do in order to be called a probability.

Fredrik said:
The real numbers couldn't possibly be the wrong choice. There could be another choice that gives us a definition with a wider range of applications, but that doesn't make this one wrong.

In an earlier post you wrote:
"The word is appropriate because everyone agrees (regardless of their interpretation of probability) that a probability measure has the properties you want a function that assigns probabilities to have."

The example was meant to show that not everyone does agree that probabilities have these properties, and thus to put pressure on this claim.

Fredrik said:
What did Hilbert say?

(agghhh! why do I always write \quote rather than /quote and screw up my attempts at multiquote?)

It's not the issue that Hilbert addresses directly, but what he offers in the book is a way of axiomatising geometry without using numbers or coordinates. Using predicates whose physical interpretation is supposed to be unproblematic - though whether they are meant to be operationally or contextually defined, or simply primitive, is, I think, not discussed - such as 'same length' and 'between', he shows how to write down axioms in logical but non-mathematical vocabulary which guarantee that, say, a line of spatial points is isomorphic to the mathematical real line. It is in virtue of this shared structure between the mathematical and the non-mathematical that real numbers manage to represent facts about congruence, length, and the like.

I think that in the series of books Foundations of Measurement, by Krantz, Suppes, and others, these ideas are extended to things other than geometry, such as mass, temperature, and the like.
 
  • #50


yossell said:
The question I thought we were discussing was whether we had something that deserved the name 'probability'
...
But it may be that you meant to build this connection with relative frequencies into what a mathematical function had to do in order to be called a probability.
What I'm trying to say is that science by definition associates numbers assigned by probability measures with relative frequencies in some large but finite set of almost identical experiments performed in the real world. Probability measures are connected to relative frequencies by the definition of science!

If you don't think that this is also a good reason to use the word "probability", I don't think anything could convince you. I just hope you understand that if someone were to make a claim that probability is something else (I'm not saying you are), they wouldn't be making sense. It doesn't make sense to make claims about an undefined concept, or to ask questions about it. It would however make sense to change the definition of the word, but that would also change the definition of science, since the concept of a probability measure is a part of it. That's another point I've been trying to make.

yossell said:
- there's then the further question of whether the function tracks actual probabilities
Without a definition of "actual probabilities", I don't think that's a meaningful question. If you're saying that there is such a thing as actual probabilities, you're making a statement about the real world. If you can state it in the form of a theory, it's science. If not, it's pseudo-science.
 
Last edited:
  • #51


Fredrik said:
OK, that one isn't weird by itself, because if (for example) the hidden variables aren't observables, there's no conflict with Bell's theorem. But these statements look like they would very much be in conflict with Bell:

For example, he states [3, p. 361], "a momentum eigenstate... represents the ensemble whose members are single electrons each having the same momentum, but distributed uniformly over all positions". Also on p. 361 of ref. [3], he says, "the Statistical Interpretation considers a particle to always be at some position in space, each position being realized with relative frequency [itex]|\psi(\mathbf{r})|^2[/itex] in an ensemble of similarly prepared experiments". Later [3, p. 379] he states, "there is no conflict with quantum theory in thinking of a particle as having definite (but, in general, unknown) values of both position and momentum".

I'm very surprised by this. Could it be that in 1970, when this was written, Ballentine still didn't understand Bell's theorem? (Bell's theorem was published in 1964).

From what I've read, in the Bohm interpretation both the momentum and the position of a particle are precisely defined at all times; we just cannot *know* both of them at the same time, for practical reasons. Since the Bohm interpretation agrees with Bell, I think the statement you quote is fine.

As for the ensemble interpretation (my personal favorite, without PIV, since it doesn't postulate anything besides what can be directly verified by experiments): as was stated, the main difference from CI is that according to the ensemble view, QM cannot say anything about individual events and is only applicable to ensembles. And yes, it is open to hidden variables (which I also think is the way to go - the next successful theory of matter will be based on contextual hidden variables ;)).
 
  • #52


PTM19 said:
From what I've read, in the Bohm interpretation both the momentum and the position of a particle are precisely defined at all times; we just cannot *know* both of them at the same time, for practical reasons. Since the Bohm interpretation agrees with Bell, I think the statement you quote is fine.

I think the Bohm formulation agrees with the Bell tests because it's nonlocal.

What surprises me in Ballentine's book is that he thinks that the violation of Bell's inequality rules out only locality, due to Stapp's analysis.
 
