Does anyone actually use the term "ensemble interpretation"?

In summary: people have been asking about the "ensemble interpretation" and whether the term is actually used. Some think it is the interpretation advocated by Bohr, while others think it is not. The Copenhagen interpretation owes its name to Bohr, but the ensemble interpretation is not entirely the interpretation Bohr advocated.
  • #1
Fredrik
Does anyone actually use the term "ensemble interpretation"?

People have been asking questions about the "ensemble interpretation" here lately. It's mentioned on the Wikipedia page "interpretations of quantum mechanics" and has its own page. But is that term actually used in books and articles, or was it invented by the guy who wrote those Wikipedia entries? I don't think Ballentine gives it a name. Isham just describes this position as "anti-realist".
 
  • #2


I once heard a talk by a professor who used this term, so I think it is used by more than just the guy who wrote the wiki article.
 
  • #3


I think even Ballentine does not use this term in his textbook.
 
  • #4


What many people call the "ensemble interpretation" I always thought was a part of good old Copenhagen. I had never heard a teacher or read a book where they used "ensemble interpretation" before. I had heard of the Born statistical interpretation of the wavefunction, but thought that was just a part of the normal Copenhagen interpretation.
 
  • #5


That's one of the reasons why I'm asking. I think the "ensemble interpretation" is what the Copenhagen interpretation was until it was redefined by people who had misunderstood it.
 
  • #6


Fredrik said:
That's one of the reasons why I'm asking. I think the "ensemble interpretation" is what the Copenhagen interpretation was until it was redefined by people who had misunderstood it.
The Copenhagen interpretation carries its name due to Bohr, right?
And I think that the ensemble interpretation is NOT (or at least not completely) the interpretation advocated by Bohr.

With this in mind, perhaps it would be better to say that the ensemble interpretation is the orthodox interpretation.
 
  • #7


Demystifier said:
The Copenhagen interpretation carries its name due to Bohr, right?
Bohr and Heisenberg apparently, but I don't think the two of them really agreed about these things.

Demystifier said:
And I think that the ensemble interpretation is NOT (or at least not completely) the interpretation advocated by Bohr.
Maybe not, but I find it hard to find anything in the ensemble interpretation that I think Bohr would disagree with.

Regarding what Bohr actually advocated, I came across this article by Asher Peres, which I think you will find interesting too, considering that you started a thread about Copenhagen a few months ago. (I hope Dmitry67 will read it too.) The interesting stuff starts on page 6:

There seems to be at least as many different Copenhagen interpretations as people who use that term, probably there are more. For example, in two classic articles on the foundations of quantum mechanics, Ballentine (1970) and Stapp (1972) give diametrically opposite definitions of “Copenhagen.”
...
I shall now explain my own Copenhagen interpretation. It relies on articles written by Niels Bohr. Whether or not you agree with Bohr, he is the definitive authority for deciding what is genuine Copenhagen.

Peres's description of the CI sounds very much like how I've been describing the ensemble interpretation. There is however one thing that looks pretty odd today. I'll let Peres say it:
It is remarkable that Bohr never considered the measuring process as a dynamical
interaction between an apparatus and the system under observation. Measurement had
to be understood as a primitive notion.
By the way, the article by Ballentine that Peres mentioned is called "The statistical interpretation of quantum mechanics", so I guess that's what Ballentine calls it.
 
Last edited:
  • #8


I posted this in another thread.

http://www.dipankarhome.com/ENSEMBLE%20INTERPRETATIONS.pdf
This paper reviews various meanings of probability and ensemble interpretations proposed since Einstein and up to Ballentine.

In the book Compendium of Quantum Physics, Ballentine himself writes an entry on ensembles in quantum mechanics, and he cites this paper as a secondary source, if I remember correctly. So the paper shouldn't be too outdated.
 
Last edited by a moderator:
  • #9


Fredrik, regarding the Peres paper you linked, here is what Jan Faye has to say about Popper and Bohr in the accepted philosophical reference on CI (http://plato.stanford.edu/entries/qm-copenhagen/):
Bohr was definitely neither a subjectivist nor a positivist philosopher[*], as Karl Popper (1967) and Mario Bunge (1967) have claimed. He explicitly rejected the idea that the experimental outcome is due to the observer.
Faye works at Copenhagen University and is probably the leading Bohr scholar. He's published on the history of Bohr's philosophical development etc, so he's also a good source if you're looking for more.

Popper was one of the leading opponents of CI who came to define the version of it we have today. Strange how we let its opponents define it. It's no wonder that it's gone from being orthodox to somewhat taboo.

Bohr collected much of his philosophy on QM in Atomic Theory and the Description of Nature, which is very readable. Unfortunately, Bohr's own views shifted slightly through time (although less than would be implied by the changes in his terminology). So you have to keep in mind the dates of each anthologized paper. I think his Essays on Atomic Physics and Human Knowledge are also worth reading. They contain some of his later thoughts, but are spread over a longer period and are not as clear about his overall interpretation.

Also, Bohr and Heisenberg definitely disagreed. Heisenberg's Physics and Philosophy started to steal some of the thunder from Bohr. It's more dense, describes things differently, and starts off in a more observer-centric direction. I like Bohr's explanations much more than Heisenberg's, and honestly I didn't read Heisenberg as closely.

*Being a "positivist philosopher" and having positivist philosophies are different things :smile:.
 
  • #10


Since you mentioned Bohr and got me reading back through my thesis, here are some more thoughts :smile:. Regarding Bohr and subjectivity (Bohr quote from ATDN):
To the confusion of this last point, Bohr did refer to the subjectivity of quantum observers. His subjectivity, however, applies no more to humans than to any type of physical system. Subjectivity, for Bohr, is not a property restricted to minds. His views on this topic are demonstrated by his discussion of relativity. He writes, “The theory of relativity reminds us of the subjective … character of all physical phenomena, a character which depends essentially upon the state of motion of the observer.”
Bohr speaks about relativity a lot and derives much of his philosophy from it without even needing quantum weirdness. Jan Faye talks about this in the SEP article too.

The defining characteristic of Bohr philosophy is not, I find, epistemological subjectivity or observer-dependence, but is illustrated in the following quote from ATDN:
[Quantum mechanics] may be regarded as a natural generalization of the classical mechanics with which in beauty and self-consistency it may well be compared. This goal has not been attained, still, without a renunciation of the causal space-time mode of description that characterizes the classical physical theories which have experienced such a profound clarification through the theory of relativity.

Bohr does not reject causality outright or give measurement or observers any special status they didn't already have. What he does do is deny that we can comprehend or model what an electron does between physical interactions. For Bohr, classical objects and particles exist, and that's all there is to it. But we cannot describe or visualize what they are doing when they aren't in the process of interacting with something (being measured). Discussions of phenomena are only intelligible when specifying the conditions of measurement, so it's simply meaningless to ask about what an electron is doing when it isn't being measured - it's like asking whether apples are red when no one is looking at them.

The sensation of redness requires both an apple and proper measurement conditions that will allow for the redness to manifest itself. Redness requires an apple, proper lighting, and a person who is looking at the apple. An apple is neither red nor not red when a person isn't looking or there isn't proper lighting. At the same time, red apples are real entities. Likewise, electrons with spin are real and basic entities - just don't ask about them without specifying the experimental conditions that will determine the properties they will manifest.

Phenomena and visualizability are also very technical terms in Bohr's writing.

As a side note, one other interesting feature of Bohr's interpretation is that it restores many of our naive/natural ontological beliefs. Redness, a property that we naively consider to be real, was shunned from the realm of intrinsic objective properties by classical physics and demoted to being an extrinsic or even subjective property. If you look at how Bohr treats atomic properties, however, they undergo this same sort of purging from the intrinsic and objective. Bohr denies there is such a thing as an intrinsic, persistent, objective property. Tables and chairs, colors and sounds, meaning, mass, and spin are now all just as real as our naive intuitions would have us believe (if not, admittedly, as real as we naively might want them to be).
 
  • #11


I was always wondering how Bohr drew the line between quantum and classical. He did it once when he developed the first naive model of hydrogen, but I am asking: did he say anything about it later, when QM was a fully developed theory?
 
  • #12


Dmitry67 said:
I was always wondering how Bohr drew the line between quantum and classical. He did it once when he developed the first naive model of hydrogen, but I am asking: did he say anything about it later, when QM was a fully developed theory?

Sure - classical is the old, obsolete, approximate model, and quantum is the correct version of things :smile:.
 
  • #13


kote said:
Sure - classical is the old obsolete approximate model, and quantum is the correct version of things :smile:.

If what you are saying is true, then Bohr was not a proponent of the Copenhagen interpretation, just as Newton was not a proponent of Newtonian mechanics (I've heard he believed in wave-particle duality for light).
 
  • #14


Dmitry67 said:
If what you are saying is true, then Bohr was not a proponent of the Copenhagen interpretation, just as Newton was not a proponent of Newtonian mechanics (I've heard he believed in wave-particle duality for light).

I'd say that's a fair statement. I don't know if I've ever heard a version of the CI that really matched Bohr's philosophy.

There was no demarcation between classical and quantum (or macro and micro) behavior for Bohr. Electrons and turtles both can only manifest their properties in a context-dependent way, depending on the conditions of their interactions with other systems. There's no need to differentiate, since Bohr's electrons were essentially classical except in how they behave causally and how we can talk about them between interactions.

I think it would probably be fair to say that all interactions were classical for Bohr. It's that time between interactions, that we can't talk about, that screws things up. But when two electrons collide, they do so classically, exchanging momentum and changing directions. You just can't infer anything about the next causal step they will take. It doesn't even make sense to say they exist when they aren't interacting with something. You'll never hear Bohr put it that way though, because to do so would be to talk about an electron as a thing that can exist or not independent of the context of its interactions - something he denies we can do.
 
  • #15


kote said:
electrons with spin are real and basic entities

Please explain this in more detail.
Is "spin" real? As far as I know, if "spin" exists (and of course it does in QM), a "spinor" must be used, because "spin" is a very strange thing that I can't understand as a property of a real particle.
See this thread for more.
 
  • #16


ytuab said:
Please explain this in more detail.
Is "spin" real? As far as I know, if "spin" exists (and of course it does in QM), a "spinor" must be used, because "spin" is a very strange thing that I can't understand as a property of a real particle.
See this thread for more.

Easy. Spin is the property of electrons that causes them to go one direction or another in a magnetic field. There isn't really anything else to understand. There is no quantum version of spin that you have to understand for Bohr, since quantum causality is inherently "unvisualizable." From http://plato.stanford.edu/entries/qm-copenhagen/:
Moreover, there is no further evidence in Bohr's writings indicating that Bohr would attribute intrinsic and measurement-independent state properties to atomic objects (though quite unintelligible and inaccessible to us) in addition to the classical ones being manifested in measurement (Faye 1991).
It's certainly arguable whether or not something without intrinsic properties can be considered "real," but for Bohr there was simply no other option. This gets back to the re-equalization of the ontological statuses of different types of properties though. You can reject Bohr's terminology, but within his system it can't be denied that spin is just as real as mass, temperature, color, or any other property, and vice versa.

I'm not familiar enough with the ensemble interpretation anymore to say how all of this compares, but feel free to enlighten me :smile:.
 
  • #17


kote said:
It's certainly arguable whether or not something without intrinsic properties can be considered "real," but for Bohr there was simply no other option. This gets back to the re-equalization of the ontological statuses of different types of properties though. You can reject Bohr's terminology, but within his system it can't be denied that spin is just as real as mass, temperature, color, or any other property, and vice versa.

I'm not familiar enough with the ensemble interpretation anymore to say how all of this compares, but feel free to enlighten me :smile:.
I think the idea is that a system can be said to have a certain property (say a momentum in a specified range) if and only if QM assigns probability 1 to the possibility that a measurement will give us a result that's consistent with that. I think this goes back at least to EPR, who (if I remember correctly) called such a property an "element of physical reality".

Personally, I don't think such a definition adds anything of value to the theory.
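For concreteness, here is a minimal NumPy sketch of that probability-1 criterion for a spin-1/2 system (the helper function and variable names are mine, just for illustration): an eigenstate of [itex]S_z[/itex] assigns probability 1 to one [itex]S_z[/itex] outcome, but only 1/2 to each [itex]S_x[/itex] outcome, so only the former would count as an "element of physical reality" in the EPR sense.

```python
import numpy as np

# Pauli matrices; the spin operators are (hbar/2) times these, but the
# constant factor doesn't change the outcome probabilities.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

up_z = np.array([1, 0], dtype=complex)  # eigenstate of sz with eigenvalue +1

def outcome_probs(state, observable):
    """Born-rule probability for each eigenvalue of `observable`."""
    vals, vecs = np.linalg.eigh(observable)  # eigenvectors are the columns
    return {round(v.real, 6): round(abs(np.vdot(vecs[:, i], state)) ** 2, 6)
            for i, v in enumerate(vals)}

print(outcome_probs(up_z, sz))  # {-1.0: 0.0, 1.0: 1.0} -> a definite value
print(outcome_probs(up_z, sx))  # {-1.0: 0.5, 1.0: 0.5} -> no definite value
```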
 
  • #18


Fredrik said:
I think the idea is that a system can be said to have a certain property (say a momentum in a specified range) if and only if QM assigns probability 1 to the possibility that a measurement will give us a result that's consistent with that. I think this goes back at least to EPR, who (if I remember correctly) called such a property an "element of physical reality".

Personally, I don't think such a definition adds anything of value to the theory.
EPR:
If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity.

The problem with this criterion for Bohr is that systems only manifest real properties during their "disturbance" or interaction. Bohr requires disturbance for the existence of properties, while EPR say something is only real if we know what it is without any interaction or disturbance. If I remember correctly - and I'm not as familiar with this - a lot of what Einstein and Bohr argued about was whether or not we could in principle come up with a theory that would allow properties to meet the EPR criterion. Bohr said it was impossible and Einstein said QM was simply incomplete. Nothing in Bohr's version of things would ever meet the EPR criterion though. Actually, I don't think even Bohm gives you a property that can meet the EPR criterion.

Einstein wants a visualizable causal picture, and Bohr says we can't have one.

I'm not quite sure where I stand, since I want it all like Einstein but I don't think his vision is quite possible. Ah well. It feels a little arrogant to say that in principle we can never come up with something better, so the current statistical theory must be complete. The more QM is studied though, the more I think we might not be able to come up with a better completely context-independent model for things.
 
  • #19


In "Veiled Reality" (1995), Bernard d'Espagnat devotes a section titled "The (So-Called) Ensemble Interpretations".

For this he cites two alternative names: "statistical" and "stochastic" theory.
 
  • #20


Phrak said:
In "Veiled Reality", Bernard d'Espagnat, 1995, he devotes a section titled "The (So-Called) Ensemble Interpretations".
Peres uses the term "ensemble interpretations" too, in the article I linked to in #7. And so does this article:
Truecrimson said:
I posted this in another thread.
http://www.dipankarhome.com/ENSEMBLE%20INTERPRETATIONS.pdf
This one has a lot of interesting information actually. Section 1 starts with this:

Ensemble interpretations of quantum theory contend that the wave function describes an ensemble of identically prepared systems. They are thus in contrast to “orthodox” or “Copenhagen” interpretations, in which the wave function provides as complete a description as is possible of an individual system.
That seems like a reasonable way to define the difference between the two, but I have to say that I don't understand how this is a difference if the only way to reveal those properties is to perform a large number of measurements on an ensemble of identically prepared systems. I think this means that what really distinguishes ensemble interpretations from Copenhagen interpretations is a different interpretation of probability. The fact that the author spends 15 pages discussing interpretations of probability seems to support that idea.

My own opinion is that all this stuff about interpretations of probability is nonsense, or at least unscientific. Science leads directly to the relative frequency view of probability. I'll explain why. Every time we define a probability measure (which is a perfectly well-defined mathematical concept) and claim that it associates probabilities with things in the real world, we're leaving the domain of pure mathematics and entering the domain of science, or pseudo-science (if we fail to meet the requirements of a "theory", which I'm about to define).

I define a theory as a set of statements that associates a probability with each possible result of each experiment in some set of experiments. (I also require that this set is finite, logically consistent, and doesn't contain any statements that can be removed without changing the assignment of probabilities). It's the only definition that makes any sense to me, considering the obvious fact that the only thing experiments can tell us is how accurate a theory's predictions are. How do we find out how accurate the predictions are? By performing repeated measurements on an ensemble.

I define science to be the process of finding new theories and performing experiments to test the accuracy of their predictions. So the relative frequency view of probability is built into the definition of science, or at least into my definition of science. So anyone who supports any other interpretation of probability either has a very different definition of science than I do, or is ignorant of the fact that any attempt to apply mathematics to the real world is either science or pseudo-science.

At this point you may be thinking "What about probabilities assigned to single events?". If the method you use to assign such a probability can't be used to assign probabilities to other single events, the assignment is meaningless, as the method fails to define a theory. If it can assign probabilities to many other single events, then the theory can be (statistically) falsified by repeated experiments.

You may also be thinking "What about the existence of the N→∞ limit?" (The average result of a series of N measurements is often claimed to have some specific value in the limit N→∞). This question is irrelevant, since the axioms of a theory specify relationships between well-defined mathematical quantities and things in the real world that are defined operationally and can therefore never be perfectly well-defined. For example a "clock" is defined by a description in plain English of what that word means. Since the theory isn't well-defined anyway, it doesn't make sense to require that the N→∞ limit must be.
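As a toy illustration of the relative-frequency picture (plain NumPy; the probability value 0.36 and the ensemble sizes are arbitrary choices of mine): a theory assigns a probability to an outcome, and the only way to test that assignment is to watch the relative frequency in ever larger ensembles of identically prepared trials.

```python
import numpy as np

rng = np.random.default_rng(0)
p_theory = 0.36  # the probability our "theory" assigns to some outcome

# Relative frequency in ensembles of increasing size; it scatters around
# p_theory and tightens roughly like 1/sqrt(N).
for n in (100, 10_000, 1_000_000):
    trials = rng.random(n) < p_theory   # n identically prepared experiments
    freq = trials.mean()                # relative frequency of the outcome
    print(f"N = {n:>9}: relative frequency = {freq:.4f}")
```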

I'll quote a few more interesting passages from that article:

For both orthodox and ensemble cases, we have used the plural — “interpretations” — to emphasise that each class contains several variants, and, particularly on the ensemble side, it will be a major task to distinguish between them. The most important division will be between what we have previously called [1] PIV ensemble interpretations — ensemble interpretations with pre-assigned initial values for dynamical variables, and what Gibbins [2, p. 76] has called minimal ensemble interpretations, which carry no such superstructure.

They use the term "PIV" a lot, and it helps to know what they mean by it. Section 5.1 explains it in more detail:

...the PIV assumption, the idea that all observables have values prior to any measurement.

The premeasurement value is available to become the measurement result (though, of course, PIV theorists are not obliged to require that either the initial or any repeated measurement actually gives the PIV).

In section 4.4, they quote Ballentine as saying some things about PIVs in the ensemble interpretation that I find really weird:

His stated position is, “the Statistical Interpretation. . . is completely open with respect to hidden variables. It does not demand them, but it makes the search for them entirely reasonable.”

OK, that one isn't weird by itself, because if (for example) the hidden variables aren't observables, there's no conflict with Bell's theorem. But these statements look like they would very much be in conflict with Bell:

For example, he states [3, p. 361], “a momentum eigenstate. . . represents the ensemble whose members are single electrons each having the same momentum, but distributed uniformly over all positions”. Also on p. 361 of ref. [3], he says, “the Statistical Interpretation considers a particle to always be at some position in space, each position being realized with relative frequency [itex]|\psi(\mathbf r)|^2[/itex] in an ensemble of similarly prepared experiments”. Later [3, p. 379] he states, “there is no conflict with quantum theory in thinking of a particle as having definite (but, in general, unknown) values of both position and momentum”.

I'm very surprised by this. Could it be that in 1970, when this was written, Ballentine still didn't understand Bell's theorem? (Bell's theorem was published in 1964).

And finally, after reading parts of this article, I have come to the following conclusions about the terminology used by the authors of the article and the people they mention. Stapp's definition of the Copenhagen interpretation is essentially equivalent to what Ballentine calls the statistical interpretation and what the authors call an ensemble interpretation. (Ballentine claims that Stapp has redefined the CI in an attempt to save it). Murdoch, on the other hand, defines "the ensemble interpretation" to be an interpretation in which the state vector represents the "state of knowledge of an object". Murdoch defines the statistical interpretation essentially the same way as Ballentine.
 
  • #21


Thanks for the recap Fredrik. I agree with what you've said about probability. Regarding ensemble vs Bohr, I see some subtle differences. Bohr would disagree with PIV or hidden-variable-compatible ensemble interpretations. In order to make his radical claim that objects could both be real and not have any intrinsic objective properties, he required the completeness of his extrinsic-only model.* Completeness implies that there are no hidden, more basic, intrinsic variables to learn about.

I think that Bohr's view was probably similar to a "state of knowledge of an object" view, with one qualification. Bohr didn't allow talk of context-independent properties. He might say that the wave function represents the state of knowledge about what properties an object may manifest when measured in a certain way. He wouldn't say that an electron, a, has a 50% chance of currently being spin up. He might say that a has a 50% chance of being spin up when a measurement occurs. It didn't make sense for him to talk about observables independent of observation.

*Conceptually it was probably the reverse of this. A denial of intrinsicality and visualizability would have led to the thesis that QM is complete, and probably not vice versa as stated.
 
  • #22


Fredrik said:
But these statements look like they would very much be in conflict with Bell:

For example, he states [3, p. 361], “a momentum eigenstate. . . represents the ensemble whose members are single electrons each having the same momentum, but distributed uniformly over all positions”. Also on p. 361 of ref. [3], he says, “the Statistical Interpretation considers a particle to always be at some position in space, each position being realized with relative frequency [itex]|\psi(\mathbf r)|^2[/itex] in an ensemble of similarly prepared experiments”. Later [3, p. 379] he states, “there is no conflict with quantum theory in thinking of a particle as having definite (but, in general, unknown) values of both position and momentum”.

I'm very surprised by this. Could it be that in 1970, when this was written, Ballentine still didn't understand Bell's theorem? (Bell's theorem was published in 1964).

Can you please be more specific? What does Bell's theorem have to do with the quotes from Ballentine? Was Ballentine addressing Bell's theorem in the quoted lines and I'm not getting it, or what?
 
  • #23


kote said:
I think that Bohr's view was probably similar to a "state of knowledge of an object" view, with one qualification. Bohr didn't allow talk of context-independent properties. He might say that the wave function represents the state of knowledge about what properties an object may manifest when measured in a certain way. He wouldn't say that an electron, a, has a 50% chance of currently being spin up. He might say that a has a 50% chance of being spin up when a measurement occurs. It didn't make sense for him to talk about observables independent of observation.

In section 5 of the Home and Whitaker paper they mention something similar:
Before this, though, we should clarify one point. A similar, though less extreme, position to PIVs is where observables do not necessarily “have” values prior to measurement, but the value that any observable will take, if it is measured, is fixed. This would, of course, usually be described as a hidden-variables theory, which one might wish to be local and/or deterministic.
The difference still is that for Bohr there is no preset property of the system, since there are no real properties independent of measurement. You still wouldn't get hidden variables. In this case Bohr is still closer to a subjective knowledge interpretation.
 
  • #24


bigubau said:
Can you please be more specific? What does Bell's theorem have to do with the quotes from Ballentine? Was Ballentine addressing Bell's theorem in the quoted lines and I'm not getting it, or what?
If I haven't misunderstood something fundamental, what he said is exactly the sort of thing that's proved wrong by Bell inequality violations. He said that each member of the ensemble can have a well-defined position and a well-defined momentum at the same time, even though the position and momentum operators don't commute. If he's right about this, then the same thing should hold for any pair of operators that don't commute, in particular any two of the operators [itex]S_x, S_y, S_z[/itex] for a spin-1/2 particle. (I don't think it's really necessary to change the discussion to be about spin components instead of position and momentum, but it will be much easier for me to explain my point if we do).

If there are observables corresponding to two of these operators that have well-defined values at all times, then by rotational invariance, there must exist an observable with a well-defined value for each operator of the form [itex]\vec a\cdot\vec S[/itex], where [itex]\vec a[/itex] is a unit vector. But the assumption that this operator corresponds to an observable that has a well-defined value [itex]a_n[/itex] in ensemble member n, which will be the result of a measurement of [itex]\vec a\cdot\vec S[/itex] on ensemble member n, leads directly to a Bell inequality called the CHSH inequality, which is violated by QM. See pages 215, 216 in Isham's book for a derivation.
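To make that last step concrete, here is a minimal numerical check (not a derivation; it assumes the standard singlet correlation [itex]E(\vec a,\vec b)=-\vec a\cdot\vec b[/itex] and the usual textbook CHSH angles): QM reaches [itex]2\sqrt 2[/itex], exceeding the bound of 2 that any assignment of pre-existing values must satisfy.

```python
import numpy as np

def E(a, b):
    """QM correlation of spin measurements along unit vectors a and b
    for a pair of spin-1/2 particles in the singlet state."""
    return -np.dot(a, b)

def direction(theta):
    """Unit vector in the x-z plane, at angle theta from the z axis."""
    return np.array([np.sin(theta), 0.0, np.cos(theta)])

# Standard CHSH settings: a = 0, a' = 90 deg; b = 45 deg, b' = 135 deg.
a, a2 = direction(0.0), direction(np.pi / 2)
b, b2 = direction(np.pi / 4), direction(3 * np.pi / 4)

# CHSH combination; any pre-assigned-value (PIV) model obeys |S| <= 2.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2.828... = 2*sqrt(2), violating the classical bound
```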
 
  • #25


Fredrik,
I wasn't sure I followed your very interesting argument about how science leads directly to the relative frequency view of probability.
Fredrik said:
My own opinion is that all this stuff about interpretations of probability is nonsense, or at least unscientific. Science leads directly to the relative frequency view of probability.
...
I define a theory as a set of statements that associates a probability with each possible result of each experiment in some set of experiments... It's the only definition that makes any sense to me, considering the obvious fact that the only thing experiments can tell us is how accurate a theory's predictions are. How do we find out how accurate the predictions are? By performing repeated measurements on an ensemble.

I define science to be the process of finding new theories and performing experiments to test the accuracy of their predictions. So the relative frequency view of probability is built into the definition of science, or at least into my definition of science.

I understand the relative frequency view of probability to be the view that probabilities *just are* relative frequencies of actual events - at least, when one is arguing about what probability is, I think this is how the view is understood. I find the view attractive, as it would make probabilities wholly unproblematic, if correct. But I was unable to see how this view followed from your view about science and scientific theories.

Someone (call him P) who believed that probabilities were not just relative frequencies, but were something more fundamental or primitive - irreducible properties of objects or events or whatever (note that I'm not defending this view!) - could, it seems to me, agree with what you say. Yes, theories are just assignments of probabilities to possible results of experiments. Yes, we discover probabilities through repeated experiments and testing, just as you say. But this is just because, given the probabilities, certain actual relative frequencies are the *most likely*. The probabilities are thus inferred on the basis of the frequencies, but not identified with them. The greater the number of experiments done where the proportion holds, the more likely it is that the probabilities of events match that proportion. So we should change and update our beliefs about the probabilities in a rational and scientific way. There's nothing non-empirical going on here and nothing that goes against your definition of science, as far as I can see.

I didn't understand your point about single events. Normally, single events are taken to be a challenge for the view that probabilities just are relative frequencies. The interpretational worry is this: identifying probabilities with actual relative frequencies can seem too strong. A fair coin *can* keep landing heads. The more tosses we do, the less rational it is to believe it's fair. But it still may be that way. We'd have to be awfully unlucky - but that can happen. But if probabilities just are actual relative frequencies, then if the coin lands heads every time, we're forced to say that its chance of heads is 100 percent. That seems wrong. The relationship between frequency and probability does not appear to be one of entailment; the frequencies are evidence for probabilities, not logical proof. For a single event, the problem is very acute - the chance is either 1 or 0 - but if we believe QM, say, we may have very good reason for thinking that the probability is actually between these two values.
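The "evidence, not entailment" point can be made quantitative with a one-liner (a hypothetical sketch; the run length 20 is an arbitrary choice): a fair coin assigns a small but strictly positive probability to any finite all-heads run, so no finite frequency record can logically force a probability assignment.

```python
# Probability that a fair coin (heads probability p) lands heads on
# every one of n consecutive tosses.
def prob_all_heads(n, p=0.5):
    return p ** n

# Strictly positive for every finite n: an all-heads run is strong
# evidence against fairness, but never a logical proof of bias.
p20 = prob_all_heads(20)  # 0.5**20, roughly 9.5e-7
```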

You seemed to say that, unless you could assign probabilities to other single events, you weren't being scientific. But well-established theories do assign probabilities to events irrespective of how often they occur. Imagine some very complex collection of quantum particles in some very peculiar and unusual arrangement. Theoretically, the Schrödinger equation will tell us what the likelihood of a given outcome will be - but maybe the universe only throws up one of these things (yeah, it's big - but there's got to be some arrangement of quantum things it only occasionally throws up). QM assigns a clear probability - if we could solve the equation, this is the probability we *ought* to believe - but it's not the relative frequency.
 
  • #26


Fredrik said:
Bohr and Heisenberg apparently, but I don't think the two of them really agreed about these things.
But Heisenberg didn't live in Copenhagen. Or did he? :confused:

Fredrik said:
Maybe not, but I find it hard to find anything in the ensemble interpretation that I think Bohr would disagree with.
The ensemble interpretation explicitly claims that QM is not a theory of individual particles. I am not sure that Bohr would agree with that.

Fredrik said:
Regarding what Bohr actually advocated, I came across this article by Asher Peres, which I think you will find interesting too considering that you started a thread about Copenhagen a few months ago.
Thanks!
 
  • #27


Fredrik said:
I'm very surprised by this. Could it be that in 1970, when this was written, Ballentine still didn't understand Bell's theorem? (Bell's theorem was published in 1964).
I wouldn't be surprised at all. Not many physicists understood Bell's theorem at that time.

Besides, there are aspects of QM that Ballentine does not understand even today. Even though his modern textbook on QM is my favoured one, his discussion of the quantum Zeno effect in this book is wrong.
 
  • #28


Fredrik said:
My own opinion is that all this stuff about interpretations of probability is nonsense, or at least unscientific.

I see that there is discussion of several details here and I just want to add some more input specifically about the "subjective probability vs frequentists views"

Fredrik said:
I define a theory as a set of statements that associates a probability with each possible result of each experiment in some set of experiments.
...
How do we find out how accurate the predictions are? By performing repeated measurements on an ensemble.

This sounds reasonable, but the problem is that we really do not find ensembles in nature, do we?

Instead, if we can agree (?) that an ensemble is something that is constructed by, or emergent to, an observer that "performs repeated measurements", or equivalently one that just has been "in interaction with" the system in question...

...then the frequentist interpretation that results from this construction, given the constraints of a real observer, does end up dependent on the observer. And after some "interaction time" or repetition, this emergent "frequency" IS observer dependent. So there is no real contradiction between frequentism and subjectivism here. I think the main point is the additional insight that the process of counting and retention is important.

So in my view, the frequentist interpretation is not wrong in any way; it's just that the physical process of counting and retention of counting states IS observer dependent, so the end result IS subjective.

Now, I'm not really talking about scientist 1 counting differently than scientist 2, but even though in principle this can also occur, the process of science will question which is right, and the emergent consensus is what the community as a whole can reproduce.

But an analogous situation can be envisioned at a generic system level, since (as Rovelli's RQM argues nicely) the ONLY way for two systems, observers (or scientists, for that matter), to compare their results is to interact/communicate.

Fredrik said:
So the relative frequency view of probability is built into the definition of science, or at least into my definition of science. So anyone who supports any other interpretation of probability either has a very different definition of science than me, or is ignorant of the fact that any attempt to apply mathematics to the real world is either science or pseudo-science.

I sympathize with you, but suggest that if you look in detail at what your frequentist counting actually means, the physical process of counting and retention of the counts is subjective, or conditional on the observing system.

Counting is very central even to me (as someone often talking about subjective probability), so that isn't the question for me. The question is, if you take a close look at the counting process, it is constrained and biased depending on the "counting device".

/Fredrik
 
  • #29


yossell said:
I understand the relative frequency view of probability to be the view that probabilities *just are* relative frequencies of actual events...
...
I find the view attractive as it would makes probabilities wholly unproblematic, if correct.
I wouldn't say that. I think I should have said that science leads to a relative frequency view of probability rather than the relative frequency view, since there's clearly more than one. If we take your definition of "the" relative frequency view literally, we don't even have an approximate probability until we have performed a large number of identical experiments, and those probabilities wouldn't be predictions about what will happen. They would be statements about what has already happened. (That may not be what you meant to say, but that's how you said it :smile:).

I think the "standard" version claims that an assignment of a probability P to a possible result of an experiment should be interpreted as a counterfactual statement of the form "If we were to perform this exact experiment infinitely many times, the number of times we've had that particular result after N experiments, divided by the total number of experiments N, goes to P as N goes to infinity".

I want to make it clear that I do not support this view. I'll try to explain what my view actually is. Let's start with the definition of probability. A probability measure is a function [itex]\mu:\Sigma\rightarrow[0,1][/itex] that satisfies certain conditions. (The details aren't important here. Look them up if you're interested). A probability is a number assigned by a probability measure. It is just that, and nothing more. This is the definition of what the word means, not that counterfactual relative frequency stuff.
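(For reference, the "certain conditions" are the Kolmogorov axioms: [itex]\mu(E)\geq 0[/itex] for every [itex]E\in\Sigma[/itex]; [itex]\mu(\Omega)=1[/itex], where [itex]\Omega[/itex] is the sample space on which the σ-algebra [itex]\Sigma[/itex] is defined; and countable additivity, [itex]\mu\big(\bigcup_i E_i\big)=\sum_i\mu(E_i)[/itex] for any countable collection of pairwise disjoint sets [itex]E_i\in\Sigma[/itex].)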

Now the question is "What does this have to do with the real world"? The answer is "Nothing". Every time we want to apply mathematics to things in the real world, we're going to need something more than just mathematics. We need an additional set of axioms that tells us which events in the real world these probabilities are assigned to. This set of axioms either meets the requirements of my definition of a theory, or it doesn't. If it does, we're now doing science, not mathematics. If it doesn't, we're doing pseudo-science and we should stop wasting our time.

In science, we have a procedure that lets us distinguish good theories from bad ones, and it involves performing repeated experiments. Unfortunately I have a rather poor understanding of the statistical methods used to analyze the results of experiments, so I won't try to describe that part of the procedure in detail, but I think I need to make at least one comment. I didn't realize this until now, but we need to include the usual rules for probabilities in the axioms of the theory (e.g. that the probability of two independent events both occurring is the product of the probabilities of the individual events), so that the theory can assign probabilities not only to the possible results of one experiment, but also to e.g. the possibility that the average result after N experiments would differ from the expectation value by at least the amount it did. Probabilities like that can be used to assign "scores" to theories, which we can use to distinguish the good theories from the bad ones. The calculation of those scores involves the relative frequencies of the possible results.
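A toy version of such a "score" can be written down explicitly (a hypothetical sketch, not anything from the thread: the theory "p = 0.5" and the numbers 100 and 70 are made up for illustration). It computes the exact probability, under the theory's own probability assignments, of a success count at least as far from the expected value as the one observed:

```python
from math import comb

def binomial_tail(n, k_observed, p):
    """Probability, under a theory assigning success probability p to
    each of n independent trials, of a success count at least as far
    from the mean n*p as k_observed is (two-sided exact binomial)."""
    mean = n * p
    dev = abs(k_observed - mean)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1)
               if abs(k - mean) >= dev)

# A theory says p = 0.5; we observe 70 successes in 100 trials.
score = binomial_tail(100, 70, 0.5)  # a p-value-style "score"
```

A tiny score means the observed frequencies were very improbable according to the theory, which counts against it; note that it is the theory as a whole that gets scored, not any single prediction.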

Again, note that the relative frequency stuff isn't a definition of probability. It's just a part of the standard procedure used to distinguish good theories from bad, and it doesn't require the existence of the N→∞ limit. The definition of probability is purely mathematical and has nothing to do with the real world until we state a set of axioms that defines a scientific theory.

yossell said:
But I was unable to see how this view followed from your view about science and scientific theories.
I hope it will be easier now that I've made it more clear which relative frequency view I'm actually talking about. It's also possible that someone who's more familiar with the philosophical debate about this than I am, wouldn't classify my view as "relative frequency", and instead describe it as "axiomatic". The article by Home and Whitaker dismisses the axiomatic view rather quickly, saying that this view doesn't even state a connection between mathematical probabilities and things in the real world. Duh, that's what theories are for.

yossell said:
Someone (call him P) who believed that probabilities were not just relative frequencies,
I don't find phrases like this meaningful. This guy P seems to think that all useful mathematical concepts have well-defined counterparts in the real world and that mathematics is just a tool to calculate them. (Why else would he be talking about what probability "really is"?) I don't share that view at all. For example, I don't think of a Riemann integral as a way to calculate areas. It's a way to define what we mean by "area" of a region that isn't rectangular. It doesn't make sense to talk about what the area under a curve really is. It is what we have defined it to be.

Note that neither mathematics nor science tells us what something "really is". The fact that experiments can't tell us anything except how accurate a theory's probability assignments are, is a huge limitation of science. We would certainly like to know what things "really are", but there are no methods available to us that can give us that information.

yossell said:
...but were something more fundamental or primitive, irreducible properties of objects or events or whatever (note: I'm not defending this view!), could, it seems to me, agree with what you say.
I agree, but this sort of speculation isn't scientific. If someone has an opinion about what probability "really is", I'm not going to care much about it until he/she has stated it in the form of a theory that assigns my kind of probabilities to possible results of experiments, because that's how science is done.

Edit: I should probably have been more clear about the fact that I'm not trying to explain what probability "really is". That wouldn't even make sense to me, because of how I think of mathematics. What I'm trying to do is to explain how I think of science and mathematics, and how the relationship between them makes all this stuff about interpretations of probability completely pointless. The relationship between science and mathematics makes it natural to define probability as a purely mathematical concept, which is then related to the relative frequencies in a finite ensemble in the real world through the definition of a theory and the empirical methods that are the foundation of science.

yossell said:
Yes, theories are just assignments of probabilities to possible results of experiments. Yes, we discover probabilities through repeated experiments and testing, just as you say. But this is just because, given the probabilities, certain actual relative frequencies are the *most likely*. The probabilities are thus inferred on the basis of the frequencies, but not identified with them.
We're still talking about P's view, right? In that case, I'll just add that I don't find this view illogical or "clearly wrong". I just find it less interesting since it consists of statements about the real world that fail to meet the requirements of a theory.

yossell said:
I didn't understand your point about single events.
..
You seemed to say that, unless you could assign probabilities to other single events, you weren't being scientific. But well established theories do assign probabilities to events irrespective of how often they occur - imagine some very complex collection of quantum particles in some very peculiar and unusual arrangement...
Single predictions can't be classified as good or bad according to the "score" assigned by a series of experiments. The scoring system only applies to the theory as a whole, not to the individual predictions. The situation you describe may be an event that only occurs once, or not even that, but the method you used to calculate that probability is part of a theory that assigns probabilities to many other events as well. That allows us to keep testing its predictions, and to keep adjusting the "score".

yossell said:
QM assigns a clear probability - if we could solve the equation, this is the probability we *ought* to believe - but it's not the relative frequency.
Why should we believe anything? Even if we take probability to be a primitive concept, like a continuous range of truth values between true and false, it seems very strange to associate it with our beliefs.
 
Last edited:
  • #30


Demystifier said:
But Heisenberg didn't live in Copenhagen. Or did he? :confused:
I have no idea. I just know that Wikipedia attributes the Copenhagen interpretation to the two of them. Oh, and the articles I've been talking about do too, but they both consider Bohr to be the main guy and Heisenberg to be a secondary character.

Demystifier said:
The ensemble interpretation explicitly claims that QM is not a theory of individual particles.
Yes, I got that from the first text I quoted in #20. Still, as I said there, I don't see how this is anything more than a different choice of language, considering that the only way to reveal the properties of an individual system is to perform a large number of measurements on an ensemble of identically prepared systems.
 
Last edited:
  • #31


Fredrik said:
... the only way to reveal the properties of an individual system is to perform a large number of measurements on an ensemble of identically prepared systems.
Does it also refer to properties of macroscopic objects which are theoretically well described by classical mechanics? In other words, are you saying that the above is the basis of the scientific method, or that the above is just a peculiar property of quantum phenomena?
 
  • #32


Demystifier said:
But Heisenberg didn't live in Copenhagen. Or did he? :confused:
http://plato.stanford.edu/entries/qm-copenhagen/:
In fact Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics, and none of them ever used the term “the Copenhagen interpretation” as a joint name for their ideas. In fact, Bohr once distanced himself from what he considered to be Heisenberg's more subjective interpretation (APHK, p.51). The term is rather a label introduced by people opposing Bohr's idea of complementarity, to identify what they saw as the common features behind the Bohr-Heisenberg interpretation as it emerged in the late 1920s.

...

Don Howard (2004) argues, however, that what is commonly known as the Copenhagen interpretation of quantum mechanics, regarded as representing a unitary Copenhagen point of view, differs significantly from Bohr's complementarity interpretation. He holds that "the Copenhagen interpretation is an invention of the mid-1950s, for which Heisenberg is chiefly responsible, [and that] various other physicists and philosophers, including Bohm, Feyerabend, Hanson, and Popper, hav[e] further promoted the invention in the service of their own philosophical agendas." (p. 669)

...

The Copenhagen interpretation is not a homogenous view. This is still not generally recognized. Both James Cushing (1994) and Mara Beller (1999) take for granted the existence of a unitary Copenhagen interpretation in their social and institutional explanation of the once total dominance of the Copenhagen orthodoxy; a view they personally find unconvincing and outdated partly because they read Bohr's view on quantum mechanics through Heisenberg's exposition. But historians and philosophers of science have gradually realized that Bohr's and Heisenberg's pictures of complementarity on the surface may appear similar but beneath the surface diverge significantly. Don Howard (2004, p. 680) goes as far as concluding that "until Heisenberg coined the term in 1955, there was no unitary Copenhagen interpretation of quantum mechanics." The term apparently occurs for the first time in Heisenberg (1955).
 
  • #33


Demystifier said:
Does it also refer to properties of macroscopic objects which are theoretically well described by classical mechanics? In other words, are you saying that the above is the basis of the scientific method, or that the above is just a peculiar property of quantum phenomena?
I'm just trying to figure out what it means to say that a state vector "represents the properties of a system" instead of "represents the properties of an ensemble of identically prepared systems". I mean, what's a "property"? There's been some discussion about this already in this thread:
Fredrik said:
I think the idea is that a system can be said to have a certain property (say a momentum in a specified range) if and only if QM assigns probability 1 to the possibility that a measurement will give us a result that's consistent with that. I think this goes back at least to EPR, who (if I remember correctly) called such a property an "element of physical reality".
kote said:
I think that Bohr's view was probably similar to a "state of knowledge of an object" view, with one qualification. Bohr didn't allow talk of context-independent properties. He might say that the wave function represents the state of knowledge about what properties an object may manifest when measured in a certain way. He wouldn't say that an electron, a, has a 50% chance of currently being spin up. He might say that a has a 50% chance of being spin up when a measurement occurs. It didn't make sense for him to talk about observables independent of observation.
These are two different ways to think about "properties", but it seems to me that both of them (independently) imply that a property of a system is also a property of an ensemble, and vice versa. For example, suppose we think of a state vector as a representation of the properties of an individual system, and a property as "if we perform repeated measurements, we will always get a result in the specified range". This forces us to consider an ensemble of identically prepared systems, which can be said to have the property that we had attributed to the individual systems.
 
  • #34


Well, you could envision what the ensemble view means for the individual state by looking at a more familiar classical example from statistical mechanics. Consider something like a gas where you measure statistical quantities macroscopically. You know that if you measure the temperature, it tells you about the average kinetic energy of the atoms. For an individual atom, you can talk about the probability of that atom having a certain kinetic energy. The quantum state in the Copenhagen interpretation is a lot like talking about our individual gas atom: the best I can do is talk about the probability that a particle in that state will have a particular value of a certain observable quantity when I measure it. The ensemble tells me how to predict probabilities for measurements on individual states... the uncertainties in the uncertainty principle are interpreted as standard deviations when looking at an ensemble.
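That last reading can be illustrated with a small simulation (a hypothetical sketch: the 50/50 outcome probabilities are what QM assigns to S_z measurements on spins prepared in an S_x eigenstate; the seed and ensemble size are arbitrary choices):

```python
import random
import statistics

def measure_sz_ensemble(n, seed=1):
    """Simulate S_z measurements (in units of hbar/2) on an ensemble of
    n spins, each prepared in the S_x 'up' state. QM assigns probability
    1/2 to each of the two possible outcomes, +1 and -1."""
    rng = random.Random(seed)
    return [1 if rng.random() < 0.5 else -1 for _ in range(n)]

outcomes = measure_sz_ensemble(10_000)
mean = statistics.fmean(outcomes)     # estimates the expectation value, 0
spread = statistics.pstdev(outcomes)  # estimates the uncertainty, 1
```

The ensemble mean estimates the expectation value ⟨S_z⟩ = 0, and the ensemble standard deviation estimates the uncertainty ΔS_z = 1 (in units of ħ/2) that appears in the uncertainty principle.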
 
  • #35


Just to add, regarding Bohr's treatment of probability in QM, Jan Faye attributes the following view to Bohr (http://plato.stanford.edu/entries/qm-copenhagen/):
The quantum mechanical formalism does not provide physicists with a ‘pictorial’ representation: the ψ-function does not, as Schrödinger had hoped, represent a new kind of reality. Instead, as Born suggested, the square of the absolute value of the ψ-function expresses a probability amplitude for the outcome of a measurement. Due to the fact that the wave equation involves an imaginary quantity this equation can have only a symbolic character, but the formalism may be used to predict the outcome of a measurement that establishes the conditions under which concepts like position, momentum, time and energy apply to the phenomena.

...

Bohr accepted the Born statistical interpretation because he believed that the ψ-function has only a symbolic meaning and does not represent anything real.

Born is probably a good source for what Bohr thought here. I don't think it's much more complicated than what it says above though, and this is probably what we've already been talking about.
 
Last edited by a moderator:
