Can the Born Rule Be Derived Within the Everett Interpretation?

In summary: the paper presents an alternative projection postulate that is consistent with unitary symmetry and with measurements being defined in terms of projection operators. The referees, however, argue that many other probability rules could be implemented in the same way, that it is not clear which one is the "most natural," and that the paper does not add enough to existing criticisms of Deutsch's proposal to justify publication.
  • #1
vanesch
Hi,
A while ago I discussed a paper of mine here, which you can find on the arXiv: quant-ph/0505059
Proponents of the Everett interpretation of Quantum Theory have made efforts to show that to an observer in a branch, everything happens as if the projection postulate were true without postulating it. In this paper, we will indicate that it is only possible to deduce this rule if one introduces another postulate that is logically equivalent to introducing the projection postulate as an extra assumption. We do this by examining the consequences of changing the projection postulate into an alternative one, while keeping the unitary part of quantum theory, and indicate that this is a consistent (although strange) physical theory.
I submitted it to the Royal Society, and I received a notification of rejection, with the following comments from the referees, which might be of interest to those who participated in the discussion. The emphasis is mine.
First referee:
The paper critically assesses the attempt (Proc Roy Soc Lond 1999) by David Deutsch (followed up by various authors in later work) to derive the Born rule within the Everett interpretation via considerations of decision theory.
The author interprets Deutsch as claiming that QM - whether or not the Everett interpretation is assumed - may be decomposed into a unitary part and a "projection postulate" part. He then proposes an "alternative projection postulate" (APP) which, he argues, is compatible with the unitary part of QM but which does not entail the Born rule. He claims that, since his APP is a counterexample, Deutsch's proposal and any variants of it must be rejected.
A very similar project was undertaken by Barnum et al in a paper in Proc Roy Soc Lond in 2000. The author's APP has some mild technical advantages over Barnum et al's proposal, but these do not (in my view) merit a separate paper, especially since neither Barnum et al nor the author are proposing a viable alternative to the PP but simply making a logical point.
More importantly, the post-2000 literature on Deutsch's argument has not attempted to criticize the details of Barnum et al's counterexample. Rather, it has claimed that Barnum et al, treating measurement as a black-box process, misread Deutsch. Deutsch sets out to analyse measurement as one more physical process (realised within unitary dynamics); as such, any rival proposal to the Born rule which is couched (as is the author's) in terms of measurement observables taken as primitive will not be relevant within the context of the Everett interpretation.
It is fair to say that this point was somewhat obscure in Deutsch's 1999 paper, but it has been made explicitly in subsequent discussions, including some (by Wallace and Greaves) which the author cites. However, the author does not engage with this issue but continues to work in the Barnum et al tradition without further discussion.
In conclusion: if the Barnum et al framework is valid then the author's paper does not seem to add sufficiently to their existing criticisms of Deutsch to justify publication. And if it is not valid, then it is at best unclear how the author's paper relates to Deutsch.
The second referee:
The paper reviews an alternative projection postulate (APP) and contrasts it with the standard projection postulate (PP). Under the APP probabilities are uniform, instead of being proportional to the relative measure of vector components. APP is shown to be consistent with unitary symmetry and with measurements being defined in terms of projection operators, and it agrees with PP regarding results predicted with certainty. The paper also does a decent job of describing some of the strange empirical results that APP predicts. The main point, that we must rely on empirical data to favor PP over APP, is worth making.
The paper, however, purports to do more than this. The abstract and introduction claim to deal a blow to the Everett programme, by showing that "there is no hope of deriving the PP directly from the rest of the machinery of quantum theory." Beyond the review of APP described above, however, the paper itself says very little about this subject. The introduction ends by promising "we will then examine where exactly it is in disagreement with Deutsch's `reasonable assumptions,' or with Gleason's theorem." But the section at the end of the paper that is supposed to make good on this promise consists of only thirteen lines -- far too little to provide much exact examination.
Worse, the paper does not mention or discuss any of the many other approaches that have been suggested for deriving the PP from the rest of quantum theory, within the Everett programme. The paper claims "APP is in fact the most natural probability rule that goes with the Everett interpretation: on each `branching' of an observer due to a measurement, all of its alternative `worlds' receive an equal probability." However, many authors do not accept that equal probability per world is the most natural. Furthermore, many other authors do accept an equal probability rule, but then try to derive the PP from it, instead of the APP. For example, the review article at http://plato.stanford.edu/entries/qm-manyworlds/ says
"Another idea for obtaining a probability law out of the formalism is to state, by analogy to the frequency interpretation of classical probability, that the probability of an outcome is proportional to the number of worlds with this outcome. This proposal immediately yields predictions that are different from what we observe in experiments. Some authors, arguing that counting is the only sensible way to introduce probability, consider this to be a fatal difficulty for the MWI, e.g., Belifante 1975. Graham 1973 suggested that the counting of worlds does yield correct probabilities if one takes into account detailed splitting of the worlds in realistic experiments, but other authors have criticized the MWI because of the failure of Graham's claim. Weissman 1999 has proposed a modification of quantum theory with additional non-linear decoherence (and hence even more worlds than standard MWI), which can lead asymptotically to worlds of equal mean measure for different outcomes."
(Hanson 2003, which you incorrectly cite as discussing the Deutsch proof, is another such attempt.)
I cannot recommend the paper for publication as it is, but I can hold out hope that the author could make an acceptable revision. Such a revision could simply be a review of the APP, including its implications. Such a review should mention many of the previous authors who have considered such a postulate. Alternatively, a revision could critique some of the attempts to derive the PP from quantum theory.
To accomplish this second goal, the author must first choose a set of previous papers to respond to. (It may not be feasible to respond to all previous papers on this topic.) Second, the author must explain exactly where their purported demonstration is claimed to fail: that is, at what point a key assumption of theirs goes beyond the basic machinery of quantum theory. Third, the author must explain why this key assumption is no more plausible than simply assuming the PP directly. This is what it would take to successfully show that such attempts to derive the PP from the machinery of quantum theory have failed.
Here are a few minor comments. The paper switches its notation from APP to AQT and from PP to SQT, for no apparent reason. It would make more sense to stick with one notation. Also, as there may be other alternatives proposed someday, it might be better to call the APP a "uniform projection postulate" (UPP). Finally, the title should refer more specifically to this alternative projection postulate, however named.
On a personal note, although this paper was a bit outside of my field and thus "for fun", in my field too, I had several rejections of similar kind, which always make me think that the referee has missed the point I was trying to make (which must be due to the way I wrote it up, somehow).
The only point I tried to make was a logical one, as seems to be recognized only by the first referee, but then he seems to miss the point that, at the end of the day, we want a theory that spits out results that are given by the PP, whether or not we take that "as primitive". So I don't see why considering the PP "as primitive" makes the reasoning "not relevant". The second referee seems to have understood this (that we have to rely on empirical data to endorse the PP), but he seems to have missed that I was making a logical claim, and concentrates on my minor remark that "this APP seems to be the most natural probability rule going with MWI".
The very fact that some have tried to MODIFY QM by introducing non-linear decoherence is *exactly what I claim*: that you need an extra hypothesis on top of unitary QM if you want to derive the PP. Finally, the proposed revision, namely to limit myself to the consequences of the APP, takes away the essential point of the paper, which is simply this: since two different probability rules, the APP and the PP, are both compatible with unitary QM, you cannot derive the PP logically from unitary QM without introducing an extra hypothesis.
The only truly valid critique I find here is the first referee's point that my paper is not sufficiently different from Barnum's paper (which I was not aware of) - which is of course a valid reason for rejection (and which I emphasised in red).
Most other points seem to miss the issue of the paper, I have the impression, and focus on details which are not relevant to the main point made. This often happens to me when I receive referee reports. Do others also have this impression, or am I such a terrible author ?
 
  • #2
All attempts to derive the PP from unitary theory are condemned to failure.

It is a simple mathematical (and physical) question. The information contained in a non-unitary evolution operator is richer than the information contained in a unitary one. 'More' cannot be derived from 'less'.

It took nearly 100 years for physicists to understand that the measurement problem CANNOT be solved within the QM of closed systems. For 50 years or so there has been intensive research into open systems and decoherence. Finally, decoherence has reached a dead end.

I expect that in some 100 or 200 years physicists will understand that the old unitary Schrödinger equation is an approximation to realistic non-unitary evolutions.

In fact, in some other fields this has been known for decades...

See page 17 of

Nobel Lecture, 8 December, 1977

http://nobelprize.org/chemistry/laureates/1977/prigogine-lecture.pdf

The measurement process is an irreversible process generating entropy. QM conserves entropy and is reversible; therefore QM cannot explain the PP without invoking it as an additional postulate. But then one is invoking a postulate that violates other postulates of the axiomatic structure, making QM both incomplete and internally inconsistent.
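As a purely illustrative toy check (my own sketch, not taken from Prigogine's lecture or from any of the papers discussed here; the random state and Hamiltonian are made up), one can verify numerically that unitary evolution leaves the von Neumann entropy of a density matrix unchanged, while a non-selective projective measurement generally increases it:

Code:
import numpy as np
from scipy.linalg import expm

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

rng = np.random.default_rng(0)

# A mixed 2x2 density matrix (random, normalized).
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Unitary evolution U = exp(-iH) for a random Hermitian H.
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (H + H.conj().T) / 2
U = expm(-1j * H)
rho_unitary = U @ rho @ U.conj().T

# Non-selective measurement in the computational basis: rho -> P0 rho P0 + P1 rho P1.
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])
rho_measured = P0 @ rho @ P0 + P1 @ rho @ P1

print("S(rho)               =", von_neumann_entropy(rho))
print("S(U rho U^dagger)    =", von_neumann_entropy(rho_unitary))   # equal to S(rho)
print("S(after measurement) =", von_neumann_entropy(rho_measured))  # >= S(rho)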
 
  • #3
vanesch said:
Most other points seem to miss the issue of the paper, I have the impression, and focus on details which are not relevant to the main point made. This often happens to me when I receive referee reports. Do others also have this impression, or am I such a terrible author ?

I always find that I have to ask an "outsider" to read my manuscript before I submit it. This is because what I find to be rather obvious really isn't obvious to others. Authors have a clear idea in their heads of what they're writing. Other people don't. So we tend to write things as if the reader already has an insight into our punch line. If you find that most people seem to miss the main point you're trying to make, chances are that you are not emphasizing it in the clearest fashion. This has happened even to the best of us.

I find that the most effective means to emphasize the main points I'm trying to get across is by clearly numbering them. I've been known on here to list the points one at a time:

(i) Point 1

(ii) Point 2

.. etc. Unfortunately, if you're writing to PRL, or trying to save publication costs, those take up a lot of valuable space, so I have also listed them inline. As a referee, I also find them to be easier to focus on. I can easily go back and look at them again while I'm reading the rest of the paper to keep reminding myself that these are the points the authors are trying to make. It is no secret that most of us start a paper by reading the abstract, intro, and conclusion first (well, I certainly do). So you have to keep in mind that you literally have to reveal to the reader, in the most direct way, the message you are trying to get across in those sections of the paper.

Zz.
 
  • #4
ZapperZ said:
I always find that I have to ask an "outsider" to read my manuscript before I submit it. This is because what I find to be rather obvious, is really isn't. Authors have a clear idea in their heads what they're writing. Other people don't. So we tend to write things as if the reader already has an insight into our punch line.
This reminds me of a simple paper I wrote once with a student, about how the signal generating process should be included in a reliable simulation of the behaviour of the front end electronics of a neutron detector, because assuming that the detector just "sent out a delta-pulse" was giving results which deviated by a factor of 2 from observations, while including the signal formation did predict this factor 2. I found this maybe worth publishing - even though not big news - because other papers omitted exactly that: they only took into account the electronics, and supposed a deltafunction for the signal coming from the detector (which might have been justified in their application, no matter - but it was not mentioned in their papers).
So I carefully described the setup, and gave an explicit calculation of how the signal was generated in the detector, to show that this was the relevant part which allowed us to explain the discrepancy of a factor of 2. My point being that it was necessary to include this part in the description.
I got a rather nasty referee report, in which he explained to me that I must be pretty naive to think that I was the first one explaining how signals were generated in radiation detectors :bugeye:
 
  • #5
vanesch said:
This reminds me of a simple paper I wrote once with a student, about how the signal generating process should be included in a reliable simulation of the behaviour of the front end electronics of a neutron detector, because assuming that the detector just "sent out a delta-pulse" was giving results which deviated by a factor of 2 from observations, while including the signal formation did predict this factor 2. I found this maybe worth publishing - even though not big news - because other papers omitted exactly that: they only took into account the electronics, and supposed a deltafunction for the signal coming from the detector (which might have been justified in their application, no matter - but it was not mentioned in their papers).
So I carefully described the setup, and gave an explicit calculation of how the signal was generated in the detector, to show that this was the relevant part which allowed us to explain the discrepancy of a factor of 2. My point being that it was necessary to include this part in the description.
I got a rather nasty referee report, in which he explained to me that I must be pretty naive to think that I was the first one explaining how signals were generated in radiation detectors :bugeye:

I think it highly depends on WHERE you sent that in. If you sent it to, let's say, PRL, then I'd say you might get something like that. However, journals like EJP, or AJP, routinely publish pedagogy and techniques, especially when it is something relevant in physics education, be it at the undergraduate or graduate level.

I don't know what you submitted that paper to, but honestly, where you send your manuscript is almost as important as what you wrote.

Zz.
 
  • #6
ZapperZ said:
I don't know what you submitted that paper to, but honestly, where you send your manuscript is almost as important as what you wrote.
Zz.

It was Nuclear Instruments and Methods, quite appropriate, I'd think :smile:
 
  • #7
vanesch said:
It was Nuclear Instruments and Methods, quite appropriate, I'd think :smile:

Well, I'm not sure about that.

NIM is supposed to be a journal on new techniques, or an improvement of a technique. Your paper, from your description, is simply clarifying the missing piece that isn't commonly mentioned. In other words, there's nothing new or a new extension on an existing technique. If this is the case, then the referee was right to ask whether you really think that what you're describing is not already known.

I still think AJP or EJP might have been more suitable. You could emphasize the point that what you're describing is important and often omitted in the details of the experiment being reported in many papers that use the same technique. Such a paper would have been appropriate for those two journals.

Zz.
 
  • #8
ZapperZ said:
Well, I'm not sure about that.
NIM is supposed to be a journal on new techniques, or an improvement of a technique. Your paper, from your description, is simply clarifying the missing piece that isn't commonly mentioned. In other words, there's nothing new or a new extension on an existing technique.

This is an interesting comment! Nobody ever put it that way before, and it explains several other problems I had with NIM; indeed, each time I erred more on the explanatory side than on the "here's a new method" side, I got rebuffed (or was asked to remove or reduce the explanatory part and to emphasize the practical application). It is true that among my colleagues, I'm by far the most "explanation" oriented.
 
  • #9
vanesch said:
This is an interesting comment! Nobody ever put it that way before, and it explains several other problems I had with NIM; indeed, each time I erred more on the explanatory side than on the "here's a new method" side, I got rebuffed (or was asked to remove or reduce the explanatory part and to emphasize the practical application). It is true that among my colleagues, I'm by far the most "explanation" oriented.

I tend to be quite verbose too in some of the things I write. But as a referee, when I pick up a paper that I'm reviewing, I would like to be hit right off the bat with the punch line. Tell me in no uncertain terms what you are trying to say here, and why it is important. I tend to pay attention to statements such as these:

"To be best of our knowledge, this is the first report on... "

"This results contradicts an earlier report..."

"This is the most accurate result so far on... "

"This is a new result... "

etc. These should be either in the abstract, or somewhere in the intro or the first two paragraphs. If not, I will lose track of what you're trying to say, or why it is important. (Ironically, I've just finished reposting in my Journal an article I wrote a while back titled "It may be interesting, but is it important?") :)

If you write a paper in such a way that the referee has to put an effort to find the point you are making, or why it is important, then you are just making it more difficult for that referee to recommend your paper to be published. It is that simple.

Zz.
 
  • #10
vanesch said:
Hi,
A while ago I discussed here about a paper I wrote, which you can find on the arxiv: quant-ph/0505059
I submitted it to the Royal Society, and I received a notification of rejection ...

Drat! I have my follow-up to your paper nearly ready for submission. Every weekend for the past several weeks now, I've told myself I'm going to make the final revisions and send it out, and then I run across something else that I need to incorporate. Like the Weissman paper, for instance ... In fact, I should probably make it at least evident that I'm aware of Weissman, Deutsch, Barnum, Hanson, and all the other authors mentioned in the reviews.

So Patrick, do you think you're going to resubmit? I hope you do - I think (obviously) that it is a very important topic. I'll try to throw out my comments on the reviewers' comments on this thread, as I go through the literature (may be a slow process ...)

BTW, does it normally take that long for review? Hasn't it been, what, 5 months?

David
 
  • #11
straycat said:
... In fact, I should probably make it at least evident that I'm aware of Weissman, Deutsch, Barnum, Hanson, and all the other authors mentioned in the reviews.
So Patrick, do you think you're going to resubmit? I hope you do - I think (obviously) that it is a very important topic.

First I'll check out the Barnum paper. If (according to referee 1) my paper contains the same argument as his, well, I conclude that 1) I'm not in bad company (just 5 years late) and 2) I won't resubmit.

If not, well, I wouldn't really know where to submit. Maybe foundations of physics.

BTW, does it normally take that long for review? Hasn't it been, what, 5 months?
David

It's usually a bad sign when it takes that long. But it is strongly dependent on the journal. Some journals first ask one referee, and if that one doesn't give a positive response, they ask a second one for a second opinion. Others do it in parallel.
 
  • #12
vanesch said:
First I'll check out the Barnum paper. If (according to referee 1) my paper contains the same argument as his, well, I conclude that 1) I'm not in bad company (just 5 years late) and 2) I won't resubmit.

A review article might not be such a bad idea. You could review the motivation behind the APP, review the various attempts to implement it, and perhaps include your own contribution in a separate section.

vanesch said:
If not, well, I wouldn't really know where to submit. Maybe foundations of physics.

What exactly is the reputation of FoP? Is it a lesser tier than the Royal Society?

DS
 
  • #13
Hey Patrick,

I've been trying to make sense of some of the comments made by your first referee:

vanesch said:
A very similar project was undertaken by Barnum et al in a paper in Proc Roy Soc Lond in 2000. The author's APP has some mild technical advantages over Barnum et al's proposal, but these do not (in my view) merit a separate paper, especially since neither Barnum et al nor the author are proposing a viable alternative to the PP but simply making a logical point.

More importantly, the post-2000 literature on Deutsch's argument has not attempted to criticize the details of Barnum et al's counterexample. Rather, it has claimed that Barnum et al, treating measurement as a black-box process, misread Deutsch. Deutsch sets out to analyse measurement as one more physical process (realised within unitary dynamics); as such, any rival proposal to the Born rule which is couched (as is the author's) in terms of measurement observables taken as primitive will not be relevant within the context of the Everett interpretation.

It is fair to say that this point was somewhat obscure in Deutsch's 1999 paper, but it has been made explicitly in subsequent discussions, including some (by Wallace and Greaves) which the author cites.

I looked at one of Greaves' papers, "Understanding Deutsch's probability in a deterministic multiverse" which is archived at the PhilSci archives at http://philsci-archive.pitt.edu/archive/00001742/ . Section 5.1 "Measurement neutrality" and section 5.2: "Measurement Neutrality versus Egalitarianism" really helped me to understand the above point. Basically, Greaves explains that one of the essential assumptions in Deutsch-Wallace decision theory is the postulate of "measurement neutrality," which is "the assumption that a rational agent should be indifferent between any two quantum games that agree on the state |phi> to be measured, measurement operator X and payoff function P, regardless of how X is to be measured on |phi>." afaict, this means that if we think of the measurement process as a "black box," then Deutsch assumes that a rational agent should in principle be indifferent to the details of the innards of this black box.

In sec 5.2, Greaves very clearly argues that measurement neutrality automatically *excludes* the APP (where the APP = egalitarianism) as a possible probability rule. Therefore, measurement neutrality, as innocuous as it may appear at first glance, is not so innocuous at all.

I've referenced Greaves (among others) in the revised introduction to my paper [1] on the probability interpretation of the MWI. I'm glad you posted your referee comments, Patrick -- they've helped me on my paper!

-- David

[1] To be submitted to Foundations of Physics Letters -- latest draft available at
http://briefcase.yahoo.com/straycat_md
Folder: Probability interpretation of the MWI
archived (slightly older) versions also at:
http://philsci-archive.pitt.edu/
http://www.sciprint.org/
 
  • #14
straycat said:
which is "the assumption that a rational agent should be indifferent between any two quantum games that agree on the state |phi> to be measured, measurement operator X and payoff function P, regardless of how X is to me measured on |phi>."

Yes, that's exactly the point. As I showed in my paper, that's NOT the case with the APP (as I explicitly show with the example of X and Y, where one is a refinement of the other): there the probabilities depend on the context (on the other variables that are being measured).
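To make that concrete, here is a minimal numerical sketch (my own toy illustration, not code from the paper; the state and the measurement basis are made up): the Born rule (PP) gives the "e1" outcome the same probability whether or not the complementary outcome is refined, while the APP does not.

Code:
import numpy as np

def pp_probs(psi, subspaces):
    """Born rule: probability = squared norm of the projection onto each eigenspace."""
    return [float(np.linalg.norm(P @ psi) ** 2) for P in subspaces]

def app_probs(psi, subspaces):
    """Alternative projection postulate: uniform over the outcomes (eigenspaces)."""
    n = len(subspaces)
    return [1.0 / n] * n

# Normalized state in C^3, written in the measurement basis e1, e2, e3.
psi = np.array([0.8, 0.36, 0.48])
psi = psi / np.linalg.norm(psi)

e = np.eye(3)
proj = lambda *vecs: sum(np.outer(v, v) for v in vecs)

# Coarse observable X: outcomes {e1} vs {e2, e3}.
coarse = [proj(e[0]), proj(e[1], e[2])]
# Refined observable Y: outcomes {e1}, {e2}, {e3}.
fine = [proj(e[0]), proj(e[1]), proj(e[2])]

print("PP,  coarse:", pp_probs(psi, coarse))   # P(e1 outcome) = |<e1|psi>|^2 either way
print("PP,  fine  :", pp_probs(psi, fine))
print("APP, coarse:", app_probs(psi, coarse))  # P(e1 outcome) = 1/2
print("APP, fine  :", app_probs(psi, fine))    # P(e1 outcome) = 1/3  -> contextual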


In sec 5.2, Greaves very clearly argues that measurement neutrality automatically *excludes* the APP (where the APP = egalitarianism) as a possible probability rule. Therefore, measurement neutrality, as innocuous as it may appear at first glance, is not so innocuous at all.

Ok, that's exactly my argument too. So I have some extra homework to do, with this as a reference.

Thanks for pointing that out!
 
  • #15
Juan wrote:

All attempts to derive the PP from unitary theory are condemned to failure.

I cannot agree with that statement, although I recognize a conceptual difficulty there.
For me, this problem is similar to the problem of irreversibility seen from the classical mechanics point of view. Non-unitary evolution might be a good approximation (maybe even *exact*!) when an interaction with a huge system (huge freedom) is involved.

My favorite example is the decay of atomic states: clearly the interaction of the discrete atomic system with the continuum system of electromagnetic radiation brings about the decay. This decay is very conveniently represented by a "non-Hermitian" Hamiltonian: this allows modeling of an atom (for the Stark effect e.g.) without including the whole field. This correctly represents reality, although the fundamental laws are unitary.

For many people, the interaction with a 'classical' or 'macroscopic' system is all that is needed to derive the PP. I think this is the most probable explanation for the PP. Landau considered this so obvious that it comes in the first chapters in his QM book.
 
  • #16
lalbatros said:
Juan wrote:
I cannot agree with that statement, although I recognize a conceptual difficulty there.
For me, this problem is similar to the problem of irreversibility seen from the classical mechanics point of view.

The irreversibility in classical statistical mechanics comes about from the very specific initial condition, which is highly improbable.

Non-unitary evolution might be a good approximation (maybe even *exact*!) when an interaction with a huge system (huge freedom) is involved.

I don't see how this can come about. The Hamiltonian gives rise to a unitary evolution operator, no matter how complicated the system. In particular, the EM radiation field can always be considered as a discrete system with a huge number of degrees of freedom (it shouldn't make any difference whether you put your system in a box one hundred billion light years across or not, should it?).

My favorite example is the decay of atomic states: clearly the interaction of the discrete atomic system with the continuum system of electromagnetic radiation brings about the decay. This decay is very conveniently represented by a "non-Hermitian" Hamiltonian: this allows modeling of an atom (for the Stark effect e.g.) without including the whole field. This correctly represents reality, although the fundamental laws are unitary.

No, it is a shortcut, in which you already *apply* the PP in its derivation.

For many people, the interaction with a 'classical' or 'macroscopic' system is all that is needed to derive the PP. I think this is the most probable explanation for the PP. Landau considered this so obvious that it comes in the first chapters in his QM book.

This is the standard "explanation". But it is *postulated* and not *derived* from unitary QM. What qualifies a system as "macroscopic" and "classical" (without making the reasoning circular), and why shouldn't it obey standard quantum theory?
Or is there an upper limit to the number of Hilbert spaces in the product (the number of particles) beyond which the exponential of a Hermitian operator suddenly stops being unitary?
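Just to illustrate that point numerically (a quick sketch of my own, with randomly generated "Hamiltonians"; nothing here comes from the papers under discussion): exp(-iH) is unitary for a Hermitian H of any size, and nothing changes qualitatively as the number of degrees of freedom grows.

Code:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

for dim in (2, 16, 256):
    H = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    H = (H + H.conj().T) / 2                      # Hermitian "Hamiltonian"
    U = expm(-1j * H)
    deviation = np.linalg.norm(U @ U.conj().T - np.eye(dim))
    print(f"dim = {dim:4d}   || U U^dagger - I || = {deviation:.2e}")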
 
  • #17
The APP and extremum principles

Hey Patrick et al,

I'm posting an idea on this thread that has occurred to me on a potential consequence of the APP.

Suppose that Alice is doing two Aspect-like experiments, one with Bob, and another simultaneously with Bert. Alice and Bob are 1 km apart, and Bert is 0.1 km farther away than Bob. Otherwise the experiments are the same, done at the same time. Bob and Bert flash the results of their measurements to Alice as soon as they get them. Before Alice receives these messages (which we suppose travel at the speed of light), Bob and Bert each exist in a superposition of the "B-- sees up"/"B-- sees down" state. Because of the general relativistic restriction on the speed of light, from the point of view of Alice, Bob's state will collapse prior to Bert's state. Pretty elementary.

The point I wish to make is that relativity imposes a restriction on the order with which collapse occurs, from the point of view of Alice. So let's take this point and extrapolate. Suppose now that we have an observer Amandra who observes some variable X characteristic of a particle. But imagine that the value of X is not communicated to Amandra all at once, but rather in little chunks. That is, suppose that X_min is the lowest allowable value of X, and that it is quantized, ie it takes values in [X_min, X_min +1, X_min + 2, ...]. Imagine furthermore that Amandra's observation of X comes in a series of steps, like this: she observes either X = X_min, or X \in [X_min+1, X_min +2, ...]; if the latter, she next observes either X = X_min + 1 or X \in [X_min+2, X_min +3, ...]; if the latter, she next observes either X = X_min + 2, or X \in [X_min+3, X_min +4, ...]; and so on. If you draw the MWI-style world-splitting diagram to characterize this process **and apply the APP**, then it is apparent that lower possible values of X are *more probable* than higher values. In effect, X is MINIMIZED. We could equally well suppose that X might be maximized, if the information regarding the value of X were propagated to Amanda in the reverse order.
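Here is a little sketch of that splitting scheme (my own toy model, under exactly the assumptions above; the cut-off at a finite number of allowed values is just for illustration), showing the probabilities the APP would assign.

Code:
def app_sequential_probs(n_values):
    """APP probabilities for X in {X_min, X_min+1, ..., X_min+n_values-1},
    revealed one threshold at a time: each binary split gets weight 1/2."""
    probs = []
    remaining = 1.0
    for k in range(n_values - 1):
        probs.append(remaining / 2)   # branch "X = X_min + k"
        remaining /= 2                # branch "X > X_min + k" splits further
    probs.append(remaining)           # last allowed value takes the leftover weight
    return probs

for k, p in enumerate(app_sequential_probs(6)):
    print(f"P(X = X_min + {k}) = {p:.4f}")
# Output: 0.5, 0.25, 0.125, ... -- lower values of X dominate, i.e. X is "minimized".
# Revealing the information in the reverse order would instead favor the highest values.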

So here's the Big Idea: the APP, perhaps, offers a mechanism by which Nature enforces various extremum laws. If X is the action, for instance, then we have the principle of least action.

What d'ya think?

David
 
  • #18
straycat said:
That is, suppose that X_min is the lowest allowable value of X, and that it is quantized, ie it takes values in [X_min, X_min +1, X_min + 2, ...]. Imagine furthermore that Amandra's observation of X comes in a series of steps, like this: she observes either X = X_min, or X \in [X_min+1, X_min +2, ...]; if the latter, she next observes either X = X_min + 1 or X \in [X_min+2, X_min +3, ...]; if the latter, she next observes either X = X_min + 2, or X \in [X_min+3, X_min +4, ...]; and so on. If you draw the MWI-style world-splitting diagram to characterize this process **and apply the APP**, then it is apparent that lower possible values of X are *more probable* than higher values. In effect, X is MINIMIZED.

Yes, that is the lack of non-contextuality the APP suffers from, and which I tried to make clear in my paper...

The problem I see with your approach is of course, that if we now get the information *in the other way* (she first gets the biggest intervals, not the lowest ones) we would make the HIGHER values of X more probable. And if we apply yet another order, we'd make *those* values more probable... It's hard to make sense of a measurement theory that is not non-contextual...
 
  • #19
vanesch said:
Yes, that is the lack of non-contextuality the APP suffers from, and which I tried to make clear in my paper...

The problem I see with your approach is of course, that if we now get the information *in the other way* (she first gets the biggest intervals, not the lowest ones) we would make the HIGHER values of X more probable. And if we apply yet another order, we'd make *those* values more probable... It's hard to make sense of a measurement theory that is not non-contextual...

Umm, so is the glass half full, or half empty? :confused:

It is certainly true that if you play around with the "order" with which information regarding X is progagated to the observer, you can make the most probable value of X come out to be, well, *anything*. But I would argue that Nature would not be so fickle as to do it differently every time! There *must* be some rules that govern how any given variable type is propagated. And the reason that no one has figured out these rules is that over the last 80 years, the most brilliant minds in physics have spent a total of, I dunno, 10 seconds contemplating this problem. How many papers exist that even investigate the APP in the first place? Not many that I've found. And I've never seen any paper to suggest that a minimization principle could arise from it. It's fertile ground, uninvestigated, imho. (That's why I wrote my paper!)

So here's how the glass is half full: by playing around with how the tree diagram (ie the MWI world-splitting diagram) is constructed, there must be a way to make quantum statistics pop out. The only extra postulates needed for this to work will be whatever is needed for the tree diagram to take whatever configuration produces the right statistics. Then we have to explain *why* the tree diagram is constructed the way it is. The point is that this ought to be doable without postulating any extra unphysical assumptions, or doing damage to some aspect of the essential ontology of the MWI, as is the problem imho with all other attempts so far that I have seen to make the APP work. (I discuss all of this in my paper.) So this should be seen as an *opportunity*, not a roadblock!

Consider this. Suppose X represents some global property of an extended (not pointlike) object, like the total mass of a composite particle. At time t_0 (in Amanda's FoR), the particle is outside of Amanda's past light cone - she has not yet observed it. At time t_2, the entire particle is in the past light cone -- she has observed it. Since the particle has spatial extension, there must be some intermediate time t_1 such that the particle is partially in Amanda's light cone -- ie, she has observed part of it, but not all of it. So she does not observe X all at once; she receives info regarding X over some small but nonzero amount of time. Her observation of the mass of the particle might be analogous to a person's observation of the size of a mountain, which rises slowly over the horizon as you are driving down a road. First you see the tip, and you know the size is at least so-big. Then you see some more, so you know the size is at least that much bigger. And so on, until you see the entire mountain. Applying the APP to this would amount to a "minimization" of the size of the mountain.

Note that in the previous scenario, relativity plays a central role in figuring which pieces of information have or have not yet reached the observer. Thus, GR *must* play a central role in determining the configuration of the tree-diagram. So if the tree diagram configuration gives rise to quantum statistics, and relativity gives rise to the tree diagram, then voila! we have derived QM from GR.

Now I would argue that the line of reasoning that brings us to this point is *inescapable*. First off, you have to decide whether you like the APP. And I know that deep down, Patrick, you *know* the APP is better than the Born rule. Search your feelings, young Jedi, you know it to be true. :devil: Next, it is impossible to discount the above relativity-based argument in the analysis of the observation of a variable that is not encoded in any single point of spacetime, but rather is a global property of a *spatially extended* object. (If you think a particle is truly point-like, then just consider a composite particle in the above analysis.) The only questions that remain: what are the "rules" that govern the configuration of the tree diagram? We know that GR must be involved in establishing these rules. And: how do quantum statistics pop out? We know that they do, somehow, some way. So buried down deep here is a connection between GR and QM that no one has figured out, because smart people like Deutsch are wasting their time with things like decision theory that will get us all nowhere. :cry: But we can be smarter than that!:smile:

How can you *not* be seduced by this line of reasoning?

David
 
  • #20
I moved all posts related to the "arrow of time" from this thread into the "arrow of time" thread...
 
  • #21
vanesch said:
First I'll check out the Barnum paper. If (according to referee 1) my paper contains the same argument as his, well, I conclude that 1) I'm not in bad company (just 5 years late) and 2) I won't resubmit.
Well, I finally checked the Barnum paper; it is available here:
http://www.journals.royalsoc.ac.uk/media/n9kmrgwwyr8226vrfl1y/contributions/a/r/x/p/arxp7pkn699n63c9.pdf
I really don't see what the referee meant when he said that we had identical critiques. This is a totally different critique of Deutsch's work in particular, where the authors try to establish that Gleason's theorem ALREADY ESTABLISHED Deutsch's results.
Nowhere do I see an indication of the kind of axiomatic reasoning (which I tried to do in my paper quant-ph/0505059) in which I try to show that Gleason, as well as Deutsch and others, makes an extra assumption that eliminates the APP.
So I think I will rethink the way I wrote up my paper in light of the referees' comments - which I now see more as the result of my not having been clear enough about what I wanted to state - and resubmit it somewhere else.
 
  • #22
straycat said:
In sec 5.2, Greaves very clearly argues that measurement neutrality automatically *excludes* the APP (where the APP = egalitarianism) as a possible probability rule. Therefore, measurement neutrality, as innocuous as it may appear at first glance, is not so innocuous at all.

This is true, but in section 5.3, Greaves argues that egalitarianism (which is the APP, but phrased in terms of utilities instead of objective probabilities) is *incoherent*, whether or not you accept measurement neutrality, because in a real-world setting where branch-splitting happens through decoherence, there is no well-defined number of branches. So I would say this actually works against Patrick van Esch's case. If his intent is to prove that the APP is a consistent theory that "could have been true" (not just in an idealized model of a measurement/branching, but in messy statistical mechanics), then he needs to address these arguments.

I don't think people like Wallace would dispute that the assumption of measurement neutrality is logically equivalent to the projection postulate itself. The question is whether you can justify measurement neutrality (or some equivalent assumption like equivalence or branching indifference or whatever they were called); for example, by showing that alternatives are incoherent, or require a huge amount of arbitrary input, or correspond to rationality principles that aren't even workable in theory. Wallace has a lot of philosophical discussion in his papers about this; for example, see section 9 in http://users.ox.ac.uk/~mert0130/papers/decprob.pdf .

By the way, Wallace has written some new things since last time (see http://users.ox.ac.uk/~mert0130/evprob.html ); they're partly about Everett and probabilities.
 
  • #23
Ontoplankton said:
This is true, but in section 5.3, Greaves argues that egalitarianism (which is the APP, but phrased in terms of utilities instead of objective probabilities) is *incoherent*, whether or not you accept measurement neutrality, because in a real-world setting where branch-splitting happens through decoherence, there is no well-defined number of branches. So I would say this actually works against Patrick van Esch's case. If his intent is to prove that the APP is a consistent theory that "could have been true" (not just in an idealized model of a measurement/branching, but in messy statistical mechanics), then he needs to address these arguments.

I think you are right that it would be good for Patrick to address this argument. Basically, Greaves has pointed out that there are certain mathematical hurdles that must be overcome if one is to implement the APP. But it is premature imho to jump from this to the conclusion at the beginning of sec 5.3 that "Egalitarianism is not, in fact, a tenable position."

I'd say that the best way to address this particular argument against the APP [that there is no well-defined number of branches] is to point out that the exact same argument could be applied against the Feynman path integral technique. But the FPI works, right? So the APP (= egalitarianism) could perhaps be made to work too, perhaps using a similar strategy.

The FPI assumes an infinite number of paths from source to detector. However, a common approximation technique is to rely instead on a finite number K of "representative" paths, basically a sampling of the infinite "all possible" paths. It turns out that if you take the limit as K approaches infinity, the calculated probabilities remain stable. Is there any reason to suppose that the same strategy cannot be used to calculate probabilities using egalitarianism?
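As a toy stand-in for that sampling idea (purely illustrative, and certainly not a real path integral: here the "actions" of the sampled paths are just drawn from a made-up Gaussian ensemble), one can at least watch the estimated probability stabilize as K grows:

Code:
import numpy as np

rng = np.random.default_rng(42)

def sampled_amplitude(K, mean_action=1.0, spread=0.7):
    """Average phase exp(i*S) over K sampled 'representative paths'."""
    actions = rng.normal(mean_action, spread, size=K)
    return np.mean(np.exp(1j * actions))

exact = np.exp(1j * 1.0 - 0.7 ** 2 / 2)        # closed form for this toy ensemble
for K in (10, 100, 1_000, 10_000, 100_000):
    amp = sampled_amplitude(K)
    print(f"K = {K:6d}   |amp|^2 = {abs(amp)**2:.4f}   (exact {abs(exact)**2:.4f})")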

David
 
  • #24
vanesch said:
So I think I will rethink the way I wrote up my paper in light of the referees' comments - which I now see more as the result of my not having been clear enough about what I wanted to state - and resubmit it somewhere else.

You should perhaps make detailed comments on the Barnum paper and state explicitly how your critique differs from (complements, improves upon ... ) Barnum's.

Here's another idea. One of the strongest arguments against a proof, in general, is to show a counterexample. So you could review all of the proposed variations of the MWI that replace the Born rule with the APP and discuss their strengths and weaknesses.

Of course, the only worthwhile ones to mention would be ones that recover quantum statistics. I know of three such proposed schemes: Graham, Robin Hanson's "mangled worlds," and Weissman. (And my paper-in-progress will hopefully be a fourth :cool: . Also, "many minds" could be based on the APP, I think.) I think that all of them -- with the exception of mine, of course :wink: -- do damage to some other essential ontological aspect of Everett's original proposal. Nevertheless, they do pose a threat to Deutsch's proof. Does anyone know of any other such schemes?

David
 
  • #25
Ontoplankton said:
This is true, but in section 5.3, Greaves argues that egalitarianism (which is the APP, but phrased in terms of utilities instead of objective probabilities) is *incoherent*, whether or not you accept measurement neutrality, because in a real-world setting where branch-splitting happens through decoherence, there is no well-defined number of branches. So I would say this actually works against Patrick van Esch's case.

Thanks a lot for that comment.

I thought about that when I wrote up my paper, and I don't think it is a problem, in the following sense. I'm not discussing the APP in the context you cite, in which one analyses the very messy and complicated state of system + observer + environment, where it is granted that there's no clear indication that we have a finite number of terms in the wavefunction.

I'm arguing about the outcomes of an experiment with a finite number of outcomes, and it is to THESE outcomes that I assign probabilities in two ways: via the Born rule (PP) or via a uniform distribution (APP). So you can see that as a finite number of SUBSPACES which slice up the overall Hilbert space of "system + observer + environment", and it is to these subspaces that we have to assign probabilities. All well-defined measurements have a finite number of outcomes - this is not the same as talking about the number of states the "system + observer + environment" might be in after a decoherence interaction.

It is THIS probability rule that we want to derive from unitary QM (the assignment of probabilities to the eigenspaces corresponding to the *measurement* operator), and not a probability rule for the decohered state's individual terms; after all, it is only the former probability that we can compare with experimental outcomes. I indicate that, given the finite number of eigenspaces, the TWO probability assignments are entirely compatible with unitary QM. The "remnant" of the problem you cite - the instability of the APP with respect to the number of outcomes - is illustrated in the paper, and shows up as the lack of non-contextuality.

If the number of orthogonal decohered states is infinite, that simply means that we cannot assign a probability to EACH INDIVIDUAL STATE under the APP. But the point is: *we don't have to*. We don't have to assign probabilities to these individual states in order to obtain probabilities for the outcomes of measurements. IF we can assign probabilities to them, all the better: this will generate probabilities for the corresponding eigenspaces by summation. But if we can't, there is no problem, BECAUSE ALL THESE STATES ARE OPERATIONALLY INDISTINGUISHABLE (otherwise we would need an infinite amount of information to distinguish them). So probabilities only have to be assigned to operationally distinguishable eigenspaces (distinguished by different measurement results), and there is always a finite number of them - so the APP is well-defined in all cases that have an operational meaning.
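As a small illustration of this assignment (my own sketch with made-up numbers; the eigenspace dimensions just stand in for "how finely decoherence splits each outcome"), both rules give a well-defined probability distribution over the same finite set of outcomes:

Code:
import numpy as np

rng = np.random.default_rng(7)

dim = 6
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# Three measurement outcomes, with eigenspaces of dimension 1, 2 and 3.
basis = np.eye(dim)
eigenspaces = [basis[:, 0:1], basis[:, 1:3], basis[:, 3:6]]

pp = [float(np.linalg.norm(V.conj().T @ psi) ** 2) for V in eigenspaces]  # Born rule
app = [1.0 / len(eigenspaces)] * len(eigenspaces)                          # uniform

print("PP  (Born rule):", [round(p, 3) for p in pp],  " sum =", round(sum(pp), 3))
print("APP (uniform)  :", [round(p, 3) for p in app], " sum =", round(sum(app), 3))
# Both are Kolmogorov probability assignments over the same finite set of outcomes;
# only experiment favors the first over the second.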

Any comments ?
 
  • #26
Hmmm. I'll have to think about this... Doesn't it bring you back to a sort of measurement problem (where you have to define what constitutes a measurement not because you need to know when the wave function collapses, but because you need to know when to assign uniform probabilities)?
 
  • #27
Ontoplankton said:
Hmmm. I'll have to think about this... Doesn't it bring you back to a sort of measurement problem (where you have to define what constitutes a measurement not because you need to know when the wave function collapses, but because you need to know when to assign uniform probabilities)?

Sure, but that is not a "measurement problem". I only wanted to show that, when you have:

1) unitary dynamics
2) a set of "measurement outcomes" which corresponds to a finite set of eigenspaces slicing up hilbert space;

and you want to establish a rule which generates a Kolmogorov set of probabilities on 2) from 1), then (at least) TWO schemes work: the APP and the PP.

In my paper you can see many arguments why the APP gives "totally crazy" results, but nevertheless it is a logically possible rule.

This was, to me, sufficient as a demonstration that 1) ALL BY ITSELF will never be able to single out the PP, because the APP is logically compatible with it.

Because of the "craziness" of the APP, it is very easy to make a very innocent-looking assumption which eliminates it. But the original Everett idea was that NO such extra assumption was needed. Because ANY such assumption is "outside physics" (physics being *entirely* described by 1).

Mind you, in 2), the finiteness of the set of measurement outcomes can be seen in a restricted or in a broad context. For instance, you could consider 2) as "all possible knowledge I can have" - it should still be a finite set of potential data points (if I'm not going to suffer gravitational collapse :-).
 
  • #28
PP, APP, and non-unitarity

I haven't had a chance to read the van Esch paper yet, but from this thread it sounds like all its points are reasonable, whether or not they were anticipated by Barnum, whose paper I also haven't read. They certainly are not points widely acknowledged in the Everett community.
What's puzzling is why the second referee would cite my paper as an argument AGAINST van Esch, since my point was that unitary dynamics don't give the Born rule. It sounds like van Esch is elaborating on all the bizarre things which would arise from the obvious APP rule - things ruled out ONLY by observation, not by the structure of unitary QM. In 1999, I would have agreed that there is no hope that the standard formalism can generate Born probabilities. Thanks to Hanson, there may be a small thread of such hope.
more later
Ok, it's a little later and I've had a chance to read the van Esch paper. It makes precisely the points which are ignored in the standard arguments that the PP is the inevitable implication of QM. All of those arguments sneak the result into the assumptions somewhere. The ideas of this paper have been informally discussed before, and there should be some reference to Ballentine, Graham, etc., who noticed the problem early on, but I think that it's a very timely formal answer to Wallace, Deutsch, Zurek ...
There's a bit of a historical analogy to the arguments of Deutsch et al, which simply assume that our beloved unitary QM couldn't possibly be predicting all those horrible context-dependent unstable probabilities of the APP. Historically, it was appealing to ignore that classical stat mech implied infinite BB radiation via equipartition. Just assuming the result was finite - the sort of assumption that one can easily sneak past any reasonable person - makes it easy to derive the T^4 law. Of course, the origin of QM depended on facing the incompatibility of the assumption that the BB radiation was finite with the structure of classical mechanics, not on papering it over by reading familiar results back into the axioms.
 
  • #29
M. Weissman,

First of all, welcome to PF !

and... thank you for your comments, they seem to address exactly what I wanted to say with my little paper. As I told you by PM, it is nice to have a professional on the board (I only do these things as an amateur).

This encourages me to rewrite my paper (clearly, it is not clear to the reader - that's what I gather from the referees' reports now, which really seem to miss the content).

I expected 3 possibilities (apart from "and here is your ticket to Stockholm :smile: "):

- 1. your argument is very well known, but contains an error.
- 2. your argument is not well known, but obviously wrong.
- 3. your argument is correct, but well-known.

But it seems that nobody saw my argument :cry:
 
  • #30
APP, Graham, CPT

I'm afraid that the people who really are paid to do this wouldn't consider me a professional in this area. My day job involves experimental work on disordered materials. FPL's first response to my submission was something like 'dear sir or madam: your paper or papers has or have been judged unsuitable for refereeing because of being obviously not new and/or meaningless and/or wrong...' It took a big fight and a lot of changes to get it published.
Your paper seemed extremely clear to me. Although the ideas are not new, the careful formal write-up is, and deserves to be published. One of your referees seemed to have a stake in misunderstanding it, since the desire to believe that the current axioms suffice is very strong in some circles. The other ref mostly had useful cosmetic comments, despite their slightly scrambled reference to my paper.
BTW, on some real issues, does anybody understand how Graham (1973) managed to get from APP to standard PP? I just can't follow his argument.
Also, on the discussion w Juan on CPT and the 2nd law: It's possible that there are two separate history-based time arrows, one for QM measurement and another for 2nd law. It would be more natural to tie them together. If some modification to QM is already needed to get the right probabilities, that modification would probably violate CPT and thus provide a free source for 2nd law irreversibility. David Albert (Time and Chance) has discussed that idea in the context of GRW collapse pictures, although for some time I've been pushing it in the modified many worlds context.
 
  • #31
vanesch said:
Well, I finally checked the Barnum paper; it is available here:
http://www.journals.royalsoc.ac.uk/media/n9kmrgwwyr8226vrfl1y/contributions/a/r/x/p/arxp7pkn699n63c9.pdf
I really don't see what the referee meant when he said that we had identical critiques. This is a totally different critique of Deutsch's work in particular, where the authors try to establish that Gleason's theorem ALREADY ESTABLISHED Deutsch's results.
Wallace's response to the Barnum et al. paper can be found in section 6 of the following paper. (Later sections are also relevant.)

http://arxiv.org/abs/quant-ph/0303050

He interprets Barnum et al. as giving three different criticisms of Deutsch's paper, only one of which is the one about Gleason's theorem. One of the others is that Deutsch's conclusions don't follow from his assumptions unless an extra assumption is introduced. It seems to me this is the same criticism as yours, except that you're giving an explicit counterexample rather than just saying it doesn't follow. According to Wallace, though, this criticism doesn't work if you assume a decoherence-based Everett interpretation, because having to justify the measurement model from lower-level physics rather than taking it as basic puts extra constraints on what probabilities you can assign.

It seems to me now that referee-1 got it right. (Though I actually have no real technical competence to comment on all this. :) ) Papers like those of Barnum et al., Wallace, and Greaves already acknowledge that, when working in a measurement model like the one you're using, without making an extra assumption, you can't prove the PP, and alternatives like the APP are possible. The real question is whether it's possible to avoid making that extra assumption while keeping the reductionistic Everett interpretation, i.e. without introducing measurements or experiments as basic elements of your theory. To show that the decision theory program fails, you would need to show in your paper that the arguments made by Wallace, Saunders, Greaves, and so on are wrong. (As I think you've tried to do in this thread, by talking about how it doesn't matter operationally speaking.)
 
  • #32
Measurement Neutrality and Laplace's Principle of Indifference

straycat said:
Basically, Greaves explains that one of the essential assumptions in Deutsch-Wallace decision theory is the postulate of "measurement neutrality," which is "the assumption that a rational agent should be indifferent between any two quantum games that agree on the state |phi> to be measured, measurement operator X and payoff function P, regardless of how X is to be measured on |phi>." afaict, this means that if we think of the measurement process as a "black box," then Deutsch assumes that a rational agent should in principle be indifferent to the details of the innards of this black box.

Hi! I think something like this assumption was probably first made explicit by Wallace. Based on email exchanges between Deutsch and the authors of "Quantum probability from decision theory?", I'd say Deutsch had it in mind as well, although even in that exchange what he had in mind only became gradually apparent, and he never formalized it the way Wallace has. At first I thought Deutsch believed his argument might apply whether one had a many-worlds or a definite-outcomes view of measurements, and that this is why his paper was so unclear on the point. Now I'm not sure.

But anyway, the crucial thing is that the measurement neutrality assumption is a kind of quantum version of Laplace's Principle of Insufficient Reason (PIR). In our paper, "QP from DT?", we argued that Deutsch's implicit assumption was a kind of analogue of the PIR. Measurement neutrality is a more sophisticated one, but an analogue nonetheless. It seems "nonprobabilistic" because it isn't on the face of it about probabilities, whereas Laplace's PIR *is* explicitly about probabilities. But if one accepts a subjective, decision-theoretic view of probabilities (which I have no problem with, in this context), then assumptions about preferences may encode assumptions about probabilities, and I think that's so here. It is simply not a principle of "pure rationality" that whatever differences (physical differences, most likely) there are between two ways of measuring a Hermitian operator, those differences should not affect our preferences between the resulting games. Suppose the differences have no intrinsic value to us: still, we could imagine having different beliefs about the likelihoods of the outcomes given different measurement processes, and thus valuing the games differently. Measurement neutrality rules this out; therefore it has substantive physical content (to the extent that physics is a theory that guides action).

Sure, it might seem crazy to think that the color of the lettering we use on the dials of our measuring device, or whatever, could affect the probabilities. But that it is crazy is part of our phenomenological theory of the world, acquired at least in part through experience and inference, not a pure *principle* of rationality, and it is also supported by arguments concerning the physics of the measuring device. No doubt we can't make do without some such prior predispositions to dismiss such possibilities as highly unlikely. But that doesn't mean invoking them is harmless in an effort to derive the probability rules solely from the assumption that the rest of quantum mechanics is "a fact" (whatever it would mean for the rest of QM to be "a fact" without the probability rules that are an important part of what ties it to the world and gives it content), plus "pure rationality".
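To pin down what "two quantum games that agree on |phi>, X and P but differ in how X is measured" looks like, here is a minimal sketch; the state, the unitary and the payoffs are illustrative choices of mine, not taken from any of the papers. Under the Born rule the two realizations give identical expected payoffs, which is exactly the indifference that measurement neutrality elevates to a postulate; under a realization-sensitive rule like the APP, they need not.

```python
import numpy as np

# Illustrative sketch (state, unitary and payoffs are my own choices, not from the
# papers): two "quantum games" that agree on the state |phi>, the observable X and
# the payoff function P, but realize the measurement of X differently.
phi = np.array([np.sqrt(0.8), np.sqrt(0.2)])   # the state |phi> to be measured
X_eigvecs = np.eye(2)                          # eigenbasis of X (here sigma_z)
payoff = np.array([10.0, -5.0])                # payoff P per outcome

# Game 1: measure X directly in its eigenbasis.
p1 = np.abs(X_eigvecs.conj().T @ phi) ** 2

# Game 2: a different physical realization -- rotate everything by a unitary U and
# measure the rotated observable U X U^dagger on the rotated state U|phi>.
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # a Hadamard, say
p2 = np.abs((U @ X_eigvecs).conj().T @ (U @ phi)) ** 2

# Under the Born rule the two games have identical expected payoffs, so a Born-rule
# agent is automatically indifferent between them; measurement neutrality demands
# that indifference *before* any probability rule is fixed, which is where the
# substantive physical content comes in.
print(p1 @ payoff, p2 @ payoff)   # both ~7.0
```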

Maybe I should back off a little on that last parenthetical remark: there are things other than the probability rule that put QM in contact with the world. In fact, QM arrived a little earlier than the Born rule, as a theory explaining, among other things, some atomic spectra, by determining energies (of e.g. the hydrogen atom).

Nevertheless, I tend to think that Many-Worlds (despite my having spent a lot of effort in my life playing devil's advocate for it) gets things backwards: stuff happens, we do scientific inference (i.e. some variety of roughly Bayesian inference, in a very general, not necessarily conscious, sense) about the stuff that happens, the definite results we experience for measurements, and we come up with a theory that systematizes the resulting beliefs (as evidenced by our willingness to bet, in a very generalized sense, on the outcomes of experiments and such). This systematization of our betting behavior when faced with experiments can be represented in terms of probabilities given by the Born rule. Rederiving the Born probabilities from a part of the formalism that was cooked up, and especially further developed and held onto, in part to give a good representation of just these probabilities, seems somewhat backwards. Without the probabilities, and the terrific guidance they give to our actions, who would have bothered with quantum mechanics anyway? I guess one can say that the rederivation is a sophisticated attempt to keep the probabilities and solve other problems that came along with quantum mechanics. But it still raises, for me, a serious problem: what then of the formal and informal scientific reasoning, based on measurements having definite results, that brought us to the QM formalism and the Born rule in the first place? Must we reconstruct it all in terms of Everettian branchings, with never a definite result?

Patrick's detailed exploration of an alternative probability rule (which happens to be a rule we devoted two sentences to on page 1180 of our paper, noting that it was contextual but not obviously ruled out by Deutsch's other assumptions) is quite worthwhile, I think. I have only just read it, a couple of times through, but it looks basically right to me. FoP might be a good place for it. I think maybe Wallace, or somebody else (there is related work by Simon Saunders...) devoted some effort to ruling it out explicitly (I'll post it if I find a reference)... maybe just through establishing noncontextuality given certain assumptions. But any such effort is likely to be based on measurement neutrality or something similar.

Cheers!

Howard Barnum
 
  • #33
Quote (from lalbatros, quoted by vanesch):
For many people, the interaction with a 'classical' or 'macroscopic' system is all that is needed to derive the PP. I think this is the most probable explanation for the PP. Landau considered this so obvious that it appears in the first chapters of his QM book.

Quote from vanesch (replying to lalbatros)
This is the standard "explanation". But it is *postulated* and not *derived* from unitary QM. What qualifies a system as "macroscopic" and "classical" (without circular reasoning?), and why shouldn't it obey standard quantum theory?
Or is there an upper limit to the number of factors in the product Hilbert space (the number of particles) beyond which the exponential of a Hermitian operator suddenly stops being unitary?

My comments:

It's not that a large isolated system behaves nonunitarily, but that a smaller system interacting with a larger one may undergo a nonunitary effective evolution. That's one way of understanding entropy-increasing evolutions, like the equilibration of a system with a larger "heat bath." Of course, it's true that in computing things like the entropy of the open system, one is implicitly using the Born probabilities, e.g. via the reduced density matrix (whose eigenvalues' interpretation as probabilities relies on the Born rule at the level of the combined system and reservoir).
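To make that last point concrete, here is a minimal sketch (the two-qubit state is an illustrative choice of mine, not taken from the thread or the papers) of how the reduced density matrix of a small system entangled with a "bath" is obtained by a partial trace, and of the fact that reading its eigenvalues as probabilities is already an application of the Born rule to the combined system:

```python
import numpy as np

# Minimal sketch (the two-qubit state a|00> + b|11> is an illustrative choice of
# mine): a small "system" entangled with a "bath". Tracing out the bath gives the
# system's reduced density matrix; interpreting its eigenvalues as probabilities is
# itself an application of the Born rule to system + bath together.
a, b = np.sqrt(0.7), np.sqrt(0.3)
psi = a * np.kron([1, 0], [1, 0]) + b * np.kron([0, 1], [0, 1])   # a|00> + b|11>

rho = np.outer(psi, psi.conj())                              # pure state of system + bath
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)    # partial trace over the bath

print(np.linalg.eigvalsh(rho_sys))   # ~[0.3, 0.7], i.e. the Born weights |b|^2 and |a|^2
```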

Howard
 
  • #34
mbweissman said:
BTW, on some real issues, does anybody understand how Graham (1973) managed to get from APP to standard PP? I just can't follow his argument.

I have taken several stabs at understanding Graham's argument, which I recall relies in some way on a "two-step" process of measurement, but I never could piece together how it worked. The first part of his paper is actually pretty good, I think, in which he argues against the PP -- or rather, I suppose, FOR the APP. I try to recapitulate this argument in the first part of my paper. BTW, I agree with Dr. Weissman's statement in the abstract of his paper that counting outcomes, i.e. the APP, is "the obvious algorithm for generating probabilities." To my mind, the APP is the only projection postulate that could legitimately be justified by a symmetry argument. In fact, I am absolutely flabbergasted that there are so few attempts in the literature to make the APP "work." The only published attempts that I know of are Weissman's and Hanson's -- perhaps many minds might count as well. Given the obviousness of the APP, why has it been paid so little attention? Is it the non-contextuality that Patrick speaks of?
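For readers who haven't seen the APP spelled out, here is a minimal sketch contrasting the two rules for N repeated measurements of the same qubit (the numbers are illustrative choices of mine). The standard PP weights each of the 2^N branches by its squared amplitude, so the expected outcome frequency tracks the Born weight; the APP counts every branch equally, so the amplitudes drop out and a "typical" branch sees 50/50 frequencies.

```python
from math import comb

# Illustrative numbers (mine, not from the papers): a qubit measured N times,
# with Born weight p for outcome "0" on each run; this produces 2**N branches.
p = 0.9
N = 20

# Standard PP / Born rule: a branch with k zeros carries weight p**k * (1-p)**(N-k),
# so the branch-weighted expected frequency of "0" is p.
born_mean = sum(comb(N, k) * p**k * (1 - p)**(N - k) * (k / N) for k in range(N + 1))

# APP (branch counting): all 2**N branches count equally, so the expected
# frequency of "0" in a randomly chosen branch is 1/2, whatever p was.
app_mean = sum(comb(N, k) * 2.0**(-N) * (k / N) for k in range(N + 1))

print(born_mean)  # ~0.9
print(app_mean)   # 0.5
```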

David
 
  • #35
hbarnum said:
Nevertheless, I tend to think that Many-Worlds (despite my having spent a lot of effort in my life playing devil's advocate for it) gets things backwards: stuff happens, we do scientific inference (i.e. some variety of roughly Bayesian inference, in a very general, not necessarily conscious, sense) about the stuff that happens, the definite results we experience for measurements, and we come up with a theory that systematizes the resulting beliefs (as evidenced by our willingness to bet, in a very generalized sense, on the outcomes of experiments and such). This systematization of our betting behavior when faced with experiments can be represented in terms of probabilities given by the Born rule. Rederiving the Born probabilities from a part of the formalism that was cooked up, and especially further developed and held onto, in part to give a good representation of just these probabilities, seems somewhat backwards.

But doesn't science often progress that way? We make observations, we make a theory to describe the observations, and then we come up with a way to derive the theory from something more fundamental and completely different. Example: we see things in motion, we come up with Newtonian mechanics, and then we find that we can derive Newton's laws from general relativity. Why not try to do with the Born rule what we did with Newton's laws -- derive it from something deeper?

David
 