Local realism ruled out? (was: Photon entanglement and )

In summary, the conversation discussed starting a new thread on a physics forum to present evidence for a specific perspective. The topic concerned the Bell theorem and its potential flaws on both the theoretical and experimental levels. The original poster mentioned that their previous posts on this topic had been criticized, but that their factual basis had not been challenged until recently. They also noted that the measurement problem in quantum mechanics is a well-known issue and cited a paper that they believed supports the idea that local realism has not been ruled out by existing experiments. The other participant disagreed with this reading of the paper and provided additional quotes from experts in the field. Ultimately, the conversation ended with both parties holding differing views.
  • #631
Ruta,

first off, thank you very much for agreeing with these points in principle, it's quite a relief after I have been trying hard (and in vain) to explain those points to otherwise knowledgeable people.

RUTA said:
1) Yes, but the loopholes that exist, if realized in Nature, would mean Nature is extremely contrived -- a giant conspiracy to "trick" us. No one that I know in the foundations community believes this is the case.

I am sure you can separate facts from opinions. In this case you are talking about opinions. As I said, this matter cannot be resolved by popular vote.

RUTA said:
2) Yes, but the measurement problem is a problem for QM as a whole and does not allow for the selective dismissal of any particular QM result without impugning all of QM. And, QM works very well even though it's not a rigorously self-consistent formal system (same can be said of QFT).

You just cannot reasonably demand that I embrace mutually contradicting postulates.
 
  • #632
akhmeteli said:
I am sure you can separate facts from opinions.

You apparently cannot, as you put forth your opinions as fact. Further, you apparently cannot tell the difference between ad hoc speculation and evidence-based opinions. To reasonable people, there is a difference.

There is a huge difference between your speculation on loopholes (notice that you cannot model the behavior of these, despite your unsupported claims) and RUTA's opinions (which he can model nicely using both standard and original science).
 
  • #633
akhmeteli said:
I insist that I offered an LR model having the same unitary evolution as a quantum field theory (QFT). It is certainly important how well or badly this model describes experimental results, but I think the model is important anyway, because it shows how a seemingly nonlocal QFT may be just a disguise for an LR model, so even if “my” LR is not very good at describing experimental results, some of its modifications may fare much better.

So basically, this version is useless as is (since it cannot predict anything new and cannot explain existing results well); but you want us to accept that a future version might be valuable. That may be reasonable; I can see the concept, and it is certainly a starting point for some good ideas. But it is a far cry from there to saying your point is really made. Santos, Hess and many others have gone down similar paths with similar arguments for years. Where did they end up?

It is clear to a lot of people that it is possible to construct models that emulate *some* of the predictions of QM in a local realistic manner. Cleaning up one tiny item (which I guess is your perceived inconsistency in QM) but breaking 2 more major ones (such as HUP, entanglement) is not a profitable start, in my opinion.

Please keep in mind that you should not expect to post speculative ideas in this forum with impunity. This forum is for generally accepted science.
 
  • #634
akhmeteli said:
Then maybe you are drawing a distinction that is too fine for me:-). Indeed, your rephrasing of their phrase can be successfully applied to my statement about Euclidean geometry:-) Until you have an actual geometry in your possession, you can also argue that a theory “making use of both loopholes would be very contrived-looking”.
Well, here I suppose I must appeal to mathematical and physical intuitions--I don't in fact think it's plausible that a smart mathematician living in the days before Euclidean and non-Euclidean geometry would believe that the fact that a quadrangle on a plane, or a triangle on a sphere, has angles adding up to something other than 180° should imply that only a "contrived" theory of geometry would agree with the conjecture that triangles in a plane have angles that sum to 180°. In contrast, I think lots of very smart physicists would agree with the intuition that a local realist theory consistent with all past experiments, but which predicted no Bell inequality violation in ideal loophole-free experiments, would have to be rather "contrived". Perhaps one reason for this is that we know what is required to exploit each loophole individually--exploiting the detector efficiency loophole requires that in some pairs of particles, one of the pair has a hidden variable that makes it impossible to detect (see billschnieder's example in posts #113 and #115 on this thread), whereas exploiting the locality loophole requires that whichever member of the pair is detected first will send out some sort of signal containing information about what detector setting was used, a signal which causes the other particle to change its own hidden variables in just the right way as to give statistics that agree with QM predictions. Does your model contain both such features?
JesseM said:
You addressed it by suggesting your own model was non-contrived, but didn't give a clear answer to my question about whether it can actually give statistical predictions for experiments performed so far, like the Innsbruck experiment and the NIST experiment
akhmeteli said:
I did not give you a clear answer because I don’t have it and don’t know how to obtain it within a reasonable time frame.
OK, but then when I said "still I think most experts would agree you'd need a very contrived local realist model to get correct predictions (agreeing with those of QM) for the experiments that have already been performed, but which would fail to violate Bell inequalities (in contradiction with QM) in an ideal experiment", why did you respond by saying (in post #579) "I agree, "most experts would agree" on that. But what conclusions am I supposed to draw from that? That the model I offer is "very contrived"? After all, the question of whether your model is "contrived" is only relevant to my own statement if in fact your model can "get correct predictions ... for those experiments that have already been performed". If you don't yet know whether your model does this, then you can't offer it as a counterexample to the claim that any model that did do it would have to be very contrived.
akhmeteli said:
You want me to emulate the above experiments in “my” model.
Yes, that would be needed to show that you have a model that's a counterexample to the "contrived" claim. And even if you can't yet apply your model to existing experiments in all their precise details, you could at least start by seeing what it predicts about some simplified Aspect-type experiment that closes the locality loophole but not the detector efficiency loophole, and another simplified experiment that closes the efficiency loophole but not the locality loophole, and see if it predicts Bell inequality violations here. As an even more basic step, you could just explain whether it has the two features I noted above: 1) hidden variables which ensure that some particles aren't detected, no matter how good the detectors, and 2) some sort of signal from the first measured particle that contains information about the detector setting, and a way for the other particle to alter its own hidden variables in response to this signal.
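To make feature 1) concrete, here is a toy local-hidden-variable sketch (entirely my own construction for illustration, not akhmeteli's model or any published one): a shared hidden variable determines both outcomes deterministically, but a detector only "fires" when the hidden variable is favorable for its setting. The detected subensemble then shows a stronger correlation than the full ensemble, which is exactly the mechanism behind the detector efficiency loophole.

```python
import math

def sign(x):
    return 1 if x >= 0 else -1

def correlations(a, b, threshold, n=200_000):
    """Toy LHV model: shared hidden variable lam is uniform on [0, 2*pi).
    Alice outputs sign(cos(lam - a)), Bob outputs -sign(cos(lam - b)).
    A detector fires only if |cos(lam - setting)| > threshold, so some
    hidden states are undetectable at a given setting (feature 1 above)."""
    s_all = s_det = n_det = 0
    for i in range(n):
        lam = 2 * math.pi * i / n           # deterministic grid over the hidden variable
        A, B = sign(math.cos(lam - a)), -sign(math.cos(lam - b))
        s_all += A * B
        if abs(math.cos(lam - a)) > threshold and abs(math.cos(lam - b)) > threshold:
            s_det += A * B
            n_det += 1
    return s_all / n, s_det / n_det         # correlation over all pairs vs detected pairs

E_all, E_det = correlations(a=0.0, b=0.3, threshold=0.5)
print(E_all, E_det)   # the detected subensemble is more strongly (anti)correlated
```

This does not by itself violate a Bell inequality; it only shows how setting-dependent non-detection skews the coincidence statistics, which is why the fair-sampling assumption matters.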
akhmeteli said:
Therefore, so far my reasoning is different. Let me ask you this: if I offered a model that would have the same unitary evolution as quantum electrodynamics, not just “a” quantum field theory, would that suggest that the actual results of past experiments may be successfully emulated in this model? I’ll proceed (or not, depending on your answer) when I have your answer.
Unitary evolution only predicts complex amplitudes, not real-valued probabilities. If you have some model that predicts actual statistics in a local way, and whose predictions agree with those of unitary evolution + the Born rule, then say so--but of course unitary evolution + the Born rule predicts violations of Bell inequalities even in loophole-free experiments, and you said earlier that you weren't claiming your model could give BI violations even in loophole-free experiments. So your claims about your model are rather confusing to say the least.
akhmeteli said:
As I said, the model gives predictions for probabilities the same way Bohmian mechanics does that
When did you say that?
akhmeteli said:
– you yourself described the relevant procedure.
I don't remember describing a procedure for getting probabilities in Bohmian mechanics, what post are you talking about? Bohmian mechanics treats the position variable as special, its equations saying that particles have a well-defined position at all times, and measurement results all depend on position in a fairly straightforward way (for example, spin measurements can be understood in terms of whether a particle is deflected to a higher position or a lower position by a Stern-Gerlach apparatus). The equations for particle behavior are deterministic, but for every initial quantum state Bohmian mechanics posits an ensemble of possible hidden-variable states compatible with that measured quantum state, so probabilities are derived by assuming each hidden state in the ensemble is equally probable (this is analogous to classical statistical mechanics, where we consider the set of possible 'microstates' compatible with a given observed 'macrostate' and treat each microstate as equally probable). Does all of this also describe how predictions about probabilities are derived in your model? If not, where does the procedure in your model differ?
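The ensemble logic described above (deterministic dynamics per member, probabilities from equal weighting over hidden states) can be sketched in a deliberately minimal toy model, which is my own illustration and not Bohmian mechanics proper:

```python
import random

def measure_spin(alpha2, lam):
    """Deterministic outcome given hidden variable lam in [0, 1):
    'up' iff lam falls below the Born weight |alpha|^2."""
    return +1 if lam < alpha2 else -1

# Ensemble of hidden states: uniformly distributed, each equally probable,
# analogous to the equilibrium ensemble posited in Bohmian mechanics.
random.seed(0)
alpha2 = 0.3                      # Born-rule probability of 'up'
N = 100_000
ups = sum(measure_spin(alpha2, random.random()) == +1 for _ in range(N))
print(ups / N)                    # ~0.3: Born statistics from an equal-weight ensemble
```

The point of the toy is only that no randomness lives in the dynamics; all of it comes from ignorance of which hidden state the system actually occupies.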
akhmeteli said:
So let me ask you another question: do you think that Bohmian mechanics offers expressions for probabilities? If yes, then how “my” model is different from Bohmian mechanics that it cannot give expressions for probabilities?
I'll answer that question based on your answer to my questions above.
JesseM said:
(you may be able to derive probabilities from amplitudes using many-worlds type arguments, but as I said part of the meaning of 'local realism' is that each measurement yields a unique outcome)
akhmeteli said:
Again, as I said, “local realism” does not necessarily require that “each measurement yields a unique outcome” (see also below), and I don’t need any “many-worlds type arguments”.
I think you may be misunderstanding what I mean by "unique outcome". Suppose the experimenter has decided that if he sees the result "spin-up" on a certain measurement he will kill himself, but if he sees the result "spin-down" he will not. Are you saying that at some specific time shortly after the experiment, there may not be a unique truth about whether the experimenter is alive or dead at that time? If you do think there should be a unique truth, then that implies you do think that "each measurement yields a unique outcome" in the sense I meant. If you don't think there is a unique truth, then isn't this by definition a "many-worlds type argument", since you are positing multiple "versions" of the same experimenter?
JesseM said:
Suppose we do a Wigner's friend type thought-experiment where we imagine a small quantum system that's first measured by an experimenter in an isolated box, and from our point of view this just causes the experimenter to become entangled with the system rather than any collapse occurring. Then we open the box and measure both the system and the record of the previous measurement taken by the experimenter who was inside, and we model this second measurement as collapsing the wavefunction. If the two measurements on the small system were of a type that according to the projection postulate should yield a time-independent eigenstate, are you claiming that in this situation where we model the first measurement as just creating entanglement rather than collapsing the wavefunction, there is some nonzero possibility that the second measurement will find that the record of the first measurement will be of a different state than the one we find on the second measurement? I'm not sure but I don't think that would be the case--even if we assume unitary evolution, as long as there is some record of previous measurements then the statistics seen when comparing the records to the current measurement should be the same as the statistics you'd have if you assumed the earlier measurements (the ones which resulted in the records) collapsed the wavefunction of the system being measured according to the projection postulate.
akhmeteli said:
Sorry, JesseM, I cannot accept this argument. The reason is as follows. If you take unitary evolution seriously (and I suspect you do), then you may agree that unitary evolution does not allow irreversibility, so, strictly speaking, no “record” can be permanent, so a magnetic domain on a hard disk can flip, furthermore, ink in a lab log can disappear, however crazy that may sound.
I agree, but I think you misunderstand my point. Any comparison of the predictions of the "standard pragmatic recipe" with another interpretation like the MWI's endless unitary evolution must be done at some particular time--what happens in the future of that time doesn't affect the comparison! My point is that if we consider any series of experiments done in some finite time window ending at time T1, and at T1 we look at all records existing at that time in order to find the statistics, then both of the following two procedures should yield the same predictions about these statistics:

1) Assume that unitary evolution applied until the very end of the window, so any measurements before T1 simply created entanglement with no "wavefunction collapse", then take the quantum state at the very end and use the Born rule to see what statistics will be expected for all records at that time

2) Assume that for each measurement that left an (error-free) record which survived until T1, that measurement did collapse the wavefunction according to the projection postulate, with unitary evolution holding in between each collapse, and see what predictions we get about the statistics at the end of this series of collapses-with-unitary-evolution-in-between.

Would you agree the predicted statistics would be the same regardless of which of these procedures we use? If you do agree, then I'd say that means the standard pragmatic recipe involving the projection postulate should work just fine for any of the types of experiments physicists typically do, including Aspect-type experiments. The only time the projection postulate may give incorrect statistical predictions about observations is if you treat some measurement as inducing a "collapse" even though the information about that measurement was later "erased" in a quantum sense (not just burning the records or something, which might make the information impossible to recover in practice but not necessarily in principle), but in any case the rules for using the projection postulate are not really spelled out and most physicists would understand that it wouldn't be appropriate in such a case.
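The equivalence of procedures 1) and 2) can be checked numerically in the simplest possible case (a single qubit whose first "measurement" is recorded by an ancilla qubit). This is my own illustrative sketch of the general point, not a model of the Aspect-type experiments discussed in the thread:

```python
import numpy as np

psi = np.array([np.sqrt(0.3), np.sqrt(0.7)])     # system qubit, P(0) = 0.3

# Procedure 1: no collapse -- the 'measurement' merely entangles the system
# with a record qubit via CNOT; the Born rule is applied only at the end (T1).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
joint = CNOT @ np.kron(psi, np.array([1.0, 0.0]))  # record qubit starts in |0>
p1 = np.abs(joint)**2        # probabilities over |system,record> = 00, 01, 10, 11

# Procedure 2: the first measurement collapses the wavefunction (projection
# postulate), so the surviving record matches the collapsed state with certainty.
p_up = abs(psi[0])**2
p2 = np.array([p_up, 0.0, 0.0, 1 - p_up])          # (outcome, record) statistics

print(np.allclose(p1, p2))   # True: identical statistics for all records at T1
```

In quantum computing this same fact is known as the deferred measurement principle: moving a measurement to the end of a circuit, with its result carried by an ancilla, leaves all final statistics unchanged.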
akhmeteli said:
If you challenge that, you challenge unitary evolution, and if you challenge unitary evolution, there’s little left of quantum theory. Furthermore, in our previous discussion, I argued that even death (we were talking about Schroedinger’s cat), strictly speaking, cannot be permanent because of unitary evolution and the quantum recurrence theorem.
Quantum recurrence isn't really relevant, the question is just whether there was a unique truth about whether the cat was alive or dead at some specific time, not whether the cat may reappear in the distant future. As long as there is some record of whether the cat was alive or dead at time T it's fine for us to say there was a definite truth (relative to our 'world' at least), but if the records are thoroughly erased we can't say this.
akhmeteli said:
It is not so important for this thread how the “pragmatic recipe” is used in general, it is important how the projection postulate is used in the proof of the Bell theorem: it is supposed that as soon as you measure the spin projection of one particle, the spin projection of the other particle becomes definite immediately, according to the projection postulate. So the projection postulate is not "only" used here “at the very end of the complete experiment”, so you have highlighted an important point.
Well, see my point about the agreement in statistical predictions between method 1) and 2) above.
 
  • #635
DrChinese said:
It is clear to a lot of people that it is possible to construct models that emulate *some* of the predictions of QM in a local realistic manner. Cleaning up one tiny item (which I guess is your perceived inconsistency in QM) but breaking 2 more major ones (such as HUP, entanglement) is not a profitable start, in my opinion.

I cannot address all your comments right now, but why do you think I am breaking HUP and entanglement? HUP is valid for scalar electrodynamics, and the projections of the generalized coherent states on (say) two-particle subspace of the Fock space are entangled states, so your statement is at least not obvious.
 
  • #636
akhmeteli said:
I cannot address all your comments right now, but why do you think I am breaking HUP and entanglement? HUP is valid for scalar electrodynamics, and the projections of the generalized coherent states on (say) two-particle subspace of the Fock space are entangled states, so your statement is at least not obvious.

That is a reasonable comment.

1. I am guessing that for you, entangled particles have states in common due to their earlier interaction. Further, that entangled particles are in fact discrete and are not in communication with each other in any ongoing manner. And yet, it is possible to entangle particles that have never existed in a common light cone. My point is that this won't go hand in hand with any local realistic view.

2. EPR argued that the HUP could be beaten with entangled particles. You could learn Alice's position and Bob's momentum. And yet, a subsequent observation of Alice's momentum cannot be predicted using Bob's value. (Of course this applies to all non-commuting pairs, including spin.) So EPR is wrong in that regard. That implies that the reality of Alice is somehow affected by the nature of the observation of Bob. I assume you deny this.
 
  • #637
RUTA said:
I'm willing to spend time trying to figure out what you're saying unless it involves Many Worlds. The reason I dismiss Many Worlds is that if it's true, there's no way to do science. That is, if all possible outcomes are always realized, there are universes in which the participants don't get the right statistics, i.e., those that are dictating the split rates. And, there's no way any participant in any of the splits can know whether his results are the "correct" results or not. Therefore, you can't do science.
This seems more like a philosophical objection than a scientific one. Besides, according to the frequentist view of probability, it is always possible for the statistics seen on a finite number of trials to differ from the "true" probabilities that would obtain in the limit of an infinite number of trials (which are, if QM is correct, the probabilities given by applying the Born rule to the state of the wavefunction at the time of measurement), so the problem you point to isn't specific to a many-worlds framework. For example, if I run an experiment with 100 trials to collect statistics, if we looked at all trials of an experiment of this type that will ever be performed in human history, the number might be millions or billions, which means there will be a few cases where experimenters did a run of 100 or more trials and got statistics which differed significantly from the "correct" ones--how do I know my run wasn't one of those cases? The problem is even worse if we assume the universe is spatially infinite (as many cosmological models suppose), in which case it seems reasonable to postulate an infinite number of planets where intelligent life arises and performs the same sort of experiment--in this case even if we consider every trial of this type of experiment that has been done in the history of our planet, there should be some (very rare) civilizations where the statistics in every trial of the same type of experiment are badly off from the "correct" ones due to random statistical fluctuations, how can we know that we don't happen to be one of these? (philosophically I think the solution lies in adopting something along the lines of the self-sampling assumption) Do you think that the mere assumption of an infinite universe with an infinite number of civilizations makes it impossible to "do science"? If not, how is the many-worlds interpretation different?
 
  • #638
DrChinese said:
And yet, it is possible to entangle particles that have never existed in a common light cone. My point is that won't go hand in hand with any local realistic view.
Just curious, how would this work?
 
  • #639
Me too! I’m looking for it on PF but can’t find it!?
 
  • #640
akhmeteli said:
but let me ask you, DevilsAvocado, what is your personal opinion?


Okay, you asked for it. But first let’s make it perfectly clear: I’m only a layman. I trust people who are smarter than me 99% of the time. The last 1% is reserved for human errors; nobody is perfect, and even Einstein made mistakes. When it comes to the scientific community (http://en.wikipedia.org/wiki/Scientific_community), these numbers naturally diverge even more.

My personal advice to an independent researcher:
1) Question your own work more than others, every day, especially if you are working alone.

2) Write down this quote and read it at least once every day:
"One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision." -- Bertrand Russell

3) Make sure you have read and understood every word of the article http://en.wikipedia.org/wiki/Scientific_method, and especially understand the four essential elements:
  • Characterizations
  • Hypotheses
  • Predictions
  • Experiments

I’m not saying this to be rude, just to tell you the truth – it looks like you have to work very hard on every piece of advice.

Now, what’s my personal opinion on EPR-Bell experiments and loopholes? Well, I think you are presenting a terribly biased picture of the situation. You want us to believe that current experts in EPR-Bell experiments have the same bizarre valuation of their experiments as you have. Namely, that every performed EPR-Bell experiment so far is worth nothing?? Zero, zip, nada, zilch, 0!? :eek:

By your logic, Anton Zeilinger goes to work every day and starts working on new experiments which he already knows are not going to prove anything, once again, year after year...?:bugeye:?

This tells me that either your logic or Anton Zeilinger’s logic is extremely obtuse. And I already know where to place my bet...

Please, read http://en.wikipedia.org/wiki/Dunning–Kruger_effect and reevaluate your conclusion.

You are also trying to apply this faulty logic on RUTA:
akhmeteli said:
Again, Ruta is no fan of local realism either, but he also admits that there are no such experiments.

Yes, RUTA is an honest scientist and he would never lie and say that a 100% loophole-free Bell experiment has been performed, when it hasn’t yet.

But where do you see RUTA saying that the Bell experiments performed so far are worth absolutely nothing, nil?? Your twist is nothing but a scam:
RUTA said:
Given the preponderance of experimental evidence and the highly contrived nature by which loopholes must exist to explain away violations of Bell inequalities, the foundations community has long ago abandoned any attempt to save local realism. But, you're right, there are no truly "loophole free" experiments, so die hard local realists can cling to hope.
RUTA said:
1) Yes, but the loopholes that exist, if realized in Nature, would mean Nature is extremely contrived -- a giant conspiracy to "trick" us. No one that I know in the foundations community believes this is the case.


This is of course exactly the same standpoint that Zeilinger et al. have, which you are quoting to "prove" something completely different!

These are honest scientists whom you are exploiting in a dishonest way to "prove" the opposite. What’s your excuse??

I can guarantee you that RUTA, Zeilinger and any other real scientist in the community all agree that the EPR-Bell experiments performed so far have proven with 99.99% certainty that all local realistic theories are doomed. But they are fair, and will never lie and say 100% until it is 100%.

You are exploiting this fact in a very deceptive way, claiming that they are saying there is 0% proof that local realistic theories are wrong.

And then comes the "Grand Finale", where you use a falsification of Anton Zeilinger’s standpoint as the "foundation" for this personal cranky statement:
"there are some reasons to believe these inequalities cannot be violated either in experiments or in quantum theory, EVER"


Outrageous :mad:
 
  • #641
JesseM said:
This seems more like a philosophical objection than a scientific one. Besides, according to the frequentist view of probability, it is always possible for the statistics seen on a finite number of trials to differ from the "true" probabilities that would obtain in the limit of an infinite number of trials (which are, if QM is correct, the probabilities given by applying the Born rule to the state of the wavefunction at the time of measurement), so the problem you point to isn't specific to a many-worlds framework. For example, if I run an experiment with 100 trials to collect statistics, if we looked at all trials of an experiment of this type that will ever be performed in human history, the number might be millions or billions, which means there will be a few cases where experimenters did a run of 100 or more trials and got statistics which differed significantly from the "correct" ones--how do I know my run wasn't one of those cases? The problem is even worse if we assume the universe is spatially infinite (as many cosmological models suppose), in which case it seems reasonable to postulate an infinite number of planets where intelligent life arises and performs the same sort of experiment--in this case even if we consider every trial of this type of experiment that has been done in the history of our planet, there should be some (very rare) civilizations where the statistics in every trial of the same type of experiment are badly off from the "correct" ones due to random statistical fluctuations, how can we know that we don't happen to be one of these? (philosophically I think the solution lies in adopting something along the lines of the self-sampling assumption) Do you think that the mere assumption of an infinite universe with an infinite number of civilizations makes it impossible to "do science"? If not, how is the many-worlds interpretation different?

In the Single World, the predicted distribution is what each experimentalist should find and, indeed, our QM predictions match said distributions. In the Single World, a scientifically predicted distribution that didn't match experimentally obtained results would not be accepted. No scientist would say, "Hey, maybe we're just in that weird spot in an infinite universe?" No way, the theory is toast.

But, in Many Worlds, you're saying any unobserved outcomes in our universe are observed in other universes, so you automatically create aberrant distributions. In Many Worlds, unrealized outcomes aren't mere counterfactuals, they're instantiated. So, if you REALLY believe in Many Worlds, the best you can do is believe we're in that special universe where the REAL QM distributions obtain. But, you'd have to admit that there's no way to know.

Why would any scientist buy into a philosophy like that?
 
  • #642
RUTA said:
You can prevent or destroy entangled states very easily -- making and keeping them entangled is the difficult part. There is no getting around violations of Bell inequalities by entangled states in certain situations unless you destroy the situations, which is, again, easy to do.
The point was not that entangled states can be destroyed. The point is what you get after you destroy the entangled state in a certain way. And it is not a complete absence of any correlation but rather a purely classical correlation that obeys local realism (not the QM prediction à la Bell).

Your point was that it would be contrived to assume that the entangled state would disappear as we extend the inefficient-detection case toward efficient detection.
I gave you an example of how this can happen in quite an elegant way.

RUTA said:
You used the phrase "in the case of measurement." That is the problem, we don't have a definition for what constitutes a "measurement." We know it when we see it, so we know how to use QM, that's not the problem.
Sorry, poor formulation. Let me rewrite it.
If all particles in the ensemble are preserved, that is unitary evolution.
If the ensemble is reduced to a subensemble, that is measurement or decoherence (depending on the analysis performed by the experimenter).
 
  • #643
akhmeteli said:
If "Unitary evolution and projection postulate are not contradicting", then the results for the subensemble should not contradict the results for the ensemble, however, they do contradict, as, unlike unitary evolution, the projection postulate destroys superposition and introduces irreversibility.
You consider the ensemble a statistical ensemble of completely independent members, where each member possesses all the properties of the ensemble as a whole, right?
Otherwise I do not understand how you can justify your statement.
 
  • #644
RUTA said:
Why would any scientist buy into a philosophy like that?

RUTA, since no one has shown me a "postcard" from one of the other myriads of MWI worlds, I’m on "your side" – but I think the answer is: Yes

And it’s not a bunch of unknown geniuses in Mongolia – it’s http://en.wikipedia.org/wiki/Stephen_Hawking !

This makes me wonder if (maybe) I could be wrong... :smile:
 
  • #645
DevilsAvocado said:
RUTA, since no one has shown me a "postcard" from one of the other myriads of MWI worlds, I’m on "your side" – but I think the answer is: Yes

And it’s not a bunch of unknown geniuses in Mongolia – it’s http://en.wikipedia.org/wiki/Stephen_Hawking !

This makes me wonder if (maybe) I could be wrong... :smile:

My question was rhetorical, of course. Most physicists subscribe to Mermin's "Shut up and calculate!", but of those (few) physicists who care about foundational issues, most subscribe to some variant of Many Worlds (no-collapse models). The reason is simple -- they're more concerned with issues of formalism, and no-collapse models solve the measurement problem.
 
  • #646
zonde said:
Sorry, poor formulation. Let me rewrite it.
If all particles in the ensemble are preserved, that is unitary evolution.
If the ensemble is reduced to a subensemble, that is measurement or decoherence (depending on the analysis performed by the experimenter).

When you understand the measurement problem, come back and we'll talk about it.
 
  • #647
RUTA said:
When you understand the measurement problem, come back and we'll talk about it.

Could this maybe be helpful to zonde?

http://upload.wikimedia.org/wikipedia/en/thumb/b/b0/Observer-observed.gif/350px-Observer-observed.gif
Observer O measures the state of the quantum system S

:rolleyes:
 
  • #648
RUTA said:
My question was rhetorical, of course.

Okidoki, thanks.
 
  • #649
Dmitry67 said:
MWI has a problem with the Born rule.
It is not clear why, while ALL the weird worlds exist, the ones with low 'intensity' are somehow less important.

JesseM said:
... For example, if I run an experiment with 100 trials to collect statistics, if we looked at all trials of an experiment of this type that will ever be performed in human history, the number might be millions or billions, which means there will be a few cases where experimenters did a run of 100 or more trials and got statistics which differed significantly from the "correct" ones--how do I know my run wasn't one of those cases?


The problem I have with MWI is this: Yes, most of the time we will of course see the "correct" statistics. But to me it’s not clear how the "weird stuff" is always split into the few "weird worlds". This "weird stuff" should be "distributed" evenly among all worlds... if we stick to QM probability.

Then we should see some really unbelievable and crazy stuff now and then – but we don’t...??
 
  • #650
Could someone please explain what’s "mutually contradicting" in this?
cos^2(a-b)

Even I can solve this 'equation' without any contradictions...
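For what it's worth, the cos^2(a-b) rule by itself contains no contradiction; the tension only shows up when the correlations it implies are combined into the CHSH quantity. A minimal sketch (the angle choices below are the standard CHSH settings, assumed here for illustration, not taken from this thread):

```python
import math

def correlation(a_deg, b_deg):
    # Quantum correlation for polarization-entangled photons, which
    # follows from the cos^2(a - b) coincidence probability:
    # E(a, b) = cos(2 * (a - b))
    return math.cos(2.0 * math.radians(a_deg - b_deg))

# Standard CHSH angle choices (degrees)
a1, a2, b1, b2 = 0.0, 45.0, 22.5, 67.5
S = abs(correlation(a1, b1) - correlation(a1, b2)
        + correlation(a2, b1) + correlation(a2, b2))
print(S)  # ~2.828, i.e. 2*sqrt(2), above the local-realist bound of 2
```

Local realism caps S at 2 (the CHSH inequality), so any model that reproduces the cos^2(a-b) coincidences must give up at least one of the inequality's assumptions.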
 
  • #651
RUTA said:
In the Single World, the predicted distribution is what each experimentalist should find and, indeed, our QM predictions match said distributions.
That's just not true, a finite number of trials will not always yield exactly the same statistics as the ideal probability distribution, statistical fluctuations are always possible. If you flip a coin 100 times, and the coin's physical properties are such that it has a 50% chance of landing heads or tails on a given flip, that doesn't imply you are guaranteed to get exactly 50 heads and 50 tails! In fact, if you have a very large collection of series of 100 flips, a small fraction of the series will have statistics that differ significantly from the true probabilities (the greater the 'significant' difference, the smaller the fraction)--eventually you might see some series where 100 flips of a fair coin yielded 80 heads and 20 tails. Similarly, if space is infinite and there are an infinite number of different civilizations doing the same type of quantum experiment, there will be some small fraction of the civilizations where the statistics they get on every trial of that experiment throughout their history differ significantly from the true probabilities. This might be a very tiny fraction, but then it's no smaller than the fraction of "worlds" that see the same type of departure from the true probabilities.

Do you disagree with any of the statements above? If so, what's the first one you would disagree with? If you don't disagree with any, perhaps you would indeed say it's impossible to "do science" in a universe where the number of civilizations is infinite (assuming there is indeed a random element to experiments that can't be eliminated with better experimental techniques, which would even be true in a deterministic hidden-variable model like Bohmian mechanics if it's impossible to measure/control the hidden variables). But I think this would be a pretty strange position to take, philosophically.
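The 80-heads-in-100-flips figure above can be checked with an exact binomial tail; a quick sketch:

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    # Exact binomial tail: probability of k or more successes in n trials
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p80 = prob_at_least(80, 100)
print(p80)  # roughly 6e-10: tiny, but strictly nonzero
```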
 
  • #652
JesseM said:
That's just not true, a finite number of trials will not always yield exactly the same statistics as the ideal probability distribution, statistical fluctuations are always possible. If you flip a coin 100 times, and the coin's physical properties are such that it has a 50% chance of landing heads or tails on a given flip, that doesn't imply you are guaranteed to get exactly 50 heads and 50 tails! In fact, if you have a very large collection of series of 100 flips, a small fraction of the series will have statistics that differ significantly from the true probabilities (the greater the 'significant' difference, the smaller the fraction)--eventually you might see some series where 100 flips of a fair coin yielded 80 heads and 20 tails. Similarly, if space is infinite and there are an infinite number of different civilizations doing the same type of quantum experiment, there will be some small fraction of the civilizations where the statistics they get on every trial of that experiment throughout their history differ significantly from the true probabilities. This might be a very tiny fraction, but then it's no smaller than the fraction of "worlds" that see the same type of departure from the true probabilities.

Do you disagree with any of the statements above? If so, what's the first one you would disagree with? If you don't disagree with any, perhaps you would indeed say it's impossible to "do science" in a universe where the number of civilizations is infinite (assuming there is indeed a random element to experiments that can't be eliminated with better experimental techniques, which would even be true in a deterministic hidden-variable model like Bohmian mechanics if it's impossible to measure/control the hidden variables). But I think this would be a pretty strange position to take, philosophically.

You obtain results with an uncertainty in experimental physics, so you only need the result to agree with theory within a certain range (that's the source of statements having to do with "confidence level"). For an introductory paper on how QM statistics are obtained (they even supply the data so you can reproduce the results yourself) see: "Entangled photons, nonlocality, and Bell inequalities in the undergraduate laboratory," Dietrich Dehlinger and M. W. Mitchell, Am. J. Phys. v70, Sep 2002, 903-910. Here is how they report their result in the abstract, for example:

"Bell’s idea of a hidden variable theory is presented by way of an example and compared to the quantum prediction. A test of the Clauser, Horne, Shimony, and Holt version of the Bell inequality finds S = 2.307 +/- 0.035, in clear contradiction of hidden variable theories. The experiments described can be performed in an afternoon."

According to your view, they can't say "in clear contradiction," but that's standard experimental physics. And, if you were right, we couldn't do experimental physics. Thankfully, you're wrong :-)
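As a rough check of the "clear contradiction" wording, one can ask how far the reported S sits above the classical CHSH bound of 2 (treating the quoted +/- 0.035 as one standard deviation, which the paper itself may define differently):

```python
# Dehlinger-Mitchell result versus the local-realist CHSH limit
S, sigma, bound = 2.307, 0.035, 2.0
n_sigma = (S - bound) / sigma
print(n_sigma)  # ~8.8 standard deviations above the bound of 2
```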
 
  • #653
RUTA said:
You obtain results with an uncertainty in experimental physics, so you only need the result to agree with theory within a certain range (that's the source of statements having to do with "confidence level").
Yes, and no matter how many trials you do, as long as the number is finite there is some small probability that your results will differ wildly from the "true" probabilities determined by the laws of QM. For example, if in a particular experiment the QM prediction is that there is a 25% chance of seeing a particular result, then even if the experiment is done perfectly and QM is a correct description of the laws of physics, and even if you did a huge number of trials, there is some nonzero probability you would get that particular result on more than 90% of all trials, due to nothing but a statistical fluctuation. If the number of trials is large enough the probability of such a large statistical fluctuation may be tiny--say, one in a googol--but as long as the number of trials is finite the probability is nonzero.

If you continue to disagree with what I'm saying about the situation with the MWI being no worse than the situation with an infinite universe containing an infinite number of civilizations, I'd appreciate an answer to my question about which specific statement in the chain of argument you disagree with:
That's just not true, a finite number of trials will not always yield exactly the same statistics as the ideal probability distribution, statistical fluctuations are always possible. If you flip a coin 100 times, and the coin's physical properties are such that it has a 50% chance of landing heads or tails on a given flip, that doesn't imply you are guaranteed to get exactly 50 heads and 50 tails! In fact, if you have a very large collection of series of 100 flips, a small fraction of the series will have statistics that differ significantly from the true probabilities (the greater the 'significant' difference, the smaller the fraction)--eventually you might see some series where 100 flips of a fair coin yielded 80 heads and 20 tails. Similarly, if space is infinite and there are an infinite number of different civilizations doing the same type of quantum experiment, there will be some small fraction of the civilizations where the statistics they get on every trial of that experiment throughout their history differ significantly from the true probabilities. This might be a very tiny fraction, but then it's no smaller than the fraction of "worlds" that see the same type of departure from the true probabilities.

Do you disagree with any of the statements above? If so, what's the first one you would disagree with?
RUTA said:
For an introductory paper on how QM statistics are obtained (they even supply the data so you can reproduce the results yourself) see: "Entangled photons, nonlocality, and Bell inequalities in the undergraduate laboratory," Dietrich Dehlinger and M. W. Mitchell, Am. J. Phys. v70, Sep 2002, 903-910. Here is how they report their result in the abstract, for example:

"Bell’s idea of a hidden variable theory is presented by way of an example and compared to the quantum prediction. A test of the Clauser, Horne, Shimony, and Holt version of the Bell inequality finds S = 2.307 +/- 0.035, in clear contradiction of hidden variable theories. The experiments described can be performed in an afternoon."
Presumably there was some confidence interval they used to get the error bars of +/- 0.035. For example, they might have calculated that the probability that S is greater than 2.307 + 0.035 or less than 2.307 - 0.035 corresponds to a 5 sigma deviation, about a 0.00005% chance (perhaps based on considering a null hypothesis where S was outside of that range, and finding a 0.00005% chance that the null hypothesis would give a result of 2.307 in their experiment).
RUTA said:
According to your view, they can't say "in clear contradiction," but that's standard experimental physics.
Whatever gave you the idea that I would say they can't say "in clear contradiction"? If the probability of getting statistics that depart appreciably from the true probabilities is minuscule, then we can be very confident our results are close to the true probabilities. This is true in an infinite universe with an infinite number of civilizations (you haven't told me what you think about this scenario), and it's just as true in the MWI.
 
Last edited:
  • #654
JesseM said:
Yes, and no matter how many trials you do, as long as the number is finite there is some small probability that your results will differ wildly from the "true" probabilities determined by the laws of QM. For example, if in a particular experiment the QM prediction is that there is a 25% chance of seeing a particular result, then even if the experiment is done perfectly and QM is a correct description of the laws of physics, and even if you did a huge number of trials, there is some nonzero probability you would get that particular result on more than 90% of all trials, due to nothing but a statistical fluctuation. If the number of trials is large enough the probability of such a large statistical fluctuation may be tiny--say, one in a googol--but as long as the number of trials is finite the probability is nonzero.

If you continue to disagree with what I'm saying about the situation with the MWI being no worse than the situation with an infinite universe containing an infinite number of civilizations, I'd appreciate an answer to my question about which specific statement in the chain of argument you disagree with:

I disagree with this statement:

"Similarly, if space is infinite and there are an infinite number of different civilizations doing the same type of quantum experiment, there will be some small fraction of the civilizations where the statistics they get on every trial of that experiment throughout their history differ significantly from the true probabilities. This might be a very tiny fraction, but then it's no smaller than the fraction of "worlds" that see the same type of departure from the true probabilities."

In science, we expect every experiment to realize the proper distribution. You would never hear someone present aberrant data as "to be expected, based on the fact that an infinite number of civilizations are doing this very experiment." Most scientists would take this as a reductio against your particular interpretation of statistics in science.

Many Worlds is de facto in agreement with your interpretation. That's why I said (rhetorically), "Why would any scientist subscribe to Many Worlds?"
 
  • #655
RUTA said:
I disagree with this statement:

"Similarly, if space is infinite and there are an infinite number of different civilizations doing the same type of quantum experiment, there will be some small fraction of the civilizations where the statistics they get on every trial of that experiment throughout their history differ significantly from the true probabilities. This might be a very tiny fraction, but then it's no smaller than the fraction of "worlds" that see the same type of departure from the true probabilities."

In science, we expect every experiment to realize the proper distribution.
Are you saying we expect every experiment to exactly realize the proper distribution? If the probability of detecting some result (say, spin-up) is predicted by QM to be 0.5, would you expect that a series of 100 trials would yield exactly 50 instances of that result?

I would say instead that in science, we recognize that, given a large enough number of trials, the probability is very tiny that the statistics will differ significantly from the proper distribution (the law of large numbers). This "very tiny" chance can be quantified precisely in statistics, and it is always nonzero for any finite number of trials. But with enough trials it may become so small we don't have to worry about it, say a 1 in 10^100 chance that the observed statistics differ from the true probabilities by more than some amount epsilon (and in that case, we should expect that 1 in 10^100 civilizations that do the same number of trials will indeed observe statistics that differ from the true probabilities by more than that amount epsilon). From a purely statistical point of view (ignoring what assumptions we might make pragmatically for the purposes of doing science), do you think what I say here is incorrect?
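The "so small we don't have to worry" point can be made quantitative with a standard concentration inequality; a sketch using Hoeffding's bound (the tolerance eps = 0.01 and the 10^-100 target are arbitrary illustrative choices):

```python
import math

def hoeffding_bound(n, eps):
    # Hoeffding's inequality: upper bound on the probability that the
    # observed frequency differs from the true probability by more
    # than eps after n independent trials
    return 2.0 * math.exp(-2.0 * n * eps * eps)

# Double the trial count until the bound drops below 1 in 10^100
n = 1
while hoeffding_bound(n, 0.01) > 1e-100:
    n *= 2
print(n)  # about two million trials already suffice
```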
RUTA said:
You would never hear someone present aberrant data as "to be expected, based on the fact that an infinite number of civilizations are doing this very experiment."
No, but that's because it's extremely unlikely that our civilization would happen to be one of those that gets the aberrant result. That doesn't change the fact that in a universe with an infinite number of civilizations doing scientific experiments, any aberrant result will in fact occur occasionally.
RUTA said:
Most scientists would take this as a reductio against your particular interpretation of statistics in science.
I disagree, I think this is a rather idiosyncratic perspective that you hold. Most scientists would not say that an infinite universe with an infinite number of civilizations, a very small fraction of which will see aberrant results throughout their history due to random statistical fluctuations, presents any problem for normal science, because again it's vanishingly unlikely that we happen to be living in one of those unlucky civilizations.
 
  • #656
JesseM said:
Are you saying we expect every experiment to exactly realize the proper distribution? If the probability of detecting some result (say, spin-up) is predicted by QM to be 0.5, would you expect that a series of 100 trials would yield exactly 50 instances of that result?

I assume that's rhetorical.

JesseM said:
No, but that's because it's extremely unlikely that our civilization would happen to be one of those that gets the aberrant result. That doesn't change the fact that in a universe with an infinite number of civilizations doing scientific experiments, any aberrant result will in fact occur occasionally.

Just not here, right? Suppose X claims to have a source that produces 50% spin up and 50% spin down and X reports, "I have a 50-50 up-down source that keeps producing pure up results." If you REALLY believe that your interpretation of statistics in science is correct, then you would HAVE to admit that perhaps X is right. But, what will MOST scientists say? Of course, X is mistaken, he doesn't have a 50-50 source. Why? Because our theory is empirically driven, not the converse.

JesseM said:
I disagree, I think this is a rather idiosyncratic perspective that you hold. Most scientists would not say that an infinite universe with an infinite number of civilizations, a very small fraction of which will see aberrant results throughout their history due to random statistical fluctuations, presents any problem for normal science, because again it's vanishingly unlikely that we happen to be living in one of those unlucky civilizations.

If your view is correct, you could find me a paper published with aberrant results. Can you find me a published paper with claims akin to X supra? Why not? Because the weird stuff only happens in "other places?" Not here?
 
  • #657
RUTA said:
I assume that's rhetorical.
Yes, but a literal interpretation of your statement "In science, we expect every experiment to realize the proper distribution" would imply every experiment should yield precisely the correct statistics. I was illustrating that this statement doesn't really make any sense to me. If you didn't mean it in the literal sense that the observed statistics should precisely equal the correct probabilities, what did you mean?
RUTA said:
Just not here, right? Suppose X claims to have a source that produces 50% spin up and 50% spin down and X reports, "I have a 50-50 up-down source that keeps producing pure up results." If you REALLY believe that your interpretation of statistics in science is correct
Are you saying my statements are incorrect on a purely statistical level? If so, again, can you pinpoint which statement in the second paragraph of my last post is statistically incorrect?
RUTA said:
then you would HAVE to admit that perhaps X is right.
If "perhaps" just means "the probability is nonzero", then yes. But if I can show the chance X is right is astronomically small, say only a 1 in 10^100 chance that a 50-50 source would actually produce so many up results in a row, then on a pragmatic level I won't believe him. Do you deny that statistically, the probability of getting N heads in a row with a fair coin is always nonzero, though it may be astronomically small if N is very large? If not, do you deny that in an infinite universe with an infinite number of civilizations flipping fair coins, there will be some that do see N heads in a row? These aren't rhetorical questions; I am really having trouble seeing what part of my argument, specifically, you object to.
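The nonzero-but-astronomically-small claim is easy to verify with exact rational arithmetic, which sidesteps any floating-point underflow worry:

```python
from fractions import Fraction

def p_run_of_heads(n):
    # Exact probability of n consecutive heads with a fair coin: (1/2)^n
    return Fraction(1, 2) ** n

print(float(p_run_of_heads(100)))  # ~7.9e-31: astronomically small, never zero
```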
RUTA said:
But, what will MOST scientists say? Of course, X is mistaken, he doesn't have a 50-50 source.
Of course, I'll say that too. Why wouldn't I? If it would require a statistical fluctuation with a probability of 1 in 10^100 for his theory to be right, then we can say his theory is wrong beyond all reasonable doubt, even if we can't have perfect philosophical certainty that his theory is wrong.
RUTA said:
If your view is correct,
My view of what? Again, on a purely statistical sense are any of my statements incorrect? If so, which ones?
RUTA said:
you could find me a paper published with aberrant results.
It depends what you mean by "aberrant". If you mean the sort of massive statistical fluctuation that probability theory would say has an astronomically small probability like 1 in 10^40 or whatever, then this is much larger than the number of scientific experiments that have been done in human history so I wouldn't expect any such aberrant results. If you just mean papers with good experimental design found some result to a confidence of two sigma or something, but later experiments showed the result was incorrect, I don't think it'd be that hard to find such a paper.
 
  • #658
JesseM said:
Yes, but a literal interpretation of your statement "In science, we expect every experiment to realize the proper distribution" would imply every experiment should yield precisely the correct statistics.

Your argument does not make any sense to me, so I am hoping you could clarify your understanding of the meaning of "probability". If a source has a 0.1-0.9 up-down probability, what does that mean in your understanding, according to MWI? Does it mean 10% of the worlds will obtain 100% up and 90% of the worlds will obtain 100% down, or does it mean that in every world there will be 10% up and 90% down? It is not clear from your statements what you mean, and what has that got to do with "correct statistics"?

If I calculate the probability that the sun will explode tomorrow to be 0.00001, what does that mean in MWI? Is your understanding that I am calculating the probability of the sun in "my" world exploding, or of all the suns in the multiverse exploding, or what exactly? Or do you think such a probability result does not make sense in science?

I think after attempting to respond to these issues you may appreciate why many wonder, "Why would any scientist subscribe to Many Worlds?"
 
  • #659
JesseM said:
But there are two aspects of this question--the first is whether local realism can be ruled out given experiments done so far, the second is whether local realism is consistent with the statistics predicted theoretically by QM. Even if you don't use the projection postulate to generate predictions about statistics, you need some real-valued probabilities for different outcomes, you can't use complex amplitudes alone since those are never directly measured empirically. And if we understand local realism to include the condition that each measurement has a unique outcome, then it is impossible to get these real-valued statistics from a local realist model.
But I certainly don’t “understand local realism to include the condition that each measurement has a unique outcome”, not necessarily. You may believe that my understanding of local realism is not reasonable, but you may agree that “my” model is local realistic within the common understanding of this term. I already said that you can define a probability density in the model using the expression for the charge density.
JesseM said:
No idea where you got the idea that I would be talking about "approximate" locality from anything in my posts. I was just talking about QM being a "pragmatic" recipe for generating statistical predictions, I didn't say that Bell's theorem and the definition of local realism were approximate or pragmatic. Remember, Bell's theorem is about any black-box experiment where two experimenters at a spacelike separation each have a random choice of detector setting, and each measurement must yield one of two binary results--nothing about the proof specifically assumes they are measuring anything "quantum", they might be choosing to ask one of three questions with yes-or-no answers to a messenger sent to them or something. Bell's theorem proves that according to local realism, any experiment of this type must obey some Bell inequalities. So then if you want to show that QM is incompatible with local realism, the only aspect of QM you should be interested in is its statistical predictions about some experiment of this type, all other theoretical aspects of QM are completely irrelevant to you. Unless you claim that the "pragmatic recipe" I described would actually make different statistical predictions about this type of experiment than some other interpretation of QM like Bohmian mechanics or the many-worlds-interpretation, then it's pointless to quibble with the pragmatic recipe in this context.
I don’t quite get it. First off, I concede that the Bell inequalities cannot be violated in local realistic theories. I don’t question this part of the Bell theorem. The second part of the Bell theorem states that the inequalities can be violated in QM. I don’t question the derivation of this statement, but I insist that its assumptions are mutually contradictory, making this statement questionable. You tell me that measurements typically involve environmental decoherence. I read the following implication from that (maybe I was wrong): so there is no contradiction between unitary evolution (UE) and the projection postulate (PP). If you say that the difference between UE and PP has its root in environmental decoherence, I don’t have problems with that, but that does not eliminate the difference, or contradiction, between them. What I tried to emphasize is that you cannot declare this decoherence or any other root cause of the contradiction negligible; you cannot use any approximations to rule out local realism.
JesseM said:
But that won't produce a local realist theory where each measurement has a unique outcome. Suppose you have two separate computers, one modeling the amplitudes for various measurements which could be performed in the local region of one simulated experimenter "Alice", another modeling the amplitudes for various measurements which could be performed in the local region of another simulated experimenter "Bob", with the understanding that these amplitudes concerned measurements on a pair of entangled particles that were sent to Alice and Bob (who make their measurements at a spacelike separation). If you want to simulate Alice and Bob making actual measurements, and you must assume that each measurement yields a unique outcome (i.e. Alice and Bob don't each split into multiple copies as in the toy model I linked to at the end of my last post), then if the computers running the simulation are cut off from communicating with one another and neither computer knows in advance what measurement will be performed by the simulated experimenter on the other computer, then there is no way that such a simulation can yield the same Bell-inequality-violating statistics predicted by QM, even if you program the Born rule into each computer to convert amplitudes into probabilities which are used to generate the simulated outcome of each measurement. Do you disagree that there is no way to get the correct statistics predicted by any interpretation of QM in a setup like this where the computers simulating each experimenter are cut off from communicating? (which corresponds to the locality condition that events in regions with a spacelike separation can have no causal effect on one another)
Again, I don’t need unique outcomes – no measurement is final.
JesseM said:
The problem is that there is no agreement on how the many-worlds interpretation can be used to derive any probabilities. If we're not convinced it can do so then we might not view it as being a full "interpretation" of QM yet, rather it'd be more like an incomplete idea for how one might go about constructing an interpretation of QM in which measurement just caused the measuring-system to become entangled with the system being measured.
Well, I don’t know much about many worlds, but anyway – it seems this problem does not prevent you from favoring many worlds.
JesseM said:
See my comments above about the Wigner's friend type thought experiment. I am not convinced that you can actually find a situation where a series of measurements are made that each yield records of the result, such that using the projection postulate for each measurement gives different statistical predictions than if we just treat this as a giant entangled system which evolves in a unitary way, and then at the very end use the Born rule to find statistical expectations for the state of all the records of prior measurements. And as I said there as well, the projection postulate does not actually specify whether in a situation like this you should treat each successive measurement as collapsing the wavefunction onto an eigenstate or whether you should save the "projection" for the very last measurement.
I already said, first, that I disagree with your reading of this experiment, second, it is important how the projection postulate is used to prove violations in QM.


JesseM said:
I wasn't guessing what he said, I was guessing what he meant by what he said. What he said was only the very short statement "Yes, it is an approximation. However, due to decoherence, this is an extremely good approximation. Essentially, this approximation is as good as the second law of thermodynamics is a good approximation." I think this statement is compatible with my interpretation of what he may have meant, namely "in Bohmian mechanics the collapse is not 'real' (i.e. the laws governing measurement interactions are exactly the same as the laws governing other interactions) but just a pragmatic way of getting the same predictions a full Bohmian treatment would yield." Nowhere did he say that using the projection postulate will yield different statistical predictions about observed results than those predicted by Bohmian mechanics.

If it’s an approximation, it is not precise; if it is not precise, there must be a difference.

JesseM said:
I think they are different only if you assume multiple successive measurements, understand "the projection postulate" to imply that each measurement collapses the wavefunction onto an eigenstate, and assume that for some of the measurements the records of the results are "erased" so that it cannot be known later what the earlier result was. If you are dealing with a situation where none of the measurement records are erased, I'm pretty sure that the statistics for the measurement results you get using the projection postulate will be exactly the same as the statistics you get if you model the whole thing as a giant entangled system and then use the Born rule at the very end to find the probabilities of different combinations of recorded measurement results. And once again, the "projection postulate" does not precisely define when projection should occur anyway; you are free to interpret the projection postulate to mean that only the final measurement of the records at the end of the entire experiment actually collapses the wavefunction.

I don’t quite see what the status of all these statements is. Anyway, I don’t see any reason to agree with them until they are substantiated.
 
  • #660
JesseM said:
(continued from previous post)
I think you misunderstood what I meant by "any" above, I wasn't asking if your model could reproduce any arbitrary prediction made by the "standard pragmatic recipe" (i.e. whether it would agree with the standard pragmatic recipe in every possible case, as I think Bohmian mechanics does). Rather, I was using "any" in the same sense as it's used in the question priests used to ask at weddings, "If any person can show just cause why they may not be joined together, let them speak now or forever hold their peace"--in other words, I was asking if there was even a single instance of a case where your model reproduces the probabilistic predictions of standard QM, or whether your model only deals with complex amplitudes that result from unitary evolution.

I got that about "any" the first time :-) Probabilities can be introduced in "my" model using the expression for the current density, the same way it is done in the Bohm interpretation - so it's pretty much the Born rule, but again, it should be used just as an operational rule.


JesseM said:
The reason I asked this is that the statement of yours I was responding to was rather ambiguous on this point:

If your model does predict actual measurement results, then if the model were applied to an experiment intended to test some Bell inequality, would it in fact predict an apparent violation of the inequalities both in experiments where the locality loophole was closed but not the detector-efficiency loophole, and in experiments where the efficiency loophole was closed but not the locality loophole?

I hope and think so, but I am not sure - as I said, I am not sure to what extent it describes experimental results correctly.

JesseM said:
I think you said your model would not predict violations of Bell inequalities in experiments with all loopholes closed--would you agree that if we model such experiments using unitary evolution plus the Born rule (perhaps applied to the records at the very end of the full experiment, after many trials had been performed, so we don't have to worry about whether applying the Born rule means we have to invoke the projection postulate), then we will predict violations of Bell inequalities even in loophole-free experiments?

I am not sure - you need correlations, so you need to use the Born rule twice in each event, and this is pretty much equivalent to the projection postulate. You put it very well (I hope I understood you correctly) that the Born rule should be applied at the end of each experiment - that means, I think, you cannot use it twice in each experiment.
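As an aside for readers following this exchange: JesseM's earlier claim can be checked numerically in a toy case. The sketch below is a standalone textbook-QM example (not the model under discussion in this thread). It computes the joint outcome probabilities for a spin singlet measured along two analyzer angles in two ways: (1) Born rule plus projection postulate applied sequentially, and (2) unitarily copying each outcome into an ancilla "record" qubit and applying the Born rule only once, to the records, at the very end. For this case the two routes agree exactly:

```python
import numpy as np

def basis(theta):
    # Eigenvectors (+1 then -1) of a spin measurement along angle theta (x-z plane)
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    dn = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return [up, dn]

# Singlet state of two qubits, |psi> = (|01> - |10>)/sqrt(2)
psi = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

a, b = 0.0, np.pi / 3  # analyzer angles for Alice and Bob

# --- Route 1: Born rule + projection postulate, applied sequentially ---
def sequential(psi, a, b):
    probs = {}
    for i, va in enumerate(basis(a)):
        Pa = np.kron(np.outer(va, va), np.eye(2))   # project Alice's qubit
        p_i = psi @ Pa @ psi                        # Born rule for Alice
        collapsed = Pa @ psi / np.sqrt(p_i)         # collapse and renormalize
        for j, vb in enumerate(basis(b)):
            Pb = np.kron(np.eye(2), np.outer(vb, vb))
            probs[(i, j)] = p_i * (collapsed @ Pb @ collapsed)  # Born rule for Bob
    return probs

# --- Route 2: unitary "recording" + one Born rule at the very end ---
def unitary_records(psi, a, b):
    # Each analyzer copies its outcome into a record qubit (a CNOT in the
    # analyzer basis); here we write the post-recording state directly.
    rec = [np.array([1, 0]), np.array([0, 1])]
    big = np.zeros(16)
    for i, va in enumerate(basis(a)):
        for j, vb in enumerate(basis(b)):
            amp = np.kron(va, vb) @ psi
            big += amp * np.kron(np.kron(np.kron(va, vb), rec[i]), rec[j])
    # Born rule applied once, to the two record qubits
    probs = {}
    for i in range(2):
        for j in range(2):
            P = np.kron(np.eye(4),
                        np.kron(np.outer(rec[i], rec[i]), np.outer(rec[j], rec[j])))
            probs[(i, j)] = big @ P @ big
    return probs

p1, p2 = sequential(psi, a, b), unitary_records(psi, a, b)
for k in p1:
    assert abs(p1[k] - p2[k]) < 1e-12
```

This only shows agreement when no records are erased, which is exactly the caveat JesseM attached to the claim; it says nothing about the loophole debate itself.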

JesseM said:
Likewise, would you agree that Bohmian mechanics also predicts violations in loophole-free experiments, and many-worlds advocates would expect the same prediction even if there is disagreement on how to derive it?

I have nothing to say about many worlds, and I am not sure about Bohmian mechanics - Demystifier said that it does predict violations in ideal experiments, but then it seemed he was less categorical about that (see his post 303 in this thread). So I don't know. My guess is that you cannot prove violations in Bohmian mechanics using just unitary evolution; otherwise the relevant proof could be "translated" into a proof in standard QM.
 
  • #661
DrChinese said:
Disagree, as we have already been through this many times. There is nothing BUT evidence of violation of Bell Inequalities. To use a variation on your 34 year old virgin example:

Prosecutor: "We found the suspect over the victim, holding the murder weapon. The victim's last words identified the suspect as the perp. The murder weapon was recently purchased by the suspect, and there are witnesses who testified that the suspect planned to use it to kill the victim." Ah, says the defense attorney, but where is the photographic evidence of the crime itself? This failure is proof of the suspect's innocence!

The problem is that you could write an equally persuasive speech for the prosecutor "proving" that the sum of the angles of a planar triangle is not 180 degrees.

There is another thing. There is a huge difference between "beyond reasonable doubt" in court and in science. You know that DNA testing led to the acquittal of perhaps hundreds of people or more. It is no small matter to imprison or execute an innocent person. I even heard that prosecutors try to exclude mathematicians from their future juries because mathematicians' standard for "beyond any doubt" is much stricter than the national average (whether this is true or not is not important; it's a good illustration anyway).

I'd say there is a sound reason behind this difference: a crime is not reproducible, whereas science is supposed to be. However, 46 years of looking for violations of the genuine inequalities have demonstrated no such violations.

The difference is especially clear in this case, as elimination of local realism is an extremely radical idea, so the burden of proof is very high.

DrChinese said:
You can always demand one more nail in the coffin. In fact, it is good science to seek it. But the extra nail does not change it from "no experimental evidence" (as you claim) to "experimental evidence". It changes it from "overwhelming experimental evidence" (my claim) to "even more overwhelming experimental evidence".

I fail to see how total absence of violations of the genuine Bell inequalities can serve as "overwhelming experimental evidence" of such violations, but obviously you have no such problems.


DrChinese said:
As to the second of your assertions: how QM arrives at its predictions may be "inconsistent" in your book. But it does not cause a local realistic theory to be any more valid. If QM is wrong, so be it. That does not change the fact that all local realistic theories are excluded experimentally.

All local realistic theories can only be ruled out by a demonstration of violation of the genuine Bell inequalities, period. Sorry to disappoint you, but no such demonstration is available. Furthermore, the proof of such violations in quantum theory requires mutually contradicting assumptions. Therefore, violations of the Bell inequalities are on shaky ground, to put it mildly, both theoretically and experimentally.
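For readers who want the quantum prediction at stake stated concretely: the following minimal sketch (standard textbook QM for a spin singlet at the usual CHSH angles; it makes no claim about loopholes or any experiment) shows that QM predicts |S| = 2√2 for the CHSH combination of correlations, above the local-realist bound of 2:

```python
import numpy as np

def spin_op(theta):
    # Spin observable along angle theta in the x-z plane: cos(t)*sz + sin(t)*sx
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

# Singlet state |psi> = (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)

def correlation(a, b):
    # E(a, b) = <psi| A(a) (x) B(b) |psi>; for the singlet this is -cos(a - b)
    return psi @ np.kron(spin_op(a), spin_op(b)) @ psi

# Standard CHSH angle choices
a, ap = 0.0, np.pi / 2
b, bp = np.pi / 4, 3 * np.pi / 4

S = correlation(a, b) - correlation(a, bp) + correlation(ap, b) + correlation(ap, bp)
print(abs(S))  # 2*sqrt(2) ≈ 2.828, exceeding the local-realist bound of 2
```

Whether this prediction has been demonstrated experimentally with all loopholes closed simultaneously is, of course, precisely the point being disputed in this thread; the code only states what QM predicts.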
 
  • #662
GeorgCantor said:
Do you know of a totally 100% loophole-free experiment from anywhere in the universe?

I can just repeat what I said several times: for some mysterious reason, Shimony is not quite happy about experimental demonstration of violations, Zeilinger is not quite happy... You are quite happy with it? I am happy for you. But that's no reason for me to be happy about that demonstration. Again, the burden of proof is extremely high for such radical ideas as elimination of local realism.
 
  • #663
DrChinese said:
Great point. So there is no evidence for GR either. :biggrin:

There is another issue with akhmeteli's line of reasoning IF CORRECT: there is a currently unknown local force which connects Alice and Bob. This kicks in on Bell tests like Rowe et al which closes the detection loophole. But not otherwise as far as we know.

There is also a strong bias - also previously unknown and otherwise undetected - which causes an unrepresentative sample of entangled pairs to be detected. This kicks in on Bell tests such as Weihs et al, which closes the locality loophole. Interestingly, this special bias does NOT appear when all pairs are considered such as in Rowe, however, the effect of the unknown local force is exactly identical. What a happy coincidence!

And so on for every loophole when closed individually. All the loopholes have exactly the same effect at every angle setting! And if you leave 2 open instead of 1, you also get the same effect! (I.e. if you leave the locality and detection loopholes open simultaneously, the effect is the same as either one individually.)

Great sample of eloquence and logic. Again, just a tiny problem: it's no sweat to rewrite your post to prove that the sum of the angles of a planar triangle is not 180 degrees. But maybe it isn't?

DrChinese said:
Strangely, the entanglement effect (remember that this is just a coincidence per Local Realism) completely disappears if you learn the values of Alice and Bob. Just as QM predicts, but surprisingly, quite contrary to the ideals of Local Realism. After all, EPR thought that you could beat the HUP with entangled particle pairs, and yet you can't!

I don't challenge HUP at all.

DrChinese said:
So to summarize: akhmeteli is essentially asserting that a) 2 previously unknown and otherwise undetected effects exist (accounting for the loopholes); b) these effects are not only exactly equal to each other but are also equal to their combined impact; and c) an expected ability to beat the HUP (per EPR's local realism) has not materialized.

See above
 
  • #664
DrChinese said:
You apparently cannot, as you put forth your opinions as fact. Further, you apparently cannot tell the difference between ad hoc speculation and evidence based opinions. To reasonable people, there is a difference.

I cannot comment on something lacking any specifics.

DrChinese said:
There is a huge difference in your speculation on loopholes (notice how you cannot model the behavior of these despite your unsupported claims) and RUTA's opinions (which he can model nicely using both standard and original science).

Again, how about specifics? What unsupported claims, exactly? I did not claim I can model loopholes within a reasonable time frame.
 
  • #665
DrChinese said:
So basically, this version is useless as is (since it cannot predict anything new and cannot explain existing well); but you want us to accept that a future version might be valuable. That may be reasonable, I can see the concept and that is certainly a starting point for some good ideas. But it is a far cry to go from here to saying your point is really made.

What point related to the model I claimed I made and in fact did not?

You call my model useless. I respectfully disagree. Irrespective of any interpretation of quantum theory, it adds some rigorous, and therefore valuable, results to mathematical physics; for example, it demonstrates a surprising and simple result: the matter field can be naturally eliminated from scalar electrodynamics.

No, I don't have time to "explain existing well" using the model, but the model does not belong to me anymore, so those who wish can find out whether it's good or bad at explaining. I think the model adds meaningful and specific material for discussions of the interpretation of quantum theory. Anybody can use it to support their own points or to question other people's points. For example, it can be used to analyze such no-go theorems as the Bell theorem. In particular, it shows that not all quantum field theories are "non-local-realistic". I guess this is a new and interesting result, no matter what interpretation you favor.

No, the model perhaps cannot predict anything new. However, if it had the same unitary evolution as quantum electrodynamics, rather than "a" quantum field theory, it would be much more valuable, although in that case it certainly could not predict anything new. Therefore, the inability to predict something new may be the least of the model's problems.

DrChinese said:
Santos, Hess and many others have gone down similar paths with similar arguments for years. Where did they end up?

For example, according to you, Santos "ended up" "convincing a few good people that 'all loopholes should be closed simultaneously'". You call that a "questionable conclusion"; I see it as a genuine contribution to our body of knowledge.

DrChinese said:
Please keep in mind that you should not expect to post speculative ideas in this forum with impunity. This forum is for generally accepted science.

What speculative ideas? Until recently, practically everything I said was published in peer-reviewed journals by others. Now that my article was accepted for publication, I just added a discussion of some of my mathematical results from that article.

With all due respect, I believe you post speculative ideas in this forum with impunity when you question the mainstream fact (not opinion) that there have been no experimental demonstrations of violations of the genuine Bell inequalities.
 