EPR Experiment with Pool Balls

In summary, the conversation discusses whether the logic of pool balls (properties fixed at the source and merely revealed on inspection) can be applied to the EPR experiment, which involves two particles in an entangled state. This approach does not match the results of real experiments, which quantum mechanics predicts more accurately. The non-locality of the experiment is a quirk of the theory, and other interpretations, such as the Consistent Histories approach, suggest that the EPR paradox can be explained without non-locality.
  • #36
Originally posted by nightlight
For more discussion of the "fair" sampling hypothesis and the proposed simple additional experiment to test it on the existing EPR-Bell setups, check the paper by G. Adenier and A. Khrennikov. I haven't yet seen any of the several active quantum optics groups who are claiming to have established Bell inequality violations check the assumption on their setups. Since the additional tests proposed are quite simple on the existing setups, it is surprising that no one has yet picked up the clear-cut open challenge of the above paper, especially considering that verification of fair sampling as proposed would eliminate all known plausible LHV theories (they all rely on "unfair" sampling). Or maybe some have tried it and the data didn't come out the way they wished, and they didn't want to be the first with the "bad" news. We'll have to wait and see.

PS: After writing the above, I contacted the authors of the cited paper and the status is that even though they had contacted all the groups which have done or plan to do EPR-Bell experiments, oddly no one was interested in testing the 'fair sampling' hypothesis.

I wonder if that is because many do not accept the "fair sampling" critique as valid?

I have read the Santos paper now and I admit I don't agree with a lot of what he is saying. The approach is to blast the existing experiments as if they show nothing, when clearly they show a lot. Perhaps they are not perfect, true enough, but the perspective - and the facts - are off in my view. Bell's Inequality is not a test of QM. The question is whether any local realistic theory can make the same predictions as QM. From my perspective the Bell paper conclusively shows it cannot.

If someone wants to test the predictions of QM, great. That is the point of science, after all, and I certainly agree that nothing should be held too sacred to question. However, given the body of experimental knowledge, I don't see QM falling anytime soon - at least as regards electrodynamics. And I certainly don't see any experimental evidence here that QM is wrong.

I have also been looking over the paper by Caroline Thompson which raises the issue of timing in EPR-type experiments. Like the Santos papers, it raises interesting issues. I am continuing to digest all of the criticisms so I can address them more directly.

I mean, let's get real here. The repeatable experimental results as adjusted just happen to exactly agree with QM and rule out LHV theories. What an odd coincidence out of all of the possible results we might see!
 
  • #37
Originally posted by DrChinese
I wonder if that is because many do not accept the "fair sampling" critique as valid?
The "fair sampling" hypothesis is part of all photon experiments so far (often labeled euphemistically as "detection loophole"). It is necessary because the actual accepted pairs represent only a few percent of all pairs produced by the source. There is also nothing to dispute about what kind of LHVs are eliminated by "fair sampling" assumption all by itself (before anyone had set a foot in the lab) -- it eliminates upfront all LHV theories in which (hidden) variables determining whether a valid pair is detected are independent from hidden variables determining the +/- outcome. That means the experiments test only a subset of LHVs for which these two sets of variables are separate and independent. Experiment says nothing about remaining LHV theories.

The above is commonly understood and there is nothing to dispute about it. The only question where taste/judgment has some room is whether the class of LHV theories which has not been tested so far (those LHVs where the two sets of HVs are mutually dependent) has any chance of developing into more useful and economical/elegant (in postulates) theories than the existent QM/QED.

It has nothing to do with the "validity of the critique." No serious paper on the subject (excluding most popular and pedagogical expositions) takes the "fair sampling" critique head on and proves it invalid. The critique either doesn't get mentioned at all (often the case in recent years) or it gets acknowledged and dismissed as a "loophole" (i.e., like tax loopholes, only cheats would look for those).

I have read the Santos paper now and I admit I don't agree with a lot of what he is saying. The approach is to blast the existing experiments as if they show nothing, when clearly they show a lot.
He simply states more directly and without euphemisms the factual situation -- what kind of LHVs are being tested by the experiments. He and Marshall (and a few other critics) have been saying that for over a decade. There has been no direct counter-argument to refute it. Only silence (including the rejection of submitted papers as "nothing new, it is all well known, nothing to see here, move on") or euphemistic rephrasing of the well-known unpleasant facts (as in Aspect's recent paper).

...The question is whether any local realistic theory can make the same predictions as QM. From my perspective the Bell paper conclusively shows it cannot.

It does show that. But keep in mind that not even a single experiment agrees with the predictions of the QM measurement model used by Bell.

The actual data, of course, agrees well with the quantum optics predictions, which provide the correct model for these experiments (taking into account the non-sharp/Poissonian or super-Poissonian photon numbers from lasers or from spontaneous emission in atomic gas, detection efficiency, dark current, apertures, etc.). But quantum optics also agrees in this domain with the semi-classical theories (such as stochastic optics, a pure LHV theory; there is a Sudarshan-Glauber theorem from 1963 proving this equivalence). And neither agrees with Bell's QM prediction.

Bell's QM prediction is based on QM measurement theory (the projection postulate). That prediction is not what the actual data show or what the proper quantum optics model predicts. Only data adjusted and highly extrapolated under additional assumptions -- assumptions untested and outside of the theory/postulates -- finally match Bell's QM prediction.
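For reference, a minimal sketch of the idealized "Bell QM prediction" being discussed here, assuming the standard |HH>+|VV> polarization-entangled state, the textbook correlation cos 2(a-b), perfect detection, and the usual CHSH angles:

```python
import numpy as np

def E_ideal(a, b):
    """Idealized QM correlation for polarization-entangled photons in the
    |HH> + |VV> state, with every pair assumed detected."""
    return np.cos(2.0 * (a - b))

a1, a2 = 0.0, np.pi / 4            # Alice's settings: 0 and 45 degrees
b1, b2 = np.pi / 8, 3 * np.pi / 8  # Bob's settings: 22.5 and 67.5 degrees

S = abs(E_ideal(a1, b1) - E_ideal(a1, b2) + E_ideal(a2, b1) + E_ideal(a2, b2))
print(f"Ideal CHSH value S = {S:.4f}   (local hidden variable bound: 2)")
# Prints ~2.8284 = 2*sqrt(2): the idealized "Bell QM prediction" discussed above.
```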

On the theoretical side, the critics reject QM measurement theory -- historically its mathematical basis and core motivation was von Neumann's faulty proof of the impossibility of hidden variables. Without that "impossibility" there is no reason to attach postulates about the observer and a non-physical collapse of the wave function to the theory (just as Maxwell's or Newton's theories didn't need to speculate about "observer's consciousness" or any such thing). Unfortunately, once it became clear that that proof was no good (as early as the 1950s, when Bohm developed his theory), there were decades of invested work and armies of "experts" on the (suddenly obsolete) "measurement theory," teaching and publishing. Bohm's result was mostly ignored (or hand-waved away), its implications not talked about, until a new, weaker "proof" (it doesn't eliminate all HVs but only LHVs) was produced by Bell. With this new fallback defense line, suddenly the flaws of von Neumann's proof became somehow obvious to everyone and made it into textbooks.

If someone wants to test the predictions of QM, great. That is the point of science, after all, and I certainly agree that nothing should be held too sacred to question. However, given the body of experimental knowledge, I don't see QM falling anytime soon - at least as regards electrodynamics. And I certainly don't see any experimental evidence here that QM is wrong.

You need finer distinctions among the parts of QM. There is QM/QED dynamics, which is certainly well tested. Born's probability postulate provides the operational interpretation of the wave function/state vector -- that part is solid and necessary as well.

Then there is QM measurement theory (originated by von Neumann), the core of which is the projection postulate -- the non-physical collapse of the wave function, which somehow suspends the dynamical evolution and performs its magic (non-local, instant, non-dynamical) state transformation (by the observer's consciousness, as von Neumann suggested; by an "irreversible process," per Bohr and others; by universe branching, per Everett; by the "subdynamics" of "dissipative, far from equilibrium systems," per Prigogine, ...); then the newly produced state is released from its momentary spell and returned to the dynamical postulates to carry on evolving as before.

That whole "theory" is what the critics reject. And that "theory" is also the key ingredient of the Bell's "QM prediction". There is no other practical use for the "measurement theory" (von Neumann's proof having been shown as invalid) or other direct test than the EPR-Bell experiments. It props the Bell's "QM prediction," which in turn is all that props any need for the "measurement theory" (a kind of mutual back-patting society). Otherwise It is entirely gratuitous, sterile add-on to QM (since the original reason/prop, von Neumann's "HV impossibility," is now acknowledged as invalid). QED, QCD,... etc. don't use "measurement theory" (collapse/projection postulate) but merely dynamics and Born's postulate. You can drop the closed loop of Bell's theorem and "QM measurement theory" and no harm would be done to predictive power of quantum physics.

I mean, let's get real here. The repeatable experimental results as adjusted just happen to exactly agree with QM and rule out LHV theories. What an odd coincidence out of all of the possible results we might see!
The results are not merely "adjusted" but are almost entirely the product of extrapolation -- well over 90% of the "data" which reproduce Bell's "QM prediction" (the "idealized" measurement-theory prediction) are not obtained by measurement but are added in by hand (under the "fair sampling" assumption). With that much freedom, 90%+ to add by hand, you can match anything. There are astronomically many ways to extrapolate the 90%+ missing data points.
 
Last edited:
  • #38
BTW, I'm enjoying the interchange, Nightlight. Regarding Santos' paper "Optical Tests of Bell's Inequalities Not Resting on Absurd Fair Sampling Assumption", I wish to pursue a few points.

1. For me, the crucial concept he is selling is found in equation (13) from Section 3. By a little mathematical sleight of hand, a formula is produced which relates the two main detector efficiency parameters, V and eta (η).

An interesting concept, true, but does it really mean anything? We are supposed to accept the existence of a new effect - previously undetected - which shows up "just in time" in EPR experiments to distort the results just so as to match QM predictions. Of course it is specifically designed to have that effect, and it would need to have that effect to match with experiment.

So my question is: If "fair sampling" is absurd, what is (13)? To my eyes it is no improvement over that which it seeks to replace. And to top that off, a specific prediction is made that this effect actually holds true until detector efficiencies reach 82% from current levels of about 20%. I'm not saying that Santos isn't brilliantly right, but I wouldn't place a bet on that point.

2. Instead, in Section 4, Santos states he wants an experiment "without any possible loophole." Is there such a thing? I mean, we talk about the age of the universe, the mass of an electron, etc. What do we know anyway? For me, knowledge is utility and Santos' theory lacks any.

I guess he is at least making a prediction, so give points for that. Here, I'll make a prediction for you: we will never witness unambiguous proton decay regardless of how many years we look or how large an apparatus we use to look for it. That's at least as meaningful as Santos' effective prediction that a "loophole free" violation of Bell's Inequality will never be witnessed. Not sure either prediction has a lot of utility, although both have interesting possibilities.
 
  • #39
Originally posted by DrChinese
1. For me, the crucial concept he is selling is found in equation (13) from Section 3. By a little mathematical sleight of hand, a formula is produced which relates the two main detector efficiency parameters, V and eta (η).

An interesting concept, true, but does it really mean anything? We are supposed to accept the existence of a new effect - previously undetected - which shows up "just in time" in EPR experiments to distort the results just so as to match QM predictions. Of course it is specifically designed to have that effect, and it would need to have that effect to match with experiment.
The only specific assumption he makes to derive relation (13) is the property of the LHV model given via (10), which says that below a certain threshold angle Gamma (between the polarizer and the "hidden" polarization of the wave packet) the detector has trigger probability Beta > 0; otherwise (for larger angles between polarizer and hidden polarization) the detector won't trigger. The rest leading to (12) & (13) is algebra based on the well-known result (1) and the customary generic LHV definitions and notations (3-5).

His assumption (10) is merely a very simple instance of an LHV constructed to violate explicitly the "fair sampling" hypothesis; essentially it is the same as Pearle's 1970 "rejected data" model (Pearle at the time didn't have eq. (1), which was deduced in the late 1970s). The model specified via (10) is among the simplest semi-classical models of detection - a polarized wave packet splits its energy on the polarizer into cos^2(alpha) and sin^2(alpha) components (as discussed a few messages back). The probability that the primary ray, the cos^2() component, will trigger an avalanche in the detector (a detection event) is roughly proportional to the energy of that component, i.e. to cos^2(alpha) (there is also non-linearity in real detectors, depending on settings, i.e. "proportional" is not meant as a linear dependence). A quick & dirty approximation (easy to integrate over) of the "detector-modulated cos^2(alpha)" is the step function given via (10) by Santos.

This model also illustrates nicely how unnatural/unfair the "fair sampling" hypothesis actually is, even against the most straightforward classical model -- in (10) the packet-polarization "hidden" variable, which determines the +/- result, is also the essential ingredient in determining the detection probability, thus the two sets of variables are not independent (it's one and the same hidden variable). Yet this type of theory is excluded by "fair sampling" up front, before any experiment is done.
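A rough Monte Carlo sketch of this kind of model may help. The step-function detector below is a crude stand-in in the spirit of Pearle's model and of (10), not Santos' actual equations, and the threshold value is tuned by hand; the point is only that a fully local model can satisfy the CHSH bound when every pair is counted, yet exceed it -- and even reach the quantum value -- on the detected subsample, precisely because detection depends on the same hidden polarization that fixes the +/- outcome.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2_000_000

# Shared hidden variable: the "hidden polarization" of each pair, uniform on [0, pi).
lam = rng.uniform(0.0, np.pi, N)

GAMMA = 0.23   # toy detection-threshold parameter, tuned by hand (see final comment)

def measure(setting, lam):
    """Purely local rule: a deterministic +/-1 outcome plus a detection flag,
    both depending only on the local polarizer setting and the shared hidden
    polarization (a crude step-function detector in the spirit of eq. (10))."""
    c = np.cos(2.0 * (setting - lam))
    outcome = np.where(c >= 0.0, 1, -1)
    detected = np.abs(c) > np.sin(GAMMA)   # fires only when the hidden polarization
                                           # is close enough to the pass or block axis
    return outcome, detected

def E(a, b):
    """Correlation estimated, as in the experiments, from coincidences only."""
    A, dA = measure(a, lam)
    B, dB = measure(b, lam)
    keep = dA & dB
    return np.mean(A[keep] * B[keep]), np.mean(keep)

a1, a2 = 0.0, np.pi / 4            # Alice: 0 and 45 degrees
b1, b2 = np.pi / 8, 3 * np.pi / 8  # Bob: 22.5 and 67.5 degrees

(E11, _), (E12, _), (E21, _), (E22, frac) = E(a1, b1), E(a1, b2), E(a2, b1), E(a2, b2)
S = abs(E11 - E12 + E21 + E22)
print(f"CHSH on the detected subsample: S = {S:.3f} (joint-detection fraction ~ {frac:.2f})")
# With GAMMA = 0 (every pair detected) this local model gives S = 2 (up to sampling
# noise); with GAMMA ~ 0.23 the *detected* subsample reaches S ~ 2.83, the quantum
# value, even though nothing non-local is going on -- "unfair sampling" in action.
```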

In a larger perspective, what Santos is doing here is following the same trajectory that went from von Neumann --> Kochen-Specker --> Bell -- weakening the "HV impossibility" by increasing the constraints on the HV theories. While von Neumann claimed all HV theories are incompatible with QM predictions (and K-S all "non-contextual" HV theories), Bell showed that QM doesn't make the empirical predictions used in von Neumann's HV impossibility proof (the additivity of non-commuting observables used by vN is a mathematical property of the QM formalism, not an empirical prediction of QM).

Since Bell's QM prediction (based on the collapse postulate) hasn't materialized in the experiments after over three decades of trying, it may well be that it, too, is not a "proper" QM prediction ("proper" = from QM without the collapse postulate), i.e. it is a non-empirical artifact of the QM formalism (specifically, a consequence of the unnecessary "projection postulate"), just as von Neumann's additivity "prediction" was.

So, the role of Santos' proposal is to further restrict the LHVs (to a subset of those violating "fair sampling") and to suggest that at least this smaller class of LHVs be truly tested, loophole free. This is the same idea that Adenier & Khrennikov are suggesting (in quant-ph/0306045), although it is even more restrictive on the LHVs than A-K's -- in the paper Santos restricts the LHVs to those violating "fair sampling" in one very specific way, via formula (10). With that specificity he can then deduce consequences: how exactly the "losses" and visibility will manifest in the realistic data, depending on the particular experimental settings (on eta and V).

... And to top that off, a specific prediction is made that this effect actually holds true until detector efficiencies reach 82% from current levels of about 20%. I'm not saying that Santos isn't brilliantly right, but I wouldn't place a bet on that point.
The 82.8% = 2(sqrt(2)-1) is the well-known Garg-Mermin limit on eta (with V=1) from 1987. Santos is conjecturing that eta and V will continue the tradeoff game (as they have for over three decades), so that Bell's inequality will not be violated. The specific way they might do it would be via constraint (13), although that is not truly necessary given the high specificity of the model defined via eq. (10). A more general model of that type, and one more similar to actual photo-detectors, would replace the step-at-Gamma function (10) with a more general decreasing function, which includes (10) as a special case. But higher specificity simplifies the calculation of concrete predictions, and if the experiment is done, one can refine or replace (10), if possible, with something still consistent with the existing realistic theories (SED, self-field, etc.).
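As a numeric aside, the 82.8% threshold can be checked from the commonly quoted efficiency-corrected CHSH bound for local models, S <= 4/eta - 2 (coincidence-normalized, symmetric detector efficiency eta, visibility V = 1); this is only an illustration of where the Garg-Mermin number comes from, not a reproduction of Santos' relation (13):

```python
import numpy as np

# For the CHSH quantity estimated from coincidences only, local models with a
# setting-independent detection efficiency eta are commonly bounded by
#   S <= 4/eta - 2       (V = 1 assumed)
# so the ideal QM value 2*sqrt(2) only becomes a genuine violation once
# 2*sqrt(2) > 4/eta - 2, i.e. eta > 2*(sqrt(2) - 1).

eta_threshold = 2.0 * (np.sqrt(2.0) - 1.0)
print(f"critical efficiency: {eta_threshold:.3%}")        # ~82.8%

qm = 2.0 * np.sqrt(2.0)
for eta in (0.05, 0.20, 0.70, 0.83, 0.90):
    lhv_bound = 4.0 / eta - 2.0
    verdict = "QM value exceeds LHV bound" if qm > lhv_bound else "no violation possible"
    print(f"eta = {eta:.2f}: LHV bound on S = {lhv_bound:6.2f}  -> {verdict}")
```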
2. Instead, in Section 4, Santos states he wants an experiment "without any possible loophole." Is there such a thing?
The "loophole free" is possible for a more restricted subsets of LHV theories. After all the current tests are "loophole free" for the subset of LHVs consistent with "fair sampling". No such LHV theory was ever constructed, which is why the disproving them experimentally was a red-herring (or "absurd" in his words). Santos' proposal challenges experimenters to make a "loophole free" experiment for another class of LHVs, those that are more realistic, similar to those theories that actually exist (such as their semi-classical stochastic optics).

If the experiment is done and it indeed shows the violation of "fair sampling" in the manner of Santos' or A-K's suggestions, that doesn't mean QM is wrong, but merely that Bell's QM prediction and its underlying basis (collapse postulate/measurement theory) are likely invalid. There is no harm done if the two are taken out of QM altogether, since their only purpose/effect is to prop each other up (as explained earlier).
 
Last edited:
  • #40
Originally posted by DrChinese
The fact is that decades of experiments have soundly supported the predictions of QM, and have failed to indicate the existence of a more complete specification of reality as discussed in EPR. To the vast majority of scientists in the area, the matter is reasonably settled by the experiments of Aspect et al. Local reality is rejected by experiment.

On the other hand, we have Bohmian mechanics, which means there exist simple realistic hidden variable theories. These theories require a preferred frame as a hidden variable, but so what?

Certainly it is not an argument against a hidden variable that it is not observable. In this sense, the standard argument against a preferred frame is not an argument in this context.

Classical realism (that means the set of principles used by Bell to prove his inequality except Einstein-causality) is IMHO simply common sense. To reject this type of realism means something close to giving up the search for realistic explanations, that is, giving up an essential part of the scientific method itself. If Einstein-causal realism is dead, it simply means that Einstein-causality is dead. Nothing really problematic; we can go back to a preferred frame with absolute causality.
 
  • #41


Originally posted by Tachyon son
So, I have imagined it with two complementary pool balls inside a bag. One is red, the other is black. I take one without looking at it. Then I travel, let's say, a distance of 1 light-year. Only then do I look at the ball to see its colour: 50% probability for each. Obviously, if my ball is red, the remaining one in the bag is black. If we apply the non-locality principle, it would say that both balls were of an uncertain colour until being looked at. I know this is a very stupid concept, because we certainly know that my ball was red all the time since my selection, and the remaining one of course was black.

The point of my question is why we can't apply the same logic to the EPR experiment. If I use two electrons with spin 1 and -1 to make the experiment, they were in those states all the time since the separation! There's no communication nor information traveling!

This would be a "local realistic theory" explaining this simple variant of an EPR experiment. The interesting situation is only a little bit more complicated, and it does not allow such an explanation. See:

http://www.ilja-schmelzer.de/realism/game.html
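A small sketch of why the pool-ball (predetermined outcome) picture breaks down in that slightly more complicated situation -- this is the standard Mermin-style three-setting argument, not necessarily the exact game at the link above: give each pair fixed answers for three detector angles, with the partner carrying the opposite answers so that equal settings are always perfectly anticorrelated, and check what such assignments can do when the settings differ.

```python
from itertools import product

# Pool-ball style model: each electron pair leaves the source carrying a fixed
# "instruction set" s = (s1, s2, s3) of predetermined +/-1 outcomes for three
# detector angles (0, 120, 240 degrees); the partner carries -s, so equal
# settings always give perfectly opposite results -- just like the pool balls.
settings = (0, 1, 2)
diff_pairs = [(a, b) for a in settings for b in settings if a != b]

lowest = 1.0
for s in product((+1, -1), repeat=3):
    # Probability that the two sides still give opposite results when the two
    # settings are chosen differently (uniformly over the 6 ordered pairs).
    p_opposite = sum(s[a] == s[b] for a, b in diff_pairs) / len(diff_pairs)
    lowest = min(lowest, p_opposite)
    print(s, f"P(opposite | different settings) = {p_opposite:.3f}")

print(f"\nNo instruction set gets below {lowest:.3f} (= 1/3), nor does any mixture of them.")
print("Singlet-state QM prediction for 120-degree separations: (1 + cos 120)/2 = 0.250")
```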
 
  • #42
Originally posted by Ilja
Classical realism (that means the set of principles used by Bell to prove his inequality except Einstein-causality) is IMHO simply common sense. To reject this type of realism means something close to giving up the search for realistic explanations, that is, giving up an essential part of the scientific method itself. If Einstein-causal realism is dead, it simply means that Einstein-causality is dead. Nothing really problematic; we can go back to a preferred frame with absolute causality.

Common sense? These statements fly in the face of much evidence. Causality is untenable, and has little or no basis in the real world. The facts support the following statements:

1. Many events have no prior cause.
2. Some events are influenced by the future.
3. Few events have a single specific prior cause.

All of which violate the spirit of causality. See for instance my paper Hume's Determinism Refuted. Hume thought that determinism was so obvious, it was above proof.

Or answer this for me: why did you decide to eat what you did for lunch today? What "caused" that? There has never been the slightest indication that any element of human behavior is caused by anything. Think statistics, like the extremely accurate predictions of quantum physics. Your free will is a clear counter-example to causality, unless you believe in Laplacian determinism.

On another note, Bohm's ideas were non-local. They are not generally accepted, either.
 
Last edited:
  • #43
http://www.ilja-schmelzer.de/realism/games.html

There is a 'cheat' in this thought experiment because you introduce the notion of a penalty of $100,000 and jail for cheating. I don't know sufficient theory, but if the penalty for cheating were sufficiently large - say $10^40 - then you would lose the game due to the likelihood of the QM measurement failing.

In practice, $100,000 and the trip to jail might still be a losing proposition, depending on how much the measurement of polarization is affected by the Heisenberg uncertainty principle.
 
Last edited by a moderator:
  • #44
Originally posted by NateTG
http://www.ilja-schmelzer.de/realism/games.html

There is a 'cheat' in this thought experiment because you introduce the notion of a penalty of $100,000 and jail for cheating. I don't know sufficient theory, but if the penalty for cheating were sufficiently large - say $10^40 - then you would lose the game due to the likelihood of the QM measurement failing.

Yep. But something which makes me lose is not cheating. The point is that I win the game if I have a sufficiently accurate Bell device, even if it seems that you have a winning strategy.
In practice, $100,000 and the trip to jail might still be a losing proposition, depending on how much the measurement of polarization is affected by the Heisenberg uncertainty principle.
Of course the proposal is not about real devices but ideal devices. But there is no limit for this accuracy based on the uncertainty principle.
 
Last edited by a moderator:
  • #45
Originally posted by DrChinese
Common sense? These statements fly in the face of much evidence. Causality is untenable, and has little or no basis in the real world. The facts support the following statements:

1. Many events have no prior cause.
2. Some events are influenced by the future.
3. Few events have a single specific prior cause.

All of which violate the spirit of causality.
Sorry, but realism is not about "everything has a cause". A stochastic theory (a Wiener process) is nonetheless a classical realistic theory even if it is not deterministic.
Or answer this for me: why did you decide to eat what you did for lunch today? What "caused" that? There has never been the slightest indication that any element of human behavior is caused by anything.
So what? It is even useful to have a method of creating results which are "not caused by anything" if you want to prove that some correlation is a causation. For example, you observe a correlation between someone pressing a button and the behaviour of a light bulb. But does the light bulb cause the button to be pressed, or the reverse? To prove that pressing the button causes the change of state of the light bulb, you need an experiment which allows you to exclude a backward influence. This can be done with various random number generators, which include the free will of the experimenter.

Now, classical causality is not about "everything is caused by something else", but it is the falsifiable statement that there is no causal influence from future to past.
On another note, Bohm's ideas were non-local. They are not generally accepted, either.
Of course, they are not Einstein-local (better: Einstein-causal). That's the whole point, because Einstein-causal realism is falsified by the violation of Bell's inequality, but realism alone is not. And BM is realistic in this general sense.

And who cares about "generally accepted" if the rejection is based on bad arguments?
 
  • #46
Originally posted by Ilja
Sorry, but realism is not about "everything has a cause".

...

And who cares about "generally accepted" if the rejection is based on bad arguments?

1. In your reply, above, you seemed to come off the classical realism position expressed below - I think:

"Classical realism (that means the set of principles used by Bell
to prove his inequality except Einstein-causality) is IMHO simply common sense."

So are you saying you believe in non-local realism? Or are you supporting local realism as if Bell/Aspect doesn't exist?

2. As to "generally accepted"... a better idea can come from anywhere, and a bad argument is still a bad argument regardless of whose mouth it comes from. But for a technical discussion to be meaningful/fruitful, the language has to supply a common ground. It helps if we know what points we agree on so we don't waste time on those. Usually taken to be the "generally accepted" principles unless otherwise stated.

Personally, I am here to discuss a wide variety of ideas - generally accepted or not. Sometimes the written form of comments leads to a misunderstanding of a position. From my perspective, and simply stated:

a. Determinism: everything has a cause.

b. Causality: the past influences the future and not vice versa. Sometimes determinism is assumed as well.

c. Realism: all observables of a system have definite values (or possibly hidden variables which determine those values) independent of the act of observation.

Personally, I reject all three of the above. Some people lump all three together and call it classical realism. So that was my point of confusion.
 
  • #47
Originally posted by DrChinese
1. In your reply, above, you seemed to come off the classical realism position expressed below - I think:

"Classical realism (that means the set of principles used by Bell
to prove his inequality except Einstein-causality) is IMHO simply common sense."

So are you saying you believe in non-local realism? Or are you supporting local realism as if Bell/Aspect doesn't exist?
I'm supporting non-local realism, especially Bohmian mechanics as a proof of the existence of such theories. BM is not very nice; research into better models would be interesting. The best example is IMHO Nelsonian stochastics.
a. Determinism: everything has a cause.

b. Causality: the past influences the future and not vice versa. Sometimes determinism is assumed as well.

c. Realism: all observables of a system have definite values (or possibly hidden variables which determine those values) independent of the act of observation.

Personally, I reject all three of the above. Some people lump all three together and call it classical realism. So that was my point of confusion.
I name "classical realism" b and c, but not a. Classical stochastic processes (like a Wiener process) are realistic models even if they are not deterministic.

I plan to support the claim that realism is simply common sense in more detail in some future work. The idea is to justify classical Bayesian probability theory following Jaynes, then to use something like the forcing method to obtain a Kolmogorov version (Jaynes dislikes this). And that's almost all we need in the proof of Bell's theorem (except locality, of course); that is, it is what we need for "classical realism" in my sense.

Of course we also need some reasonable version of the ability to create independent (random) numbers.
 
  • #48
Originally posted by Ilja
Of course the proposal is not about real devices but ideal devices. But there is no limit for this accuracy based on the uncertainty principle.

Let me try explaining my understanding of the situation:

You've got some sort of source of entangled pairs of particles for the Bell device. In order to satisfy the time constraints described in the game, you need to contain each of the particles from these pairs for later use, and transport them to the site where you're flipping the cards. If you do not, then there is no way that you and your accomplice can be sure that you're using particles from the same pair.

Now, if you use some sort of gate detector to determine whether your jar has captured the particle, then you'll mess up the entangled nature of particles, so you've got to use some other method to make sure that you capture the particles in the jars. (This problem does not apply to the EPR paradox because entanglement can be determined *after* the measurements are made for the experiment described in it.)

Now, you've got an alternative possibility, which is to observe the source of the radiation to determine when and where the entangled pair is expelled in order for the capture to function. (I'm thinking of, e.g., observing positronium decay, although I don't know if that generates entangled radiation.) My understanding is that this is exactly the type of situation that the HUP applies to.
 
  • #49
Originally posted by NateTG
Let me try explaining my understanding of the situation:

You've got some sort of source of entangled pairs of particles for the Bell device. In order to satisfy the time constraints described in the game, you need to contain each of the particles from these pairs for later use, and transport them to the site where you're flipping the cards. If you do not, then there is no way that you and your accomplice can be sure that you're using particles from the same pair.

Now, if you use some sort of gate detector to determine whether your jar has captured the particle, then you'll mess up the entangled nature of particles, so you've got to use some other method to make sure that you capture the particles in the jars.

The point is that "mess up" is not a yes/no question. Of course every experiment is a little bit "messed up". The question is not if a given experiment is messed up, but if it is impossible to increase the accuracy beyond a certain absolute limit.

In our case I simply have no idea what this hypothetical limit on accuracy might be. What we have to measure is a set of commuting operators, and standard quantum theory does not impose any limit on the accuracy of such measurements.
 
  • #50
Originally posted by Ilja
Of course we also need some reasonable version of the ability to create independent (random) numbers.

Are you talking about for use in EPR experiments? Or what is this in reference to?
 
  • #51
Originally posted by DrChinese
Are you talking about for use in EPR experiments? Or what is this in reference to?

We need random number generators both for the "ideal" EPR-Bell experiment and, in general, for a reasonable notion of causality.
 
  • #52
Originally posted by Ilja
We need random number generators both for the "ideal" EPR-Bell experiment and, in general, for a reasonable notion of causality.

The Innsbruck EPR experiments (1998) used independent random numbers generated from physical processes. The results were not correlated until well after the end of the experiment. I think that should pass muster. They took great pains to ensure that locality was actually being tested.

Although they too used the "fair sampling" assumption, their refined tests showed increasing agreement with the predictions of QM, and the Inequality was violated by 30 standard deviations. This represents an advance over the Aspect results, which I believe were 5 standard deviations.
 
  • #53
You see, here is the problem I have with EPR: the universe does not need to be local (from what I read).


In the many-worlds interpretation of the Bell experiment and its variants, I assume, it will appear from the perspective of the observers that the EPR correlations are real, but in fact - if I recall correctly, and I will find the link and correct myself if I don't - it is not the universe that splits but rather the observers, detectors, etc. Also, apparently (again, I will confirm or correct myself): how can a wavefunction collapse if, say, objects are moving at different speeds? This will somewhat alter the chain of "what comes first", and the importance of this is that it is purely relative which photon or measurement actually collapses the wavefunction!

Like I said, this is subject to major corrections once I find the source of the links.

cheers.
 
  • #54
DrChinese
The Innsbruck EPR experiments (1998) ...

Although they too used the "fair sampling" assumption, their refined tests showed increasing agreement with the predictions of QM, and the Inequality was violated by 30 standard deviations. This represents an advance over the Aspect results, which I believe were 5 standard deviations.
The experiments also showed not merely the "increasing", but, as always, the full agreement with the local realistic theories such as SED/Stochastic Optics (as well as with the standard Quantum Optics models of the actual setups). And, of course, without requiring the untested and massive data extrapolation, wishfully labeled "fair sampling" (which amounts to an equivalent of having 90% of "data" put in by hand) in order to satisfy the so-called "ideal QM prediction" (a prediction based on Bohr-von Neumann "Measurement Theory"/Projection postulate).

The heretics from Bohr-von Neumann Orthodoxy, among others the key founders of the Quantum Theory - Planck, Einstein, de Broglie and Schroedinger, were and are right -- Quantum Theory doesn't need the parasitic non-physical projection postulate (wave function collapse) mysticism (or its even more absurd alternatives, such as "Many Worlds" metaphysics). And without it, there is no "Bell's QM prediction" (deduced via the Projection Postulate) which violates the local realism or the need to "fudge" the results to obtain the "increasing" agreement with such "prediction".
 
  • #55
Originally posted by nightlight
The experiments also showed not merely the "increasing", but, as always, the full agreement with the local realistic theories such as SED/Stochastic Optics (as well as with the standard Quantum Optics models of the actual setups). And, of course, without requiring the untested and massive data extrapolation, wishfully labeled "fair sampling" (which amounts to an equivalent of having 90% of "data" put in by hand) in order to satisfy the so-called "ideal QM prediction" (a prediction based on Bohr-von Neumann "Measurement Theory"/Projection postulate).

The heretics from Bohr-von Neumann Orthodoxy, among others the key founders of the Quantum Theory - Planck, Einstein, de Broglie and Schroedinger, were and are right -- Quantum Theory doesn't need the parasitic non-physical projection postulate (wave function collapse) mysticism (or its even more absurd alternatives, such as "Many Worlds" metaphysics). And without it, there is no "Bell's QM prediction" (deduced via the Projection Postulate) which violates the local realism or the need to "fudge" the results to obtain the "increasing" agreement with such "prediction".

nightlight,

When new and independent experiments provide increasing agreement with theoretical predictions against local reality, the norm is to acknowledge the obvious and look for constructive avenues of further research. It is completely reasonable that an independent observer of the fracas would deduce that some people are arguing from an emotionally charged position, while others are letting experimental results do the talking.

If there is a reasonable experiment which will today provide us with more information than we now have, where is it? Until the future you speak of arrives - the one where detection rates far above the current ~5% are seen - I am really not sure what you are getting at. You may as well say we don't know anything about anything.

Quantum theory has yielded experimental prediction after experimental prediction, many of which have been essentially verified in all material respects. There has not been - since its introduction - any competing theory which has yielded a single testable prediction in opposition to QM that has been experimentally verified. The Copenhagen Interpretation is silent on the subject of local reality, and theories which advocate local reality have been repeatedly falsified with increasing exactness. So if you have something positive to advance things, we are listening. Otherwise your position seems like sour grapes.

Hey, I would have bet money that the speed of light c was an absolute speed limit too. But recent tests of the recession velocity of ancient galaxies and quasars show hundreds with velocities exceeding 2c, with the latest oldest showing a recession velocity in excess of 3c. What am I going to do, reject all non-conforming experimental evidence that goes against my viewpoint? We have to advance via useful debate, and the position you are advocating fails the test - even if you are ultimately right. I can learn to live with a universe which is expanding faster than c. I am quite sure you can live without local reality.
 
  • #56
OK, DrChinese and nightlight, you two seem to know lots, so if you don't mind I am going to take lessons from you. While I do know bits and pieces of QM and EPR, I don't claim to be an expert; my maths is awful, so please try to avoid it or help me around it, and my degree is in chemistry, so I might know some things from that perspective that might be wrong in a pure QM sense.

Ok

What do we actually mean by local?

If I remember correctly, "local" has been used by some as an attempt to regain some sense of causality - YES, I am aware of the issue of causality as a philosophical issue and agree with Hume by and large, and Kant is not so unreasonable in part either.

If I remember also, a local theory suggests that space is somewhat "reductionist", while EPR seems to indicate some kind of holistic nature to the universe - that what you might do to one side, you do to the other.

also there is no "specfic character" to say an electron, or photon until we measure it.

The suggestion of many-worlds being a meta-theory is a correct one, but that's no reason to brush it off. I am a firm believer that we should rule things out via levels of certainty (beyond a reasonable doubt); sure, it might be an emotional charge, but on the other hand it might also be rather rash to conclude without attempting alternatives.

Can you two, or one of you, explain better than I have how many-worlds suggests a local universe rather than a non-local one? You will probably do a damn fine job.

thanks kindly.


Additional correction:

Nonlocal is when information travels faster than light... I omitted this.
 
Last edited:
  • #57
Originally posted by DrChinese
Quantum theory has yielded experimental prediction after experimental prediction, many of which have been essentially verified in all material respects,

These successes of Quantum Theory owe not a single penny to the projection/collapse postulate. That is an independent QM postulate (not needed or used by QED or QCD or by non-relativistic QM applications such as solid state, chemistry, etc.), and its sole decisive test is the question of whether Bell's QM prediction can violate Bell's inequality in any experiment. That hasn't happened after over three decades of trying, not even close.

The absurd aspect that you and others here seem unaware of is that Bell's QM prediction which violates the Bell inequality is deduced using, as the essential step, the Collapse Postulate. At the same time, the only rationale for that postulate was the historical misconception that no hidden variable theory can replicate the empirical QM predictions, even in principle.

During the Bohr-Einstein debate of the 1920s, Bohr had used Heisenberg's Uncertainty Principle as the QM theoretical basis for the HV impossibility presumption. The entangled states demonstrated unambiguously that the HUP cannot work as the "hidden variable impossibility" proof (as the original EPR argument and the related Schroedinger arguments demonstrated; Einstein & Schroedinger never accepted the HUP as the HV impossibility proof anyway).

Having lost the HUP as the basis for the HV impossibility, the Orthodoxy switched its basis to von Neumann's HV impossibility theorem. That one turned out to be invalid (as an HV impossibility theorem), too. Then Bell came up with a weaker HV impossibility proof, claiming to show that only the local HV theories are impossible (contradict QM empirical predictions). And that theorem, Bell's claim that his QM prediction violates the LHV inequality, is presently the only rationale for needing the Collapse Postulate at all (otherwise, without Bell's theorem, one could assume that objects have properties all along and the measurement merely uncovers them).

The absurd aspect is that in order to prove the LHV impossibility, Bell uses the Collapse Postulate, while the only remaining present-day rationale for the Collapse Postulate is none other than Bell's LHV impossibility claim. You can drop both items, Bell's QM prediction violating the LHV inequality (a violation that was never achieved experimentally, anyway) and the Collapse Postulate, and nothing will be taken away from all the decades of Quantum Theory successes -- nothing else depends on either item (that's precisely why the Bell-EPR tests had to be done in the first place -- no other empirical consequence requires the Collapse Postulate).

So if you have something positive to advance things, we are listening. Otherwise your position seems like sour grapes.

That's a completely upside-down characterization. It is precisely the LHV impossibility dogma (Bell's no-go theorem claim) held by the Orthodoxy that keeps the doors locked to potential advancements (in the same way that the geocentric dogma held up the progress of astronomy for centuries). There is no analogy at all to the sour-grapes metaphor in the position of the critics. Had the collapse postulate been the essential element of the successes of Quantum Theory, then the critics could exhibit sour-grape emotion. But the "collapse" not only wasn't the essential element of the successes, it has nothing at all to do with the QT successes. It is such a purely gratuitous add-on that the EPR-Bell setup had to be thought up for the sole purpose of bringing about a situation where the "collapse" might play some empirical role at all (if the collapse occurs as imagined, it should have produced the Bell inequality violations; and as we all know, that violation hasn't happened as yet).

Therefore, I consider the critique by the "heretics" (from Einstein and Schroedinger through Marshall-Santos and other present-day QM heretics, including the superstar of theoretical physics Gerard 't Hooft, whose initial doubts go back at least to 1996) of this empirically unsupported no-go dogma to be a positive contribution to physics.
 
Last edited:
