Is action at a distance possible, as envisaged by the EPR paradox?

In summary: John Bell was not a big fan of QM. He thought it was premature, and that the theory didn't yet meet the standard of predictability set by Einstein.
  • #1,366
ThomasT said:
By this I mean its understandability. And understanding has to do with visualizability. Why assume that the fundamental principles of our universe aren't visualizable? After all, we are part of reality. Why not assume that the principles that govern our physical universe pervade and permeate all scales of behavior and interaction? Whether you know it or not, QM is very much based on analogies from ordinary experience. A 'block' conception of reality, vis-à-vis GR, contradicts our experience. Our universe appears to be evolving. Why not just assume that it 'is' evolving -- that 'change' or 'time' isn't just an illusion, but is real? Why not assume that the fundamental physical principles govern physical behavior at all scales?

Very nice post TT.

I agree; we all want the world to be logical and understandable. No one wants it to be horrible, incomprehensible or 'magical'. We want to know that it all works the way we 'perceive' it, and we want nature to be 'homogeneous' on all scales. That is very logical and natural.

But I think it could be a mistake... or at least lead to mistakes.

A classic mistake was made when one of the brightest minds in history, Albert Einstein, did not like what his own field equations of general relativity revealed – that the universe cannot be static.

Einstein was so dissatisfied that he modified his original theory and introduced the cosmological constant (lambda, Λ) to keep the universe static. He abandoned the concept after the observation of the Hubble redshift, reportedly calling it the "biggest blunder" of his life.

(The discovery of cosmic acceleration in the 1990s has renewed interest in a cosmological constant, but the point stands: today we all know that the universe is expanding, even though that was not Einstein's 'logical' hypothesis.)

Another classic example is Isaac Newton, who found his own law of gravity and the notion of "action at a distance" deeply uncomfortable – so uncomfortable that he expressed a strong reservation about it in 1692.

We must learn from this.

I think that humans have a big "ontological weakness" – we think that the human mind is the "default" and the "scientific center" of everything in the universe, and there are even some who are convinced that their own brain is the greatest of all :smile:. But there is no evidence at all that this is the case (please note: I'm not talking about "God").

One extremely simple example is "human colors". Do they exist? The answer is No. Colors only exist inside our heads. In the "real world" there is only electromagnetic radiation of different frequency and wavelength. A scientist trying to visualize "logical colors" in nature will not go far.

ThomasT said:
Anyway, to get back to your question, if an ontological or epistemological description of 'reality' is at odds with our experience, then I think it should be seriously questioned. I think that this orientation accords with the best traditions of the scientific method. If you think otherwise, then I'm open to learning.

Have you ever tried to visualize a four-dimensional spacetime? Or visualize the bending and curving of that 4D spacetime? To my understanding, not even the brightest minds can do this. Yes, it works perfectly in the mathematical equations, but to imagine an "ontological description" that fits "our experience"... is that even possible? Yet we know it's there, and we can take pictures of it in the form of gravitational lensing on the large cosmological scale:

[Image: gravitational lensing by Abell 1689, a galaxy cluster in the constellation Virgo]

Does this fit your picture of a "logical reality"...?
– What’s the weather today honey?
– I don’t know... it looks BENT??

ThomasT said:
Wrt predicting the results of experiments, I agree. However, this isn't the only thing relevant to 'understanding' or really 'explaining' why things are as they are and why things behave as they do. Just because you can predict something doesn't mean that you understand how and why it happens.

I don't think mainstream science claims full understanding of EPR-Bell experiments; it's still a paradox. What is a fact, though, is that locality and/or realism has to go if QM is correct (and QM is the most precise theory we have so far):
Bell's Theorem proves that QM violates Local Realism.
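
As a quick numerical illustration of that claim (my own sketch; I am assuming the usual singlet-type photon correlation E(a,b) = -cos 2(a-b) and the standard CHSH angles, neither of which is spelled out above), the QM prediction exceeds the local realistic CHSH bound of 2:

[code]
# Sketch (assumed correlation function and angles): the CHSH combination for the
# QM prediction E(a, b) = -cos(2(a - b)) exceeds the local realistic bound of 2.
import math

def E(a_deg, b_deg):
    """Assumed QM correlation for polarizer angles a, b (degrees)."""
    return -math.cos(math.radians(2 * (a_deg - b_deg)))

a, ap, b, bp = 0.0, 45.0, 22.5, 67.5   # standard CHSH settings for photons
S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
print("CHSH S = %.3f (local realism requires S <= 2)" % S)   # ~2.828
[/code]

Any local hidden-variable model is limited to S ≤ 2; this is exactly what the Bell-test experiments discussed in this thread check.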

ThomasT said:
Wrt the OP of this thread, the question is, does the detection of a particle at detector A, spacelike separated from the 'possible' detection of a particle at detector B, determine the 'existence' of an underlying reality that, it might be assumed, determines the detection attribute registered by detector B? If you think that the answer to this must be, obviously, no, then you agree with EPR, and Einstein. Otherwise, you're a nonlocalist or spookyactionatadistanceist, in which case the onus is on you to demonstrate the physical existence of the spooky (or merely ftl?) propagations/interactions between A and B, or B and A, or whatever.

This is spot on the problem, in several "dimensions". There seem to be some in this thread who really think that Einstein would have stuck to his original interpretation of the EPR paradox, despite the work of John Bell and the many experimentalists who have verified the QM predictions and Bell's Theorem, time after time. I'm pretty sure that would not have been the case. Just look at the cosmological constant and the Hubble redshift: Einstein changed his mind immediately. He did not start looking for "loopholes" in Hubble's telescope or any other farfetched 'escape' – he was a diehard empiricist.

We already know that there are problems in getting full compatibility between QM and GR when it comes to gravity in extreme situations, and EPR-Bell is just another verification of this incompatibility. If we try to solve the EPR-Bell situation as a "spookyactionatadistanceist" we get problems with SR and the relativity of simultaneity (http://en.wikipedia.org/wiki/Relativity_of_simultaneity). If we try to solve it as a "surrealist" (non-separable/non-realist) we get the problems RUTA is struggling with.

So this question is definitely NOT solved, and it’s definitely NOT easy.

But, let’s not make it too easy by saying the problem doesn’t exist at all, because there’s still QM-incompatible gravity dragging us down, and it will never go away... :wink:
 
  • #1,367
RUTA said:
I assume you mean to say "if one abandons realism, there is no good reason to have locality." Then you conclude "one has to violate ... locality that is natural in relativity."

In Relational Blockworld we have locality and separability in the classical (statistical) limit of an underlying graphical spacetime structure. There is non-separability at the level of individual relations (graphical level), but Poincare invariance (which includes Lorentz invariance) holds at the graphical level.

So, the point is, you can create a model that is non-separable ("not realism") and local at the quantum level while becoming separable in a statistical limit (classical limit).

No, I conclude that Occam's razor would tell us to abandon realism, so that there would not be any reason to invoke non-locality to escape the contradiction that results from Bell's inequalities. Abandoning realism would not let one have GHZ either. Not having realism brings us back to what people thought until 1963. Now of course the opinion of the masters (who created QM and its origins, except for de Broglie, who remained a realist) on realism is no longer enough, and to get rid of the weak statements about local realism we have to show that microscopic realism is indeed false, as the Copenhagen school thought and as Einstein partly proved already in 1931 with Tolman and Podolsky. (Then all statements against local realism would be obsolete: if there is no realism, of course there is no "local realism" and no "blue realism"; adjectives attached to realism become moot, and this is what I want to prove, as do a few others.) But physics is not math, and one often needs several "proofs" to cover as many philosophical points of view as one can: one proof of the non-existence of realism will convince some but not others. Decisive proofs belong to the realm of mathematics (including logic); in physics one has more or less decisive arguments, usually resting on experiments, some of which may be thought experiments.

Sometimes, for very weak statements such as "local realism is false" where one would like "realism is false" (very weak because from 1927 to 1963 the accepted status was "local (naive) realism is not what one has in nature"), one can get a contradiction with experiment so obvious that one does not need many "proofs" to convince enough people. The problem here is that so many legends, misquotations, etc. have spoiled the subject that many people got confused, to the point that many leading experts of the generations after the founding fathers have said and written very false statements about non-locality, e.g., that non-locality follows from the type of experiments that Clauser, Aspect, Gisin, Zeilinger, their teams and other teams have done on the Bohm-Bell version of EPR pairs.

I have realized only recently that the strength of the arguments against local realism indicates by itself that the result is probably too weak (which I claim for many reasons). This being said, when one says "proof" one should recall that one deals with proofs in the sense of physics. For instance, fair sampling is not proven, and there are better-known issues.
What is clear is that the account of history that accompanies Bell's theorem is very much lacking in precision and accuracy, and this has caused many misconceptions and false statements.

This is about the very beginning of the quote. I cannot make sense of the rest, as there are other things that should also be the contrary of what is written: for instance, separability relates to locality and not to realism. This being said, getting macroscopic realism out of microscopic realism is something that I believe happens but that I would be happy to see a proof of, even for a very simple model, if the discussion is precise and rigorous enough: does a reference exist?
 
  • #1,368
JesseM said:
Any chance you could post some of Einstein's quotes that you think show he was not a "naive realist" or would not have agreed with the ideas in the EPR paper? If it would take too long to find them and type them up, I will understand of course.

Fine's book, The Shaky Game, which you have bought, contains examples and references to more. I have cited many times the ETP paper of 1931, which has been uploaded by someone these last days (perhaps you? after the post that I quote now). In the Born-Einstein correspondence, one sees Einstein making fun twice of the theories of de Broglie and Bohm, which means exactly that he considered the type of microscopic realism used by Bell for his 1964 theorem way too naive to correspond to the laws of nature. And Einstein gave several versions of the main point of the EPR paper (or "EPR"), which he mostly formulated as "either there is non-locality (something unacceptable to Bohr, according to Popper, who discussed this with Bohr many times, I think) or QM is incomplete." He considered the version of Podolsky, i.e., "EPR", too obscure and missing the main point(s). In all these proofs/arguments that Einstein gave for QM being incomplete (in letters or published), he NEVER used elements of reality. As proofs belong to math (including logic), you do not expect that I will prove my views on Einstein's opinion, but see Fine's book on that matter and Einstein's writings, including the 1931 ETP paper and
the report on the state of the EPR subject as described by Einstein in 1933 (two years before "EPR"), given by Rosenfeld, Bohr's close collaborator, where you will learn that Einstein himself used the word "paradox" in the context of the problem covered by "EPR" – while many contemporary "masters" state that the word "paradox" eventually got attached to the matter by the community of physicists who understood that something was wrong with the argument, or other nonsense of the same kind. Some of these new "masters" got me excited about non-locality, and it took me a few weeks if not a few months of intensive reading of originals and of books such as Fine's and some by Jammer, for instance (but also other sources), to see how deeply and widely polluted the subject was. Many disciplines would collapse with that level of non-professionalism by the masters. I cannot imagine something of that kind happening in math (and this is not directly because of the difference in nature between math and physics). Well, I prefer to write about physics issues, as I am not a historian anyway.
 
  • #1,369
charlylebeaugosse said:
This is about the very beginning of the quote. I cannot make sense of the rest, as there are other things that should also be the contrary of what is written: for instance, separability relates to locality and not to realism.

Separability relates to realism, not causal locality. Are you thinking of "locality" in the sense of a differentiable manifold being locally homeomorphic to the reals? That's constitutive locality, not causal locality.

charlylebeaugosse said:
This being said, getting macroscopic realism out of microscopic realism is something that I believe happens but that I would be happy to see a proof of, even for a very simple model, if the discussion is precise and rigorous enough: does a reference exist?

We get macroscopic separability from microscopic nonseparability in arXiv 0908.4348. We can't discuss that paper here, it's still under review (revise & resubmit stage). I only made reference to it as an example of nonseparable and local going to separable and local in a statistical sense. I have trouble following your posts and (mistakenly?) thought you were claiming that one couldn't have a nonseparable and local underlying theory.
 
  • #1,370
DrChinese said:
1. Nothing. What's your point?
I was explaining what the prediction for the full sample is from the LR perspective.

DrChinese said:
2. You are the local realist, what do YOU predict for the xxx case? Does it match QM or not?
In the case of the full sample, the LR prediction is that all possible outcomes happen with equal probabilities:
P(H'H'H')=1/8
P(H'H'V')=1/8
P(H'V'H')=1/8
P(H'V'V')=1/8
P(V'H'H')=1/8
P(V'H'V')=1/8
P(V'V'H')=1/8
P(V'V'V')=1/8
It matches QM with complete decoherence.

The QM prediction for the ideal case (no decoherence at all) was:
P(H'H'H')=1/4
P(H'H'V')=0
P(H'V'H')=0
P(H'V'V')=1/4
P(V'H'H')=0
P(V'H'V')=1/4
P(V'V'H')=1/4
P(V'V'V')=0

The observed result was roughly:
P(H'H'H')=7/32
P(H'H'V')=1/32
P(H'V'H')=1/32
P(H'V'V')=7/32
P(V'H'H')=1/32
P(V'H'V')=7/32
P(V'V'H')=7/32
P(V'V'V')=1/32
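
For reference, the ideal-case table above is what one gets from the usual three-photon GHZ state measured in the ±45° (H'/V') basis. A short numerical check (the specific state and basis are my assumptions, since the posts do not spell them out):

[code]
# Sketch (assumed state and basis): the GHZ state (|HHH> + |VVV>)/sqrt(2),
# with every photon measured in the diagonal basis H' = (H+V)/sqrt(2),
# V' = (H-V)/sqrt(2), reproduces the ideal-case probabilities listed above.
import numpy as np
from itertools import product

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
Hp = (H + V) / np.sqrt(2)   # H'
Vp = (H - V) / np.sqrt(2)   # V'

ghz = (np.kron(np.kron(H, H), H) + np.kron(np.kron(V, V), V)) / np.sqrt(2)

for outcome in product("HV", repeat=3):
    basis = [Hp if o == "H" else Vp for o in outcome]
    proj = np.kron(np.kron(basis[0], basis[1]), basis[2])
    p = float(np.dot(proj, ghz)) ** 2
    print("P(%s'%s'%s') = %.2f" % (outcome[0], outcome[1], outcome[2], p))
[/code]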
 
  • #1,372
RUTA said:
In Relational Blockworld, if the entity "isn't there," i.e., is "screened off," it doesn't exist at all. So, the answer to your question is that there is no Moon to wonder :smile:

RUTA, I have been thinking (for once :smile:).

If everything that is "screened off" does not exist, I guess that photons emitted in an earlier "process", "traveling" through vacuum without any interaction, do not exist, right?

But if the photons in the cosmic microwave background (CMB, http://en.wikipedia.org/wiki/CMB), which have been traveling through space since the "last scattering" 400,000 years after the Big Bang, did not exist until they bumped into one of our apparatuses – how can the CMB be stretched out (redshifted) over billions of years if it DID NOT EXIST...? :bugeye:

We do have pretty good data on the CMB from COBE (http://en.wikipedia.org/wiki/Cosmic_Background_Explorer):

[Image: CMB all-sky map]

How do you explain this?
 
  • #1,373
DevilsAvocado said:
But... this is an essay from 1949. How can this relate to Bell's Theorem?

I don’t agree. As you state yourself:

I absolutely do not think Einstein would start looking for farfetched loopholes etc. He was way too smart for that. I think he would have accepted the situation, for the start of something new.
Well, you were asking about a hypothetical "what if" situation.
There are no solid arguments to defend either position, so I think we can safely leave it at that – we disagree.

DevilsAvocado said:
Well, this is pretty obvious, isn’t it?? The completely "new thing" is when polarizers are nonparallel!? Einstein would of course immediately have realized that his own argument had boomeranged on him:
no action at a distance (polarizers parallel) ⇒ determinism
determinism (polarizers nonparallel) ⇒ action at a distance
Here I would not agree that this ("polarizers nonparallel") is a completely new thing, because it is essentially the HUP.

You are trying to ascribe to Einstein non-contextual determinism à la Bell. But that was not Einstein's position. His position was the Ensemble Interpretation, and its essence is contextuality. The ensemble is a factor in determining the measurement outcome for an individual photon, so it is related to the context of the individual measurement.

DevilsAvocado said:
Are you saying that if we run an EPR-Bell experiment as I proposed, we "should expect complete decoherence of entanglement" and the experiment would fail? No expected QM statistics??
Yes, the experiment would fail.
When you take theoretical QM predictions you disregard experimental imperfections. That is so more or less everywhere in theory.
But when you come to experimental verification, you replace the idealized theoretical prediction with something like: theoretical prediction + experimental imperfections, with the aim of minimizing those imperfections.
For QM, one of the possible experimental imperfections has the name "decoherence" – except when you are specifically exploring decoherence.
So when the effect of "experimental imperfections" is too high, the experiment is a failure. But that does not falsify the theory; the experiment is simply not useful in this case.

So the answer to your question, "No expected QM statistics?", is that the result of the experiment would not allow one to determine the QM statistics, because the decoherence would be too high.
 
  • #1,374
charlylebeaugosse said:
Non-existence of local realism means of course the absence of HV à la Bell/Bohm/de Broglie,
But is there any reason to think Einstein believed in the "non-existence of local realism"?
charlylebeaugosse said:
since those HV are a strong form of microscopic realism (not only does one have pre-existence of observables' meanings and values prior to measurement, but one also has predictability). Now, HV that are compatible with QM, and such that not only what is measured but also whatever makes sense obeys the UP, would be acceptable.
But what does it even mean for hidden variables to obey the uncertainty principle? For example, suppose we believe that measurement invariably alters the momentum of a particle, so that the momentum you measure at time T is always different from the momentum immediately before measurement. But suppose that before measurement, there were already hidden variables associated with the particle that predetermined what momentum would be measured if the particle's momentum was measured at time T. Would Einstein have said that such a theory was impossible? What if it was impossible to measure all the hidden variables simultaneously, so it was impossible to use them to determine both the position and momentum at a single time?

Anyway, did Einstein consider the uncertainty principle to be "sacred", not to be overturned even in future theories? Some of his thought-experiments with Bohr tried to find ways to violate it, though perhaps Bohr's answers convinced him that it was a basic principle of nature.
charlylebeaugosse said:
Einstein would not have taken long to dismiss the hypotheses of Bell's Theorem as no more physical than the theories of Bohm and de Broglie, of which he often made fun.
Bohm's and de Broglie's was a speculative idea about non-local hidden variables, but Bell gave a general proof that a wide class of local hidden-variable theories is incompatible with QM – do you think Einstein would have denied this conclusion?
charlylebeaugosse said:
So Born is probably right in thinking that Einstein believed in HVs – though surely not the classical ones that Bell used – but this is not certain, as his correspondence with Einstein shows that he did not understand anything of the EPR story.
What do you mean by "classical ones"? What would a non-classical hidden variables theory look like?
 
  • #1,375
zonde said:
Here I would not agree that this ("polarisers nonparallel") is completely new thing because it is essentially HUP.

Is this really correct? According to Wikipedia:
In quantum mechanics, the Heisenberg uncertainty principle states by precise inequalities that certain pairs of physical properties, like position and momentum, cannot simultaneously be known to arbitrary precision. That is, the more precisely one property is measured, the less precisely the other can be measured. In other words, the more you know the position of a particle, the less you can know about its velocity, and the more you know about the velocity of a particle, the less you can know about its instantaneous position.

And the statistics of nonparallel polarizers are all about the QM version of Malus' law: cos^2(a-b).

I don't get it. Are you saying that Einstein had already discovered the "things" Bell did later?

zonde said:
Yes, experiment would fail.

Unless you are saying that every EPR-Bell experiment will fail every time, no matter what the "interval" and setup are, this claim of yours is very strange.

If you are not saying this, I can, even as a layman, guarantee you that the QM statistics will be the same regardless of whether the interval between the entangled pairs is 100 microseconds, 100 seconds, 100 minutes, 100 days, or 100 months. Expecting anything else is not very bright. It's like expecting different probabilities from throwing dice at 1-minute intervals versus 1-hour intervals...

If you are saying that every EPR-Bell experiment will fail every time no matter what, I can only conclude that this is not the opinion of mainstream science (http://plato.stanford.edu/entries/bell-theorem/):
...
In the face of the spectacular experimental achievement of Weihs et al. and the anticipated result of the experiment of Fry and Walther there is little that a determined advocate of local realistic theories can say except that, despite the spacelike separation of the analysis-detection events involving particles 1 and 2, the backward light-cones of these two events overlap, and it is conceivable that some controlling factor in the overlap region is responsible for a conspiracy affecting their outcomes. There is so little physical detail in this supposition that a discussion of it is best delayed until a methodological discussion in Section 7.



But let's put it like this, to get past this little "problem": suppose that sometime in the future we will have 100% detection efficiency in EPR-Bell experiments.

How would the Ensemble Interpretation handle EPR-Bell experiments with very long intervals between the entangled pairs? Where is the "memory" located that keeps the QM statistics correct?

Or even worse: what if a very advanced civilization in the future decided to set up 1000 individual EPR-Bell experiments, separated by 1 light-year, fire one entangled pair in each at relative angle 22.5º at the same moment, and then gather the 1000 individual results to check the collective QM statistics (they will of course get cos^2(22.5º) ≈ 85%)?
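
A small Monte Carlo sketch of that scenario (my own illustration; it assumes only the standard QM match probability cos^2(22.5°) per pair and says nothing by itself about which interpretation is right):

[code]
# Sketch (assumed numbers): 1000 independent "labs", one entangled pair each,
# analyzers at relative angle 22.5 deg. Each pair matches with the standard QM
# probability cos^2(22.5 deg) ~ 0.854, so the pooled statistics come out the
# same whether the pairs are produced microseconds or light-years apart.
import numpy as np

rng = np.random.default_rng(42)
p_match = np.cos(np.radians(22.5)) ** 2            # ~0.854
one_pair_per_lab = rng.random(1000) < p_match      # one Bernoulli trial per lab
print("QM prediction: %.1f%%   pooled over 1000 labs: %.1f%%"
      % (100 * p_match, 100 * one_pair_per_lab.mean()))
[/code]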

The obvious question: How do you handle this scenario in the Ensemble Interpretation??
 
  • #1,376
DevilsAvocado said:
RUTA, I have been thinking (for once :smile:).

If everything that is "screened off" does not exist, I guess that photons emitted in an earlier "process", "traveling" through vacuum without any interaction, do not exist, right?

But if the photons in the cosmic microwave background (CMB, http://en.wikipedia.org/wiki/CMB), which have been traveling through space since the "last scattering" 400,000 years after the Big Bang, did not exist until they bumped into one of our apparatuses – how can the CMB be stretched out (redshifted) over billions of years if it DID NOT EXIST...? :bugeye:

We do have pretty good data on the CMB from COBE. How do you explain this?

Very good question! I understand that Wheeler and Feynman gave up on direct action for cosmological reasons, i.e., the universe is so empty that most photons will never hit anything and therefore will never contribute to the action. I saw a talk on direct action at Imperial College last month where the speaker was trying to resolve this "problem" via horizons. After my talk, that speaker was very interested to know how we handled this problem. He was surprised when I told him we don't have photons, so we don't have to account for "non-interacting entities." The CMB represents relations between "here and now" and "there and then." That's all there is to it.

Now of course, we have to change GR accordingly and that's nontrivial. We're using direct action Regge calculus (which is NOT how it was intended to be used) and that approach is nasty. We're only now working on the 2-body freefall problem. We'll study that solution to obtain a direct action explanation for redshift (which also gives us time difference). Once we have that, we'll do the 2-body orbital problem to see what we have to say about dark matter.

Our nonseparable approach to classical gravity will be empirically distinct from GR. Exactly how it differs is what we're working on now. If it passes existing empirical data, then we'll propose an experiment where it differs. If it passes that test, then perhaps we'll understand why GR thwarted quantization. You realize how unlikely these things are? We have a much better chance of winning the mega lottery :smile:
 
  • #1,377
DevilsAvocado said:
Is this really correct?? According to Wikipedia:

And the statistics of nonparallel polarizers is all about the QM version of Malus' law: cos^2(a-b)

I don’t get it? Are saying that Einstein had already discovered the "things" Bell did later?
No, I don't think Einstein had discovered the contradictions that Bell discovered. But it might be that he did not consider anything like Bell's naive model to be plausible.

DevilsAvocado said:
Unless you are saying that every EPR-Bell experiment will fail every time no matter what the "interval" and setup – according to you – this is very strange.
No, I am not saying this.

DevilsAvocado said:
If you are not saying this I can, even as a layman, guarantee you that the QM statistics will be the same regardless if the intervals between the entangled pairs is 100 microseconds, 100 seconds, 100 minutes, 100 days, or 100 months. Expecting anything else is not very bright. It’s like expecting different probabilities from throwing dice with 1 min intervals, or 1 hour intervals...
Dice throwing does not suffer from decoherence, whereas observing QM statistics does.

DevilsAvocado said:
If you are saying that every EPR-Bell experiment will fail every time no matter what, I can only conclude that this is not the opinion in mainstream science:

But let’s put like this, to get by this little "problem", suppose sometime in the future there will be 100% detection efficiency in EPR-Bell experiments.

How would the Ensemble Interpretation handle EPR-Bell experiments with very long intervals between the entangled pairs, where is the "memory" located that handle the QM statistics correct?

Or even worse: If a very advanced civilization in the future decided to setup 1000 individual EPR-Bell experiments, separated by 1 lightyear, and fire one entangled pair at relative angle 22.5º, in the same moment, and then gather the 1000 individual results to check the collective QM statistics (they will of course get cos^2(22.5) = 85%).

The obvious question: How do you handle this scenario in the Ensemble Interpretation??
You are mixing in 100% detection efficiency, but I don't understand what role it plays in your argument. In the case of 100% detection efficiency you will observe complete decoherence between the H and V modes. So the QM statistics will reduce to product-state statistics (they are QM statistics as well).

About "global RAM" I explained that there is no such thing so if some of the pairs are not within each other's coherence interval inside common measurement equipment you can not observe entanglement but only QM statistics that describe product state.
 
  • #1,378
zonde said:
You are mixing in 100% detection efficiency, but I don't understand what role it plays in your argument. In the case of 100% detection efficiency you will observe complete decoherence between the H and V modes. So the QM statistics will reduce to product-state statistics (they are QM statistics as well).

About the "global RAM": I explained that there is no such thing, so if some of the pairs are not within each other's coherence interval inside common measurement equipment, you cannot observe entanglement but only QM statistics that describe a product state.

Okay, we probably misunderstand each other. Could you in simple English briefly describe how the Ensemble Interpretation explains what happens in an EPR-Bell experiment (let’s pretend it’s 100% perfect to avoid the logjam about loopholes etc)? And what is included in the "Ensemble"?
 
  • #1,379
RUTA said:
Very good question!

Thanks RUTA! It's the first time a Professor of Physics has given me credit! I'm buying champagne for tonight! :smile:

RUTA said:
I understand that Wheeler and Feynman gave up on direct action for cosmological reasons, i.e., the universe is so empty that most photons will never hit anything and therefore will never contribute to the action.

Wow! :eek: Wheeler and Feynman did struggle with this!? (Now I have to buy two bottles :biggrin:) Pardon a layman, but what is "direct action"? Is it a part of the time-symmetric Wheeler–Feynman absorber theory (http://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman_absorber_theory)?


RUTA said:
I saw a talk on direct action at Imperial College last month where the speaker was trying to resolve this "problem" via horizons. After my talk, that speaker was very interested to know how we handled this problem. He was surprised when I told him we don't have photons,

Hehe, kinda understand the speaker :smile: ... "we don't have photons" ... huh?:rolleyes:?

RUTA said:
Now of course, we have to change GR accordingly and that's nontrivial. We're using direct action Regge calculus (which is NOT how it was intended to be used) and that approach is nasty. We're only now working on the 2-body freefall problem. We'll study that solution to obtain a direct action explanation for redshift (which also gives us time difference). Once we have that, we'll do the 2-body orbital problem to see what we have to say about dark matter.

This is very interesting. I can see that you have a lot of work to do. Modifying GR is probably not an easy task. Is this the two-body problem (http://en.wikipedia.org/wiki/Two-body_problem) you are working on? (Edit: Below is of course the 2-body orbital problem, sorry... :redface:)

[Animation: two bodies orbiting their common center of mass]

RUTA said:
then perhaps we'll understand why GR thwarted quantization

Great! Amazing! Exciting! I admire you guys!

RUTA said:
You realize how unlikely these things are? We have a much better chance of winning the mega lottery :smile:

Well, people win a lot of money on the lottery every day. It’s just a matter of probability (and bet). :wink:
 
  • #1,380
Can Planck black holes be shown to violate the Bell inequality?
 
  • #1,381
DevilsAvocado said:
Okay, we probably misunderstand each other. Could you in simple English briefly describe how the Ensemble Interpretation explains what happens in an EPR-Bell experiment (let’s pretend it’s 100% perfect to avoid the logjam about loopholes etc)? And what is included in the "Ensemble"?
Well, first about the source used in EPR-Bell experiments. The most common source is a parametric down-conversion (PDC) crystal. It produces photon beams that consist of a mixture of H/V and V/H photon pairs (or H/H and V/V in the case of PDC Type I).

From the basic laws of photon polarization we can conclude that if the polarizer is perfectly aligned with, say, the H photon polarization axis, then all H photons will go through but all V photons will be filtered out.
But if the polarizer is at 45° with respect to the H photons, then half of the H photons and half of the V photons go through. So photon polarization does not play any role in determining whether a photon goes through or is filtered out in the 45° case.
However, in this 45° case we have some other "thing" that allows us to measure correlations between photons of the same pair.

From the perspective of the Ensemble Interpretation, this "thing" is not a property of an individual photon but some relation between photons from different pairs. One common example of such a "thing" would be phase. Obviously, we can say something about the phase of some oscillator only when we compare it with another oscillator that oscillates at the same frequency.
Now, to have some measurement of phase, we have to combine two photons in a single measurement so that they can interfere constructively or destructively; the measurement gives a "click" in the first case and no "click" in the second case.

However, if we get a "click" in every case where we get a photon (100% efficiency), there is no way we can obtain any information about their relative phase. Even more – when a detector produces a "click", its state is reset to some initial (random) state, and interference between two photons arriving one after another cannot form.
So while in the case of 100% efficiency we can have correlations for polarization measurements at 0° or 90°, we cannot have correlations for +45° or -45° measurements in this case.

Another way to look at this is that the entanglement QM statistics are observable only when we combine a polarization measurement with some other measurement of a different type. A pure polarization measurement produces only product-state statistics (probability at angle "a" × probability at angle "b").


I would like to add to this description that, in order to produce correlations with this other measurement after the polarizer, polarization measurements at +45° and -45° must change this other "thing" (say, phase) in an antisymmetric way. Say, the relative phase between the H and V modes changes in opposite ways if we compare +45° and -45° polarization measurements in a counterfactual manner.

I hope I described my viewpoint clearly enough.
 
  • #1,382
zonde said:
I hope I described my viewpoint clearly enough.

zonde, I’m only a layman, and I am not saying this to be rude, but with all due respect – I think you may have missed the very core of Bell's Theorem and EPR-Bell experiments.

This is the point I was trying to address earlier:
zonde said:
From the perspective of the Ensemble Interpretation, this "thing" is not a property of an individual photon but some relation between photons from different pairs.

According to QM, it's all about probability and statistics. I think we can all agree that the probability and statistics of throwing dice do not depend on the context or situation, right? If I throw a die 1000 times in a row at home, I will get the same statistics as if we gathered 1000 PF users at different locations around the globe, had each throw a die once, and then checked the collective statistics. Do you agree?

Now, my point is that if we run a "collective" EPR-Bell experiment in the same way as the "1000 PF users", we should of course get the same QM statistics as in one single EPR-Bell experiment. Do you agree?

My crucial conclusion from the above is: the Ensemble Interpretation is going to run into severe difficulties with the "1000 PF users" example, since there is no ensemble present in one single entangled pair, and yet we will still get the violation of the local realistic inequality when we compare the collective statistics of the 1000 single entangled pairs.

I guess you will not agree with my last conclusion, but I can’t see how you could explain this with the Ensemble Interpretation?

zonde said:
From the basic laws of photon polarization we can conclude that if the polarizer is perfectly aligned with, say, the H photon polarization axis, then all H photons will go through but all V photons will be filtered out.
But if the polarizer is at 45° with respect to the H photons, then half of the H photons and half of the V photons go through. So photon polarization does not play any role in determining whether a photon goes through or is filtered out in the 45° case.
However, in this 45° case we have some other "thing" that allows us to measure correlations between photons of the same pair.

Here I think you are missing the whole point. When the polarizers are perfectly aligned in parallel, even I can write a simple little computer program that reproduces the predictions with predefined LHVs (see the toy sketch below). All I have to do is (randomly) predefine the perfectly correlated outcomes, (1,1) or (0,0). No problem.
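
Something like this minimal toy program (my own sketch of the idea just described) is indeed all it takes for the parallel case:

[code]
# Toy sketch of the "simple little computer program" described above: each pair
# carries one shared, predefined bit (the local hidden variable). With parallel
# polarizers both stations simply report that bit, so the correlation is perfect
# and entirely local realistic.
import random

random.seed(1)
hidden = [random.randint(0, 1) for _ in range(10_000)]   # shared hidden variable per pair
alice = hidden                                           # outcome at station A
bob = hidden                                             # outcome at station B
matches = sum(a == b for a, b in zip(alice, bob))
print("agreement at parallel settings: %.1f%%" % (100 * matches / len(hidden)))
[/code]

The trouble for such predefined-value models only starts at nonparallel settings, which is where Bell's argument below comes in.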

And in the case of 45º it's even simpler. There is no correlation whatsoever – it's always 100% random! You couldn't prove anything at this angle, could you? :bugeye:

zonde said:
So while in the case of 100% efficiency we can have correlations for polarization measurements at 0° or 90°, we cannot have correlations for +45° or -45° measurements in this case.

I'm totally confused. If we skip the 'faultiness' at parallel and perpendicular: are you saying that an EPR-Bell experiment with 100% efficiency cannot produce the statistics we see today!? :eek::bugeye::confused:

(If this is what you are saying, it must be the most mind-blowing comment so far in this thread...)

zonde said:
I would like to add to this description that, in order to produce correlations with this other measurement after the polarizer, polarization measurements at +45° and -45° must change this other "thing" (say, phase) in an antisymmetric way. Say, the relative phase between the H and V modes changes in opposite ways if we compare +45° and -45° polarization measurements in a counterfactual manner.

As I already said, 45º is totally disqualified as a decisive factor due to its 100% randomness. It won’t tell us anything about the Ensemble Interpretation, LHVT or Bell's Theorem.

Let’s instead take one of the simplest proofs of Bell's Inequality, by Nick Herbert:
If both polarizers are set to 0°, we will get perfect agreement, i.e. 100% matches and 0% discordance.
[Diagram: both polarizers at 0°]

To start, we set the first polarizer at +30º and the second polarizer at 0º:
[Diagram: first polarizer at +30°, second at 0°]

If we calculate the discordance (i.e. the fraction of mismatched outcomes), we get 25% according to QM and experiments.

Now, if we set the first polarizer to 0º and the second polarizer to -30º:
[Diagram: first polarizer at 0°, second at −30°]

This discordance will also naturally be 25%.

Now let’s ask ourselves:

– What will the discordance be if we set the polarizers to +30º and -30º??
[Diagram: first polarizer at +30°, second at −30°]

If we assume a local reality – that NOTHING we do to one polarizer can affect the outcome at the other polarizer – we can formulate this simple Bell Inequality:
N(+30°, -30°) ≤ N(+30°, 0°) + N(0°, -30°)

The symbol N represents the amount of discordance (mismatches).

This inequality is as good as any other you’ve seen in this thread.

(The "is less than or equal to" sign ≤ is just to show that there could be compensating changes where a mismatch is converted to a match.)

We can make this simple Bell Inequality even simpler:
N(+30°, -30°) ≤ 25% + 25% = 50%

This is the obvious local realistic assumption.

But this is wrong! According to QM and physical experiments we will now get 75% discordance:
sin^2(60º) = 75%

Thus John Bell demonstrated, with very simple and brilliant tools, that our natural assumption of a local reality is incompatible with the predictions of Quantum Mechanics and with physical experiments – here by a full 25 percentage points.
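
A quick numerical restatement of the argument (assuming only the sin^2 mismatch rule quoted in the proof):

[code]
# Herbert-style check: QM mismatch rate for relative angle theta is sin^2(theta);
# local realism requires N(+30,-30) <= N(+30,0) + N(0,-30).
import math

def discordance(a_deg, b_deg):
    """QM mismatch probability for polarizer settings a and b (degrees)."""
    return math.sin(math.radians(a_deg - b_deg)) ** 2

lhs = discordance(+30, -30)                        # 0.75
rhs = discordance(+30, 0) + discordance(0, -30)    # 0.25 + 0.25 = 0.50
print("N(+30,-30) = %.2f   N(+30,0) + N(0,-30) = %.2f   inequality satisfied: %s"
      % (lhs, rhs, lhs <= rhs))
[/code]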


How are you going to explain this with the Ensemble Interpretation, if we run 1000 separate single-pair experiments at (+30°, -30°) to verify the QM prediction of 75% discordance?

THERE IS NO ENSEMBLE!?
 
  • #1,383
DevilsAvocado said:
Wow! :eek: Wheeler and Feynman did struggle with this!? (Now I have to buy two bottles :biggrin:) Pardon a layman, but what is "direct action"? Is it a part of the time-symmetric Wheeler–Feynman absorber theory (http://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman_absorber_theory)?

"Direct action" means all sources are connected to sinks so there are no "free field" contributions to the Lagrangian. That doesn't mean Wheeler and Feynman thought there were no photons -- I'm not really sure what their motives were for using this approach.

In Relational Blockworld, we have a mathematical rule for the divergence-free graphical co-construction of sources, space and time, that's why we don't have free fields in our action.

DevilsAvocado said:
Hehe, kinda understand the speaker :smile: ... "we don't have photons" ... huh?:rolleyes:?

Exactly. It wasn't my idea, but I'm just as crazy for using it as those who proposed it (Bohr, Ulfbeck, Mottelson, Zeilinger) :smile:

In its defense, it's a very powerful means of dismissing lots of conceptual issues in quantum physics (QM and QFT), but it does entail corrections to GR -- you can imagine that "direct connections" are fine in flat spacetime, but in curved spacetime between sources at distances where curvature is significant, this idea won't marry up with GR.

DevilsAvocado said:
This is very interesting. I can see that you have a lot of work to do. Modifying GR is probably not an easy task. Is this the two-body problem (http://en.wikipedia.org/wiki/Two-body_problem) you are working on? (Edit: Below is of course the 2-body orbital problem, sorry... :redface:)

[Animation: two bodies orbiting their common center of mass]

Yes, that's the orbital problem we have to solve. I can't begin to tell you how much more complicated the math for Regge calculus is than simply solving Newtonian gravity or even GR numerically. So, we're just trying to do the case where the two bodies free fall directly towards one another first.
 
  • #1,384
Can one embed in spacetime a geometry which manifests the quantum mechanical observations of all Bell-type experiments therein?
 
  • #1,385
DevilsAvocado said:
Now, my point is that if we run a "collective" EPR-Bell experiment in the same way as the "1000 PF users", we should of course get the same QM statistics as in one single EPR-Bell experiment. Do you agree?
Strange, but I believe I have stated this clearly enough in my previous posts.
No, I disagree!

This discussion is not going anywhere if I have to state in every reply to you that I disagree about the outcome of a "collective" EPR-Bell experiment consisting of individual experiments with single pairs.

This is similar to a "collective" double-slit experiment consisting of individual experiments with a single particle. Even from the orthodox QM perspective this question is quite dubious, because you need a coherent source of photons to observe interference. But there is no coherence for a single photon.
 
  • #1,386
Loren Booda said:
Can one embed in spacetime a geometry which manifests the quantum mechanical observations of all Bell-type experiments therein?

Our contention with Relational Blockworld is that a causally local but nonseparable reality solves all the QM "weirdness."

[See “Reconciling Spacetime and the Quantum: Relational Blockworld and the Quantum Liar Paradox,” W.M. Stuckey, Michael Silberstein & Michael Cifone, Foundations of Physics 38, No. 4, 348–383 (2008), quant-ph/0510090 (revised December 2007), and

“Why Quantum Mechanics Favors Adynamical and Acausal Interpretations such as Relational Blockworld over Backwardly Causal and Time-Symmetric Rivals,” Michael Silberstein, Michael Cifone & W.M. Stuckey, Studies in History & Philosophy of Modern Physics 39, No. 4, 736–751 (2008).]

However, this interpretation implies a fundamental theory whereby the current "spacetime + matter" is to be replaced by "spacetimematter," e.g., one consequence of this view is that GR vacuum solutions are only approximations. So, it's incumbent upon us to produce this "theory X" (as Wallace calls it) and that's what we're working on now. To see our current attempt at how this might work, see Figures 1-4 of arXiv 0908.4348.
 
  • #1,387
zonde said:
Strange, but I believe I have stated this clearly enough in my previous posts.
No, I disagree!

Sorry, my fault. I will not ask about this again. I get your point now.

zonde said:
This discussion is not going anywhere if I have to state in every reply to you that I disagree about the outcome of a "collective" EPR-Bell experiment consisting of individual experiments with single pairs.

Yes, we disagree on this, and this is the whole point. I am sure, though, that I can prove to you that your assumption is wrong, and thereby also show that the Ensemble Interpretation is wrong (unless you have missed something in your explanation).

Let me ask you: What is the time-limit (between the pairs) for you to consider a stream of entangled photons an "Ensemble"? Is it 1 nanosecond, 1 microsecond, 1 millisecond, 1 second, or what??

There must clearly be some limit (according to you), since you have stated that coherence is lost for a "single pair", and then the EPR-Bell experiment will fail.

So, what's the difference in time between two "single pairs" and two "coherent pairs" in an "Ensemble"??

zonde said:
This is similar to a "collective" double-slit experiment consisting of individual experiments with a single particle. Even from the orthodox QM perspective this question is quite dubious, because you need a coherent source of photons to observe interference. But there is no coherence for a single photon.

And here is where you got it all wrong. Your view is the old classical view of interference, where the effect originates from several photons interfering with each other. But this has been proven wrong. The interference originates from the wavefunction of one photon interfering with itself! As the Nobel Laureate Paul Dirac (http://en.wikipedia.org/wiki/Paul_Dirac) states (see http://en.wikipedia.org/wiki/Photon_dynamics_in_the_double-slit_experiment#Probability_for_a_single_photon):
...
Some time before the discovery of quantum mechanics people realized that the connexion between light waves and photons must be of a statistical character. What they did not clearly realize, however, was that the wave function gives information about the probability of one photon being in a particular place and not the probable number of photons in that place. The importance of the distinction can be made clear in the following way. Suppose we have a beam of light consisting of a large number of photons split up into two components of equal intensity. On the assumption that the beam is connected with the probable number of photons in it, we should have half the total number going into each component. If the two components are now made to interfere, we should require a photon in one component to be able to interfere with one in the other. Sometimes these two photons would have to annihilate one another and other times they would have to produce four photons. This would contradict the conservation of energy. The new theory, which connects the wave function with probabilities for one photon gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs.

— Paul Dirac, The Principles of Quantum Mechanics, Fourth Edition, Chapter 1

I guess your last hope is to say that Paul Dirac was wrong and that you are right, but then you run into the next problem – physical proof. This video by Akira Tonomura at Hitachi Ltd. shows a double-slit experiment in which individual electrons build up an interference pattern:
https://www.youtube.com/watch?v=FCoiyhC30bc

As you can see, you are obviously wrong, and we could of course extend the time between the electrons to 1 second, 1 minute, 1 hour, 1 day, or 1 month, and still get exactly the same result as above!

This double-slit experiment could of course also be distributed across different geographic locations, and when we later assembled the individual results, we would of course get the same collective picture as above. It's exactly the same mechanism as throwing dice – probability.
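
To illustrate the "one detection at a time" point, here is a toy sketch (arbitrary slit parameters, nothing to do with the actual Hitachi setup): each particle lands at a single random position drawn from the one-particle probability density, and the fringes only show up in the accumulated histogram – no matter how widely the individual detections are separated in time or place.

[code]
# Toy double-slit buildup (illustrative parameters only): sample detection
# positions one at a time from |psi(x)|^2 and accumulate them in a histogram.
import numpy as np

x = np.linspace(-1.0, 1.0, 2000)
intensity = np.cos(40 * x) ** 2 * np.sinc(4 * x) ** 2   # cos^2 fringes under a single-slit envelope
pdf = intensity / intensity.sum()

rng = np.random.default_rng(0)
hits = rng.choice(x, size=50_000, p=pdf)    # one independent detection at a time
counts, _ = np.histogram(hits, bins=100)

for c in counts:                            # crude ASCII picture of the fringes
    print("#" * int(60 * c / counts.max()))
[/code]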

Even if you disagree, you can't deny physical proof, can you...
 
  • #1,388
RUTA said:
"Direct action" means all sources are connected to sinks so there are no "free field" contributions to the Lagrangian. That doesn't mean Wheeler and Feynman thought there were no photons -- I'm not really sure what their motives were for using this approach.

In Relational Blockworld, we have a mathematical rule for the divergence-free graphical co-construction of sources, space and time, that's why we don't have free fields in our action.

This is really hard for me... I can only guess my way through "the haze of complexity"... I guess what you are saying is that if we have no photons, then naturally the force carrier of one of the four fundamental interactions, electromagnetism, also has to go, and this must in some (very strange to me) way be replaced by "direct action", right?? (And does that also go for the other 3 fundamental interactions? :rolleyes:)

Very weird indeed; every part of physics would be affected by this... (if I'm correct)

RUTA said:
Exactly. It wasn't my idea, but I'm just as crazy for using it as those who proposed it (Bohr, Ulfbeck, Mottelson, Zeilinger) :smile:

In its defense, it's a very powerful means of dismissing lots of conceptual issues in quantum physics (QM and QFT), but it does entail corrections to GR -- you can imagine that "direct connections" are fine in flat spacetime, but in curved spacetime between sources at distances where curvature is significant, this idea won't marry up with GR.

Ohh yeah, "crazy" is the term... :smile:

RUTA said:
Yes, that's the orbital problem we have to solve. I can't begin to tell you how much more complicated the math for Regge calculus is than simply solving Newtonian gravity or even GR numerically. So, we're just trying to do the case where the two bodies free fall directly towards one another first.

I have serious trouble just understanding the metric on Minkowski space... so this is actually very easy for me to relate to... :redface:

Seriously, can one look at RBW as a "digitalization", or maybe a "sampling", or just a "quantization" of everything in nature (I'm thinking of the "blocks")? Just as digital music on a CD, or a sound on a sampler, is just small blocks of an originally continuous analog signal – or is this totally wrong (and silly)...? :rolleyes:

If I'm correct, will this (hopefully) be the "key" to the quantization of gravity (which is maybe the most "analog" and "continuous" thing, in terms of distance, that we have)?

Just some personal thoughts... or whatever... :rolleyes:
 
  • #1,389
Loren Booda said:
Can Planck black holes be shown to violate the Bell inequality?

I don't know anything about micro black holes, but you must have quantum entanglement in some way to violate a Bell inequality.


...thinking more about it... this could maybe be a really cool way of "stealing" information from a black hole, by sending in one part of an entangled pair...? :cool:
 
  • #1,390
DevilsAvocado said:
I don't know anything about micro black holes, but you must have quantum entanglement in some way to violate a Bell inequality.


...thinking more about it... this could maybe be a really cool way of "stealing" information from a black hole, by sending in one part of an entangled pair...? :cool:

I don't know what it is you'd entangle, and anyway, the information can't leave the event horizon. Even if they are String Theory Fuzzballs, the event horizon is still "no return"... the ultimate in decoherence.
 
  • #1,391
DevilsAvocado said:
Yes, we disagree on this, and this is the whole point. I am sure, though, that I can prove to you that your assumption is wrong, and thereby also show that the Ensemble Interpretation is wrong (unless you have missed something in your explanation).
How do you intend to do that without actual experimental results?

DevilsAvocado said:
Let me ask you: What is the time-limit (between the pairs) for you to consider a stream of entangled photons an "Ensemble"? Is it 1 nanosecond, 1 microsecond, 1 millisecond, 1 second, or what??

There must clearly be some limit (according to you), since you have stated that coherence is lost for a "single pair", and then the EPR-Bell experiment will fail.

So, what's the difference in time between two "single pairs" and two "coherent pairs" in an "Ensemble"??
Your question is quite reasonable, but I have to admit that I don't have clear answers.
I tried to look at this using different experiments that involve observation of the HOM (Hong-Ou-Mandel) dip, and I have to say that "coherent pairs" should be those that use this "memory" in the same way, i.e. the results are correlated, not just random. But the allowed time offset before coherence is lost is very small – on the scale of picoseconds.
But the time limit for preservation of the "memory" content should be much bigger.

DevilsAvocado said:
And here is where you got it all wrong. Your view is the old classical view of interference, where the effect originates from several photons interfering with each other. But this has been proven wrong. The interference originates from the wavefunction of one photon interfering with itself! As the Nobel Laureate Paul Dirac (http://en.wikipedia.org/wiki/Paul_Dirac) states:

But look at your quote. Dirac says: "The new theory, which connects the wave function with probabilities for one photon gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs."
But he doesn't say what the physical result of constructive and destructive interference is.
What happens when a photon interferes with itself destructively? Does it disappear, or jump to another place, or what?
What happens when a photon interferes with itself constructively? Does it just stay the way it is, or what?
He is just stepping away from the physical context of the question to avoid having to give an answer in a physical sense.

This matter is not so simple. Take a look, for example, at this Feynman quote (there was a discussion about this quote at https://www.physicsforums.com/showthread.php?t=406161):
"It is to be emphasized that no matter how many amplitudes we draw, add, or multiply, our objective is to calculate a single final amplitude for the event. Mistakes are often made by physics students at first because they do not keep this important point in mind. They work for so long analyzing events involving a single photon that they begin to think that the wavefunction or amplitude is somehow associated with the photon. But these amplitudes are probability amplitudes, that give, when squared, the probability of a complete event. Keeping this principle in mind should help the student avoid being confused by things such as the "collapse of the wavefunction" and similar magic."

Why can interpretations of QM not make do with just a single bare photon in a single world?
Why do we need superposition, or a pilot wave, or many worlds?
If I had to name one single thing that is common to all these interpretations, I would say it is the context of measurement.
 
  • #1,392
zonde said:
How do you intend to do that without actual experimental results?


Your question is quite reasonable, but I have to admit that I don't have clear answers.
I tried to look at this using different experiments that involve observation of the HOM (Hong-Ou-Mandel) dip, and I have to say that "coherent pairs" should be those that use this "memory" in the same way, i.e. the results are correlated, not just random. But the allowed time offset before coherence is lost is very small – on the scale of picoseconds.
But the time limit for preservation of the "memory" content should be much bigger.



But look at your quote. Dirac says: "The new theory, which connects the wave function with probabilities for one photon gets over the difficulty by making each photon go partly into each of the two components. Each photon then interferes only with itself. Interference between two different photons never occurs."
But he doesn't tell what is result physically for constructive and destructive interference.
What happens when photon interferes with itself destructively? Does it disappears or jumps to another place or what?
What happens when photon interferes with itself constructively? Does it just stays the way it is or what?
He is just going away from physical context of question to avoid the need for giving answer in physical sense.

This matter is not so simple. Take a look for example at this Feynman quote: (there was discussion about this quote https://www.physicsforums.com/showthread.php?t=406161")
"It is to be emphasized that no matter how many amplitudes we draw, add, or multiply, our objective is to calculate a single final amplitude for the event. Mistakes are often made by physics students at first because they do not keep this important point in mind. They work for so long analyzing events involving a single photon that they begin to think that the wavefunction or amplitude is somehow associated with the photon. But these amplitudes are probability amplitudes, that give, when squared, the probability of a complete event. Keeping this principle in mind should help the student avoid being confused by things such as the "collapse of the wavefunction" and similar magic."

Why interpretations of QM can not do away with single bare photon in single world?
Why do we need superposition or pilot-wave or many worlds?
If I would have to give one single word for what is common in all these interpretations I will say that it's context of measurement.

Ignoring context and interpretations of QM, I don't see how you can keep anything like locality or realism given the violations of BIs. As for how a photon interferes with itself destructively, I would guess it would be a net loss of energy, but who knows. Does it matter? That doesn't really affect non-locality in the context of Bell. The bottom line is that the results of these experiments are incompatible with ANY LHV theory, and the only realist alternative on offer is deBB, which personally I don't buy (although it's viable for now).
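For concreteness, here is a minimal sketch of the quantum prediction that these experiments are testing: for the usual polarization-entangled state the correlation is E(a, b) = cos 2(a − b), which at the standard CHSH angles gives S = 2√2 ≈ 2.83, above the local hidden variable bound of 2 (real experiments come in a bit below this ideal value because of imperfections):

```python
import numpy as np

# QM prediction for the CHSH quantity with polarization-entangled photon pairs.
# For the |phi+> Bell state the correlation is E(a, b) = cos(2*(a - b)),
# where a and b are Alice's and Bob's polarizer angles.
def E(a, b):
    return np.cos(2 * (a - b))

# Standard CHSH angle choices (radians): a = 0, a' = 45 deg, b = 22.5 deg, b' = 67.5 deg
a, a2 = 0.0, np.pi / 4
b, b2 = np.pi / 8, 3 * np.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"Quantum-mechanical S = {S:.3f}")   # 2*sqrt(2) ~ 2.828
print("Local hidden variable bound: |S| <= 2")
```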
 
Last edited by a moderator:
  • #1,393
DevilsAvocado said:
This is really hard for me... I can only guess my way thru "the haze of complexity"... I guess what you are saying is that if we have no photons, then naturally the force carrier of one of the four fundamental interactions, electromagnetism, also has to go, and it must in some (very strange to me) way be replaced by a "Direct action", right?? (And does that also go for the other 3 fundamental interactions? :rolleyes:)

Yes, at the fundamental level there are no "forces." The notion of "force" has to do with the deviation of a worldline (matter) from a geodesic in a background spacetime. In our approach, spacetime and matter are fused into spacetimematter, and the WHOLE thing is co-constructed. It's like GR, where you can view the ontology as free of gravitational force. The difference is that in GR you can have vacuum solutions, i.e., it's meaningful to talk about empty spacetime. In our approach, spatio-temporal distances are defined only between sources, so there is no vacuum solution. This solves problems with closed time-like curves in GR, btw.

DevilsAvocado said:
I have serious trouble just understanding the metric on Minkowski space... so this is actually very easy for me to relate to... :redface:

Seriously, can one look at RBW as a "digitization", or maybe a "sampling", or just a "quantization" of everything in nature (I’m thinking of the "blocks")? Just as digital music on a CD, or sounds on a sampler, consists of small blocks of an originally analog, continuous signal – or is this totally wrong (and silly)...?:rolleyes:?

If I’m on the right track, will this (hopefully) be the "key" to the quantization of gravity (which is maybe the most "analog" and "continuous" thing, in distance, that we have)?

Quantization of "everything" is probably the best metaphor. We use our fundamental rule to yield a partition function over the spacetimematter graph (local and nonseparable). The probability for any particular quantum outcome (graphical relation evidenced by a single detector click) can be obtained per the partition function. Thus, sets of many relations center statistically around the most probable outcome and one obtains classical physics. So, we don't start with classical physics (local and separable) and "quantize it." We start with a quantum physics (local and nonseparable) and obtain classical physics in the statistical limit.
 
  • #1,394
zonde said:
How do you intend to do that without actual experimental results?

Nema problema! ("No problem!") :wink:

zonde said:
Your question is quite reasonable, but I have to admit that I don't have clear answers.
I tried to look at this using different experiments that involve observation of the HOM (Hong-Ou-Mandel) dip, and I would say that "coherent pairs" are those that use this "memory" in the same way, i.e. the results are correlated rather than just random. But the allowed time offset before coherence is lost is very small – on the scale of picoseconds.
The time limit for preservation of the "memory" content, though, should be much longer.

And I have to admit that your answer is somewhat unclear...

Immediately you run into several difficulties. To start with, spontaneous parametric down-conversion in BBO crystals is due to random vacuum fluctuations, and it’s not a very efficient process. Only about one out of 10⁶ pump photons converts into two entangled photons – one in a million.

Then you have coincidence counting, the time window, and the delays in the electronics and optics of the experimental setup.

All this results in roughly 1.5 entangled pairs per millisecond:
http://arxiv.org/abs/quant-ph/9810080
Weihs, Jennewein, Simon, Weinfurter, and Zeilinger

...
The total of the delays occurring in the electronics and optics of our random number generator, sampling circuit, amplifier, electro-optic modulator and avalanche photodiodes was measured to be 75 ns.
...
A typical observed value of the function S in such a measurement was S = 2.73±0.02 for 14700 coincidence events collected in 10 s. This corresponds to a violation of the CHSH inequality of 30 standard deviations assuming only statistical errors.

There goes your "correlated" picoseconds.
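For a quick sanity check on these figures, here's a back-of-envelope calculation using only the numbers quoted above (a minimal sketch; the paper's own error analysis is more careful, so the σ count comes out slightly different):

```python
# Back-of-envelope check of the Weihs et al. numbers quoted above.
# Inputs: 14700 coincidence events in 10 s, S = 2.73 +/- 0.02, CHSH local bound S <= 2.

coincidences = 14700        # coincidence events
duration_s = 10.0           # collection time in seconds

pairs_per_ms = coincidences / (duration_s * 1000.0)
print(f"entangled pairs per millisecond: {pairs_per_ms:.2f}")   # ~1.47, i.e. roughly 1.5

S, dS = 2.73, 0.02
sigmas = (S - 2.0) / dS     # distance above the local-realist bound, in standard deviations
print(f"CHSH violation: about {sigmas:.0f} standard deviations")
# This crude estimate gives ~36 sigma; the paper quotes 30, presumably because the
# quoted +/-0.02 is a rounded value of a slightly larger statistical error.
```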

And I am extremely interested in your "memory". What is it? How does it work? To me it looks at least as spooky as non-locality... This physical "entity" must have enormous "computing power" – keeping track of every microscopic probability in the whole universe... and not only every present probability: "it" must also remember everything it "did" in the past to produce the correctly correlated data... in real time, without delays... How on Earth is this ever possible??

Another interesting problem: If Alice & Bob are separated by 20 km, and a big stream of non-entangled photons, mixed with a few entangled ones, is running towards their random polarizers and measuring apparatus to be time-tagged – how can your "memory" know whether one specific photon is entangled or not? This is something that is established later, when the data from Alice & Bob are compared.

For your "memory" to know this at the exact measuring moment – "it" would need TRUE FTL communication!?:bugeye:!?

Let’s admit, this doesn’t work, does it?

zonde said:
But he doesn't say what the physical result of constructive and destructive interference is.
What happens when a photon interferes with itself destructively? Does it disappear, jump to another place, or what?
What happens when a photon interferes with itself constructively? Does it just stay the way it is, or what?
He is just stepping away from the physical context of the question to avoid giving an answer in a physical sense.

This is so easy that even I can give you a correct answer: What happens is that the single wavefunction for one photon goes thru both slits, just like a single water wave, and after the slits it starts creating an interference pattern, just like a single water wave. When the wavefunction reaches the detector there are destructive and constructive probability amplitudes for the photon to be detected. Naturally, more single photons will be detected in those areas where the amplitudes add constructively (= higher probability).

It’s very simple.

[Attached: Two_sources_interference.gif – animation of two-source wave interference]
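To put some numbers behind the picture above, here is a minimal sketch of the same idea: add the two path amplitudes for a single photon and square the sum (the wavelength, slit separation, and screen distance are made-up illustrative values, not the actual experimental geometry):

```python
import numpy as np

# Two-slit detection probability for a single photon: |psi_slit1 + psi_slit2|^2.
wavelength = 500e-9     # illustrative wavelength (m)
slit_sep = 20e-6        # illustrative slit separation (m)
screen_dist = 1.0       # illustrative slit-to-screen distance (m)

x = np.linspace(-0.05, 0.05, 11)                 # positions along the screen (m)
path_diff = slit_sep * x / screen_dist           # small-angle path difference
phase = 2 * np.pi * path_diff / wavelength       # relative phase between the two paths

amp1 = np.ones_like(x, dtype=complex)            # amplitude for "through slit 1"
amp2 = np.exp(1j * phase)                        # amplitude for "through slit 2"
prob = np.abs(amp1 + amp2) ** 2                  # unnormalized detection probability

for xi, p in zip(x, prob):
    bar = "#" * int(round(p * 5))
    print(f"x = {xi:+.3f} m   P ~ {p:4.2f}   {bar}")
# Bright fringes (P ~ 4) and dark fringes (P ~ 0) alternate: single photons are
# simply detected more often where the two amplitudes add constructively.
```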
 

Last edited by a moderator:
  • #1,395
DevilsAvocado said:
[DevilsAvocado's post #1,394 quoted in full above]

There is a problem. I think the problem is Zonde's approach and its endlessly reductionist requirements. Nearly 90 pages, and a couple of people (not naming names) seem to be chasing their tails. Your .gif picture is very eloquent – really, doesn't it say it all?! What more is needed, a frying pan inset with "No LHV!" to beat some about the head? First the fair-sampling complaint, then offshoots (by another) into endless talk of Malus' law, then back to demands for evidence that is either self-evident or impossible (FTL verification without classical means). I'm ready to believe this is all going to end with Abbott & Costello asking "Who's on first?"
 
  • #1,396
Thanks for the informative replies RUTA. Some comments below:

1. I've begun a second, much slower, reading of your main RBW paper, arXiv 0908.4348. I have the feeling that it might turn out to be somewhat influential. Who knows. In any case, as I mentioned, insofar as I understand the rationale of your approach it does seem quite reasonable, even if certain (necessary?) formal aspects of it are somewhat alien to my, admittedly pedestrian, way of thinking. Hopefully, I'll be able to ask you some worthwhile questions about it in the future -- definitely in a different thread, and probably in a different subforum.

2. Wrt the OP of this thread, I was asking if you assume (not just wrt RBW, but in your general thinking as a theoretical physicist and natural philosopher) the observation-independent existence of an 'underlying reality'. I mean, do you think that this is a reasonable inference from observations, or isn't it?

Wrt the OP, "action at a distance ... as envisaged by the EPR Paradox" entails that there's no underlying reality.

Keeping in mind that there's a difference between saying that there's no way of talking, objectively, about an underlying reality (or any subset thereof such as, say, a light medium), and that an underlying reality doesn't exist, then what would you advise readers like me to believe -- that no underlying reality exists independent of observation, or that it's reasonable to assume that an underlying reality independent of observation exists but there's just no way to objectively talk about it?

If it's assumed that no observation-independent underlying reality exists, then the answer to the OP's question is that "action at a distance ... as envisaged by the EPR Paradox" isn't just possible, it's all there is. Alternatively, if it's assumed that an observation-independent underlying reality exists (even if we have no way of objectively talking about it), then the answer to the OP's question is no, "action at a distance ... as envisaged by the EPR Paradox" isn't possible.

In the course of answering the OP's question, it's been suggested that violations of BIs (and GHZ inconsistencies, etc.) inform us that either an observation-independent underlying reality doesn't exist, or, if it exists, then it's nonlocal in the EPR sense. But nonlocality, "as envisaged by the EPR Paradox", entails that an observation-independent reality doesn't exist. So, the suggestion becomes, violations of BIs show that there is no reality underlying instrumental behavior. This would seem to render any and all 'interpretations' of the qm formalism as simply insipid exercises in the manipulation of agile terms.

Of course, there's another, more reasonable way to interpret BI violations -- that they're not telling us anything about the nature of reality, but rather that they have to do with how certain experimental situations might be formalised. In which case, the answer, in my view, to the OP's question is simply that there is, currently, no definitive answer to his question -- but that the most reasonable assumptions, based on what is known, entail that, no, it's not possible.
 
  • #1,397
Thanks for the thoughtful reply DA. I'm not sure I totally agree with (or maybe I don't fully understand) some of your points. Comments below:

DevilsAvocado said:
I agree; we all want the world to be logical and understandable. No one wants it to be horrible, incomprehensible or 'magical'. We want to know that it all works the way we 'perceive' it. We also want nature to be 'homogeneous' on all scales. It’s very logical and natural, and I agree.
Not strictly "'homogeneous' on all scales", keeping in mind that there do seem to be certain 'emergent' organizing principles that pertain to some physical 'regimes' and not others, but rather that there might be some fundamental, or maybe a single fundamental, dynamical principle(s) that pervade(s) all scales of behavior.

DevilsAvocado said:
But I think it could be a mistake... or at least lead to mistakes.
Sure, it could. But maybe not. Modern particle physics has proceeded according to a reductionist program -- in the sense of 'explaining' the macroscopic world in terms of properties and principles governing the microscopic and submicroscopic world. But there's another approach (also a sort of reductionism) that aims at abstracting dynamical principles that are relevant at all scales of behavior -- perhaps even reducing to one basic fundamental wave dynamic.

DevilsAvocado said:
A classical mistake is when one of the brightest minds in history, Albert Einstein, did not like what his own field equations for theory of general relativity revealed – the universe cannot be static.

Albert Einstein was very dissatisfied, and made a modification of his original theory and included the cosmological constant (lambda: Λ) to make the universe static. Einstein abandoned the concept after the observation of the Hubble Redshift, and called it the '"biggest blunder" of his life.

(However, the discovery of cosmic acceleration in the 1990s has renewed interest in a cosmological constant, but today we all know that the universe is expanding, even if that was not Albert Einstein’s logical hypothesis.)
Einstein made a logical judgement, given what was known at the time, and then changed his mind given observational evidence of the expansion. It's quite possible that the mainstream paradigms of both fundamental physics and cosmology might change significantly in the next, say, 100 to 200 years.

DevilsAvocado said:
Another classical example is Isaac Newton, who found his own law of gravity and the notion of "action at a distance" deeply uncomfortable, so uncomfortable that he made a strong reservation in 1692.
Newton formulated some general mathematical relationships which accorded with observations. His reservation wrt his gravitational law was that he wasn't going to speculate regarding the underlying reason(s) for its apparent truth. Then, a couple of centuries after Newton, Einstein presented a more sophisticated (in terms of its predictive accuracy) and more explanatory (in terms of its geometric representation) model. And, I think we can assume that GR is a mathematical/geometrical simplification of the fundamental physical reality determining gravitational behavior. Just as the Standard Model is a simplification, and qm is a simplification.

DevilsAvocado said:
We must learn from this.
I agree. And the main thing we learn from is observation. Relatively recent and fascinating cosmological observations have led to inferences regarding the nature of 'dark energy' and 'dark matter'. But, in keeping with the theme of your reply to my reply to nismaratwork, these apparent phenomena don't necessarily entail the existence of anything wholly unfamiliar to our sensory reality. Dark energy might be, fundamentally, the kinetic energy of the universal expansion. The apparent acceleration of the expansion might just be a blip in the overall trend. It might be taken as evidence that gravity isn't the dominant force in our universe. I'm not familiar with the current mainstream views on this.

ThomasT said:
Our universe appears to be evolving. Why not just assume that it 'is' evolving -- that 'change' or 'time' isn't just an illusion, but is real? Why not assume that the fundamental physical principles govern physical behavior at all scales?
If there's a fundamental physical principle, say in the form of a fundamental wave dynamic, and if it makes sense to assume that it's present at all scales, then, conceptualizing the boundary of our universe as an (ideally) spherical expanding shell – the mother of all waveforms, so to speak – the discovery of the cosmic-scale expansion becomes maybe the single most important scientific discovery in history.

And dark matter might be waves in a medium or media of unknown structure. Is there any particular reason to assume that wave behavior in media that we can't see is 'fundamentally' different from wave behavior in media that we can see? It might be argued that standard qm is based on the notion that the wave mechanics of unknown media is essentially the same as the wave mechanics of known media.

DevilsAvocado said:
I think that humans have a big "ontological weakness" – we think that the human mind is "default" and the "scientific center" of everything in the universe, and there are even some who are convinced that their own brain is greatest of all . But there is no evidence at all that this is the case (please note: I’m not talking about "God").
I certainly agree that this seems to be the general orientation. Whereas, the more scientifically sophisticated worldview would seem to be that what our sensory faculties reveal to us is not the fundamental 'reality'. Perhaps we're just complex, bounded waveforms, persisting for a virtual instant as far as the life of the universe as a whole is concerned -- or however one might want to talk about it.

DevilsAvocado said:
One extremely simple example is "human colors". Do they exist? The answer is No. Colors only exist inside our heads. In the "real world" there is only electromagnetic radiation of different frequency and wavelength. A scientist trying to visualize "logical colors" in nature will not go far.
Well, colors do exist. But, as you've noted, it's really important to specify the context within which they can be said to exist. We humans, and moons and cars and computers, exist, but these forms that are a function of our sensory faculties aren't the fundamental form of reality.

And the way that all of our sensory faculties seem to function (vibrationally) gives us another clue (along with quantum phenomena, and the apparent behavior of dark matter, etc.) wrt the fundamental nature of reality. It's wavelike. Particles and particulate media emerge from complex wave interactions. Now, wrt my statement(s), is there any reason to suppose that wave behavior in particulate media is governed by different fundamental dynamical principles than wave behavior in nonparticulate media? Of course, I have no idea.

DevilsAvocado said:
Have you ever tried to visualize a four-dimensional space-time?
I don't want to. I think that it's a simplification of underlying complex wave behavior.

DevilsAvocado said:
Or visualize the bending and curving of that 4D space-time??
No. But consider the possibility that 'gravitational lensing' is further evidence in favor of a wave interpretation of fundamental reality. (And keep in mind that insofar as we entertain the idea of a fundamental reality that exists whether we happen to be probing it or not, then we can't logically entertain the possibility of EPR-envisaged spooky action at a distance per the OP.)

DevilsAvocado said:
To my understanding, not even the brightest minds can do this?? Yes, it works perfectly in the mathematical equations, but to imagine an "ontological description" that fits "our experience"... is this even possible??
Sure, there's wave activity in a medium or media that we can't detect that's affecting the light.

DevilsAvocado said:
Yet, we know it’s there, and we can take pictures of it in the form of gravitational lensing on the large cosmological scale:
Does this fit your picture of a "logical reality"...?
Yes.

DevilsAvocado said:
I don’t think mainstream science claims the full understanding of EPR-Bell experiments, it’s still a paradox. What is a fact though is that either locality and/or realism have to go if QM is correct (and QM is the most precise theory we got so far): Bell's Theorem proves that QM violates Local Realism.
I agree that objective realism is a pipe dream. There's simply no way to know, definitively, what the underlying reality is or, definitively, how it behaves. It is, nonetheless, a wonderful speculative enterprise. And I do think that informed speculations about the nature of reality will help fundamental physics advance.

But if we opt for nonlocality, per EPR and the OP, then there is no underlying reality -- and I find that a very limiting and boring option.

DevilsAvocado said:
There seem to be some in this thread who really think that Einstein would have stuck to his original interpretation of the EPR paradox, despite the work of John Bell and the many experimentalists who have verified QM predictions and Bell's Theorem, time after time. I’m pretty sure that this would not have been the case. Just look at the cosmological constant and the Hubble Redshift. Einstein changed his mind immediately. He did not start looking for "loopholes" in Hubble's telescope or any other farfetched 'escape' – he was a diehard empiricist.
Bell experiments are one thing. Interpretations of Bell experiments are quite another. Do they inform us about the nature of reality? How, especially when one interpretation is that BI violations tell us that an underlying reality doesn't even exist? And if that's the case, then what is there to 'discover'?

The acceptance that the cosmological expansion is real is much less problematic than the acceptance that there's no reality underlying instrumental behavior.

I don't know what the mainstream view is, but if it's that qm and experiments are incompatible with the predictions of LR models of a certain form specified by Bell, then I currently agree with that. The experiments tell us nothing about any necessary qualitative features of the reality underlying the instrumental behavior – except maybe that the correlation between detector behavior and emitter behavior would seem to support the assumption that there's something moving from emitter to detector. That, in turn, would seem to support the assumption that there's an underlying real 'whatever', produced by the emission process, which exists prior to and independent of filtration and detection – which would support the contention that the correct answer to the OP's question is, no, "action at a distance ... as envisaged by the EPR Paradox" is not possible.

DevilsAvocado said:
We already know that there are problems in getting full compatibility between QM and GR when it comes to gravity in extreme situations, and EPR-Bell is just another verification of this incompatibility. If we try to solve the EPR-Bell situation as a "spookyactionatadistanceist" we get problems with SR and Relativity of Simultaneity (RoS) + problems with QM and the No-communication theorem. If we try to solve it as a "surrealist" (non-separable/non-realism) we get the problems RUTA is struggling with.

So this question is definitely NOT solved, and it’s definitely NOT easy.

But, let’s not make it too easy by saying the problem doesn’t exist at all, because there’s still QM-incompatible gravity dragging us down, and it will never go away...
I agree. And the QM-GR, RBW-GR formal problems are beyond my comprehension. However, this thread is (ok, it sort of was at one time) about answering the question, "Is action at a distance possible as envisaged by EPR?". And here's my not quite definitive answer to that:

If there's no underlying reality, then it's possible.
Experiments suggest that there's an underlying reality.
Therefore, it's not possible.

Or, in the words of Captain Beefheart:

The stars are matter,
We are matter,
But it doesn't matter.
 
  • #1,398
DevilsAvocado said:
This is so easy that even I can give you a correct answer: What happens is that the single wavefunction for one photon goes thru both slits, just like a single water wave, and after the slits it starts creating an interference pattern, just like a single water wave.

***
When the wavefunction reaches the detector there are destructive and constructive probability amplitudes for the photon to be detected.
***

Naturally, more single photons will be detected in those areas where the amplitudes add constructively (= higher probability).

It’s very simple.
Incredible!
You have got it!

So if we could hypothetically detect all photons, even those with a miserable detection probability, we would lose any idea of an interference pattern.
That's what I call unfair sampling, but you can call it whatever you want.

Forget about the ensemble interpretation. I can explain it to you using orthodox QM in a much simpler way.

Just to check that we are on the same page, consider the http://en.wikipedia.org/wiki/Mach-Zehnder_interferometer .

[Image: Mach-zender-interferometer.png – Mach-Zehnder interferometer schematic]


After we count all the phase shifts from the different mirrors, Wikipedia says that "there is no phase difference in the two beams in detector 1, yielding constructive interference." So detector 1 fires when a photon arrives there (constructive interference), but detector 2 does not fire when a photon arrives there (destructive interference).
So photons arrive at both detectors, but because of the interference one detector fires and the other doesn't.

Are we still on the same page here?
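For reference, the amplitude bookkeeping for a balanced Mach-Zehnder interferometer can be written out explicitly. This is a minimal sketch assuming ideal 50/50 beam splitters and ignoring the common phase from the mirrors; which output port ends up bright depends on the phase convention chosen for the beam splitter:

```python
import numpy as np

# Single-photon amplitudes through a balanced Mach-Zehnder interferometer.
# Basis: (path A, path B). Ideal 50/50 beam splitter, with the common convention
# that reflection picks up a factor of i.
BS = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                  [1j, 1]])

psi_in = np.array([1, 0], dtype=complex)   # one photon entering a single input port
psi_out = BS @ BS @ psi_in                 # two beam splitters; mirrors add only a common phase

p_port1, p_port2 = np.abs(psi_out) ** 2
print(f"P(output port 1) = {p_port1:.2f}")  # 0.00 with this convention (destructive)
print(f"P(output port 2) = {p_port2:.2f}")  # 1.00 with this convention (constructive)
# The two path amplitudes cancel at one output port and add at the other, so all
# of the single-photon detection probability ends up at one detector.
```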
 
Last edited by a moderator:
  • #1,399
zonde said:
Consider the http://en.wikipedia.org/wiki/Mach-Zehnder_interferometer .


After we count all the phase shifts from the different mirrors, Wikipedia says that "there is no phase difference in the two beams in detector 1, yielding constructive interference." So detector 1 fires when a photon arrives there (constructive interference), but detector 2 does not fire when a photon arrives there (destructive interference).

So photons arrive at both detectors, but because of the interference one detector fires and the other doesn't.

Most people would say that no photons arrive at detector 2.
 
Last edited by a moderator:
  • #1,400
RUTA said:
Most people would say that no photons arrive at detector 2.

After finally reading this entire thread, I feel confident saying that Zonde is not most people... :rolleyes:
 
