Bell experiment would somehow prove non-locality and information FTL?

In summary: Bell's theorem states that a theory cannot be both "local" and "realistic." You have to give up one or the other, if you accept the validity of Bell's Theorem.
  • #106
heusdens said:
The whole point here again, is what do you define as a "well defined state"?
I'd have to think about that more to have something really precise, but as a first try, you could say that every possible physical state of the universe can be represented by an element of some mathematically-defined set, with the universe's state corresponding to a single element at every moment. And there is some mathematical function for the time-evolution that tells you what future states the universe will be in given its past states (the function could be either deterministic or stochastic). And knowing which element of the set corresponds to the current state gives you the maximum possible information about the physical universe; there are no other variables which could affect your measurements or your predictions about the future state of the universe, and which could differ even for states that correspond to the same element of the set.
heusdens said:
A signal that by all means is random can not, by mere logic, be also non-random, yet it can be easily shown to be the case.
But you're not really violating the laws of logic, you're just using the word "random" in a poorly-defined linguistic way, as opposed to a precise mathematical definition. Similarly, if I say "putting one rabbit and one rabbit together can give a lot more than two rabbits, since they could have babies", I'm not really violating the laws of arithmetic, I'm just using the phrase "putting one and one together" in a way that doesn't really correspond to addition in arithmetic.
heusdens said:
I just have to create a clear signal, and split that into two signals that are correlated, and add to both signals a random noise.

Each of the signals now is random. Yet I can manage to recreate the clear signal from both random signals.
But how are you defining "random"? Without a clear definition this is just vague verbal reasoning. There might indeed be some definition where two strings of digits could individually be maximally random, but taken together they are not (I think this would be true if you define randomness in terms of algorithmic incompressibility, for example)--this need not be any more of a contradiction than the fact that two objects can individually weigh less than five pounds while together they weigh more than five pounds.
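To make that last point concrete, here is a toy illustration (mine, using zlib compression as a very crude stand-in for algorithmic incompressibility, and taking the two "strings" to be identical copies of one random string):

[code]
# Toy illustration: zlib compression as a crude stand-in for
# algorithmic incompressibility.
import os
import zlib

s = os.urandom(1000)          # a "maximally random" 1000-byte string
t = s                         # a second string, identical to the first

# Individually, neither string compresses (each looks fully random):
print(len(zlib.compress(s)))  # roughly 1000 bytes or slightly more
print(len(zlib.compress(t)))  # same

# But the 2000-byte combination is far from maximally random for its
# length: it compresses to not much more than 1000 bytes.
print(len(zlib.compress(s + t)))
[/code]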
heusdens said:
However, if you give me a clear formal description of an experiment and set up which can in principle be made using only the "classical" aspects of physics, I am about sure one can show a deviation from the Bell Inequality in the non-QM case too.
Just think of the experiment with Alice and Bob at the computer monitors which I described earlier, and try to think of a way to get the Bell inequality violations in such a way that a third-party observer can see exactly how the trick is being done--what procedure the computer uses to decide whether to display a + or - depending on what letter Alice and Bob type, based on some sort of signal or object sent to each computer from a common source, with the signal or object not containing any "hidden" information which can't be seen by this third-party observer but which helps the computer to decide its output. This description might be a little vague, but as long as you avoid having each computer measure one member of a pair of entangled particles in order to choose its answer, it should be sufficiently "classical" for the purposes of this discussion.
 
Last edited:
  • #107
DrChinese said:
The above represents an improper understanding of polarization and how it is measured. The "gap" has nothing WHATSOEVER to do with the cos^2 relationship. In fact, such filters are sometimes used in Bell tests but often they are not. Instead, polarizing beam splitters (birefringent prisms) are used, and these have no gap.

Possibly, because I'm not too familiar with these kinds of things.

Perhaps I'm confusing this with another kind of filter used in other experiments.

DrChinese said:
You seem to keep missing the idea that the setup is tuned initially so that "perfect" correlations are seen (0 degrees of difference). There is very little noise to speak of when the angles are the same. So this is not an issue in any sense. All reputable experiments have a small amount of noise and this is considered when the margin of error is calculated. This is on the order of magnitude of 50+ standard deviations in modern Bell tests.

What makes you think I didn't catch that?

DrChinese said:
If you like, I can provide several references for Bell tests to assist in seeing that it is not an experimental issue.

You can post them; I would be glad to read them.

DrChinese said:
Bell test results agree with the basic predictions of ordinary QM, without the need for adding a non-local component. My conclusion is that the HUP is fundamental, and there is no observation independent layer of reality for quantum observables. (But that is merely one possible interpretation. MWI and BM are others.)

Sorry, what does HUP stand for?

I have, in the course of this and other threads, heard so many different explanations, each having their own demerits (and merits), but all rather one-sided and only revealing partial truths.

I do not exactly conform myself to any of such explanations because, for one thing, they basically shift the problem to some other department of physics without resolving it (we would in some of these explanations, for example, have to reconsider relativity, since it undermines its basic premises, or otherwise undermine other basic premises about our understanding of the world, or introduce arbitrary new phenomena, like many worlds, etc.).

So, actually I am trying to figure things out in a more substantial way.

The references to dialectics were meant to give a clue to this, because dialectics tries to escape from the one-sidedness of these formal mathematical explanations, and instead give a full picture of what can be regarded as truth.

[ Perhaps not everyone is happy with that, because dialectics is not specifically related to quantum physics, and such discussions are meant to occur in the forums meant for philosophical topics, yet most of such threads are rather worthless, since most topics are rather unconcrete. ]

One thing is clear: with regard to dialectics, we can distinguish between appearance and essence. Formal logic does not make that distinction, and therefore ends up in contradictions; dialectics does not insist that the appearance of something coincide with its essence.
 
Last edited:
  • #108
JesseM said:
I'd have to think about that more to have something really precise, but as a first try, you could say that every possible physical state of the universe can be represented by an element of some mathematically-defined set, with the universe's state corresponding to a single element at every moment. And there is some mathematical function for the time-evolution that tells you what future states the universe will be in given its past states (the function could be either deterministic or stochastic). And knowing which element of the set corresponds to the current state gives you the maximum possible information about the physical universe; there are no other variables which could affect your measurements or your predictions about the future state of the universe, and which could differ even for states that correspond to the same element of the set.

But you're not really violating the laws of logic, you're just using the word "random" in a poorly-defined linguistic way, as opposed to a precise mathematical definition. Similarly, if I say "putting one rabbit and one rabbit together can give a lot more than two rabbits, since they could have babies", I'm not really violating the laws of arithmetic, I'm just using the phrase "putting one and one together" in a way that doesn't really correspond to addition in arithmetic.

But how are you defining "random"? Without a clear definition this is just vague verbal reasoning. There might indeed be some definition where two strings of digits could individually be maximally random, but taken together they are not (I think this would be true if you define randomness in terms of algorithmic incompressibility, for example)--this need not be any more of a contradiction than the fact that two objects can individually weigh less than five pounds while together they weigh more than five pounds.

Just think of the experiment with Alice and Bob at the computer monitors which I described earlier, and try to think of a way to get the Bell inequality violations in such a way that a third-party observer can see exactly how the trick is being done--what procedure the computer uses to decide whether to display a + or - depending on what letter Alice and Bob type, based on some sort of signal or object sent to each computer from a common source, with the signal or object not containing any "hidden" information which can't be seen by this third-party observer but which helps the computer to decide its output. This description might be a little vague, but as long as you avoid having each computer measure one member of a pair of entangled particles in order to choose its answer, it should be sufficiently "classical" for the purposes of this discussion.

I am not an expert in information science, but surely "random" has a precise mathematical definition by which one can judge and state whether a stream of data is random or not.
Part of that definition will of course entail that from any part of the stream of data we are not able to tell what data will come next, nor can we discover any meaningful pattern in the data.

Now, supposing this definition, if I use such a random stream of data (the same random data) for two signals, and add to one of them a non-random stream of data, both data streams individually are still random, although they DO have a correlation.

That is the sort of correlation we are in fact looking for.

Just as a small example: I throw a die and note the outcome each time.
From that I create 2 streams of data. To one stream of data I add a message, encoded in some form [ for example, I could encode the message or signal as a stream of numbers in base 6, add that to the random values (modulo 6), and produce another random signal. ]

The resulting data stream remains random, since we by no means know the random stream of data, and cannot detect the data that goes with it from either stream alone.
Yet we know, from how we created these two data streams, that they do have a correlation. We only have to subtract each value of one stream from the other to recover the data that was embedded in the stream.
This of course only works because we use the *same* random stream for *both* signals.
If however this correlation were lost (for instance, if we used different random streams for the data signals), we would not be able to extract the original stream of data.

Now in this case the resulting two streams are totally correlated, because we use the exact same random stream. But we can of course create other streams of data by using different (independent) streams of random data, in such a way that the original (meaningful) stream of data gets more and more lost.
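In code, a scheme like the one described above might look as follows (a minimal sketch of my own; it is essentially the classical one-time pad, assuming a base-6 alphabet for both the die throws and the message):

[code]
# Minimal sketch of the scheme above: the same random base-6 stream R
# masks a message stream D, giving two transmitted streams that each
# look random but are perfectly correlated.
import random

D = [3, 1, 4, 1, 5]                   # "meaningful" data, digits in base 6
R = [random.randrange(6) for _ in D]  # shared random stream (the die throws)

stream1 = R                                    # first signal: raw random stream
stream2 = [(r + d) % 6 for r, d in zip(R, D)]  # second signal: masked message

# Either stream alone is uniformly random; subtracting recovers D:
recovered = [(s2 - s1) % 6 for s1, s2 in zip(stream1, stream2)]
assert recovered == D
[/code]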

Whether we get the signal from both data streams would depend, for instance, on some setting at the end of the data stream where we measure the output.

So my thesis is that in such a way the QM correlations might be reproduced, just by using random data streams which contain correlated data (to be discovered only by combining the two signals), and whose correlation is dependent on some setting at both sides of the measuring device.

I know the above is a rather loosely defined "system" and is not written in strong mathematical terms, but I hope you get the picture.
 
Last edited:
  • #109
JesseM said:
What do you mean by "undefined"? What would happen when you measured the non-commuting observables?
Undefined is like [itex]\frac{0}{0}[/itex] or maybe like [itex]0^0[/itex] or [itex]\lim_{x \rightarrow 0} \sin{\frac{1}{x}}[/itex].

JesseM said:
Can you give an example of the sort of model you're talking about?
Strong determinism is present, for example with Bohmian mechanics in a universe with a zero-diameter big bang, or in MWI's branch-both-ways.
Something like “Deterministic Model of Spin and Statistics”, Physical Review D27, 2316-2326 (1983) http://edelstein.huji.ac.il/staff/pitowsky/papers/Paper%2004.pdf
(This is certainly not mainstream, but is mathematically sound.)
Even DrChinese's 'negative probabilities' apply (although I expect that that approach will run into some problems if it's taken further).

JesseM said:
Could you have a non-standard notion of probability that applies to a deterministic computer simulation, for example? If so, what aspects of the program's output would fail to obey the standard laws of probability?

There is no terminating deterministic Turing machine that has any states with undefined probabilities.

I'll elaborate a little:

Kolmogorov probability requires that if A has a probability of occurring, and B has a probability of occurring, then (A and B) must also have a probability (possibly 0) of occurring.

Now, if A and B are commuting observables, then this probability can be experimentally determined, so this notion will hold in any classical setting where measurements are non-perturbing, and thus always commuting.

However, in the quantum setting A and B may not commute, so in order for the usual notion of probability to apply it's necessary to assume that the expression (A and B) has a well-defined probability.

In order to construct the inequality, Bell's theorem adds and subtracts expressions with untestable probabilities. Without the assumption that these scenarios have well-defined probabilities, it's like 'simplifying':
[tex]\frac{0}{0}-\frac{0}{0}[/tex]
to
[tex]0[/tex]
 
Last edited by a moderator:
  • #110
heusdens said:
I am not an expert on information science, but for sure "random" has a precise mathematical definition by which it can be judged and stated that a stream of data is random or not.
I think there are various definitions, but like I said, even if two 10-digit strings are maximally random for their length, there's no reason the 20-digit string created by combining them would also have to be maximally random for its length.
heusdens said:
Now, supposing this definition, if I use such a random stream of data (the same random data) for two signals, and add to one of them a non-random stream of data
What do you mean by "adding" two streams, one random and the other nonrandom? If one stream was 10010101101 and the other was 11111111111, what would the sum be?
heusdens said:
both data streams individually are still random, although they DO have a correlation.

That is the sort of correlation we are in fact looking for.
Is it? Please state a particular form of the Bell inequality, and show how your example violates it--I promise, you won't be able to get any such violation unless the streams are generated using entangled particles.

Alternately, you could explain how this idea of adding random and nonrandom data streams can be used to reproduce the results seen in the experiment with the computer monitors I brought up earlier in this post, where whenever Alice and Bob type the same letter they always get opposite responses from their computers, and yet one or both of these inequalities are violated:
* Number(Alice types A, gets +; Bob types B, gets +) plus Number(Alice types B, gets +; Bob types C, gets +) is greater than or equal to Number(Alice types A, gets +; Bob types C, gets +).

* when Alice and Bob pick different letters, the probability of them getting opposite results (one sees a + and the other sees a -) must be greater than or equal to 1/3.
(If you'd like some additional explanation of where these inequalities come from I can provide it.) When I asked you about this before, you did say "I am about sure one can show a deviation from the Bell Inequality in the non-QM case too." Well, are you willing to try to come up with a specific example?
heusdens said:
Just as a small example. I throw dice and note every time the outcome.
From that I create streams of data. One stream of data I add up a message, encoded in some form. The resulting data stream keeps being random, since by no means we know the random stream of data, and can't detect the data that goes with it.
Yet, we know from how we created these two data streams, that they do have a correlation. We only have to substract each value from one stream from the other, to get back to the data that was implemented on the stream.
This of course only works because we use the *same* random stream for *both* signals.
If however this correlation would be lost, we would not be able to extract the original stream of data.
Again, what does this have to do with the Bell inequalities? The Bell inequalities are all specific quantitative statements about the number of measurements with some outcomes vs. some other outcomes, not just some broad statement about the measurements being "correlated" when they measure on the same axis and "uncorrelated" on another. Again, would it help to go over the specific reasoning behind the two inequalities I mentioned above?
heusdens said:
So, my thesis is, that in such a way the QM correlations might be reproduced, just by using random data streams which contain correlated data (to be discovered only by combining the two signals), and which correlation is dependend on some setting at both sides of the measuring device.
OK, so to have a "non-quantum" case let's just say Alice and Bob are both being sent a stream of signals which their computers are using as a basis for deciding whether to display a + or - each time they type one of the three letters, and that I am a third-party observer who sees every digit of both streams, how the streams are being generated, and what algorithm the computer uses to choose its output based on its input from the streams. In this case I promise you it will be impossible to reproduce the results described, where on each trial where they both type the same letter, they always get opposite symbols on their display, yet one or both of the inequalities I gave above is violated. If you think this is wrong, can you try to give the specifics of a counterexample?
 
  • #112
DrChinese said:
Einstein's view of realism was repeated by him long after EPR. He may not have liked the paper, but not because he thought it was erroneous. He was not happy with the focus on certain specifics of QM.

Einstein never disavowed "naive" realism: "I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured." Personally, I don't think this is a naive statement. But that does not make it correct, either.

To bring clarity to the discussion, let's allow that there is: naive realism, strong realism, EPR realism, Bell realism, Einstein realism, ... (In my view: naive realism = strong realism = EPR realism = Bell realism = silliness.)

Now I am not aware that Einstein ever endorsed the other versions, so could you let me have the full quotes and sources that you rely on?

Note: We are not looking for Einstein's support of ''pre-measurement values'' BUT for the idea that measurement does NOT perturb the measured system (for that is the implicit silliness with EPR, Bell, etc). Einstein (1940, 1954) understood that the wave-function plus Born-formula related to the statistical prediction of ''measurement outcomes'' and NOT pre-measurement values.
 
  • #113
heusdens said:
Like I said, there are probably a number of possible definitions--one used in information theory is algorithmic randomness, which says that a random string is "incompressible", meaning it's impossible to find a program that can generate it which is shorter than the string itself. The definition given in your link is more like statistical randomness, which is probably related--if there is some way of having better-than-even chances of guessing the next digit, then that could help in finding a shorter program to generate the string. But the definition the guy in the link was using doesn't seem quite identical to statistical randomness, because a string could be "statistically random" in the sense that there's no pattern in the string itself that would help you guess the next digit, but knowledge of some external information would allow you to predict it (as might be the case for a deterministic pseudorandom algorithm).
 
Last edited by a moderator:
  • #114
JesseM said:
I think there are various definitions, but like I said, even if two 10-digit strings are maximally random for their length, there's no reason the 20-digit string created by combining them would also have to be maximally random for its length.

No, it is even more subtle than that: whatever test we have for examining whether a data stream is random, it is in theory possible that a data stream passes this test while it is not random, but can be decoded using an algorithm and a key.

What do you mean by "adding" two streams, one random and the other nonrandom? If one stream was 10010101101 and the other was 11111111111, what would the sum be?

Some form would be to encode it in base X, and add the random stream data R and the meaningful data stream D as: ( R(i) + D(i) ) modulo X. (With X = 2 this is just bitwise XOR, so for your example the sum would be 10010101101 + 11111111111 = 01101010010.)

It does not matter exactly how we do it, as long as we can decode the stream back: if we have the random signal R, we can extract signal D from it.

Is it? Please state a particular form of the Bell inequality, and show how your example violates it--I promise, you won't be able to get any such violation unless the streams are generated using entangled particles.

I am working on a good formulation of it.

Alternately, you could explain how this idea of adding random and nonrandom data streams can be used to reproduce the results seen in the experiment with the computer monitors I brought up earlier in this post, where whenever Alice and Bob type the same letter they always get opposite responses from their computers, and yet one or both of these inequalities are violated: (If you'd like some additional explanation of where these inequalities come from I can provide it.) When I asked you about this before, you did say "I am about sure one can show a deviation from the Bell Inequality in the non-QM case too." Well, are you willing to try to come up with a specific example?

I will do my best.

Again, what does this have to do with the Bell inequalities? The Bell inequalities are all specific quantitative statements about the number of measurements with some outcomes vs. some other outcomes, not just some broad statement about the measurements being "correlated" when they measure on the same axis and "uncorrelated" on another. Again, would it help to go over the specific reasoning behind the two inequalities I mentioned above? OK, so to have a "non-quantum" case let's just say Alice and Bob are both being sent a stream of signals which their computers are using as a basis for deciding whether to display a + or - each time they type one of the three letters, and that I am a third-party observer who sees every digit of both streams, how the streams are being generated, and what algorithm the computer uses to choose its output based on its input from the streams. In this case I promise you it will be impossible to reproduce the results described, where on each trial where they both type the same letter, they always get opposite symbols on their display, yet one or both of the inequalities I gave above is violated. If you think this is wrong, can you try to give the specifics of a counterexample?

I said: I will try!
 
  • #115
wm said:
Note: We are not looking for Einstein's support of ''pre-measurement values'' BUT for the idea that measurement does NOT perturb the measured system
You can certainly assume measurement perturbs the system, but if you want to explain the perfect correlation between results when the experimenters measure along the same axis in terms of local hidden variables, you'd have to assume it perturbs the state of the system in an entirely predictable way which does not vary between trials (i.e. if the particle was in state X and you make measurement Y, then if you get result Z once, you should get result Z every time the particle is in state X and you make measurement Y).
 
  • #116
DrChinese said:
Bell test results agree with the basic predictions of ordinary QM, without the need for adding a non-local component. My conclusion is that the HUP is fundamental, and there is no observation independent layer of reality for quantum observables. (But that is merely one possible interpretation. MWI and BM are others.)

DrC, could you please expand on this interesting position? As I read it, you have a local comprehension of Bell-test results? (I agree that there is such, but did not realize that you had such.)

However, your next sentence is not so clear: By definition, an observable is observation dependent, so you seem to be saying that there are no underlying quantum beables?? There is no ''thing-in-itself''??

Personally: I reject BM on the grounds of its non-locality; and incline to endorse MWI with its locality, while rejecting its need for ''many worlds''.
 
  • #117
JesseM said:
Like I said, there are probably a number of possible definitions--one used in information theory is algorithmic randomness, which says that a random string is "incompressible", meaning it's impossible to find a program that can generate it which is shorter than the string itself. The definition given in your link is more like statistical randomness, which is probably related--if there is some way of having better-than-even chances of guessing the next digit, then that could help in finding a shorter program to generate the string. But the definition the guy in the link was using doesn't seem quite identical to statistical randomness, because a string could be "statistically random" in the sense that there's no pattern in the string itself that would help you guess the next digit, but knowledge of some external information would allow you to predict it (as might be the case for a deterministic pseudorandom algorithm).

Although it is somewhat of a *side* issue, I think that even in theory there is no possibility of creating a truly random stream. It could always be data that can be decoded back, using some algorithm and key, into meaningful data.

So this makes randomness a very problematic feature. It would mean, for instance, that "random" and "not random" fail the law of the excluded middle. The same stream can be random (from some point of view, or for some observer) and not random (from some other point of view, or some other observer).
 
  • #118
JesseM said:
You can certainly assume measurement perturbs the system, but if you want to explain the perfect correlation between results when the experimenters measure along the same axis in terms of local hidden variables, you'd have to assume it perturbs the state of the system in an entirely predictable way which does not vary between trials (i.e. if the particle was in state X and you make measurement Y, then if you get result Z once, you should get result Z every time the particle is in state X and you make measurement Y).

Yes; it's called determinism. That is why, in a Bell test, when the detectors have (say) the same settings, the outcomes are identical (++, ++, --, ++, --, ...) (with no evidence of DrC's HUP). NEVERTHELESS, each perturbed particle (with its revealed observable) now differs from its pre-measurement state (with its often-hidden beable).

AND NOTE: Prior to one or other measurement, our knowledge of the state is generally insufficient for us to avoid a probabilistic prediction; here 50/50 ++ XOR --. So, from locality, determinism is the underlying mechanism that delivers such beautifully correlated results from randomly delivered twins; HUP notwithstanding.
 
Last edited:
  • #119
heusdens said:
Although it is somewhat of a *side* issue, I think that even in theory there is no possibility of creating a truly random stream. It could always be data that can be decoded back, using some algorithm and key, into meaningful data.
Well, neither definition says that this would preclude a string from being random. The "algorithmic incompressibility" definition just tells you that the algorithm for encoding the message, plus the most compressed algorithm for generating the message on its own (and if the message is meaningful, it might have a short algorithm to generate it), must be longer than the string itself. And the "statistical randomness" definition says that despite the fact that the string is an encoded message, that won't help you predict what each successive digit will be (or if it does help, then the string is not statistically random).
heusdens said:
So this makes randomness a very problematic feature. It would mean, for instance, that "random" and "not random" fail the law of the excluded middle. The same stream can be random (from some point of view, or for some observer) and not random (from some other point of view, or some other observer).
With any precise mathematical definition of randomness, there will be no violation of logic.
 
  • #120
Maybe this is a very naive attempt, but what if we just create 3 random streams of data

(that is: each stream is random in itself and in relation to the others, so that no one can predict any of the data of the same stream or of another stream, nor can one, by combining any two streams or even all 3 streams, extract any useful data from them),

labelled a, b, c, corresponding to detector settings A, B, C of Alice and Bob. If Alice picks A and Bob picks A, they get the same data; likewise for B and for C. However, if Alice picks a different setting than Bob, they get random outcomes. Does that match the criteria for breaking the inequality, or not?
 
Last edited:
  • #121
wm said:
Yes; it's called determinism. That is why, in a Bell test, when the detectors have (say) the same settings, the outcomes are identical (++, ++, --, ++, --, ...) (with no evidence of DrC's HUP). NEVERTHELESS, each perturbed particle (with its revealed observable) now differs from its pre-measurement state (with its often-hidden beable).
If the results follow in a deterministic way from the choice of measurement + the preexisting state, what difference does it make if the revealed observable differs from the preexisting state? In terms of the proof it's interchangeable...if we say the preexisting state is {A+, B+, C-}, you could either say that this means the particle's state was spin-up on axis A, spin-up on axis B, and spin-down on axis C, and that the measurement just reveals these preexisting spins, or you define the state {A+, B+, C-} to mean "the particle is in a state X such that if it is perturbed by a measurement on the A-axis, the deterministic outcome will be that it is measured to be spin-up; if it is perturbed by a measurement on the B-axis, the deterministic outcome will be that it is measured to be spin-up; and if it is perturbed by a measurement on the C-axis, the deterministic outcome will be that it is measured to be spin-down." Note that this second definition doesn't make any assumptions about what state X was actually like, just that the combination of the preexisting state X and a given measurement Y will always deterministically lead to the same outcome.
 
  • #122
wm said:
Yes; it's called determinism. That is why, in a Bell test, when the detectors have (say) the same settings, the outcomes are identical (++, ++, --, ++, --, ...) (with no evidence of DrC's HUP). NEVERTHELESS, each perturbed particle (with its revealed observable) now differs from its pre-measurement state (with its often-hidden beable).

AND NOTE: Prior to one or other measurement, our knowledge of the state is generally insufficient for us to avoid a probabilistic prediction; here 50/50 ++ XOR --. So, from locality, determinism is the underlying mechanism that delivers such beautifully correlated results from randomly delivered twins; HUP notwithstanding.

HUP stands for?
 
  • #123
JesseM said:
If the results follow in a deterministic way from the choice of measurement + the preexisting state, what difference does it make if the revealed observable differs from the preexisting state? In terms of the proof it's interchangeable...if we say the preexisting state is {A+, B+, C-}, you could either say that this means the particle's state was spin-up on axis A, spin-up on axis B, and spin-down on axis C, and that the measurement just reveals these preexisting spins, or you define the state {A+, B+, C-} to mean "the particle is in a state X such that if it is perturbed by a measurement on the A-axis, the deterministic outcome will be that it is measured to be spin-up; if it is perturbed by a measurement on the B-axis, the deterministic outcome will be that it is measured to be spin-up; and if it is perturbed by a measurement on the C-axis, the deterministic outcome will be that it is measured to be spin-down." Note that this second definition doesn't make any assumptions about what state X was actually like, just that the combination of the preexisting state X and a given measurement Y will always deterministically lead to the same outcome.

The difference is that one view is true and helpful, the other misleading and confusing:

The difference is that the post-measurement state is ''manufactured'' from the pre-measurement state by the chosen observation. One pre-state, differing manufacturing processes, differing post-states. Example: if I deliver vertically-polarised photons to you, you can manufacture various alternative polarisations from the choice of detector setting (manufacturing process).

Did the delivered (pristine, pre-measurement, virginal) photons carry this countable-infinity of polarisations BEFORE you ''measured'' (= manufactured = processed) them?
 
  • #124
heusdens said:
HUP stands for?


Heisenberg's Uncertainty Principle.
 
  • #125
heusdens said:
Maybe this is a very naive attempt, but what if we just create 3 random streams of data

(that is: each stream is random in itself and in relation to the others, so that no one can predict any of the data of the same stream or of another stream, nor can one, by combining any two streams or even all 3 streams, extract any useful data from them),

labelled a, b, c, corresponding to detector settings A, B, C of Alice and Bob. If Alice picks A and Bob picks A, they get the same data; likewise for B and for C. However, if Alice picks a different setting than Bob, they get random outcomes. Does that match the criteria for breaking the inequality, or not?

Does anybody have a comment on this?

It looks like if Alice and Bob choose unequal detector settings they now get random values, so the probability of equal (or unequal) values is 50%.

This is larger than what we expect from the Bell Inequality (1/3)?
 
  • #126
heusdens said:
Maybe this is a very naive attempt, but what if we just create 3 random streams of data

(that is: each stream is random in itself and in relation to the others, so that no one can predict any of the data of the same stream or of another stream, nor can one, by combining any two streams or even all 3 streams, extract any useful data from them),

labelled a, b, c, corresponding to detector settings A, B, C of Alice and Bob. If Alice picks A and Bob picks A, they get the same data; likewise for B and for C. However, if Alice picks a different setting than Bob, they get random outcomes. Does that match the criteria for breaking the inequality, or not?
No, it won't violate either inequality. Suppose at the moment Bob is picking a letter, his computer is receiving + from stream a, + from stream b, and - from stream c (meaning that if he types A he'll see + on the screen, if he types B he'll see +, and if he types C he'll see -). To reproduce the rule of the monitor experiment that typing the same letter always gives opposite symbols, Alice's computer at the same moment must be getting - from her stream a, - from her stream b, and + from her stream c.

You could represent this by saying that at this moment Bob's computer is primed in state {a+, b+, c-} based on its data streams, and Alice's computer is primed in state {a-, b-, c+} based on its own streams. Likewise, at another moment Bob's computer could be primed in state {a-, b+, c-} and Alice's would be primed in state {a+, b-, c+}. This is just like the assumption that each particle they receive has a definite spin on all three axes.

So, consider the inequality:

(number of trials where Bob's computer is primed in a state that includes a+ and b-) plus (number of trials where Bob's computer is primed in a state that includes b+ and c-) is greater than or equal to (number of trials where Bob's computer is primed in a state that includes a+ and c-)

If you think about it, you should be able to see why this must be true. On every trial where Bob's computer is "primed in a state that includes a+ and c-", the computer must either be primed in the state {a+, b+, c-} or in the state {a+, b-, c-}; there aren't any other possibilities. If it's primed in the state {a+, b-, c-}, then this must also be a trial in which it's "primed in a state that includes a+ and b-", so it contributes to the number on the left side of the inequality as well as the number on the right side. But if it's primed in the state {a+, b+, c-}, then this must be a trial in which it's "primed in a state that includes b+ and c-", so it still contributes to the left side of the inequality. Either way, every trial that contributes 1 to the right side of the inequality must also contribute 1 to the left side, so the number on the left side must always be greater than or equal to the number on the right side.

Of course, Bob only finds out the state of one of his three streams on each trial. But under the assumption that Alice's stream at a given moment is always the opposite of Bob's, if Bob types A and gets + while Alice types B and gets +, that implies that Bob's computer must have been primed in a state that includes a+ and b-. So assuming Alice and Bob both pick letters randomly with equal frequencies, we can rewrite the inequality in terms of their actual measurements as:

(probability that Bob types A and gets + while Alice types B and gets +) plus (probability that Bob types B and gets + while Alice types C and gets +) is greater than or equal to (probability that Bob types A and gets + while Alice types C and gets +).

If the computers decide their output based on your data-stream method, the inequality above should be satisfied over a large number of trials.

And to understand why the other inequality I mentioned (the one saying that if they pick different letters, the probability of their getting opposite answers should be greater than or equal to 1/3) will be satisfied too using your datastream method, here's a slight modification of my post #10:
if we imagine Bob's computer is primed in state {a+, b-, c+} and Alice's computer is primed in state {a-, b+, c-} then we can look at each possible way that Alice and Bob can randomly choose different letters to type, and what the results would be:

Bob picks A, Alice picks B: same result (Bob gets a +, Alice gets a +)

Bob picks A, Alice picks C: opposite results (Bob gets a +, Alice gets a -)

Bob picks B, Alice picks A: same result (Bob gets a -, Alice gets a -)

Bob picks B, Alice picks C: same result (Bob gets a -, Alice gets a -)

Bob picks C, Alice picks A: opposite results (Bob gets a +, Alice gets a -)

Bob picks C, Alice picks B: same result (Bob gets a +, Alice gets a +)

In this case, you can see that in 1/3 of trials where they pick different letters, they should get opposite results. You'd get the same answer if you assumed any other heterogeneous primed state where there are two of one outcome and one of the other, like {a+, b+, c-}/{a-, b-, c+} or {a+, b-, c-}/{a-, b+, c+}. On the other hand, if you assume a homogeneous primed state where each of the three datastreams tells the computer to give the same output, like {a+, b+, c+}/{a-, b-, c-}, then even if Alice and Bob pick different keys to type they're guaranteed to get opposite results on their screens with probability 1. So if you imagine that, over the course of many trials, the datastreams generate inhomogeneous primed states like {a+, b-, c-}/{a-, b+, c+} on some fraction of trials and homogeneous primed states like {a+, b+, c+}/{a-, b-, c-} on others, then the probability of getting opposite answers when they type different letters should be somewhere between 1/3 and 1. 1/3 is the lower bound, though--even if the datastreams were made in such a way that the computer was in inhomogeneous primed states in 100% of the trials, it wouldn't make sense for Alice and Bob to get opposite answers in less than 1/3 of trials where they typed different letters.
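To make this concrete, here is a rough simulation of the datastream method (a sketch of mine, assuming the source picks a uniformly random primed state for Bob on each trial, gives Alice the opposite one, and that the letter choices are uniformly random); it checks both inequalities empirically:

[code]
# Rough simulation of the datastream scheme: each trial the source sends
# Bob a random primed state and Alice the opposite one; each experimenter
# types a random letter. We then check the two inequalities above.
import random

TRIALS = 100_000
letters = ["A", "B", "C"]

n_ab = n_bc = n_ac = 0           # counts for inequality 1 (Bob +, Alice +)
diff_trials = diff_opposite = 0  # counts for inequality 2

for _ in range(TRIALS):
    bob_state = {k: random.choice("+-") for k in letters}
    alice_state = {k: ("-" if v == "+" else "+") for k, v in bob_state.items()}

    a_choice, b_choice = random.choice(letters), random.choice(letters)
    a_result, b_result = alice_state[a_choice], bob_state[b_choice]

    if b_choice == "A" and b_result == "+" and a_choice == "B" and a_result == "+":
        n_ab += 1
    if b_choice == "B" and b_result == "+" and a_choice == "C" and a_result == "+":
        n_bc += 1
    if b_choice == "A" and b_result == "+" and a_choice == "C" and a_result == "+":
        n_ac += 1

    if a_choice != b_choice:
        diff_trials += 1
        if a_result != b_result:
            diff_opposite += 1

print("Inequality 1 holds:", n_ab + n_bc >= n_ac)
print("P(opposite | different letters) =", diff_opposite / diff_trials)  # >= 1/3
[/code]

Run over many trials it prints a probability of about 0.5 for opposite answers on different letters--comfortably above the 1/3 bound, never below it.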
 
  • #127
heusdens said:
Does anybody have a comment on this?

It looks like if Alice and Bob choose unequal detector settings they now get random values, so the probability of equal (or unequal) values is 50%.

This is larger than what we expect from the Bell Inequality (1/3)?
The Bell inequality says that if they type different letters (choosing different streams under your method), the probability that they'll get opposite answers should be greater than or equal to 1/3. It's only violated if the probability ends up being smaller than 1/3.
 
Last edited:
  • #128
wm said:
The difference is that one view is true and helpful, the other misleading and confusing:

The difference is that the post-measurement state is ''manufactured'' from the pre-measurement state by the chosen observation. One pre-state, differing manufacturing processes, differing post-states. Example: if I deliver vertically-polarised photons to you, you can manufacture various alternative polarisations from the choice of detector setting (manufacturing process).

Did the delivered (pristine, pre-measurement, virginal) photons carry this countable-infinity of polarisations BEFORE you ''measured'' (= manufactured = processed) them?
But the question is irrelevant to the proof of the Bell inequality. As long as you assume that the result of each possible measurement is predetermined, and that this includes the fact that the two experimenters are predetermined to get opposite spins if they measure along the same axis, then the Bell inequalities follow; there's nothing in the proof that requires you to assume you are just measuring a preexisting spin on that axis without perturbing it.
 
  • #129
JesseM said:
But the question is irrelevant to the proof of the Bell inequality. As long as you assume that the result of each possible measurement is predetermined, and that this includes the fact that the two experimenters are predetermined to get opposite spins if they measure along the same axis, then the Bell inequalities follow; there's nothing in the proof that requires you to assume you are just measuring a preexisting spin on that axis without perturbing it.

Consider Bell (1964) and identify the un-numbered equations between (14) and (15) as (14a), (14b), (14c).

1. I'd welcome your detailed comment on Bell's move from (14a) to (14b), bound by your claim that there's nothing in the proof that requires you to assume you are just measuring a pre-existing spin on that axis without perturbing it.

2. That is, inter alia, please specify the FUNCTIONS A and B that satisfy Bell's move.

Thanks, wm
 
  • #130
wm said:
Consider Bell (1964) and identify the un-numbered equations between (14) and (15) as (14a), (14b), (14c).

1. I'd welcome your detailed comment on Bell's move from (14a) to (14b), bound by your claim that there's nothing in the proof that requires you to assume you are just measuring a pre-existing spin on that axis without perturbing it.

2. That is, inter alia, please specify the FUNCTIONS A and B that satisfy Bell's move.

Thanks, wm
I don't have this paper--is it online? In any case, if you're not worried about perfect mathematical rigor it's quite easy to prove that various versions of Bell's inequality must hold if particles have predetermined responses to all measurements and locality is obeyed--can you identify a flaw in the short proofs I gave in post #126 to heusdens, for example? Based on these sorts of simple proofs, I'm confident that even if Bell assumed measurements simply revealed preexisting spins (or whatever variable is being measured) in his original proof, it would be a fairly trivial matter to modify the proof to remove this assumption.
 
Last edited:
  • #131
heusdens said:
Yes, but the point that you miss is that any detectable object is spatially spread; this would then mean, by your same reasoning, that it also contains "independent objects" rather than being one object! So, if the world is treated on that account, independent objects wouldn't exist!

Indeed, relativistically, extended solid objects do not exist as a single entity - except for hypothetical lightlike objects such as strings. But your "solid body" does not have a relativistic meaning as such; it must always be seen as a bound state of smaller objects.
One way to see this is that otherwise the extended object would be able to transmit signals faster than light. Relativity requires that the speed of sound in a body be smaller than the speed of light, which implies finite elasticity and hence "independent parts" which can move wrt each other.

heusdens said:
So when can we know whether an object - any object at all! - can be treated as one object, or as a constellation of independent objects?

The "one object" treatment is always an approximation, and it depends on the situation at hand to know whether the approximation will yield good enough results for the application at hand.


heusdens said:
Read the longer post which refutes MWI on logical grounds.

It's erroneous, as I showed...


heusdens said:
1 cloud + 1 cloud might equal 1 cloud; that is, the clouds themselves may merge, and what we previously saw as two separate clouds becomes one new cloud.
Still, in numbers/abstract form 1+1=2 still applies; only the underlying reality we speak about does not hold to this formality.
This is of course because what is a requirement for logic, that we can speak of independent and separable "objects", is not a requirement for the world itself.

There is no requirement in logic to "talk about independent objects" or whatever. Logic is about the truth or falsehood of statements.
Logic is rather: I saw two clouds. I saw a dog. Hence, the statement "I saw two clouds and I saw a dog" is also true. It doesn't imply any kind of conservation law for statements about physical objects.


heusdens said:
For example, you could make a logically valid statement about an object, and from the dynamics of the situation you could describe its motion, which would incorporate making statements about where in the world the object would need to be found at any given time.
So this would formally state that an object at some given time would either be at location x or not be at location x, but not both or something else.
So there you already see the limitations of such formalism.

This is again not a requirement of logic, but of the hypotheses that are built into the physical theory at hand, and if the result turns out not to be correct, the hypotheses have a problem.
You need to make many hypotheses even to be able to make the statement that to an object is associated a point in a Euclidean space. This doesn't need to be so at all, but you can make that hypothesis. You also need to make the hypothesis that this association is unique. If you then find that you should assign two points in a Euclidean plane to a single object, then this simply means that your hypothesis of this unique association was erroneous. That's btw what quantum theory does.

heusdens said:
The question then is: is there a complete and consistent description of the world possible at all, in which what we recognize on abstract/formal and mathematical grounds as true is also true in the real world?

Nobody knows. The only thing we know is that there are abstract/formal mathematical theories which give us rather good results concerning statements of observation. It would probably be naive to think that they are ultimately correct, but they are good enough. It now seems that we have a mathematical model (built upon standard logic) which gives us a prediction for EPR situations, which seems to be in agreement with observation. As such, all that can be done is done.

heusdens said:
The result can be generalized further to formalized systems.
But I'm not exactly sure about what constraints the formalized or formalizable system must have.

That's simple: it must make correct predictions of observations! That's the single one and only requirement.

heusdens said:
No, this is completely wrong, in the sense that the limitations of formal logic were discovered a long time before quantum mechanics showed us these paradoxes.

But quantum mechanics doesn't show us any "paradox" in the sense that we derive two different and contradicting results WITHIN QUANTUM THEORY. Quantum theory (which is a formal theory based upon standard logic) gives us unambiguously the correct result, which is also experimentally observed. This shows us that there is no "problem of logic" there. We only have difficulties *believing* what quantum mechanics tells us, that's all. So we try to explain it DIFFERENTLY, with EXTRA REQUIREMENTS. And THEN we run into trouble.

So, given that a formal theory based upon standard logic gives us the correct predictions, it would be a strange reaction to want to change standard logic (or the theory that makes the correct predictions). It simply means that we misjudged the explanatory power of the theory at hand, by thinking that it was "just a statistical tool which must have some underlying mechanism or other". It is that last assumption which fails.
 
  • #132
JesseM said:
I don't have this paper--is it online? In any case, if you're not worried about perfect mathematical rigor it's quite easy to prove that various versions of Bell's inequality must hold if particles have predetermined responses to all measurements and locality is obeyed--can you identify a flaw in the short proofs I gave in post #126 to heusdens, for example? Based on these sorts of simple proofs, I'm confident that even if Bell assumed measurements simply revealed preexisting spins (or whatever variable is being measured) in his original proof, it would be a fairly trivial matter to modify the proof to remove this assumption.

1. I think the paper is available from DrC's website. I'm happy to wait for you to access it.

2. I suggest we stick with it (ie, Bell 1964) because it is well-known that Bell's theorem is applicable to dirty socks, downhill skiers, computers, students, ... .

3. So (I hope), the question we are addressing is this: Why is Bell's theorem (BT) invalid for the original case-study (EPRB; ie Bell 1964) and similar settings??

4. That is: Why are Bellian inequalities FALSE in the settings that BT was designed to illuminate? (Hint: Examine those totally simplistic settings in which it holds!)

If this is not OK, or you have a better idea, let me know. Regards, wm
 
  • #133
wm said:
2. I suggest we stick with it (ie, Bell 1964) because it is well-known that Bell's theorem is applicable to dirty socks, downhill skiers, computers, students, ... .
Although I tailored the short proofs I gave above to a particular thought-experiment, it's quite trivial to change a few words so they cover any situation where two people can measure one of three properties and they find that whenever they measure the same property they get opposite results. If you don't see how, I can do this explicitly if you'd like.
wm said:
3. So (I hope), the question we are addressing is this: Why is Bell's theorem (BT) invalid for the original case-study (EPRB; ie Bell 1964) and similar settings??
I am interested in the physics of the situation, not in playing a sort of "gotcha" game where if we can show that Bell's original proof did not cover all possible local hidden variable explanations then the whole proof is declared null and void, even if it would be trivial to modify the proof to cover the new explanations we just thought up as well. I'll try reading his paper to see what modifications, if any, would be needed to cover the case where measurement is not merely revealing preexisting spins, but in the meantime let me ask you this: do you agree or disagree that if we have two experimenters with a spacelike separation who have a choice of 3 possible measurements which we label A,B,C that can each return two possible answers which we label + and - (note that these could be properties of socks, downhill skiers, whatever you like), then if they always get opposite answers when they make the same measurement on any given trial, and we try to explain this in terms of some event in both their past light cone which predetermined the answer they'd get to each possible measurement with no violations of locality allowed (and also with the assumption that their choice of what to measure is independent of what the predetermined answers are on each trial, so their measurements are not having a backwards-in-time effect on the original predetermining event, as well as the assumption that the experimenters are not splitting into multiple copies as in the many-worlds interpretation), then the following inequalities must hold:

1. Probability(Experimenter #1 measures A and gets +, Experimenter #2 measures B and gets +) plus Probability(Experimenter #1 measures B and gets +, Experimenter #2 measures C and gets +) must be greater than or equal to Probability(Experimenter #1 measures A and gets +, Experimenter #2 measures C and gets +)

2. On the trials where they make different measurements, the probability of getting opposite answers must be greater than or equal to 1/3
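For reference, here is a compact version of the counting argument behind inequality 1 (my shorthand: P(A+B-) is the probability that the source predetermined experimenter #1 to get + on A and - on B; recall that experimenter #2 measuring B and getting + reveals that experimenter #1 was predetermined to get - on B). Since every pair predetermined to give experimenter #1 A+ and C- must be predetermined to give either B+ or B-:

[tex]P(A^{+}C^{-}) = P(A^{+}B^{+}C^{-}) + P(A^{+}B^{-}C^{-}) \leq P(B^{+}C^{-}) + P(A^{+}B^{-})[/tex]

which, translated into what the two experimenters actually observe, is exactly inequality 1.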
 
Last edited:
  • #134
heusdens said:
Sorry, what does HUP stand for?

I do not exactly conform myself to any of such explanations because, for one thing, they basically shift the problem to some other department of physics without resolving it (we would in some of these explanations, for example, have to reconsider relativity, since it undermines its basic premises, or otherwise undermine other basic premises about our understanding of the world, or introduce arbitrary new phenomena, like many worlds, etc.).

So, actually I am trying to figure things out in a more substantial way.

The references to dialectics were meant to give a clue to this, because dialectics tries to escape from the one-sidedness of these formal mathematical explanations, and instead give a full picture of what can be regarded as truth.

[ Perhaps not everyone is happy with that, because dialectics is not specifically related to quantum physics, and such discussions are meant to occur in the forums meant for philosophical topics, yet most of such threads are rather worthless, since most topics are rather unconcrete. ]

Sorry, I usually give my abbreviations...

HUP = Heisenberg Uncertainty Principle.

As to more "substantial" treatments... we are all interested in this as well. I caution you that it is an error to think that this has not been explored in depth by many folks. If you check the Preprint Archives you will find hundreds of papers like Travis Norsen's (well not exactly like...) - in the past year alone - that look at Bell from every conceivable angle. Most of these papers claim to add new insight, but precious few are likely to be remembered years from now.

Also, please keep in mind that there are forum guidelines regarding personal theories. You keep referring to "dialectics" which I have never seen mentioned in regards to Bell's Theorem. Unless this has some direct bearing on this thread topic, I would recommend staying away from this.
 
  • #135
wm said:
To bring clarity to the discussion, let's allow that there is: naive realism, strong realism, EPR realism, Bell realism, Einstein realism, ... (In my view: naive realism = strong realism = EPR realism = Bell realism = silliness.)

Now I am not aware that Einstein ever endorsed the other versions, so could you let me have the full quotes and sources that you rely on?

Note: We are not looking for Einstein's support of ''pre-measurement values'' BUT for the idea that measurement does NOT perturb the measured system (for that is the implicit silliness with EPR, Bell, etc). Einstein (1940, 1954) understood that the wave-function plus Born-formula related to the statistical prediction of ''measurement outcomes'' and NOT pre-measurement values.

Einstein of course supported what you call "pre-measurement values". This is because he said:

"I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it."

I cannot find a date and exact source.
 
  • #136
wm said:
DrC, could you please expand on this interesting position? As I read it, you have a local comprehension of Bell-test results? (I agree that there is such, but did not realize that you had such.)

However, your next sentence is not so clear: By definition, an observable is observation dependent, so you seem to be saying that there are no underlying quantum beables?? There is no ''thing-in-itself''??

Personally: I reject BM on the grounds of its non-locality; and incline to endorse MWI with its locality, while rejecting its need for ''many worlds''.

I tend to support a rejection of realism rather than a rejection of locality (in order to reconcile with Bell's Theorem). I do not know if there are beables, but there definitely are observables. I do not know, for instance, if there is a one-to-one mapping of observables to beables. My guess would be that there is not, since there can be a nearly infinite number of observables for a single particle.

I definitely do not agree that it is a closed question (i.e., settled by definition) that an observable must be observer dependent. That is one of the questions we seek the answer to. I happen to think it is, but I do not expect others to necessarily agree with this position. I believe that the Heisenberg Uncertainty Principle essentially calls for this position.
 
  • #137
wm said:
Yes; it's called determinism. That is why, in a Bell test, when the detectors have (say) the same settings, the outcomes are identical (++, ++, --, ++, --, ...) (with no evidence of DrC's HUP). NEVERTHELESS, each perturbed particle (with its revealed observable) now differs from its pre-measurement state (with its often-hidden beable).

Ah, but the Heisenberg Uncertainty Principle (HUP) is quite present in such cases! Note that we cannot learn MORE information than the HUP allows about one particle by studying its entangled twin!
 
  • #139
heusdens said:
Does anybody want to comment on this?

Looks like if Alice and Bob choose unequal detector settings they now get random values, so the probability of equal (or unequal) values is 50%.

This is larger than what we expect from the Bell inequality (1/3)?

As JesseM has pointed out: you actually get values as low as 25% in actual Bell test situations - not the 50% you imagine. The reason is that there is (anti)correlation between the results of unequal detector settings. So I hope we all see this point.

I have a web page that shows the cases (AB, BC, AC as red/yellow/blue) and may help anyone to visualize the situation:

Bell's Theorem with Easy Math
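
For concreteness, and as an assumption on my part (the posts above don't fix the angles): in the spin-singlet version of this setup with the three measurement axes 120 degrees apart, the quantum prediction for opposite answers at unequal settings is cos²(θ/2), which gives exactly the 25% figure:

```python
import math

# Assumed geometry: settings A, B, C lie 120 degrees apart (the usual
# 0/120/240 example); this choice is mine, not stated in the posts above.
theta = math.radians(120)

# Singlet-state prediction for the probability of OPPOSITE answers when
# the two detector settings differ by theta.
p_opposite = math.cos(theta / 2) ** 2
print(p_opposite)  # 0.25, below the local-realistic bound of 1/3
```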
 
  • #140
DrChinese said:
As JesseM has pointed out: you actually get values as low as 25% in actual Bell test situations - not the 50% you imagine. The reason is that there is (anti)correlation between the results of unequal detector settings. So I hope we all see this point.

I have a web page that shows the cases (AB, BC, AC as red/yellow/blue) and may help anyone to visualize the situation:

Bell's Theorem with Easy Math

Yeah, the example I gave was obviously wrong, because I didn't get the inequality right and took too simple an approach.

I want to perform some thought experiments and get everyone's reactions to them (and please correct me where I get things wrong).

We have the (formally described) setup of two detectors, each of which has a setting of A, B, or C, and each setting (of which only one at a time is used) produces a value of '+' or '-'. Right?
We then also have this source that produces output to both detectors.
We have no prior idea about how the source and detector settings correspond to output values.

First let us imagine there wasn't a source at all. We then have two 'devices' with settings A, B and C that produce either a '+' or a '-' when set. As in the previous case, only one of A, B or C can be set on each device.
The devices are now some kind of black box. We don't know anything about their internals: whether the result is produced from something within, or whether there is some signal going in. Either way, we get a result.

First we inspect individual data from the detectors.

For both detectors and for every setting we use, we get + and - in equal amounts; that is, the chance of getting either + or - is 50% (or 0.5).

{is this assumption correct?}

Now we inspect results from both detectors, and see how they compare.

First remarkable thing:

I.

If the detectors have an equal setting, then the results are either ++ or --. The positive correlation (= same result from both detectors) is 100% (or 1).

{Questions:
a. can we still assume that each individual detector produces a truly random result?
b. does the correlation only hold for exactly simultaneous results?
c. are the chances of either ++ or -- 50% each??

It would be weird if b and/or c did not hold while a still held...
}

Second remarkable thing:

II.

If the detectors have an unequal setting, then we find results of +- and -+, that is, negative correlation (= unequal results from the detectors), happening with a chance of 25% (or 0.25).

{Same questions as above, but now for c: are the chances of +- and -+ 50% each??}

Now how can we explain this??

We first try to find independent explanations for the two separate observations.

First let us look at I. (detector settings equal)

We can make all kinds of suggestions about how this could be the case.

For example, we could assume that both detectors contain exactly the same algorithm with which to produce the data. Each result separately is random, but the results on the two sides are always the same. The algorithm can work because what the detectors still have in common is time, and possibly also other easily overlooked common sources (an external light source common to both observers, and other such sources). A sketch of this idea follows below.

{the assumption here is that if we take the detector results 'out of sync', for instance data of detector 1 at time t and data of detector 2 at time t + delta t (delta t > 0), these correlations are not produced; is that a realistic assumption??}
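
Here is a minimal sketch of this "identical algorithm plus shared clock" idea; the seeding scheme is hypothetical, purely for illustration. Each detector feeds the shared trial timestamp into the same pseudorandom generator, so each output stream looks random on its own, the synchronized streams agree perfectly, and streams taken out of sync stop agreeing:

```python
import random

def detector_output(timestamp, setting):
    # Identical algorithm in both detectors; the only shared resource is
    # the trial timestamp (a stand-in for a common clock).
    rng = random.Random(f"{timestamp}:{setting}")
    return '+' if rng.random() < 0.5 else '-'

# Synchronized trials with equal settings: the two streams always agree.
print(all(detector_output(t, 'A') == detector_output(t, 'A') for t in range(1000)))

# Out of sync by delta t = 1: agreement drops to roughly 50%, matching the
# bracketed assumption above.
agree = sum(detector_output(t, 'A') == detector_output(t + 1, 'A') for t in range(1000))
print(agree / 1000)
```

This reproduces observation I, but it says nothing yet about the unequal-settings statistics in II.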

A less trivial approach is to suspect that detector 1 has received a signal from detector 2, somehow knows that its setting is the same, and can produce the positively correlated result. The signal need not be instantaneous to explain this (if the signal contains a timestamp). The weird thing about this explanation is that it breaks the symmetry, since we could equally well suppose that detector 2 somehow gets a signal from detector 1. If we assume symmetry, both signals would occur in this explanation. But then how could we get the correlation we see, based on both signals? In the asymmetric case we would have no trouble finding a possibility for correlation, since only one detector would have to adjust itself to produce the corresponding output. It is more difficult for this to happen in the symmetric case (both adjustments would cancel out), but if we assume that the setup is symmetric, we have to assume just that. This can then be shown to be equivalent to the case in which both detectors receive a simultaneous signal telling them that the detector settings match, so both detectors can make equal and simultaneous adjustments. It is like postulating that exactly in the middle between the detectors (so that signals arrive simultaneously) there is a receiver/transmitter that receives the detector settings and transmits back whether they are equal or not.

Now we look at II. (detector settings unequal)

We get a 0.25 chance that we have unequal results (+- or -+).
This is the same as a 0.75 chance of having equal results (++ or --).

Both detectors individually (if I assume correctly) still produce random results, but the results from the two detectors are now equal in 3 out of 4 cases on average, which is the same as saying they are unequal in 1 out of 4 cases on average.

In principle we can now suppose that the same kinds of things that were supposed to explain the outcomes in the previous case also happen here, with the exception that the generated output is ++ or -- not always, but only in 3 out of 4 cases.
This just amounts to assuming a different algorithm to produce that result; one concrete version of such an algorithm is sketched below.
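
For what it's worth, the statistics in I and II (100% equal results at equal settings, 75% equal results at unequal settings) can be reproduced by a purely local shared-source algorithm. Here is one concrete construction; the weights are my own choice, picked to hit exactly 3/4:

```python
from itertools import product
from fractions import Fraction

settings = ('A', 'B', 'C')

# The source sends the SAME instruction triple to both detectors, and each
# detector just reads off the entry for its current setting. Weights (mine):
# the two all-same triples get 5/16 each, the six mixed triples 1/16 each;
# this also keeps each detector's individual output 50/50.
weights = {}
for triple in product('+-', repeat=3):
    weights[triple] = Fraction(5, 16) if len(set(triple)) == 1 else Fraction(1, 16)

def match_rate(unequal):
    # Average, over setting pairs and instruction triples, of the chance
    # that the two detectors print the same symbol.
    pairs = [(i, j) for i in range(3) for j in range(3) if (i != j) == unequal]
    return sum(w * Fraction(sum(t[i] == t[j] for i, j in pairs), len(pairs))
               for t, w in weights.items())

print(match_rate(unequal=False))  # 1   -> always ++ or -- at equal settings
print(match_rate(unequal=True))   # 3/4 -> unequal results 25% of the time
```

This is only possible because 3/4 is above the 1/3 lower bound on the unequal-settings match rate for such models; the quantum prediction of a 25% match rate (DrChinese's figure) is what no choice of weights can reproduce.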

Now we try to combine explanations I and II.

For each of the explanations I and II separately, we could assume that something purely internal generated the outcomes. But when I and II occur together, we have no way of explaining this.
So this already urges us to assume that the detector states (settings) are transmitted to a common source, exactly in the middle (that is, in the orthogonal plane which intersects the line between the two detectors at its midpoint).

We can also verify in the experiment (by placing the detectors very far apart, changing the detector settings simultaneously, and still getting instant correlations) that this hypothetical signal would have to travel as if instantaneously.
To cope with that, the hypothetical assumption is that this is like a signal that travels backwards in time from each detector to a common source, and then forwards in time to the detectors.

Conclusion:
Although we did not set up this imaginary experiment with such a common source, it already follows from the results of the experiment that such a common source must be assumed, one which communicates back and forth between the detectors.
 
