Are entanglement correlations truly random?

In summary: yes, there could be regions of varying sizes within the universe in which there is "less-than-maximum" entanglement between member particles. I would guess such regions are small, although I suppose there is no specific way to check that. If the member particles within the hypothetical region follow some kind of monogamy rule whereby they are entangled only with each other, then they would be considered maximally entangled.
  • #36
entropy1 said:
I would say those sources are not independent. If they are, there would have to be hidden variables, but that would yield different results. Otherwise I can't imagine such an experiment.

Just to be specific:

https://arxiv.org/abs/0809.3991
"High-fidelity entanglement swapping with fully independent sources"
Rainer Kaltenbaek, Robert Prevedel, Markus Aspelmeyer, Anton Zeilinger

The swapping operation entangles a pair of photons that were produced from fully independent sources. Of course the pairs are post-selected, but their polarization will be as random as it gets. And correlated with each other.
 
  • #37
Chris Miller said:
Your remark got me wondering, is there any evidence that entanglement actually involves any interaction between particles? As in, do they actually exert any sort of influence on each other, or are they just similarly seeded/configured? Analogous say to a "random" number generating algorithm, which will always, given the same seed, generate the same sequence of values, regardless of frame of reference. With the particles, the seed is physically configured, so maybe more akin to similarly loaded dice.

Well in some respects, I would agree: entangled particles are similarly "seeded". But I would exercise restraint with that statement too. Bell tells us that they can't contain local hidden variables that predetermine outcomes. For example, a pair entangled as to spin is in a superposition until measured.
 
  • #38
entropy1 said:
It is just that the correlated data are random in every other alignment. They would be random, were it not that in exactly one alignment they correlate. This differs from independent random sources. It is this difference that sets correlated data apart, in my view.

You seem to me to be making very heavy weather of the notion of statistical dependence in these posts - but then again, as I've said, maybe I'm missing the point of what you're saying.

I'm not sure what your notion of 'alignment' is trying to say - it's kind of obvious/trivially true and I can't yet see the utility of it. Take a uniformly random binary string*. Clearly there's no correlation between bit ##j## and bit ##k## (otherwise it wouldn't be a uniformly random string). But that's all you're saying with the notion of alignment you've described.

I can't really see that you're saying anything different in your posts than this: if we have dependent distributions for two events ##A## and ##B##, then the joint distribution ##p(A,B)## is not equal to the product of the marginal distributions ##p(A)p(B)##.

But I may very well have misunderstood what you're trying to say.

In information terms we would say that two distributions are correlated (dependent) if knowledge of one of the events reduces our uncertainty about the other event. Coming back to Chris' allusion to crypto, we can see this is a useful parameter. If we have access to a ciphertext (eavesdropping), does this reduce our uncertainty about the message that was sent? If it does, then there is some information about the message we can recover from the ciphertext. So, assuming a secret key (that is, maximal uncertainty about the key), the only way to have zero information about the message in the ciphertext is to use a one-time pad. So in all crypto (except one-time pads) there *is* information in the ciphertext - the idea is to make this information unrecoverable to a polynomial-time adversary.

*"string" here is again used as a shorthand because we're really properly talking about distributions.
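Simon's one-time-pad point can be made concrete with a minimal Python sketch (all names here are illustrative, not from any source): with a uniformly random key, every ciphertext is equally likely for every message, so the ciphertext alone reduces the eavesdropper's uncertainty by exactly zero bits.

```python
import secrets

def xor_bits(a, b):
    """XOR two equal-length bit lists (one-time-pad encryption/decryption)."""
    return [x ^ y for x, y in zip(a, b)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
key = [secrets.randbelow(2) for _ in message]  # uniformly random, secret key
ciphertext = xor_bits(message, key)

# With a uniform key, every possible ciphertext is equally likely no matter
# what the message is, so the ciphertext carries zero information about it.
# Only the legitimate holder of the key can invert the XOR:
recovered = xor_bits(ciphertext, key)
```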
 
  • #39
DrChinese said:
Just to be specific:

https://arxiv.org/abs/0809.3991
Do you mean entanglement swapping?

Simon Phoenix said:
You seem to me to be making very heavy weather of the notion of statistical dependence in these posts - but then again, as I've said, maybe I'm missing the point of what you're saying.
My proposal is, I think, really straightforward to get. I'll see if I can work my code into a graphic illustration. Maybe that illuminates the idea a bit. :smile:
 
  • #40
entropy1 said:
Do you mean entanglement swapping?

Yes, per the title of the paper. :smile:

The resulting entangled photons (after the swap) are from fully independent sources and have never interacted or even existed in a common light cone. The sources are phase locked together via a synchronizing signal. Note that the swap can occur after the entangled pair is observed, and in fact the entangled photons need not have ever existed at the same time.
 
  • #41
DrChinese said:
Yes, per the title of the paper. :smile:

The resulting entangled photons (after the swap) are from fully independent sources and have never interacted or even existed in a common light cone. The sources are phase locked together via a synchronizing signal. Note that the swap can occur after the entangled pair is observed, and in fact the entangled photons need not have ever existed at the same time.
You could also see that as two pairs of entangled particles (or measurements), each internally dependent, that become dependent on each other through the swap. But then you have to suppose that an entangled pair is inherently dependent, which does not seem that unreasonable to me. :rolleyes:
 
  • #42
entropy1 said:
You could also see that as two pairs of entangled particles (or measurements), each internally dependent, that become dependent on each other through the swap.

That's how I see it. Of course, the swap occurs in a separate volume of spacetime from either member of the final entangled pair. And it can occur in any causal sequence relative to the creation or measurement of the final entangled pair.

In my view, the resulting outcome pairs are truly random even if redundant.
 
  • #43
DrChinese said:
That's how I see it. Of course, the swap occurs in a separate volume of spacetime from either member of the final entangled pair. And it can occur in any causal sequence relative to the creation or measurement of the final entangled pair.

In my view, the resulting outcome pairs are truly random even if redundant.
But the final measurements (particles 1 and 4) would be dependent, right?
 
  • #44
entropy1 said:
But the final measurements (particles 1 and 4) would be dependent, right?

Particles 1 & 4 (the final entangled pair) are fully independent in the sense that they were never in the same region of spacetime, nor did they ever interact with a particle that had previously interacted with the other in any manner.

The dependent part would only be that there is a correlation.
 
  • #45
DrChinese said:
The dependent part would only be that there is a correlation.
Correlated, strangely similar maybe, but given that they don't even have to exist at the same time, it's hard to conceive of a connection... unless through some higher dimension?
 
  • #46
DrChinese said:
The dependent part would only be that there is a correlation.
And entanglement. :wink:
 
  • #47
entropy1 said:
My proposal is, I think, really straightforward to get.

Yes - basically think of putting your binary string on a wheel. Now copy it and put this on an 'inner' wheel. Rotate the inner wheel. With overwhelming probability there's only one position of the two wheels where there is perfect correlation between the bits in the same positions on the wheels.

Now do the same but with 2 independently produced random strings - now, with overwhelming probability, there will be no position we can rotate to for which there's a perfect correlation.
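The two wheels can be simulated directly. A rough Python sketch (all names mine), which just counts matching bits at each rotation:

```python
import random

n = 64
outer = [random.randrange(2) for _ in range(n)]
inner = outer[:]  # exact copy placed on the 'inner' wheel

def matches(a, b, shift):
    """Count positions where b, rotated by `shift`, agrees with a."""
    return sum(a[i] == b[(i + shift) % len(a)] for i in range(len(a)))

# For the copied string, shift 0 always gives perfect correlation; any other
# shift almost certainly doesn't (unless the string happens to be periodic).
perfect_shifts = [s for s in range(n) if matches(outer, inner, s) == n]

# For an independently generated string there is, with overwhelming
# probability, no shift at all that gives perfect correlation.
independent = [random.randrange(2) for _ in range(n)]
perfect_indep = [s for s in range(n) if matches(outer, independent, s) == n]
```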

But so what? I really don't see where you're going with this, or how it is helpful. There may well be something useful in this perspective - it's just I'm not seeing it yet.

All you're talking about here is the notion of statistical dependence - kind of statistics and probability 101. You're just talking about the difference between independent and dependent events. So far there's precious little in any of this to do with entanglement. The fact that things can be perfectly correlated is not the defining feature of entanglement (look up Bertlmann's socks).

Another model that's useful - think of a binary symmetric communication channel. We have Alice's input and Bob's output.

Alice inputs the symbol 1 with probability 1/2 and the symbol 0 with probability 1/2.

For the symmetric channel Bob will record the symbol 1 with probability 1/2 and the symbol 0 with probability 1/2.

Now if there's no noise on the channel, then if Alice inputs 1, Bob receives a 1. The same goes for the symbol 0. In this case there's perfect correlation and the message is received without error (the coins joined by a rigid rod example). If there's perfect noise on the channel, there's no correlation between Alice's input and Bob's output, and so no information can flow (two independent coins being tossed).

If it's a binary symmetric channel the marginal probabilities remain the same - but the conditional probabilities change as we change the noise characteristic of the channel.

What are you saying in your posts that's any different to this example of dependence vs independence?
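The binary symmetric channel above is easy to simulate. A quick Python sketch (function names are mine) showing that the marginals stay at 1/2 while the input-output agreement depends only on the noise:

```python
import random

def bsc(bit, flip_prob):
    """Binary symmetric channel: flip the input bit with probability flip_prob."""
    return bit ^ (random.random() < flip_prob)

def simulate(flip_prob, trials=100_000):
    """Return (marginal P(B=1), agreement rate P(A == B)) for the channel."""
    ones_out = 0
    agree = 0
    for _ in range(trials):
        a = random.randrange(2)      # Alice inputs 0/1 with probability 1/2
        b = bsc(a, flip_prob)
        ones_out += b
        agree += (a == b)
    return ones_out / trials, agree / trials

# No noise: marginals ~1/2, outputs track inputs perfectly (the rigid rod).
p1_clean, match_clean = simulate(0.0)
# Full noise (flip_prob = 0.5): marginals still ~1/2, but input and output are
# independent, so the agreement rate drops to ~1/2 and no information flows.
p1_noisy, match_noisy = simulate(0.5)
```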
 
  • #48
Chris Miller said:
it's hard to conceive of a connection... unless through some higher dimension?

DrChinese has given the example of full entanglement swapping, but there's a kind of 'half-way house' that might help to shed some light.

Imagine a perfect optical cavity (the experiments were done in very high-Q microwave cavities). Now take a 2 level atom in its excited state (Rydberg atoms can be used as a reasonable approximation to a 2 level atom). Fire this atom through the cavity with a specific transit time such that the field and atom become perfectly entangled.

Now suppose we live in an ideal world and we can maintain the entanglement between the cavity field and the atom. Go make a cup of tea. Ship the atom off to the outer moons of Saturn.

Now take a second atom prepared in its ground state and fire this through the cavity with a different tailored transit time through the cavity. Tailor this time just right and after this second atom has gone through the cavity the 2 atoms are now entangled and the cavity field 'decoupled'.

The two atoms have never directly interacted - and (if we can maintain the entanglement long enough) the two atoms can be fired through the cavity years apart.

OK that's wildly fanciful in terms of shipping things off to Saturn and maintaining entanglement for years - but the experiments have been performed (although with more modest parameters).
 
  • #49
Simon Phoenix said:
Yes - basically think of putting your binary string on a wheel. Now copy it and put this on an 'inner' wheel. Rotate the inner wheel. With overwhelming probability there's only one position of the two wheels where there is perfect correlation between the bits in the same positions on the wheels.

Now do the same but with 2 independently produced random strings - now, with overwhelming probability, there will be no position we can rotate to for which there's a perfect correlation.
Told you it was simple. :biggrin: Unfortunately I think I have to refrain from replying to your post because I fear I would go far off topic. But I'll give your post a good contemplation. :smile:
 
  • #50
This is all very confusing... just thinking out loud...

I flip a coin and it lands heads. Does it still make sense to describe the probability of a heads for that flip as p=.5, ten minutes after the fact? Does probability even exist for events in the past? If the flip occurred within a closed box, opened after ten minutes, it seems my probability before opening the box, of guessing heads correctly, is p=.5, but the coin's probability of heads went to p=1 at the end of the unseen flip... and my probability of guessing heads correctly goes to p=1 as soon as the box is opened... but I don't think that is how it is done.

I have a sequence to which a new element is appended every ten minutes (maybe a coin flip). While waiting for the next element I produce a rule that specifies all the elements so far, and when each next element arrives I adjust the rule so that it incorporates that one too. Is a finite string characterized by a rule random? I don't think that is how it's done either.

Both of these thoughts go to a time relationship of randomness... does the standard treatment not take time into account? It looks to me like things that have not happened yet (hypothetical) may be characterized as probabilistic or random, but once these things happen and are made manifest*, they may be characterized as having a probability of 1, or characterized by a complete generating rule.

* Have probability and randomness joined length, time, and simultaneity in respecting relativistic measures?
 
  • #51
Simon Phoenix said:
DrChinese has given the example of full entanglement swapping, but there's a kind of 'half-way house' that might help to shed some light.

Imagine a perfect optical cavity (the experiments were done in very high-Q microwave cavities). Now take a 2 level atom in its excited state (Rydberg atoms can be used as a reasonable approximation to a 2 level atom). Fire this atom through the cavity with a specific transit time such that the field and atom become perfectly entangled.

Now suppose we live in an ideal world and we can maintain the entanglement between the cavity field and the atom. Go make a cup of tea. Ship the atom off to the outer moons of Saturn.

Now take a second atom prepared in its ground state and fire this through the cavity with a different tailored transit time through the cavity. Tailor this time just right and after this second atom has gone through the cavity the 2 atoms are now entangled and the cavity field 'decoupled'.

The two atoms have never directly interacted - and (if we can maintain the entanglement long enough) the two atoms can be fired through the cavity years apart.

OK that's wildly fanciful in terms of shipping things off to Saturn and maintaining entanglement for years - but the experiments have been performed (although with more modest parameters).

Thanks for the great example. It seems more than ever to me as though "entangled" isn't quite the right word for the phenomenon. There is no co-dependence. The two atoms have just been configured/seeded to perform with some predictable similarity. I.e., only their states are correlated.

Question: Does it have to be the same "perfect optical cavity," or could these two atoms be "entangled" via two identical cavities in distant locations?
 
  • #52
Simon Phoenix said:
Yes - basically think of putting your binary string on a wheel. Now copy it and put this on an 'inner' wheel. Rotate the inner wheel. With overwhelming probability there's only one position of the two wheels where there is perfect correlation between the bits in the same positions on the wheels.

Now do the same but with 2 independently produced random strings - now, with overwhelming probability, there will be no position we can rotate to for which there's a perfect correlation.

But so what? I really don't see where you're going with this, or how it is helpful. There may well be something useful in this perspective - it's just I'm not seeing it yet.
What I noticed was that the case of two truly independent sources differs from the case of pairs of entangled particles, in the sense that the latter case 'contains' a correlation. So, since the latter case differs from 'true randomness', you either have to conclude that it is 'not truly' random, or you have to extend the definition of 'true randomness' to include correlation.

The difference between the two is that in the first case the sources are independent, and in the second they are dependent. If you define correlation = dependence, then you can just observe that there is a correlation and leave it at that. It would be a tautology. But the correlation has a cause, and the extension with this cause is what, in my eyes, characterizes the difference. But maybe that's a tautology too. :wink:
 
Last edited:
  • #53
entropy1 said:
So, since the latter case differs from 'true randomness', you either have to conclude that it is 'not truly' random, or you have to extend the definition of 'true randomness' to include correlation.

Seems to me like you're going round in circles a bit here. Let's focus on just binary variables. I'm assuming by truly random you actually mean uniformly at random so that the probability of obtaining 1 is 1/2.

You're conflating the notion of randomness with the notion of dependence I think. Suppose we have two random processes ##A## and ##B## each spitting out binary strings. If they're independent processes then the total entropy is simply the sum of the entropy of process ##A## and the entropy of process ##B##. If the processes are dependent then the total entropy is less than this. Really what's your issue here?

If we do have ##S(A,B) \lt S(A) + S(B)## then ##A## and ##B## are both random processes, just not independently so. This is all just standard probability theory.

OK, it's usually expressed in terms of conditional probabilities so that for independent random processes we would write ##P(A|B) = P(A)## and ##P(B|A) = P(B)## or, equivalently, we would write the joint distribution ##P(A,B) = P(A)P(B)##

For dependent processes we would have ##P(A,B) = P(A)P(B|A) = P(B)P(A|B)##

There's no need at all to redefine anything. There's nothing that you're saying here other than "two random processes can be dependent".
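Simon's entropy statement ##S(A,B) \lt S(A) + S(B)## for dependent processes can be checked numerically. A quick sketch using an empirical entropy estimate (names mine):

```python
import random
from collections import Counter
from math import log2

def entropy(samples):
    """Empirical Shannon entropy in bits of a list of outcomes."""
    counts = Counter(samples)
    total = len(samples)
    return -sum(c / total * log2(c / total) for c in counts.values())

n = 100_000
a = [random.randrange(2) for _ in range(n)]
b_indep = [random.randrange(2) for _ in range(n)]  # independent of a
b_dep = a[:]                                       # perfectly correlated with a

# Independent processes: S(A,B) ~ S(A) + S(B) ~ 2 bits.
s_joint_indep = entropy(list(zip(a, b_indep)))
# Dependent processes: S(A,B) = S(A) ~ 1 bit, strictly less than S(A) + S(B).
s_joint_dep = entropy(list(zip(a, b_dep)))
```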
 
  • #54
If there is no correlation between ##A## and ##B##, we have ##P(A,B)=P(A)P(B)##: factorization. If there is a correlation between ##A## and ##B##, then ##P(A,B)≠P(A)P(B)##; instead, we have ##P(A,B)=P(A|B)P(B)## with ##P(A)≠P(A|B)##. We can write ##P(A,B=0)=P(A|B=0)P(B=0)## and ##P(A,B=1)=P(A|B=1)P(B=1)##. So the difference with the factorizing ##P(A,B=x)=P(A)P(B=x)## is that ##P(A)## has been replaced by ##P(A|B=0)## in one case and ##P(A|B=1)## in the other, with ##P(A)≠P(A|B=0)## and ##P(A)≠P(A|B=1)##. So you could see it as the probability of ##A## in the correlating case having two different values, namely ##P(A|B=0)## and ##P(A|B=1)##, depending on the outcome of ##B##. So this would be a reason for me to call this "not truly random". Does this make any sense?

* By '##A## being truly random' I mean that the probability of ##A## doesn't vary.
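The point about ##P(A|B=0)## and ##P(A|B=1)## taking different values can be illustrated with a small simulation of a perfectly correlated pair (a classical toy, not a model of entanglement; names mine):

```python
import random

n = 100_000
pairs = []
for _ in range(n):
    b = random.randrange(2)
    a = b                    # perfect correlation: A simply copies B
    pairs.append((a, b))

def p_a1_given(pairs, b_value):
    """Empirical P(A=1 | B=b_value)."""
    selected = [a for a, b in pairs if b == b_value]
    return sum(selected) / len(selected)

# The marginal P(A=1) is still ~1/2, but the conditionals split apart:
marginal_a1 = sum(a for a, _ in pairs) / n
p0 = p_a1_given(pairs, 0)  # 0.0 for this perfectly correlated toy
p1 = p_a1_given(pairs, 1)  # 1.0 for this perfectly correlated toy
```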
 
Last edited:
  • #55
The correlation codes for the (relative) measurement bases, so there is more information in ensembles A and B than just two sets of purely random bits.

And I think the entropy of correlated values is lower than that of uncorrelated values. To get all the ones and zeros aligned takes more than pure randomness.
 
Last edited:
  • #56
entropy1 said:
To get all the ones and zeros aligned takes more than pure randomness.
So that's like saying you have a random number generator that has an output of 0 or 1 and places it into variable A, then you use a not function to place the opposite of A into B, and somehow those anticorrelated values are less random because of it?
 
  • #57
jerromyjon said:
So that's like saying you have a random number generator that has an output of 0 or 1 and places it into variable A, then you use a not function to place the opposite of A into B, and somehow those anticorrelated values are less random because of it?

He's saying that if you randomly generate one bit, you have one bit of randomness; using a not function to generate a second (anti)correlated bit doesn't generate any additional randomness. You don't have two bits of randomness just because you have two bits. You would have to randomly generate both bits to get two bits of randomness, but if you did that, they wouldn't be correlated.
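A tiny sketch of PeterDonis' point (names mine): the NOT-ed bit looks random on its own, but the pair carries only one bit of entropy, not two.

```python
import random

n = 100_000
samples = []
for _ in range(n):
    a = random.randrange(2)  # one genuinely random bit
    b = 1 - a                # deterministic NOT: adds no new randomness
    samples.append((a, b))

# Each bit alone looks uniformly random...
p_a1 = sum(a for a, _ in samples) / n  # ~0.5
p_b1 = sum(b for _, b in samples) / n  # ~0.5

# ...but only two of the four possible pair values ever occur, so the joint
# distribution carries one bit of entropy, not two.
observed_pairs = set(samples)
```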
 
  • #58
PeterDonis said:
You would have to randomly generate both bits to get two bits of randomness, but if you did that, they wouldn't be correlated.
Exactly.
entropy1 said:
So you could see it as the probability of ##A## in the correlating case having two different values, namely ##P(A|B=0)## and ##P(A|B=1)##, depending on the outcome of ##B##. So this would be a reason for me to call this "not truly random".
entropy1 said:
The correlation codes for the (relative) measurement bases, so there is more information in ensembles A and B than just two sets of purely random bits.
Maybe I'm misinterpreting, maybe I'm missing something... but it seems to me the debate centers around 2 separate values with a single random base... or more specifically 2 correlated values.
 
  • #59
So, if we have a certain amount of correlation, but not complete, do we have a fractional number of bits?

Moreover, if we have a single bit of randomness distributed over two measurement results, isn't that bit a hidden variable?
 
  • #60
entropy1 said:
if we have a certain amount of correlation, but not complete, do we have a fractional number of bits?

Of entropy, yes.

entropy1 said:
if we have a single bit of randomness distributed over two measurement results, isn't that bit a hidden variable?

If the joint state of the two bits is a "hidden variable", then yes. If we're talking about classical bits, then it's fine to look at it that way, because classical bits can't violate the Bell inequalities. If we're talking about entangled quantum bits, then you can consider their joint quantum state as a "hidden variable", but it can't be a local hidden variable in the Bell sense because measurements on entangled quantum bits can violate the Bell inequalities.
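For a feel of the numbers behind the Bell violation mentioned here, one can plug the standard singlet-state correlation ##E(x,y) = -\cos(x-y)## into the CHSH combination. This is a textbook calculation, sketched in Python:

```python
from math import cos, sqrt, pi

def E(x, y):
    """Singlet-state correlation for analyzer angles x and y."""
    return -cos(x - y)

def chsh(a, a2, b, b2):
    """CHSH combination |E(a,b) - E(a,b') + E(a',b) + E(a',b')|."""
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Local hidden variable models are bounded by 2; the quantum prediction at the
# standard optimal angles reaches the Tsirelson bound 2*sqrt(2).
s = chsh(0, pi / 2, pi / 4, 3 * pi / 4)
```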
 
  • #61
PeterDonis said:
because measurements on entangled quantum bits can violate the Bell inequalities.
Even in the case of parallel bases/full correlation?
 
  • #62
entropy1 said:
Even in the case of parallel bases/full correlation?

"Violate the Bell inequalities" means over the full range of possible combinations of measurement settings. Obviously if you only pick the one case where both measurements are parallel, you won't violate the inequalities. So what?
 
  • #63
PeterDonis said:
Of entropy, yes.
Ok. I think that is important. Are you willing and able to suggest a Google search term for this - I guess, entanglement-entropy? (On that term I find only very advanced articles.) :smile:
 
  • #64
entropy1 said:
entanglement-entropy

Yes, that's a good search term.

entropy1 said:
on that word I find only very advanced articles

Yes, that's because it is an advanced topic. :wink:
 
  • #65
Simon Phoenix said:
Yes - basically think of putting your binary string on a wheel. Now copy it and put this on an 'inner' wheel. Rotate the inner wheel. With overwhelming probability there's only one position of the two wheels where there is perfect correlation between the bits in the same positions on the wheels.

Now do the same but with 2 independently produced random strings - now, with overwhelming probability, there will be no position we can rotate to for which there's a perfect correlation.

But so what? I really don't see where you're going with this, or how it is helpful. There may well be something useful in this perspective - it's just I'm not seeing it yet.
Suppose we have (1) a random string A with bits a0..an-1 and a random string B with bits b0..bn-1. Suppose they are not correlated, so we have P(A,B)=P(A)P(B).

Now suppose (2) we compare a0..an-1 with b1..bn-1b0 (that is, B rotated by one position), and find total correlation.

Now are A and B correlated or not? With the alignment of A and B in (1), we would say they are not correlated. However, A and B contain a correlation that reveals itself in (2). Such a correlation is very unlikely to arise just by chance: independent sources would almost certainly not generate it.

So the correlation must be the result of something. You could get the same result by sorting the bits, putting all the ones together and likewise all the zeros. In any case there is a non-random cause of the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.
 
Last edited:
  • #66
entropy1 said:
Suppose they are not correlated, and we have P(A,B)=P(A)P(B).

What are these probabilities? What is P(A, B) and what are P(A) and P(B)?

entropy1 said:
we compare a0..an-1 with b1..bn-1b0, and find total correlation.

What does "total correlation" mean? How would it be expressed in terms of probabilities like P(A, B) or P(A) or P(B)?
 
  • #67
entropy1 said:
Suppose we have (1) a random string A with bits a0..an-1 and a random string B with bits b0..bn-1. Suppose they are not correlated, so we have P(A,B)=P(A)P(B).

Now suppose (2) we compare a0..an-1 with b1..bn-1b0 (that is, B rotated by one position), and find total correlation.

Now are A and B correlated or not? With the alignment of A and B in (1), we would say they are not correlated. However, A and B contain a correlation that reveals itself in (2). Such a correlation is very unlikely to arise just by chance: independent sources would almost certainly not generate it.

So the correlation must be the result of something. You could get the same result by sorting the bits, putting all the ones together and likewise all the zeros. In any case there is a non-random cause of the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.
The correlation between your strings is found by lining them up and counting coincidences. There is no correlation of a single bit. Coincidences are all you've got.
 
  • #68
entropy1 said:
You could have the same result if you take all the ones and put them together with the other ones, and similarly for the zeros.

No, that's called cherry-picking your data.
 
  • #69
entropy1 said:
Suppose we have (1) a random string A with bits a0..an-1 and random string B with bits b0..bn-1. Suppose they are not correlated, and we have P(A,B)=P(A)P(B).

Now suppose (2) we compare a0..an-1 with b1..bn-1b0, and find total correlation.

Now are A and B correlated or not? If we take the alignment of A and B in (1), we would say they are not correlated. However, A and B contain a correlation that reveals itself in (2). We would say that the probability of a correlation 'arising' just by chance is unlikely. Independent sources would very probably not generate such a correlation by chance.

So the correlation must be the result of something. You could have the same result if you take all the ones and put them together with the other ones, and similarly for the zeros. In any case there is a non-random cause for the resulting correlation (a physical cause).

Maybe that is what I mean by 'not truly random'.
You are floundering. Correlation is a statistical property. The expectation of the correlation between two random bit strings is zero by definition!

The correlation between streams A and B, ##-1 \le \rho_{ab} = (2c-n)/n \le 1## (where ##c## is the number of coincidences), has expectation zero because the number of coincidences tends to ##n/2##. But the expectation of ##\rho^2## is not zero, so you get fluctuations.

The way to reproduce an EPR dataset is to have a machine that produces two random streams and an EPR demon that inserts anti-correlated pairs at random intervals. If the demon tells you which pairs are its work, you can pick them out and get perfect anti-correlation. The remaining data will have ##\langle\rho\rangle=0##. If all the data is used, then ##\langle\rho\rangle \lt 0##, i.e. the expectation of the correlation is negative, not zero.

So a good EPR experiment could comprise a hundred-million-bit string and give a result like ##\hat{\rho} = -0.14325378 \pm 0.00001##, which would show something strange was happening. Time to call Rosencrantz & Guildenstern.
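The EPR-demon construction above is easy to simulate. A rough Python sketch (the demon rate of 0.2 is chosen arbitrarily for illustration):

```python
import random

def correlation(a, b):
    """rho = (2c - n)/n, where c is the number of coincidences."""
    n = len(a)
    c = sum(x == y for x, y in zip(a, b))
    return (2 * c - n) / n

n = 1_000_000
a, b = [], []
for _ in range(n):
    if random.random() < 0.2:
        # the 'EPR demon' inserts a perfectly anti-correlated pair
        bit = random.randrange(2)
        a.append(bit)
        b.append(1 - bit)
    else:
        # otherwise two independent random bits (expected correlation zero)
        a.append(random.randrange(2))
        b.append(random.randrange(2))

# Using all the data, the expectation of rho is -0.2, not zero.
rho = correlation(a, b)
```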
 
Last edited:
  • #70
PeterDonis said:
What are these probabilities? What is P(A, B) and what are P(A) and P(B)?
##P(X=1)## in the binary case is the ratio of the number of bits equal to 1 to the total number of samples.
##P(A=1,B=1)## is the ratio of pairs of bits that are both 1 to the total number of samples (pairs).
PeterDonis said:
What does "total correlation" mean? How would it be expressed in terms of probabilities like P(A, B) or P(A) or P(B)?
##P(A=1,B=1)=P(A=1|B=1)P(B=1)##, where ##P(A=1|B=1)=1## in case of total correlation. That is: ##P(A=1,B=1)=P(A=1)=P(B=1)##.

@Mentz114: It could be my limited mastery of the English language, but, with all due respect, I am afraid I don't understand what you mean.
PeterDonis said:
No, that's called cherry-picking your data.
Yes. I meant it as an example of a non-random cause.

You can claim that the entropy of the random content decreases by fractional bits, but that could also mean that the amount of randomness decreases in favor of non-randomness.
 
Last edited:
