Preserving causality in the EPR experiment

In summary: the poster previously attempted to simulate local realism in the EPR experiment and has since given up on that, but not on causality. They attach code that simulates the EPR experiment and reproduces the predictions of quantum mechanics. The algorithm, inspired by the quantum eraser experiment, has the first entangled photon immediately give the other entangled photon the result of its interaction. The poster also asks whether entanglement is involved in all particle interactions and whether that could be connected to the complex nature of probability amplitudes in quantum mechanics, noting that some level of speculation can aid in understanding science.
  • #1
kurt101
I have previously posted Preserving local realism in the EPR experiment.
I have since given up on simulating local realism, since I now understand it is impossible. However, I have not given up on causality. Attached is code that simulates the EPR experiment and gives the same results that quantum mechanics predicts.

The algorithm is simple and inspired by the quantum eraser experiment (Wikipedia: Quantum_eraser_experiment).
In other words, I could only understand the result of the quantum eraser experiment by rationalizing it with the following algorithm:

The first entangled photon interacts with its polarizer and immediately gives the other entangled photon the result of its interaction. Other than this, the entangled photons interact normally with their respective polarizers.

This algorithm preserves causality when simulating the EPR and quantum eraser experiments, and perhaps it would be correct to say it also preserves realism, though not locality, since the mechanism is instantaneous or FTL. Or am I twisting definitions here?

Here is the code that simulates this.

Code:
    // Return the result of a particle interacting with a polarizer and then the detector. The result is either true or false.
    // The interaction also results in the particleAngle taking on the value of the polarizerAngle
    // and the particleVector being rotated.
    bool InteractWithPolarizer(float polarizerAngle, ref float particleAngle, ref Vector3 particleVector)
    {
        Quaternion rotation;
        float angleDifference;
        float dotProduct;
        float rotationAngle;
        bool result = false;

        // Determine how much to rotate the particleVector
        angleDifference = particleAngle - polarizerAngle;
        particleAngle = polarizerAngle;
        rotationAngle = Mathf.Pow(Mathf.Sin(angleDifference * Mathf.Deg2Rad), 2f) * Mathf.PI / 2f * Mathf.Rad2Deg;
        rotation = Quaternion.AngleAxis(rotationAngle, leftParticle.direction);  // angle is in degrees
        particleVector = rotation * particleVector;
        particleVector.Normalize();

        // Compare with detector to determine if final state is up or down
        // (the angle of the detector vector does not matter)
        dotProduct = Vector3.Dot(particleVector, Vector3.up);
        if ((dotProduct * dotProduct) > 0.5f)
        {
            result = true;
        }
        return result;
    }

    // One particle uses the result of the other particle as a result of entanglement.
    void RunMethod45(out bool leftUp, out bool rightUp)
    {
        // Properties of the entangled particle
        float particleAngle = this.GetRandom(-180f, 180f);
        Vector3 particleVector = this.GetRandomVector(true);

        // Determine left particle
        leftUp = this.InteractWithPolarizer(leftPolarizer.angle, ref particleAngle, ref particleVector);

        // Determine right particle
        // Right particle inherits the properties of the left entangled particle after it interacted with its polarizer
        // because the left entangled particle interacted with its polarizer first (the order does not matter)
        rightUp = this.InteractWithPolarizer(rightPolarizer.angle, ref particleAngle, ref particleVector);
    }

The result of the simulation gives the same result as quantum mechanics regardless of what angles I use for the respective polarizers. The blue spheres represent the quantum mechanics prediction and the red spheres represent the result of the simulation.
EPR_Simulation.PNG


I assume that this type of mechanism has been explored by others. What are the problems with it?

Is entanglement considered a special case or is entanglement a part of any particle interaction?

If entanglement is involved in all particle interactions, could it explain why the probability amplitudes in quantum mechanics are complex functions that require the complex conjugates to resolve the probability?

Thanks for answers. I hope my questions are not too speculative in nature to get moderated. I certainly think some level of speculative questions can help in understanding science.
 

  • #2
kurt101 said:
I have previously posted Preserving local realism in the EPR experiment
.
I have since given up on simulating local realism, since I now understand it is impossible. However, I have not given up on causality. Attached is code that simulates the EPR experiment and gives the same results that quantum mechanics predicts.

The algorithm is simple and inspired by the quantum eraser experiment (Wikipedia: Quantum_eraser_experiment).
In other words, I could only understand the result of the quantum eraser experiment by rationalizing it with the following algorithm:

The first entangled photon interacts with its polarizer and immediately gives the other entangled photon the result of its interaction. Other than this, the entangled photons interact normally with their respective polarizers.

[]

The result of the simulation gives the same result as quantum mechanics regardless of what angles I use for the respective polarizers. The blue spheres represent the quantum mechanics prediction and the red spheres represent the result of the simulation. (View attachment 219643)

I assume that this type of mechanism has been explored by others. What are the problems with it?

Is entanglement considered a special case or is entanglement a part of any particle interaction?

If entanglement is involved in all particle interactions, could it explain why the probability amplitudes in quantum mechanics are complex functions that require the complex conjugates to resolve the probability?

Thanks for answers. I hope my questions are not too speculative in nature to get moderated. I certainly think some level of speculative questions can help in understanding science.
What you've done certainly shows the effects of entanglement, but it is not enough to simulate the singlet state completely.

Can you calculate the probabilities of coincidences? The singlet state tells us only that one thing.
 
  • #3
Mentz114 said:
What you've done certainly shows the effects of entanglement, but it is not enough to simulate the singlet state completely.

Can you calculate the probabilities of coincidences? The singlet state tells us only that one thing.

Yes, the simulation calculates the coincidences and graphs them.

I don't have the entire code of the simulation posted, just what was important. I could upload the entire simulation to GitHub, but it would take me some time and I don't know that anyone would care. The simulation fixes one of the polarizer angles and then runs through a range of angles for the other polarizer. It does not matter what angle I use for the fixed polarizer; the simulation always produces the same coincidences as predicted by quantum mechanics.

Anyone should be able to take the code I posted, modify it for their language of choice, and get the same outcome I have. In other words, I posted everything one would need to know to reproduce the same result.
 
  • #4
Your code refers to a value "leftParticle" that isn't defined anywhere.

Regardless, what you're doing in this code is generating some random values ahead of time:

Code:
particleVector = this.GetRandomVector(true)

and then consuming those random values when you need a measurement result:

Code:
dotProduct = Vector3.Dot(particleVector, Vector3.up);
if ((dotProduct * dotProduct) > 0.5f) {...}

What you will find is that this works fine for the first measurement, maybe even the first few, but then the results being returned as you simulate passing the photon through multiple polarizers start to be correlated instead of independent. You "run out of randomness".

In the real world, if you keep alternating between passing a photon through a vertical polarizer and then a diagonal polarizer, the photon always has a 50% chance of making it through, regardless of past results. In your simulation, I think you'll find that the chance of passing through a vertical polarizer isn't 50% after you condition on the first two measurements.

I think the idea you're groping towards is "what if there was a giant pre-existing invisible list of arbitrary numbers, and every time we had a "random" result it was just nature looking up the next number?". This idea doesn't solve the fact that there is unpredictability, it just smuggles the unpredictability into the big invisible list.
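A minimal Python sketch of this last point (my own illustration, with made-up names, not from the post): drawing outcomes from a pre-generated "invisible list" is statistically indistinguishable from generating each number on the fly, so the list explains nothing.

```python
import random

# Sketch: nature's hypothetical pre-existing lookup table of "random" numbers.
random.seed(0)
hidden_list = [random.random() for _ in range(100_000)]

def measure_from_list(i, p=0.5):
    # Consume the i-th pre-existing number as a measurement outcome.
    return hidden_list[i] < p

results = [measure_from_list(i) for i in range(len(hidden_list))]
frequency = sum(results) / len(results)
print(round(frequency, 2))  # ~0.5, indistinguishable from live coin flips
```

The unpredictability has simply been relocated into `hidden_list`.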
 
  • #5
kurt101 said:
Yes, the simulation calculates the coincidences and graphs them.

I don't have the entire code of the simulation posted, just what was important. I could upload the entire simulation to GitHub, but it would take me some time and I don't know that anyone would care. The simulation fixes one of the polarizer angles and then runs through a range of angles for the other polarizer. It does not matter what angle I use for the fixed polarizer; the simulation always produces the same coincidences as predicted by quantum mechanics.

Anyone should be able to take the code I posted, modify it for their language of choice, and get the same outcome I have. In other words, I posted everything one would need to know to reproduce the same result.
I've already done a simulation and analysis that satisfies me. The key thing is that the first photon to interact causes the other one to take on the polarization of the first photon's filter, so the other end now has information about the first interaction. I also find that this gives the correct coincidence probabilities. The contingency tables predicted by the models are shown in the pic (I couldn't get the LaTeX to display).

Can you calculate those probabilities algebraically from your simulation?
eprb-tables-1.png

(There is a factor of 1/2 missing in those tables. Please compensate!)
 

  • #6
Strilanc said:
Your code refers to a value "leftParticle" that isn't defined anywhere.

In the real world, if you keep alternating between passing a photon through a vertical polarizer and then a diagonal polarizer, the photon always has a 50% chance of making it through, regardless of past results. In your simulation, I think you'll find that the chance of passing through a vertical polarizer isn't 50% after you condition on the first two measurements.

I think the idea you're groping towards is "what if there was a giant pre-existing invisible list of arbitrary numbers, and every time we had a "random" result it was just nature looking up the next number?". This idea doesn't solve the fact that there is unpredictability, it just smuggles the unpredictability into the big invisible list.

The code is part of a class, and leftParticle/rightParticle are members of the class. The only thing they contain that is relevant to this algorithm is the direction the photon is heading (i.e., the photon is heading perpendicular to the polarizer). Likewise, leftPolarizer/rightPolarizer are also members of the class, and the only thing relevant to this algorithm is the polarizer angle.

In this simulation, for a given polarizer angle, the up and down result is always near 50/50 just like the real world. Previous measurements have no impact on future measurements in this algorithm.

I am just a beginner at learning quantum mechanics, but I have done many of these EPR simulations and believe I have a good grasp of what good and bad simulations look like. I am not asking you to trust me, but I don't think I am misleading anyone here.

I don't understand your comments about randomness. The randomness in the algorithm is just the initial conditions of the entangled photons. I assume that is a pretty typical expectation for this kind of experiment.
 
  • #7
Use your code to simulate passing through several polarizers in sequence. See if you get each list of results at the right frequency. For example, if you're alternating vertical and diagonal polarizers, and 1 means "passed" and 0 means "absorbed", then you should get 1011100 about 1/128th of the time. Your code will fail to meet this test; you will find that most sequences of outputs can't occur at all.
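A Monte Carlo sketch of the proposed benchmark (hypothetical Python, my own names; I assume ideal beam-splitting polarizers so a 0 outcome still leaves a photon to measure):

```python
import math
import random

random.seed(1)

def measure(pol_angle, basis_angle):
    """Malus/Born rule: transmit with probability cos^2 of the angle
    difference; the polarization collapses onto the measured axis either way."""
    delta = math.radians(pol_angle - basis_angle)
    if random.random() < math.cos(delta) ** 2:
        return 1, basis_angle           # transmitted port
    return 0, basis_angle + 90.0        # reflected port, orthogonal axis

def run_once():
    angle = random.uniform(0.0, 180.0)  # random initial polarization
    bits = []
    for basis in [0.0, 45.0, 0.0, 45.0, 0.0, 45.0, 0.0]:  # alternate V and D
        bit, angle = measure(angle, basis)
        bits.append(bit)
    return tuple(bits)

trials = 100_000
target = (1, 0, 1, 1, 1, 0, 0)
hits = sum(run_once() == target for _ in range(trials))
print(round(hits / trials, 4))  # ~0.0078, i.e. about 1/128
```

Each alternating measurement is 45 degrees from the previous collapse axis, so every 7-bit record occurs with probability close to 1/128; that is the benchmark Strilanc says the posted code will fail.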
 
  • #8
kurt101 said:
The algorithm is simple and inspired by the quantum eraser experiment Quantum_eraser_experiment
In other words, I could only understand the result of the quantum eraser experiment by rationalizing it with the following algorithm:

The first entangled photon interacts with its polarizer and immediately gives the other entangled photon the result of its interaction. Other than this, the entangled photons interact normally with their respective polarizers.

You can simulate the correlations of any Bell-type experiment between two distant locations with faster-than-light communication. This is already known and you don't need a simulation to show it. Basic probability theory is enough. Just write the joint probabilities as $$P(ab | xy) = P(a | b x y) P(b | y) \,.$$ This tells you how to simulate the joint probability ##P(ab | xy)## of getting outcomes ##a## and ##b## given that measurements ##x## and ##y## are selected:
  1. If measurement ##y## is selected on "Bob"'s side, generate the result ##b## with probability ##P(b | y) = \sum_{a} P(ab | xy)##.
  2. Communicate ##b## and ##y## to "Alice"'s side, and generate the outcome ##a## with probability ##P(a | b x y) = \frac{P(ab | xy)}{P(b | y)}## depending on the measurement ##x## selected on Alice's side.
You can do this in either direction for probability distributions that satisfy the no-signalling constraints, i.e., whose marginals ##P(a | x) = \sum_{b} P(ab | xy)## and ##P(b | y) = \sum_{a} P(ab | xy)## are the same independent of the measurement selected on the other side. The correlations that quantum physics can predict for Bell-type experiments are a subset of the possible correlations that satisfy these constraints.
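The two-step recipe above can be sketched in a few lines of Python (my own illustration; I assume the ##|\Phi^+\rangle## polarization state, for which ##P(a = b | \alpha\beta) = \cos^2(\alpha - \beta)## and both marginals are 1/2):

```python
import math
import random

random.seed(2)

def sample_pair(alpha, beta):
    # Step 1: generate Bob's outcome from his marginal P(b|beta) = 1/2.
    b = random.random() < 0.5
    # Step 2: communicate b (and beta) to Alice's side, then generate a from
    # the conditional P(a | b, alpha, beta): same outcome as Bob with
    # probability cos^2(alpha - beta).
    p_same = math.cos(math.radians(alpha - beta)) ** 2
    a = b if random.random() < p_same else (not b)
    return a, b

alpha, beta = 30.0, 75.0   # illustrative settings, 45 degrees apart
trials = 100_000
same = sum(a == b for a, b in (sample_pair(alpha, beta) for _ in range(trials)))
print(round(same / trials, 2))  # ~cos^2(45 deg) = 0.5
```

The point of the sketch is that the only nonclassical ingredient is the communication of ##b## in step 2.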
 
  • #9
wle said:
You can simulate the correlations of any Bell-type experiment between two distant locations with faster-than-light communication. This is already known and you don't need a simulation to show it. Basic probability theory is enough. Just write the joint probabilities as $$P(ab | xy) = P(a | b x y) P(b | y) \,.$$ This tells you how to simulate the joint probability ##P(ab | xy)## of getting outcomes ##a## and ##b## given that measurements ##x## and ##y## are selected:
  1. If measurement ##y## is selected on "Bob"'s side, generate the result ##b## with probability ##P(b | y) = \sum_{a} P(ab | xy)##.
  2. Communicate ##b## and ##y## to "Alice"'s side, and generate the outcome ##a## with probability ##P(a | b x y) = \frac{P(ab | xy)}{P(b | y)}## depending on the measurement ##x## selected on Alice's side.
You can do this in either direction for probability distributions that satisfy the no-signalling constraints, i.e., whose marginals ##P(a | x) = \sum_{b} P(ab | xy)## and ##P(b | y) = \sum_{a} P(ab | xy)## are the same independent of the measurement selected on the other side. The correlations that quantum physics can predict for Bell-type experiments are a subset of the possible correlations that satisfy these constraints.
Yes. And following that recipe with an application of the polarizer equation, one gets the probabilities in the table I showed above. This does not satisfy the marginal requirements, but it has the correct trace, so it correctly predicts all the coincidence-based results.

But getting the probabilities symmetrical like the singlet table requires a dash of quantum indeterminacy.
 
  • #10
Mentz114 said:
Yes. And following that recipe with an application of the polarizer equation, one gets the probabilities in the table I showed above. This does not satisfy the marginal requirements but has the correct trace so it predicts correctly all the coincidence based stuff.

I don't follow. What "marginal requirements" are you referring to, that you think are not satisfied?
 
  • #11
wle said:
I don't follow. What "marginal requirements" are you referring to, that you think are not satisfied?
The table for the singlet state only has one piece of information so the marginal probabilities are always 1/2.
The marginals of the other distribution are not always 1/2 and depend on settings. To this extent the classical+entanglement model fails to reproduce the singlet state.

Given all this, it is possible to write a computer program which gives realistic results from a simulated experiment, saturating the CHSH inequality, for instance.

[But to get the simulated results to have the equal marginals takes a code intervention which cannot be justified classically?]

Well, this is interesting, because I just found by running my simulation that if I randomise the initial photon polarization then I don't need the magic!

Back to the drawing board.
 
  • #12
Mentz114 said:
The marginals of the other distribution are not always 1/2 and depend on settings. To this extent the classical+entanglement model fails to reproduce the singlet state.

I still don't see what you mean. If you're referring to the conditional distribution ##P(a | b xy)## from my post then simulating this classically (assuming a "sufficiently good" way of generating randomness or pseudo-randomness) doesn't present any fundamental problem if you have the values of ##b##, ##x##, and ##y## available. For example, if you enumerate the outcomes ##a = 0, 1, 2, \dotsc## and you can generate a random number ##\lambda## uniformly in the range ##0 < \lambda \leq 1## then you could generate the outcome on Alice's side by selecting the outcome ##a## for which $$\sum_{a' < a} P(a' | b xy) < \lambda \leq \sum_{a' \leq a} P(a' | bxy) \,,$$ for whatever conditional distribution ##P(a' | bxy)## you want to simulate.
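That selection rule is ordinary inverse-CDF sampling. A sketch (hypothetical Python; the distribution here is arbitrary and just stands in for some ##P(a | bxy)##):

```python
import random

random.seed(3)

def sample_outcome(probs):
    """Pick outcome a such that the cumulative sums bracket a uniform lambda,
    i.e. sum_{a' < a} P(a') < lambda <= sum_{a' <= a} P(a')."""
    lam = 1.0 - random.random()   # uniform in (0, 1]
    cumulative = 0.0
    for a, p in enumerate(probs):
        cumulative += p
        if lam <= cumulative:
            return a
    return len(probs) - 1         # guard against floating-point round-off

dist = [0.2, 0.5, 0.3]            # stands in for P(a | b, x, y)
counts = [0, 0, 0]
for _ in range(100_000):
    counts[sample_outcome(dist)] += 1
print([round(c / 100_000, 2) for c in counts])  # ~[0.2, 0.5, 0.3]
```

Note `1.0 - random.random()` so that ##\lambda## lies in ##(0, 1]## rather than ##[0, 1)##, matching the inequality above.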

Mentz114 said:
Given all this, it is possible to write a computer program which gives realistic results from a simulated experiment, saturating the CHSH inequality, for instance.

You could simulate any value up to ##4## for CHSH with the scheme I described, beyond the maximum of ##2 \sqrt{2}## possible in quantum physics.
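For instance, with communication the scheme can implement a Popescu-Rohrlich box, which reaches the algebraic CHSH maximum of 4 (hypothetical Python sketch, my own names; settings and outcomes are bits):

```python
import random

random.seed(4)

def pr_box(x, y):
    # Bob's marginal is uniform; Alice's side uses the communicated b together
    # with both settings so that a XOR b = x AND y holds exactly.
    b = random.randint(0, 1)
    a = b ^ (x & y)
    return a, b

def correlator(x, y, trials=20_000):
    # E(x, y) = P(a = b) - P(a != b)
    s = 0
    for _ in range(trials):
        a, b = pr_box(x, y)
        s += 1 if a == b else -1
    return s / trials

chsh = correlator(0, 0) + correlator(0, 1) + correlator(1, 0) - correlator(1, 1)
print(chsh)  # 4.0, above the quantum (Tsirelson) bound of 2*sqrt(2)
```

The outputs satisfy the PR-box condition deterministically once ##b## is communicated, so all four correlators are exactly ##\pm 1##.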
 
  • #13
wle said:
I still don't see what you mean. If you're referring to the conditional distribution ##P(a | b xy)## from my post then simulating this classically (assuming a "sufficiently good" way of generating randomness or pseudo-randomness) doesn't present any fundamental problem if you have the values of ##b##, ##x##, and ##y## available. For example, if you enumerate the outcomes ##a = 0, 1, 2, \dotsc## and you can generate a random number ##\lambda## uniformly in the range ##0 < \lambda \leq 1## then you could generate the outcome on Alice's side by selecting the outcome ##a## for which $$\sum_{a' < a} P(a' | b xy) < \lambda \leq \sum_{a' \leq a} P(a' | bxy) \,,$$ for whatever conditional distribution ##P(a' | bxy)## you want to simulate.
You could simulate any value of up to ##4## for CHSH with the scheme I described, beyond the maximum of ##2 \sqrt{2}## possible in quantum physics.
That's cheating.
This is really frustrating, because I've explained explicitly what I'm talking about, but clearly it is not the same thing you are talking about.
I need to go and integrate out a random variate so I'll leave it there.
 
  • #14
Mentz114 said:
This is really frustrating because I've explained explicitly what I'm talking about

No you haven't. You said I wasn't simulating some "marginal distribution" and I don't know what that is because you haven't told me. I can't read your mind. So I guessed "If you're referring to the conditional distribution ##P(a | bxy)##..." which is an invitation for you to clarify if you meant something different.

In a (two-party) Bell-type experiment you have outcomes ##a##, ##b## and these depend on measurement settings ##x##, ##y##, in a way summarised by a joint probability distribution ##P(ab | xy)##. Assuming the no-signalling constraints (which always hold for probabilities computed using quantum physics for a Bell-type experiment, regardless of the state or local measurement operators), the only marginals of these are ##P(a | x)## and ##P(b | y)##. Are you concerned about cases where these are different from ##1/2##? That's not a problem -- I didn't assume ##P(a | x) = 1/2## or ##P(b | y) = 1/2## in my post. Are you referring to some other marginal distribution? Then you will have to tell me what that is because I have no idea what other marginal distribution there is that you could possibly be referring to.
 
  • #15
wle said:
No you haven't. You said I wasn't simulating some "marginal distribution" and I don't know what that is because you haven't told me. I can't read your mind. So I guessed "If you're referring to the conditional distribution ##P(a | bxy)##..." which is an invitation for you to clarify if you meant something different.

In a (two-party) Bell-type experiment you have outcomes ##a##, ##b## and these depend on measurement settings ##x##, ##y##, in a way summarised by a joint probability distribution ##P(ab | xy)##. Assuming the no-signalling constraints (which always hold for probabilities computed using quantum physics for a Bell-type experiment, regardless of the state or local measurement operators), the only marginals of these are ##P(a | x)## and ##P(b | y)##. Are you concerned about cases where these are different from ##1/2##? That's not a problem -- I didn't assume ##P(a | x) = 1/2## or ##P(b | y) = 1/2## in my post. Are you referring to some other marginal distribution? Then you will have to tell me what that is because I have no idea what other marginal distribution there is that you could possibly be referring to.
OK, have a look at the tables in the picture in post #5. In the cells (2x2) there are probabilities. The marginal probabilities are the sums of two of the inner cells (a row or a column).

My latest thought is that I should include the initial polarization ##\theta_0## in the cells, so that for instance ##P(00;\alpha,\beta)## becomes ##(\cos(\alpha-\theta_0)^2+\cos(\beta-\theta_0)^2)\cos(\alpha-\beta)^2##.
Now I should be able to integrate out the variate ##\theta_0 \in [0, 2\pi]## and get rid of the terms with ##\theta_0##.
 
  • #16
Mentz114 said:
OK, have a look at the tables in the picture in post#5. In the cells (2x2) there are probablilities. The marginal probabilities are the sums of two of the inner cells ( a row or a column).

Are the four cells supposed to be probabilities of different outcomes, ##P(00 | \alpha \beta)##, ##P(01 | \alpha \beta)##, ##P(10 | \alpha \beta)##, ##P(11 | \alpha \beta)## for the same settings ##\alpha##, ##\beta##? If so (and this would be confusing, since ##A_{0}##, ##A_{1}##, ##B_{0}##, and ##B_{1}## normally refer to different measurements in the literature) then the probabilities in the second table are not possible according to quantum theory. Why would you want to simulate them?
 
  • #17
wle said:
Are the four cells supposed to be probabilities of different outcomes, ##P(00 | \alpha \beta)##, ##P(01 | \alpha \beta)##, ##P(10 | \alpha \beta)##, ##P(11 | \alpha \beta)## for the same settings ##\alpha##, ##\beta##? If so (and this would be confusing, since ##A_{0}##, ##A_{1}##, ##B_{0}##, and ##B_{1}## normally refer to different measurements in the literature) then the probabilities in the second table are not possible according to quantum theory. Why would you want to simulate them?
Yes, those are the probabilities of the outcomes as calculated. ##A_0## meaning A has a '0' and so on.

I find your notation difficult. I want to write ##P(x,y;\alpha, \beta)## for the probability of getting outcome ##xy \in \{00,01,10,11\}## given settings ##\alpha, \beta##.

Expanding thus* ##P(x,y;\alpha, \beta)=P(x;\alpha)P(y;\alpha,\beta)## we get the probabilities in the table. My problem all along has been that this is not the singlet state for the reasons I've given.

However, during this discussion I have found how to get rid of the unwanted terms and now there is complete agreement.

I must make it clear that nothing is being simulated in the sense you mean. The process generates the conditional distributions by combining random events correctly.

[edit]
*The expansion is actually an average ##P(x,y;\alpha, \beta)=\tfrac{1}{2}P(x;\alpha)P(y;\alpha,\beta) + \tfrac{1}{2}P(y;\beta)P(x;\alpha,\beta)## over two possibilities.
 
  • #18
wle said:
You can simulate the correlations of any Bell-type experiment between two distant locations with faster-than-light communication. This is already known and you don't need a simulation to show it.

The reason simulating EPR is interesting to me is that I want rules that are deterministic. I want to take some initial condition, apply the rules, and know exactly where a particle is going to end up.

I have read about many interpretations of quantum mechanics, but most of them are not satisfying and don't have a deterministic aspect to them.

Are there legitimate theories, interpretations, or active efforts that accept the premise of instantaneous action at a distance (FTL) and are trying to come up with a deterministic theory explaining the probabilistic nature of quantum mechanics? Or have we mostly given up on this?
 
  • #19
Mentz114 said:
I've already done a simulation and analysis that satisfies me. The key thing is that the first photon to interact causes the other one to take on the polarization of the first photon's filter, so the other end now has information about the first interaction. I also find that this gives the correct coincidence probabilities. The contingency tables predicted by the models are shown in the pic (I couldn't get the LaTeX to display).

Can you calculate those probabilities algebraically from your simulation? (View attachment 219650)
(There is a factor of 1/2 missing in those tables. Please compensate!)

I am not quite following. Are you satisfied that my simulation produces the correct result in all scenarios of the EPR experiment? If not, can you give a specific example of where you think the simulation will produce the wrong result? Thanks.
 
  • #20
Strilanc said:
Use your code to simulate passing through several polarizers in sequence. See if you get each list of results at the right frequency. For example, if you're alternating vertical and diagonal polarizers, and 1 means "passed" and 0 means "absorbed", then you should get 1011100 about 1/128th of the time. Your code will fail to meet this test; you will find that most sequences of outputs can't occur at all.

Good idea, I will give it a try. I had not given any consideration to this.

I don't understand your measurement of 1011100 about 1/128th of the time, but it just needs to be consistent with Malus's law, right?
 
  • #21
kurt101 said:
I am not quite following. Are you satisfied that my simulation produces the correct result in all scenarios of the EPR experiment? If not, can you give a specific example of where you think the simulation will produce the wrong result? Thanks.
It depends what you mean by 'the correct result'. From my understanding of your code, you allow the first photon to be projected to influence the next interaction, and this is enough to get the correct correlations. If you take into account the randomness of the preparation, then you will get everything.

Your presentation is very sparse. What type of experiment do you simulate: beam-splitting polarizers or one-port polarizers?
What does the graph show? It has no captions.
 
  • #22
Mentz114 said:
It depends what you mean by 'the correct result'. From my understanding of your code, you allow the first photon to be projected to influence the next interaction, and this is enough to get the correct correlations. If you take into account the randomness of the preparation, then you will get everything.

Your presentation is very sparse. What type of experiment do you simulate: beam-splitting polarizers or one-port polarizers?
What does the graph show? It has no captions.

My intent with the simulation was to support the idea that the EPR experiment can be simulated in a deterministic manner with cause and effect. Yes, the main feature of the simulation is that the first photon that interacts with its polarizer projects its state from the interaction onto the other photon that has yet to reach its polarizer. This seemed like a necessity, because the quantum eraser experiment seems to imply you can change the result on the other side no matter how far away, and no matter how much time later you insert the change in the experiment. That defies common sense! This led me to think that the only way it can happen is with the projection mechanism.

I think the EPR experiment uses what is called a linear polarizer. Other than that, I did not think the type was important for the experiment, so I would say I am simulating a linear polarizer.

Sorry about the bad graph and poor explanation. It is one of the outputs of the simulation: a plot of the coincidence percentage between the polarizers in the experiment. The polarizer angle of one side is fixed and the other side goes from 0 to 360 degrees. The blue dots are what the equation predicts and the red dots are what the simulation produces. It helps me determine quickly if I am on the wrong track, but I also run, compare, and print out a few other experiment scenarios with different angles, as well as all of the counts.
 
  • #23
kurt101 said:
My intent with the simulation was to support the idea that the EPR experiment can be simulated in a deterministic manner with cause and effect.

Sure, as long as what you describe in the quote below counts as "in a deterministic manner with cause and effect".

kurt101 said:
the quantum eraser experiment seems to imply you can change the result on the other side no matter how far away, and no matter how much time later you insert the change in the experiment. That defies common sense!

But it is what happens in experiments, so it is your common sense that needs to change.
 
  • #24
kurt101 said:
My intent with the simulation was to support the idea that the EPR experiment can be simulated in a deterministic manner with cause and effect.
[]
I think it can, but with some subtleties. The traditional cause/effect idea can't be applied when only probabilities are known. The quantum model using the entangled singlet state has no information at all of that nature. What we can do is derive the emergent probabilities in an event model and deal with causality via superposition.
For instance, we cannot tell from the quantum model which arm (A or B) interacts first, so we average our model probabilities over both possibilities. By removing the unwanted degree of freedom in this way from the probabilities, the quantum model is recovered.
kurt101 said:
Yes, the main feature of the simulation is that the first photon that interacts with its polarizer projects its state from the interaction onto the other photon that has yet to reach its polarizer.
Without this every correlation has expectation 0.
 
  • #25
PeterDonis said:
But it is what happens in experiments, so it is your common sense that needs to change.
I have accepted the result of the quantum eraser experiment and I have accepted spooky action at a distance. These understandings are the basis for my simulation and for what I am trying to learn more about in this thread. Can you clarify what you mean when you say that my common sense needs to change?

Do you think there is NOT an underlying mechanism for the wave function, superposition, and the probabilistic nature of quantum mechanics? If so, in a nutshell, why? Is this the consensus view? Do you think looking for an underlying mechanism is a dead end? Why?

Thanks!
 
  • #26
kurt101 said:
Can you clarify what you mean when you say that my common sense needs to change?

If your simulation, based on your "common sense" assumptions, is not giving the same results as actual experiments, then your common sense needs to change.

If your simulation is giving the same results as actual experiments, then what defies common sense?
 
  • #27
kurt101 said:
Good idea, I will give it a try. I had not given any consideration to this.

I don't understand your measurement of 1011100 about 1/128th of the time, but it just needs to be consistent with Malus's law, right?

Right. Each polarizer transmits cos²(45°) = 1/2 of the photons, because when you alternate between vertical and diagonal, each polarizer is 45 degrees off from the polarization the photon acquired at the previous polarizer. 1011100 is seven symbols long, meaning seven polarizers, so the beam is halved seven times: (1/2)^7 = 1/128.
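That arithmetic can be checked both analytically and with a small Monte Carlo. This is my own sketch under the stated assumptions: each polarizer projects a passing photon onto its own axis, and each successive polarizer is rotated a further 45 degrees:

```python
import math
import random

N_STAGES = 7  # "1011100" is seven symbols long, i.e. seven polarizers

# Analytic: each stage sits 45 degrees off the photon's current polarization,
# so Malus's law gives cos^2(45 deg) = 1/2 transmission per stage.
analytic = (math.cos(math.radians(45)) ** 2) ** N_STAGES  # 1/128 ~ 0.0078

def survives_all(rng):
    """Track one photon through the chain; passing a polarizer projects
    the photon's polarization onto that polarizer's axis."""
    pol = 0.0      # incoming polarization, assumed 45 deg off the first polarizer
    angle = 45.0
    for _ in range(N_STAGES):
        if rng.random() >= math.cos(math.radians(angle - pol)) ** 2:
            return False   # absorbed at this stage
        pol = angle        # projection onto the polarizer's axis
        angle += 45.0      # the next polarizer is rotated another 45 degrees
    return True

rng = random.Random(1)
trials = 200_000
rate = sum(survives_all(rng) for _ in range(trials)) / trials
print(analytic, rate)
```

Both numbers should come out near 1/128 ≈ 0.0078.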
 
  • #28
PeterDonis said:
If your simulation, based on your "common sense" assumptions, is not giving the same results as actual experiments, then your common sense needs to change.

If your simulation is giving the same results as actual experiments, then what defies common sense?

Ok, then my common sense did change. My point was that the result of the quantum eraser is very perplexing to me (and, I suspect, to most people), and that my desire to make sense of the quantum eraser experiment is what guided the algorithm I chose for the simulation.
 

FAQ: Preserving causality in the EPR experiment

1. What is the EPR experiment?

The EPR experiment, named after Albert Einstein, Boris Podolsky, and Nathan Rosen, is a thought experiment they proposed in 1935. It illustrates the concept of entanglement and the apparent paradox of quantum mechanics that Einstein called "spooky action at a distance".

2. What is meant by "preserving causality" in the EPR experiment?

Preserving causality in the EPR experiment refers to the idea that an effect cannot occur before its cause. In other words, the outcome of the experiment should not be affected by the measurement of one entangled particle happening before the measurement of the other entangled particle.

3. How does the EPR experiment challenge our understanding of causality?

The EPR experiment challenges our understanding of causality because it suggests that the measurement of one particle can instantaneously affect the state of another particle, even if they are separated by large distances. This goes against our classical picture of locality, in which causes propagate no faster than light to produce their effects.

4. How is causality preserved in the EPR experiment?

One proposed way to preserve causality in the EPR experiment is through hidden variables: properties fixed when the pair is created that each measurement merely reveals, so that neither measurement influences the other. However, Bell's theorem and subsequent experiments rule out local hidden variables, so any hidden variable account that reproduces the quantum predictions must be non-local.
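Bell's theorem makes the "local" part quantitative: in a CHSH test, any local hidden variable model satisfies |S| ≤ 2, while quantum mechanics predicts up to 2√2 at the standard analyzer angles. A quick numeric check of the quantum value (my addition, using the standard correlation E = cos 2(a−b) for polarization-entangled photons):

```python
import math

def E(a_deg, b_deg):
    """QM correlation for polarization-entangled photons: E = cos(2*(a-b))."""
    return math.cos(math.radians(2 * (a_deg - b_deg)))

# Standard CHSH analyzer angles for photons (degrees).
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(S)  # 2*sqrt(2) ~ 2.828, beyond the local hidden variable bound of 2
```

Each of the four correlators contributes √2/2 with the right sign, which is how the quantum prediction exceeds the classical bound.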

5. What are the implications of preserving causality in the EPR experiment?

The implications of preserving causality in the EPR experiment are significant for our understanding of quantum mechanics and the nature of reality. It suggests that any hidden variables at play in the quantum world must be non-local, and that the effects of measurements on entangled particles, while counterintuitive, are less mysterious than they first appear.
