Wondering about QM: is everything actually deterministic?

  • Thread starter jeebs
  • #1
jeebs
Hi,
As I'm sure you are all well aware, QM is all about probabilities in your measurements. When we scale up to the classical world, the probabilities disappear: we calculate one specific state for the system, given a certain set of starting conditions.
What I was wondering is, is the QM world really not deterministic? I don't mean in the sense of the way we calculate things, I mean in reality. Take the uncertainty principle situation where you measure a particle's position/momentum by bouncing a photon off that particle, thus affecting its position/momentum and introducing the uncertainty in your calculation.
Does this really say anything about how particle interactions actually happen in reality though? Imagining two snooker balls colliding, we can be pretty certain about what they are doing because we bounce photons off them to see them, but they are so massive that this observation does not affect them in any significant way. Whether or not we observed, the outcome of the collision is essentially the same. Say we collided two protons. I am aware we have to deal with interaction cross sections and solid angle scattering and all that stuff. What if we weren't trying to observe them/predict outcomes? Would that interaction always be happening the same way, given the same initial starting conditions?
If we focus on the reality of the situation rather than our attempts to understand it, are things really deterministic and it's just the observation (or perhaps our incomplete grasp of physics?) that makes it seem otherwise?

If I haven't explained this question too well, I'll summarise: If we restarted time right back at the big bang, would everything play out the same way even if we were unable to predict it? As far as my undergrad education has taught me, I'm leaning towards (and hoping for if I'm honest) the answer being yes. Anyone out there who really knows QM inside out that could confirm or deny?
 
Last edited:
  • #2
The current state of thinking (fringe elements aside) is that no, it is not. Von Neumann, Bell, and Kochen-Specker all proved that no hidden-variable theory could reproduce the results of quantum mechanics. Kochen-Specker even discounted non-local hidden variables.

That said, I do think there is something else going on that we haven't been able to put our finger on yet.
 
  • #3
The OP was asking about determinism, not hidden variables.


Among the main interpretations of QM, there are:
  • Non-deterministic -- e.g. typical wave-function collapse interpretations
  • Local, deterministic, but lacking counter-factual definiteness -- e.g. MWI
  • Non-local, deterministic, with hidden variables -- e.g. Bohmian mechanics
so it's a sort of "pick your favorite flavor" sort of thing.
 
  • #4
I had forgotten all about MWI and focused on Bohmian mechanics as a deterministic theory.
 
  • #5
Hurkyl said:
The OP was asking about determinism, not hidden variables.

Among the main interpretations of QM, there are:
  • Non-deterministic -- e.g. typical wave-function collapse interpretations
  • Local, deterministic, but lacking counter-factual definiteness -- e.g. MWI
  • Non-local, deterministic, with hidden variables -- e.g. Bohmian mechanics
so it's a sort of "pick your favorite flavor" sort of thing.

Bohmian mechanics is deterministic as regards trajectories, but not as regards the initial probability distribution, even if that distribution can, to some extent, be reproduced by some kind of complex dynamics in a couple of simplified toy models.

As for wave-function collapse models - they are not only interpretations, they also provide calculational models of processes involving single quantum systems.
 
  • #6
jeebs said:
What I was wondering is, is the QM world really not deterministic? I don't mean in the sense of the way we calculate things, I mean in reality.

It is certainly possible that QM is based on a deterministic and even local mechanism. You will hear claims to the contrary, but all of them are based on faulty logic. All no-go theorems (Bell and the like) need to make some assumptions that are highly questionable, if not plainly wrong.
 
  • #7
The main conceptual problem with QM is that information is causal, but information correlation is not. Suppose we have two processes far away from each other. Not all combinations of these processes are allowed - if one chooses one path, the other is also determined, as if they communicated faster than light. But when treated separately, each one is local and causal.

That's why I don't like non-causal theories like dBB. They introduce nonlocality of information, which is wrong. They should operate on a different level, that of the correlation.
 
  • #8
ueit said:
It is certainly possible that QM is based on a deterministic and even local mechanism. You will hear claims to the contrary, but all of them are based on faulty logic. All no-go theorems (Bell and the like) need to make some assumptions that are highly questionable, if not plainly wrong.

Can you provide a bit more explanation or perhaps a link?
 
  • #9
comote said:
Can you provide a bit more explanation or perhaps a link?

The no-go theorems make use of the assumption that you can split a physical system into subsystems that evolve independently. In an EPR experiment it is assumed that the two detectors and particle source are independent. While seemingly intuitive, this assumption contradicts known physics (the fact that quantum particles interact at a distance through fields like EM and gravity).
 
  • #10
Hello,

Hurkyl said:
Among the main interpretations of QM, there are:
  • Non-deterministic -- e.g. typical wave-function collapse interpretations
  • Local, deterministic, but lacking counter-factual definiteness -- e.g. MWI
  • Non-local, deterministic, with hidden variables -- e.g. Bohmian mechanics
so it's a sort of "pick your favorite flavor" sort of thing.

The status of Many-World Interpretations is unclear. For me, they are non-local (the splitting occurs instantly), and non-deterministic in the sense that they don't provide a way to predict what we observe.
They can be considered as local if the splitting is considered as non-realistic. But the same can be said about wave-function collapse.
And they can be considered as deterministic if we completely drop Born's rule. But in this case, they are not interpretations but sub-theories of QM, which is in this case a more general and more predictive theory than MW.

ueit said:
The no-go theorems make use of the assumption that you can split a physical system into subsystems that evolve independently.

This is not an assumption required by the theorems, this is a variable of the theorems.

The theorems say that if you can split a physical system into subsystems that evolve independently, then you can't reproduce QM's predictions.
 
  • #11
Pio2001 said:
Hello,

This is not an assumption required by the theorems, this is a variable of the theorems.

The theorems say that if you can split a physical system into subsystems that evolve independently, then you can't reproduce QM's predictions.

The assumption is that a local-realistic, hidden-variable theory allows such a splitting. Sure, the theorem remains valid, but it doesn't have much to say about locality, realism, or determinism. The only theories that are ruled out are theories that do not contain fields, like the Newtonian mechanics of the rigid body. This makes Bell's theorem pretty irrelevant, given that nobody would propose such a theory as an explanation for entanglement.
 
  • #12
The exact assumptions are :

-If A and B are two space-time regions separated by a space-like interval, then nothing that is done in A can have an effect in B and conversely.
-The hypothetical complete description of the initial state is in terms of hidden variables lambda with probability distribution rho(lambda) for the given quantum-mechanical state.

These are compatible not only with Maxwell electromagnetism, but also with special and general relativity.

The fact that we can "split a physical system into subsystems that evolve independently" is not only true for spacetime regions separated by space-like intervals, but required by special relativity.

The incompatibility with quantum mechanics has led to redefine the word "evolve", which has not the same meaning if we talk about observable events or about a wave function.
 
  • #13
ueit said:
It is certainly possible that QM is based on a deterministic and even local mechanism. You will hear claims to the contrary, but all of them are based on faulty logic. All no-go theorems (Bell and the like) need to make some assumptions that are highly questionable, if not plainly wrong.

This is a completely personal opinion and is not shared in mainstream physics today. Although ueit prefers to view this position as "correct", I think "fringe" is more accurate. There are literally thousands of references saying that locality and various forms of determinism are incompatible with the predictions of QM.
 
  • #14
Pio2001 said:
QM, that is in this case a more general and more predictive theory than MW.
More general (generally) means less predictive!

I like the (classical) theory of gases as an example. We could consider two theories:
  • Theory #1 consists of particle mechanics and continuum fluid mechanics. We empirically study the properties of particles and fluids and how they interact with each other.
  • Theory #2 consists only of particle mechanics

Now, theory #1 is more general, in the sense that there is a greater variety of classical mathematical universes that could be described by it. However, the fact that #2 is consistent with reality (via the kinetic theory of gas) -- and can tell us the things that our theories of continuum fluids couldn't -- makes it a much better choice for a foundational basis of physics.



The discovery of relative quantum states demonstrated the surprising fact that simple unitary evolution of the wave-function could reproduce the appearance of classical probabilities. The discovery of decoherence suggests a physical mechanism by which it could happen. It is therefore rather reasonable to consider interpretations by which the appearance of collapse is an emergent phenomenon rather than axiomatic and literal truth.

MWI is minimalistic in the sense that it works very hard to avoid hypothesizing anything other than wave-functions evolving unitarily, and so researchers spend a lot of time working out how that may reproduce the appearances of things you might like to assert axiomatically.


Pio2001 said:
they don't provide a way to predict what we observe.
The biggest conceptual obstacle, it seems, is that people want to insist that "The universe is in a state where I saw a particular outcome" is true, and that "The universe is in a state that has amplitudes on several components, in each of which I see different outcomes" is false -- despite the fact that there is no (feasible) experiment that can distinguish between the two.

The second, I think, is an insistence on trying to force the classical implementation of frequentism onto MWI, rather than trying to work out how frequentism is more appropriately applied to MWI.
 
  • #15
I think this thread is interesting. I am going to attempt to answer your question but I am not interested in getting into a debate about hidden variables; so I will leave that part of the discussion aside.

My view is that QM is no more deterministic than flipping a coin is. When I flip a coin once, I really don't know what controls how it lands that one time; all I know is that statistically there will be a certain distribution of results, and that the more times I flip it, the closer I am mathematically guaranteed to approach that distribution. Is that deterministic?
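The coin-flip analogy above can be made concrete with a few lines of simulation (a toy sketch, nothing QM-specific; the function name is mine): any single flip is unpredictable, yet the long-run frequency is pinned down.

```python
import random

def flip_frequency(n_flips, seed=0):
    """Simulate n_flips fair coin flips and return the observed frequency of heads.

    No single flip is predictable, but the frequency converges toward 0.5
    as n_flips grows -- the statistical regularity described above."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

freq_small = flip_frequency(100)        # may deviate noticeably from 0.5
freq_large = flip_frequency(1_000_000)  # very close to 0.5
```

Only the distribution is deterministic here, not any individual outcome -- which is exactly the sense in which the post compares QM to a coin.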

Flipping a coin is random because there is something symmetrical about the coin that makes the probability of landing heads and tails the same. Symmetry is the most important mathematical concept for understanding theoretical physics, and its role in QM is even more profound than in other branches.

jeebs said:
What I was wondering is, is the QM world really not deterministic? I don't mean in the sense of the way we calculate things, I mean in reality.

The question of what "in reality" means is interesting. According to the "orthodox" view, the only reality are things that we can measure from the results of experiments. It has been found by experience that at best we can predict the probability of the outcome of experiments, this is why QM was invented in the first place! If we accept the orthodox viewpoint, then it makes sense to say that "in reality" QM is non-deterministic.

jeebs said:
Take the uncertainty principle situation where you measure a particle's position/momentum by bouncing a photon off that particle, thus affecting its position/momentum and introducing the uncertainty in your calculation.

The thing you need to consider is that if we don't "bounce" photons off our electrons before they move, as they move, or after they move, then we have in principle no idea what they are doing, or even whether they are there at all!

When we first look at an electron, this affects its subsequent motion. And just as importantly, if we have some apparatus which can measure the arrival of an electron, then this will affect that electron's motion before it arrives at the detector! Feynman does a wonderful job of spelling this point out in his Lectures on Physics, Vol. 3.

Landau, who is my favorite author and who subscribes to the orthodox interpretation of QM, has a very interesting concept, in which he says things along the lines of "concept X has no physical meaning". What does it mean for something to "have no physical meaning"? It means that in principle there is no experiment which can ever measure concept X. The idea is that if we can't measure it in principle, it has no physical meaning.

So for example he says, "the concept of the path of a particle in QM has no physical meaning". Now you should know that this statement actually conflicts fundamentally with the hidden variables point of view.

jeebs said:
Imagining two snooker balls colliding, we can be pretty certain about what they are doing because we bounce photons off them to see them, but they are so massive that this observation does not affect them in any significant way. Whether or not we observed, the outcome of the collision is essentially the same.

You need to study the concept of identical particles more carefully. The thing is, I can label the snooker balls, either by painting them a certain color or simply by listing the types of fluctuations of atoms on their surfaces; in any case, the point is I can never label electrons or photons, because in principle there is no way to tell two electrons apart; they are identical in the most profound possible sense. This leads to the resolution of the well-known Gibbs paradox, which you would be well advised to thoroughly understand.

jeebs said:
Say we collided two protons...What if we weren't trying to observe them/predict outcomes? Would that interaction always be happening the same way, given the same initial starting conditions?

No. According to classical mechanics, once we know the initial positions and velocities for all the particles, then in principle the entire motion can be determined. In QM we can only compute probabilities.
 
  • #16
jeebs said:
If I haven't explained this question too well, I'll summarise: If we restarted time right back at the big bang, would everything play out the same way even if we were unable to predict it? As far as my undergrad education has taught me, I'm leaning towards (and hoping for if I'm honest) the answer being yes. Anyone out there who really knows QM inside out that could confirm or deny?

I'd say no, but you'd have a similar looking universe with planets, stars, galaxies etc, just distributed differently. Life may evolve or not, although it seems probable that it would evolve somewhere after 14 billion years.

I think it will eventually be accepted that determinism does not apply at sufficiently microscopic scales (sub-planckian perhaps), but this question is not currently settled by science, so no one can give you a definite answer.
 
  • #17
comote said:
The current state of thinking (fringe elements aside) is that no, it is not. Von Neumann, Bell, and Kochen-Specker all proved that no hidden-variable theory could reproduce the results of quantum mechanics. Kochen-Specker even discounted non-local hidden variables.
This is simply wrong. This is definitely NOT the current state of thinking (even if some, like you, still think so). In particular, the Kochen-Specker theorem does NOT discount non-local hidden variables. It discounts NON-CONTEXTUAL hidden variables. So the current state of thinking is that hidden variables, if they exist, must be non-local and contextual.
 
  • #18
Demystifier said:
In particular, the Kochen-Specker theorem does NOT discount non-local hidden variables. It discounts NON-CONTEXTUAL hidden variables. So the current state of thinking is that hidden variables, if they exist, must be non-local and contextual.

What does contextual mean in this context (no pun intended)? That the measurements are dependent on the context (i.e. measuring apparatus)?
 
  • #19
http://www.mth.kcl.ac.uk/~streater/lostcauses.html#XI

I'll quote:
"most physicists accept the Copenhagen interpretation, in which quantum probability does not obey Einstein's concept of reality, but is both local and non-contextual"

I was not commenting on the correctness of "most physicists" rather the state of thinking of "most physicists". "Most physicists" have not given up on non-contextuality.
 
Last edited by a moderator:
  • #20
inflector said:
What does contextual mean in this context (no pun intended)? That the measurements are dependent on the context (i.e. measuring apparatus)?

As far as I understand contextual means, for instance, this:

You can measure x and y, or you can measure x and py. The probability distribution for x alone can be different in the two cases. In other words, the probability distribution for a given observable in a given state may depend on the complete system of commuting observables in which your observable is embedded. In practice this means that different measuring devices that measure a given physical quantity may give different probability distributions for that quantity. The results of such an experiment may therefore not be confirmed in a different lab. Such phenomena seem to happen once in a while.
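Contextuality in this sense can be made concrete with the Mermin-Peres "magic square" -- a textbook illustration of the Kochen-Specker obstruction, not something from this thread. A short numpy sketch (variable names are mine):

```python
import numpy as np

# Pauli matrices and the single-qubit identity
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# Mermin-Peres square: nine two-qubit observables with eigenvalues +/-1;
# the three observables in any row or any column mutually commute,
# so each row/column is a valid measurement context.
square = [
    [kron(I2, Z), kron(Z, I2), kron(Z, Z)],
    [kron(X, I2), kron(I2, X), kron(X, X)],
    [kron(X, Z),  kron(Z, X),  kron(Y, Y)],
]

I4 = np.eye(4)
row_products = [a @ b @ c for a, b, c in square]
col_products = [square[0][j] @ square[1][j] @ square[2][j] for j in range(3)]

# Every row multiplies to +I, the first two columns to +I, but the third
# column multiplies to -I.  A non-contextual assignment of fixed +/-1
# values to the nine observables cannot satisfy all six constraints:
# each observable sits in exactly one row and one column, so the grand
# product of all nine values would have to equal both +1 and -1.
```

Each observable's value must therefore depend on which context (row or column) it is measured in, which is exactly the sense of "contextual" described above.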
 
  • #21
Hurkyl said:
More general (generally) means less predictive!

Ok, I used "general" with another meaning. I was considering quantum mechanics with Born's rule (the usual formulation), and the same without Born's rule, which is nothing other than a very basic version of Many Worlds, less predictive.

Then, you can build a more elaborate version of many-worlds by adding the postulate that the frequencies of observations along a given universe line are, on average, proportional to the squared modulus of the wave function that described that particular result, divided by the squared modulus of the wave function that described all possible results before the splitting occurred.

The first version can be considered as local, but it does not predict some experimental results that are predicted by the usual quantum mechanics.
The second version is an interpretation that predicts exactly the same things as the usual one, but it is non-local, because it describes a multiverse where, on average, Bell's inequality is violated along most universe lines.

I can demonstrate this statement if you want, but maybe in another thread.

Hurkyl said:
The biggest conceptual obstacle people have, it seems, is that people want to insist that "The universe is in a state where I saw a particular outcome" is true, and that "The universe is in a state that has amplitudes on several components, each of which I see different outcomes" is false -- despite the fact there is no (feasible) experiment that can distinguish between the two.

I never made such a statement that would be, I agree with you, unscientific.

I just said that if a Many-Worlds interpretation embeds some version of Born's rule, EPR statistics must be the same, as well as Bell's inequality violation, and that leads to non-locality.

JesseM and Vanesch have given a Many-Worlds version that avoids this problem and remains completely local here:
https://www.physicsforums.com/showthread.php?t=206291#11
https://www.physicsforums.com/showthread.php?t=207961&page=5#72

And I have then axiomatised it, but these ideas remain unpublished to my knowledge. That's why, until a peer-reviewed publication summarizes all this, I state that a correct Many-Worlds interpretation is non-local.
 
  • #22
inflector said:
What does contextual mean in this context (no pun intended)? That the measurements are dependent on the context (i.e. measuring apparatus)?

It means that the hidden variables on which the measurement results are supposed to depend cannot be some properties of the quantum system measured alone.

If local hidden variables were possible, it would have been natural to suppose that these contextual variables lie in the quantum state of the measuring device, or its immediate environment.
But since they must in addition be non local, their existence remains a mystery.

comote said:
I'll quote:
"most physicists accept the Copenhagen interpretation, in which quantum probability does not obey Einstein's concept of reality, but is both local and non-contextual"

Uh?!
I'd have said, on the contrary, that these probabilities are non-local and contextual, but since they don't obey Einstein's concept of reality, it doesn't matter, as it does not actually violate any physical law.
 
  • #23
  • #24
To my knowledge, the description of an EPR experiment in a relative states interpretation is strictly non-local.
The descriptions given by JesseM and Vanesch are local, but can't be formalized with state vectors while still predicting Bell's inequality violation.
 
  • #25
Locality is seen in the relative states because, through Alice's measurement experiment, the final relative state of the (Alice, Alice's particle) subsystem is completely determined from the initial relative state. One needs only to know the conditions in and near Alice's laboratory to predict how things evolve over time in Alice's laboratory!

The joint (Alice, Alice's particle, Bob, Bob's particle) state cannot be reconstructed from knowledge of both the (Alice, Alice's particle) relative state and the (Bob, Bob's particle) relative state, but that's a different issue.



This is pretty much one of the axioms of local quantum field theory -- for any space-time region U, the relative state of the universal wave-function restricted to the causal completion of U is completely determined by the relative state restricted to just U. (The formulation I linked says this in terms of the observable algebras)
 
  • #26
Hurkyl said:
This is pretty much one of the axioms of local quantum field theory -- for any space-time region U, the relative state of the universal wave-function restricted to the causal completion of U is completely determined by the relative state restricted to just U. (The formulation I linked says this in terms of the observable algebras)

Are these relative states supposed to be normalized?

If so, Bell's inequality is not violated, which contradicts the predictions of quantum mechanics.

If their amplitude is not normalized, then the amplitudes of Bob's device's relative states are trigonometric functions of alpha, the angle chosen by Alice, and the amplitudes of the relative states of Alice's device are trigonometric functions of beta, the angle chosen by Bob.

These trigonometric functions alone carry the information that leads to the experimentally verified prediction that [tex]S = 2\sqrt{2}[/tex]. Without them, you can't violate Bell's inequality.

I didn't know about local quantum field theory, but it seems to me that this is a serious problem with the axiom that you quoted.

The detailed trigonometric functions, as well as the way to get Bell's inequality violation from them, are given here: https://www.physicsforums.com/showthread.php?t=387951&page=3#40
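The role of those trigonometric correlation functions can be checked directly. Below is a numpy sketch (my own notation, not the linked thread's): for the singlet state with spin measurements in the x-z plane, the quantum correlation is E(a, b) = -cos(a - b), and the standard CHSH angles give S = 2√2, above the local-realist bound of 2.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state (|01> - |10>) / sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Quantum correlation <psi| A(a) (x) A(b) |psi>; equals -cos(a - b)."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# Standard CHSH angle choices
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
# S comes out to 2*sqrt(2) ~ 2.828, violating the CHSH bound of 2
```

Replacing E(a, b) with any correlation producible by independent local subsystems caps S at 2, which is the content of the no-go theorems being debated above.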
 
  • #27
Any ket that is not normalized does not represent a quantum state!

Relative states are usually mixed states, and so are not* representable by kets anyways. For the state
[tex]\frac{1}{\sqrt{2}} \left( |00\rangle + |11\rangle \right)[/tex]​
the relative state of the first particle is a classical statistical mixture of weight 50% of the state represented by |0> and weight 50% on the state represented by |1>. (there are many equivalent ways to describe this state in such a fashion) The density matrix of this state in the (|0>, |1>) basis, if you're familiar with such things, is
[tex]\left( \begin{matrix}\frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{matrix} \right)[/tex]​


*: In the usual Hilbert space anyways. I'm pretty sure there are other Hilbert space representations of the state space that contain kets that can represent states like this.
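The reduced density matrix quoted above can be verified with a short partial-trace computation (a numpy sketch; the reshape-and-trace trick is one standard way to take a partial trace):

```python
import numpy as np

# The entangled state (|00> + |11>) / sqrt(2) as a vector in C^4
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # full 4x4 density matrix

# Partial trace over the second qubit: view rho with indices
# (i, j; k, l) for (qubit1, qubit2) and contract j with l.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# rho_A is diag(1/2, 1/2): a 50/50 classical mixture of |0> and |1>,
# exactly the relative state of the first particle described above.
```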
 
  • #28
I agree so far.

In order to predict the experimental result [tex]S = 2\sqrt{2}[/tex], however, we need the non-separable state of both sides, and this state evolves non-locally, the whole instantly becoming a function of both angles [tex]\alpha[/tex] and [tex]\beta[/tex] when Alice and Bob perform their measurements.
 
  • #29
Pio2001 said:
If A and B are two space-time regions separated by a space-like interval, then nothing that is done in A can have an effect in B and conversely.

Let's take the case of classical electromagnetism. We have a group of point charges in A (subsystem A) and another one in B (subsystem B). When you say that something "is done in A" you probably mean that there is an object C (probably a physicist) that interacts with the subsystem A. But in this case we are not discussing the theory of classical electromagnetism, but an experiment that uses an unknown parameter (object C) which falls outside of the scope of classical electromagnetism.

These are compatible not only with Maxwell electromagnetism, but also with special and general relativity.

Yes, but if we limit ourselves to systems that are fully described by the two theories (point charges for EM, point masses for GR) there is no way to "do something" to either of the two subsystems. Each of them will continue to evolve according to the laws of EM or GR. But those laws require that the motion of any point charge/mass is determined by the resultant field of both subsystems, A and B, so they are not, and will never become, independent.
 
  • #30
Pio2001 said:
In order to predict the experimental result [tex]S = 2\sqrt{2}[/tex], however, we need the non-separable state of both sides, and this state evolves non-locally,
The first claim is clear, and the second is not. For the interactions involved, the final relative state of the joint (Alice, Alice's particle, Bob, Bob's particle) system is completely determined by its initial state, is it not?

And in regards to the local structure of space-time, the final state of your experiment is completely determined from information contained in the past light-cone of the events over which the experiment takes place.



There is non-locality involved, but it's in a very different sense -- it's in the fact that one cannot reconstruct the entire (Alice, Alice's particle, Bob, Bob's particle) state using knowledge of just the (Alice, Alice's particle) state and the (Bob, Bob's particle) state.
 
  • #31
ueit said:
When you say that something "is done in A" you probably mean that there is an object C (probably a physicist) that interacts with the subsystem A.

Not necessarily. For example the charges in A can emit an electromagnetic wave by themselves if they are in an initial configuration that has enough potential energy.

ueit said:
But those laws require that the motion of any point charge/mass is determined by the resultant field of both subsystems, A and B so that they are not and they will never become independent.

That's true in electrostatics. In electrodynamics, those laws require that electromagnetic waves travel at a speed equal to c.
 
  • #32
Good grief... it's going to be years before I have any real appreciation of half of this thread. The phrase "can of worms" comes to mind.
 
  • #33
Hurkyl said:
For the interactions involved, the final relative state of the joint (Alice, Alice's particle, Bob, Bob's particle) system is completely determined by its initial state, is it not?

The mechanism that transforms [tex]|O\rangle_i[/tex], the observer and its measurement device in its initial orientation, into [tex]|O\phi\rangle_i[/tex], the observer and its measurement device in its final orientation [tex]\phi[/tex], is not explicit, but yes, we can assume that this is the result of a quantum evolution (with myriads of other sub-universes created meanwhile).

Hurkyl said:
And in regards to the local structure of space-time, the final state of your experiment is completely determined from information contained in the past light-cone of the events over which the experiment takes place.

Yes, but since by definition of an EPR setup the space-time region in which the "events take place" is basically a space-like slice of space-time (the union of the spatially separated A and B regions), looking at its past light cone is not relevant. We can demonstrate that if these events are described by a state vector, then the sub-regions A and B of this space-time region evolve in violation of locality:

After the emission of the particles and before they reach Alice and Bob, our quantum state is

[tex]|O\rangle_1 \otimes |O\rangle_2 \otimes (|+-\rangle - |-+\rangle )[/tex]

Then, Alice and Bob choose new orientations for their detectors. Our state splits because of all the quantum events that turning the devices involve, but one of the resulting copies is

[tex]|O\alpha\rangle_1 \otimes |O\beta\rangle_2 \otimes (|+-\rangle - |-+\rangle )[/tex]

The evolution is local.

Then, after the particles have entered the detectors (and have been destroyed), but before the future light-cones of each measurement meet, our new state is

[tex]f_{++}(\alpha, \beta)(|O\alpha +\rangle_1 \otimes |O\beta +\rangle_2)[/tex]
[tex]+f_{+-}(\alpha, \beta)(|O\alpha +\rangle_1 \otimes |O\beta -\rangle_2)[/tex]
[tex]+f_{-+}(\alpha, \beta)(|O\alpha -\rangle_1 \otimes |O\beta +\rangle_2)[/tex]
[tex]+f_{--}(\alpha, \beta)(|O\alpha -\rangle_1 \otimes |O\beta -\rangle_2)[/tex]

The description of the observer 1 (Alice) has become a function of [tex]\beta[/tex] (the angle chosen by Bob), that is itself the result of the evolution of [tex]|O\rangle_2[/tex] into [tex]|O\beta\rangle_2[/tex], that occurred outside its past light-cone.
 
  • #34
Said more properly, the non-separable object that Alice and Bob represent evolves in a time t that is less than x/c, with x being its spatial extension, which is a violation of special relativity.
 
  • #35
Pio2001 said:
The description of the observer 1 (Alice) has become a function of [tex]\beta[/tex] (the angle chosen by Bob),
What you wrote is* a description of the joint state of both Alice and Bob.

To get a description of Alice only, you have to take a partial trace to eliminate the second component. If I've computed correctly, this is a statistical mixture of (the states named by)
  • [tex] f_{++}(\alpha, \beta) |O\alpha+ \rangle + f_{-+}(\alpha, \beta) |O\alpha- \rangle[/tex]
  • [tex] f_{+-}(\alpha, \beta) |O\alpha+ \rangle + f_{--}(\alpha, \beta) |O\alpha- \rangle[/tex]
weighted with probability 50% on each state.

This could depend on beta. However, doesn't [itex]f_{\cdot \cdot}(\alpha, \beta)[/itex] factor into [itex]g_{\cdot}(\alpha) g_{\cdot}(\beta)[/itex]? In this case, the above state would be independent of [itex]\beta[/itex]. (Because multiplying a ket by a scalar doesn't change what state it represents)


*: I'll take your word for it that the state is pure -- I hate computing partial traces
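Whether Alice's reduced state can depend on [itex]\beta[/itex] is checkable numerically for the singlet case (a sketch; the helper names are mine): a non-selective measurement on Bob's side, at any angle, leaves Alice's reduced density matrix exactly I/2.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def projectors(theta):
    """Projectors onto the +/-1 eigenstates of cos(t) Z + sin(t) X."""
    A = np.cos(theta) * Z + np.sin(theta) * X
    return (I2 + A) / 2, (I2 - A) / 2

def alice_state_after_bob_measures(beta):
    """Alice's reduced density matrix after Bob measures at angle beta
    (outcome unknown to Alice), starting from the singlet state."""
    psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    Pp, Pm = projectors(beta)
    # Non-selective measurement on Bob's side
    rho_after = sum(np.kron(I2, P) @ rho @ np.kron(I2, P) for P in (Pp, Pm))
    # Partial trace over Bob's qubit
    return np.trace(rho_after.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Whatever beta Bob chooses, Alice's marginal stays the maximally mixed I/2
```

This is the no-signalling property: the beta-dependence lives only in the joint correlations, never in either party's local marginal.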
 
