Is Polarisation Entanglement Possible in Photon Detection?

In summary, the conversation discusses the polarization state of a photon before detection and whether it is reasonable to assume that it is in a superposition of all possible states. It is clarified that the polarization state can be a single pure state or a mixture of pure states, and that determining the polarization requires multiple measurements. It is also noted that if a photon is entangled with another photon, the pair is in a pure state while each individual photon is described by a mixed state. There is a discussion of the difference between a superposition and a mixture of opposite states, and it is ultimately concluded that for entangled photons neither photon is individually in a pure state.
  • #211
Demystifier said:
Just because the assignment of probability to a single event is subjective and cannot be checked does not mean it's meaningless. Such a Bayesian subjective assignment of probability may be useful in making decisions. This is something that people do (often intuitively and unconsciously) every day. (For instance, I have to buy shoes for my wedding (and I was never buying wedding shoes before), so have to decide which shop I will visit first. I choose the one for which I estimate a larger probability of finding shoes I will be satisfied with.)
Yes, but buying shoes is not physics.
 
  • #212
zonde said:
You don't have to speak about actualities. Meaning you don't care how to give realistic model of interference.

Well, it's a pure fact that quantum mechanical probabilities involve summing over possibilities. That's the basis for Feynman's path integral formulation, but it's true for any formulation: the probability amplitude [itex]\psi(A,B,t_0, t)[/itex] to go from state [itex]A[/itex] at time [itex]t_0[/itex] to state [itex]B[/itex] at time [itex]t[/itex] is equal to the sum over a complete set of intermediate states [itex]C[/itex] of [itex]\psi(A,C, t_0, t_1) \psi(C, B, t_1, t)[/itex] where [itex]t_0 < t_1 < t[/itex].
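A minimal numerical sketch of this composition rule (the amplitude values are invented purely for illustration, not from the thread): the amplitude from [itex]A[/itex] to [itex]B[/itex] is the sum over the intermediate states [itex]C[/itex] of the products of amplitudes, and the probability is the squared modulus of that sum, which is where interference comes from.

[code=python]
# Toy illustration of composing probability amplitudes over intermediate states.
# The amplitude values are invented; only the structure (sum over C, then |.|^2) matters.
import numpy as np

# hypothetical amplitudes psi(A, C, t0, t1) for two intermediate states C1, C2
psi_A_to_C = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)])
# hypothetical amplitudes psi(C, B, t1, t) for the same two intermediate states
psi_C_to_B = np.array([1 / np.sqrt(2), -1 / np.sqrt(2)])

# composition rule: psi(A, B) = sum over C of psi(A, C) * psi(C, B)
psi_A_to_B = np.sum(psi_A_to_C * psi_C_to_B)

# interference: square the summed amplitude, not the sum of per-path probabilities
p_quantum = abs(psi_A_to_B) ** 2
p_no_interference = np.sum(np.abs(psi_A_to_C * psi_C_to_B) ** 2)

print("amplitude A -> B      :", psi_A_to_B)          # 0.0 (destructive interference)
print("|sum of amplitudes|^2 :", p_quantum)           # 0.0
print("sum of |amplitudes|^2 :", p_no_interference)   # 0.5
[/code]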
 
  • #213
stevendaryl said:
If the actual ensemble is finite (which it always is), then in reality, you have the same problem as single events, which is how to make judgments based on finite data.
If the finite number is large, there is no problem at all - the law of large numbers makes things reliable to a fairly high precision. This is why thermodynamics makes definite predictions, since it averages over ##10^{23}## molecules.

And this is why repeatability is the hallmark of scientific work. If something is not repeatable in 1 out of 1000 cases, one usually ignores the single exception, attributing it to side effects unaccounted for (which is indeed what it boils down to, since we can't know the precise state of the universe, which evolves deterministically).
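To put a rough number on this (a back-of-the-envelope estimate added for illustration, not part of the post): for a sum of [itex]N[/itex] independent molecular contributions, the relative fluctuation of an extensive quantity scales like [itex]1/\sqrt{N}[/itex], so for [itex]N \approx 10^{23}[/itex] one expects relative deviations of order [itex]1/\sqrt{10^{23}} \approx 3 \times 10^{-12}[/itex], far below any realistic measurement accuracy.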
 
  • #214
stevendaryl said:
You assign (subjective) probabilities to initial conditions, and then you evolve them in time using physics, to get derived probabilities for future conditions. There's plenty of physics involved.
But probabilities of single events are meaningless, hence the derived probabilities are as subjective and meaningless as the initial ones. Garbage in, garbage out.

Subjective probabilities for single events cannot be tested since different subjects can assign arbitrary probabilities but there will be only one outcome independent of anyone's probability.

And under repetition, there will be only one relative frequency, and among all subjective probabilities only those are scientific that match the observed relative frequency within the statistically correct uncertainty. All others are unscientific though subjectively they are allowed. Therefore subjective probability is simply prejudice, sometimes appropriate and sometimes inappropriate to the situation.

Whereas physics is about what really happens, independent of our subjective impressions.
 
  • #215
A. Neumaier said:
If the finite number is large, there is no problem at all - the law of large numbers makes things reliable to a fairly high precision.

I disagree. There are many aspects to assessing data that are subjective. Is it really the case that it is an ensemble of identically prepared systems? Is the system really in equilibrium?

I think you're wrong on two counts: (1) that subjectivity makes it nonscientific, and (2) that it is possible to eliminate subjectivity. I don't think either is true.
 
  • #216
A. Neumaier said:
But probabilities of single events are meaningless, hence the derived probabilities are as subjective and meaningless as the initial ones. Garbage in, garbage out.

I'm saying that they are not meaningless, and in fact it is inconsistent to say that they are meaningless. If probabilities for single events are meaningless, then probabilities for 10 events are meaningless, and probabilities for 10,000 events are meaningless. Any finite number of events would be equally meaningless.

Garbage in: Probabilities for single events are meaningless.
Garbage out: Probabilities for any finite number of events are meaningless.
 
  • #217
For me the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely.
 
  • #218
Mentz114 said:
For me the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely.
But the number of tosses can increase only to one, as otherwise one has multiple coin tosses.
 
  • #219
A. Neumaier said:
But the number of tosses can increase only to one, as otherwise one has multiple coin tosses.
You misunderstand what I wrote. I amend it thus

... the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely, if I performed this.

It is the empirical definition of probability. I thought it was standard.
 
  • #220
Mentz114 said:
For me the probability of a single coin toss giving heads is the limit of the ratio (number of heads)/(number of tosses) as the number of tosses increases indefinitely.

But (1) there is no guarantee there is such a limit, and (2) we can't actually measure the limit; we can only approximate it with a large but finite number.
 
  • #221
stevendaryl said:
But (1) there is no guarantee there is such a limit, and (2) we can't actually measure the limit; we can only approximate it with a large but finite number.
1) if there is no limit then the distribution has no first moment, i.e. <x> is undefined and no predictions are possible (for instance, the Cauchy pdf)
2) Yes. Just like ##\pi## we can only get estimates.
 
Last edited:
  • #222
Mentz114 said:
if I performed this.
The result of unperformed tosses cannot be observed, and if you performed more than one toss you are no longer talking about a single coin toss.
 
  • #223
Mentz114 said:
It is the empirical definition of probability. I thought it was standard.
The empirical definition of probability applies only in the case where many repetitions are performed - in physicists' terms, for an ensemble; in statisticians' terms, for a large sample of i.i.d. realizations.
 
  • #224
Mentz114 said:
1) if there is no limit then the distribution has no first moment, i.e. <x> is undefined and no predictions are possible (for instance, the Cauchy pdf)

What I mean is that there is no guarantee that when flipping a coin repeatedly that the relative frequency of "heads" approaches any kind of limit. What you can say is that if the probability of a coin flip yielding "heads" is [itex]p[/itex], then the probability that [itex]N[/itex] independent coin flips will yield a relative frequency of heads much different from [itex]p[/itex] goes to zero, in the limit as [itex]N \rightarrow \infty[/itex]. In other words, if you flip a coin many, many times, you will probably get a relative frequency that is close to the probability, but it's not a guarantee.
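A quick simulation of this point (a sketch added for illustration, not stevendaryl's own code): repeat the "flip a fair coin [itex]N[/itex] times" experiment many times and look at the spread of the observed relative frequencies. Most runs land close to [itex]p[/itex], with a spread close to [itex]\sqrt{p(1-p)/N}[/itex], but nothing forbids an individual run from landing farther out.

[code=python]
# Sketch: spread of the relative frequency of heads over many repeated experiments.
# Illustrates concentration around p (weak law of large numbers) without any
# guarantee for a single run. Uses pseudo-random numbers; the seed is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
p, N, runs = 0.5, 1000, 10_000

flips = rng.random((runs, N)) < p        # each row is one experiment of N fair-coin flips
rel_freq = flips.mean(axis=1)            # relative frequency of heads per experiment

print("mean relative frequency     :", rel_freq.mean())
print("empirical std of rel. freq. :", rel_freq.std())
print("theoretical sqrt(p(1-p)/N)  :", np.sqrt(p * (1 - p) / N))
print("runs with |m/N - p| > 0.05  :", int(np.sum(np.abs(rel_freq - p) > 0.05)))
[/code]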
 
  • #225
A. Neumaier said:
The result of unperformed tosses cannot be observed, and if you performed more than one toss you are no longer talking about a single coin toss.
But I am talking about a single coin toss. It makes sense to me to define the single toss probability in terms of an ensemble of coins.
 
  • #226
stevendaryl said:
What I mean is that there is no guarantee that when flipping a coin repeatedly that the relative frequency of "heads" approaches any kind of limit. What you can say is that if the probability of a coin flip yielding "heads" is [itex]p[/itex], then the probability that [itex]N[/itex] independent coin flips will yield a relative frequency of heads much different from [itex]p[/itex] goes to zero, in the limit as [itex]N \rightarrow \infty[/itex]. In other words, if you flip a coin many, many times, you will probably get a relative frequency that is close to the probability, but it's not a guarantee.
Have you got some equations to back this up?
I guess I'll just have to ride my luck.
 
  • #227
Mentz114 said:
But I am talking about a single coin toss. It makes sense to me to define the single toss probability in terms of an ensemble of coins.
Then it is a property of the latter but not of the former.

It is like defining the color of a single bead in terms of the colors of an ensemble of different unseen beads. In which sense is this a definition that applies to the single bead?
 
  • #228
Mentz114 said:
Have you got some equations to back this up?
I guess I'll just have to ride my luck.

It's pretty standard. If you make [itex]N[/itex] trials, each with probability of success [itex]p[/itex], then the probability that you will get [itex]m[/itex] successes is:

[itex]p_{m,N} = \frac{N!}{m! (N-m)!} p^{m} (1-p)^{N-m}[/itex]

Now, write [itex]m = N (p + x)[/itex]. Using Stirling's approximation, we can estimate this for small [itex]x[/itex] to be:

[itex]p_{m,N} \approx \frac{1}{\sqrt{2 \pi}\, N \sigma}\, e^{- \frac{x^2}{2 \sigma^2}}[/itex]

where (if I've done the calculation correctly) [itex]\sigma = \sqrt{\frac{p(1-p)}{N}}[/itex]

If [itex]N[/itex] is large, the probability distribution for [itex]x[/itex] (which measures the departure of the relative frequency from [itex]p[/itex]) approaches a strongly peaked Gaussian, where the standard deviation [itex]\sigma \rightarrow 0[/itex].
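A numerical check of this approximation (a sketch added for illustration, not part of the original post; the values of [itex]p[/itex] and [itex]N[/itex] are arbitrary): compare the exact binomial probabilities with the corresponding Gaussian in [itex]x = m/N - p[/itex].

[code=python]
# Sketch: de Moivre-Laplace (Stirling) approximation of the binomial distribution.
# p and N below are arbitrary illustrative values.
import numpy as np
from scipy.stats import binom, norm

p, N = 0.3, 500
m = np.arange(N + 1)
x = m / N - p                              # departure of the relative frequency from p
sigma = np.sqrt(p * (1 - p) / N)           # standard deviation of m/N

exact = binom.pmf(m, N, p)
# Gaussian density in x, divided by N to convert it into a probability per unit step in m
approx = norm.pdf(x, loc=0.0, scale=sigma) / N

print("max |exact - approx| :", np.max(np.abs(exact - approx)))
print("exact  P(m = Np)     :", binom.pmf(int(N * p), N, p))
print("approx P(m = Np)     :", norm.pdf(0.0, scale=sigma) / N)
[/code]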
 
  • #229
A. Neumaier said:
It is like defining the color of a single bead in terms of the colors of an ensemble of different unseen beads. In which sense is this a definition that applies to the single bead?
If we choose a bead at random from N beads, then the probability of our selection being of color n is (number of beads of color n)/N.

Note that these definitions are not subjective.
 
  • #230
stevendaryl said:
It's pretty standard. If you make [itex]N[/itex] trials, each with probability of success [itex]p[/itex], then the probability that you will get [itex]m[/itex] successes is: ... If [itex]N[/itex] is large, the probability distribution for [itex]x[/itex] (which measures the departure of the relative frequency from [itex]p[/itex]) approaches a strongly peaked Gaussian, where the standard deviation [itex]\sigma \rightarrow 0[/itex].

If I recall correctly, the sample mean of a random sample from a Gaussian pdf is the maximum likelihood estimator of the mean ##\mu## and is also unbiased. So the expected value of ##x## is zero.

I have not checked the bias of the binomial estimator ##m/N## but in the large sample limit I'll bet (:wink:) it is unbiased also.
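For the record (standard binomial algebra, not something checked in the thread): the estimator [itex]m/N[/itex] is exactly unbiased for every sample size, since [itex]E[m/N] = Np/N = p[/itex]; its variance is [itex]p(1-p)/N[/itex], which is just the [itex]\sigma^2[/itex] appearing in post #228, so the estimator also concentrates around [itex]p[/itex] as [itex]N[/itex] grows.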
 
  • #231
Mentz114 said:
If we choose a bead at random from N beads, then the probability of our selection being of color n is (number of beads of color n)/N.

Note that these definitions are not subjective.
But this is a definition for the probability of selecting an arbitrary bead from the N beads at random. Thus it is a property of the ensemble, not of any particular bead; in particular not of the bead that you have actually drawn (since this one has a definite color).

Consider the probability of a man (heavy smoker, age 60) to die of cancer within the next 5 years. If you take him to be a member of the ensemble of all men, you get a different probability than if you take him to be a member of the ensemble of all heavy smokers, another probability if you take him to be a member of all men of age 60, and yet another probability if you take him to be a member of the ensemble of all heavy smokers of age 60. But it is always the same man. This makes it clear that the probability belongs to the ensemble considered and not to the man.
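A toy simulation of this reference-class point (every rate and parameter below is invented purely for illustration, not real statistics): the same simulated individual gets a different probability estimate depending on which ensemble he is counted as a member of.

[code=python]
# Toy illustration of the reference-class point: the estimated probability depends on
# the ensemble the individual is assigned to. Every number here is invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

age = rng.integers(20, 90, n)                     # hypothetical ages
smoker = rng.random(n) < 0.25                     # hypothetical fraction of heavy smokers
risk = 0.01 + 0.001 * (age - 20) + 0.05 * smoker  # invented 5-year risk model
dies = rng.random(n) < risk

def rate(mask):
    """Relative frequency of the event within the chosen sub-ensemble."""
    return dies[mask].mean()

print("all men               :", rate(np.ones(n, dtype=bool)))
print("heavy smokers         :", rate(smoker))
print("men aged 60           :", rate(age == 60))
print("heavy smokers aged 60 :", rate(smoker & (age == 60)))
[/code]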
 
  • #232
Mentz114 said:
If I recall correctly, the sample mean of a random sample from a Gaussian pdf is the maximum likelihood estimator of the mean ##\mu## and is also unbiased. So the expected value of ##x## is zero.

Right, that's the way I defined it. [itex]x =\frac{m}{N} - p[/itex]. So [itex]x=0[/itex] corresponds to the relative frequency [itex]\frac{m}{N}[/itex] being equal to the probability [itex]p[/itex].

Anyway, the point is that when [itex]N[/itex] is large, [itex]\frac{m}{N}[/itex] is very likely to be nearly equal to [itex]p[/itex]. But there is no guarantee.
 
  • #233
A. Neumaier said:
But this is a definition for the probability of selecting an arbitrary bead from the N beads at random. Thus it is a property of the ensemble, not of any particular bead; in particular not of the bead that you have actually drawn (since this one has a definite color).

Consider the probability of a man (heavy smoker, age 60) to die of cancer within the next 5 years. If you take him to be a member of the ensemble of all men, you get a different probability than if you take him to be a member of the ensemble of all heavy smokers, another probability if you take him to be a member of all men of age 60, and yet another probability if you take him to be a member of the ensemble of all heavy smokers of age 60. But it is always the same man. This makes it clear that the probability belongs to the ensemble considered and not to the man.
Naturally this is entirely correct. So it is sensible to talk about a single case when the ensemble is specified.

The ensemble of identically tossed identical coins is one ensemble and its members 'inherit' from only this ensemble. So obviously a probability distribution belongs to the ensemble, but describes the individuals. So it is sensible to talk about an individual.

The statement "it is nonsense to ascribe probability to a single event" is too extreme for me.

Likewise @stevendaryl's assertion that subjective probabilities are essential in physics.
 
  • #234
Mentz114 said:
The statement "it is nonsense to ascribe probability to a single event" is too extreme for me.

Then you're probably more of a Bayesian at heart o0)

I think it's meaningful to talk about probabilities of single events too. It seems to be a common position of so-called frequentists to assert that the probability of a single event is meaningless. I have no idea why a statement like "the probability that a photon is detected in this output arm of my 50:50 beamsplitter when I input a single photon is 1/2" should be considered to be meaningless.

Of course if we want to experimentally determine a probability then a single event is somewhat useless, and we're going to need lots of trials. But I don't see why that should prevent us from talking meaningfully about probabilities applied to single events.

Getting a precise technical definition of probability (or perhaps more specifically randomness) is also, surprisingly perhaps, non-trivial and essentially recursive as far as I can see.

David MacKay discusses these issues and gives some great examples of the Bayes vs. Frequency approaches in his fantastic book "Information Theory, Inference and Learning Algorithms" which you can read online

http://www.inference.phy.cam.ac.uk/itprnn/book.pdf
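A small sketch of the beamsplitter number quoted above (added for illustration; the symmetric 50:50 unitary below is one common convention, not the only one): the single-photon amplitude is split with weight [itex]1/\sqrt{2}[/itex] into each output arm, so the detection probability in either arm is [itex]|1/\sqrt{2}|^2 = 1/2[/itex].

[code=python]
# Sketch: single photon on a 50:50 beamsplitter, symmetric convention for the unitary.
import numpy as np

U = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])   # 50:50 beamsplitter acting on the two modes

psi_in = np.array([1.0, 0.0])                # photon in input arm a, nothing in arm b
psi_out = U @ psi_in

print("output amplitudes               :", psi_out)
print("detection probabilities (c, d)  :", np.abs(psi_out) ** 2)   # [0.5, 0.5]
[/code]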
 
  • #235
A. Neumaier said:
But probabilities of single events are meaningless, hence the derived probabilities are as subjective and meaningless as the initial ones. Garbage in, garbage out.

Subjective probabilities for single events cannot be tested since different subjects can assign arbitrary probabilities but there will be only one outcome independent of anyone's probability.

And under repetition, there will be only one relative frequency, and among all subjective probabilities only those are scientific that match the observed relative frequency within the statistically correct uncertainty. All others are unscientific though subjectively they are allowed. Therefore subjective probability is simply prejudice, sometimes appropriate and sometimes inappropriate to the situation.

Whereas physics is about what really happens, independent of our subjective impressions.
This is also a wrong argument you hear very often. Just because the notion of state in QT has a probabilistic meaning doesn't mean that the association of a state with a physical situation is subjective. This becomes clear when you step back from the formalism for a moment and think about what the state means concretely in the lab: in an operational sense it's an equivalence class of preparation procedures, and the preparation can completely determine the state, which is described in the formalism by a pure state. That means that by your preparation procedure you determine a complete set of compatible observables with certain values. This is possible only for very simple systems, e.g., the protons in the LHC, which have a pretty well-determined momentum. Already their polarization is not determined, so you do not have a complete preparation of the proton state, but you describe them as unpolarized. This can, of course, be checked in principle. If you find a polarization, you correct your probabilistic description, but it's not subjective. You can always gain information about a system (sometimes implying that you change the state due to the interaction between measurement apparatus and system, which is necessary to gain the information you want).

Other systems, particularly macroscopic many-body systems, are very difficult to prepare in a pure state, and thus you associate mixed states based on the (incomplete) information you have. Here the choice of the statistical operator is not unique, but you can use objective concepts to determine one, e.g., the maximum-entropy principle, which associates the state of "least prejudice" taking into account the constraints given by the available information on the system. Whether this state is a good guess or not is again subject to observation, i.e., you can test the hypothesis with clear, objective statistical methods and refine it. E.g., if you have a cup of tea on your desk, sitting there for a while so that at least it's not moving anymore, it's a good hypothesis to assume that it is in (local) thermal equilibrium. Then you measure its temperature (maybe even at different places within the cup) and check whether the hypothesis is good or not. You can also determine the temperature of the surroundings to see whether the cup of tea is even in equilibrium with the rest of your office, and so on.

I'd rather call it "uncertainty" than "subjectivity" when you don't have complete information to determine the state. In the end, experiments and careful observations always have to verify your "educated guess" about the association of a statistical operator with the real situation in nature. Physics is an empirical (and objective!) natural science!
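A minimal sketch of the maximum-entropy assignment mentioned here (added for illustration; the two-level Hamiltonian and the target mean energy are invented): given only the constraint [itex]\langle H \rangle = E[/itex], the least-prejudiced state is the Gibbs form [itex]\hat{\rho} \propto \exp(-\beta \hat{H})[/itex], with [itex]\beta[/itex] fixed by the constraint.

[code=python]
# Sketch: maximum-entropy (least-prejudice) state of a two-level system given only <H> = E.
# The solution is rho = exp(-beta*H)/Z with beta chosen to reproduce the constraint.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import brentq

H = np.diag([0.0, 1.0])      # hypothetical two-level Hamiltonian (energies 0 and 1)
E_target = 0.2               # hypothetical measured mean energy

def gibbs_state(beta):
    rho = expm(-beta * H)
    return rho / np.trace(rho)

def mean_energy(beta):
    return float(np.trace(gibbs_state(beta) @ H).real)

# <H>(beta) is monotonic in beta, so a simple bracketing root-finder fixes beta
beta = brentq(lambda b: mean_energy(b) - E_target, -50.0, 50.0)

rho = gibbs_state(beta)
probs = np.diag(rho).real
entropy = -np.sum(probs * np.log(probs))     # von Neumann entropy (rho is diagonal here)

print("beta            :", beta)
print("diagonal of rho :", probs)
print("check <H>       :", mean_energy(beta))
print("entropy         :", entropy)
[/code]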
 
  • #236
A. Neumaier said:
The empirical definition of probability applies only in the case where many repetitions are performed - in physicists' terms, for an ensemble; in statisticians' terms, for a large sample of i.i.d. realizations.
Sure, that's why I never understood all this talk about Bayesianism, let alone its extreme form in QT, known as QBism ;-)). If I want to test a probabilistic statement, I have to "collect enough statistics" to test the hypothesis. That's the easy part of empirical science: you repeat the experiment on a large sample of equally prepared (as well as you can, at least) objects and measure the observables in question as well as you can to test the hypothesis. Much more complicated is the reliable estimate of the systematic errors ;-).
 
  • #237
A question for @A. Neumaier:

Suppose I perform a measurement about which I have no theoretical knowledge, except that only two results are a priori possible: result A and result B. Suppose that I repeat the measurement 10 times and get A each time. Now I want to use this result to make a prediction about future measurements. What is the confidence that I will get A when I perform the measurement next time?

Now consider a variation in which I perform only one measurement and get A. What is now the confidence that I will get A when I perform the measurement next time?
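One standard Bayesian way to put numbers on these two scenarios (offered purely as an illustration, not as @A. Neumaier's answer): with a uniform prior over the unknown probability of A, the posterior predictive probability of getting A on the next trial after [itex]k[/itex] A's in [itex]n[/itex] trials is [itex](k+1)/(n+2)[/itex] (Laplace's rule of succession), i.e. about 0.92 after 10 out of 10 and about 0.67 after 1 out of 1; a strict frequentist would instead decline to assign a single-case probability at all.

[code=python]
# Sketch: Laplace's rule of succession for the two scenarios in the question above.
# Assumes a uniform Beta(1, 1) prior over the unknown probability of outcome A.
from scipy.stats import beta

def predictive_prob_A(k, n, prior_a=1.0, prior_b=1.0):
    """Posterior predictive P(next result = A) after k A's in n trials."""
    return beta(prior_a + k, prior_b + n - k).mean()

print("10 A's in 10 trials ->", predictive_prob_A(10, 10))   # 11/12 ~ 0.917
print(" 1 A  in  1 trial   ->", predictive_prob_A(1, 1))     #  2/3 ~ 0.667
[/code]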
 
  • #238
Simon Phoenix said:
I have no idea why a statement like "the probability that a photon is detected in this output arm of my 50:50 beamsplitter when I input a single photon is 1/2" should be considered to be meaningless.
This is indeed meaningful since, according to standard grammar, "a photon" is an anonymous photon from an ensemble, just as "a person" doesn't specify which person.

Once one carefully defines the language one gets rid of many of the apparent paradoxes caused by sloppy conceptualization. See also the thread Quantum mechanics is not weird, unless presented as such.
 
Last edited:
  • #239
Mentz114 said:
So obviously a probability distribution belongs to the ensemble, but describes the individuals.
It describes the anonymous individuals collectively (as an ensemble) but no single one. To use the property of the ensemble for a particular case is a common thing but has no basis in the formalism and therefore leads to paradoxes when pushed to the extreme.
 
  • #240
For ensembles we have statistics. Probability is a model for the individual case, based on the statistics of the ensemble.
 
  • #241
zonde said:
For ensembles we have statistics. Probability is a model for the individual case, based on the statistics of the ensemble.
No. Probability is the theoretical tool in terms of which statistics is formulated. For individual cases we just have observations, together with a sloppy (or subjective) tradition of misusing the notion of probability.
 
  • #242
A. Neumaier said:
No. Probability is the theoretical tool in terms of which statistics is formulated. For individual cases we just have observations, together with a sloppy (or subjective) tradition of misusing the notion of probability.

The subjective treatment of probability is anything but sloppy. It's much more careful than the usual frequentist approach.
 
  • #243
zonde said:
By physical collapse you mean that measurement of Alice's photon changes Bob's photon polarization? Meaning that if initially we model Bob's mixed state as a statistical mixture of the orthogonal pure states H/V, then after Alice's measurement in the H'/V' basis Bob's mixed-state components change to the H'/V' basis, right?
Let's say that Alice always "measures" first. Then when the photon pair interacts with her polarizer, it prepares the state for both Alice and Bob. I think Simon has said more or less the same thing.

Interestingly, I saw yesterday a Danish TV program from 2013 whose main message seemed to be that people should just accept the non-locality a la Bohr. I did not recognize the other speakers, but they did have Zeilinger talking there. They also had "Bohr" and "Einstein" traveling back and forth on a train, discussing Bohr's ideas and whether the moon is there when nobody is looking. "Bohr" just said that "Einstein" can't prove it.
 
  • #244
vanhees71 said:
the preparation can completely determine the state, which is described in the formalism by a pure state.
In most cases, when the model is sufficiently accurate, only by a mixed state. Whatever is prepared, the state is objectively given by the experimental setting. No subjective interpretation enters, except for the choice of a level of detail and accuracy with which the situation is modeled.
vanhees71 said:
the protons in the LHC which have a pretty well-determined momentum
Even the state of the protons will generally be mixed, since their position/momentum uncertainty is larger than that required for a pure state.
vanhees71 said:
you associate mixed states based on the (incomplete) information you have.
No. Otherwise the state would change if the experimenter gets a stroke and forgets the information, and the assistant who completes the experiment has not yet read the experimental logbook where this information was recorded.

One associates mixed states based on the knowledge (or hope) that these mixed states correctly describe the experimental situation. The predictions with a mixed state will be correct if and only if this mixed state actually describes the experiment, and this is completely independent of the knowledge various people have.

Introducing talk about knowledge introduces a nonscientific subjective aspect into the setting that is completely spurious. What counts is the knowledge that Nature has, not that of any one of the persons involved in an experiment. Whose knowledge should count in the case of collision experiments at CERN, where most experimental information is gathered completely automatically and nobody ever looks at all the details?
 
  • #245
A. Neumaier said:
In most cases, when the model is sufficiently accurate, only by a mixed state. Whatever is prepared, the state is objectively given by the experimental setting. No subjective interpretation enters, except for the choice of a level of detail and accuracy with which the situation is modeled.

That's like saying no subjective interpretation enters, other than the parts that are subjective.
 