Hanbury Brown and Twiss effect explanation

In summary, the Hanbury Brown and Twiss effect is a clever way to measure the variance of the photon number distribution of your light field.
  • #1
inkskin
I've visited a great many sites and looked at papers to fully understand this, and still have some confusion regarding the Hanbury Brown and Twiss effect.

Classically speaking, we treat the light as waves. The math yields a correlation function which boils down to a constant term and a cosine term. Then what?

Quantum mechanically speaking, the wave function reaching either detector is ##\frac{1}{\sqrt{2}}\left(A(1)B(2)+B(1)A(2)\right)##, the A's and B's being the wave functions of the photons on paths 1 and 2 at detectors A and B. Is this the reason - that the wave function is a superposition of states? I don't quite understand this fully. Or is it because bosons have unit spin, and hence must have a symmetric (spatial) wave function (thus leading to bunching), with the opposite for fermions?

Also, does the light have to be incoherent for it to work? Something to do with lasers having a Poisson distribution, and one needing super-Poissonian (incoherent) light for the experiment. Why?

I realize this is a lot of questions, but I'm utterly confused. Any help would be appreciated. Thanks
 
  • #2
The form of the wave function you wrote is due to symmetrization for bosons. There is the opposite effect (antisymmetrization) for fermions.

The Hanbury Brown and Twiss effect doesn't work for lasers, because the derivation of the effect assumes that the individual emitters in the source are uncorrelated (chaotic), which is not the case for a laser.

Take a look at sections IV and V of http://arxiv.org/abs/nucl-th/9804026 .
 
  • #3
inkskin said:
Also, does the light have to be incoherent for it to work? Something to do with lasers having a Poisson distribution, and one needing super-Poissonian (incoherent) light for the experiment. Why?

The HBT experiment is more or less a clever way to measure the variance of the photon number distribution of your light field. Or using a different wording: One looks for correlations.

To check whether there are correlations present, one will typically compare the joint detection rates to the joint detection rate expected if there are no correlations present - the latter will just be the product of the mean detection rates n. Assume for simplicity that we simply sample the same field twice. Then this ratio will be:

[tex]g^{(2)}=\frac{\langle: n^2 :\rangle}{\langle n \rangle^2}.[/tex]

This is the correlation function. A value of 1 will indicate the absence of correlations. The thing you measure in the HBT-effect is the normal-ordered correlation function (the two ":" are there to indicate that). Normal-ordering of the underlying photon operators just means that you correctly incorporate the effect of the detection of the first photon: the light field will now have one photon less:

[tex]g^{(2)}=\frac{\langle n (n-1) \rangle}{\langle n \rangle^2}.[/tex]
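
Spelled out in terms of the photon operators (with ##n = a^\dagger a## and the commutator ##[a,a^\dagger]=1##), this normal ordering is just

[tex]\langle : n^2 : \rangle = \langle a^\dagger a^\dagger a a \rangle = \langle a^\dagger a \, a^\dagger a \rangle - \langle a^\dagger a \rangle = \langle n^2 \rangle - \langle n \rangle = \langle n(n-1) \rangle,[/tex]

which is where the ##n(n-1)## comes from.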

Now the instantaneous photon number will be the mean photon number with some deviation added:

[tex]g^{(2)}=\frac{\langle (\langle n \rangle +\delta) (\langle n \rangle +\delta -1) \rangle}{\langle n \rangle^2}.[/tex]

Now you can evaluate all these terms. The mean value of the deviation should vanish. The expectation value of the square of the deviation survives. This is the variance of the photon number distribution. That leaves us with three surviving terms:

[tex]g^{(2)}=\frac{\langle n \rangle^2 +\langle \delta^2 \rangle- \langle n\rangle }{\langle n \rangle^2}=1+\frac{\langle \delta^2 \rangle}{\langle n \rangle^2}-\frac{1}{\langle n \rangle}.[/tex]
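
For clarity, the expansion behind this uses ##\langle \delta \rangle = 0##:

[tex]\langle (\langle n \rangle +\delta) (\langle n \rangle +\delta -1) \rangle = \langle n \rangle^2 - \langle n \rangle + (2\langle n \rangle -1)\langle \delta \rangle + \langle \delta^2 \rangle = \langle n \rangle^2 + \langle \delta^2 \rangle - \langle n \rangle.[/tex]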

Now one can perform a sanity check and evaluate this result for three typical states of the light field:

a) A single photon. A non-classical single photon state has no noise at all and a photon number of 1. This will leave us with a g2 of 0. The correlation is negative. If you have exactly one photon and detect it, it is gone and you will not be able to detect another photon afterwards.

b) Thermal light. Thermal light follows the Bose-Einstein distribution. For a mean of <n> it has a variance of <n>^2 + <n>. That leaves us with a value of 2. The detection events are correlated.

c) Coherent light. This follows a Poissonian distribution. For a mean value of <n>, this distribution has a variance of <n> (its standard deviation is sqrt(<n>)). Inserting that into g2, you will find that the last two terms cancel exactly and you are left with a value of 1. The shot noise contribution exactly cancels the contribution you get from destroying a photon with the first detection. This is related to the fact that coherent states are eigenstates of the photon annihilation operator: they are immune to loss. So the detections will be completely uncorrelated for laser light.
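
To make this sanity check concrete, here is a minimal numerical sketch (just an illustration, not part of the argument above): it samples photon numbers from the three distributions and estimates ##g^{(2)} = \langle n(n-1) \rangle / \langle n \rangle^2## directly. The mean photon number of 3 is an arbitrary choice.

[code]
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000   # number of sampled pulses / time bins
mean_n = 3.0    # arbitrary mean photon number for the thermal and coherent cases

def g2(n):
    """Estimate g2 = <n(n-1)> / <n>^2 from sampled photon numbers."""
    return np.mean(n * (n - 1)) / np.mean(n) ** 2

# a) single photons: exactly one photon per pulse
single = np.ones(N)

# b) thermal (Bose-Einstein) light: shifted geometric photon number distribution
p = 1.0 / (1.0 + mean_n)            # gives mean (1 - p) / p = mean_n
thermal = rng.geometric(p, N) - 1   # numpy's geometric starts at 1, shift to 0

# c) coherent (laser) light: Poissonian photon number distribution
coherent = rng.poisson(mean_n, N)

print("single photon g2 ~", g2(single))    # -> 0
print("thermal g2       ~", g2(thermal))   # -> 2
print("coherent g2      ~", g2(coherent))  # -> 1
[/code]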

The interpretation of g2 is straightforward. It gives you the probability of detecting a photon pair, normalized to the same probability for a light field with the same mean photon number but all photons being statistically completely independent.

So the HBT effect does not work for laser light because the two light beams are uncorrelated for laser light, while you need the classical correlations of thermal light to see the HBT effect in terms of bunching.

There are more ways of getting the same result, for example the interference approach first introduced by Fano, but in my opinion it is easier to first digest the simple scenario given above before going to more complicated explanations.
 
  • #4
Thank you so much for your responses. They have been incredibly useful and have cleared up some doubts. However, one thing I am still not absolutely certain of is why fermions and bosons act differently. Which property of their wave function is responsible?
Also, in setting up the HBT experiment, the ideal source would be a single photon source, right? However, if I'm unable to obtain that, can a laser (in spite of being coherent) be attenuated with filters to give very few photons and hence be used as a source? Can any other kind of laser be used? Or some other source? And while collecting the data, experimentally, do they all display bunching?
 
  • #5
inkskin said:
Also, in setting up the HBT experiment, the ideal source would be a single photon source, right? However, if I'm unable to obtain that, can a laser (in spite of being coherent) be attenuated with filters to give very few photons and hence be used as a source? Can any other kind of laser be used? Or some other source? And while collecting the data, experimentally, do they all display bunching?

G. I. Taylor was able to get down to single photon levels by very simple and inexpensive means ... in 1908! See http://spiff.rit.edu/classes/phys314/lectures/dual3/dual3.html
 
  • #6
inkskin said:
Thank you so much for your responses. They have been incredibly useful and have cleared up some doubts. However, one thing I am still not absolutely certain of is why fermions and bosons act differently. Which property of their wave function is responsible?

When you evaluate the probability amplitudes for indistinguishable two-particle detection events (like two indistinguishable photons or electrons being detected at two different detectors), you get an interference term which contains a commutator. As the symmetry properties of Bosons and Fermions are different (symmetric vs. antisymmetric wave functions), this term will get a different sign for Bosons and Fermions. For Bosons, indistinguishable pathways leading to the same final result will interfere constructively, while you get destructive interference for Fermions.

inkskin said:
Also, in setting up the HBT experiment, the ideal source would be a single photon source, right?

No, this is pretty much the worst source you can use, at least if you want to see the HBT effect in terms of photon bunching. If you have a look at my last post, you will see that single photons show a g2 of 0, which is antibunching - the exact opposite of the HBT effect. This result is pretty trivial: a single photon will never be detected at two detectors simultaneously, so you will never get simultaneous clicks in the detectors. So pretty much the only thing an HBT setup can do with single photons is show that they really are single photons. This is a standard way of using the setup in labs all around the world, as this experiment is the gold standard for showing that photons are particles and that one really has a single photon source, but you will not get photon bunching that way.

inkskin said:
However, if I'm unable to obtain that, can a laser (in spite of being coherent) be attenuated with filters to give very few photons and hence be used as a source? Can any other kind of laser be used? Or some other source? And while collecting the data, experimentally, do they all display bunching?

No, an attenuated laser will never give you single photons. A real single photon source will give you one photon at most at a single time. A weak laser can give you one photon on average, but you can never switch off the fluctuations.

And: no, as I wrote in my last post, only thermal light will show bunching (g2=2). Laser light and single photons will not show that. However, most readily available thermal light sources are spectrally broad and thus have a short coherence time, so typical detector time resolutions are too coarse to see it.

The most practical approach for creating a good light source to see the HBT effect is a pseudothermal Martienssen lamp. You take some laser light (work safely - it may be dangerous) and focus it onto a ground glass disk. You will see a characteristic speckle pattern. Now mount the ground glass disk on a rotating motor (those designed for model planes work well) and rotate it. You will now see a rotating speckle pattern. If you place a narrow pinhole so that you pick out only a very small part of the scattered light, this light field will behave exactly like thermal light and will show photon bunching. The coherence time will depend mostly on how fast the ground glass disk is rotated. Typically you will get coherence times in the low microsecond range.
 
  • #7
Cthugha said:
Assume for simplicity that we simply sample the same field twice. Then this ratio will be:

[tex]g^{(2)}=\frac{\langle: n^2 :\rangle}{\langle n \rangle^2}.[/tex]

If you were NOT to sample the same field, and if you were to look at the 2 detectors for joint detection, then <:n^2:> wouldn't become <n(n-1)>, right? Because the mean detection rate at each detector is still n. The n(n-1) comes in after the detection of one photon, right? So for the detection of the second photon we look to the second detector, at which a photon hasn't been detected before, and hence it still gives n. And the first detector also gives n, and it's not necessary to account for the previous detection of a photon there. We're only looking for coincidences in the second detector, now that the first has already detected a photon.

Also, if the value of the correlation function is 2, why does this imply correlation?
 
  • #8
inkskin said:
If you were NOT to sample the same field, and if you were to look at the 2 detectors for joint detection, then <:n^2:> wouldn't become <n(n-1)>, right? Because the mean detection rate at each detector is still n.

Yes, but the HBT-effect is about autocorrelation. If you take two completely independent light fields - no interaction with the same system, not originating from the same source, not subject to the same noise sources - nothing will happen.

inkskin said:
Also, if the value of the correlation function is 2, why does this imply correlation?

g2 is a direct measure of deviations from statistical independence. If you take the time resolved version g2(tau), you can directly interpret it as the relative probability to detect a second photon at time tau if you already detected one at time zero. This relative probability is normalized to the probability you would get for equivalent light fields of the same intensity and statistically independent photon detection events.

So by definition, a value of g2=1 means that you have no correlations. Any deviation means you will get correlations (g2(tau)>1) or anticorrelations (g2(tau)<1). g2>1 just means that it is more probable to detect a second photon at a time delay tau. Consider some kind of cascaded decay that starts randomly, where some system will go from an excited state to an intermediate state and from there down to the ground state. Let us assume that it will emit a photon of the same energy during both processes.

As the process starts randomly, when repeating it many times you will get some evenly distributed mean intensity over time. However, the intermediate state will have some finite lifetime, so the second photon will have a rather fixed delay compared to the first one. If you measure g2, you will find that it has a pretty large value at a delay tau that corresponds to the lifetime of that intermediate state.

Or consider a light field that is incredibly noisy. It has some mean intensity, but the photon number is fluctuating by huge amounts. In that case, if you detect a photon now, the probability that the instantaneous photon number is way above the mean photon number is large and the probability to detect a second photon at short delays will be huge, just because you are still way above the mean photon number. This will give a light field with g2>1 for small delays tau.
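
For what it's worth, here is a rough sketch (my own, with made-up names, assuming you already have two lists of detection timestamps) of how ##g^{(2)}(\tau)## is typically estimated from coincidence counting: histogram the pair delays and normalize to the accidental rate expected for independent detections.

[code]
import numpy as np

def g2_of_tau(t1, t2, bin_width, max_tau, total_time):
    """Estimate g2(tau) from two arrays of detection timestamps (same time units).

    Pair delays t2 - t1 are histogrammed and normalized by the coincidences
    expected for statistically independent detections: r1 * r2 * bin_width * T.
    (Naive O(N1*N2) loop, edge effects ignored - this is only a sketch.)
    """
    t1, t2 = np.asarray(t1), np.asarray(t2)
    edges = np.arange(-max_tau, max_tau + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)

    for t in t1:
        dt = t2 - t
        counts += np.histogram(dt[np.abs(dt) <= max_tau], bins=edges)[0]

    r1 = len(t1) / total_time                      # mean count rate, detector 1
    r2 = len(t2) / total_time                      # mean count rate, detector 2
    accidental = r1 * r2 * bin_width * total_time  # expected counts per bin
    taus = 0.5 * (edges[:-1] + edges[1:])
    return taus, counts / accidental
[/code]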
 
  • #9
Thanks a lot. Lastly, I must ask: the quantum interpretation of this is still not very clear to me. I have read a couple of things about 2 sources and 2 detectors, and the probability of the photon at detector 2 having been released by source 1 or source 2, or a superposition of the two. But is there any concrete way to show this with operators? How does that work?
 
  • #10
inkskin said:
Thanks a lot. Lastly, I must ask: the quantum interpretation of this is still not very clear to me. I have read a couple of things about 2 sources and 2 detectors, and the probability of the photon at detector 2 having been released by source 1 or source 2, or a superposition of the two. But is there any concrete way to show this with operators? How does that work?

I am not exactly sure what you are asking here. Do you want some deeper understanding on why Bosons and Fermions give different results? This is a consequence of the spin-statistics theorem (http://en.wikipedia.org/wiki/Spin–statistics_theorem) and the symmetry/antisymmetry of many-body wavefunctions.

Or do you rather want a more intuitive picture of the double detection process? In that case, things are similar to standard QM problems. If you want to calculate the probability for some process happening, you add all the indistinguishable probability amplitudes leading to the same result and square the sum to get the probability density (e.g. if there is no way at all to find out whether a photon was emitted by source 1 or 2). Afterwards you add the probabilities (not the amplitudes) for distinguishable events leading to the same final result (e.g. if you know whether a photon was emitted from source 1 or 2).

This is not too different from the double slit. If you know which slit a particle went through, you will not get interference. Only if that information is not available will you get interference terms. The HBT effect is similar, but works "one order higher": instead of interference in the fields, you get interference in the intensities. The quantum interpretation of the HBT effect is given in this paper: http://scitation.aip.org/content/aapt/journal/ajp/29/8/10.1119/1.1937827 (U. Fano, Am. J. Phys. 29, 539 (1961)), but it involves quite a bit of math. I am not sure whether it is appropriate for your level of physics education or not.
 
  • #11
Cthugha said:
Yes, but the HBT-effect is about autocorrelation. If you take two completely independent light fields - no interaction with the same system, not originating from the same source, not subject to the same noise sources - nothing will happen.
So the n here represents the number of photon counts in each arm, am I right? The joint detection rate depends upon the ensemble average of the detections at each arm, <n^2>. Say one photon has been detected at detector 1; given this, within a small time tau we are looking at detector 2, which has a probability of any one of the n photons being detected.

So it should still be <n^2>, right? The first detection is irrelevant now, and hence doesn't have to be accounted for. It only acts as a 'timer' of sorts.
 
  • #12
Also, you suggested using a pseudothermal Martienssen lamp. Why this? Why won't light from any thermal source suffice to display correlations? Is noise the problem?
 
  • #13
inkskin said:
So the n here represents the number of photon counts in each arm, am I right? The joint detection rate depends upon the ensemble average of the detections at each arm, <n^2>. Say one photon has been detected at detector 1; given this, within a small time tau we are looking at detector 2, which has a probability of any one of the n photons being detected.

n is the instantaneous photon number without any averaging. It will fluctuate over time, maybe even strongly so. So when you have 20 photons and one gets detected, you will have 19 left to detect on the other side. When you have only one photon and it gets detected, you will have none left on the other side. What you do now is average correctly: you take the photon number distribution of your light field and, for each possible photon number, calculate the probability for that photon number to occur, then the probability of a photon being detected at detector one, consider the remaining photons, and then the probability that another one gets detected at detector two.
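
As a small illustration of that averaging (my own sketch, not from the post), one can do it exactly on a given photon number distribution P(n); the n(n-1) factor is precisely the "one photon already removed" bookkeeping described above.

[code]
import numpy as np

def g2_from_distribution(p):
    """g2 = sum_n P(n) n(n-1) / (sum_n P(n) n)^2 for a distribution over n = 0, 1, 2, ..."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                  # make sure the distribution is normalized
    n = np.arange(len(p))
    return np.sum(p * n * (n - 1)) / np.sum(p * n) ** 2

# Example: thermal (Bose-Einstein) distribution with mean 2, truncated at n = 200
m = 2.0
n = np.arange(200)
p_thermal = (m / (1 + m)) ** n / (1 + m)
print(g2_from_distribution(p_thermal))   # ~ 2
[/code]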

inkskin said:
So it should still be <n^2>, right? The first detection is irrelevant now, and hence doesn't have to be accounted for. It only acts as a 'timer' of sorts.

It does not only act as a timer. Photon detection is a statistical process and the probability that one actually gets detected may be somewhat low. So the probability that a first photon gets detected at all is significantly higher if the instantaneous photon number right now is high. Therefore, the first detection means that you have a rather high probability of having a large photon number right now.

inkskin said:
Also, you suggested using a pseudothermal Martienssen lamp. Why this? Why won't light from any thermal source suffice to display correlations? Is noise the problem?

Oh, it will display correlations, but these will vanish on a timescale on the order of the coherence time. For light from the sun this means 100 femtoseconds or so. Typical semiconductor thermal light sources are somewhere in the picosecond range. The typical time resolution of good photo diodes is in the nanosecond range. Trying to measure the correlations that way is like trying to measure the thickness of a hair using a standard ruler. It will not work. You can use an atom based laser (not a semiconductor laser) below threshold, but I doubt you have one available. The Martienssen lamp is cheap and convenient.
 
  • #14
Sorry for the delayed response. I was out of town. Thank you for all this. You've been a HUGE help in understanding this.

Why is it that I cannot attenuate a laser enough to generate single photons? I understand that it's a quantum state, but if I were to space- and time-gate a tube light, then in principle I should be able to!
 
  • #15
inkskin said:
Why is it that I cannot attenuate a laser enough to generate single photons? I understand that it's a quantum state, but if I were to space- and time-gate a tube light, then in principle I should be able to!

For a laser the exact photon number in any time interval is not exactly defined, and dimming the intensity is not a deterministic but a stochastic process.

Let us say for simplicity that you start with around 1000000 photons. The photon number fluctuations will be on the order of the square root of that. Now attenuation is a process similar to putting a beam splitter in. For each photon in the beam you get a certain probability that it will be removed from the beam. You can repeat this up to the point that you will have only one photon or less on average, but due to the stochastic nature of attenuation, some finite probability of having more than one photon present will necessarily remain.
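
A minimal sketch of this point (my own illustration, assuming attenuation acts as independent random removal of each photon, i.e. binomial thinning): even when the mean is pushed down to one photon, the probability of having two or more photons does not vanish, and g2 stays at 1.

[code]
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

# Start with coherent (Poissonian) light, roughly 10^6 photons per time bin
n_in = rng.poisson(1e6, N)

# Attenuate: each photon survives independently with probability T
T = 1e-6                        # chosen so the mean output is about 1 photon
n_out = rng.binomial(n_in, T)

print("mean photon number:", n_out.mean())                       # ~ 1
print("P(n >= 2):", np.mean(n_out >= 2))                         # ~ 0.26, not 0
print("g2:", np.mean(n_out * (n_out - 1)) / n_out.mean() ** 2)   # stays ~ 1
[/code]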

For "real" single photons you need a non-linearity. Take a single atom. It has one excited state and one ground state, so it saturates at an excitation number of one. If it emits a photon it returns to the ground state and you need some time to pump it back to the excited state again. During this time no second photon can be emitted. So here the blockade ensures that no second photon can be emitted in terms of a deterministic process, not just a stochastic one.
 
  • #16
Okay, I understand that. But if I were to hypothetically gate an already attenuated source, from, say, a tube light, and gate it in time and space such that I allow exactly one quantum state, one photon. Then that would be it, right? It may not be physically possible, but theoretically!
 
  • #17
Well, you would need something like an "active gate". As you never know how many photons are present, just opening and closing a gate for some amount of time will not help. So you need a non-linear gate: something that closes if more than one photon is present and transmits the light if only one photon is present.

People are working on that. Consider for example a cavity which transmits in a narrow spectral window. If the refractive index of the material inside the cavity changes, the transmission window will change, too. The refractive index depends on intensity, so you can create a gate that transmits small photon numbers, but not large ones. Now if you find some system where even the difference between one and two photons present induces such a shift, you have your non-linear gate. People have done similar stuff, for example in the group of Lukin. See "Quantum nonlinear optics with single photons enabled by strongly interacting atoms", Nature 488, 57–60 (02 August 2012). http://www.nature.com/nature/journal/v488/n7409/full/nature11361.html

There might be a free version available on the ArXiv in case you are interested and do not have a subscription.
 
  • #18
Cthugha said:
Now you can evaluate all these terms. The mean value of the deviation should vanish. The expectation value of the square of the deviation survives. This is the variance of the photon number distribution. That leaves us with three surviving terms:

[tex]g^{(2)}=\frac{\langle n \rangle^2 +\langle \delta^2 \rangle- \langle n\rangle }{\langle n \rangle^2}=1+\frac{\langle \delta^2 \rangle}{\langle n \rangle^2}-\frac{1}{\langle n \rangle}.[/tex]

Now one can perform a sanity check and evaluate this result for three typical states of the light field:

What if we send an equal superposition of the 0 and 1 photon states? ##\delta## is 0.5 then, and so is <n>. We end up with g2 = -1. On the other hand it seems obvious that we should get g2 = 1, since there is no possible cross-correlation.
 
  • #19
YuryM said:
What if we send an equal superposition of the 0 and 1 photon states? ##\delta## is 0.5 then, and so is <n>. We end up with g2 = -1. On the other hand it seems obvious that we should get g2 = 1, since there is no possible cross-correlation.

No, you do not end up with -1; g2 cannot even become smaller than 0. The first term trivially gives a +1 all the time. As you noted, the expectation value of the absolute value of delta and the mean photon number are the same, so the second term is also +1. The third term is just the inverse mean photon number, which gives a -2 here. So in sum you get 0. This is trivially the result you will get for any mixture containing only 0-photon and 1-photon Fock states, as one will never detect photon pairs from such a light field.
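
Written out with the three-term formula quoted above (here ##\langle n \rangle = 1/2## and ##\langle \delta^2 \rangle = 1/4##):

[tex]g^{(2)} = 1 + \frac{\langle \delta^2 \rangle}{\langle n \rangle^2} - \frac{1}{\langle n \rangle} = 1 + \frac{1/4}{1/4} - \frac{1}{1/2} = 0,[/tex]

consistent with ##\langle n(n-1) \rangle = 0## for any state with at most one photon.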
 
  • #20
Cthugha said:
No, you do not end up with -1; g2 cannot even become smaller than 0. The first term trivially gives a +1 all the time. As you noted, the expectation value of the absolute value of delta and the mean photon number are the same, so the second term is also +1. The third term is just the inverse mean photon number, which gives a -2 here. So in sum you get 0. This is trivially the result you will get for any mixture containing only 0-photon and 1-photon Fock states, as one will never detect photon pairs from such a light field.

Thank you for the quick response.
Oops, of course, 0. Or perfect anti-correlation. Is this what it should be? Sure, one never detects photon pairs, and when one detector shows 1 the other shows 0, but if one shows 0, the other is not necessarily 1. Sounds like partial anti-correlation.

Actually, I am struggling with the counter-intuitive (to me) fact that a half-mirror does not change g_2 when splitting thermal light at very low photon counts. If one exposes two detectors to the same beam, or if the beam is split and each half falls onto its own detector, g_2 is the same. I understand that a mirror never changes the photon number distribution or its autocorrelations; however, the fact that it does not change the cross-correlation is strange to me.

Another question - what determines g_2 in the HBT experiment? The photon number distribution is not sufficient, is it? For example, if I feed a thermal (i.e. exponential, aka geometric) distribution into a "mirror simulator" (each photon goes either left or right, wave properties are ignored), I get a g2 which goes to 0 as <n> -> 0. Am I making a mistake in my simulations somewhere? Apart from treating photons as particles and forgetting entirely about the wave part of quantum mechanics?
 
  • #21
YuryM said:
Thank you for the quick response.
Oops, of course, 0. Or perfect anti-correlation. Is this what it should be? Sure, one never detects photon pairs, and when one detector shows 1 the other shows 0, but if one shows 0, the other is not necessarily 1. Sounds like partial anti-correlation.

Well, you need to consider what one can express in qm and what we are talking about. There is an operator that describes detecting (and destroying) a photon, but there is of course no operator for no photon being detected, so one can only derive mathematical expressions for the former case. So loosely speaking, an intuitive picture of the meaning of g2 is: If I detect a photon now, what is the probability of detecting a second photon at some time delay tau compared to the same probability distribution for completely independent photon detection events at the same mean photon number. So no detection of any photon pairs indeed implies g2=0.

YuryM said:
Actually, I am struggling with the counter-intuitive (to me) fact that a half-mirror does not change g_2 when splitting thermal light at very low photon counts. If one exposes two detectors to the same beam, or if the beam is split and each half falls onto its own detector, g_2 is the same. I understand that a mirror never changes the photon number distribution or its autocorrelations; however, the fact that it does not change the cross-correlation is strange to me.

Indeed the half-mirror does not change g2 if you use classical light and just one input port. It can change cross-correlations for non-classical light when using several input ports, though. Consider for example Hong-Ou-Mandel interference, where two n=1 Fock states arrive at the two input ports of the beam splitter and one n=2 state exits from either of the two output ports - you will never find one photon leaving from each output port.
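
For reference, with one standard phase convention the Hong-Ou-Mandel output state for two single photons on a 50/50 beam splitter is

[tex]|1\rangle_a |1\rangle_b \rightarrow \frac{1}{\sqrt{2}}\left( |2\rangle_c |0\rangle_d - |0\rangle_c |2\rangle_d \right),[/tex]

so both photons always leave through the same (random) output port and the cross-correlation between the two ports vanishes.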

YuryM said:
Another question - what determines g_2 in the HBT experiment? The photon number distribution is not sufficient, is it? For example, if I feed a thermal (i.e. exponential, aka geometric) distribution into a "mirror simulator" (each photon goes either left or right, wave properties are ignored), I get a g2 which goes to 0 as <n> -> 0. Am I making a mistake in my simulations somewhere? Apart from treating photons as particles and forgetting entirely about the wave part of quantum mechanics?

The photon number distribution is fully sufficient. This is easy to see if you model a light source which fires pulses with photon numbers as given by the thermal photon number distribution. It will be this large fluctuation in the initial photon number which gives you the large g2, if you do the math correctly. However, there are several possibilities to get the math wrong. It is hard to estimate where the problem is without seeing the calculations.

If you have a look at the three terms of g2 which you cited above, you will find that the first term is 1 anyway and the third term will go to zero for large n. Now the second term is essentially the variance of the photon number distribution compared to the squared mean. For a thermal distribution the variance is <n>^2 + <n>, so terms 2 and 3 will necessarily sum up to 1 for any mean photon number.
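
Here is a minimal sketch of the kind of pulsed model described above (my own illustration): draw the photon number of each pulse from the thermal distribution, route every photon randomly to one of the two detectors, and compare the cross-correlation to the normal-ordered autocorrelation.

[code]
import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000    # number of pulses
mean_n = 2.0     # arbitrary mean photon number per pulse

# Thermal (Bose-Einstein) photon number per pulse
p = 1.0 / (1.0 + mean_n)
n = rng.geometric(p, N) - 1

# 50/50 beam splitter as independent random routing of each photon
n1 = rng.binomial(n, 0.5)
n2 = n - n1

print("cross-correlation <n1 n2>/(<n1><n2>):",
      np.mean(n1 * n2) / (np.mean(n1) * np.mean(n2)))          # ~ 2
print("normal-ordered autocorrelation <n(n-1)>/<n>^2:",
      np.mean(n * (n - 1)) / np.mean(n) ** 2)                   # ~ 2
[/code]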
 
  • #22
Cthugha said:
Well, you need to consider what one can express in qm and what we are talking about. There is an operator that describes detecting (and destroying) a photon, but there is of course no operator for no photon being detected, so one can only derive mathematical expressions for the former case. So loosely speaking, an intuitive picture of the meaning of g2 is: If I detect a photon now, what is the probability of detecting a second photon at some time delay tau compared to the same probability distribution for completely independent photon detection events at the same mean photon number. So no detection of any photon pairs indeed implies g2=0.

I use a simple formula: each detector measures the number of photons that arrive during its response time, n_i. Then I calculate ##g_2=\frac{<(n_1-<n_1>) (n_2-<n_2>)>}{\sqrt{<(n_1-<n_1>)^2> <(n_2-<n_2>)^2>}}##

Cthugha said:
The photon number distribution is fully sufficient. This is easy to see if you model a light source which fires pulses with photon numbers as given by the thermal photon number distribution. It will be this large fluctuation in the initial photon number which gives you the large g2, if you do the math correctly. However, there are several possibilities to get the math wrong. It is hard to estimate where the problem is without seeing the calculations.

If you have a look at the three terms of g2 which you cited above, you will find that the first term is 1 anyway and the third term will go to zero for large n. Now the second term is essentially the variance of the photon number distribution compared to the squared mean. For a thermal distribution the variance is <n>^2 + <n>, so terms 2 and 3 will necessarily sum up to 1 for any mean photon number.

OK. But does one have to take into account the interference of the wave functions of the two photons? In the beginning I thought that the only effect of QM is the granularity of light, and that one can observe the same correlations with classical particles. If I am not mistaken, this is not true. However, some bunching does occur for exponentially distributed particles.
 
  • #23
YuryM said:
I use a simple formula: each detector measures the number of photons that arrive during its response time, n_i. Then I calculate ##g_2=\frac{<(n_1-<n_1>) (n_2-<n_2>)>}{\sqrt{<(n_1-<n_1>)^2> <(n_2-<n_2>)^2>}}##

Hmmm, but why would this work? I am not really convinced.

YuryM said:
OK. But does one have to take into account the interference of the wave functions of the two photons? In the beginning I thought that the only effect of QM is the granularity of light, and that one can observe the same correlations with classical particles. If I am not mistaken, this is not true. However, some bunching does occur for exponentially distributed particles.

Yes, there are a few cases where one needs to take that into account. However, that is all about non-classical light fields. Thousands of people have checked the physics for thermal light and everybody ended up with the same results.
 
  • #24
Cthugha said:
Hmmm, but why would this work? I am not really convinced.

You are right. It would not. In reality the denominator is just for gain calibration and is later replaced by the measured total power. I had forgotten about this in the simulations. With <n_1><n_2> in the denominator, as it should be, g_2 for thermal light does not depend on <n>.

Cthugha said:
Yes, there are a few cases where one needs to take that into account. However, that is all about non-classical light fields. Thousands of people have checked the physics for thermal light and everybody ended up with the same results.

Now I am lost. If this works for classical particles with the proper number distribution, why do people (Fano, for example) bother with the interference explanation of bunching?
 
  • #25
YuryM said:
You are right. It would not. In reality the denominator is just for gain calibration and is later replaced by the measured total power. I had forgotten about this in the simulations. With <n_1><n_2> in the denominator, as it should be, g_2 for thermal light does not depend on <n>.

Ah, okay. I see. Do you get reasonable results this way?

YuryM said:
Now I am lost. If this works for classical particles with the proper number distribution, why do people (Fano, for example) bother with the interference explanation of bunching?

Do you mean Fano's 1961 paper? The interference explanation is of course the fundamental one. You need it to describe the total photon emission process and derive the photon number distribution correctly. It is of course also the most intuitive picture for Fock states. However, if you have the right distribution, this means that interference is automatically included. If that was not the case, you would run into problems describing different light sources. For example light from a light bulb will show bunching if it is filtered spectrally, but laser light will never show bunching. This is of course not a consequence of interference being switched off for laser light.
 
  • #26
Cthugha said:
Ah, okay. I see. Do you get reasonable results this way?

Sorry for the delay. I did not know quite what to reply. Now I do (I think). Not quite: the problem was that the calibration signal was polluted by post-detector noise, and the calibration assumed thermal noise. Knowing this, I will change the receiver/detector channel and/or fit to thermal + Gaussian noise (the old data fit perfectly).
Cthugha said:
Do you mean Fano's 1961 paper? The interference explanation is of course the fundamental one. You need it to describe the total photon emission process and derive the photon number distribution correctly. It is of course also the most intuitive picture for Fock states. However, if you have the right distribution, this means that interference is automatically included. If that was not the case, you would run into problems describing different light sources. For example light from a light bulb will show bunching if it is filtered spectrally, but laser light will never show bunching. This is of course not a consequence of interference being switched off for laser light.

That and others. OK, now I understand: interference _creates_ the exponential distribution for chaotic light. Thank you again.

One final (or rather another final) puzzle. A semi-transparent mirror conserves g2. However, in simulations I found that this is only true for g2 = <n(n-1)>/<n>^2 in the single-channel "experiment". I.e., if I calculate g2 for a single beam according to this formula, then send it to a mirror "simulator" and then calculate <n3 n4>/(<n3><n4>), the results are identical. Does this mean that a divider (a QPC, or a demon at a gate) does not conserve g2 for "real" particles, such as electrons or steel balls, which can be detected non-destructively? For those, g2 = <n^2>/<n>^2, is it not?
 
  • #27
YuryM said:
One final (or rather another final) puzzle. A semi-transparent mirror conserves g2. However, in simulations I found that this is only true for g2 = <n(n-1)>/<n>^2 in the single-channel "experiment". I.e., if I calculate g2 for a single beam according to this formula, then send it to a mirror "simulator" and then calculate <n3 n4>/(<n3><n4>), the results are identical. Does this mean that a divider (a QPC, or a demon at a gate) does not conserve g2 for "real" particles, such as electrons or steel balls, which can be detected non-destructively? For those, g2 = <n^2>/<n>^2, is it not?

There are several points here. First, as a disclaimer, electrons are fermions and as such will show a different g2 anyway, but I guess you are aware of that. However, the statistics still apply somewhat. There was a study showing that He3 and He4 behave differently (antibunching and bunching, respectively), because one is a composite fermion and the other a composite boson. I can look it up if you like.

Second, yes, the way g2 is defined for photons directly takes into account that the first detection of a photon may change the light field. This is of course necessary for massless particles, for which there are no wavefunctions, but one can treat things differently for massive particles. Mathematically speaking, this corresponds to the ordering of the operators in the four-operator term in g2. For photons one uses normal ordering, which means that all creation operators are placed on the left and the annihilation operators on the right. One has to change this when going to particles which can be measured repeatedly. However, one has to pay more attention to the experimental setup here. For example, for a single atom sent at an atom beam splitter (whatever that may be), the cross-correlation between the two output ports would still yield g2=0, like in the case of photons, because the atom cannot be in both output ports simultaneously. The autocorrelation at different times at a single output port will, however, be non-zero, which is different from the photonic case.
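
To connect this back to the beam-splitter question above: in the simple picture where each of the ##n## incoming particles is routed independently to output 3 with probability ##T## and to output 4 otherwise (the "mirror simulator" picture), one finds

[tex]\langle n_3 n_4 \rangle = T(1-T)\,\langle n(n-1) \rangle, \qquad \langle n_3 \rangle \langle n_4 \rangle = T(1-T)\,\langle n \rangle^2,[/tex]

so the cross-correlation ##\langle n_3 n_4 \rangle/(\langle n_3 \rangle \langle n_4 \rangle)## reproduces the normal-ordered ##\langle n(n-1) \rangle / \langle n \rangle^2## for any splitting ratio, which is why the half-mirror leaves ##g^{(2)}## unchanged.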
 
  • #28
Yes, this is a good example with a single atom, thank you. It makes it quite obvious that g2 is not conserved by an atom beam splitter.
 
  • #29
Cthugha said:
Yes, but the HBT-effect is about autocorrelation. If you take two completely independent light fields - no interaction with the same system, not originating from the same source, not subject to the same noise sources - nothing will happen.

How would HBT type correlations work for a binary star? Would the two stars count as independent light fields?
 

FAQ: Hanbury Brown and Twiss effect explanation

1. What is the Hanbury Brown and Twiss effect?

The Hanbury Brown and Twiss effect, also known as the HBT effect, is a phenomenon in quantum optics in which the detection times of photons emitted from the same source are correlated (an intensity correlation).

2. How does the HBT effect work?

The HBT effect is measured by correlating the intensities (photon detection events) recorded at two detectors. The correlations can be understood classically in terms of intensity fluctuations of the light, or quantum mechanically in terms of interference between indistinguishable two-photon detection amplitudes, which can be constructive (bunching for bosons) or destructive (antibunching for fermions).

3. What is the significance of the HBT effect in scientific research?

The HBT effect has been used in various experiments to study the properties of light, such as its coherence and photon statistics. It has also been used to measure the angular sizes of stars via intensity interferometry, as well as to investigate the statistics of particles in quantum systems.

4. How was the HBT effect first discovered?

The HBT effect was first demonstrated in 1956 by Robert Hanbury Brown and Richard Twiss, who were developing intensity interferometry to measure the angular diameters of stars. They showed that the intensity fluctuations recorded by two separate detectors looking at the same thermal source are correlated, first with a laboratory light source and then with starlight from Sirius.

5. What are some potential applications of the HBT effect?

The HBT effect has potential applications in quantum computing and communication, as well as in imaging and sensing technologies. An HBT setup is also the standard tool for characterizing single-photon sources, which are crucial for secure communication and quantum cryptography.
