For this question I am considering a slit diffraction experiment set up as follows:
{Monochromatic source} ------> {Single slit} ------> {Diffraction grating with [itex]N[/itex] slits} ------> {Screen with small movable detector}
The monochromatic light source emits photons one at a time. The principal interference maximum occurs at position [itex]x=0[/itex] on the screen. The detector is placed at a point [itex]x \ne 0[/itex] on the screen where the probability of detecting a photon is nevertheless non-zero, and it registers every photon that arrives between positions [itex]x[/itex] and [itex]x + \Delta x[/itex].
Photons are emitted one by one at a slow rate. Each time a photon is emitted, a stopwatch is started. If the photon is detected at the detector, the stopwatch is stopped and that time measurement, [itex]T[/itex], is logged. If the photon is not detected, no measurement is recorded and the run is repeated with a new photon.
The experiment is repeated many times, and finally the logged values are plotted as a probability distribution of [itex]T[/itex]. (I presume this distribution will be approximately Gaussian in shape, although its exact shape is not important here.) It will be centred on some mean value, [itex]T_{mean}[/itex].
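To make the bookkeeping concrete, here is a minimal toy Monte Carlo (Python) of the procedure above. Everything in it is an illustrative assumption, not a claim about the physics: it posits a fixed nominal flight time [itex]T_0 = L/c[/itex], Gaussian detector timing jitter, and a fixed detection probability at [itex]x[/itex]. Whether [itex]T_{mean}[/itex] really depends on [itex]N[/itex] is exactly what I am asking, so the sketch only shows how the histogram of [itex]T[/itex] is assembled.
[code]
import numpy as np

# Toy Monte Carlo of the timing procedure described above.
# All numbers below are illustrative assumptions:
#   - every detected photon is assigned the same nominal flight time T0 = L/c
#   - detector/electronics timing jitter is Gaussian with spread sigma_t
#   - the detector at x fires with a fixed probability p_detect (which in
#     reality would be set by the interference pattern for the chosen N)

c = 3.0e8             # speed of light (m/s)
L = 1.0               # source-to-screen distance (m)
T0 = L / c            # nominal flight time (s)
sigma_t = 1e-11       # assumed timing jitter (s)
p_detect = 0.05       # assumed detection probability at this x
n_photons = 100_000   # photons emitted, one at a time

rng = np.random.default_rng(0)

# Keep a stopwatch reading only for photons that are actually detected;
# undetected photons contribute nothing, exactly as in the protocol above.
detected = rng.random(n_photons) < p_detect
T = T0 + rng.normal(0.0, sigma_t, size=detected.sum())

print(f"detected {detected.sum()} of {n_photons} photons")
print(f"T_mean = {T.mean():.4e} s   (nominal T0 = {T0:.4e} s)")
[/code]
By construction this toy model gives the same [itex]T_{mean}[/itex] for every [itex]N[/itex]; the question is whether the real quantum-mechanical answer agrees.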
Suppose that the experiment is run three times with different numbers of slits:
(i) [itex]N=1[/itex]
(ii) [itex]N=2[/itex]
(iii) [itex]N \to \infty[/itex]
My question: will [itex]T_{mean}[/itex] differ between these three cases, and if so, how?
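For reference, the standard textbook far-field result for a grating of [itex]N[/itex] slits of width [itex]a[/itex] and spacing [itex]d[/itex] (stated here just to make the [itex]N[/itex]-dependence of the pattern explicit) is
[tex]I(\theta) = I_0 \left( \frac{\sin \beta}{\beta} \right)^2 \left( \frac{\sin (N\delta/2)}{\sin (\delta/2)} \right)^2, \qquad \beta = \frac{\pi a}{\lambda} \sin\theta, \quad \delta = \frac{2\pi d}{\lambda} \sin\theta,[/tex]
so changing [itex]N[/itex] certainly changes the detection probability at a fixed [itex]x[/itex]; what is not obvious to me is whether it changes the timing statistics.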
(This is a stripped-down version of a longer question I posted a few days ago, https://www.physicsforums.com/showthread.php?p=2695689#post2695689.)