Diffraction Effects and Artifacts in Telescopes like the JWST

In summary: I believe we are in agreement on this. There was a question about diffraction artifacts, and an example of a star in the image showing those artifacts. Those hexagonal artifacts (as well as any diffraction spikes) have a different cause than the actual dust rings. That's all I meant to say.
  • #71
Two stars in the image have been manipulated to have equal spike lengths, but other stars in the field show none at all! Regards Andrew
 
  • #72
andrew s 1905 said:
Two stars in the image have been manipulated to have equal spike lengths, but other stars in the field show none at all! Regards Andrew
The image was manipulated so they have roughly equal brightness (even though they originally were not). The adjustment was done to make it an apples-to-apples comparison. The equal sizes naturally fell out of that.

Here is the original image before the stars were adjusted for equal brightness. (But without adjusting for brightness, this is an apples-to-oranges comparison.)
d58a53fd-47f0-46ae-b2a1-16434fd63c18-jpeg.jpg

[Edit: I forgot to mention the credit: Image is courtesy of a post by @Devin-M ]
 
  • #73
collinsmark said:
How might you suggest informing astronomers that use JWST, Hubble (HST), and pretty much any telescope around the world, that their stacking algorithms -- algorithms that they've been using for decades -- are all failures? Do you propose invalidating the countless academic papers that relied on astronomical data that invariably was produced, in part, using the same general mathematical principles and theorems discussed here?
This is a perfect straw man argument. "Failures"? I am simply pointing out that the world of digital (discrete) sampling is inherently non-linear and that, for example, stacking using the median value of samples introduces more non-linearity. (Did I suggest that's a bad thing?) If you use any well-known amateur astro software (say, Nebulosity) there are several options for stacking algorithms. One of them is based on the median of the pixel values. (I could ask you whether you know what stacking is.)
collinsmark said:
this is an apples-to-oranges comparison
You would deny that it's a so-called fair test if we are discussing visibility?
If you could only afford a four-bit ADC for the sensor, would you still be able to tinker with the two different parts of the image and squeeze out two equal spike lengths? Once you fall below the least significant bit, you have lost that information forever.
collinsmark said:
Where does the math fail?
The above shows failure - that's just a practical issue that the basic maths doesn't consider. Is the word 'failure' too judgmental for you? I can't think of a better description for what happens in practice.
 
  • #74
In this JWST image, the dimmer star's diffraction spikes (circled in red) are the same length as the brighter star's diffraction spikes, because both extend all the way to the edge of the image frame:

a_00001_0-2-copy-jpg.jpg


With a long enough integration time this will visibly and detectably be the case for every single star in the image, demonstrating that the spikes are the same length but different brightness.

These spikes are caused by edges and obstructions in the optical train, not by the sensor or the star. See below: I used wires in front of the lens to create a six-spike diffraction pattern.

Devin-M said:
f6f48fdb-3ca4-437c-892f-86357c297f51-jpeg.jpg

d43f88c3-c435-4a9e-a4b5-0240255511bb-jpeg.jpg
 
  • #75
In this image I took of two stars of different brightness, adjusted to have equal image intensity (in the first image), we're only seeing the first three orders of diffraction on both stars. With equal image intensity, both have the same shape, size, and appearance despite having different apparent brightness.

60a8c309-ca23-46a9-9f9a-051e4c3f1f85-jpeg-jpg.jpg

d58a53fd-47f0-46ae-b2a1-16434fd63c18-jpeg-jpg.jpg

This diffraction / interference pattern was created by placing a window screen in front of the lens...
30D894CA-5099-420D-A8B4-200B11B7EDD4.jpeg


On the other hand, here is a laser passing through a pinhole (a pinhole which is equivalent to a telescope's aperture), and we're seeing no fewer than 27 orders of diffraction…
10CFC89E-0C80-4733-9425-F26BF1378B35.jpeg

https://en.m.wikipedia.org/wiki/Airy_disk
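For anyone who wants to put numbers on where those rings fall, here's a minimal sketch (Python) of the Airy minima for a circular aperture. The wavelength and pinhole diameter below are assumptions for illustration, not measurements from the photo above.

[code]
import numpy as np
from scipy.special import jn_zeros

# Dark rings of an Airy pattern: the m-th minimum satisfies
# sin(theta_m) = z_m * wavelength / (pi * D), where z_m is the m-th zero of J1.
wavelength = 650e-9   # metres -- assumed red laser line, not measured
aperture_d = 0.5e-3   # metres -- hypothetical pinhole diameter

zeros = jn_zeros(1, 5)                                        # first five zeros of J1
theta = np.arcsin(zeros * wavelength / (np.pi * aperture_d))  # ring radii in radians
print(np.degrees(theta))   # first minimum at ~1.22*wavelength/D, as in the Airy disk article
[/code]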
 
  • #76
Also, due to shot noise, the diffraction pattern doesn't form from the inside out, but rather across the entire image frame, somewhat randomly...

SlitAndOne.jpg

220px-Double_slit_interference.png

Devin-M said:
5f3469c5-9ce0-4e27-a3eb-0eb616cec19c-jpeg.jpg

8c6d1092-2572-4490-b30e-b1ac2af22337-jpeg.jpg

418b7ad4-93e8-4a1d-a786-8ae000b91975-jpeg.jpg
 
  • #77
sophiecentaur said:
I am simply pointing out that the world of digital (discrete) sampling is inherently non-linear

No, it is not nonlinear when the input to the system is already discrete, such as discrete photons. We're not sampling some continuous, analog signal here. We're merely counting photons. That can be done linearly.

sophiecentaur said:
and that, for example, stacking using the median value of samples introduces more non-linearity. (Did I suggest that's a bad thing?) If you use any well-known amateur astro software (say, Nebulosity) there are several options for stacking algorithms. One of them is based on the median of the pixel values. (I could ask you whether you know what stacking is.)

Using the median is not a good way of stacking the bulk of the data if you care about the fine detail of the signal, though it has its niche applications. A median combine is a good way of quickly wiping out noise if you don't mind taking out the fine minutiae of the signal along with it. So there are some fringe purposes for using the median (quick noise reduction where the signal detail isn't a big concern), but it's not typical.

That said, in a typical stacking algorithm, median values are commonly used only for bad pixel/cosmic ray detection and elimination. Statistical outliers relative to the median (rather than the mean) in a particular subframe are replaced with median values, or simply rejected from the stacking altogether, for that subframe. So the median is important in that way (but only for cosmic ray/bad pixel rejection -- otherwise the mean is used).

Using the median as a reference for pixel rejection, followed by using the mean as the end result of the stacking, is a very good way of eliminating bad pixels and cosmic rays while maintaining the statistical minutiae of the signal.

And yes, before you ask, this bad pixel/cosmic ray rejection is nonlinear. That's the one aspect of this that's nonlinear. But cosmic rays and bad pixels are not stationary (statistically) anyway, and almost certainly not part of the signal, so it's a good idea to get rid of them, linearity be damned. But only a tiny proportion of pixels are rejected in this way, so it doesn't negatively affect the final image all that much.

And keep in mind, without stacking, you couldn't even do that. There are no statistical outliers between subframes with only a single subframe. You'd be stuck with that bad pixel or cosmic ray. Trying to eliminate cosmic rays and bad pixels with only a single subframe is not nearly as robust.
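To make that concrete, here's a minimal sketch (Python/NumPy, not any particular package's implementation) of median-referenced rejection followed by a mean combine, assuming the subframes are already registered and stacked into one 3-D array. The threshold k and the MAD-based sigma estimate are illustrative choices.

[code]
import numpy as np

def stack_subframes(subframes, k=5.0):
    """Median-referenced outlier rejection, then a mean combine.

    subframes : array of shape (n_frames, height, width), already aligned.
    k         : rejection threshold in units of the per-pixel robust sigma.
    """
    subframes = np.asarray(subframes, dtype=float)

    # Per-pixel median across the stack is the reference for rejection.
    median = np.median(subframes, axis=0)

    # Robust per-pixel scale estimate (MAD scaled to approximate sigma).
    sigma = 1.4826 * np.median(np.abs(subframes - median), axis=0) + 1e-12

    # Flag cosmic rays / hot pixels: samples far from the per-pixel median.
    outliers = np.abs(subframes - median) > k * sigma

    # Mean combine of the surviving samples keeps the fine statistics intact.
    combined = np.ma.masked_array(subframes, mask=outliers).mean(axis=0)

    # Fall back to the median wherever every sample at a pixel was rejected.
    return np.where(np.ma.getmaskarray(combined), median, combined.filled(0.0))
[/code]

With a single subframe the outlier test has nothing to compare against, which is exactly the point about cosmic-ray rejection needing multiple subframes.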

I would love to discuss stacking algorithms such as the ins and outs of the Drizzle algorithm. (Probably in a different thread, though.) Drizzle is an advanced stacking algorithm that does everything I've described in this thread about stacking, plus quite a bit more. Many modern implementations of Drizzle are flexible, so if you wanted to keep it simple, such as the methods I discussed in this thread, you could. But you can also use its full power so long as the subframes are dithered during acquisition (commanding the telescope to point in a slightly [or even not so slightly] different direction and/or orientation between subframes).

The Drizzle algorithm is a core component of JWST's pipeline (see the section on "Imaging combination"); it's the algorithm that JWST uses for stacking.

I could discuss stacking algorithms all day; it's something I find fascinating. But understanding Drizzle requires a firm grasp of basic stacking principles first, and unless we're all on the same page there, it would be like discussing Riemannian geometry without a firm grasp of Euclidean geometry. Something tells me you wouldn't be interested in reading my posts anyway, though. Suffice it to say Drizzle involves stacking.

sophiecentaur said:
You would deny that it's a so-called fair test if we are discussing visibility?

Sure, as long as the data is scaled accordingly, equalizing the relative brightness, so that it becomes an apples-to-apples comparison, even visually.

Once again, I have no qualms with saying that spikes were not detected in some particular image. What I object to is the claim that spikes are not detectable.

sophiecentaur said:
If you could only afford a four-bit ADC for the sensor, would you still be able to tinker with the two different parts of the image and squeeze out two equal spike lengths?

Would I "still"? I don't understand, why would I even tinker with two different parts of the same image in the first place? I wouldn't do that. I would be concerned with corresponding pixels between different subframes, not different pixels in the same subframe.

----

All else being equal, if I switched from a 16-bit ADC to a 4-bit ADC, the smart thing to do is reduce the subframe exposure time to avoid saturation in the subframes. To compensate, a greater number of subframes would be taken, including some extra subframes to compensate for the extra read noise. The total integration time would be a little longer, but other than that it would be a statistically equivalent system.

sophiecentaur said:
Once you fall below the least significant bit, you have lost that information forever.
No, because I've compensated for that by choosing the exposure time per subframe and number of subframes accordingly. Nothing is lost, statistically speaking.
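Here's a toy simulation of that trade-off (Python), assuming an idealized photon-counting pixel with no read noise or dark current. The rate, total integration time, and subframe counts are made-up numbers, chosen only so the 4-bit subframes stay below their 15-ADU full scale.

[code]
import numpy as np

rng = np.random.default_rng(0)

def estimated_rate(rate, total_time, n_sub, adc_bits, gain=1.0):
    """Estimate a photon rate from n_sub quantized subframes totalling total_time seconds."""
    t_sub = total_time / n_sub
    full_scale = 2 ** adc_bits - 1
    photons = rng.poisson(rate * t_sub, size=n_sub)          # shot noise in each subframe
    adu = np.clip(np.round(photons / gain), 0, full_scale)   # ADC quantization + clipping
    return adu.mean() * gain / t_sub                         # stacked mean back to a rate

true_rate = 50.0    # photons/s at this pixel (hypothetical faint source)
total = 1000.0      # seconds of total integration

print(estimated_rate(true_rate, total, n_sub=10,    adc_bits=16))  # long subframes, deep ADC
print(estimated_rate(true_rate, total, n_sub=10000, adc_bits=4))   # short subframes, shallow ADC
[/code]

Both estimates come out close to the true 50 photons/s; the shallow ADC just needs the exposure split into many more subframes (and, in a real camera, a few extra to pay for the added read noise).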

sophiecentaur said:
The above shows failure - that's just a practical issue that the basic maths doesn't consider. Is the word 'failure' too judgmental for you?

"Fails" is the word that you chose, not me.

And no, there isn't any failure if you divide up the necessary total integration time between the subframe length and number of subframes wisely, which is something that you would need to do anyway. Unless you choose them like an idiot, and then there's failure.
 
  • #78
Suppose we have three monochrome laser sources of varying intensity but the same wavelength shining through a diffraction grating, and we take three exposures of consistent exposure length and consistent processing... Since the photons are discrete and arrive randomly, we end up with three diffraction patterns, all of the same angular and pixel size, varying only in brightness...
pattern.jpg

^The three lasers were different brightnesses, exposure times were the same, and all the diffraction patterns ended up the same size with identical image processing...
 
  • #79
Devin-M said:
In this JWST image, the dimmer star's diffraction spikes (circled in red) are the same length as the brighter star's diffraction spikes, because both extend all the way to the edge of the image frame:
I think you would agree that the spikes get progressively dimmer the further out you go (that's what we find with diffraction). So, if every star in that image has detectable spikes extending to the edge, there must be stars available with images that are as bright as the dimmest end of the dimmest spike. @collinsmark seems to be implying that those 'dimmest' stars will still have spikes going out to the edge? In which case, those are not the dimmest stars available. I really don't see how anyone can argue with that logic.

I don't understand how one can say that the single photon quantisation (a quantum efficiency of 100%) is the same as the smallest step of the sensor ADC. The photon energy corresponds to the range of the frequencies involved but the digital value is what it is.
Devin-M said:
varying only in brightness..
Yes and if the light level is reduced, you can lose the faintest fringe altogether.
 
  • #80
sophiecentaur said:
Yes and if the light level is reduced, you can lose the faintest fringe altogether.
Not necessarily. Count 10 photons from a bright source that passes through a pinhole and only 2 from a dim source. Same exposure time, and the sensor is an array of single-photon counters.

Suppose the 10 photons from the bright source arrive by random chance within the 0th, 3rd, 5th and 6th diffraction orders.

The 2 photons from the dim source arrive by chance within the 8th and 12th diffraction orders. The dimmer source's spikes in this case are longer than the bright source's spikes.

diffgrat.png

http://hyperphysics.phy-astr.gsu.edu/hbase/phyopt/grating.html

10cfc89e-0c80-4733-9425-f26bf1378b35-jpeg.jpg
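Here's a toy Monte Carlo (Python) of that thought experiment: each photon's landing position is drawn from a diffraction-shaped probability density (a simple sinc² profile stands in for the real grating/pinhole pattern). It's a sketch, not a model of any particular instrument.

[code]
import numpy as np

rng = np.random.default_rng(1)

# Toy diffraction intensity used as the landing-position PDF: I(x) ~ sinc^2(x),
# with x measured in units of the first-null spacing.
x = np.linspace(-20, 20, 4001)
pdf = np.sinc(x) ** 2
pdf /= pdf.sum()

def detect(n_photons):
    """Draw landing positions for n_photons, one photon at a time."""
    return rng.choice(x, size=n_photons, p=pdf)

bright = detect(10)   # 10 photons from the bright source
dim = detect(2)       # 2 photons from the dim source

print("bright photons, |x|:", np.sort(np.abs(bright)))
print("dim photons,    |x|:", np.sort(np.abs(dim)))
[/code]

Run it a few times: occasionally one of the dim source's two photons lands farther out than any of the bright source's ten. Only after many photons does the expected pattern (and the usual brightness ordering along the spike) emerge.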
 
  • #81
sophiecentaur said:
I don't understand how one can say that the single photon quantisation (a quantum efficiency of 100%) is the same as the smallest step of the sensor ADC. The photon energy corresponds to the range of the frequencies involved but the digital value is what it is.

I'll try to help you understand how that works, using examples. I'll use JWST as my first example, and my own camera as a second example:

JWST:

First, you can find the specs here:
https://jwst-docs.stsci.edu/jwst-ne...detector-overview/nircam-detector-performance

Note the "Gain (e-/ADU)" and the "Quantum efficiency (QE)" in Table 1.

As you can see, JWST uses two different gain settings, one for short wavelengths and another for long.

The Quantum Efficiency (QE) is naturally dependent on the wavelength.

So let's say that we're presently using JWST's F405M filter, which is a long-wavelength filter with a passband around 4 microns.

Going back to Table 1, we see that:
Gain: 1.82 [e-/ADU]
QE: 90% [e-/photon]

We can convert that to photons per ADU by dividing the two:
[tex] \frac{1.82 \ \mathrm{[e^- /ADU]}}{0.9 \ [e^- / \mathrm{photon}]} = 2.02 \ \mathrm{photons/ADU}[/tex]

So when JWST is using its F405M filter, it takes on average about 2 photons to trigger an ADU increment.

My Deep Sky Camera: ZWO ASI6200MM-Pro:

You can find the specs here:
https://astronomy-imaging-camera.com/product/asi6200mm-pro-mono

ASI6200-Performance.png


6200QE.png
So, for example, if I'm using my Hydrogen-alpha (Hα) filter, which has a wavelength of 656.3 nm, we can see that the QE is about 60%.

But if I set my camera gain to 0, corresponding to around 0.8 [e-/ADU], that gives

[tex] \frac{0.8 \ \mathrm{[e^- /ADU]}}{0.6 \ [e^- / \mathrm{photon}]} = 1.33 \ \mathrm{photons/ADU}[/tex]

So with this gain setting and when using the Hα filter, it takes on average about 1.33 photons to increment an ADU.

(I like to set my camera gain to something higher for better read noise, but I'll just use a gain setting of 0 here for the example.)

Had I instead set the gain setting to around 25, which corresponds to somewhere close to 0.6 [e-/ADU], that would correspond to roughly 1 photon/ADU on average.
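The same arithmetic as the two worked examples above, wrapped in a tiny helper (Python) in case anyone wants to try other gain/QE combinations:

[code]
def photons_per_adu(gain_e_per_adu, qe_e_per_photon):
    """Average number of incident photons needed to increment the ADC output by one ADU."""
    return gain_e_per_adu / qe_e_per_photon

# JWST NIRCam long-wavelength channel with the F405M filter (Table 1 values):
print(photons_per_adu(1.82, 0.90))   # ~2.02 photons/ADU

# ZWO ASI6200MM-Pro at gain 0 through an H-alpha filter (values read off the plots):
print(photons_per_adu(0.80, 0.60))   # ~1.33 photons/ADU
[/code]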
 
  • #82
@collinsmark thanks for that information. It's interesting that the efficiency seems to be so high. However, isn't there something fundamental that means having the gain too high will just increase the effect of shot noise, wasting one bit of bit depth?
I do more or less take your point about equating an lsb to the arrival of one (or two) photons if the gain is suitable.
Devin-M said:
The 2 photons from the dim source arrive by chance within the 8th and 12th diffraction orders. The dimmer source's spikes in this case are longer than the bright source's spikes.
I don't understand how the result of such an arbitrary set of events would be relevant to the statistics of real events producing an identifiable spike pattern on a regular basis. Your instance doesn't produce a spike; it triggers a single pixel. You seem to suggest that a bright star would not be expected to produce a 'clearer' pattern of photons arriving at the sensor than a (very) dim one. I know that quantum effects / statistics of small numbers can sometimes give surprising results, but normal principles of signal-to-noise ratio start to kick in at low numbers. Relative numbers of 'events' still tend to correlate with the continuum of analogue values that the diffraction integrals give you.
Devin-M said:
With equal image intensity, both have the same shape, size, and appearance despite having different apparent brightness.
That should be no surprise to anyone. That large image of many stars shows long spikes for many stars that are not actually near saturation, but the fainter ones do not have identifiable spikes. When the absolute value of the spike pattern from a low-intensity star falls below one increment, the probability of it causing a hit gets less and less. I really don't see why this isn't obvious.
 
  • #83
Here's a red diffraction pattern extending to the edge of the telescope's image circle:

red2.jpg


Here's a blue diffraction pattern extending to the edge of the telescope's imaging circle:
blue2.jpg


Blue Channel:
blue-grey.jpg


Red Channel:
red-grey.jpg


Here's both superimposed in the same viewing frame:
image_circle.jpg


The center of the red diffraction pattern has an RGB value of 226,0,0 and the center of the blue diffraction pattern has an RGB value of 0,0,150... Which diffraction pattern is "larger?"
 
  • #84
sophiecentaur said:
@collinsmark thanks for that information. It's interesting that the efficiency seems to be so high. However, isn't there something fundamental that means having the gain too high will just increase the effect of shot noise, wasting one bit of bit depth?
@sophiecentaur Yes, the gain setting does affect the noise, but it's a bit counterintuitive, and might not be what you think.
  • Firstly, it's mostly all a matter of read noise. I'll get to shot noise in a second, but read noise is affected more by the gain setting.
  • The read noise contribution is actually lower with higher gain, not the other way around. So if you want to lower your sensor's noise, and you're not at risk of saturation, increase the gain.
  • The gain value does little to nothing to the pixel detection or the electron well. It really doesn't make the sensor more "sensitive," as one might think. Rather, in most cases it merely affects the ADC operation, which is on the back-end of the pixel, not the front.

Yeah, I know, that might sound counterintuitive that increasing the gain lowers the noise. Especially if you're a typical photographer. This same thing applies to a DSLR camera. Higher ISO settings actually produce less noise, not more.

If you tell that to a photographer friend, they will yell back, "no way! When I take a photo with higher ISO it always comes back noisier!"

But what they're forgetting is that when they increase the sensor gain, they naturally compensate by lowering the exposure: either reducing the exposure time (a.k.a. shutter speed), lowering the aperture (a.k.a. f/stop), or both. Lowering the exposure reduces the number of photons accumulated, and that is what increases the noise, in the form of shot noise, since there's less signal. And for that matter, the signal-to-noise ratio (SNR) is reduced because, well, there's less signal. But this entire test is an apples-to-oranges comparison. Too many variables.

To make an apples-to-apples comparison, you need to lock down the exposure (f/stop and shutter speed) and compare pictures while changing only the gain setting (ISO). If you take a photo of a dim object (to avoid saturation even at high gain) and analyze the results at both gains, you'll find the one with high gain actually has the higher SNR!

Read noise vs. gain characteristics are typically nonlinear and vary quite a bit from camera model to camera model. Look at this plot showing the read noise vs. gain for my new planetary camera, the ZWO ASI585MC. (Notice that it's different from the camera in my last post; these characteristics really do vary by camera model.)

585.png


One thing that seems to be true in every camera I've ever seen is that, while the read noise vs. gain curve may be nonlinear, it is always monotonically decreasing: higher gain means lower read noise.

Yeah, if you're like me, this is difficult to wrap your head around. I've created an analogy that might help.

Imagine that you keep a pile of marshmallows (representing electrons) in a silo (the well). And you have a measuring stick (the ADC) that converts the physical height of the marshmallow pile to a number (the output), representing the height.

A rabid badger (representing the source of read noise) is always grabbing the measuring stick and yanking it back and forth and all over, so it's difficult to get a steady reading off of the measuring stick.

Slide2.jpg

Now here's the neat thing: you have the power to change the size of the measuring stick and the rabid badger. Increasing the "gain" setting has the effect of shrinking the measuring stick, and the badger gets smaller too, to some extent.

Slide3.jpg


This however, has some ramifications:
  • By increasing the gain (and thus shrinking the measuring stick), the stick can no longer measure as high. The maximum height of the measuring stick is reduced. In other words, its dynamic range (DR) is reduced.
  • Higher gain settings shrink the size of the rabid badger too! The smaller rabid badger doesn't do as much shaking in reference to the actual, physical height of the marshmallow stack. In other words, the important measure: the amount of shaking of the stick in units of marshmallow height, is reduced.
  • The noise of the output value (i.e., the final output number) might change with greater ADU numbers, but not relatively so. If you divide the standard deviation of the output number by the mean of the output number, you'll find the result is less when the gain setting is higher.
  • The size of the rabid badger does not necessarily shrink by the same amount as the measuring stick. If you halve the size of the stick, you don't necessarily halve the size of the rabid badger, but it does get smaller.
  • Notice that changing the gain setting really has nothing to do with the marshmallows, silo, or anything that comes before that.
Some of the increased read noise (at lower gain) might arguably be the result of higher quantization at lower gain settings (i.e., posterization). But I'm led to believe that while posterization might play a role in explaining that, there is more to it.

Whatever the case, here's a word of advice: if you're far from saturation, and for whatever reason you don't want to or can't increase the exposure time, increase the gain instead to maximize SNR.
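To put rough numbers on that advice, here's a minimal sketch (Python) of the per-subframe SNR for a shot-noise-limited pixel with Gaussian read noise. The signal level and the two read-noise figures are hypothetical, just to show the direction of the effect at a fixed exposure:

[code]
import numpy as np

def snr(signal_e, read_noise_e):
    """SNR of one subframe, everything in electrons; dark current ignored for simplicity."""
    return signal_e / np.sqrt(signal_e + read_noise_e ** 2)

signal = 40.0   # electrons collected in the subframe (hypothetical faint target)

print(snr(signal, read_noise_e=3.5))   # low gain, higher read noise  -> lower SNR
print(snr(signal, read_noise_e=1.5))   # high gain, lower read noise  -> higher SNR
[/code]

Same exposure, same photons: the only thing that changed is the read noise, which is why the higher-gain setting wins as long as nothing saturates.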
 
  • #85
Devin-M said:
Which diffraction pattern is "larger?"
The colour images could be misleading because our sensitivity to / perception of colours is a 'human' quality. Personally, I can't see the blue fringes at all clearly on my phone. An astrophotographer using a monochrome camera would be better placed to come to a meaningful conclusion. If the monochrome images, once they've been normalised, do not follow the maths, then there's something wrong. If there's a ratio of two in the wavelengths, then there should be a factor of two in the spacing (for small angles, at least).
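For reference, the small-angle relation behind that factor-of-two check, for a repeating structure (grating, screen, or strut spacing) of period d:

[tex] d\sin\theta_m = m\lambda \;\;\Rightarrow\;\; \theta_m \approx \frac{m\lambda}{d}, \qquad \frac{\theta_m(\lambda_2)}{\theta_m(\lambda_1)} = \frac{\lambda_2}{\lambda_1}[/tex]

So halving the wavelength should halve the spacing of the normalised monochrome fringes, independent of how bright either pattern is.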
 
  • #86
Devin-M said:
Blue Channel:
blue-grey-jpg.jpg


Red Channel:
red-grey-jpg.jpg
Assume they both go all the way to the edge but are the same color. One is dimmer, which is "larger?"
 
  • #87
Here's some proof light can diffract across an entire sensor on the JWST...
IMG-00356361.jpg


Or even 2 sensors...

IMG-00356361-2.jpg


screen-shot.jpg
 
  • #88
Devin-M said:
which is “larger?”
Sorry, I don't get where you are coming from. If they both 'go to the edge' then the edge defines their extent. What does this demonstrate?
The theoretical envelope never goes to zero.
Devin-M said:
Here's some proof light can diffract across an entire sensor on the JWST..
The word "proof" implies that there is some doubt. You notice the star centre is well burnt out; there is no limit to how far out the spike could go. The (reflecting) struts supporting the reflector are the long line sources and the pattern from a line will be of the form (sin(x)/x)2 in the direction normal to the strut. Their reflecting area is pretty significant. The envelope falls off as 1/x2, which takes a lot of peaks before it starts to disappear.
 
  • #90
5-jpg-jpg.jpg

Followup question. Above, I've taken the same image of the same star and split it into just the red light (red channel) on the left and just the blue light (blue channel) on the right. We can see that the apparent positions of the dimmer stars with no visible diffraction spikes are the same in both images (bottom left, top right, and the one just below the right-side diffraction spike), but the points of maximum brightness of the diffraction spikes are in different positions depending on color.

Does it mean that only photons of the same color/ wavelength interfere with each other?

For example, suppose I use a red monochrome laser as a point source viewed through my aperture, and note the distances of the maxima from the apparent source. Next I aim at a star (broadband) and take an exposure sufficiently long to see the same number of maxima, and look at the red channel. Will the red spots be the same distance from the apparent position in both cases, even though in the star/broadband case there are additional wavelengths of light traveling through the aperture? Will the blue wavelengths, for example, change the apparent positions of the red interference maxima?
30d894ca-5099-420d-a8b4-200b11b7edd4-jpeg.jpg
 
  • #91
Devin-M said:
Does it mean that only photons of the same color/ wavelength interfere with each other?
The mechanism of interference relies on two (or more) sources of precisely the same wavelength. If a red photon is detected at the same time as a blue photon then the interference patterns associated with each wavelength will be totally different; they are independent.

I suggest you read around about the mechanism of interference / diffraction and you will find that photons do not come into it. It is always described in terms of waves. (I think this has already been pointed out in this thread.)
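A toy calculation (Python) of that independence, using the textbook two-slit intensity at small angles: different wavelengths are mutually incoherent, so their intensities simply add, and adding blue light never moves the red maxima. The slit separation is an arbitrary illustrative value.

[code]
import numpy as np

d = 0.2e-3   # slit separation in metres (illustrative only)

def intensity(theta, lam):
    """Young's two-slit fringe intensity (small angles), normalised to 1 at the maxima."""
    return np.cos(np.pi * d * theta / lam) ** 2

lam_red, lam_blue = 650e-9, 450e-9
m = np.arange(5)
theta_red_max = m * lam_red / d          # analytic positions of the red maxima

print(intensity(theta_red_max, lam_red))                                       # all 1.0
print(intensity(theta_red_max, lam_red) + intensity(theta_red_max, lam_blue))  # blue adds on top
[/code]

The blue term adds a different amount of intensity at each point, but the red pattern still peaks exactly where it did on its own, which is why the red and blue channels above show maxima at different positions.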
 
  • #92
Is it true that the dark areas in the spikes are destructive interference, and that this is only possible because starlight has both spatial and temporal coherence? (In other words, if a point source did not have spatial and temporal coherence, would the dark areas in the spikes vanish?)

5-jpg-jpg-jpg.jpg
 
  • #93
How could the spatial coherence not be perfect for a point? Do you know what coherence means?
 
  • #94
Well, from the source below, light from the sun isn't coherent, but light from the stars apparently is, so presumably light from the sun would be coherent if viewed from a great distance. I'm not sure why a non-coherent source becomes coherent when viewed from an increasing distance, though.

https://www.mr-beam.org/en/blogs/news/kohaerentes-licht
 
  • #95
The light from a star is hardly monochromatic, so how can it be "coherent"? Diffraction fringes are not visible unless you filter the light.
But coherence is a quantity with a continuum of values: everything from excellent to zero.
 
  • #96
I thought destructive interference was causing the dark regions in the spikes, but isn’t coherence a prerequisite for destructive interference?
 
  • #97
Devin-M said:
I thought destructive interference was causing the dark regions in the spikes, but isn’t coherence a prerequisite for destructive interference?
Variations, not just minima, require some degree of coherence, but it's not just on/off. As coherence decreases, the patterns become less distinct. You will get a hold of this better if you avoid taking single comments that you hear / read and instead try to follow the details of the theory. It may be that the argument you heard relates to twinkling stars. Fair enough, but the details count, and a statement about one situation may not cover all cases.
 
  • #98
In this video he’s able to obtain a destructive interference pattern with just sunlight & a double slit… (3:29)
 
  • #99
Devin-M said:
In this video he’s able to obtain a destructive interference pattern with just sunlight & a double slit… (3:29)
And your point is?
The equipment he uses introduces some coherence with a collimating slit, which is what allows the diffraction effects to form. You don't only get diffraction effects with a high-quality laser.
And I think we have gone past the stage, in this discussion, where we should be quoting links showing ripples on water. To discuss the limitations imposed on the basic calculations of diffraction, you need to introduce the concept and effects of coherence.
The coherence length of any source is the mean length over which the individual bursts of light exist: very long for a good laser, short for a discharge tube, and even shorter for a hot object. Interference will only occur when the path differences involved are shorter than the coherence length. With sunlight, that means only near boresight.
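A common rough estimate for that coherence length, in terms of the source bandwidth:

[tex] L_c \approx \frac{\lambda^2}{\Delta\lambda} [/tex]

For unfiltered sunlight (λ ≈ 550 nm, Δλ of a few hundred nm) that comes out on the order of a micron, while a narrow-band laser can manage centimetres to metres, which is why the sunlight fringes only survive where the path difference is very small.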
 
  • #100
Starlight is coherent over the size of your setup. That includes sunlight from a given direction - but an unfiltered Sun subtends an angle of half a degree, and usually that makes the blob of sunlight much wider than any interference patterns.
Devin-M said:
Does it mean that only photons of the same color/ wavelength interfere with each other?
Light is linear so you can always treat interference as something that happens photon by photon (the double-slit experiment has been done with single-photon sources, and also with electrons). If each photon only interferes with itself then it's obvious why different colors (different wavelengths) can have different patterns.
 
  • #101
mfb said:
If each photon only interferes with itself
For a 'bright' highly coherent source, there would be many photons involved and, although I really don't like this, you could say that many of the photons will 'interfere with each other'.
I always have a problem with the attempt to interpret wave phenomena in terms of photons; doesn't it call for a really strange concept of the coherence of a single photon? The photon can only actually 'arrive' at one place, which is not a pattern. But the place where it arrives could be at a peak (or null?) in the theoretical pattern, and the value will be one.

A stream of single photons will form a pattern over time. What would determine the spread of the diffraction pattern from the ideal case? Perhaps it would have to be the bandwidth of the original light source, before the 'photon gate' is operated.
 
  • #102
I'm amazed by the "volume" of photons it took to produce the pattern: it was a 5-minute exposure, so we're talking about a "beam length" of 55.8 million miles. Considering the aperture diameter was 66 millimeters, that gives a "beam volume" of roughly 0.07 cubic miles of photons to produce the image.
 
  • #103
Devin-M said:
I'm amazed by the "volume" of photons it took to produce the pattern: it was a 5-minute exposure, so we're talking about a "beam length" of 55.8 million miles. Considering the aperture diameter was 66 millimeters, that gives a "beam volume" of roughly 0.07 cubic miles of photons to produce the image.
That tells you how much energy is in that space, but I think that looking at it as a spray of photons is not in the spirit of QM. There is no way to say that any particular photon is actually inside that cone, as photons have no extent or position. Only when a photon interacts with a sensor element can you say when and where the photon existed.
 
  • #104
sophiecentaur said:
Only when a photon interacts with a sensor element can you say when and where the photon existed.
Can we say the photons in the middle spike more likely than not went through the set of slits on the left half of the Bahtinov mask?
8c6d1092-2572-4490-b30e-b1ac2af22337-jpeg.jpg


418b7ad4-93e8-4a1d-a786-8ae000b91975-jpeg.jpg

1668362641339.png

https://en.m.wikipedia.org/wiki/Bahtinov_mask
 
  • #105
Devin-M said:
Can we say the photons in the middle spike more likely than not went through the set of slits on the left half of the Bahtinov mask?

I know this is not intuitive, but how can you say a photon 'went through', or may or may not have taken a path? Remember that scientists with greater ability than me or (with respect) you struggled with this business. It was agreed, nearly a hundred years ago, that this approach goes nowhere. You need to change the model in your head or you will continue to be confused.

(I can see I have written this in the wrong place. Forgive me. Blame it on the iPhone!)
 
