andrew s 1905 said:
Two stars in the image have been manipulated to have equal lengths but other stars in the field show none at all! Regards Andrew
andrew s 1905 said: Two stars in the image have been manipulated to have equal lengths but other stars in the field show none at all! Regards Andrew

The image was manipulated so the two stars have roughly equal brightness (even though they originally did not); the adjustment was made so the comparison would be apples to apples. The equal spike lengths fell out of that naturally.
collinsmark said: How might you suggest informing astronomers that use JWST, Hubble (HST), and pretty much any telescope around the world, that their stacking algorithms -- algorithms that they've been using for decades -- are all failures? Do you propose invalidating the countless academic papers that relied on astronomical data that was invariably produced, in part, using the same general mathematical principles and theorems discussed here?

This is a perfect straw-man argument. "Failures"? I am simply pointing out that the world of digital (discrete) sampling is inherently non-linear and that, for example, stacking using the median value of samples introduces more non-linearity. (Did I suggest that's a bad thing?) Any well-known amateur astro package (say, Nebulosity) offers several stacking algorithms; one of them is based on the median of the pixel values. (I could ask whether you know what stacking is.)
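The non-linearity point about median stacking can be seen in a two-line sketch (illustrative single-pixel "stacks", not real subframes): the median of a sum is not the sum of the medians, whereas mean stacking is linear.

```python
import numpy as np

# Illustrative single-pixel "stacks" of three subframes each (made-up values).
a = np.array([0.0, 1.0, 10.0])
b = np.array([10.0, 1.0, 0.0])

# A linear operator L must satisfy L(a + b) == L(a) + L(b).
print(np.median(a) + np.median(b))  # 1 + 1 = 2
print(np.median(a + b))             # median([10, 2, 10]) = 10 -> nonlinear

# Mean stacking, by contrast, is linear:
print(np.mean(a) + np.mean(b), np.mean(a + b))  # both equal (22/3)
```

Whether that non-linearity matters in practice is a separate question (it is exactly what makes the median robust to outliers such as cosmic-ray hits), but mathematically the median is not a linear operator.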
collinsmark said: this is an apples to oranges comparison

Would you deny that it's a so-called fair test if we are discussing visibility?
collinsmark said: Where does the math fail?

The above shows failure; that's just a practical issue that the basic maths doesn't consider. Is the word 'failure' too judgmental for you? I can't think of a better description for what happens in practice.
sophiecentaur said: I am simply pointing out that the world of digital (discrete) sampling is inherently non-linear
sophiecentaur said: and that, for example, stacking using the median value of samples introduces more non-linearity. (Did I suggest that's a bad thing?) If you use any well known amateur astro software (say, Nebulosity) there are several options of stacking algorithms. One of them is based on the median of the pixel values. (I could ask you whether you know what stacking is.)
sophiecentaur said: You would deny that it's a so-called fair test if we are discussing visibility?
sophiecentaur said: If you could only afford a four-bit ADC for the sensor, would you still be able to tinker with the two different parts of the image and squeeze out two equal spike lengths?
sophiecentaur said: Once you fall below the minimum sig bit, you have lost that information for ever.

No, because I've compensated for that by choosing the exposure time per subframe and the number of subframes accordingly. Nothing is lost, statistically speaking.
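The compensation claim can be illustrated with a toy model (all numbers assumed for illustration: a hypothetical 0.3 ADU signal, ~0.5 ADU rms read noise acting as natural dither, and a bias pedestal as real cameras use). A single noiseless quantization destroys the sub-LSB signal, but averaging many noisy subframes recovers it:

```python
import numpy as np

rng = np.random.default_rng(1)

true_level = 0.3   # hypothetical signal in ADU, below the 1-ADU step
offset = 5.0       # bias pedestal (keeps the noise away from the 0 clip)

# One noiseless exposure through a 4-bit ADC loses the sub-LSB signal:
print(np.round(offset + true_level) - offset)   # 0.0 -- gone

# Many subframes, with read noise (~0.5 ADU rms) acting as natural dither:
n_subframes = 20_000
noisy = offset + true_level + rng.normal(0.0, 0.5, n_subframes)
quantized = np.clip(np.round(noisy), 0, 15)     # 4-bit ADC
print(quantized.mean() - offset)                # close to 0.3 again
```

This is the standard dithering argument: noise of roughly one quantization step or more decorrelates the quantization error, so the ensemble mean preserves sub-LSB information even though each individual subframe does not.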
sophiecentaur said: The above shows failure - that's just a practical issue that the basic maths doesn't consider. Is the word 'failure' too judgmental for you?
Devin-M said: In this JWST image, the dimmer stars' diffraction spikes (circled red) are the same length as the brighter star's diffraction spikes, because both extend all the way to the edge of the image frame:

I think you would agree that the spikes get progressively dimmer the further out you go (that's what we find with diffraction). So, if every star in that image has detectable spikes extending to the edge, there must be stars available whose images are as faint as the dimmest end of the dimmest spike. @collinsmark seems to imply that those 'dimmest' stars will still have spikes going out to the edge, in which case those are not the dimmest stars available. I really don't see how anyone can argue with that logic.
Devin-M said: varying only in brightness..

Yes, and if the light level is reduced you can lose the faintest fringe altogether.
sophiecentaur said: Yes, and if the light level is reduced you can lose the faintest fringe altogether.

Not necessarily. Count 10 photons from a bright source that passes through a pinhole and only 2 from a dim source, with the same exposure time and a sensor that is an array of single-photon counters.
sophiecentaur said: I don't understand how one can say that single-photon quantisation (a quantum efficiency of 100%) is the same as the smallest step of the sensor ADC. The photon energy corresponds to the range of frequencies involved, but the digital value is what it is.
Devin-M said: The 2 photons from the dim source arrive by chance within the 8th and 12th diffraction orders. The dimmer source's spikes in this case are longer than the bright source's spikes.

I don't understand how the result of such an arbitrary set of events would be relevant to the statistics of real events producing an identifiable spike pattern on a regular basis. Your instance doesn't produce a spike; it triggers a single pixel. You seem to suggest that a bright star would not be expected to produce a 'clearer' pattern of photons at the sensor than a (very) dim one. I know that quantum effects and the statistics of small numbers can sometimes give surprising results, but normal signal-to-noise principles start to kick in at low numbers: relative numbers of 'events' still tend to correlate with the continuum of analogue values that the diffraction integrals give you.
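The statistical point can be checked with a small Monte Carlo (a toy 1-D "spike" with an assumed 1/r² envelope; the numbers are illustrative, not JWST parameters). Any single dim-star realization can fluctuate, as Devin-M says, but on average the farthest detected diffraction order grows strongly with brightness:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D spike: expected photon count per radial bin falls off as ~1/r^2.
r = np.arange(1, 51)
profile = 1.0 / r**2

def spike_length(total_photons, trials=2000):
    """Mean farthest bin receiving at least one photon, over many trials."""
    lam = total_photons * profile / profile.sum()   # Poisson mean per bin
    counts = rng.poisson(lam, size=(trials, r.size))
    hit = counts > 0
    # farthest bin with a hit (0 if no photon landed at all)
    farthest = np.where(hit.any(axis=1),
                        r.size - np.argmax(hit[:, ::-1], axis=1), 0)
    return farthest.mean()

print(spike_length(10_000))  # bright star: hits reach far down the spike
print(spike_length(20))      # dim star: detectable spike is much shorter
```

So a lucky pair of photons in the 8th and 12th orders is possible, but the *expected* detectable spike length still correlates with brightness, which is all the signal-to-noise argument claims.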
Devin-M said: With equal image intensity both have the same shape, size and appearance despite having different apparent observable brightness.

That should be no surprise to anyone. That large image of many stars shows long spikes for many stars that are not actually near saturation, but the fainter ones do not have identifiable spikes. When the absolute value of the spike pattern from a low-intensity star falls below one increment, the probability of it causing a hit gets less and less. I really don't see why this isn't obvious.
sophiecentaur said: @collinsmark thanks for that information. It's interesting that the efficiency seems to be so high. However, isn't there something fundamental that means having the gain too high will just increase the effect of shot noise, wasting one bit of bit depth?

@sophiecentaur Yes, the gain setting does affect the noise, but it's a bit counterintuitive, and might not be what you think.
Devin-M said: Which diffraction pattern is "larger?"

The colour images could be misleading because our sensitivity to, and perception of, colours is a 'human' quality. Personally, I can't see the blue fringes at all clearly on my phone. An astrophotographer using a monochrome camera would be better placed to come to a meaningful conclusion. If the monochrome images, once normalised, do not follow the maths, then there's something wrong. If there's a ratio of two in the wavelengths, then there should be a factor of two in the spacing (for small angles, at least).
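The factor-of-two claim follows from the small-angle diffraction relation θ_m ≈ mλ/d. A quick numerical check (the 10 µm pitch is an arbitrary illustrative value, not a JWST figure):

```python
import numpy as np

# Small-angle diffraction: the m-th order from a slit/strut of pitch d sits
# at angle theta_m ~= m * wavelength / d, so the pattern scale is directly
# proportional to wavelength.
d = 10e-6                            # assumed 10 µm pitch (illustrative only)
lam_blue, lam_red = 400e-9, 800e-9   # a 2:1 wavelength ratio

theta_blue = 1 * lam_blue / d        # first-order angle, blue
theta_red = 1 * lam_red / d          # first-order angle, red
print(theta_red / theta_blue)        # 2.0 -- fringe spacing doubles
```

Doubling the wavelength doubles every order's angle, so the whole red pattern is a scaled-up copy of the blue one, exactly as stated.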
Devin-M said: Blue Channel: / Red Channel:

Assume they both go all the way to the edge but are the same colour. One is dimmer; which is "larger?"
Devin-M said: which is "larger?"

Sorry, I don't get where you are coming from. If they both 'go to the edge' then the edge defines their extent. What does this demonstrate?
Devin-M said: Here's some proof light can diffract across an entire sensor on the JWST..

The word "proof" implies that there is some doubt. Notice that the star's centre is well burnt out; there is no limit to how far out the spike could go. The (reflecting) struts supporting the reflector are long line sources, and the pattern from a line source has the form (sin x / x)² in the direction normal to the strut. Their reflecting area is pretty significant. The envelope falls off as 1/x², which takes a lot of peaks before it starts to disappear.
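How slowly a 1/x² envelope dies off can be made concrete. In the sketch below, the m-th sidelobe of (sin x / x)² peaks near x ≈ (m + ½)π with height ≈ 1/x², and we count how many sidelobes stay above one ADU of an assumed 12-bit ADC (both the bit depth and the unit-peak normalisation are illustrative assumptions):

```python
import numpy as np

# Sidelobe peaks of (sin x / x)^2 sit near x = (m + 1/2) * pi with
# height ~ 1 / x^2 (pattern normalised to 1 at the centre).
m = np.arange(1, 200)
x_peak = (m + 0.5) * np.pi
sidelobe = 1.0 / x_peak**2

one_adu = 1 / 2**12          # smallest step of an assumed 12-bit ADC
visible = sidelobe > one_adu
print(visible.sum())         # number of sidelobes still above 1 ADU
```

Nineteen sidelobes survive even in this unsaturated toy case; with a heavily saturated star centre the pattern is multiplied up by orders of magnitude, so spikes reaching the edge of the frame are unremarkable.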
Devin-M said: Does it mean that only photons of the same color/wavelength interfere with each other?

The mechanism of interference relies on two (or more) sources of precisely the same wavelength. If a red photon is detected at the same time as a blue photon, the interference patterns associated with each wavelength will be totally different; they are independent.
Devin-M said: I thought destructive interference was causing the dark regions in the spikes, but isn't coherence a prerequisite for destructive interference?

Variations, not just minima, require some degree of coherence, but it's not just on/off: as coherence decreases, the patterns become less distinct. You will get a hold of this better if you avoid taking single comments that you hear or read and instead try to follow the details of the theory. It may be that the argument you heard relates to twinkling stars. Fair enough, but the details count, and a statement about one situation may not cover all cases.
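The "not just on/off" behaviour can be shown with a toy model of partial coherence: add the intensity fringes of two nearby wavelengths (frequencies 1 and 1.05 in arbitrary units, chosen purely for illustration). Near the centre the two fringe systems line up and the minima are deep; further out they drift out of step, so the dark minima fill in rather than switching off:

```python
import numpy as np

# Two-wavelength toy model: summed intensity fringes.
x = np.linspace(0, 40, 4001)
pattern = np.cos(x)**2 + np.cos(1.05 * x)**2

def visibility(seg):
    """Michelson fringe visibility, (Imax - Imin) / (Imax + Imin)."""
    return (seg.max() - seg.min()) / (seg.max() + seg.min())

print(visibility(pattern[x < 10]))               # near centre: crisp fringes
print(visibility(pattern[(x > 28) & (x < 35)]))  # further out: washed out
```

The fringe visibility degrades gradually with distance from the centre, which is exactly the graded loss of contrast that partial coherence produces.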
Devin-M said: In this video he's able to obtain a destructive interference pattern with just sunlight & a double slit... (3:29)

And your point is?
Devin-M said: Does it mean that only photons of the same color/wavelength interfere with each other?

Light is linear, so you can always treat interference as something that happens photon by photon (the double-slit experiment has been done with single-photon sources, and also with electrons). If each photon only interferes with itself, then it's obvious why different colors (different wavelengths) can have different patterns.
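The photon-by-photon picture can be simulated: each detection is an independent draw from the same cos² probability density, and the fringes only emerge in the aggregate (the screen coordinates and photon count below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Probability density across the screen for a single photon: ~ cos^2.
x = np.linspace(-np.pi, np.pi, 400)
pdf = np.cos(x / 2)**2
pdf /= pdf.sum()

# Each sample is one photon detection; accumulate 50,000 of them.
photons = rng.choice(x, size=50_000, p=pdf)
counts, edges = np.histogram(photons, bins=40, range=(-np.pi, np.pi))

centers = (edges[:-1] + edges[1:]) / 2
central = counts[np.abs(centers).argmin()]
print(central, counts[0])   # central bin heavily populated, edge bin nearly empty
```

No single photon "makes" a fringe; the pattern is a statistical property of many independent single-photon events, which is the sense in which each photon interferes only with itself.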
mfb said: If each photon only interferes with itself

For a 'bright', highly coherent source there would be many photons involved and, although I really don't like this way of putting it, you could say that many of the photons will 'interfere with each other'.
Devin-M said: I'm amazed by the "volume" of photons it took to produce the pattern; it was a 5 minute exposure, so we're talking about a "beam length" of 55.8 million miles. Considering the aperture diameter was 66 millimeters, that gives a "beam volume" of roughly 29.5 cubic miles of photons to produce the image.

That tells you how much energy is in that space, but I think that looking at it as a spray of photons is not in the spirit of QM. There is no way that any particular photon is actually inside that cone, as photons have neither extent nor position. Only when a photon interacts with a sensor element can you say when and where the photon existed.
sophiecentaur said: Only when a photon interacts with a sensor element can you say when and where the photon existed.

Can we say the photons in the middle spike more likely than not went through the set of slits on the left half of the Bahtinov mask?
Devin-M said: Can we say the photons in the middle spike more likely than not went through the set of slits on the left half of the Bahtinov mask?
I know this is not intuitive, but how can you say a photon 'went through', or may or may not have taken, a particular path? Remember that scientists with greater ability than me or (with respect) you struggled with this business, and it was agreed nearly a hundred years ago that this approach goes nowhere. You need to change the model in your head or you will continue to be confused.
(I can see I have written this in the wrong place. Forgive me. Blame it on the iPhone!)