Does Destructive Interference Affect Starlight in a Double Slit Experiment?

In summary: There is nowhere near twice as much red value when adding the second slit. So where did the missing energy go?
  • #71
So you didn't do any calibration? No dark or bias frame subtraction?

Devin-M said:
I also took a representative sample of the average noise and subtracted it from the raw values to give the final values.
Why?
 
  • #72
I did… I sampled the dark noise and subtracted it…

Avg noise per pixel was 39.7… that was then subtracted from the signal

Devin-M said:
Single 115.2 avg - 39.7 noise avg = 75.5 avg
Double 173.9 avg - 39.7 noise avg = 134.2 avg (1.77x higher)
 
  • #73
Devin-M said:
I did… I sampled the dark noise and subtracted it…
I've never heard of this calibration method. Why would you subtract the average background noise? Unless I'm mistaken, that's not going to get rid of the counts from the dark current or the bias counts. Just so we're on the same page, noise is the random variation of counts or pixel values between pixels in an image or between the same pixel in multiple images.

The point of calibration is to remove counts that aren't due to capturing your target's light. I call these counts 'unwanted signal'. The major sources of unwanted signal are dark current, background light that falls onto the same area of the sensor as your target, and stray light from internal reflections or other optical issues. In addition, the camera itself typically adds some set value to each pixel, called an offset value or bias value.

The bias value itself isn't noise, as it's just a set value added to each pixel and is easily removed. But all sources of unwanted signal add noise because they are real sources and obey the same statistical nature as your target. The funny thing with noise is that it simply cannot be removed. What can be removed is the average dark current and the bias value (the other sources of unwanted signal can also be removed, but it is much more difficult and well beyond the scope of this post).

To remove dark current (not dark noise, which cannot be removed) one must take dark frames, combine them together to get a master dark frame, and then subtract this frame from each target exposure. This will get rid of both the bias value and the average dark current in each pixel. These should be the worst sources of unwanted counts in your images and their removal is just about all you need to do if you're just worried about counting pixel values. You could use flat frames, but as long as we're dealing with only a small part of the sensor and there aren't large dust particles on the filters and sensor window that are casting shadows on this area we don't really need to worry about flats.
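As a concrete illustration of the dark-frame step, here's a minimal sketch in Python/NumPy. It assumes the exposures have already been loaded as 2-D arrays of raw ADU values (how you load them, via a FITS reader, rawpy, etc., is up to your toolchain):

[CODE=python]
import numpy as np

def make_master_dark(dark_frames):
    # dark_frames: list of 2-D arrays, one per dark exposure taken at the
    # same exposure time, gain/ISO and temperature as the light frames.
    # Median-combining suppresses outliers (cosmic rays, hot-pixel flicker)
    # better than a plain mean.
    return np.median(np.stack(dark_frames), axis=0)

def calibrate(light_frame, master_dark):
    # Subtracting the master dark removes the bias offset and the average
    # dark current from every pixel. The random noise itself remains; only
    # the unwanted *signal* can be subtracted.
    return light_frame.astype(np.float64) - master_dark
[/CODE]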

Since measuring the average background noise and then subtracting this value from each pixel doesn't remove either dark current or the bias offset, I'm confused why you would do so.
 
  • #74
That would make sense if I wanted an image as the final output, but all I want is the average ADU/pixel in the area of interest (with average noise ADU/pixel ignored). So I took the average noise per pixel and subtracted that from the average ADU/pixel in the area of interest.
 
  • #75
You are searching for a 1% answer.
Can you not turn all the autoscaling off? You don't need more than 8 bits of useful resolution to answer your query.
Why are pixels saturating? Particularly, why are blue and red ones saturating but apparently not green? You need to eliminate them from your calculation, particularly the way you are doing your averages. You really should do the manipulations on a pixel-by-pixel basis (can you not do that with your software?).
Instead of rerunning the same experiment endlessly, take a close look at your data. How about a histogram of the dark counts, over many pixels, for one frame? You need to develop the ability to change your point of view.
How about a plot of the theoretical predicted graphs for your wavelength and widths? There are many ways to proceed and you seem to have plenty of data, interest, and perseverance. Ask and answer a slightly tangential question and you will learn something.
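For instance, a quick look at the distribution of dark counts takes only a few lines (a minimal sketch, assuming the red-pixel ADU values of one dark frame are already sitting in a NumPy array):

[CODE=python]
import numpy as np
import matplotlib.pyplot as plt

def plot_dark_histogram(dark_red):
    # dark_red: array of red-pixel ADU values from a single dark frame.
    values = np.ravel(dark_red)
    plt.hist(values, bins=100)
    plt.xlabel("ADU")
    plt.ylabel("Number of pixels")
    plt.title("Dark-frame red-pixel counts")
    plt.show()
    # The mean is the offset; the standard deviation is the noise.
    print("mean =", values.mean(), "  std dev =", values.std())
[/CODE]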
 
  • #76
There was no scaling, and there was no saturation. If you look at each of the samples, there are two tables of data at the top: the one on the left is for the full image, and the one on the right is for the selected area of interest. The only scaling occurred for display purposes on this forum, not in the sample data. You want to be looking at the right table for each of the samples, specifically at the R average values. If they were saturating, the max pixel value would read 16,383, but in the right tables for the selected area of interest, none of the pixels reach that value.
 
  • #77
It's also interesting to note that if you look at the tables on the right-hand side for the single versus double slit, in the selected area of interest, the max pixel value is almost exactly double, not quadruple (1.99x higher: 1293 vs 648). @collinsmark
 
  • #78
Devin-M said:
It's also interesting to note that if you look at the tables on the right-hand side for the single versus double slit, in the selected area of interest, the max pixel value is almost exactly double, not quadruple (1.99x higher: 1293 vs 648). @collinsmark

In terms of the peaks of any of the side-lobes (anything other than the central peak), it depends on how the troughs (from destructive interference) of the single-slit pattern line up with the additional troughs from having two slits.

And whether they line up at all, or how they relate to each other even if they don't line up, is dependent on a) the width of the individual slits and b) the separation of the slits. In other words, if you were to switch to a new Bahtinov mask by a different manufacturer, they might line up differently.

The troughs (a.k.a. "nulls") of the single-slit pattern are a function of the slit width. If you make the slit width narrower, the whole pattern becomes wider (and vice versa).

The additional, and more frequent, troughs/nulls of the double slit pattern depend on the separation of the slits. Move the slits closer together (while keeping the slit widths unchanged) and the additional nulls become less frequent; move the slits farther apart and they become more frequent.
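For reference, the standard far-field (Fraunhofer) result for two slits of width [itex]a[/itex] whose centers are separated by [itex]d[/itex] is

[tex] I(\theta) \propto \cos^2\!\left(\frac{\pi d \sin\theta}{\lambda}\right)\,\mathrm{sinc}^2\!\left(\frac{\pi a \sin\theta}{\lambda}\right), [/tex]

with [itex]\mathrm{sinc}(x)=\sin(x)/x[/itex]. The envelope nulls sit at [itex]a\sin\theta = m\lambda[/itex] (the single-slit nulls) and the interference nulls at [itex]d\sin\theta = \left(m+\tfrac{1}{2}\right)\lambda[/itex]. Whether any of them coincide depends entirely on the ratio [itex]d/a[/itex].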

With that, there isn't any general rule about the off-center peaks of the pattern, or even the average pixel value of an off-center lobe. It all depends on how these troughs/nulls line up. [Edit: in other words, I think the specifics of the 1293 vs. 648 observation are largely a matter of coincidence, based on the fine particulars of your mask's slit width vs. slit separation.]

---

A bit of a side note: If you're familiar with Fourier analysis, it might help here. As it happens, diffraction patterns are intimately related to Fourier analysis. Making a few assumptions about the setup, such as the slit widths and separations being << the distance to the detector, and applying various constants of proportionality, you'll find that in the limiting case the diffraction pattern is the magnitude squared of the spatial Fourier transform of the slits. [Edit: for a given, fixed, wavelength of light.]

You can at least use this fact qualitatively if you just want a quick idea of what shape you can expect as a diffraction artifact for a given obstruction.

For example, it's no coincidence that the diffraction pattern of a single slit (in monochromatic light) is more or less that of a [itex] \left( \mathrm{sinc}(x) \right)^2 [/itex]. In one dimension, a single slit is a rectangular function. What's the Fourier transform of a rectangular function? Well, it's in the form of a sinc function. Square that [magnitude squared], and you get the same shape as the diffraction pattern. You can do the same sort of thing for the double slit.

I mention all this here because it might be easier to mess around with the math (at least qualitatively, maybe with the aid of a computer) than painstakingly cut up different masks every time you want to make a minor change to the slit pattern.

Don't get me wrong, I'm not saying that Fourier transforms are the end-all-be-all of diffraction theory. Diffraction theory can be a bit more nuanced. All I'm saying is that if you've built up an intuition with Fourier analysis, that intuition can be useful in predicting the effects of how changing the slits will change the diffraction pattern, at least in a qualitative sort of way.
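If you want to actually try that on a computer, here's a minimal sketch (Python/NumPy; the slit dimensions are made-up placeholders):

[CODE=python]
import numpy as np

# 1-D aperture model: 1 where the mask is open, 0 where it is opaque.
n = 4096
x = np.arange(n)
slit_width = 40      # in samples (placeholder value)
separation = 200     # center-to-center, in samples (placeholder value)

def slit(center):
    return ((x > center - slit_width / 2) & (x < center + slit_width / 2)).astype(float)

single = slit(n / 2)
double = slit(n / 2 - separation / 2) + slit(n / 2 + separation / 2)

def pattern(aperture):
    # In the Fraunhofer limit the intensity is proportional to the magnitude
    # squared of the spatial Fourier transform of the aperture.
    return np.abs(np.fft.fftshift(np.fft.fft(aperture))) ** 2

I_single = pattern(single)
I_double = pattern(double)

# Central peak ratio ~4x, total energy ratio ~2x (Parseval), as discussed.
print(I_double.max() / I_single.max(), I_double.sum() / I_single.sum())
[/CODE]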
 
  • #80
Devin-M said:
That would make sense if I wanted an image as the final output, but all I want is the average ADU/pixel in the area of interest (with average noise ADU/pixel ignored). So I took the average noise per pixel and subtracted that from the average ADU/pixel in the area of interest.
You do want an image as the final output, because an image is nothing more than a bunch of data points. But that's mostly beside the point. What I'm getting at is that without carefully calibrating your image (your data) you can't draw meaningful conclusions. You don't know the average ADU for the diffraction pattern because the pattern's ADUs are mixed in with dark current ADUs and possibly a bias offset. Subtracting the average background noise does nothing because noise is not something that can be subtracted. All you're doing is finding the magnitude of the average variation in the pixels in some area and then subtracting that value from other pixels. Which is pointless as far as I know, as you're still left with all of the noise and you've introduced your own offset that has no purpose.
 
  • #81
Devin-M said:
@collinsmark would you be willing to upload your finalized TIFs / FITs of your stacked single / double slit files (and/or a RAW file or 2) here so I can closely inspect them… ?

https://u.pcloud.link/publink/show?code=kZtCzeVZwIlHPwhNQlfVXx7j9TTxPLteswcy
@Devin-M, I've uploaded the cropped versions of the data in TIFF files, in two [now three] different pixel formats.

The more detailed format is 32-bit IEEE 754 floating point. This should retain all the detail involved, but not all image manipulation programs work with this level of detail.

The second is the data converted to 8-bit (per color) unsigned integer. This is the more standard/more accessible format, but with less detail per pixel. As a result some of the details are lost. For example, a lot of darker pixel values are simply 0. It's probably not a big deal, since most of the data at that level of detail was just noise anyway.

[Edit: I've also uploaded the TIFF files saved in 16-bit, unsigned integer format too, if you'd rather work with that.]
 
  • #82
Drakkith said:
You do want an image as the final output, because an image is nothing more than a bunch of data points. But that's mostly beside the point. What I'm getting at is that without carefully calibrating your image (your data) you can't draw meaningful conclusions. You don't know the average ADU for the diffraction pattern because the pattern's ADUs are mixed in with dark current ADUs and possibly a bias offset. Subtracting the average background noise does nothing because noise is not something that can be subtracted. All you're doing is finding the magnitude of the average variation in the pixels in some area and then subtracting that value from other pixels. Which is pointless as far as I know, as you're still left with all of the noise and you've introduced your own offset that has no purpose.
The final output I desire is a ratio of 2 averages. I just tested my method and it works perfectly.

Test setup:
-Piece of paper illuminated by iPhone flashlight
-Nikon D800 with 600mm f/9 lens fitted

Method:
-I put the white paper on the ground (room lights off) and pointed the iPhone flashlight at the paper at close range.
-Next I set up the camera across the room on a tripod and focused.
-I took 2 exposures- one at 1/800th sec and one at 1/1600th sec (both at f/9 aperture, ISO 6400 / gain)
-Then I put the lens cap & took 2 more exposures (dark frames), again at 1/800th sec & 1/1600th sec
-Next I imported the unmodified RAW files into the computer application RAWDigger
-Next I selected an area of interest (100x100 pixels) at the center of the image which shows part of the sheet of white paper
-Next I made note of the R AVG values within the area of interest in both the light frames and the dark frames
-After subtracting the noise R AVG/px measured in the area of interest in the dark frames from the R AVG/px measured in the area of interest in the light frames (using the same method I used previously), the 1/800th second exposure R average was exactly 2.00x higher than the 1/1600th second exposure.

1/800th Dark Noise Avg 15.6
Light R Avg 199.6
199.6 - 15.6 = 184.0

1/1600th Dark Noise Avg 18.1
Light R Avg 110.1
110.1 - 18.1 = 92.0

184/92 = 2.00x higher
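The same numbers can be pulled straight out of the NEF files with a short script instead of reading them off a screen, for anyone who wants to check. A minimal sketch using the rawpy library (the file names, ROI coordinates, and color-index assumption are placeholders to adapt):

[CODE=python]
import numpy as np
import rawpy

def red_average(nef_path, row0, col0, size=100):
    # Average raw ADU of the red Bayer pixels inside a size x size area.
    with rawpy.imread(nef_path) as raw:
        data = raw.raw_image_visible.astype(np.float64)
        colors = raw.raw_colors_visible  # index into raw.color_desc (typically b'RGBG', so 0 = red)
        roi = data[row0:row0 + size, col0:col0 + size]
        roi_colors = colors[row0:row0 + size, col0:col0 + size]
        return roi[roi_colors == 0].mean()  # ~size*size/4 red pixels

# Placeholder file names and ROI position:
light = red_average("light.NEF", 2400, 3600)
dark = red_average("dark.NEF", 2400, 3600)
print("dark-subtracted R average:", light - dark)
[/CODE]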

 
  • #83
Devin-M said:
I just tested my method and it works perfectly.
Your method is flawed, and there is a very good reason it appears to work in this situation but will not work on a real astro photo. I leave that to you to discover, as you don't appear to want my assistance or advice.
 
  • #84
Drakkith said:
Your method is flawed, and there is a very good reason it appears to work in this situation but will not work on a real astro photo. I leave that to you to discover, as you don't appear to want my assistance or advice.
@Drakkith I do want your assistance & advice. I was just testing the error bars on the camera to the best of my abilities.
 
  • #85
Devin-M said:
@Drakkith I do want your assistance & advice. I was just testing the error bars on the camera to the best of my abilities.
Forgive me if I'm a bit snippy. I've had a rough couple of days.

First, I don't understand why you're doing anything with noise. Why are you subtracting the average noise value? What does that accomplish?
 
  • #86
Good clean derivation:
http://lampx.tugraz.at/~hadley/physikm/apps/2single_slit.en.php
Drakkith said:
First, I don't understand why you're doing anything with noise. Why are you subtracting the average noise value? What does that accomplish?
The noise gets rectified and so produces part of the DC offset. A false DC offset in a ratio is a bad thing.
So I think the averaging should be OK for the offset, but I do not understand why the OP does not look at the pixel-by-pixel ratios, which may indicate why the OP's results seem like junk. Use all the data you have. Can you scale and subtract images in your software?
Also why do the full frame shots above indicate saturated pixels? Are they just "bad" pixels?
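Something like this would do it once the frames are loaded as arrays (a minimal sketch; the 5-sigma cut is an arbitrary placeholder threshold):

[CODE=python]
import numpy as np

def pixel_ratio(single, double, dark):
    # single, double, dark: 2-D arrays of raw ADU values for the single-slit
    # frame, the double-slit frame, and a matching dark frame.
    s = single.astype(np.float64) - dark
    d = double.astype(np.float64) - dark
    # Only divide where the single-slit frame carries real signal.
    mask = s > 5.0 * np.std(dark)
    ratio = np.full(s.shape, np.nan)
    ratio[mask] = d[mask] / s[mask]
    return ratio  # then look at np.nanmedian(ratio), a histogram, etc.
[/CODE]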
 
  • #87
I managed to do a comparison between @collinsmark 's measurements and my own.

I looked at just the 1st-order maximum (not to be confused with the 0th-order maximum) with the single slit, and the same region with the double slit, in both @collinsmark 's data and my own.

I'm satisfied the results are consistent: roughly twice as many photons with the double slit in the same area where the 1st-order maximum forms with a single slit.

Devin-M's measurements:
https://u.pcloud.link/publink/show?code=kZcjE2VZR1eBfOCgGAYfuGOv9EMgPH3KIR07
(Images 9205 & 9213)

Single Noise R Avg 39.2
Single Light R Avg 128.8
128.8 - 39.2 = 89.6

Double Noise R Avg 37.1
Double Light R Avg 218.5
218.5-37.1 = 181.4

181.4 / 89.6 = 2.02x higher


@collinsmark 's measurements:
https://u.pcloud.link/publink/show?code=kZtCzeVZwIlHPwhNQlfVXx7j9TTxPLteswcy
Single Slit Avg ADU/px = 115.2
Double Slit Avg ADU/px = 210.9
210.9/115.2 = 1.83x higher

 
  • #88
hutchphd said:
The noise gets rectified and so produces part of the DC offset.
What is the definition of 'noise' here? My understanding is that noise is the random variation in the pixels that scales as the square root of the signal.
 
  • #89
I suspect the slight discrepancy between @collinsmark's measurement and mine is that his noise subtraction would remove dark current and read noise, but the way I subtracted noise would be expected to remove read noise, dark current, and any background sky glow from light pollution.

For example if some “sky glow” ADUs were removed from @collinsmark ‘s measurements of both the single & double slits, it could result in his measurement for the double slit being even closer to 2.00x higher (his reading was 1.83x, mine 2.02x).

For example if the single slit ADU/px was (for example) 60 and the double slit was 110, and 10 ADU/px was background sky glow, and we subtract 10 ADU/px from each, we go from 110/60=1.83x to exactly 100/50=2.00x.


Edit: never mind, it probably doesn't account for the discrepancy, because noise from the background sky glow would be expected to double with the double slit, so the math wouldn't work out.
 
  • #90
@collinsmark The only other uncertainty I have… when you cropped those TIF files, perhaps in Photoshop, did they possibly pick up an sRGB or Adobe RGB gamma curve?

I believe simply saving a TIF in Photoshop with either an sRGB or Adobe RGB color profile will non-linearize the data by applying a gamma curve. This doesn't apply to RAW files. (see below)

https://community.adobe.com/t5/camera-raw-ideas/p-output-linear-tiffs/idc-p/12464761

Source: https://www.mathworks.com/help/images/ref/lin2rgb.html
 
  • #91
On second thought, I myself cropped @collinsmark's TIFs in Photoshop during my analysis, so maybe I inadvertently corrupted his data.
 
  • #92
@collinsmark I did a little more digging and found out that the 16-bit and 8-bit files you uploaded appear to have a "Dot Gain 20%" embedded color profile, whereas the 32-bit files have a "Linear Grayscale Profile." I used the 16-bit files for my analysis of your data, so my analysis was probably a bit off.

 
  • #93
Drakkith said:
What is the definition of 'noise' here? My understanding is that noise is the random variation in the pixels that scales as the square root of the signal.
I tend to use "noise" to mean anything that is not signal. Here that would mean stray and scattered light, plus noise from the photodiode junction (dark current) and the associated electronics. It is the nature of these systems that the "noise" will average to a nonzero offset. Not all of it is simply characterized, nor is it all signal dependent. But it does need to be subtracted for the accuracy of any ratio.
 
  • #94
Devin-M said:
Edit: never mind, it probably doesn't account for the discrepancy, because noise from the background sky glow would be expected to double with the double slit, so the math wouldn't work out.
What do you define as 'noise' here? What are you actually computing when you compute the 'noise'? I suspect you and I may mean something different when we use that word.
 
  • #95
Drakkith said:
What do you define as 'noise' here? What are you actually computing when you compute the 'noise'?
When the lens cap is on (same exposure settings as the light frame), there are still ADUs per pixel even though there is no light. With a large enough sample, for example 100×100 pixels, red is 1/4 of the Bayer pattern, so you have 2500 samples. So you add up the ADUs of all the pixels combined, then divide by the number of pixels. That's the average noise per pixel. If you look pixel by pixel, the values will be all over the place, but when you move the 2500-pixel selection around the image, the average noise is very consistent across the image.
 
  • #96
Devin-M said:
@collinsmark The only other uncertainty I have… when you cropped those TIF files, perhaps in Photoshop, did they possibly pick up an sRGB or Adobe RGB gamma curve?

I believe simply saving a TIF in Photoshop with either an sRGB or Adobe RGB color profile will non-linearize the data by applying a gamma curve. This doesn't apply to RAW files. (see below)

Pixinsight gives the option to embed an ICC profile when saving the images as TIFF. I had thought that I had that checkbox unchecked (i.e., I thought that I did not include an ICC profile in the saved files). Even if an ICC profile was included in the file, your program should be able to ignore it; it doesn't affect the actual pixel data directly.

But yes, you're correct that you should not let your image manipulation program apply a gamma curve. That will mess up this experiment. You need to work directly with the unmodified data.

Btw, @Devin-M, is your program capable of working with .XISF files? I usually work with .XISF files from start to finish (well, until the very end, anyway). If your program can work with those I can upload the .XISF files (32 bit, IEEE 754 floating point format) and that way I/we don't have to worry about file format conversions.

[Edit: Or I can upload FITS format files too. The link won't let me upload anything though, presently.]
 
  • #97
Devin-M said:
When the lens cap is on (same exposure settings as the light frame), there are still ADUs per pixel even though there is no light. With a large enough sample, for example 100×100 pixels, red is 1/4 of the Bayer pattern, so you have 2500 samples. So you add up the ADUs of all the pixels combined, then divide by the number of pixels. That's the average noise per pixel. If you look pixel by pixel, the values will be all over the place, but when you move the 2500-pixel selection around the image, the average noise is very consistent across the image.
Okay, you're measuring the combined dark current + bias + background signal, using a large sample of pixels to average out the noise (the randomness in each pixel) to get a consistent value. No wonder I've been so confused with what you've been doing.

So yes, your previous method works despite what I said previously as long as you're sampling a background area as free of stars and other background objects as possible.
 
  • #98
collinsmark said:
Btw, @Devin-M, is your program capable of working with .XISF files? I usually work with .XISF files from start to finish (well, until the very end, anyway). If your program can work with those I can upload the .XISF files (32 bit, IEEE 754 floating point format) and that way I/we don't have to worry about file format conversions.

[Edit: Or I can upload FITS format files too. The link won't let me upload anything though, presently.]
I turned uploading back on. Could you spare a raw file of the single slit and one of the double slit? My RawDigger app seems to only open RAW files. It won’t even open a TIF or JPG so I resorted to manually entering all the pixel values into a spreadsheet to get the averages.
 
  • #99
Devin-M said:
I turned uploading back on. Could you spare a raw file of the single slit and one of the double slit? My RawDigger app seems to only open RAW files. It won’t even open a TIF or JPG so I resorted to manually entering all the pixel values into a spreadsheet to get the averages.

I've uploaded the data, this time in 16-bit, unsigned integer, in FITS file format.

I don't know what you mean by "RAW" file format. "RAW" is usually used as an umbrella term for a format with pixel data that hasn't been manipulated. (For example, Nikon's "RAW" file format is actually "NEF.")

The way I gathered the data, N.I.N.A stores the data straight from the camera to XISF file format. XISF file format is similar to FITS, but more extensible. XISF or FITS is about as raw as my system can do.

The files I uploaded though are dark-frame calibrated and stacked.
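Since RawDigger apparently won't open them, FITS files are straightforward to read in Python (a minimal sketch using astropy; the file name and region of interest are placeholders):

[CODE=python]
import numpy as np
from astropy.io import fits

data = fits.getdata("double_slit_stacked.fits").astype(np.float64)
roi = data[1000:1100, 1500:1600]   # 100x100 pixel area of interest
print("ROI mean ADU/px:", roi.mean(), " max:", roi.max())
[/CODE]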
 
  • #100
This thread just made me realize I've been unnecessarily degrading my astro-photos for years...

The mistake? I assumed Adobe Lightroom would convert RAW files into 16-bit TIFFs linearly and without adding color noise before I stack them... turns out that's not the case. The reason this is important is that the next step is stacking 20-60 of those TIFFs to reduce the noise, so you definitely don't want to be adding noise before you reduce the noise.

The solution? It turns out the app I've been using in this thread to inspect the RAW files will also convert RAW files into 16-bit TIFFs (as far as I can tell) linearly and without modifying the image values or adding noise.
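For a scriptable route, roughly the same linear conversion can be done with the rawpy and tifffile libraries (a minimal sketch; whether it matches RawDigger's output exactly is an assumption on my part, and the file names are placeholders):

[CODE=python]
import rawpy
import tifffile

# Demosaic a NEF with no gamma curve and no auto-brightening, then write a
# linear 16-bit TIFF suitable for stacking.
with rawpy.imread("frame_0001.NEF") as raw:
    rgb16 = raw.postprocess(
        gamma=(1, 1),            # linear output, no tone curve
        no_auto_bright=True,     # do not rescale/stretch the data
        use_camera_wb=True,
        output_bps=16,           # 16 bits per channel
    )
tifffile.imwrite("frame_0001_linear.tiff", rgb16)
[/CODE]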

New process (converting RAW NEFs to 16-bit TIFFs in RawDigger before stacking):
IMG-2.jpg


Old Process (same exact RAW files, but converting RAW NEFs to 16-bit TIFFs in Adobe Lightroom before stacking):
Casper_M78_620x620_double_stretched.jpg


Wow it's a world of difference.

Details:

Meade 2175mm f/14.5 Maksutov-Cassegrain with Nikon D800 on Star Watcher 2i equatorial mount
17x stacked 90-second exposures @ ISO 6400 + 19 dark calibration frames + 5 flat calibration frames
RAW NEF files converted to 16-bit TIFFs in RawDigger
Stacking in Starry Sky Stacker
Final histogram stretch in Adobe Lightroom AFTER(!!) stacking

Old 100% Crop:
100pc_old.jpg

New 100% Crop:
100pc_new.jpg
 
  • #101
collinsmark said:
Here are the resulting plots:

Figure 8: Both slits open plot.

Figure 9: Left slit covered plot.

Figure 10: Right slit covered plot.

The shapes of the diffraction/interference patterns also agree with theory.

Notice the central peak of the both-slits-open plot is, interestingly, 4 times as high as either single-slit plot.

So adding the second slit results in double the total photon detections, roughly 4x the intensity in certain places, and significantly reduced intensity in others. That's consistent with conservation of energy, but how do you rule out destruction of energy in some places and creation of energy in others, proportional to the square of the change in field strength (i.e. 2 overlapping constructive waves but 2^2 = 4 detected photons in those regions)?
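To spell out the bookkeeping I'm asking about (ignoring the single-slit envelope and overall constants): with both slits open the standard result is

[tex] I(\theta) \propto 4\cos^2\!\left(\frac{\pi d \sin\theta}{\lambda}\right) I_{\text{single}}(\theta), \qquad \left\langle 4\cos^2 \right\rangle = 2, [/tex]

so the bright fringes are 4x a single slit while the average over the pattern is only 2x, i.e. twice the photons in total.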

To further illustrate the question posed: at 2:38 in the video below, when he uncovers one half of the interferometer, doubling the light, the detection screen goes completely dark.
 
  • #102
Devin-M said:
That's consistent with conservation of energy, but how do you rule out destruction of energy in some places and creation of energy in others, proportional to the square of the change in field strength (i.e. 2 overlapping constructive waves but 2^2 = 4 detected photons in those regions)?
Conservation of energy is all that I require. The rest is your problem.
 
  • #103
At exactly 3:00 in the video he uncovers the mirror on the right-side arm of the interferometer, and the light on the detector screen that had been bouncing off the left-side mirror of the interferometer goes completely black. You can tell there's light headed toward that right-arm mirror because you can see the red dot on the index card before he uncovers the right mirror. What happens to the light that reflects off the right-hand mirror? It appears as if the portion of that light that reflects off the right-side mirror and then passes straight through the beam splitter is being completely destroyed by having a 1/2 wavelength path length difference with the beam reflecting off the left-arm mirror.
 
  • #104
Did you watch the entire video? (whose title is "where did the light go?")

?!
 
  • #105
Yes, at the very end he asks the viewer: if the path difference of the 2 mirrors is 1/2 wavelength, why does the light going to one screen cancel out while the light going to the other adds constructively? He doesn't answer this question and leaves it for the viewer. It seems like before he adds the second detection screen, half the light from the mirrors is going to the screen and the other half is going back to the laser source. When he adds the second detection screen, 1/2 is going to the 1st screen, 1/4 is going to the 2nd screen, and 1/4 is going back to the laser. As for why on one screen there's a half-wavelength path difference and on the other there isn't, I believe it's due to the thickness of the beam splitter. It seems there must be an extra half wavelength of path length (or odd multiple of 1/2 wavelength) for the light from the left mirror that goes back through the beam splitter. I'm not yet fully convinced that when the light cancels out it's really going back to the source, because once it's adding destructively, the path lengths back to the source are identical, so it should add destructively all the way back to the source as well.

For example, with a different-thickness beam splitter it may be possible to have the light on both detection screens add constructively at the same time, or both destructively at the same time.
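For reference, the textbook idealization I'm trying to square this with is that the two output ports of a lossless interferometer are complementary:

[tex] I_{\text{screen}} \propto \cos^2\!\left(\frac{\Delta\phi}{2}\right), \qquad I_{\text{back toward laser}} \propto \sin^2\!\left(\frac{\Delta\phi}{2}\right), [/tex]

so whatever is missing from one port is supposed to show up at the other, for any path difference [itex]\Delta\phi[/itex].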
 