Optimizing Exposure Times: Balancing Efficiency and Image Quality

  • #1
Andy Resnick
I'm hoping there's a reasonable answer to this. To summarize, data I acquired when imaging a particular target shows that I can retain 75% of my images for stacking at 10s exposure times, but only 50% of the images taken with 15s exposures. The difference is entirely due to tracking error and no, I am not going to get an autotracker.

Possibly the most important metric (for me) is 'efficiency': what fraction of the total time I spend on any particular night, from setup to takedown, consists of 'stackable integration time'. Factoring in everything, 10s exposures result in an efficiency of 50% (e.g. 4 hours outside = 2 hours integration time), while 15s exposures give me a final efficiency of 40%.

It's not clear what's best: more images decrease the noise, but longer exposure times increase the signal per frame so I can image fainter objects (let's assume I never saturate the detector). More images mean I can generate final images in fewer nights; longer exposures mean I have fewer images to process to get the same total integration time.

For what it's worth, in my example I would obtain 480 10s images or 303 15s images per night. My final stacked image would likely consist of a few thousand individual images, obtained over a few weeks.
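For concreteness, here is a minimal sketch of the per-night arithmetic; the keep rates are the ones above, while the 4-hour session and the 5 s of per-frame overhead (readout, settling, dithering) are hypothetical placeholders, and setup/takedown is ignored:

Python:
# Back-of-the-envelope: per-night stackable integration time and 'efficiency'
# for two sub-exposure lengths. Keep rates are from the post above; session
# length and per-frame overhead are made-up placeholders.
def night_summary(exposure_s, keep_fraction, session_hours=4.0, overhead_s=5.0):
    session_s = session_hours * 3600
    frames_taken = int(session_s // (exposure_s + overhead_s))
    frames_kept = int(frames_taken * keep_fraction)
    efficiency = frames_kept * exposure_s / session_s
    return frames_taken, frames_kept, efficiency

for exp, keep in [(10, 0.75), (15, 0.50)]:
    taken, kept, eff = night_summary(exp, keep)
    print(f"{exp:>2d}s subs: {taken} taken, {kept} kept, efficiency {eff:.0%}")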

I haven't seen a quantitative argument supporting one or the other.... thoughts?
 
  • #2
Andy Resnick said:
I haven't seen a quantitative argument supporting one or the other.... thoughts?
This depends on so many things like the camera specs, clouds, dark skies, tripod stability, tracking ability, etc., that I would experiment and trust the results from those experiences far more than any theoretical calculation.
 
  • #3
Hmmm....

It seems to me that you want, first and foremost, to maximize the total exposure. In principle the two options are as you've given them, but that assumes there isn't any other reason to toss a frame, like a transient of one sort or another. That would suggest more, shorter exposures would be preferable to fewer, longer ones.
 
  • #4
There's a quantitative answer here, but I'm not certain off the top of my head what it is. I believe the relationship for noise reduction is quadratic, but I'm not certain of that (I probably should be...). In other words, twice as many subs is equivalent to 1.4x exposure length. So losing 50% more subs would outweigh the longer exposure by about 20% if I got the relationship right.
 
  • #5
I think the sky quality is one of the criteria, the better it is the longer exposures are possible.

As to noise reduction, the noise goes down with the square root of the number of exposures. But if one compares short and long exposures of the same object taken at the same time, the difference is remarkable and seems to follow the same square-root dependence.
 
  • #6
heh.. it is a tricky question! Thanks for the responses so far.

Sky quality/tracking accuracy/etc are constant across the comparison, so they shouldn't figure into the consideration.

Right- noise reduction goes as √(number of images), so that seems to imply more images = better. But longer exposures imply increased sensitivity since the lower limit of detection is around 1 signal photon/exposure.

A simple example where I get all wrapped around myself: longer exposures mean a brighter sky background. On one hand, the background is subtracted so it doesn't matter. On the other, sky background noise (per frame) scales as the square root of the average background intensity (Poisson statistics), so this source of noise *increases* with longer exposure times and thus is not fully compensated for by increasing the number of frames.

Part of the reason I am thinking about this is that when I look back to when I started astrophotography, my efficiency was around 5%. No tracking mount meant that the exposure time was only 0.5s, but I kept nearly every single frame; I could just barely make out magnitude 12 or 13 stars. Currently, I am able to reliably image stars of magnitude 18 or 19 using the same camera and lens.
 
  • #7
If you have 4x the exposure time, you get 2x the noise and 4x the signal.
If you have 4x the number of exposures, you get 2x the noise and 4x the signal.
Right?
 
  • #8
Vanadium 50 said:
If you have 4x the exposure time, you get 2x the noise and 4x the signal.
If you have 4x the number of exposures, you get 2x the noise and 4x the signal.
Right?
I don't think that's entirely correct. Since the setup is the same for all images, a single pixel in a single image can be parameterized by a given background level (from the diffuse sky illumination) and signal level (from any luminous objects: stars, nebulae, etc). Multiple frames introduce the additional idea of background and signal variance. If I can make the variances infinitesimal, then I can separate signal and background in the entire image. If I make the exposures shorter and shorter, the signal level approaches the background level so I can't detect objects.

Averaging many frames decreases the noise (standard deviation) by a factor of √(# frames) but maintains the same average background/signal levels, so increasing the number of averaged frames does not increase the signal, only the SNR. Increasing the exposure time increases both the background and signal levels as well as the variances. Perhaps these can be made to nearly cancel each other out, but it would seem that I need more frames at long exposures to get the same level of variance reduction.

Even worse, it takes more nights obtaining long exposures to generate equivalent (do they have to be equal?) numbers of short and long exposure frames.

I could just 'do the experiment', but that would potentially cost me several years' worth of observations.
 
  • #9
Andy Resnick said:
heh.. it is a tricky question! Thanks for the responses so far.

Sky quality/tracking accuracy/etc are constant across the comparison, so they shouldn't figure into the consideration.
That would make one option consistently better, but which one is better could still depend on the value of those constants.
Andy Resnick said:
Right- noise reduction goes as √(number of images), so that seems to imply more images = better. But longer exposures imply increased sensitivity since the lower limit of detection is around 1 signal photon/exposure.
It depends on whether the shorter exposures can capture your image. You have not said if you are using a digital camera with noise reduction at higher ISO settings and how well it works.
Andy Resnick said:
A simple example where I get all wrapped around myself: longer exposures mean a brighter sky background. On one hand, the background is subtracted so it doesn't matter. On the other, sky background noise (per frame) scales as the square root of the average background intensity (Poisson statistics), so this source of noise *increases* with longer exposure times and thus is not fully compensated for by increasing the number of frames.

Part of the reason I am thinking about this is that when I look back to when I started astrophotography, my efficiency was around 5%. No tracking mount meant that the exposure time was only 0.5s, but I kept nearly every single frame; I could just barely make out magnitude 12 or 13 stars. Currently, I am able to reliably image stars of magnitude 18 or 19 using the same camera and lens.
Other than the problem of the shorter exposure not capturing dim stars, using more, shorter exposures has advantages in many ways. So the disadvantage of the shorter exposure boils down to how well your camera (digital or film?) works at higher ISO numbers.

I recommend doing some experiments.
 
  • #10
Andy Resnick said:
I could just 'do the experiment', but that would potentially cost me several years' worth of observations.
I don't understand this. How could a night of experimenting with A versus B cost you that much?
 
  • #11
russ_watters said:
There's a quantitative answer here, but I'm not certain off the top of my head what it is. I believe the relationship for noise reduction is quadratic, but I'm not certain of that (I probably should be...). In other words, twice as many subs is equivalent to 1.4x exposure length. So losing 50% more subs would outweigh the longer exposure by about 20% if I got the relationship right.
That doesn't seem right to me if you're talking about subs that are 2x the exposure time of the shorter subs. If you add two subs of 10 seconds you should get the same signal as a single 20 second sub. The difference should just be in the read noise of the detector. This noise should contribute more to the 10 second subs compared to the 20 second sub.
timmdeeg said:
I think the sky quality is one of the criteria, the better it is the longer exposures are possible.
Whatever signal you're getting should add linearly with exposure time or exposure number, so I wouldn't think there would be any difference for sky quality.

Vanadium 50 said:
If you have 4x the exposure time, you get 2x the noise and 4x the signal.
If you have 4x the number of exposures, you get 2x the noise and 4x the signal.
Right?
That is my understanding, yes, except that read noise contributes more in shorter exposures vs longer exposures since it doesn't scale with exposure time.

Andy Resnick said:
Averaging many frames decreases the noise (standard deviation) by a factor of √(# frames) but maintains the same average background/signal levels, so increasing the number of averaged frames does not increase the signal, only the SNR.
I think you mean decreases the SNR?

Andy Resnick said:
Increasing the exposure time increases both the background and signal levels as well as the variances. Perhaps these can be made to nearly cancel each other out, but it would seem that I need more frames at long exposures to get the same level of variance reduction.
This is not true. Doubling exposure time doubles the signal but only increases the noise by the square root of 2, raising your SNR. This is exactly the same as taking double the subs at half the exposure time (except that read noise contributes more to the shorter subs).

Issues of sensitivity, such as detecting very faint stars, shouldn't really apply since digital sensors typically have a linear response even at very low light levels and are capable of detecting single photons. If you detect 1 photon every 10 subs, then that will show up as a slightly brighter pixel against the background once you average many subs together. The only issue should be whether that very small signal can be teased out of all the sources of noise.

Well, that's all according to my understanding of the topic.
 
  • #12
Drakkith said:
That doesn't seem right to me if you're talking about subs that are 2x the exposure time of the shorter subs. If you add two subs of 10 seconds you should get the same signal as a single 20 second sub. The difference should just be in the read noise of the detector. This noise should contribute more to the 10 second subs compared to the 20 second sub.
Same signal, but more, shorter subs mean more noise, so the SNR is worse for shorter subs. By a factor of the square root of the number of subs: 1.4x for twice as many subs vs 2x for twice as long. This goes through the math:

https://dslr-astrophotography.com/long-exposures-multiple-shorter-exposures/
 
  • #13
russ_watters said:
Same signal, but more, shorter subs mean more noise, so the SNR is worse for shorter subs.
Certainly. But the question is why and by how much.
russ_watters said:
By a factor of the square root of the number of subs: 1.4x for twice as many subs vs 2x for twice as long. This goes through the math:
I don't see this anywhere in the math. In fact, the article states that, without read noise included:

Now we can see it doesn’t matter for the SNR how we fill in values for N and t as long as N*t = the same value. So only total exposure time matters (N*t) and not how we divide it in subexposures.

And:

So yes, it’s true that if read noise wouldn’t exist it doesn’t matter what exposure time you use and how many exposures you take, all that matters is the total integration time. And even with read noise included in the formula, you can see that once the other values are much much bigger than the read noise, the same will apply; the read noise becomes (almost) irrelevant and we are left in the situation where it doesn’t matter what exposure time you use.

So the only thing that changes the real situation from this is that real images have readout noise. And, as the article states, readout noise only becomes important once all other sources of noise become small, such as when taking very short exposures or imaging very faint targets in very dark skies with a camera that has very low dark current.

Edit: The article concludes with this:

However, I feel the most important conclusion probably is the fact that the exposure length is only relevant when the read noise is relevant. And the read noise is only relevant when you are imaging under a dark sky.
With most moderately to strong light polluted skies, the subexposure length won’t matter much once you are using 2 to 3 minute exposures.
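For anyone who wants to plug numbers in, here is a minimal sketch of the usual per-pixel stacked-SNR expression: SNR = N·S·t / sqrt(N·[(S + B + D)·t + R^2]) for N subs of length t, with object rate S, sky rate B, and dark rate D in e-/s and read noise R in e-. The rates below are made-up placeholders; only the read-noise toggle matters for the comparison:

Python:
import math

def stacked_snr(s_rate, b_rate, d_rate, read_noise, n_subs, t_sub):
    # Per-pixel SNR of n_subs stacked subs of length t_sub (rates in e-/s).
    signal = n_subs * s_rate * t_sub
    variance = n_subs * ((s_rate + b_rate + d_rate) * t_sub + read_noise**2)
    return signal / math.sqrt(variance)

# Same total integration time (7200 s) split two ways; the rates are invented.
for r in (0.0, 5.0):
    snr_10 = stacked_snr(0.5, 2.0, 0.05, r, n_subs=720, t_sub=10)
    snr_15 = stacked_snr(0.5, 2.0, 0.05, r, n_subs=480, t_sub=15)
    print(f"read noise {r} e-: 720x10s SNR = {snr_10:.1f}, 480x15s SNR = {snr_15:.1f}")

With R = 0 the two splits come out identical; with a nonzero read noise the longer subs edge ahead, which is exactly the article's point.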
 
  • #14
@Andy Resnick Suffice it to say, your best bet is to try both methods and see which one works better. My personal opinion is any possible benefit to switching to slightly longer subs is negligible and you'd be better off getting a new mount or some other means of increasing your exposure time to a minute or more. With 10-15 second exposures your readout noise is an appreciable fraction of your shot noise for targets of moderate brightness (based on some calculations from an image of the Omega Nebula I took).

Details: 300 second sub. I chose a random pixel from a semi-bright region of the nebula and obtained a pixel value of about 2000. I then subtracted the sky background and bias of about 700 to obtain a signal of 1300. I then divided that by 20 to obtain the same signal as a 15 second sub. That gave me 65. The noise is then the square root of this, or 8.

Of course those are pixel values, or ADUs. We want to know electrons. My camera has a gain of 0.19e/ADU, so converting the noise to electrons yields a noise of about 1.52e. Compare this with the listed readout noise of 5e and you can see that even a modern, specialized CCD sensor is still dominated by readout noise at short exposure times.

Simply increasing exposure time to 1 minute would result in a shot noise of 16 ADUs, or 3.04e, and 2 minutes to 4.33e, a drastic improvement since we want readout noise to contribute as little as possible relative to shot noise.
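For reference, here is a minimal sketch of the same comparison with the Poisson noise computed on the electron count (the ADU signal is converted with the gain before taking the square root); the 1300 ADU / 300 s signal, 0.19e/ADU gain, and 5e read noise are the values quoted above, and everything else is illustrative:

Python:
import math

GAIN_E_PER_ADU = 0.19    # gain quoted above
READ_NOISE_E = 5.0       # read noise quoted above
SIGNAL_ADU_300S = 1300   # background/bias-subtracted pixel value from the 300 s sub

for t in (15, 60, 120, 300):
    signal_e = SIGNAL_ADU_300S * (t / 300) * GAIN_E_PER_ADU  # photo-electrons
    shot_noise_e = math.sqrt(signal_e)                       # Poisson noise, in electrons
    print(f"{t:>3d}s sub: signal {signal_e:6.1f} e-, shot noise {shot_noise_e:4.1f} e-, read noise {READ_NOISE_E} e-")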
 
  • #15
FactChecker said:
I don't understand this. How could a night of experimenting with A versus B cost you that much?
Because I live in a location with generally poor or worse viewing conditions. The images I post here represent about 15 hours of integration time. At best, I can achieve total integration times of about an hour per night, with on average two acceptable viewing nights per week. For me, objects are only in a clear patch of sky for about 1 month of the year.

Do the math.
 
  • #16
FactChecker said:
You have not said if you are using a digital camera with noise reduction at higher ISO settings and how well it works.

I use a DSLR (Nikon D810), and photograph at the lowest ISO (ISO 64) because I want maximum bit depth per image.
 
  • #17
Drakkith said:
This is not true. Doubling exposure time doubles the signal but only increases the noise by the square root of 2, raising your SNR. This is exactly the same as taking double the subs at half the exposure time (except that read noise contributes more to the shorter subs).
I disagree- stacking algorithms *average* all of the subs, not add them. I mean, they do probably add them first (and then re-normalize).
 
  • #18
Drakkith said:
@Andy Resnick Suffice it to say, your best bet is to try both methods and see which one works better. My personal opinion is any possible benefit to switching to slightly longer subs is negligible and you'd be better off getting a new mount or some other means of increasing your exposure time to a minute or more.
I think that's a reasonable conclusion, at least since there's not an obvious difference. The good news is that I've been using the same equipment for at least 10 years and still haven't hit the performance limit; there's still plenty of room for technique improvements, and I can tell the difference between images I took last week and 3 years ago. No need to spend large stacks on a new mount + computer control... yet :)

Edit: You know, when I think about it, it's far more cost effective to just move a few thousand miles :)
 
  • #19
Andy Resnick said:
I disagree- stacking algorithms *average* all of the subs, not add them. I mean, they do probably add them first (and then re-normalize).
Adding and averaging are functionally identical here. There will be no difference in the image with either method. There will be a difference if you use another method, such as median combine or sigma clip.
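A quick way to convince yourself of that is to stack simulated Poisson subs both ways; the frame size, sky level, and 'star' below are made-up numbers:

Python:
import numpy as np

rng = np.random.default_rng(0)

# 100 simulated subs of a tiny 8x8 frame: Poisson sky of 20 e- per sub plus
# a 5 e-/sub 'star' in the central pixel (all values invented for illustration).
truth = np.full((8, 8), 20.0)
truth[4, 4] += 5.0
subs = rng.poisson(truth, size=(100, 8, 8)).astype(float)

summed = subs.sum(axis=0)     # 'add' stacking
averaged = subs.mean(axis=0)  # 'average' stacking

# The two stacks differ only by the constant factor N, so after renormalizing
# (or stretching for display) they are the same image, with the same SNR.
print(np.allclose(averaged, summed / subs.shape[0]))  # True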
 
