Optimizing Exposure Times: Balancing Efficiency and Image Quality

  • #1
Andy Resnick
I'm hoping there's a reasonable answer to this. To summarize, data I acquired when imaging a particular target shows that I can retain 75% of my images for stacking at 10s exposure times, but only 50% of the images taken with 15s exposures. The difference is entirely due to tracking error and no, I am not going to get an autotracker.

Possibly the most important metric (for me) is 'efficiency': what fraction of the total time I spend on any particular night, from setup to takedown, consists of 'stackable integration time'. Factoring in everything, 10s exposures result in an efficiency of 50% (e.g. 4 hours outside = 2 hours integration time), while 15s exposures give me a final efficiency of 40%.
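(Written out: ##\eta = N_{kept} \, t_{exp} / T_{session}## - the kept integration time divided by the total session time, setup to takedown.)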

It's not clear what's best: more images decrease the noise, but longer exposure times increase the signal, so I can image fainter objects (let's assume I never saturate the detector). More images means I can generate final images in fewer nights; longer exposures means fewer images to process for the same total integration time.

For what it's worth, in my example I would obtain 480 10s images or 303 15s images per night. My final stacked image would likely consist of a few thousand individual images, obtained over a few weeks.

I haven't seen a quantitative argument supporting one or the other.... thoughts?
 
  • #2
Andy Resnick said:
I haven't seen a quantitative argument supporting one or the other.... thoughts?
This depends on so many things - camera specs, clouds, dark skies, tripod stability, tracking ability, etc. - that I would experiment and trust the results of those experiments far more than any theoretical calculation.
 
  • #3
Hmmm....

It seems to me that, first and foremost, you want to maximize the total exposure. In principle the two options are as given, but that assumes there isn't any other reason to toss a frame - like a transient of one sort or another. That would suggest that more, shorter exposures are preferable to fewer, longer ones.
 
  • #4
There's a quantitative answer here, but I'm not certain off the top of my head what it is. I believe the relationship for noise reduction is quadratic, but I'm not certain of that (I probably should be...). In other words, twice as many subs is equivalent to 1.4x exposure length. So losing 50% more subs would outweigh the longer exposure by about 20% if I got the relationship right.
 
  • #5
I think sky quality is one of the criteria: the better it is, the longer the exposures that are possible.

As to noise reduction, it improves with the square root of the number of exposures. But if one compares short and long exposures of the same object taken at the same time, the difference is remarkable and seems to follow the same square-root dependence.
 
  • #6
heh.. it is a tricky question! Thanks for the responses so far.

Sky quality/tracking accuracy/etc are constant across the comparison, so they shouldn't figure into the consideration.

Right- noise reduction goes as √(number of images), so that seems to imply more images = better. But longer exposures imply increased sensitivity since the lower limit of detection is around 1 signal photon/exposure.
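(In symbols: averaging ##N## frames leaves the mean signal level unchanged but shrinks the frame-to-frame noise to ##\sigma / \sqrt{N}##, so the SNR of the average grows as ##\sqrt{N}##, while the per-frame signal grows linearly with the exposure time ##t##.)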

A simple example of where I get all wrapped around myself: longer exposures mean a brighter sky background. On one hand, the background is subtracted, so it doesn't matter. On the other, sky background noise (per frame) scales as the square root of the average background level (Poisson statistics), so this source of noise *increases* with longer exposure times and thus is not fully compensated for by increasing the number of frames.

Part of the reason I am thinking about this is that when I look back to when I started astrophotography, my efficiency was around 5%. No tracking mount meant that the exposure time was only 0.5s, but I kept nearly every single frame; I could just barely make out magnitude 12 or 13 stars. Currently, I am able to reliably image stars of magnitude 18 or 19 using the same camera and lens.
 
  • #7
If you have 4x the exposure time, you get 2x the noise and 4x the signal.
If you have 4x the number of exposures, you get 2x the noise and 4x the signal.
Right?
 
  • #8
Vanadium 50 said:
If you have 4x the exposure time, you get 2x the noise and 4x the signal.
If you have 4x the number of exposures, you get 2x the noise and 4x the signal.
Right?
I don't think that's entirely correct. Since the setup is the same for all images, a single pixel in a single image can be parameterized by a given background level (from the diffuse sky illumination) and signal level (from any luminous objects: stars, nebulae, etc.). Multiple frames introduce the additional idea of background and signal variance. If I can make the variances infinitesimal, then I can separate signal and background in the entire image. If I make the exposures shorter and shorter, the signal level approaches the background level, so I can't detect objects.

Averaging many frames decreases the standard deviations by a factor of (# frames)^(1/2) but maintains the same average background/signal levels, so increasing the number of averaged frames does not increase the signal, only the SNR. Increasing the exposure time increases both the background and signal levels as well as the variances. Perhaps these can be made to nearly cancel each other out, but it would seem that I need more frames at long exposures to get the same level of variance reduction.

Even worse, it takes more nights of obtaining long exposures to generate equivalent (do they have to be equal?) numbers of short- and long-exposure frames.

I could just 'do the experiment', but that would potentially cost me several years' worth of observations.
 
  • #9
Andy Resnick said:
heh.. it is a tricky question! Thanks for the responses so far.

Sky quality/tracking accuracy/etc are constant across the comparison, so they shouldn't figure into the consideration.
That would make one option consistently better, but which one is better could still depend on the value of those constants.
Andy Resnick said:
Right- noise reduction goes as √(number of images), so that seems to imply more images = better. But longer exposures imply increased sensitivity since the lower limit of detection is around 1 signal photon/exposure.
It depends on whether the shorter exposures can capture your image. You have not said if you are using a digital camera with noise reduction at higher ISO settings and how well it works.
Andy Resnick said:
A simple example of where I get all wrapped around myself: longer exposures mean a brighter sky background. On one hand, the background is subtracted, so it doesn't matter. On the other, sky background noise (per frame) scales as the square root of the average background level (Poisson statistics), so this source of noise *increases* with longer exposure times and thus is not fully compensated for by increasing the number of frames.

Part of the reason I am thinking about this is that when I look back to when I started astrophotography, my efficiency was around 5%. No tracking mount meant that the exposure time was only 0.5s, but I kept nearly every single frame; I could just barely make out magnitude 12 or 13 stars. Currently, I am able to reliably image stars of magnitude 18 or 19 using the same camera and lens.
Other than the problem of the shorter exposure not capturing dim stars, using more short photos is advantageous in many ways. So the disadvantage of the shorter exposure boils down to how well your camera (digital or film?) works at higher ISO numbers.

I recommend doing some experiments.
 
  • #10
Andy Resnick said:
I could just 'do the experiment', but that would potentially cost me several years' worth of observations.
I don't understand this. How could a night of experimenting with A versus B cost you that much?
 
  • #11
russ_watters said:
There's a quantitative answer here, but I'm not certain off the top of my head what it is. I believe the relationship for noise reduction is quadratic, but I'm not certain of that (I probably should be...). In other words, twice as many subs is equivalent to 1.4x exposure length. So losing 50% more subs would outweigh the longer exposure by about 20% if I got the relationship right.
That doesn't seem right to me if you're talking about subs that are 2x the exposure time of the shorter subs. If you add two subs of 10 seconds you should get the same signal as a single 20 second sub. The difference should just be in the read noise of the detector. This noise should contribute more to the 10 second subs compared to the 20 second sub.
timmdeeg said:
I think sky quality is one of the criteria: the better it is, the longer the exposures that are possible.
Whatever signal you're getting should add linearly with exposure time or exposure number, so I wouldn't think there would be any difference for sky quality.

Vanadium 50 said:
If you have 4x the exposure time, you get 2x the noise and 4x the signal.
If you have 4x the number of exposures, you get 2x the noise and 4x the signal.
Right?
That is my understanding, yes, except that read noise contributes more in shorter exposures vs longer exposures since it doesn't scale with exposure time.

Andy Resnick said:
Averaging many frames decreases the standard deviations by a factor of (# frames)^(1/2) but maintains the same average background/signal levels, so increasing the number of averaged frames does not increase the signal, only the SNR.
I think you mean decreases the SNR?

Andy Resnick said:
Increasing the exposure time increases both the background and signal levels as well as the variances. Perhaps these can be made to nearly cancel each other out, but it would seem that I need more frames at long exposures to get the same level of variance reduction.
This is not true. Doubling exposure time doubles the signal but only increases the noise by the square root of 2, raising your SNR. This is exactly the same as taking double the subs at half the exposure time (except that read noise contributes more to the shorter subs).
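Schematically (shot noise plus read noise only, with signal flux ##s## and per-frame read noise ##R##):
$$\mathrm{SNR}_{1 \times 2t} = \frac{2st}{\sqrt{2st + R^2}}, \qquad \mathrm{SNR}_{2 \times t} = \frac{2st}{\sqrt{2st + 2R^2}}$$
The two agree exactly when ##R = 0##; otherwise the single long exposure wins by the extra read-noise term.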

Issues of sensitivity, such as detecting very faint stars, shouldn't really apply since digital sensors typically have a linear response even at very low light levels and are capable of detecting single photons. If you detect 1 photon every 10 subs, then that will show up as a slightly brighter pixel against the background once you average many subs together. The only issue should be whether that very small signal can be teased out of all the sources of noise.

Well, that's all according to my understanding of the topic.
 
  • #12
Drakkith said:
That doesn't seem right to me if you're talking about subs that are 2x the exposure time of the shorter subs. If you add two subs of 10 seconds you should get the same signal as a single 20 second sub. The difference should just be in the read noise of the detector. This noise should contribute more to the 10 second subs compared to the 20 second sub.
Same signal, more shorter subs means more noise, so the SNR is worse for shorter subs. By a factor of the square root of the number of subs: 1.4x for twice as many subs vs 2x for twice as long. This goes through the math:

https://dslr-astrophotography.com/long-exposures-multiple-shorter-exposures/
 
  • #13
russ_watters said:
Same signal, more shorter subs means more noise, so the SNR is worse for shorter subs.
Certainly. But the question is why and by how much.
russ_watters said:
By a factor of the square root of the number of subs: 1.4x for twice as many subs vs 2x for twice as long. This goes through the math:
I don't see this anywhere in the math. In fact, the article states that, with readout noise excluded:

Now we can see it doesn’t matter for the SNR how we fill in values for N and t as long as N*t = the same value. So only total exposure time matters (N*t) and not how we divide it in subexposures.

And:

So yes, it’s true that if read noise wouldn’t exist it doesn’t matter what exposure time you use and how many exposures you take, all that matters is the total integration time. And even with read noise included in the formula, you can see that once the other values are much much bigger than the read noise, the same will apply; the read noise becomes (almost) irrelevant and we are left in the situation where it doesn’t matter what exposure time you use.

So the only thing that changes the real situation from this is that real images have readout noise. And, as the article states, readout noise only becomes important once all other sources of noise become small, such as when taking very short exposures or imaging very faint targets in very dark skies with a camera that has very low dark current.
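For reference, the sub-exposure SNR equation the article works through has this general form (my notation, not a verbatim copy of the article's):
$$\mathrm{SNR} = \frac{S\,N\,t}{\sqrt{N\,t\,(S + B + D) + N\,R^2}}$$
where ##S##, ##B##, and ##D## are the source, sky, and dark-current fluxes (e-/s), ##R## is the read noise per sub, ##N## is the number of subs, and ##t## is the sub length. Set ##R = 0## and every remaining term depends only on the product ##N t##, which is exactly the article's point.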

Edit: The article concludes with this:

However, I feel the most important conclusion probably is the fact that the exposure length is only relevant when the read noise is relevant. And the read noise is only relevant when you are imaging under a dark sky.
With most moderately to strong light polluted skies, the subexposure length won’t matter much once you are using 2 to 3 minute exposures.
 
  • #14
@Andy Resnick Suffice it to say, your best bet is to try both methods and see which one works better. My personal opinion is any possible benefit to switching to slightly longer subs is negligible and you'd be better off getting a new mount or some other means of increasing your exposure time to a minute or more. With 10-15 second exposures your readout noise is an appreciable fraction of your shot noise for targets of moderate brightness (based on some calculations from an image of the Omega Nebula I took).

Details: 300 second sub. I chose a random pixel from a semi-bright region of the nebula and obtained a pixel value of about 2000. I then subtracted the sky background and bias of about 700 to obtain a signal of 1300. I then divided that by 20 to obtain the same signal as a 15 second sub. That gave me 65. The noise is then the square root of this, or 8.

Of course those are pixel values, or ADUs. We want to know electrons. My camera has a gain of 0.19e/ADU, so converting the noise to electrons yields a noise of about 1.52e. Compare this with the listed readout noise of 5e and you can see that even a modern, specialized CCD sensor is still dominated by readout noise at short exposure times.

Simply increasing the exposure time to 1 minute would result in a shot noise of 16 ADUs, or 3.04e, and 2 minutes gives 4.33e - a drastic improvement, since we want readout noise to contribute as little as possible relative to shot noise.
 
  • #15
FactChecker said:
I don't understand this. How could a night of experimenting with A versus B cost you that much?
Because I live in a location with generally poor or worse viewing conditions. The images I post here represent about 15 hours of integration time. At best, I can achieve total integration times of about an hour per night, with on average two acceptable viewing nights per week. For me, objects are only in a clear patch of sky for about 1 month of the year.

Do the math.
 
  • #16
FactChecker said:
You have not said if you are using a digital camera with noise reduction at higher ISO settings and how well it works.

I use a DSLR (Nikon D810), and photograph at the lowest ISO (ISO 64) because I want maximum bit depth per image.
 
  • #17
Drakkith said:
This is not true. Doubling exposure time doubles the signal but only increases the noise by the square root of 2, raising your SNR. This is exactly the same as taking double the subs at half the exposure time (except that read noise contributes more to the shorter subs).
I disagree - stacking algorithms *average* all of the subs, they don't add them. I mean, they probably do add them first (and then re-normalize).
 
  • #18
Drakkith said:
@Andy Resnick Suffice it to say, your best bet is to try both methods and see which one works better. My personal opinion is any possible benefit to switching to slightly longer subs is negligible and you'd be better off getting a new mount or some other means of increasing your exposure time to a minute or more.
I think that's a reasonable conclusion, at least since there's not an obvious difference. The good news is that I've been using the same equipment for at least 10 years and still haven't hit the performance limit - there's still plenty of room for technique improvements; I can tell the difference between images I took last week and ones from 3 years ago. No need to spend large stacks on a new mount + computer control... yet :)

Edit: You know, when I think about it, it's far more cost effective to just move a few thousand miles :)
 
  • #19
Andy Resnick said:
I disagree - stacking algorithms *average* all of the subs, they don't add them. I mean, they probably do add them first (and then re-normalize).
Adding and averaging are functionally identical here. There will be no difference in the image with either method. There will be a difference if you use another method, such as median combine or sigma clip.
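(Quick check: dividing the summed stack by ##N## scales the signal and the noise by the same factor, so ##\mathrm{SNR}_{avg} = \frac{(1/N)\,S_{sum}}{(1/N)\,\sigma_{sum}} = \mathrm{SNR}_{sum}##.)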
 
  • #20
Drakkith said:
Adding and averaging are functionally identical here. There will be no difference in the image with either method. There will be a difference if you use another method, such as median combine or sigma clip.

Possibly, but averaging (or adding) two 5-second exposures does not give the same result as a single 10-second exposure.
 
  • #21
Andy Resnick said:
averaging (or adding) two 5-second exposures does not give the same result as a single 10-second exposure.
Then isn't that your answer? Whichever one you like best is what you should do.
 
  • #22
Drakkith said:
I don't see this anywhere in the math. In fact, the article states that, with readout noise excluded, only total exposure time (N*t) matters, not how it is divided into subexposures.
You're right, I was skimming over the noise sources part, which makes the ratio change depending on which source is dominant (because most depend only on total time, not the number of exposures). Still, in the OP's example the exposures are indeed very short, which means the readout noise is likely a bigger factor, probably by a lot, bringing it closer to that oversimplified ratio.

Honestly, I've been using a table for exposure times tailored to my camera (and sky), and didn't dig very far into the math of how the limit was reached...
 
  • #23
Andy Resnick said:
Possibly, but averaging (or adding) two 5-second exposures does not give the same result as a single 10-second exposure.
Certainly not. Always go for longer subs if everything else is equal (which it is not in your case since you lose a greater percentage of longer subs).
 
  • #24
Drakkith said:
Certainly not. Always go for longer subs if everything else is equal (which it is not in your case since you lose a greater percentage of longer subs).
Right- which is why I came up with the 'efficiency' metric. I was hoping there was some quantitative justification for it, is all.
 
  • #25
Vanadium 50 said:
Then isn't that your answer? Whichever one you like best is what you should do.
It seems to be the case. I was wondering/hoping for some rational justification for my 'efficiency' metric, is all.
 
  • #26
Andy Resnick said:
hoping for some rational justification
Like in the movies where someone says "there must be a perfectly rational explanation for all this!"? Be careful...that doesn't always end well.
 
  • #27
Andy Resnick said:
It seems to be the case. I was wondering/hoping for some rational justification for my 'efficiency' metric, is all.
Okay, I dived a little bit into the math. Your camera appears to have a read noise of about 2.62e- at ISO 1600. I can't find a number for ISO 64, so I'm going to assume it's close to 2.62e- as well. Using the other numbers from my previous post (since I have no others to use) and assuming similar efficiency to my camera and a similar optical setup, you're looking at an average signal of about 8.2e- at 10 seconds and 12.3e- at 15 seconds with a shot noise of 2.86e- and 3.5e- respectively (I think my math in the previous post is wrong, as I square rooted before converting to e-).

Let's assume negligible dark current and sky signal (and their associated noise) to simplify things. Adding the noise terms:

10s subs: Final noise per sub is 3.87e-. SNR is 2.12.
15s subs: Final noise per sub is 4.372e-. SNR is 2.81.

Obviously our SNR is going to be better with a longer sub if we just compare single subs. But we're losing more longer subs, so let's add multiple frames together. For each hour of imaging you get 45 min of signal from 10s subs and 30 min of signal from 15s subs. Signal flux is 0.82e-/s.

10s subs: 270 subs with total signal of 2214e-. Total noise is 63.78e-. SNR is 34.72.
15s subs: 120 subs with total signal of 1476e-. Total noise is 47.96e-. SNR is 30.78.
With better tracking, assuming you save 90% of all subs at 15 secs:
216 subs with total signal of 2656e-. Total noise of 64.33e-. SNR is 41.29.

Saving 90% of subs at 1 min exposure time:
54 subs with total signal of 2656e-. Total noise of 55.02e-. SNR is 48.27.

And again with 5 min exposures (adding an extra minute since 90% of 60 is 54 min and we need 55 min for 11 subs):
11 subs with total signal of 2706e-. Total noise of 52.74e-. SNR is 51.31.

Notice that as exposure time increases, read noise contributes less and less to our overall noise. The noise equation, without the dark and sky noise terms, is:

##noise = \sqrt{signal + (ron^2 * N)}##

Where ##ron## is read-out-noise and ##N## is the number of subs. Our signal term here is also what I'm calling the shot noise term, since shot noise is the square root of the signal. Note that I've already added the signal from all the subs, so I've omitted the ##N## from the signal term.

With our 10s subs the total read noise term is 1853 while the shot noise term is 2214. That is, read noise contributes 45% of the total variance (the sum under the square root). This falls to 35.8% with our 15 sec subs, 12.2% with 1 min subs, and 2.7% with 5 min subs.

Exposure time is so important that you can get the same SNR with 25 min of 5-min subs as you can with 45 min of 10-sec subs. SNR of 34.59 and 34.72 respectively.

Keep in mind that your read noise at low ISO may be different from what was used here and we did not include either dark noise or sky noise.
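For anyone who wants to poke at the numbers, here's a minimal Python sketch of the same bookkeeping (the flux and read-noise values are the assumptions stated above, not measured constants):

```python
import math

# Assumed values from this post: per-pixel signal flux and per-sub read noise.
FLUX = 0.82  # e-/s
RON = 2.62   # e-

def stack_snr(n_subs, t_exp):
    """SNR of n_subs stacked subexposures of t_exp seconds each."""
    signal = n_subs * FLUX * t_exp               # total signal, e-
    noise = math.sqrt(signal + RON**2 * n_subs)  # shot + read noise in quadrature
    return signal / noise

# One hour of imaging under the keep rates discussed above:
print(stack_snr(270, 10))   # 75% of 10s subs kept   -> ~34.7
print(stack_snr(120, 15))   # 50% of 15s subs kept   -> ~30.8
print(stack_snr(216, 15))   # 90% of 15s subs kept   -> ~41.3
print(stack_snr(54, 60))    # 90% of 1-min subs kept -> ~48.3
print(stack_snr(11, 300))   # eleven 5-min subs      -> ~51.3
```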
 
  • #28
May I ask if your preferred targets are lots of single pixels, such as a star-field, or many-pixel, e.g. planetary?

Been a while --'Before Exo-Planets'-- since I attended local Astronomy society meetings, but one of the photography issues, after Sodium-filtering city sky-glow, was 'twinkle'.

I was more interested in 3D-mapping of 'local' (~50 LY) star systems, but my understanding was that, within the limits of sensitivity, more, shorter individual exposures allowed better handling of 'twinkle' and other factors such as wide-bodies' contrails and lights...

(IIRC, street-lighting's shift to more efficient 'high-pressure' sodium etc. drove observing from the suburbs...)
 
  • #29
Drakkith said:
Okay, I dived a little bit into the math.
Thanks! I appreciate this. However, I think my predominant source of noise is shot noise, not read noise. My night skies are not dark!
 
  • #30
Nik_2213 said:
May I ask if your preferred targets are lots of single pixels, such as a star-field, or many-pixel, e.g. planetary?

Been a while --'Before Exo-Planets'-- since I attended local Astronomy society meetings, but one of the photography issues, after Sodium-filtering city sky-glow, was 'twinkle'.

I'll photograph (nearly) anything :)

Twinkle (clear sky turbulence) results, for me, in an overall broadening of the Airy disc. For example, on a good night, my average star FWHM is 2.5 pixels (using a certain lens, and computed by software). On a bad night, that FWHM may spread out to 2.9 pixels.

In terms of stacking, I'll cut frames that have a FWHM above a threshold and frames that have an ellipticity above a certain threshold - so on bad nights, I end up with fewer frames than on a good night.
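A hypothetical sketch of that culling step in Python (the field names and threshold values are illustrative, not those of any particular stacking software):

```python
from dataclasses import dataclass

@dataclass
class FrameStats:
    filename: str
    fwhm: float         # average star FWHM in pixels, as reported by the software
    ellipticity: float  # average star ellipticity (0 = perfectly round)

# Illustrative thresholds; in practice these depend on the lens and the night's seeing.
FWHM_MAX = 2.9
ELLIPTICITY_MAX = 0.25

def stackable(frames):
    """Keep only frames that pass both quality cuts."""
    return [f for f in frames
            if f.fwhm <= FWHM_MAX and f.ellipticity <= ELLIPTICITY_MAX]
```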

If you are interested, an excellent overall reference is Roggemann and Welsh's "Imaging Through Turbulence", and a more focused presentation is in David Saint-Jacques' dissertation "Astronomical Seeing in Space and Time".
 
  • #31
Andy Resnick said:
Thanks! I appreciate this. However, I think my predominant source of noise is shot noise, not read noise. My night skies are not dark!
Can you find a recent picture and get an average background value from it? We're already in the weeds, might as well go all the way.
 
  • #32
Drakkith said:
Can you find a recent picture and get an average background value from it? We're already in the weeds, might as well go all the way.
I have to think about this - the original file (.NEF) is not an image, and regardless of how it's converted into an image, there is the complicated issue of tone mapping. I don't think I am able to go into the NEF file and extract information directly. What I can do (and have been doing) is use Fiji to analyze the computed 32-bit FITS image, but again, that's normalized, so it's not clear how much useful information can be recovered.
 
  • #33
Ok, here goes:

I am going to compare two imaging conditions that differ only by individual exposure time: in one case it's 8 seconds (imaging M45), in the other 20 seconds (imaging M81 and M82). For both of these cases everything else - the camera settings (other than exposure time), lens settings, method of stacking, and total integration time - is the same.

I need to be explicit about how these image sets were generated for display and analysis: single images began as 14-bit RAW files, which were converted into 16-bit/ch TIFF images in Astro Pixel Processor. In order to more easily visualize the comparison I originally asked about, the histogram was stretched before the TIFF image was saved.

I don't fully understand how the histogram was stretched: the technical term is "digital development processing (DDP) stretch", and I used these settings: background targeted at 30% of the dynamic range, base pedestal of 0.0%, and a black point at -2 sigma. This gives me the brightest background.

Here are the 'original images'... first M81 & M82, then M45. (Note: the first image shows an out-of-focus tree branch; for the analysis I selected a region that is not occluded.)

[Image: DSC_5332-St.jpeg]

[Image: DSC_9823-St.jpg]


Examining the background at high magnification (images in the same order):

[Image: Untitled4-1.jpg]

[Image: Untitled5-1.jpg]

For these crops, I scaled the original image about 800% without interpolation to preserve the individual pixel variations and cropped to a size I can post here (800 pixels wide).

Note there are high-frequency spatial variations in both brightness and color. These are "in quadrature", so to speak, and so decreasing the saturation also reduces the overall noise.

Some numbers: the intensity variations of this patch of background, according to Fiji, are:

20s exposure time: mean = 20665, st. dev = 9355
8s exposure time: mean = 18340, st. dev = 7248

This qualitatively agrees with shot noise dominating: brighter image = more noise. But note that the mean does not scale linearly with exposure time - that's one consequence of the nonlinear DDP stretch.
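For what it's worth, the same patch statistics can be pulled from a FITS file in a few lines; this is a hypothetical sketch assuming astropy is available, with a placeholder filename and region:

```python
from astropy.io import fits

data = fits.getdata("stacked_image.fits")  # placeholder filename; returns a pixel array
patch = data[1000:1200, 1000:1200]         # placeholder background region (rows, cols)
print(f"mean = {patch.mean():.1f}, st. dev = {patch.std():.1f}")
```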

Now what happens after stacking? I don't fully understand the entire algorithm, but the stacking methodology was identical for both objects, and my parameter choices were mostly the default settings.

Here are two crops of the stacked image, prior to any post-processing (e.g. background flattening and subtraction).

M81 & M82, averaging 2185 frames (integration time = 43700 seconds):
[Image: M81_M82-crop-St-43700s-1.jpg]


and averaging 6225 frames for M45 (integration time = 49800s):

[Image: M45-crop-St-49800s-1.jpg]


Fiji reports:

Averaging 2185 frames: mean = 27068, st. dev = 11510
Averaging 6225 frames: mean = 44519, st. dev = 4117

I'm not entirely sure what conclusions I can draw from this, other than "averaging more frames = decreased noise", but that's hardly a novel concept.

How's that?
 
  • #34
Naturally, the second I posted the above I realized I should have Fiji analyze the unscaled images (no DDP stretch). Those numbers are:

Single frames: 20s mean = 2481, StDev = 17; 8s mean = 2535, StDev = 6. Bollocks!
Stacked: 2185 frames -> mean = 498, StDev = 0; 6225 frames -> mean = 362, StDev = 8. OMG.

I was better off before... :(

Edit: analyzing whole frames rather than crops doesn't help much:

20s exposure mean = 2563, StDev = 15, Max = 20216; averaging 2200 of them -> mean = 497, StDev = 21, Max = 19380.

8s exposure mean = 2534, StDev = 157, Max = 65535 (saturated pixels). Averaging 6200 of them -> mean = 837, StDev = 708, Max = 65535.
 
  • #35
Indeed, DDP does both a gamma stretch AND an unsharp mask, rendering any quantitative analysis basically impossible. At least for me.

As for your numbers from Fiji's analysis, I can't really make any sense out of them. All I did was choose a single pixel in my image for a quick analysis. I also don't have a color camera to complicate things, just a monochrome camera with a filter wheel.
 