How does the inverse square law apply to a focused detector?

In summary: for an extended source that spans many pixels, the per-pixel brightness in a digital image does not follow the inverse square law; the 1/d² falloff appears instead in the total light collected and in the number of pixels the source covers.
  • #1
Mark Dresser
I am interested in evaluating light intensity variation in a digital image. A colleague wants to apply an inverse square law correction to account for distance variation. I am trying to justify that in this case, the inverse square law does not apply.

Treating each pixel as a detector, it has a fixed acceptance angle and the sensing area will vary as the square of the distance from the detector.

The source is much larger than the area covered by a pixel at the source distance. If the distance is doubled the intensity would be 1/4 but the source area covered by a pixel would also increase by a factor of 4 so the two effects cancel as long as the focused detector pixel area is completely on the source in both cases.
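A minimal numerical sketch of this cancellation, treating the pixel as a fixed solid angle viewing a uniform extended source (all numbers below are illustrative assumptions, not measurements):

```python
# Sketch of the cancellation argument: a pixel with a fixed acceptance
# angle viewing an extended, uniform source. All numbers are assumed,
# illustrative values.

pixel_solid_angle = 1e-8      # sr, fixed by the optics (assumed)
source_radiance = 100.0       # W / (m^2 sr), uniform source (assumed)

def pixel_signal(distance_m):
    # Irradiance from any given source patch falls off as 1/d^2 ...
    falloff = 1.0 / distance_m**2
    # ... but the source area seen by the pixel grows as d^2.
    footprint_area = pixel_solid_angle * distance_m**2
    return source_radiance * footprint_area * falloff

# Doubling the distance leaves the per-pixel signal unchanged:
print(pixel_signal(20.0))  # same value...
print(pixel_signal(40.0))  # ...at twice the distance
```

The 1/d² falloff and the d² growth of the pixel's footprint cancel exactly, which is just the statement that radiance is conserved along the line of sight.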

My colleague is a very smart guy but I can't see the flaw in my logic. The concept of a focused detector in the context of the inverse square law doesn't seem to be covered in any of the references I've found. I'd be grateful for a second (third?) opinion.
 
  • #2
Mark Dresser said:
I am interested in evaluating light intensity variation in a digital image. A colleague wants to apply an inverse square law correction to account for distance variation.

Can you be more specific about what is in the image, and what sort of distance variation is present?
 
  • #3
And what is doing the focusing?
 
  • #4
This is a common source of arguments in amateur astronomy circles, but I too am not clear on the setup you are describing.

A hint/guess though:
It is notable that the apparent surface brightness of a galaxy is not strongly correlated with distance, due to the effect you describe. For example, a galaxy that is further away sends less light to a telescope/camera, but that light is projected onto a smaller area, largely cancelling out the reduction. That enables Hubble (and amateurs) to photograph galaxies at vastly different distances in one frame.
 
  • #5
The source would typically be a large slab of hot steel, 20 - 30 m from the camera. Focussing is done with a commercial camera lens. The light source is many pixels wide at all of the distances in question.
 
  • #6
I would think that if the object's distance from the camera is more than three or four times the diameters of both the object and the light-gathering aperture, the ISL would be a good approximation. This is a rule of thumb for radiation detectors, the reasoning being that every point on the observed object then sees about the same subtended solid angle of the aperture, which follows the ISL.
 
  • #7
gleem said:
I would think that if the object's distance from the camera is more than three or four times the diameters of both the object and the light-gathering aperture, the ISL would be a good approximation. This is a rule of thumb for radiation detectors, the reasoning being that every point on the observed object then sees about the same subtended solid angle of the aperture, which follows the ISL.
I think you are (correctly) describing the total amount of light hitting the detector, but the OP is describing the intensity of the light.
 
  • #8
Mark Dresser said:
The source would typically be a large slab of hot steel, 20 - 30 m from the camera. Focussing is done with a commercial camera lens. The light source is many pixels wide at all of the distances in question.
I would say your interpretation of how the photography works is correct. The inverse square law applies to the total amount of light hitting the detector, not the intensity (pixel brightness value).
 
  • #9
I am thinking of two extreme cases and ignoring everything in between:
1. A point source with a detector that has a very wide field of view: ISL clearly applies. In the lighting industry the rule of thumb is that for distances greater than 5× the source size, ISL is a good enough approximation, but their detector typically averages incident light over a half-sphere.
2. A large source that fills the entire detector sensing area: if the distance increases enough, the source will no longer fill the detector and ISL will start to apply again.

I guess another way to think about it would be to say that a focussed detector changes the boundary between near and far field conditions?
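The two regimes above, and the crossover between them, can be sketched with a toy model in which the source has a fixed physical size and the pixel a fixed acceptance angle (all parameter values are illustrative assumptions):

```python
# Toy model of the two regimes: per-pixel signal vs distance for a
# source of fixed physical size viewed by a pixel of fixed acceptance
# angle. All parameters are assumed, illustrative values.

source_width = 1.0          # m, physical size of the source (assumed)
pixel_fov = 0.001           # rad, pixel acceptance angle (assumed)

def per_pixel_signal(distance_m):
    """Relative per-pixel signal, normalised to 1 in the filled regime."""
    footprint = pixel_fov * distance_m  # width the pixel sees at the source
    if footprint <= source_width:
        # Source overfills the pixel: the two d^2 factors cancel (flat).
        return 1.0
    # Source underfills the pixel: ISL takes over beyond the crossover
    # distance d* = source_width / pixel_fov.
    crossover = source_width / pixel_fov
    return (crossover / distance_m) ** 2

print(per_pixel_signal(100.0))    # filled regime: constant signal
print(per_pixel_signal(2000.0))   # beyond d* = 1000 m: falls as 1/d^2
```

In this picture the "focused detector" simply moves the crossover distance out to where the source shrinks below one pixel's field of view, which matches the near-field/far-field intuition above.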
 
  • #10
russ_watters said:
I would say your interpretation of how the photography works is correct. The inverse square law applies to the total amount of light hitting the detector, not the intensity (pixel brightness value).
So if the image illuminates multiple pixels, the inverse square law manifests in terms of the number of pixels illuminated. Double the distance and quarter the image area and, accordingly, the number of illuminated pixels. But if the image illuminates only a fraction of a pixel, the inverse square law manifests in terms of the intensity at that pixel. Double the distance and quarter the intensity.
 
  • #11
russ_watters said:
I would say your interpretation of how the photography works is correct. The inverse square law applies to the total amount of light hitting the detector, not the intensity (pixel brightness value).
But to maintain constant intensity at a single pixel as the distance increases we would have to focus the decreased amount of light on a corresponding smaller area of the pixel array, no? It's not clear to me from the original post and followup what the geometry is and whether it has this property.
 
  • #12
jbriggs444 said:
So if the image illuminates multiple pixels, the inverse square law manifests in terms of the number of pixels illuminated. Double the distance and quarter the image area and, accordingly, the number of illuminated pixels. But if the image illuminates only a fraction of a pixel, the inverse square law manifests in terms of the intensity at that pixel. Double the distance and quarter the intensity.
Agreed. For astronomy, what you describe manifests in a difference between imaging individual stars vs entire galaxies.
 
  • #13
Nugatory said:
But to maintain constant intensity at a single pixel as the distance increases we would have to focus the decreased amount of light on a corresponding smaller area of the pixel array, no?
Yes.
It's not clear to me from the original post and followup what the geometry is and whether it has this property.
My read is that the imaging system is not changing, but the subject is moving relative to the camera. So if you double the distance, it covers 1/4 as many pixels.
 
  • #14
Thanks all for your input.

I like jbriggs444's explanation: "So if the image illuminates multiple pixels, the inverse square law manifests in terms of the number of pixels illuminated."

I think I'm satisfied enough to move forward on an experiment to test it!
 
  • #15
russ_watters said:
Yes.

My read is that the imaging system is not changing, but the subject is moving relative to the camera. So if you double the distance, it covers 1/4 as many pixels.
A pixel is a sample, and sampling theory says that, as long as you are sampling at a high enough rate, the original signal can be reconstructed. So I am a bit reluctant to accept that an argument based just on the number of pixels is fully valid. There are so many false assumptions about resolution being based just on pixel count, and I am always wary. I think that you can (should) ignore the pixels, and also ignore their acceptance angle, for any distant source.
For an unfocussed point source (i.e. no lens), the energy flux through a given area follows ISL exactly. So, in terms of pixels, there is the same number all the time and you just integrate. When you focus with a lens, the total flux through the lens also follows ISL, and the sensor array measures that total as long as all the light from lens to image is intercepted (which light from a distant point will be). That argument applies to a single large detector (a bolometer, for instance), and I think a large array would deliver the same answer.
 
  • #16
sophiecentaur said:
A pixel is a sample, and sampling theory says that, as long as you are sampling at a high enough rate, the original signal can be reconstructed. So I am a bit reluctant to accept that an argument based just on the number of pixels is fully valid.
While I'm not entirely sure what you are getting at, I'm not, strictly speaking, using pixels themselves, except as a proxy for area or angle. Intensity can be measured in photons per unit area or angular area. With the imaging system fixed, they all vary together by the appropriate relations.

As I and others noted, that assumption gets inaccurate when what you are imaging is smaller than a pixel, but that is not an issue here.
 
  • #17
I tend to shy away from photons and pixels because they can cloud the issue at times. But, now that they have been introduced, the following argument should apply. The lens gathers a number of photons per second, and the CMOS or CCD elements are pretty well linear, so the overall photon count will be the same whether one element or sixteen elements record it.
I know that’s not necessarily the full answer to the question but I think it at least nails part of it.
 
  • #18
To think about this another way:
You can easily demonstrate that if you have a square source that occupies 16 pixels in your detector field of view and you double the viewing distance it will occupy 4 pixels. If the ISL applies then the total energy hitting the entire detector array also drops by a factor of 4 so that the intensity of the light hitting the remaining 4 pixels must remain constant.
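The arithmetic in this example can be written out explicitly (the energy units and pixel counts are arbitrary, illustrative choices):

```python
# Worked arithmetic for the 16-pixel example above. Energy units and
# pixel counts are arbitrary, illustrative values.

total_energy_d = 16.0     # arbitrary units collected at distance d
pixels_d = 16             # pixels the source covers at distance d

# Doubling the distance quarters both the total energy (ISL on the
# whole aperture) and the number of illuminated pixels:
total_energy_2d = total_energy_d / 4
pixels_2d = pixels_d // 4

print(total_energy_d / pixels_d)    # per-pixel intensity at d
print(total_energy_2d / pixels_2d)  # identical per-pixel intensity at 2d
```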
 
  • #19
If we are discussing "pixels" then the implication is that a lens is involved (camera or telescope). The aperture of the lens determines how much incident light power reaches the sensor array. There's no doubt (?) that the energy into the lens aperture is subject to the ISL for large distances. A small image must be assumed if we have already accepted that the object is far enough away for ISL. All the photons going into the lens will fall on sensor elements (quantum efficiencies in the region of 90%+ are typical of good sensors). Sensor elements have a linear response (volts per received photon is constant), so whether the image falls on just one sensor element or several, the integral over the image gives the same result. So the photons/pixels detail will not affect the result compared with the classical idea of power through the lens.
This is not an answer to the whole question, of course, but it does clear up a potential misunderstanding (intuition and experience with normal photography can get in the way). If you use high magnification on a star image, the image becomes larger and appears less bright because the light is spread out, but that's another issue, to do with the non-linear way the eye (plus our perception) works; CMOS elements are not subjective. There is always an optimum magnification for viewing, and we tend to choose an image size that's just bordering on the diffraction limit. Pale, fuzzy stars and planets are 'no fun'.

The actual size of the image of a 'point' source will be diffraction limited, and there will be an Airy disc distribution of the light falling on the sensor. The size in mm on the sensor will depend on the focal length of the lens, and the angular radius of the disc's first dark ring (in terms of the object) will be

sin θ ≈ 1.22 λ / D

(from Wikipedia), where λ is the wavelength and D is the aperture diameter. If the scope records the whole of the disc, the total flux will be recorded.
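As a rough illustration of the scale involved, one can plug representative numbers into the Airy formula; the wavelength, aperture, and focal length below are assumed values, not taken from the thread:

```python
# Airy-disc size for an illustrative lens. Wavelength, aperture, and
# focal length are assumed values chosen only for scale.

wavelength = 550e-9   # m, green light (assumed)
aperture_d = 0.05     # m, lens aperture diameter (assumed)
focal_len = 0.2       # m, focal length (assumed)

# Angular radius of the first dark ring: theta ~ 1.22 * lambda / D
theta = 1.22 * wavelength / aperture_d   # radians (~1.3e-5 rad here)

# Linear radius of the disc on the sensor: r = f * theta
spot_radius = focal_len * theta          # metres (a few microns here)

print(theta)
print(spot_radius)
```

A spot a few microns across is of the same order as a typical sensor element, which is why a star image can land on one pixel or be spread over several.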

The f-number of a lens ("focal ratio") doesn't affect the recorded brightness of a star; that depends only on the aperture. For regular photographs of extended scenes, however, the f-number does govern how bright the final image looks.

If we take an extended but distant object, the above argument applies to the whole input energy flux if it's considered point by point. In fact, if your telescope field is filled by the Moon and you then go halfway there, the ISL will not appear to apply, because light from the outer parts of the Moon's image will no longer hit the sensor or your eye.
 

FAQ: How does the inverse square law apply to a focused detector?

What is the inverse square law?

The inverse square law is a principle in physics stating that the intensity of a physical quantity, such as light or sound, decreases in proportion to the square of the distance from the source. In other words, the farther a detector is from the source, the weaker the received intensity becomes.

How does the inverse square law apply to a focused detector?

For a focused detector imaging an extended source, the inverse square law applies to the total light collected, not to the per-pixel intensity. As distance increases, each pixel receives 1/d² as much light from any given patch of the source, but the pixel's footprint on the source grows as d², so the two effects cancel: pixel brightness stays roughly constant while the image covers fewer pixels. Only when the source becomes smaller than a single pixel's field of view does the per-pixel signal begin to fall as 1/d².

Why is the inverse square law important in detecting signals?

The inverse square law is important in detecting signals because it helps us understand how the intensity of a signal changes with distance. This is crucial in determining the sensitivity and range of a detector, and in predicting the amount of signal that will be received at a particular distance.

How does the inverse square law affect the accuracy of a focused detector?

The inverse square law can affect the accuracy of a focused detector if the imaging geometry is not taken into account. Applying a blanket 1/d² correction to the pixel values of an extended source will over-correct, since per-pixel brightness is largely independent of distance while the source spans many pixels.

Can the inverse square law be applied to all types of detectors?

The inverse square law applies to the total power collected by many types of detectors, including those used in astronomy, photography, and radiation detection. For a focused detector viewing an extended source, however, it applies to the total collected light and the image area rather than to the per-pixel intensity, so how it manifests depends on the detector geometry and on the quantity being measured.
