Factors that influence depth of field

In summary, depth of field is influenced by several key factors: aperture size, lens focal length, distance from the subject, and sensor size. A wider aperture (smaller f-stop number) produces a shallower depth of field, while a narrower aperture increases it. Longer focal lengths create a shallower depth of field, whereas shorter focal lengths deepen it. Additionally, the closer the subject is to the camera, the shallower the depth of field will be. Finally, larger sensors typically provide a shallower depth of field than smaller sensors at equivalent settings. Understanding these factors allows photographers to creatively control focus and composition in their images.
  • #36
Oldhouse said:
Your understanding is correct; the author is confusing depth of focus with depth of field... it's a fairly common confusion as the two are interrelated, but only one is common terminology.

The depth of focus is the relative sharpness of details at the image plane, and it is a fixed characteristic of an image. It is the non-variable component of depth of field; and it is dictated only by the physical size of the aperture opening/entrance pupil (not the f/#).

Depth of field is not a fixed aspect of an image; so when people talk about depth of field as a fixed aspect, they are really talking about the depth of focus.

Depth of field is dictated only by magnification... it is how apparent the depth of focus is made to the viewer. Magnification includes all of the other variables... focal length, subject distance, sensor area/cropping/enlargement, viewing distance, and even the viewer's visual acuity.
Be careful... that's only partially correct.

"Depth of focus" refers to the allowed mechanical tolerance of placing the sensor at the plane of best focus. Therefore, it is independent of the sensor.

"Depth of field" refers to the range of object distances imaged 'in focus' at the image plane, and depends on all the stuff we have been discussing here.

Unfortunately, many people use the terms interchangeably.
 
  • #37
A.T. said:
The definition of DoF relies on the definition of the CoC which is rather fuzzy and application dependent:
https://en.wikipedia.org/wiki/Circle_of_confusion

If you define the CoC based on the original image (on film/sensor), then it is independent of the display parameters (print/screen size and viewing distance). So I would suggest clarifying how the author of the video defines the CoC.
Yes, of course it doesn't matter if you define the CoC at the sensor or on the final printed image. However, when defining the CoC, you have to take the final viewing conditions into account (no matter if you define the permissible CoC on the sensor or in the final print), else you end up with a completely useless measure. The CoC (no matter if defined on the sensor or the printed image) is usually chosen so that, when the picture is viewed, a "singular point" is still perceived as a "singular point" and not as a blurry point.
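To make that concrete, here is a minimal sketch of how the permissible on-sensor CoC follows from the final viewing conditions, using the oft-quoted industry criterion (a 0.2 mm blur circle on a 17 cm × 25 cm print viewed at 25 cm); the sensor diagonals are illustrative assumptions:

```python
import math

def coc_on_sensor(sensor_diag_mm, print_diag_mm, blur_on_print_mm=0.2):
    """Refer the permissible blur on the print back to the sensor."""
    enlargement = print_diag_mm / sensor_diag_mm
    return blur_on_print_mm / enlargement

print_diag = math.hypot(170, 250)           # 17 cm x 25 cm print diagonal
c_ff  = coc_on_sensor(43.3, print_diag)     # full frame: ~0.029 mm
c_mft = coc_on_sensor(21.6, print_diag)     # Micro Four Thirds: ~0.014 mm
```

The smaller sensor needs more enlargement to reach the same print, so its permissible on-sensor CoC is proportionally smaller.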
 
  • #38
Oldhouse said:
However, when defining the CoC, you have to take the final viewing conditions into account (...), else you end up with a completely useless measure.
I can see some uses for it:

If the final viewing conditions are unknown, one still might want to compare different optics in terms of DoF, for example based on a CoC defined as a fraction of the original image size.

If the footage is used for automated digital image processing that reads the sensor directly, one still might need to know the DoF, for example based on a CoC defined in terms of sensor pixels.

In any case, the definition of the CoC is just too fuzzy to argue about right and wrong.
 
  • #39
A.T. said:
I can see some uses for it:

If the final viewing conditions are unknown, one still might want to compare different optics in terms of DoF, for example based on a CoC defined as a fraction of the original image size.

If the footage is used for automated digital image processing that reads the sensor directly, one still might need to know the DoF, for example based on a CoC defined in terms of sensor pixels.

In any case, the definition of the CoC is just too fuzzy to argue about right and wrong.
I probably wouldn't call this comparing optics in terms of DoF, but just comparing optics in terms of the CoC as a fraction of the sensor size (which is how the "Zeiss Formula" defines the CoC). This is now all semantics... I agree, there's no real point in arguing over it.
 
  • #40
Oldhouse said:
I probably wouldn't call this comparing optics in terms of DoF, but just comparing optics in terms of the CoC as a fraction of the sensor size (which is how the "Zeiss Formula" defines the CoC).
The Zeiss Formula gives you the acceptable CoC based on sensor size, which you then can plug into the DoF formula to compute and compare the DoF for different optics. None of that requires knowledge of the actual viewing conditions.
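A minimal sketch of that workflow, assuming the commonly quoted Zeiss-style CoC of diagonal/1730 and the usual thin-lens DoF approximation; the lens and distance numbers are made up for illustration:

```python
def zeiss_coc(sensor_diag_mm):
    # Zeiss-style criterion: CoC as a fixed fraction of the sensor diagonal
    return sensor_diag_mm / 1730

def dof_mm(u_mm, N, c_mm, f_mm):
    # DOF ~ 2 u^2 N c / f^2 (thin-lens approximation, subject distance >> f)
    return 2 * u_mm**2 * N * c_mm / f_mm**2

c = zeiss_coc(43.3)                   # full-frame diagonal ~43.3 mm
lens_a = dof_mm(3000, 1.8, c, 85)     # 85 mm at f/1.8, subject at 3 m
lens_b = dof_mm(3000, 2.8, c, 35)     # 35 mm at f/2.8, subject at 3 m
# lens_b comes out with roughly 9x the DoF of lens_a
```

Note that no viewing-condition input appears anywhere: only sensor size, lens, aperture, and subject distance.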
 
  • #41
A.T. said:
The Zeiss Formula gives you the acceptable CoC based on sensor size, which you then can plug into the DoF formula to compute and compare the DoF for different optics. None of that requires knowledge of the actual viewing conditions.
No, you forget that the Zeiss Formula was derived from the DOF markings of a Zeiss Triotar lens, and the DOF markings were put on the lens with a CoC calculated for a viewing distance equal to the picture diagonal. I'm done talking about this... We are going in circles.
 
  • #42
Oldhouse said:
No, you forget that the Zeiss Formula was derived from the DOF markings of a Zeiss Triotar lens, and the DOF markings were put on the lens with a CoC calculated for a viewing distance equal to the picture diagonal.
Of course the formula was defined based on some practical consideration and not just a random ratio. But when it is applied, you don't need any data on the viewing conditions, just the sensor size. Also, for relative comparison of two optical systems it doesn't matter if you use the Zeiss-CoC or say twice the value, because it's about the ratio of the resulting DoF.

And for automated digital image processing there is no human viewing intended at all. Some algorithm reads the pixel values directly, but is limited by the CoC in terms of pixels, which then gives you the limits in terms of DoF.
 
  • #43
Oldhouse said:
TL;DR Summary: What are the actual factors that influence depth of field in photography?

The DOF we get in a picture is different if we record it on a small piece of film or on a large piece of film because we need to enlarge the small piece of film more than we enlarge the large piece of film to display it on our TV. Therefore the statement "it doesn't matter how big the wall is, it hasn't changed the depth of field" because "the image has already been formed, as it goes through the lens, before it hits the sensor" seems to be nonsensical.

then you misunderstand depth of field

The depth of focus is the relative sharpness of details at the image plane, and it is a fixed characteristic of an image. It is the non-variable component of depth of field; and it is dictated only by the physical size of the aperture opening/entrance pupil (not the f/#).

Depth of field is not a fixed aspect of an image; so when people talk about depth of field as a fixed aspect, they are really talking about the depth of focus.

I agree with Andy, having done photography for more years than I care to remember.

Depth of field and depth of focus are essentially the same thing, and the guy's comments in the video are correct when stating that the sensor size is irrelevant.

Dave
 
  • #44
davenn said:
then you misunderstand depth of field



I agree with Andy, having done photography for more years than I care to remember.

Depth of field and depth of focus are essentially the same thing, and the guy's comments in the video are correct when stating that the sensor size is irrelevant.

Dave
I would say you misunderstand depth of field. In post #15, another member even posted the formula, which makes it obvious that sensor size has an influence on DOF. Furthermore, you can simply use any depth of field calculator (for example: https://dofsimulator.net/en/) and test it yourself. Leave all parameters untouched except for the sensor size and you will see for yourself that DOF changes.
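The calculator experiment can also be sketched directly from the DoF approximation posted earlier, paired with a sensor-proportional CoC (diagonal/1730, an assumption here); everything except the sensor diagonal is held fixed:

```python
def dof_mm(u_mm, N, c_mm, f_mm):
    # DOF ~ 2 u^2 N c / f^2 (thin-lens approximation)
    return 2 * u_mm**2 * N * c_mm / f_mm**2

# Subject at 2 m, f/4, 50 mm lens; only the sensor diagonal varies.
dofs = {}
for diag in (43.3, 28.2, 21.6):        # full frame, APS-C, MFT (mm)
    c = diag / 1730                    # sensor-proportional CoC
    dofs[diag] = dof_mm(2000, 4, c, 50)
# The computed DoF shrinks as the sensor diagonal shrinks,
# with all other parameters identical.
```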
 
  • #45
At the risk of beating a dead horse, I thought of a different perspective that might clarify how the size of the image (a non-subjective measure) can impact the depth of field (a subjective measure*).

* it is possible in some circumstances to quantify a maximum permissible defocus error; in fact this calculation is required for machine vision applications.

Specifically, consider display technology. In this analogy, the minimum size of a printed 'dot' or a display pixel approximately corresponds to the size of a circle of confusion. Here, the complexities of optical systems and image recording are not relevant; additionally, the quantitative metric of interest (dots per inch or pixel pitch) is always normalized to the subjective nature of human vision and the "optimal" viewing distance.

I looked up the specifications for the following and, as best I could, converted everything to dots per inch (dpi):

Las Vegas Sphere (outside): 0.8 dpi
The Humungotron (located in my hometown): 100 dpi
Typical roadside billboard displays: 20 dpi
desktop/laptop monitor(**): 100-200 dpi
phone screen: 300-600 dpi
high-end printing services: 600-1200 dpi

(**) :) if you are looking at this on a specialized 8k or 12k display, you should already understand CoC so why are you wasting your time reading this? :)

The important observation is that for each of these, the image is considered "sufficiently sharp". For example, the ratio of phone to laptop dpi is roughly the inverse of the ratio of their viewing distances, showing how viewing distance (image size) impacts depth of field.

I think this analogy helps understand how depth of field (which depends on the size of the circle of confusion) scales with image size, which is a concept that can seem nonsensical or at least counter-intuitive.
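The laptop-vs-phone point can be checked in a few lines; the dpi values and viewing distances below are rough assumptions in the spirit of the list above:

```python
import math

def dot_arcmin(dpi, viewing_dist_in):
    # Angle one display dot subtends at the eye, in arcminutes
    return math.degrees(math.atan((1.0 / dpi) / viewing_dist_in)) * 60

laptop = dot_arcmin(150, 24)   # ~150 dpi viewed from ~24 inches
phone  = dot_arcmin(450, 10)   # ~450 dpi viewed from ~10 inches
# Both land near the ~1 arcminute acuity limit, which is why
# both displays read as "sufficiently sharp".
```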
 
  • #46
Drakkith said:
That assumption is the crux of the issue. If both images were displayed on the same monitor but at their native resolutions then the DOF of both would be the same. But since there's an assumption that the second image will be magnified to be displayed at the same size as the first then we must admit that this will reduce the DOF since it involves magnifying the image and all the blurring inherent in that image.
To this thread I chime in with my own skepticism/misapprehensions.

In my ancient world, DoF is about the maximum physical distance between objects in focus.

Stick a stake in the ground every foot from camera to horizon. Near stakes will be out of focus, far ones too. DoF defines the one in focus.

That is determined in-camera.

I fail to see how print resizing or DPI of printout will change which stakes are in focus and which stakes are out of focus. OK, unless a distant stake is only 2 printer dots wide...

Likewise, I can see how sensor size might cause some antialiasing artifacts that will overlap with DoF, since a distant stake that's only 2px wide will seem out of focus


... but I wouldn't have confabulated DPI or PPI with DoF. That seems sloppy to me.
 
  • #47
DaveC426913 said:
That is determined in-camera.

I fail to see how print resizing or DPI of printout will change which stakes are in focus and which stakes are out of focus. OK, unless a distant stake is only 2 printer dots wide...

It is not. Go to post #8 and read the posted link for a pretty easy-to-understand explanation.
 
  • #48
Oldhouse said:
It is not. Go to post #8 and read the posted link for a pretty easy-to-understand explanation.
OK, so I was correct. Today's DoF is a mixture of what I will call 'optical DoF' versus 'perceptual DoF'.

"Despite the debate, there is a standard in the photography industry: when looking at a print of 17cm×25cm at an optimum viewing distance of 25cm, a blur circle of less than 0.2mm diameter is seen as a dot and not a circle anymore. This is the diameter of the Circle of Confusion, the largest circle still perceived as a sharp point by the human eye. By default, the DoF is defined relatively to this degree of sharpness."

IOW, if it looks blurry (for whatever reason); it is declared to be blurry.

I still say this seems sloppy. Or at least application-specific. (By that I mean, it is less important what the actual causes of DoF are than what its effects/consequences are in the final format.) Put another way, it's become a practical, engineer-y factor rather than a theoretical science-y factor. A loss of data there.

The implication, as I see it, is that, by this definition, DoF can be affected by almost innumerable subjective, spurious artefactual factors:
  • looking at a photo in dim light will affect its DoF (because: pupils)
  • looking at a photo that has been printed with poor colour-calibration on an inkjet printer will affect its DoF (by smearing out the dots. All the dots.)
  • looking at a photo through a translucent sheet of paper will change the DoF, because the smallest discernible dot-as-opposed-to-circle is now huge.
  • There is literally no limit to the number of ways and conditions that can potentially alter the size of the smallest discernible dot in a picture when viewed after-the-fact.

These factors above affect the entire image equally - the entire pic equally - regardless of pic subject. (Every dot is enlarged by the same factor, it's just that some dots cross the CoC threshold and some do not.)

But we already have a term for how a given image's overall sharpness is defined. It's called sharpness.

Why redefine a term that already had a specific useful meaning? DoF used to be about a differential blurriness between disparate parts of the same photo's semantic subject (e.g. foreground vs background), as opposed to the photo as an item. Does that still have a name?

Before you pound on me: This is a rant. I acknowledge the information inherent in this definition of DoF is useful to the modern photographer; I just don't know why they need to muddle perfectly cromulent technical terms.

I'm an old school photographer, and for me, DoF is an in-camera factor (though it can be rendered in post - by differentially changing the dot sizes). I am perfectly happy to talk about dot-size and sensor size and print rez and all those things. I'm just happy to use their specific technical labels so that they do not get lost in the melting pot with other similar, yet distinct, technical artefacts.


Question: did such a definition for DoF precede or follow the advent of digital photography?
 
  • #49
DaveC426913 said:
I fail to see how print resizing or DPI of printout will change which stakes are in focus and which stakes are out of focus. OK, unless a distant stake is only 2 printer dots wide...
Blow up the photograph to a poster that's 20 feet wide and 40 feet tall and viewed from 1 foot away. Now none of the field appears to be in focus.
 
  • #50
Drakkith said:
Blow up the photograph to a poster that's 20 feet wide and 40 feet tall and viewed from 1 foot away. Now none of the field appears to be in focus.
No it doesn't. Out-of-focus is distinctive*. The giant poster appears in-focus but no longer sharp.

* For example, bokeh is a depth-of-focus effect and depends on actual distance in the image, not on image-wide dot size.

We already have terms that fit the phenomena.

Sharpness affects every element in the image, whether 2 inches or 200 miles away. In the above example they're all one-inch-diameter dots. Even the Moon, 200,000 miles away. Why call it depth of field? It makes no sense.

In this definition, your 20x40 photo has a DoF of exactly zero. Nothing is "inside the field of focus". That's a nonsensical outcome.

To-wit:

Man at 20 feet: "I love the photographer's use of depth of field. The bees are in focus while the field of flowers is not."
Man at one foot: "False. The depth of field has collapsed and vanished. This is the worst exhibit ever."

🤔
 
  • #51
DaveC426913 said:
False. It appears in-focus but no longer sharp.
Those are essentially the same thing. Besides, at least half or more of the descriptions or definitions I can find of depth of field say something similar to: the size of the area in your image where objects appear acceptably sharp.
 
  • #52
DaveC426913 said:
In this definition, your 20x40 photo has a DoF of exactly zero. Nothing is "inside the field of focus". That's a nonsensical outcome.
Of course it's a sensible outcome. A badly misaligned camera lens can produce photos that never look sharp unless you shrink down the image to where it is very small. If you viewed a picture from this camera at normal size on your monitor you would say that the entire image is out of focus. If the entire image is out of focus, or unsharp, then you could certainly say it has zero depth of field.
 
  • #53
Drakkith said:
Those are essentially the same thing.
They're not. One is a subject property. It affects different parts of the (conceptual) image.
The other is a material property. It affects the entire (actual) image unilaterally.

In the subject image with small DoF, two areas arbitrarily close together can have widely varying degrees of blurriness (say, a hair seen against a field).

Halving the DPI of the print doesn't preferentially affect the background focus while ignoring the foreground.

Drakkith said:
Besides, at least half or more of the descriptions or definitions I can find of depth of field say something similar to: the size of the area in your image where objects appear acceptably sharp.
"area"?? It's a distance, strictly radially, in the subject, from the lens.
 
  • #54
Drakkith said:
Of course it's a sensible outcome. A badly misaligned camera lens can produce photos that never look sharp unless you shrink down the image to where it is very small. If you viewed a picture from this camera at normal size on your monitor you would say that the entire image is out of focus. If the entire image is out of focus, or unsharp, then you could certainly say it has zero depth of field.
Sure. If the entire pic was shot out of focus, then it follows that the depth of field is zero.
But if I view a sharp pic through a piece of wax paper I can declare, with a straight face, "this photograph has a zero DoF"?


Again:

Man at 20 feet: "I love the photographer's use of depth of field. The bees are in focus while the field of flowers is not."
Man at one foot: "False. The depth of field has collapsed and vanished. This is the worst exhibit ever."


Likewise:

(attached image)

Left pic: high depth-of-field
Right pic: zero depth of field
?

(attached image)


Even worse:
Left pic: high depth-of-field
Right pic: zero depth of field
?
(I guess we'll just have to invent a new term to describe pics where a small range of distances in the image is in focus while other ranges are not? A pity. The technique - which has a front-row role in creative photography - is now nameless.) 🤔
 
  • #55
@DaveC426913 You are getting this completely wrong. Please define "sharpness". If possible, what would the formula be? What permissible size of circle of confusion are you choosing, and what is the process of settling on a specific size?
 
  • #56
DaveC426913 said:
Likewise:

View attachment 344410
Left pic: high depth-of-field
Right pic: zero depth of field
?

View attachment 344412

Even worse:
Left pic: high depth-of-field
Right pic: zero depth of field
Yes, it is indeed true that the right picture has less DOF than the left... absolutely!
 
  • #57
DaveC426913 said:
I still say this seems sloppy. Or at least application-specific. (By that I mean, it is less important what the actual causes of DoF are than what its effects/consequences are in the final format.) Put another way, it's become a practical, engineer-y factor rather than a theoretical science-y factor. A loss of data there.

It is not sloppy at all. It is like that by definition:
Drakkith said:
After looking into this more, the issue boils down to how DOF is defined:

##DOF \approx \frac{2u^2Nc}{f^2}##

Here ##u## is distance to subject, ##N## is the f-number of the system, ##c## is the acceptable circle of confusion, and ##f## is the focal length of the system. The value ##c##, the maximum acceptable circle of confusion, is what is important to understand here. It turns out that the circle of confusion DOES change as sensor size changes. Per wikipedia:

Image sensor size affects DOF in counterintuitive ways. Because the circle of confusion is directly tied to the sensor size, decreasing the size of the sensor while holding focal length and aperture constant will decrease the depth of field (by the crop factor). The resulting image however will have a different field of view. If the focal length is altered to maintain the field of view, the change in focal length will counter the decrease of DOF from the smaller sensor and increase the depth of field (also by the crop factor).

This also makes total sense when it comes to practical applications: If I film two people standing at different distances from the camera, and I want them both to be in focus, I need to know if the film will be shown in an IMAX theater or just on a home TV. Otherwise it can happen that the two actors are in focus (sharp) on the home TV but, when shown in IMAX, one of the actors is out of focus.
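The two-actors example can be sketched numerically with the approximation quoted above. The two CoC values are illustrative assumptions (a looser criterion for a small screen, a tighter one for heavy enlargement):

```python
def dof_mm(u_mm, N, c_mm, f_mm):
    # DOF ~ 2 u^2 N c / f^2
    return 2 * u_mm**2 * N * c_mm / f_mm**2

u, N, f = 4000, 2.0, 35            # subject at 4 m, f/2, 35 mm lens
dof_tv   = dof_mm(u, N, 0.030, f)  # generous CoC for a living-room TV
dof_imax = dof_mm(u, N, 0.010, f)  # tighter CoC for a giant screen
# ~1.57 m of DoF for the TV vs ~0.52 m for the big screen: an actor
# can sit inside the first range but outside the second.
```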
 
  • #58
Oldhouse said:
Yes, it is indeed true that the right picture has less DOF than the left... absolutely!
That is insane*.

No. That's happy.
I have decided that 'happy' has been redefined to mean something new. The concept-formerly-known-as-insane now has no word to describe it until a new term is invented, no matter how useful the CFKaI remains.

This doesn't have to be logically defensible. As long as I get enough people to agree with me (argumentum ad populum) I can get any word repurposed. And thus, any concept ... *ahem* doubleplus** inexpressible.


*no personal insult intended

**pointed literary reference
 
  • #59
Oldhouse said:
This also makes total sense when it comes to practical applications: If I film two people standing at different distances from the camera, and I want them both to be in focus, I need to know if the film will be shown in an IMAX theater or just on a home TV. Otherwise it can happen that the two actors are in focus (sharp) on the home TV but, when shown in IMAX, one of the actors is out of focus.
Categorically and demonstrably false.

(attached images)


The foreground and background of this (or any similar) image will have the variances in depth of field between fg and bg preserved from a 4-inch phone screen to a 400-foot silver screen.
 
  • #60
DaveC426913 said:
That is insane*.

No. That's happy.
I have decided that 'happy' has been redefined to mean something new. The concept-formerly-known-as-insane now has no word to describe it until a new term is invented, no matter how useful the CFKaI remains.

This doesn't have to be logically defensible. As long as I get enough people to agree with me (argumentum ad populum) I can get any word repurposed. And thus, any concept ... *ahem* doubleplus** inexpressible.

Ok, let's break this down, and you tell me how that is illogical:
See the attached image for the formula of DOF: ##DOF \approx \frac{2u^2Nc}{f^2}##.

"c" stands for the circle of confusion.

Now let's look at how the permissible CoC is defined:

"In photography, the circle of confusion diameter limit (CoC limit or CoC criterion) is often defined as the largest blur spot that will still be perceived by the human eye as a point, when viewed on a final image from a standard viewing distance."
https://en.wikipedia.org/wiki/Circle_of_confusion

The important part here is "largest blur spot that will still be perceived by the human eye as a point".

Now please tell me how you can add grain or noise without changing the "blur spot"?
If you scale the image, change the viewing distance, alter the resolution, etc., you obviously change the quality/size of the "blur spots". Therefore you change DOF.

If you still can't wrap your head around it... just go to https://dofsimulator.net/en/ and change the sensor size while leaving everything else the same... DOF changes, as you will see. So why is that, in your opinion?
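One way to see the scaling argument, as a sketch with made-up numbers: the recorded blur disc is a fixed fraction of the image, but the angle it subtends at the eye depends on display size and viewing distance, so whether it reads as a "point" (roughly the 1 arcminute acuity limit) changes with presentation:

```python
import math

def subtended_arcmin(blur_fraction, display_width_mm, viewing_dist_mm):
    # Angle the blur disc subtends at the eye, in arcminutes
    blur_mm = blur_fraction * display_width_mm
    return math.degrees(math.atan(blur_mm / viewing_dist_mm)) * 60

blur = 1 / 3000                                 # disc = 1/3000 of image width
phone  = subtended_arcmin(blur, 150, 300)       # small screen, viewed close
cinema = subtended_arcmin(blur, 20000, 15000)   # huge screen, viewed farther
# phone: ~0.57 arcmin (reads as a point); cinema: ~1.5 arcmin (visible blur)
```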
 

  • #61
DaveC426913 said:
Categorically and demonstrably false.

View attachment 344414
View attachment 344417

The foreground and background of this (or any similar) image will have the variances in depth of field between fg and bg preserved from a 4-inch phone screen to a 400-foot silver screen.
Not true at all... I actually studied Cinematography and worked as a cinematographer. You don't know what you are talking about.
 
  • #62
DaveC426913 said:
They're not. One is a subject property. It affects different parts of the (conceptual) image.
The other is a material property. It affects the entire (actual) image.
Okay.
DaveC426913 said:
But if I view a sharp pic through a piece of wax paper I can declare, with a straight face, "this photograph has a zero DoF"?
I'm not going to debate what is right or wrong if you hold various objects in front of your face.

DaveC426913 said:
Man at 20 feet: "I love the photographer's use of depth of field. The bees are in focus while the field of flowers is not."
Man at one foot: "False. The depth of field has collapsed and vanished. This is the worst exhibit ever."
Yes, it is counterintuitive, isn't it? If you believe the definition of depth of field used here is wrong, then please provide a reference supporting your position.
 
  • #63
Oldhouse said:
Now let's look at how the permissible CoC is defined:

"In photography, the circle of confusion diameter limit (CoC limit or CoC criterion) is often defined as the largest blur spot that will still be perceived by the human eye as a point, when viewed on a final image from a standard viewing distance."
https://en.wikipedia.org/wiki/Circle_of_confusion

The important part here is "largest blur spot that will still be perceived by the human eye as a point".

Now please tell me how you can add grain, or noise without changing the "blur spot"?
This is tautological.
If you accept the definition that depth of field is derived from CoC effects, then you are forced into that conclusion.
Oldhouse said:
If you scale the image, change the viewing distance, alter the resolution, etc., you obviously change the quality/size of the "blur spots". Therefore you change DOF.
Again. Even more explicit.
You state this as a premise above, then restate it as the conclusion.
Tautological.

Oldhouse said:
If you still can't wrap your head around it... just go to https://dofsimulator.net/en/ and change the sensor size while leaving everything else the same... DOF changes, as you will see. So why is that, in your opinion?
Woah. Sensor size is an in-camera effect. That will affect DoF. (Because sensors are flat.)


Oldhouse said:
Not true at all... I actually studied Cinematography and worked as a cinematographer. You don't know what you are talking about.
As did I. Let's not start waving bits. Just stick to logic.
 
  • #65
Oldhouse said:
View attachment 344419
Credit: Steven Kersting (https://photo.stackexchange.com/)


You can clearly see how the DOF changes simply by enlarging the picture. Especially noticeable if you look at the carpet.
Contrived case. Carpet is almost pure noise. Our brains have a rough time picking clarity in noise.

But the tape measure makes my case. Depth of field has been preserved across image resizing.
In both images, only 22 and 23 appear in-focus. And that's unequivocal. QED.
 
  • #66
DaveC426913 said:
This is tautological.
If you accept the definition that depth of field is derived from CoC effects, then you are forced into that conclusion.


As did I. Let's not start waving bits. Just stick to logic.
Of course I stick to the generally accepted definition of DOF... you don't get to make up your own definition.

You still haven't answered any of the posed questions... The CoC is an aspect of DOF per the definition... and the CoC is affected by resolution and other factors like grain... therefore resolution, grain, etc. have an influence on DOF. So let's stick to logic, as you say, and let us hear exactly what is wrong in the above (be specific).
 
  • #67
DaveC426913 said:
Contrived case. Carpet is almost pure noise. Our brains have a rough time picking clarity in noise.

But the tape measure makes my case. Depth of field has been preserved across image resizing.
In both images, only 22 and 23 appear in-focus. And that's unequivocal. QED.
No, even in the tape measure it is clearly visible. You only get about 3mm of DOF in the bottom image... and about 10+mm in the image on top.
 
  • #68
Oldhouse said:
No, even in the tape measure it is clearly visible. You only get about 3mm of DOF in the bottom image... and about 10+mm in the image on top.
I see 22 and 23 (or 10/11) legibly in both pics. I see no other numbers legibly - in both pics.
If you blew this up to 400 feet - or shrunk it to 2 inches - only 22 and 23 (10/11) would ever be legible.
The DoF is preserved whether you watch this on your phone or on the silver screen.
 
  • #69
DaveC426913 said:
I see 22 and 23 legibly in both pics. I see no other numbers legibly - in both pics.
If you blew this up to 400 feet - or shrunk it to 2 inches - only 22 and 23 would ever be legible.
You are clearly confusing multiple things here... just because something is legible doesn't mean it is in focus.
In the picture attached below... the number is "legible" but clearly out of focus:
 

  • #70
Oldhouse said:
... just because something is legible doesn't mean it is in focus.
In the picture attached below... the number is "legible" but clearly out of focus:
Correct. I did not say otherwise.
Red herring. Neither helps nor hurts either stance.
 
