Ultimate Digital Camera Buyer’s Guide: Compacts, DSLR and Tripods
Table of Contents
Introduction
First, congratulations! Camera technology has advanced to the point where a complete novice, using an entry-level camera right out of the box, can take photos that (under certain circumstances) appear identical to professional photos. Whether you only use your camera in the ‘automatic’ setting or explore the art of photography, the most important part of any photograph is *you*- your brain and your eyes. To paraphrase an old saying: cameras don’t take good pictures, people do. So, to the question at hand-
It depends. The best camera for you is the camera that meets your needs. While PF cannot recommend a specific camera (for various reasons), we can provide technical information that will help you better understand the specific product reviews contained in other review sites. This sticky is organized in terms of expertise- starting with zero and working up to expert. You will see that certain topics are re-visited because imaging involves certain technical trade-offs inherent in the relevant physics. PF has a dedicated group of enthusiasts who are happy to help you take better pictures!
A couple of preliminaries and definitions: ‘lens’ refers to the hunk of glass/plastic on the front of a camera that may be made up of multiple elements, and is rotationally symmetric about the ‘optical axis’. ‘Sensor’ refers to the light-detecting chip. ‘Imaging’ means the same thing as ‘taking a picture’. ’35mm format’ refers to the standard (135 film) camera format. It can be helpful to be aware of the 35mm format because much of the nomenclature is derived from 35mm film cameras. The phrase ’35mm film format’ is associated with the Kodak 135 film frame size (ISO 1007). This image frame size is 36mm x 24 mm, with a diagonal dimension of 43 mm.
Compact Point and Shoot
Question:
“I’ve never owned a camera (besides what is in my cell phone and laptop). I want to take pictures at parties and post them online or email them to my friends.”
Answer:
A compact point-and-shoot camera is likely the best option- everything is automatic, all you need to do is point the camera where you want and press the button. Because this camera does everything for you, you give up having control over most or all of the parameters discussed below. Most likely, you will either outgrow this camera or need to replace it in about 1 or 2 years.
One important thing to look for in an entry-level camera is ‘ease of use’. Chances are, you would rather take images than fiddle with camera settings. For example, are the buttons and dials easy to find and set? Is the LCD sharp and bright enough to see in sunlight? Other priorities may include the ability to take a picture quickly (to catch moving objects like animals and small children), fast startup or turn-on times, portability, etc. These cameras produce very acceptable images that can achieve a professional-quality 4″ x 6″ print. We will have a lot more to say about resolution and print size later. At this level of performance, the sensor and the lens do not limit the image resolution- usually, the automatic gain control and noise reduction do.
At this level, you don’t need to know any detailed imaging theory. However, we present some basic definitions needed to interpret the specifications on a camera and lens- these concepts will be refined in later sections. Then, we discuss what is inside the camera.
Basic definitions
A camera lens will have some numerical specifications, for example ’18-200mm f/3.5-5.6′, and the sensor will have some size specification, either a pixel count (e.g. 8 MP) or the diagonal dimension of the sensor (e.g. 1/1.7″). For the lens, the numbers ’18-200mm’ specify the rear focal length of the lens (in this case, a zoom lens with focal lengths varying between 18 and 200 millimeters). The rear focal length of a lens is not the distance between the lens and the sensor. There may be two sets of focal lengths- the actual and the ’35mm equivalent’- and this is discussed more fully below. The term ‘f/3.5-5.6’ is the minimum f-number (largest aperture) available for the lens (in this case, it varies with focal length). The f-number is defined as f/D, where f is the rear focal length and D is the diameter of the entrance pupil (defined below).
As you will see, the focal length and f-number are essential quantities to understand. For now, it’s sufficient to know that the focal length is a measure of the angular magnification of the image and, together with the f-number, is used to calculate the depth of field (DOF- the distance range over which near and far objects are in focus). The f-number relates to several image metrics besides DOF, including the optical resolution and the amount of light passing through the lens.
The f-number is usually controlled by varying the size of an aperture within the lens body. However, the size of that aperture is not ‘D’, above- D is the diameter of the entrance pupil. The entrance pupil is the projection of the aperture into object space. Put another way, when you look into a lens and see the aperture, what you see is the aperture projected through the lens; since you are looking into the front of the lens, what you see is the entrance pupil. The distinction may not appear significant, but in some cases (zoom and fisheye/ultrawide lenses and panoramic photography) it is.
Digital camera sensors are often specified in terms of the diagonal length (like televisions), and just like televisions, the aspect ratio of the sensor may be 4:3, 3:2, or 16:9. The size of the sensor controls the field of view and also limits the maximum useful size an image can be enlarged. Because the image size is fixed, many imaging properties of lenses are interrelated: field of view and focal length, for example. Because there are no standard specifications for digital sensors, the practice of quoting 35mm equivalent focal lengths is done to provide a rational means to compare different-sized sensors.
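Because the field of view follows directly from the sensor size and the focal length, the relationship is easy to sketch. Below is a minimal Python illustration using the simple pinhole-camera geometry (the function name and the specific numbers are ours, for illustration only):

```python
import math

def field_of_view_deg(sensor_dim_mm, focal_length_mm):
    """Angular field of view across one sensor dimension (pinhole model)."""
    return 2 * math.degrees(math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# 35mm-format frame (36mm x 24mm, 43mm diagonal) with a 50mm lens:
horizontal = field_of_view_deg(36, 50)  # roughly 40 degrees
diagonal = field_of_view_deg(43, 50)    # roughly 47 degrees
```

The same lens on a smaller sensor sees a narrower field of view, which is exactly why the ‘35mm equivalent’ convention exists.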
What makes a digital camera special?
A digital camera differs from a film camera in only two significant respects. First, and most obvious, the chemical emulsion (film) has been replaced by an array of electronic light detectors (pixels)- the sensor. A second key difference is what happens when you press the button to take a picture. In manual cameras, not much happens- a mechanical shutter opens for a set amount of time, exposing the film, and then closes. Digital cameras do a lot of things when that button is depressed, including light metering and focusing, all of which takes time- this is “shutter lag” and is why the camera doesn’t take a picture immediately. This can be a hassle when trying to photograph moving objects, as they may move out of the frame (or focus) before the image is acquired.
The “half-press” is a technique that becomes very useful with these cameras for a couple of reasons. First, it lets the camera get ready to take an image (light metering, autofocus, etc) while you compose the shot. Second, many cameras will allow you to ‘hold’ the setting, reposition the camera, and then take a shot without changing the setting. This can be useful if you want something off-center to be in focus, for example. Another example is dealing with scenes with large variations in brightness- bright skies and shadowed valleys. Mastering the ‘half press’ can also alleviate a lot of lag time problems.
Bayer filter
The pixels only detect the total amount of light incident; they do not distinguish colors. To generate a color image, sensor manufacturers coat the sensor with an array of color filters, and the particular pattern has been standardized as the ‘Bayer filter’: every other pixel sees green, and the remaining pixels alternate between red and blue. One important result is that the final image (say a color JPEG file) has been produced by interpolating between pixels so that each image pixel appears to have full-color information. RAW images consist of the actual individual pixel values and are used in more advanced photography because each pixel retains its original identity and the photographer/print shop has more control over the final color print.
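The Bayer layout is simple to write down. Here is a minimal Python sketch of one common arrangement (RGGB; the exact tiling varies by sensor, and this function is ours, for illustration):

```python
# A minimal sketch of an RGGB Bayer mosaic (one common layout; details vary by sensor).
def bayer_color(row, col):
    """Which single color a pixel at (row, col) records under an RGGB mosaic."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# In any 2x2 tile, two pixels are green and one each red and blue:
tile = [bayer_color(r, c) for r in range(2) for c in range(2)]
counts = {c: tile.count(c) for c in 'RGB'}
# counts == {'R': 1, 'G': 2, 'B': 1}
```

Each pixel records only one color; the other two channels at that location must be interpolated from neighbors, which is exactly the demosaicing step described above.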
Flash
Usually, these cameras have an automatic flash, so make sure you know how to turn it off: flash photography is often forbidden in museums, churches, etc. One important property of the flash is that closer objects will appear brighter than more distant ones, from the inverse-square law. This can result in a close person appearing bleached white, while the wall behind them is black. Also, because the flash is (relatively) low power, there is a limited useful range- the flash may not be able to effectively illuminate people more than a dozen feet away. Another issue related to flash photography is the “red-eye effect”: light enters the subject’s eye, bounces off the fundus, and back out of the eye- the red color comes from the blood vessels in the choroid. Because the problem is so pervasive, some cameras have a ‘red eye reduction’ mechanism which often consists of a series of bright flashes designed to contract the subject’s pupil; make sure your subject doesn’t think the picture was taken during those flashes and walk away.
Gain
These cameras almost always have a high f-number lens. This provides a large depth of field (most of your objects will be in focus) and combined with the small sensor size, produces little aberration. If the aperture is fixed, the amount of light at the sensor is controlled by the shutter speed (or if there is no shutter, the integration time). Often there will be an automatic gain correction (AGC) to control the sensor output levels to prevent bleaching (overexposure or white pixels) and underexposure- black pixels. AGC can introduce a lot of noise in your images- even indoor lighting, without a flash, can tax the gain level of your camera. Because of this, camera manufacturers add noise reduction schemes that result in a loss of fine detail.
Dynamic Range
The dynamic range of an image, like any other signal, represents the amount of (intensity) variation that can be resolved. In a well-exposed image, there are not too many underexposed black pixels or overexposed white pixels. Visual studies have shown that human vision cannot distinguish more than about 256 discrete grey levels: this is why computer displays are either 8-bit monochrome or 24-bit RGB color. Images with less than 8 bits (24 bits in color) of dynamic range appear ‘posterized’, while images with more than 8 (24) bits of dynamic range must be re-scaled before display.
Sensors can have more than 8 bits of dynamic range- there are 16-bit sensors on the market- but increasing the number of bits does not automatically increase the maximum signal-to-noise ratio that can be supported. Images with more than 8 (24) bits allow more flexibility in post-processing to adjust brightness, contrast, and color balance (discussed below).
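Posterization is easy to demonstrate numerically: re-quantizing an 8-bit intensity scale to fewer bits collapses nearby tones onto the same output level. A small Python sketch (the function and thresholds are ours, for illustration):

```python
def quantize(level, bits):
    """Reduce a 0-255 intensity to 2**bits discrete levels, scaled back to 0-255."""
    steps = 2 ** bits
    return (level * steps // 256) * (255 // (steps - 1))

# With only 2 bits (4 levels), smooth gradients collapse into visible bands:
low_bit = {quantize(v, 2) for v in range(256)}  # only 4 distinct output values
full = {quantize(v, 8) for v in range(256)}     # all 256 values survive
```

A smooth sky rendered with only four grey levels shows obvious banding- that banding is posterization.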
Optical zoom vs digital zoom
Some of these camera lenses have zoom capability. Optical zoom means the lens has a variable focal length. Digital zoom is nothing more than digitally magnifying the original image- just like you can do on your computer. Digital zoom is rarely useful and is usually combined with optical zoom to make the customer think the camera can do more than it really can. Optical zoom, by contrast, does represent a real increase in magnification and resolvable detail.
White balance
The color of objects depends on the spectral distribution of the illumination. For example, sunlit objects appear different if instead lit indoors under fluorescent lights. Film photographers used filters (gels) to compensate for or enhance this effect. Electronic cameras can do this without a filter by weighting the differently colored pixels differently. Taking a picture of a white piece of paper using different white balance settings can be instructive. Often, indoor images have an orange cast to them: incandescent lights emphasize red and orange wavelengths, and a camera balanced for daylight renders the scene too warm.
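The channel-weighting idea can be sketched in a few lines of Python. The gains below are illustrative only- a real camera derives them from its white balance setting or from analyzing the scene:

```python
def apply_white_balance(rgb, gains):
    """Scale each channel by a per-channel gain; values above 255 clip to white."""
    return tuple(min(255, round(v * g)) for v, g in zip(rgb, gains))

# A grey card photographed under warm light might read (200, 160, 120);
# gains chosen so the card comes back to neutral (illustrative numbers):
gains = (1.0, 200 / 160, 200 / 120)
balanced = apply_white_balance((200, 160, 120), gains)
# balanced == (200, 200, 200)
```

Once the gains make a known-neutral object read equal in all three channels, the rest of the scene is corrected by the same weighting.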
Live preview
This is simply using the LCD to display what the image would look like, in real-time. Different effects can be previewed before imaging.
Image stabilization
Any relative motion between the sensor and subject while an image is acquired will result in ‘motion blur’. While this can be used to emphasize isolated elements of an image (moving wheels, for example), a uniformly blurred image is unappealing. Look for some form of image stabilization- at this level, manufacturers implement it electronically- as it will help your images appear ‘sharp’.
Shooting modes
Often, cameras will have several “modes”: nighttime, landscape, people, cloudy weather, etc. These represent the manufacturer’s opinion about the best fine-tunings for the camera parameters in particular lighting situations, and may or may not be useful to you. One popular mode is ‘panoramic’: multiple images are (later) combined to produce an image covering a field of view much larger than a single image. While there are some techniques to properly rotate the camera, the manufacturer’s software is usually very forgiving of alignment errors- images will be stretched and distorted slightly to produce a seamless transition.
Video is mentioned here only because some sort of video capability is becoming a standard feature. This sticky does not address issues specific to video other than general optical concepts that apply.
Bridge Cameras
- I’ve never owned a camera, but I want to get a ‘real’ camera (that doesn’t cost too much).
- I’ve owned an entry-level camera already and I want to take the next step.
Usually called ‘bridge cameras’, ‘prosumer cameras’, or ‘micro cameras’, these cameras allow manual control over various parameters: the f-stop, shutter speed, ISO setting, etc. The lens is usually attached permanently. This type of camera can remain useful for about 3-5 years before technological improvements make it obsolete. This is where life gets interesting- there is a huge range of options, performance, and pricing. There is a bewildering array of choices, and cameras can be as small as a wallet, look like a beefed-up point-and-shoot, or resemble a DSLR (digital single-lens reflex). As a result, you should take some time to learn a few essential facts and concepts about optical imaging to determine what performance metrics are important to you. Furthermore, the amount of optional electronic processing available greatly increases- automatic face recognition, for example. Whether these ‘features’ enhance or inhibit your ability to take a quality image depends entirely on you; so make sure you can at least return all settings to factory default or turn them off. This section is longer than the other two because it assumes you do not know any optics or imaging theory.
Focal length
This is one of three key concepts in imaging. The focal length of a lens does not refer to the ability to focus. The focal length is one of the design parameters of a lens and relates to the angular magnification. The angular magnification of a lens is equal to the ratio of back-to-front focal lengths. Thus, for example, a long focal length telephoto lens has a high angular magnification while a short focal length wide-angle lens has a low angular magnification (sometimes less than 1). Angular magnification is not the same as the reproduction ratio: the magnification of the image with respect to the object. The reproduction ratio of most camera lenses is very small- the image of a person on the sensor is much smaller than the actual person, for example- even though the angular magnification may be very large (imaging someone who is far away).
By convention, the designated focal length of a camera lens can also be stated as ’35mm equivalent focal length’. This does not mean the actual focal length is useless information- it is needed to calculate the depth of focus and hyperfocal distance. If the actual focal length of the lens is specified and the sensor size is smaller than the 35mm format, the field of view of the image will correspond to a 35mm image taken with a lens whose focal length is longer than the specified value. For example, a 35-150mm zoom lens on a four-thirds camera will produce images that have the same field of view as a 35mm camera using a 70-300mm zoom- this can be calculated knowing the ‘crop factor’ of the sensor (discussed below). Knowing the 35mm equivalent focal length is helpful when composing your image- human vision is very nearly equivalent to a 50mm lens (35mm format).
One final note about focal length is regarding how your brain constructs depth information from a photo. The image is flat; your brain adds depth perspective to the image based on how your brain has learned to see with your eyes. Images taken with a 50mm lens in a 35mm format will appear very natural to your brain. Lenses with a shorter focal length will produce images that your brain will interpret as having exaggerated depth; conversely, telephoto lenses produce images that your brain will interpret as being compressed in depth. Part of this is related to the way angular magnification changes the relative sizes of near and far, but it also depends on how your brain extracts 3-D information from a 2-D image.
f-stop/aperture setting
The second key concept. The f-stop (and related concepts like numerical aperture) is probably the most important concept in imaging theory. Camera lenses can be set to a series of discrete f-stop values (f-numbers). The f-number sequence is logarithmic, defined by starting with ‘1’ (f = D) and then choosing aperture diameters that successively halve the amount of light passing through the lens: the sequence begins 1, 1.4, 2, 2.8, etc. Photographic lenses are specified in terms of their focal length and the minimum f-number available (largest aperture diameter): for example, a 50mm f/1.4 lens, or a 70-200mm f/5.6 zoom lens. You don’t need to memorize the sequence, but it can be useful to know what ‘going up an f-stop’ means. Some cameras allow for half-stop or third-stop increments, which is important for ‘exposure bracketing’ (see below).
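Because light gathering scales with aperture area (so with 1/N² for f-number N), the full-stop sequence is just successive powers of √2. A short Python sketch (rounding to one decimal; marketing rounds a little differently, e.g. 5.7 is sold as 5.6 and 11.3 as 11):

```python
# Each full stop multiplies the f-number by sqrt(2), halving the light admitted.
stops = [round((2 ** 0.5) ** i, 1) for i in range(8)]
# stops begins [1.0, 1.4, 2.0, 2.8, 4.0, ...]

def light_ratio(n1, n2):
    """How much more light an f/n1 aperture admits than f/n2 (same focal length)."""
    return (n2 / n1) ** 2
# 'opening up one stop' doubles the light: light_ratio(1.0, sqrt(2)) == 2
```

This is why each step in the familiar 1, 1.4, 2, 2.8, 4, 5.6, 8, 11 sequence halves the exposure.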
Just as the focal length has nothing to do with the focus distance, the aperture diameter is not the size of the front element. The aperture of a lens is a surface within the lens and typically consists of an adjustable iris. A primary effect of varying the f-stop is to vary the depth of field. Two other important optical results are that both lens aberrations and optical resolution increase as the aperture increases. Most lenses on bridge cameras have moderately high f-numbers (say f/3.5 and up). This, in conjunction with the small sensor size, keeps levels of aberration down and the depth of field high. High f-numbers correspond to an accurate paraxial approximation (sin θ ≈ θ), so the dominant aberrations are the ‘primary’ aberrations (discussed below). At high f-numbers, it is fairly easy to achieve excellent aberration correction, and images taken with these cameras will look good- uniform focus across the image, for example. Two important exceptions are distortion and chromatic aberration- these do not depend on the aperture, and thus may be the primary residual aberrations. Aberrations are discussed more fully below.
A note regarding zoom lenses: zooms have become the dominant consumer lens, but are significantly more complex than fixed focal length (‘prime’) lenses and difficult to discuss in an introductory essay. The way elements within a lens move during focus is much different than how they move during a zoom. Zoom works by changing the magnification of a front group of elements, a rear group, or both. If zoom changes only a front group, the change in the focal length of the lens is exactly the same as the change in the diameter of the entrance pupil, so the f-number remains constant during zoom. If the magnification of the rear group changes, the f-number will change as the lens zooms. In practice, most lenses do the majority of their zooming with the front group, allowing the zoom to retain most or all of the maximum aperture setting. Again, the lens specification should indicate by how much (if at all) the maximum aperture changes during zoom.
Exposure time
the third key concept. On one hand, setting the exposure is a trivial matter- long enough to get sufficient light onto the sensor. If you are imaging moving objects, it’s a little more complicated- a long exposure will result in motion blur (which may or may not be desirable), so if you want to freeze the motion (short exposure), you have to use alternate methods- a flash, increase the aperture, or increase the ISO setting. The ISO setting is an adjustment to the electronic gain at the sensor- a higher ISO means more gain, which introduces more noise. Some cameras let you operate in ‘aperture priority’ or ‘shutter priority’, which means you actively control one (f-stop or shutter speed), and the camera optimizes the image by adjusting the other parameters automatically.
Resolution
Now we come to a concept that is misunderstood and often the subject of spurious claims. A detailed discussion of resolution is beyond the scope of this note, so for now we will simply distinguish between the maximum enlargement you can produce via a print and the ultimate resolution limit due to the lens itself. The print size is different from the display size: 72 dpi (dots per inch) looks great on your monitor, but terrible on a print. The professional standard for printing is 300 dpi. That may sound like a lot, but based on that specification, a 2 MP camera can produce a professional-quality 4″ x 6″ print. An 8 MP camera has a sufficient pixel count to produce a professional-quality 8″ x 10″ print. Higher pixel-count (larger) sensors allow you to crop out smaller regions (boring parts of the image) and still retain the ability to produce large professional-quality prints.
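The 300 dpi arithmetic is worth checking for yourself. A minimal Python sketch (the function name is ours):

```python
def min_megapixels(width_in, height_in, dpi=300):
    """Pixel count (in megapixels) needed to print at a given size and density."""
    return (width_in * dpi) * (height_in * dpi) / 1e6

four_by_six = min_megapixels(4, 6)    # 2.16 MP: within reach of a 2 MP camera
eight_by_ten = min_megapixels(8, 10)  # 7.2 MP: within reach of an 8 MP camera
```

Running the same function for larger sizes quickly shows why poster-sized prints demand either more pixels or a greater viewing distance.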
The ultimate resolution a lens and sensor can deliver depends on the f-number, the degree of aberration correction, and the pixel size. For an aberration-free lens, points at the object are mapped to Airy disks at the image, the size of which is characterized by the distance from the central peak to the first minimum and is given by the Rayleigh criterion, which for visible light (0.5-micron wavelength) reduces to r = 0.6*(f-number) [in microns]. An f/4 lens produces an Airy disk radius of 2.4 microns while stopping down to f/16 produces Airy disk radii of 9.6 microns. Pixel sizes should be not much larger than given by the Rayleigh criterion, or the sensor will limit the attainable resolution (Nyquist’s sampling theorem). For this example, pixel sizes greater than about 3 microns on a side will result in sensor-limited resolution at f/4. A brief survey of pixel sizes currently in production indicates the typical camera has pixels 2-3 microns on a side, meaning the attainable resolution is more dependent on the lens than the sensor unless you are imaging with a fast (f/2.8 and below) lens.
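The Rayleigh-criterion numbers above are straightforward to reproduce. A short Python sketch (the exact coefficient is 1.22·λ; the text's 0.6·(f-number) rule comes from rounding 1.22 × 0.5 µm):

```python
def airy_radius_um(f_number, wavelength_um=0.5):
    """Airy-disk radius (central peak to first minimum) for an aberration-free lens."""
    return 1.22 * wavelength_um * f_number

r4 = airy_radius_um(4)    # about 2.4 microns
r16 = airy_radius_um(16)  # about 9.8 microns (the rounded 0.6 rule gives 9.6)
```

Comparing these radii against a sensor's pixel pitch tells you which component- lens or sensor- sets the resolution limit, per Nyquist's sampling argument in the text.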
Enlarging an image past the performance limits of the lens (or digital enlargement beyond the capabilities of the sensor) results in ’empty magnification’; blur circles simply become larger blur circles. The rule of thumb is that empty magnification begins at an enlargement of 500/f-number. It is important to note that this rule of thumb is violated for camera sensors due to the Bayer filter and interpolation. Thus, using an f/4 lens on a 1/1.7″ sensor limits the ultimate size of the image to less than 2m on the diagonal; recall that at 300 dpi, the maximum professional-quality print size will be about 8″ x 10″. To be sure, you could make a large poster out of the image, but it will only look professional from a distance.
Crop Factor
The crop factor is the size of the sensor relative to the 35mm format. For example, a crop factor of 1.6x means the camera sensor’s diagonal length is about 26.8 mm. Each manufacturer uses its own sensor format, so it is helpful to refer to the crop factor because the sensor is then expressed in terms of a standard- the 35mm standard. To calculate the 35mm equivalent focal length for your camera, simply multiply the actual focal length by the crop factor. Thus, a 200mm f/5.6 lens on a 1.6x sensor will frame like a 320mm f/5.6 lens mounted on a 35mm camera. Note that the f-number has not changed- the focal length and f-number of a lens are intrinsic properties of the lens. By changing the image size, different magnifications are needed to generate identical display sizes.
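The conversion runs both ways, and it is worth keeping the two directions straight. A minimal Python sketch using the 200mm example from the text:

```python
def equivalent_focal_length(actual_mm, crop_factor):
    """35mm-equivalent focal length: the actual focal length times the crop factor."""
    return actual_mm * crop_factor

def actual_focal_length(equivalent_mm, crop_factor):
    """Going the other way: divide the 35mm-equivalent value by the crop factor."""
    return equivalent_mm / crop_factor

framing = equivalent_focal_length(200, 1.6)  # 320: the example from the text
```

So a lens sold as ‘a 28mm equivalent’ on a 1.6x body is actually a 17.5mm lens, and its depth-of-field behavior follows from that actual focal length, not the equivalent one.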
Depth of field
A precise depth of field calculation is difficult to perform since perfect focus exists only in a single plane. This subject is covered in detail on many other photography sites, so we will not repeat it here. A (relatively) simple formula can be written down using lens and camera parameters: the actual focal length f, the f-number F, the magnification m (the reproduction ratio- the size of the image divided by the size of the object), and the diameter c of the ‘circle of confusion’. The circle of confusion is the size of the blurred spot that your eye can barely resolve. Based on studies of visual acuity, c = 30 microns for 35mm format images. The formula is fairly straightforward:
[tex] DOF = \frac{2f\frac{m+1}{m}}{\frac{fm}{Fc}-\frac{Fc}{fm}}[/tex]
where DOF is the distance range over which objects will appear in acceptable focus. For your camera, c = 30/(crop factor) microns. As specific examples, using the 35mm standard, a 50mm f/1.4 lens focused 10 m away has a depth of field of 3.5m, while the same lens stopped down to f/11 has an infinite depth of field (near focus = 4.3 m) (this is discussed below, under ‘hyperfocal distance’). By contrast, a 200mm f/5.6 focused 100m away has a DOF = 102m. A 1/1.7″ sensor using a 10mm f/5.6 lens will render all objects between 1.3m and infinity in focus. Small-sensor digital cameras often do not have any out-of-focus components in an image. There is a multitude of free online DOF calculators available that have the relevant data for nearly all cameras on the market.
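The formula above translates directly into code, and reproduces the worked examples. A Python sketch (the function name and the thin-lens estimate m ≈ f/(s − f) are ours; everything is in millimeters):

```python
def depth_of_field_mm(f_mm, f_number, subject_dist_mm, coc_mm=0.03):
    """DOF from the formula above. m is the reproduction ratio, estimated from the
    thin-lens relation m = f / (s - f). Returns inf past the hyperfocal distance."""
    m = f_mm / (subject_dist_mm - f_mm)
    term = (f_mm * m) / (f_number * coc_mm)  # the fm/(Fc) term
    denom = term - 1 / term
    if denom <= 0:
        return float('inf')  # far limit extends beyond infinity
    return 2 * f_mm * (m + 1) / m / denom

# 50mm f/1.4 focused at 10m (35mm format): DOF is roughly 3.4-3.5m, as in the text.
dof = depth_of_field_mm(50, 1.4, 10_000)
```

The same function returns infinity for the 50mm lens stopped down to f/11 at 10m, matching the hyperfocal example in the text.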
ISO
ISO refers to the International Organization for Standardization and, in imaging, originally denoted a standard rating of the sensitivity of film to light. The nomenclature carries over to digital imaging, where it refers to a level of gain (amplification) applied to the sensor output. In conjunction with the exposure time, the ISO setting adjusts your camera’s light sensitivity: doubling the ISO setting doubles the sensitivity. This can be used to retain a fast shutter speed to capture a moving target. Again, increasing the ISO setting increases the amount of noise present, and different manufacturers use their own schemes to reduce the noise levels. It’s not uncommon to find cameras with ISO settings up to 6400; ISO 102,400 is not unheard of. Because digital technology is so different from film, this standard has come under renewed scrutiny; however, as a rule of thumb, daylight imaging should be performed at ISO 100-200, indoor imaging at ISO 400-800, and nighttime imaging at ISO 1600 and above.
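The trade between ISO and shutter speed is simple reciprocal arithmetic (aperture held fixed). A short Python sketch, with illustrative numbers:

```python
def equivalent_shutter_s(base_shutter_s, base_iso, new_iso):
    """Exposure stays constant if shutter time scales inversely with ISO
    (aperture fixed). Higher ISO buys a faster shutter at the cost of noise."""
    return base_shutter_s * base_iso / new_iso

# 1/30s at ISO 100 gives the same exposure as 1/240s at ISO 800:
fast = equivalent_shutter_s(1 / 30, 100, 800)
```

Three stops of ISO (100 → 800) buys three stops of shutter speed- often the difference between a blurred and a frozen subject indoors.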
Exposure bracketing
Getting a well-exposed image can be tricky, and there may not be a lot of time to adjust settings. Photographers learned a long time ago to perform ‘exposure bracketing’: instead of taking a single image, they take a series of images spanning an exposure interval- for example, three images in 1/2-stop increments spanning a full stop (-1/2, 0, +1/2). Digital cameras can achieve this by varying the f-number of the lens (shutter priority mode), the exposure time (aperture priority mode), or both. Some cameras allow continuous shooting of an exposure bracket by simply repeated pressing of the shutter release.
Frame rate
If you are interested in photographing moving objects, you may be interested in how fast a camera can take continuous images. The rates can vary, and some very clever autofocus routines have been devised to allow continuous autofocus while imaging a moving target.
Image Histogram
Now that you are becoming familiar with the parts of a camera and how to control the amount of light incident on the sensor, you should understand a basic ‘quality metric’ of the image- the histogram. A histogram is nothing more than a graphical representation of the intensity levels in your image (a graphical representation of the dynamic range). There may be individual histograms for each (R,G,B) color, or one overall histogram. Either way, using the histogram will help ensure that your image is not underexposed (many black pixels) or overexposed (many white pixels).
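The ‘piled up at the ends’ check a photographer performs by eye is easy to express in code. A minimal Python sketch (the thresholds and tail fraction are illustrative choices of ours, not a standard):

```python
def histogram_flags(pixels, low=8, high=247, tail_fraction=0.05):
    """Flag likely under/over-exposure from pixel intensities (0-255): too many
    near-black or near-white pixels suggests clipped shadows or highlights."""
    n = len(pixels)
    under = sum(1 for p in pixels if p <= low) / n
    over = sum(1 for p in pixels if p >= high) / n
    return under > tail_fraction, over > tail_fraction

# A frame that is mostly near-white reads as overexposed:
underexposed, overexposed = histogram_flags([250] * 90 + [128] * 10)
# (False, True)
```

Cameras do essentially this when they blink clipped highlights on the review screen.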
Three ways to control the brightness of the image have been discussed- adjust the aperture, adjust the exposure time, and adjust the electronic gain. But, there are consequences to making adjustments to any of them- opening the aperture decreases the depth of field and increases the aberrations (discussed below), increasing the exposure time can lead to motion blur, and increasing the gain increases the amount of noise. Learning how to control these elements in your image will enable you to take better photographs (if that’s a goal).
Post-processing
Often, these cameras will have a multitude of ‘on-chip’ image processing options available: you can adjust the contrast, sharpness, saturation, etc. Whether or not you use these is up to you. Many manufacturers will include a basic image processing program bundled with the camera hardware. In addition to commercial programs, a free open-source image processing program (ImageJ) is available that you can download and use to manipulate your images- correcting the brightness and contrast, color balance, cropping, etc.
Pixel size vs signal to noise
Here is another trade-off. Smaller pixel sizes can increase the ultimate resolution and maximize the final print size, but smaller pixels also have smaller light-sensitive areas and thus the sensor needs more light to generate a good signal-to-noise ratio. As we saw above, pixel sizes smaller than 2 microns on a side will not generally increase the attainable resolution in digital cameras. Micro cameras found in cell phones have pixel sizes approaching 1 micron on a side and intensified cameras used in low-light applications often have pixel sizes around 15 microns.
Macro imaging
Macro imaging occupies the space between photography and microscopy. Objects are small (but not too small), and like microscopy, images are usually described in terms of the reproduction ratio. Macro lenses are designed to work close-focus and come in a variety of focal lengths. Longer-length macro lenses allow macro imaging of objects that are farther away: photographing insects at a distance, for example. Because the lens operates at close focus, control of the depth-of-field is critical; often the aperture is set very small and a flash (or several flashes) is used to allow a reasonably fast shutter speed.
Autofocus
Autofocus is a complex multicomponent closed-loop control system consisting of a sensor (different than the image sensor), a control circuit, and a motor-driven lens. The time needed to find the best focus will vary with the aperture setting. Often, the manufacturer will have several autofocus modes: focus on the center of the image, find the best focus over the entire image, or some other focusing algorithm. While some autofocus systems may be active (they emit IR light or ultrasound), most digital cameras use passive autofocus systems- either contrast measurement or phase detection. In general, phase detection is faster and more accurate, but both methods are constantly being improved. “Trap focus”, or “catch in focus” are mechanisms that allow the camera to use autofocus as a detector, acquiring an image when an object passes into the focal plane- this is very useful for photographing fast-moving objects.
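The contrast-measurement method mentioned above can be sketched in a few lines: the camera racks focus and keeps the position whose image patch maximizes some sharpness metric. This toy Python version uses summed squared neighbor differences (a simplification of ours; real systems use more robust metrics and search strategies):

```python
def contrast_score(pixels):
    """A simple sharpness metric: sum of squared differences between neighbors.
    Blurred patches have gentle gradients and so score low."""
    return sum((a - b) ** 2 for a, b in zip(pixels, pixels[1:]))

# One image slice per candidate focus position (illustrative values):
slices = {
    'near': [10, 12, 11, 13],  # blurred: small neighbor differences
    'best': [0, 50, 0, 50],    # sharp edges: large differences
    'far':  [20, 25, 22, 24],
}
chosen = max(slices, key=lambda k: contrast_score(slices[k]))
# chosen == 'best'
```

This also shows why contrast detection can hunt: the metric says nothing about which direction to move, so the lens must step past best focus to confirm it, whereas phase detection measures direction and distance directly.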
Sharpness
Another misused concept. At a minimum, “sharpness” implies a well-focused image using a well-corrected lens (aberrations are discussed below) with no motion blur. A related term is ‘acutance’, which describes the edge transitions between light and dark: a high-acutance image supports very well-defined and rapid variations between bright and dark (high-contrast edges). Due to image processing in human vision, a high-contrast, slightly blurred image will be perceived as sharper than a better-focused, lower-contrast image.
Memory cards
Most cameras require a memory card to store images. In addition to a direct transfer of images from the camera to your computer, the camera will usually record the image onto a removable memory card that can permit much faster data transfer rates. A typical standard format is Secure Digital (SD), and new formats are introduced with some regularity to increase data throughput. The camera may come with a card, or it may not- in any case, it’s worth getting one with as much memory as you can afford. To use the memory card with your computer, you will most likely need a ‘card reader’ that essentially converts your memory card into a USB memory stick.
DSLR
These cameras were originally designed for professionals. Users are expected to already understand basic digital photography technique, and the in-camera image processing should be viewed as enhancing good technique, not compensating for poor technique. One key distinction between a DSLR and a bridge camera is that the lens on a DSLR is removable. Lens performance is more critical here- a high-quality lens will continue to deliver excellent performance long after the camera body is obsolete. A second key distinction is the sensor size: sensors in DSLRs approach or even exceed the 35mm format (there are also some digital medium-format cameras on the market). A DSLR and a well-corrected lens, the product of over 100 years of continuously improved optical design, can approach the limit of what is physically possible, so in this section we present some additional details of imaging theory. Just as with film cameras, each manufacturer has its own proprietary ‘lens mount’, which can make switching between manufacturers problematic if you have already spent money on a good lens. These include the Nikon F-mount, Sony E-mount, Pentax K-mount, Leica M-mount, and Canon EF mount. Shutter lag in DSLRs is generally nonexistent.
Classification of lenses
Lenses are classified by their 35mm-equivalent focal lengths. The standard lens is a 50mm lens; on the 35mm format, a 50mm lens produces an image that nearly matches normal vision in both magnification and field of view. Lenses with shorter focal lengths are ‘wide angle’ lenses until you get down to about 10mm or less; these are ‘fisheye’ lenses and produce images with fields of view sometimes exceeding 180 degrees (a full hemisphere). Lenses between 70mm and 90mm are usually referred to as ‘portrait’ lenses, while longer focal lengths are ‘telephoto’ lenses that at the extreme (1200mm and up) are indistinguishable from telescopes. Zoom lenses have become more common; they use complex movements of lens elements to change the focal length while keeping the focus (largely) unchanged. The standard ‘kit’ lens offered with a DSLR is usually a zoom lens; you do not *have* to get this lens with your camera. Tilt-shift lenses (or ‘perspective control’ lenses) allow you to tilt or shift the lens with respect to the sensor plane- the effect is to remove the converging-lines perspective from a photograph of a tall building, for example. Tilting also allows imaging under the Scheimpflug condition, where the plane of best focus is not parallel to the sensor plane (useful for photographing hillside landscapes, for example). Macro lenses are designed for close-up focus and reproduction ratios approaching or exceeding 1:1.
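The fields of view implied by these focal lengths can be computed directly. Here is a minimal sketch for rectilinear lenses on the 35mm format, using the 43mm frame diagonal mentioned in the introduction (the fisheye projection is different and is not covered by this formula):

```python
import math

# Diagonal of the 36mm x 24mm full-frame format: sqrt(36^2 + 24^2) ~ 43.27mm
FULL_FRAME_DIAGONAL_MM = 43.27

def diagonal_fov_deg(focal_length_mm):
    """Diagonal field of view (degrees) of a rectilinear lens on 35mm format."""
    return math.degrees(2 * math.atan(FULL_FRAME_DIAGONAL_MM / (2 * focal_length_mm)))

for f in (20, 50, 85, 200, 1200):
    print(f"{f:5d} mm -> {diagonal_fov_deg(f):5.1f} degrees diagonal")
```

The 50mm result (roughly 47 degrees) is the usual justification for calling it the ‘standard’ lens.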
Just as the focal length of a lens is unrelated to the distance at which it is focused, the focal length is not the same as the distance between the lens and the sensor. Wide-angle ‘retrofocus’ designs make the back focal distance (rear element to sensor) larger than the focal length, placing the rear principal plane in the air behind the rear element- useful for clearing the mirror box of an SLR. ‘Telephoto’ designs do the opposite: the rear principal plane sits out in front of the front element, making the lens physically shorter than its focal length.
Something to consider as well is if the camera can be operated with *no* lens attached. This will give you additional flexibility in the choice of lenses and the ability to work with a bellows attachment- but then you most likely have to work in ‘full manual’ mode.
Everyone has their own opinions about what lens (or lenses) are ‘the best’. Lens performance is critical here, and time spent researching to get the best possible lens you can afford will result in a lens you will be very happy with for a long time.
Lens aberrations
This is also a large subject, so only a brief synopsis will be presented here.
Aberrations occur when the paraxial condition sin(θ) ≈ θ fails. The paraxial approximation is very accurate for small angles: the error is about 0.17% at f/5 and about 1.3% at f/1.8. The next term in the sine expansion (-θ³/3!) is the dominant error term, and gives rise to the ‘3rd order aberrations’, also called “Seidel aberrations” or ‘primary aberrations’. Each primary aberration (piston, tilt, defocus, distortion, coma, field curvature, astigmatism, and spherical) represents an independent deviation of the aberrated wavefront with respect to a reference sphere. These deviations are:
Piston- Piston is a constant shift in the wavefront phase.
Tilt- Tilt is a (spatially) linear shift in the wavefront phase. Neither piston nor tilt affects image quality, and both are usually neglected. Similarly, we will pass over defocus, as camera lenses can adjust the focus.
Distortion- Distortion is the variation of magnification with image height. Straight lines do not remain straight: with barrel distortion, lines bow outward; with pincushion distortion, they bow inward. Distortion can vary with focus distance and is very noticeable: you can easily detect 0.5% distortion. It is often the dominant aberration in camera lenses because, in contrast to the other primary aberrations, the amount of distortion does not vary with aperture size. Fisheye lenses (intentionally) have distortion typically approaching 100%. Landscape and architectural photography in particular are very unforgiving of distortion.
Coma- Coma is defined as the variation of magnification with aperture: rays crossing the aperture plane at different heights arrive at the image plane at different heights. Points appear as small ‘comet’-shaped blobs (hence the name). This is particularly distracting when looking at point sources, e.g. stars or distant lights.
Field curvature/Petzval curvature- The image plane is not flat; it is instead a section of a sphere. The center of the image is in focus while the periphery is out of focus, or vice-versa. “Plan” lenses are corrected for field curvature over 95% of the image.
Astigmatism- This aberration, like coma, breaks the rotational symmetry of the optical system. The two orthogonal directions are called ‘tangential’ and ‘sagittal’, and rays in these planes focus at two different image planes. The effect is that defocus blur is preferentially oriented, becoming rotationally symmetric at an intermediate plane of focus (the ‘medial’ plane). ‘Anastigmatic’ lenses are corrected for astigmatism.
Spherical aberration- This aberration has become more familiar through discussions of ‘bokeh’ quality (defined below). Spherical aberration is defined as the variation of focus with aperture: rays that cross the aperture stop at different heights are focused at different planes. Spherical aberration is always present in lenses made entirely of spherical surfaces. ‘Aplanatic’ lenses are corrected for both spherical aberration and coma, often by including an aspherical element. The use of aspherical surfaces to fully correct spherical aberration is becoming more common as manufacturing technology improves.
Chromatic aberrations- These are not related to the aberrations above, but instead arise from the dispersion of the lens glass. The two primary forms are lateral and axial chromatic aberration. Lateral chromatic aberration makes in-focus edges appear fringed with rainbows, while axial chromatic aberration makes points appear as colored bursts (and is associated with spherochromatism, below); this effect is more pronounced in out-of-focus parts of an image. The use of different glass types (e.g., the crown-flint achromatic doublet) reduces chromatic aberration. ‘Achromatic’ means two colors focus at the same plane; ‘apochromatic’ means three colors focus at the same plane; ‘superachromats’ are corrected for four colors. Achromats are corrected for a blue and a red wavelength (generally the 486.13 nm hydrogen F-line and the 656.28 nm hydrogen C-line); apochromats are also corrected for an intermediate wavelength (generally the 587.56 nm helium d-line).
The variation of spherical aberration with wavelength is called ‘spherochromatism’, and this can be a dominant aberration in a well-corrected lens. Often the term ‘purple fringing’ is used to describe the effect, as objects will have dominant magenta or purple features.
As the aperture increases, aberrations grow in magnitude, and higher-order aberrations become non-negligible as well (the 5th-order term is θ⁵/5!, then 7th order, and so on). Some high-end lenses are corrected out to the 9th order.
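The magnitude of the paraxial error quoted above can be checked in a few lines. A short sketch- note that approximating the marginal-ray half-angle as θ ≈ 1/(2N) at f-number N is a thin-lens estimate, an assumption on our part:

```python
import math

def paraxial_error(f_number):
    """Relative error of the approximation sin(theta) ~ theta for the
    marginal ray, using the thin-lens half-angle theta ~ 1/(2N)."""
    theta = 1.0 / (2.0 * f_number)
    return (theta - math.sin(theta)) / math.sin(theta)

for n in (16, 5, 2.8, 1.8, 1.4):
    print(f"f/{n:<4} paraxial error = {paraxial_error(n):.2%}")
```

The error grows rapidly as the lens gets faster, which is why fast lenses are much harder to correct.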
Falloff
Images will be brighter in the center than at the periphery. The edges of the image correspond to large incident angles of illumination, so the geometric obliquity factor cos⁴(θ) becomes important: because the projected areas of both the object and the sensor with respect to the optical axis shrink by factors of cos(θ), the intensity varies as cos²(θ) for each, leading to a total variation of cos⁴(θ). Falloff is especially noticeable with wide-angle lenses; imaging at low f-numbers results in the edges of the image being noticeably darker than the center. This can be used to your advantage, by naturally drawing the viewer’s attention to the center of the image. Digital cameras may incorporate ‘flat field correction’ to compensate for this.
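The cos⁴ law is easy to tabulate. A minimal sketch converting the falloff at a given field angle into f-stops:

```python
import math

def cos4_falloff_stops(field_angle_deg):
    """Illumination falloff, in f-stops, at a given field angle off the
    image center, from the cos^4 obliquity law."""
    ratio = math.cos(math.radians(field_angle_deg)) ** 4
    return -math.log2(ratio)

for angle in (10, 20, 30, 45):
    print(f"{angle:2d} deg off-axis: {cos4_falloff_stops(angle):.2f} stops darker")
```

At 45 degrees off-axis (well within the field of an ultra-wide lens), the geometric falloff alone is a full two stops.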
Flare
Flare is non-imaging light that does not pass normally through the lens, but instead enters at an extreme angle and reflects off an interior surface before reaching the sensor. The usual effect is a row of small bright images of the aperture (aperture ghosting). Typically associated with sunlight, lens flare can be controlled by a variety of methods, including the use of a lens hood, coating interior structures with diffusing black paint, and adding internal baffles. Glare can also arise from reflections between a filter and the front element, or between the first few front elements of the lens itself; often the result is a desaturation of color.
Bokeh
Bokeh is a Japanese term, added to the photographic vocabulary fairly recently. It refers to the way out-of-focus objects are rendered, forming a compositional element of the overall image. Pleasing bokeh in background objects is produced by undercorrected spherical aberration, with the result that out-of-focus bright objects blur gently into the background. Overcorrected spherical aberration produces bokeh characterized by a bright halo around the background object, which is generally considered unattractive.
Image stabilization
There are situations where camera motion becomes problematic: long shutter times (producing motion blur) and long telephoto lenses (high magnifications). Also, these cameras and lenses are often *heavy*. The image can be stabilized using a variety of technologies: the most basic is a tripod or monopod. With the advent of electronic sensors, manufacturers have introduced motion-compensating mechanisms within the camera body and/or within the lens itself. Different manufacturers use different motion-compensation technologies, and there are debates about the relative advantages of each. Regardless, image stabilization allows you to take sharp images at shutter speeds several stops slower than would otherwise be possible. ‘Mirror lockup’ is a technique developed to allow mechanical vibrations from the moving mirror to damp out before exposing the film: pressing the shutter once raises the mirror, and pressing it again exposes the sensor. Lastly, a shutter release cable (or remote electronic trigger) can be used to prevent camera motion during the press of the shutter release.
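Each stop of stabilization corresponds to a doubling of the longest usable exposure time. A sketch, assuming the common 1/(focal length) hand-holding rule of thumb as a baseline (that rule is an assumption on our part, not a claim from this guide):

```python
def stabilized_shutter(base_shutter_s, stops):
    """Longest usable exposure with stabilization rated at 'stops' of
    improvement: each stop doubles the exposure time."""
    return base_shutter_s * (2 ** stops)

# Assumed baseline: ~1/200 s hand-held for a 200mm-equivalent lens
# (the "reciprocal rule" of thumb).
base = 1.0 / 200.0
for stops in (2, 3, 4):
    t = stabilized_shutter(base, stops)
    print(f"{stops} stops of stabilization: ~1/{1.0 / t:.0f} s")
```

With 3 stops of stabilization, the same lens could be hand-held at roughly 1/25 s under this rule of thumb.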
Filters
In addition to the use of color filters (either electronically or with a gel), other filters can be attached to the lens, usually via a screw thread at the front surface. A basic filter is a ‘UV blocker’, which reflects ultraviolet radiation and also places a protective glass surface in front of the lens; some people attach a UV filter to their lenses rather than carry lens caps. There are also gradient filters, which present a gradient (either neutral density or colored) across the front element- this can be used to even out the illumination in a scene containing a very bright region (sun, bright sky, etc.) and a dark region (shadows, etc.). These filters can be rotated to obtain the optimal orientation. Polarizing filters come in two varieties, linear and circular, and are used to control sunlight that has reflected off of a flat surface: water, cars, etc. Using a polarizer when photographing a clear sky will emphasize the natural polarization of the sky. Due to the properties of autofocus sensors, a circular polarizer is generally preferred over a linear polarizing filter- it is a linear polarizer in front with a quarter-wave retarder behind. A polarizer or gradient filter should only be used on a zoom lens whose front barrel does not rotate during zoom; otherwise, the filter will rotate with the lens, preventing control over its orientation.
Rule of 16
The ‘rule of 16’ was developed during the film era, and it states that the optimal exposure in bright sunlight is f/16, with the shutter speed set, in seconds, to 1/ISO. That is, slow film (ISO 100) uses a shutter speed of 1/100 s, while fast film (ISO 1600) uses a shutter speed of 1/1600 s. Since each full-stop change in f-number halves (or doubles) the amount of light, the rule of 16 provides a starting point for estimating aperture settings and shutter speeds in other conditions.
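The equivalent-exposure ladder implied by the rule of 16 can be sketched in a few lines (the aperture sequence is the standard full-stop series):

```python
def rule_of_16_pairs(iso):
    """Equivalent exposures from the rule of 16: f/16 at 1/ISO s in
    bright sun; each full stop wider doubles the light, so the shutter
    denominator doubles to compensate."""
    f_stops = [16, 11, 8, 5.6, 4, 2.8]
    return [(n, iso * 2 ** i) for i, n in enumerate(f_stops)]

for n, denom in rule_of_16_pairs(100):
    print(f"f/{n:>4} at 1/{denom} s")
```

So for ISO 100 film in bright sun, f/16 at 1/100 s, f/8 at 1/400 s, and f/2.8 at 1/3200 s all deliver the same exposure.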
Hyperfocal distance
The hyperfocal distance is the focus distance that maximizes the depth of field: when a lens is focused at the hyperfocal distance, everything from half the hyperfocal distance out to infinity is rendered in focus. The analytic result is:
[tex] H = f( \frac{f}{Fc}+1)[/tex],
where H is the hyperfocal distance, f the focal length, F the f-number, and c the diameter of the circle of confusion. The hyperfocal distance also forms a series solution: focusing the lens at 1/2 the hyperfocal distance renders objects from the hyperfocal distance to 1/3 the hyperfocal distance in focus; focusing at 1/3 the hyperfocal distance covers objects from 1/2 to 1/4 the hyperfocal distance, etc. For example, the hyperfocal distance for a 28mm lens set to f/16 on a 35mm camera is about 1.6m. Everything from 0.8m to infinity will be sharp in a photograph taken with this lens focused on an object 1.6m away.
Telephoto lenses are rarely used for hyperfocal focusing, as the hyperfocal distance of these lenses is quite large. For example, the hyperfocal distance for a 200mm lens set to f/16 on a 35mm camera is about 86 meters: everything from about 45 m to infinity will be sharp in a photograph taken with this lens focused at the hyperfocal distance. Such a lens isn’t useful for a landscape photograph in which you want near objects to be sharp as well.
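Both worked examples above can be reproduced from the hyperfocal formula. A minimal sketch- note that c = 0.03 mm is an assumed, commonly quoted circle-of-confusion value for the 35mm format, so the results differ slightly from the rounded figures in the text:

```python
def hyperfocal_mm(f_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance H = f*(f/(F*c) + 1), with f the focal length,
    F the f-number, and c the circle-of-confusion diameter (all in mm)."""
    return f_mm * (f_mm / (f_number * coc_mm) + 1.0)

for f in (28, 200):
    h_m = hyperfocal_mm(f, 16) / 1000.0
    print(f"{f}mm at f/16: H = {h_m:.1f} m; sharp from {h_m / 2:.1f} m to infinity")
```

The 28mm wide-angle gives a hyperfocal distance under 2 m, while the 200mm telephoto pushes it past 80 m- the quantitative reason hyperfocal focusing is a wide-angle technique.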
Nodal, Pupil, and Focal planes
This section was added to clarify the large amount of confusing and conflicting information we encountered on many otherwise excellent websites while constructing this buyer’s guide. All optical systems can be analyzed using six ‘cardinal’ points: the front and rear focal points, the front and rear principal points, and the front and rear nodal points. In addition, the locations of the aperture stop (equivalently, the entrance and exit pupils) and the field stop, if there is one, should be known. Although these concepts come from geometrical optics, it can be helpful to describe their action in terms of physical optics.
Focal points: Geometrically, rays initially parallel to the optical axis are brought to focus at the rear focal point. More generally, plane waves entering an optical system will focus to points (Airy disks) at the rear focal plane. When a lens is focused on infinity, the sensor plane lies at the rear focal plane.
Nodal points: The front and rear nodal points are conjugate points with unit angular magnification. Rays passing through the front nodal point at a given angle exit through the rear nodal point at the same angle. For lenses in air, the nodal points coincide with the principal points.
Principal points: The intersection of a principal plane and the optical axis is the principal point. Rays that intersect the front principal plane at some height, exit the rear principal plane at the same height: principal planes are conjugate planes that have unit transverse magnification. The distance between the front (rear) focal point to the front (rear) principal point is the front (rear) focal length.
Entrance/Exit pupil: the aperture stop limits the cone of light from object points. The projection of the aperture stop into object space is the entrance pupil, the projection into image space is the exit pupil. When you look into a lens and see the aperture stop, you are seeing the entrance (or exit) pupil. All light that hits the sensor *must* pass through the entrance pupil, aperture stop, and exit pupil.
Confusion arises when discussing panoramic imaging: rotating the camera to capture a large field of view. Rotating a lens about the rear nodal point does not produce motion of the image- swing lens panoramic cameras rotate the lens about the rear nodal point and have a curved image plane. More typical is ‘stitched panoramic’ images taken with a fixed lens and flat sensor. In this case, the lens should rotate about the entrance pupil to eliminate parallax error: near and far objects will maintain their relative positions when the lens is rotated about the entrance pupil.
Miscellany
Flash, lens adapters/converters: The flash that comes with your camera (if one does) may not meet your needs. External flash units can attach to your camera, or be designed to work remotely; some cameras allow you to control an entire bank of flash units remotely. Lens converters: because different manufacturers use different lens mounts, adapters/converters can allow you to use lenses made by one manufacturer on a camera made by another. You may lose some functionality: for example, Nikon “series G” lenses do not have an aperture ring. New lens mount standards appear regularly (a recent example is the Four Thirds mount). Generally, a camera whose mount places the lens close to the sensor can easily be adapted to fit a lens designed for a mount that places the lens farther from the sensor (the adapter is simply a spacer).
Tripods
A tripod holds a camera still while the shutter is open, preventing motion blur. The tripod must be able to maintain this stability in the presence of uneven ground, wind, ground vibrations, etc. Generally, tripods are most easily characterized in terms of their size (weight and height) and the maximum load they can support. Usually, heavier tripods are more stable and can support more weight than lighter tripods, but lighter tripods may be more suitable for backpacking or other situations where excessive weight is a concern. At the extreme, monopods and small ‘tabletop’ tripods can be used when minimizing weight and volume is the most important consideration.
There are three parts to a tripod: the legs, the head, and the camera/lens mounting plate. Many tripods also have a ‘center column’ that can be used to raise the camera above the legs. The overall stability of a tripod is a function of the legs, while control of the orientation of the camera is provided by the head. The height of the camera is controlled both by the legs and the center column.
“All in one” tripods have the legs, head, and mounting plate integrated into a single device.
Legs generally consist of 3 or 4 telescoping tubes; the most important properties are 1) the maximum diameter of the tubes, 2) the number of leg segments, and 3) the construction material. The larger the diameter and the fewer the segments, the more stable the tripod. Legs can either open to a single fixed angle or open independently to several angles; note that varying the leg angle primarily varies the camera height rather than the tripod stability. Legs are typically aluminum, but carbon fiber offers additional benefits (vibration damping, reduced weight, thermal insulation); wood is also sometimes used. Telescoping segments provide flexibility in camera height as well as a way to compensate for uneven terrain (for example, when the tripod is set up on an inclined surface).
A center column allows easier fine control of the camera height. Because the column introduces additional vibrational modes, for maximum stability the center column should not be raised. Additionally, a center column limits the *minimum* height achievable on tripods whose legs can open out flat- the ability to place the camera near the floor or ground can be very useful. A second benefit of tripods with no center column is the ability to point the camera straight down, by swinging the front leg of the tripod underneath and behind the head.
The head provides the mechanism that controls the direction of view. Two classes of mechanisms- pan heads and ball heads- pivot the camera about a point below the optical axis, while a third class- gimbaled heads- pivots the camera about its center of mass, which provides additional stability with heavy cameras and lenses. Pan heads provide separate control over each orthogonal rotation axis, while ball heads use a single mechanism (a captive ball and socket) that provides omnidirectional control. For both pan and ball mechanisms, control and overall stability decrease as the lever arm between the camera’s center of mass and the point of rotation tilts away from vertical. Ball heads often provide a “90-degree index”, a slot machined into the socket that allows the camera to be tipped into portrait (vertical) orientation; however, the tripod’s stability is at its worst in this position, and use of the index is not recommended. Instead, it is generally recommended that “L-plates” be used to mount the camera in portrait orientation. Gimbal mounts do not allow rotation about the optical axis, although rotating lens collars generally allow this motion. Finally, several head attachments are available for specialized work: panoramic shooting sometimes requires precise camera rotation about the “no parallax” point, and macro shooting requires precise control over the object distance, so translation and rotation stages are available to provide this specific control.
The mounting plate can either be a simple 1/4″ threaded post on the head onto which the camera or lens directly attaches, or a “quick release” system. Quick-release plates add a measure of security and safety by simplifying attachment and removal: the plate is first attached to the camera or lens, and the plate is then placed on the head and clamped into place. Individual manufacturers make their own plates, but plates made to the “Arca type” or “Arca-Swiss” geometry are considered a de facto standard.
PhD Physics – Associate Professor
Department of Physics, Cleveland State University