ADVANTAGES OF FOURTEEN BIT CAMERAS -- PART I

Article and Photography by Ron Bigelow

Recently, some new DSLRs have been released that provide fourteen bits of color depth. This is good news, though not necessarily for all of the reasons that some photographers think. Rather, the additional bits provide improvements in very specific areas while providing none in others. Thus, the purpose of this article is to analyze the benefits of increasing the color depth to fourteen bits.

BITS

To understand what the excitement is all about, we first have to understand a little bit about bits.
Within each digital camera is a chip called a sensor. Despite its small size, the sensor is the most expensive and complicated part of the entire camera. The sensor collects and processes the light that is used to create an image; it takes the place of the film used in traditional cameras. Each sensor is composed of a rectangular array of tiny pixels (photodiodes), each made of a light-sensitive semiconductor material.
Light, in the form of photons (tiny packets of light), arrives at each pixel. The light, from an area slightly larger than the active part of the pixel, is focused by a microlens. The light then passes through a color filter array (also known as a Bayer filter). Finally, the light enters the pixel. At this point, the light interacts with the semiconductor material of the pixel to create an electrical charge.
The pixel now has an electrical charge. Of course, the same thing has been happening at all of the other pixels in the sensor. In the case of a ten megapixel camera, for example, there are approximately ten million pixels, each with its own electrical charge waiting to be processed into a beautiful image.
Now that the pixels have all those electrical charges, the work of processing those charges into meaningful information that can be used to create an image begins. Figure 1 shows a simplified flowchart of the raw process and subsequent processing.
Figure 1: Raw Process and Subsequent Processing.
Let's go over the steps in Figure 1 that create a raw file.
  1. The light photons reach the sensor.
  2. The photons create electrical charges on the pixels.
  3. The electrical charges are accumulated and stored. These electrical charges create voltages.
  4. The voltages are amplified (increased in magnitude).
  5. Up until step 5, the digital camera has not been digital at all -- it has been collecting and measuring analog data. In steps 5 and 6, the ADC (analog-to-digital converter) changes the analog voltage information into digital information. In step 5, the ADC carries out the first step of the conversion by converting the voltage information into discrete numbers.
  6. The ADC now carries out the second step of the conversion by converting the discrete data into digital data. At the end of step 6, the raw file has been created. All of the subsequent steps are carried out on the raw file in the raw converter.
  7. Digital cameras are colorblind. They can neither see nor measure color. All of those pixels measure only the intensity of light; in a sense, the pixels are only measuring tones of gray. So, how are the colors produced? They are produced through filters and software magic. Above the sensor is a filter array that filters the light so that each pixel sees only one of three colors of light. Some of the pixels see only red light, some see only green light, and the others see only blue light. In step 7, software looks at each pixel and determines the light intensity of the color of light at that pixel. The software also looks at the light intensity at each of the pixel's neighboring pixels (which will have their own colored light levels). Using this information, the software calculates a color for each pixel and assigns that color value to the pixel. This process is called Bayer interpolation (a small code sketch of it follows this list).
  8. White balance adjustments are made. This step corrects for the color of the light that is illuminating the objects being photographed.
  9. At this point, the image data is very dark. A tonal curve is now applied to lighten the image and make it look more natural.
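To make step 7 concrete, below is a minimal sketch of Bayer interpolation in Python, assuming an RGGB filter pattern and simple neighborhood averaging. This is not the algorithm any particular raw converter uses; real converters apply far more sophisticated, edge-aware interpolation.

```python
import numpy as np

def demosaic_bilinear(raw):
    """Very simple demosaic of an RGGB Bayer mosaic by neighborhood averaging.

    raw: 2-D array of intensities, one color sample per pixel.
    Returns an H x W x 3 RGB image.
    """
    h, w = raw.shape
    # Masks marking which pixels sampled which color (RGGB pattern).
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, raw, 0.0)
        total = np.zeros((h, w))
        count = np.zeros((h, w))
        # Average each pixel's 3x3 neighborhood, counting only the
        # neighbors that actually sampled this color. (np.roll wraps
        # around at the image edges -- good enough for a sketch.)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                total += np.roll(samples, (dy, dx), axis=(0, 1))
                count += np.roll(mask.astype(float), (dy, dx), axis=(0, 1))
        rgb[..., c] = total / np.maximum(count, 1)
    return rgb
```

Each pixel ends up with a full RGB triple even though its photodiode measured only one color -- exactly the "software magic" described above.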
The important point here is that the bits are created by the ADC in steps 5 and 6. All that is happening at this point is that the ADC is taking analog information from the sensor and changing it into digital information. This happens after the sensor has gathered the information and stored it as an analog voltage. Why is this important to understand? Once this is understood, it becomes obvious that increasing the bit depth of an image (e.g., from twelve to fourteen bits) does not improve the quality of the information captured by the sensor. That information has been captured and stored by the sensor before it ever gets touched by the ADC. All that is happening when the bits are created is that the ADC is changing the format in which the data is stored.
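A tiny sketch makes that last point concrete. Assuming the amplified pixel voltage has been normalized to the range 0 to 1, the ADC simply slices that range into 2^bits steps; a fourteen bit ADC slices the same analog signal more finely than a twelve bit ADC, but no new information is created:

```python
def quantize(voltage, bits):
    """Map a normalized analog voltage (0.0 to 1.0) to a digital level."""
    levels = 2 ** bits
    return min(int(voltage * levels), levels - 1)

v = 0.37                 # the same analog voltage from the sensor
print(quantize(v, 12))   # 1515 (out of 4,096 levels)
print(quantize(v, 14))   # 6062 (out of 16,384 levels)
```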

MORE BITS

Currently, the raw files of many digital cameras are twelve bits. That means that each pixel can register 2^12 = 4,096 levels of light intensity (after conversion by the ADC). In other words, each pixel can render 4,096 shades. Traditionally, 0 represents pure black and 4,095 represents pure white. As you go from 0 to 4,095, the shades go from dark to light. Previously, it was mentioned that some pixels measured red light, some green, and some blue. Therefore, there are 4,096 possible shades of red, 4,096 of green, and 4,096 of blue. When the Bayer interpolation does its magic to calculate a color for each pixel, it uses the color information for each pixel and its neighboring pixels. Since the interpolation is using information from all three colors, there are 4,096^3 = 68,719,476,736 possible colors with a twelve bit raw file.
The raw files of some of the newer cameras are fourteen bit. That means that each pixel can register 2^14 = 16,384 levels of light intensity. Now, when the Bayer interpolation does its magic, there are 16,384^3 = 4,398,046,511,104 possible colors. That is sixty-four times more colors than the twelve bit file!
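The arithmetic behind these figures is easy to check:

```python
for bits in (12, 14):
    shades = 2 ** bits    # intensity levels per color channel
    colors = shades ** 3  # combinations across red, green, and blue
    print(f"{bits}-bit: {shades:,} shades, {colors:,} colors")

# 12-bit: 4,096 shades, 68,719,476,736 colors
# 14-bit: 16,384 shades, 4,398,046,511,104 colors
# 4,398,046,511,104 / 68,719,476,736 = 64
```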
It turns out that the human eye can distinguish only about 16,000,000 colors. In other words, the human eye cannot tell the difference between many of the extra colors that can be produced from a fourteen bit file. So, if we cannot see all those extra colors, what's the big deal about having fourteen bit files? Well, it turns out that the sensor in the camera plays a dirty little trick on you.
The little trick is that most digital camera sensors are linear devices. What that means is that when the amount of light that reaches a sensor is doubled, the output of the sensor is doubled. The problem starts to reveal itself when we look at bits in conjunction with the dynamic range of the sensor. Dynamic range is a measure of the span of tonal values over which a device (in this case a sensor) can hold detail. In other words, it is the tonal distance from the darkest point at which the device holds detail to the lightest point. Dynamic range is measured in stops of light. When light is increased by one stop, the amount of light is doubled (going in the other direction, it is cut in half). For instance, a photographer may say that he doubled his exposure by opening up the lens by one stop.
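Since each stop doubles (or halves) the light, stop arithmetic is just powers of two:

```python
for stops in range(1, 5):
    print(f"+{stops} stop(s): {2 ** stops}x the light")
# +1 stop: 2x, +2 stops: 4x, +3 stops: 8x, +4 stops: 16x
```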
For our purposes, we will assume that you have a camera with a dynamic range of nine stops. The shades that an individual sensor can render in a file must be spread across those nine stops. The problem is that those shades are not spread evenly across the dynamic range of the camera.
Now, let's do a little analysis for a pixel that will output its data to a twelve bit file. For this analysis, it must be kept in mind that all of the numbers that are being generated represent the process prior to step 7 in Figure 1. In other words, these numbers represent the state of the information before any tonal curves (i.e., gamma or transfer function) have been applied. They do not represent the final file.
Suppose that a sensor was exposed until the pixels that received the most light could accept no more light. In the case of our nine stop dynamic range camera, the sensor would have received nine stops of light. That is to say that the brightest pixels in the sensor would have received nine stops of light. The brightest pixels would be full; these pixels would have reached their full well capacity. Image A in Figure 2 shows such a sensor with the brightest pixels at full well capacity. Now, with a twelve bit ADC, this sensor is capable of rendering 4,096 shades, as covered above.
Figure 2: Shades vs. Stops of Light for a Twelve Bit Camera with a Nine Stop Dynamic Range (for the Brightest Four Stops)
As we move on to analyze the situation shown in Figure 2, the key to understanding what is happening is to remember that, when the exposure is reduced by one stop, the light is reduced by half. Since sensors are linear devices, when the light is reduced by half, the sensor will only be able to render half as many shades.
Now, not all of the pixels received a full nine stops of light. If we ignore the brightest pixels (the ones that received nine stops of light) and look at the pixels that are left, we have the situation shown in Image B in Figure 2. The brightest pixels that are left received eight stops of light (half as much light as the pixels that received nine stops of light). Since sensors are linear and the brightest pixels in Image B received only half as much light as the brightest pixels in Image A, the pixels in Image B would be able to render only half as many shades. Thus, the pixels in Image B rendered only 2,048 shades. Since the pixels in Image A (with nine stops of exposure) rendered 4,096 shades and the pixels in Image B (with eight stops of exposure) rendered only 2,048 shades, the ninth stop of light was responsible for rendering the other 2,048 shades. In other words, the brightest stop of dynamic range (the ninth stop) used up half of all the available shades.
The procedure repeats itself. If we ignore the brightest pixels in Image B (the ones that received eight stops of light) and look at the pixels that are left, we have the situation shown in Image C. The brightest pixels that are left received seven stops of light. Since the brightest pixels in Image C received only half as much light as the brightest pixels in Image B, the pixels in Image C would be able to render only half as many shades. Accordingly, the pixels in Image C rendered 1,024 shades. Since the pixels in Image B (with eight stops of exposure) rendered 2,048 shades and the pixels in Image C (with seven stops of exposure) rendered 1,024 shades, the eighth stop of light was responsible for rendering the other 1,024 shades. In other words, the second brightest stop of dynamic range (the eighth stop) used up one fourth of all the available shades.
At this point, we can see that the two brightest stops render 75% of all the shades the camera is capable of producing. The rest of the images in Figure 2 show that, as we continue to work down the dynamic range, the camera is capable of rendering fewer and fewer shades. Eventually, the last stop of light is reached. This last stop is capable of rendering only 16 shades.
The exact same process can be carried out for a fourteen bit sensor. The difference is that the sensor starts off with 16,384 shades. Table 1 summarizes the distribution of the shades across the dynamic range for both twelve and fourteen bit files.
Table 1: Distribution of Shades for a Nine Stop Dynamic Range Prior to Application of Tonal Curves (i.e., Gamma or Transfer Function)
Light Level     12 Bits    14 Bits
Nine Stops        2,048      8,192
Eight Stops       1,024      4,096
Seven Stops         512      2,048
Six Stops           256      1,024
Five Stops          128        512
Four Stops           64        256
Three Stops          32        128
Two Stops            16         64
One Stop             16         64
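The values in Table 1 follow directly from the halving argument above; a short loop reproduces them, assuming (as in the figures) that the darkest stop simply keeps whatever shades remain:

```python
def shades_per_stop(bits, stops=9):
    """Shades allocated to each stop of dynamic range, brightest first,
    for a linear sensor: each stop down halves the remaining shades."""
    remaining = 2 ** bits
    allocation = []
    for _ in range(stops - 1):
        remaining //= 2               # one stop less light: half the shades
        allocation.append(remaining)  # the brighter stop used the other half
    allocation.append(remaining)      # darkest stop: the leftover shades
    return allocation

print(shades_per_stop(12))  # [2048, 1024, 512, 256, 128, 64, 32, 16, 16]
print(shades_per_stop(14))  # [8192, 4096, 2048, 1024, 512, 256, 128, 64, 64]
```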

SHADOWS

As can be seen from Table 1, the shades are not evenly distributed over the nine stops of dynamic range. More of the shades are allocated to the brightest areas, and far fewer are allocated to the darker areas. This causes problems for the shadows: there are not many shades left to render them. The result is less detail in the shadows than in other areas of the image that received more light.
This problem is compounded by the human visual system. While the sensor may be a linear device, the human visual system is not. The human visual system is more sensitive to some amounts of light than others. In particular, it is more sensitive to shadows than to highlights. What this means is that increasing the amount of light in a shadow area has a larger visual impact than increasing the amount of light, by the same percentage, in the highlights. We now have a situation where we have the least amount of data in the area where the visual system is the most sensitive. This is where the fourteen bit file has an advantage. A fourteen bit file has more shades in the shadows than a twelve bit file. In fact, it has four times as many shades (for each of the three colors). This allows a fourteen bit file to render more shadow detail than a twelve bit file.
Now, it is also true that a fourteen bit file has four times as many shades (for each of the three colors) in the highlights as a twelve bit file. However, even a twelve bit file has so many shades in the highlights that the additional shades of a fourteen bit file do not noticeably improve the highlight detail.
In short, one of the biggest advantages of a fourteen bit file is the additional detail in the shadows.

POSTERIZATION

Figure 3: Shadow Tonal Values Before and After Editing
Generally, colors in an image blend gradually from one color to the next. It is impossible for the human eye to tell where one color stops and the next one picks up. However, in some cases, the transition from one color to the next can actually be seen. This usually occurs in areas of little detail. The result is an image where bands appear to run across the image. This problem is known as posterization (also known as banding). Posterization is highly undesirable, and it is particularly a problem in the shadows.
In essence, posterization occurs when image editing causes too few tones to be spread too far apart. A typical example is when Curves is used to lighten the shadows in an image. Curves takes the original tones and runs the numerical values of the tones through a formula to create the numerical values for the new tones. Figure 3 shows an example of what happens for one set of shadow tones when modified by a particular Curves adjustment. It can easily be seen that the original tones increase only one unit from any tone to the adjacent tone. However, after Curves, the tones are spread farther apart. For example, with the original tones, going from a tone of one to a tone of two gave a one unit increase in tone. However, after editing, the tonal value of one became a value of seven, and the tonal value of two became a value of eleven. Now, the difference between these two tones has become four units. A stronger adjustment would have spread the tones even farther apart. When editing spreads the tones far enough apart, the transition between tones can be seen and posterization occurs.
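A small sketch shows how this happens. The curve below is a simple gamma adjustment standing in as a generic shadow-lifting edit (so the numbers differ from the illustrative ones in Figure 3); the point is how quickly the gaps between adjacent shadow tones open up:

```python
def lighten(tone, gamma=0.5, max_tone=255):
    """A generic shadow-lifting curve (gamma adjustment), standing in
    for a Curves edit. Any strong lightening curve behaves similarly."""
    return round(max_tone * (tone / max_tone) ** gamma)

original = list(range(9))                 # adjacent shadow tones, 1 unit apart
edited = [lighten(t) for t in original]
print(edited)   # [0, 16, 23, 28, 32, 36, 39, 42, 45]
gaps = [b - a for a, b in zip(edited, edited[1:])]
print(gaps)     # [16, 7, 5, 4, 4, 3, 3, 3] -- wide gaps show up as banding
```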
Since a fourteen bit file has more tones than a twelve bit file, the tones in a fourteen bit file are spaced closer together. Thus, editing is less likely to produce posterization in a fourteen bit file than in one of twelve bits. This is particularly important in the shadows where there are few tonal values with which to start.