We’re taking a look back at how camera technology has changed over the 25-year history of DPReview, with attention to the milestones along the way. In this article we’ll call out the big steps forward we’ve seen in sensors, and try to explain the improvements each one brought.
CCD technology underpinned the majority of early digital cameras.
Other approaches: Super CCD
This article focuses on the technologies used in the majority of cameras, but there have been some variants worthy of mention. The first is Fujifilm’s Super CCD, which used both a large and a partially-masked photodiode at each pixel. The masked photodiode captured less light, so it was less prone to overexposure, retaining highlight information that would otherwise be lost. The second-generation version in the S3 Pro DSLR delivered dynamic range far beyond its contemporaries, though the masking compromised image quality, especially at higher ISOs.
The first image sensor technology to deliver usefully good results and be affordable enough to include in consumer products was the CCD (Charge-Coupled Device) sensor.
CCDs read out from the edge of the sensor, one pixel at a time, cascading the charge down from one pixel to the next each time a pixel is read. The speed at which this can be done is dictated by the current applied to the chip, so fast readout requires a lot of power.
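To picture why that cascade is slow, here’s a toy sketch in Python (the model and the electron counts are purely illustrative, not how any real sensor works): reading each pixel forces every remaining charge in the row to shift one step toward the edge, so the number of transfers grows rapidly with row length.

```python
# Toy model of CCD row readout: a single node at the edge reads one pixel
# per clock cycle, and every remaining charge shifts one pixel toward it.
# The pixel values are invented, not from any real sensor.

def ccd_read_row(charges):
    row = list(charges)
    output, transfers = [], 0
    while row:
        output.append(row.pop(0))  # edge pixel reaches the output amplifier
        transfers += len(row)      # every remaining charge shifts one step
    return output, transfers

out, n = ccd_read_row([120, 80, 200, 45])  # electron counts in four pixels
print(out, n)  # -> [120, 80, 200, 45] 6; transfers grow with the square of row length
```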
With the power constraints of small consumer camera batteries, readout was relatively slow, which made live view in compacts laggy. CCDs formed the basis of the early digital camera market from the mid ’90s right up until the early 2010s, with development continuing throughout: pixels got smaller and performance got better.
But it was a CMOS sensor that powered the first sub-$1000 DSLR.
Other approaches: Foveon X3
Perhaps the most famous non-Bayer sensor is the multi-layer Foveon X3 design. These are CMOS sensors, but ones that don’t use color filters in front of the sensor. Instead they read out the photoelectrons released at three depths in the silicon and, based on the wavelength (color) of light that can penetrate to each depth, re-assemble the color information. However, while only red photons can penetrate to the deepest layer, some of them are absorbed further up (likewise for green photons, which can reach the middle layer), meaning this weak, noisy red signal gets factored into all the other calculations. It’s proven difficult to optimize the effectiveness of the design, particularly for the deeper layers, and it can’t take advantage of some of the noise-reducing features that are now common elsewhere. The result is sensors that capture higher spatial resolution for color, but with appreciably higher noise, meaning they perform best in bright light.
In the meantime, though, a rival technology, CMOS (Complementary Metal Oxide Semiconductor), was being developed. CMOS sensors deliver the output of each pixel in turn to a common wire, meaning the charge doesn’t have to pass through all the neighboring pixels to get off the chip. This allows readout to run faster without needing large amounts of power, and CMOS sensors were also less expensive to produce. Canon pioneered the adoption of CMOS with its D30 APS-C DSLR in 2000. In the coming years, performance would continue to improve, and Canon gained a reputation for excellent high ISO image quality.
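A minimal sketch of the contrast with CCD readout, again purely illustrative: a CMOS pixel is addressed directly, so reading it costs one step rather than a cascade through its neighbors.

```python
# Toy model of CMOS readout: each pixel's amplified output is switched
# directly onto a shared column wire, so the charge never passes through
# neighboring pixels on its way off the chip.
def cmos_read_pixel(frame, row, col):
    return frame[row][col]  # one row-select plus one column read

frame = [[120, 80], [200, 45]]       # invented electron counts
print(cmos_read_pixel(frame, 1, 0))  # -> 200, read without disturbing the rest
```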
Although some photographers look back fondly on the color reproduction of the CCD era, there’s no inherent reason why CCD itself would capture color any differently from CMOS. Any differences are more likely to stem from changes in color filter selectivity and absorption characteristics, as manufacturers tried to boost low-light performance by using filters that let more light through.
By 2007, the industry’s biggest chip supplier (Sony Semiconductor) had moved across to CMOS for its APS-C chips, and CMOS became the default technology in large sensor cameras.
Early attempts at small-sensor CMOS weren’t always successful, so CCD continued to dominate compacts long after most large-sensor cameras had moved to CMOS.
The fast readout of CMOS became increasingly important, both for video capture in cameras such as Canon’s EOS 5D Mk II and for the live view that would become increasingly central to the shooting experience of large-sensor cameras as the mirrorless era approached.
2009 saw the introduction of the first Back-Side Illuminated (BSI) CMOS sensors, a technology that at first was primarily beneficial for the tiny pixels in smartphone and compact camera sensors. BSI sensors are fabricated in much the same way as the existing, front-side illuminated designs, but the backing material they’ve been built on is then shaved away, and the ‘back’ of the sensor is placed so it faces the lens and receives light. This means you don’t have wiring and circuitry in front of the light-sensitive part of each pixel, increasing light absorption. These benefits are less pronounced in large sensors, so Four Thirds, APS-C and full-frame BSI chips wouldn’t arrive for several more years.
Continued development of CMOS designs resulted in continued gains. New designs allowed the inclusion of more analog-to-digital converters (ADCs), and for those ADCs to be placed closer to the pixels. This minimized the amount of electronic noise that could creep in before the readout voltage was captured, and the large numbers of ADCs meant that each one didn’t have to work so fast to deliver fast readout. The amount of noise added by ADCs relates to their speed, so this design delivers a significant reduction in read noise.
Further refinement of these designs kept lowering read noise, heralding an era where you could expect most cameras to capture significantly wider dynamic range than would be included in a typical JPEG, meaning there was much more exploitable information in Raw files.
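As a rough illustration of why read noise matters so much (the figures below are invented, not from any specific sensor), engineering dynamic range is commonly estimated as the base-2 log of full-well capacity over read noise, so each halving of read noise buys about a stop:

```python
import math

# Engineering dynamic range in stops: log2(full-well capacity / read noise).
# Both figures are invented for illustration.
full_well = 50_000                  # electrons a pixel holds before clipping
for read_noise in (12, 6, 3, 1.5):  # e- RMS, from older to newer ADC designs
    stops = math.log2(full_well / read_noise)
    print(f"read noise {read_noise:>4} e- -> {stops:.1f} stops")
```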
BSI arrived in large sensors from 2014 onwards. In large sensors, wiring makes up a far smaller proportion of each (much larger) pixel, so BSI offers much less improvement in image quality. It did bring advantages, though. The first comes from improving the angles from which pixels can accept light. This is especially useful at the corners of the sensor, where light can hit at a very acute angle that’s difficult to redirect down into the recessed photosensitive region of a front-side illuminated (FSI) design. Secondly, moving the wiring behind the pixel allowed more complex circuitry, meaning a further increase in the number of ADCs and faster readout without increased noise.
The use of BSI still isn’t universal, nearly a decade later, since it doesn’t offer a major image quality benefit.
One of the first sensors to combine dual conversion gain with Sony’s low read noise designs gave the a7S excellent performance at high ISO.
Another advance to improve dynamic range came with dual conversion gain sensors, which first appeared in the Aptina chips used in Nikon 1 series cameras. They offer a choice of readout modes within each pixel: one that maximizes dynamic range at low ISOs, the other with less DR capacity but lower read noise, giving better shadow performance at high ISOs, where DR is less critical.
When this technology was licensed to Sony Semiconductor, it was combined with the existing high-DR designs to create sensors with excellent DR at base ISO and a boost in high ISO performance. These two-mode designs aren’t always publicized by the manufacturers, but the adoption of dual gain is what gave the original a7S its excellent high ISO performance (not its large pixels, despite what you might have heard). This is the state that most contemporary cameras have reached.
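A sketch of the trade-off, using made-up example values rather than any real sensor’s specifications:

```python
import math

# Sketch of dual conversion gain with invented values: low gain keeps the
# pixel's full charge capacity for base-ISO DR; high gain sacrifices
# capacity for a lower read-noise floor, the better trade at high ISO,
# where amplification has discarded the highlight headroom anyway.
modes = {
    "low gain (base ISO)":  {"full_well": 50_000, "read_noise": 3.0},
    "high gain (high ISO)": {"full_well": 12_000, "read_noise": 1.0},
}
for name, m in modes.items():
    stops = math.log2(m["full_well"] / m["read_noise"])
    print(f"{name}: {stops:.1f} stops, noise floor {m['read_noise']} e-")
```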
Other approaches: Super CCD EXR
Fujifilm continued to develop the Super CCD concept, culminating in Super CCD EXR. This featured slightly offset rows of pixels, with the Bayer filter pattern duplicated across pairs of rows (so you had pairs of red and pairs of blue pixels next to one another). The offset rows were supposed to boost resolution in full-res mode, but the duplicated filter pattern also meant that rows could easily be combined. This enabled a half-resolution low-light mode and a half-res high-DR mode, in which alternate rows were read out early (giving the highlight benefits of the original Super CCD design). Although it’s no longer used, there are direct parallels between this three-mode approach and the way the latest Quad Bayer and Tetracell sensors are being used in smartphones.
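The row-combining idea is simple enough to sketch (the values and the 4x exposure ratio below are invented): the early-read row acts as a shorter exposure that can stand in for highlights the full-exposure row has clipped.

```python
# Toy sketch of EXR-style paired rows, with invented 8-bit values. One row
# of each same-color pair is read out early (a shorter exposure), then used
# to reconstruct highlights wherever the long row has clipped at 255.
long_row  = [240, 255, 250, 255]   # full exposure, two pixels clipped
short_row = [60, 120, 90, 200]     # 1/4 exposure of the same scene rows
merged = [s * 4 if l >= 255 else l for l, s in zip(long_row, short_row)]
print(merged)  # -> [240, 480, 250, 800]: detail recovered beyond clipping
```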
Stacked CMOS is the current cutting edge of fabrication tech. It takes the BSI approach even further, creating layers of semiconductor, shaving them off their backing and connecting them together to allow designs with still more complex and sophisticated circuitry. It’s a time-consuming and expensive process, so it has only appeared in fairly small chips for smartphones and compact cameras, and in very high-performance large-sensor models. Like BSI, its main benefits come not in image quality but in allowing faster and more complex data processing. Examples we’ve seen so far include built-in RAM, letting the sensor capture another image while the previous one is still being processed by the camera, and twin readout paths: one for the full-quality image and a secondary feed for autofocus and viewfinder updates.
Stacked CMOS chips currently underpin some of the fastest-shooting cameras, and those with the least rolling shutter, which emboldened Nikon to produce a flagship camera, the Z9, with no mechanical shutter. The complexity and sophistication of Stacked sensors is only likely to rise in the coming years.
All of which brings us to the present day. The sensors in most consumer cameras are excellent, with huge amounts of DR at base ISO and very little noise at high ISOs beyond the inherent noisiness of the light they’re capturing. Modern sensors have exceptionally low electronic noise and typically register more than 50% of the light that hits them, meaning current technology is within a stop of the theoretical maximum. There may be ways to improve IQ by expanding to lower ISOs, or breakthroughs in the way color is interpreted, but it’s likely to take another major technology change to see big improvements in image quality.
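To put that ‘within a stop’ figure in numbers (the photon count below is invented for illustration): a sensor that already captures 50% of incoming photons can at best double its signal, and since shot-noise SNR follows the square root of the photon count, even a perfect sensor would only improve SNR by a factor of roughly 1.4.

```python
import math

# Why ~50% quantum efficiency sits within a stop of perfection: a flawless
# sensor only doubles the captured photons, and shot-noise SNR goes with
# the square root of that count. Photon count is invented for illustration.
photons = 10_000                   # photons striking one pixel
for qe in (0.5, 1.0):              # roughly today's sensors vs. a perfect one
    captured = photons * qe
    print(f"QE {qe:.0%}: {captured:.0f} e-, SNR {math.sqrt(captured):.0f}, "
          f"+{math.log2(qe / 0.5):.1f} stops over 50% QE")
```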
With huge thanks to bobn2 for his input and corrections in the preparation of this article.