CMOS imaging is on track to become the dominant imaging technology. Initially, CMOS was limited by its inherent noise; architectures were essentially analog, and the idea of integrating image processing features as a System on Chip (SoC) had yet to be considered. Yet it is fundamentally this SoC characteristic of CMOS that has driven its impressive growth. Over the years, the technology has become more and more competitive. The commercial race started in the early 2000s, when the big players applied continuous improvements to electro-optical performance.
CMOS vs CCD
With CCDs, photonic signals are converted into electron packets and sequentially transferred to a common output structure, where the electric charge is converted to voltage. A CMOS imager, on the other hand, converts charge to voltage at the pixel level, and most functions are integrated into the chip. It can be operated with a single power supply and is capable of flexible readout, with regions of interest or windowing. CCDs are generally made in NMOS technology, which is dedicated to imaging performance with specific features such as overlapping double poly-silicon, anti-blooming structures, metal shields, and a specific starting material. CMOS imagers are often consumer oriented and are based on standard CMOS process technology for digital ICs, with some adaptation for imaging (e.g. pinned photodiodes).
The system architecture is improved with CMOS, as it generally embeds SOC features like analog to digital conversion, correlated double sampling, clock generation, voltage regulators, or image post-processing.
In simple terms, a CMOS amplification chain carries both low-frequency (1/f) and high-frequency (white) noise components. Since CMOS image sensor (CIS) readout frequencies are potentially lower, the amplification-chain bandwidth required for reading out the pixels can be reduced, and the integrated temporal noise is thereby lower. Global shuttering exposes all pixels of the array at the same time; a drawback of this approach in CMOS, however, is that it consumes pixel area, because it requires extra transistors in each pixel. Every pixel has an open-loop output amplifier, and the offset and gain of each amplifier fluctuate considerably because of wafer processing variations, making both dark and illuminated non-uniformities worse than those in a CCD. On the other hand, CMOS imagers have lower power dissipation than equivalent CCDs, and the supporting circuits on the chip can dissipate less power than the optimized analog companion chips a CCD system requires, which is a significant advantage.
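The bandwidth argument above can be made concrete with a simple noise model. The sketch below integrates an assumed 1/f-plus-white noise power spectral density over two readout bandwidths; the PSD floor and corner frequency are hypothetical numbers chosen for illustration, not measured sensor data.

```python
import math

def integrated_noise(white_psd, corner_hz, f_lo, f_hz_hi):
    """RMS noise from integrating S(f) = white_psd * (1 + corner_hz / f)
    between f_lo and f_hz_hi (1/f term integrates to a logarithm)."""
    power = white_psd * ((f_hz_hi - f_lo) + corner_hz * math.log(f_hz_hi / f_lo))
    return math.sqrt(power)

# Hypothetical amplifier: 10 nV/sqrt(Hz) white floor, 10 kHz 1/f corner.
psd = (10e-9) ** 2
wide = integrated_noise(psd, 10e3, 1.0, 50e6)    # single fast output, wide band
narrow = integrated_noise(psd, 10e3, 1.0, 100e3) # column-parallel, narrow band
print(narrow < wide)  # True: reducing bandwidth reduces integrated noise
```

Shrinking the bandwidth by orders of magnitude cuts the white-noise contribution proportionally, which is why the column-parallel CIS architecture discussed below ends up limited by the 1/f term instead.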
To eliminate, or at least reduce, this noise, the video channel integrates a Correlated Double Sampling (CDS) stage. Read noise must be extremely low for applications such as astronomy or scientific imaging, where the image is read out at a very low frequency. In such system designs, the electronics' frequency bandwidth is minimized to avoid integrating the temporal fluctuations of the signal, and the 1/f component of the noise dominates. For high-speed video applications, noise is much higher and leads to a significant degradation of the signal-to-noise ratio (Figure 3). The CMOS image sensor has a column-parallel readout scheme (Figure 2), so the readout frequency per channel is divided by the number of columns. Consequently, the readout noise of the CIS is generally dominated by the 1/f contribution. It has recently been demonstrated that very good noise performance, in the range of 1 e- and even below, is achievable.
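The principle behind CDS is simply subtraction of two correlated samples. The following minimal sketch (simulated numbers, not a real sensor interface) shows how a large, random per-pixel amplifier offset cancels when the reset-level sample is subtracted from the signal sample:

```python
import random

def read_pixel_cds(offset, signal, noise_sigma=0.5):
    """Correlated double sampling: sample the pixel after reset, then after
    exposure, and take the difference. The per-pixel amplifier offset (and
    any slow, correlated noise common to both samples) cancels."""
    reset_sample = offset + random.gauss(0, noise_sigma)
    signal_sample = offset + signal + random.gauss(0, noise_sigma)
    return signal_sample - reset_sample

random.seed(0)
# Give every pixel a large random fixed offset; CDS removes it.
reads = [read_pixel_cds(offset=random.uniform(-50, 50), signal=100.0)
         for _ in range(1000)]
mean = sum(reads) / len(reads)
print(abs(mean - 100.0) < 1.0)  # offsets cancelled; mean is near the true signal
```

Only the uncorrelated noise in the two samples survives the subtraction, which is why CDS is so effective against the fixed-pattern offsets described above.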
What Capabilities are Possible with Today's CMOS Sensors?
Next generation products will have to offer application- and sensor-level cost reduction through smaller optics and camera housings. They are required to scan, identify, and read codes, or inspect and measure objects in factory automation, logistics, and retail applications, with higher levels of accuracy and at higher throughput rates. At the same time, they will both reduce the cost of ownership and significantly improve productivity.
To respond to these market forces, one clear trend in industrial image sensor development is the drive toward smaller pixel technologies with the necessary features such as global shutters and high-speed outputs. Pure-play image sensor semiconductor fabrication plants (fabs) serving the 'fabless' industrial CMOS sensor manufacturers are being forced to implement many of the techniques introduced to shrink consumer pixels without sacrificing Signal to Noise Ratio (SNR) or other critical performance parameters.
In pursuit of application cost reduction goals, some standard consumer sensor/processor interfaces are being adopted on next generation industrial image sensors, for example MIPI, which was originally designed to reduce complexity and offer multi-sourcing and interoperability in mobile telephone cameras. The latest CMOS technology trends represent a quantum leap forward. Techniques such as light guides, deep trench isolation (DTI), buried μlenses, and even stacked dice containing pixel transistors underneath the photosensitive area are now being employed.
3D stacked image sensors use a thin back-side illuminated image sensor array that is bonded above the image processing and control circuitry embedded in the lower chip. Through-Silicon Via (TSV) technology simplifies interconnections between the stacked dice and yields image sensors that can contain significant on-chip processing power. Cost is optimized because the processing die can be fabricated in a much denser process than the imager die.
What Applications are Now Possible?
In recent years, several attempts to reproduce signal summation either in the analog domain (voltage) or in the digital domain have opened the way forward for CMOS Time Delay and Integration (TDI) technology. For space-earth observation or for machine vision, CCD TDI architecture is still in demand for its low noise and high sensitivity performance. However, the most promising results have been obtained by considering the best of both technologies: the combination of charge transfer registers and column-wise ADC converters based on a CMOS process.
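Digital-domain TDI summation can be illustrated with a toy simulation. In the sketch below (assumed numbers, not a real sensor model), each TDI stage reads the same scene line, synchronized with object motion, plus independent read noise; summing N stages grows the signal N times while the RMS noise grows only by the square root of N, improving SNR:

```python
import random

def tdi_read(scene_line, stages, noise_sigma=2.0):
    """Digital TDI sketch: accumulate `stages` noisy readouts of the same
    scene line, as if readout were synchronized with object motion."""
    acc = [0.0] * len(scene_line)
    for _ in range(stages):
        for i, value in enumerate(scene_line):
            acc[i] += value + random.gauss(0, noise_sigma)
    return acc

random.seed(1)
line = [10.0] * 64
single = tdi_read(line, stages=1)
summed = tdi_read(line, stages=64)
ratio = sum(summed) / sum(single)
# Signal grows ~64x while RMS noise grows only ~8x, so SNR improves ~8x.
print(round(ratio))
```

The same accumulation can equally be done in the charge or voltage domain; the hybrid approach mentioned above keeps the noiseless charge-domain summation of a CCD register and digitizes with column-wise CMOS ADCs.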
Time-Of-Flight (TOF) technology is growing in popularity for three-dimensional (3D) imaging, where depth information is measured. In principle, a pulsed artificial light source located near the sensor plane emits light, and the return of the reflected wave is correlated with the emitted pulse to extract the distance. This technique can lead the way to mass production of industrial 3D sensors for a variety of applications including people-counting, safety control, metrology, industrial robotics, gesture recognition, and automotive Advanced Driver Assistance Systems (ADAS).
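For pulsed TOF, the depth calculation itself is elementary: the light travels to the target and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s):
    """Pulsed time-of-flight: distance = c * t / 2, since the
    measured delay covers the out-and-back path."""
    return C * round_trip_s / 2.0

# A 10 ns round-trip delay corresponds to roughly 1.5 m of depth.
print(tof_distance(10e-9))
```

The nanosecond scale of these delays is what drives the demanding timing and demodulation requirements of TOF pixels: millimetre depth resolution implies resolving delays of a few picoseconds, in practice via the correlation techniques mentioned above rather than direct timing.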
The variety and inventiveness of pixel structures developed for CMOS imagers has surpassed the imagination. This has been achieved through the shrinking of transistor geometries and the evolution of CMOS fabrication technology, now fully adapted to CIS production. The major industrial imaging makers still compete on price, as well as on electro-optical performance. Industrial applications benefit from these advances, which were accomplished for a mass market. Vision systems are increasingly based on imagers that follow the trends of consumer demand, including the shrinking of pixels. Speed is also an important economic factor, since it maximizes the throughput of expensive production machines and automated processing and inspection. New applications are pushing sensors towards their extreme capabilities: to enable single-photon imaging, for example, there can be no additional noise in the image. Beyond simple image capture and display, 3D augmented reality uses the potential of CMOS technology to achieve a different perception of space. Clearly, CMOS sensors have evolved and adapted to their environment as the dominant species.
This article was written by Gareth Powel, Marketing Manager, Professional Imaging Division, at Teledyne e2v (Saint-Egrève, France), and Pierre Fereyre, Image Sensor Specialist, Professional Imaging Division, e2v. For more information, Click Here.