For many years now, CMOS (Complementary Metal-Oxide-Semiconductor) based sensors have dominated the high-speed imaging market, and cameras are being released with ever-higher headline imaging rates. The fundamental limit on the speed of CMOS devices is the time taken to read out the image data from the sensor, which restricts the full-frame capture rate to a few tens of thousands of frames per second. Faster frame rates can be achieved by reading out smaller areas of the image, but this severely limits the usefulness of these devices at very high frame rates.
Two main approaches have been used to increase the speed of operation. The first is to convert the pixel charge from analog to digital locally and read out the data through parallel buses using LVDS (low-voltage differential signaling). This has the major advantage of simplifying the external circuitry, but readout speed is then limited by the bandwidth of the digital interface. In addition, a significant amount of useful light-sensing area has to be sacrificed to accommodate the data conversion and storage functions. The second (and potentially faster) approach is to use parallel analog outputs, with digitization carried out in parallel off-chip; this adds significantly to the complexity of the camera system. Readout rate here is limited by the capacitance of the analog buses, which can also cause image artifacts due to crosstalk between these sensitive connection paths.
There also remains a trade-off between spatial resolution and frame rate, set by the available output bandwidth. This trade-off does not preclude operation at one million frames per second or more, but it means that images at such rates are reduced to just a few hundred pixels per frame. Such resolution is of limited value to most users, and of no use for applications that need fine image detail, such as Digital Image Correlation, where a speckle pattern is applied to the sample and displacements down to 1/100th of a pixel are measured by correlating the pattern from one image to the next.
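This trade-off is simple arithmetic: at a fixed pixel throughput, raising the frame rate proportionally shrinks the pixels available per frame. The sketch below assumes an illustrative 1 Gpixel/s readout throughput; this is not a figure for any particular sensor.

```python
# Resolution vs. frame-rate trade-off at a fixed readout throughput.
# The 1 Gpixel/s figure is an illustrative assumption, not a sensor spec.

PIXEL_THROUGHPUT = 1e9  # pixels/s the readout chain sustains (assumed)

def pixels_per_frame(frame_rate_hz: float) -> int:
    """Maximum number of pixels per frame at a given frame rate."""
    return int(PIXEL_THROUGHPUT / frame_rate_hz)

for fps in (1e3, 1e4, 1e5, 1e6):
    print(f"{fps:>9,.0f} fps -> {pixels_per_frame(fps):>10,} pixels/frame")
```

Under this assumption, one million frames per second leaves only about 1,000 pixels per frame (roughly a 32 × 32 image), the same order of magnitude as the figure quoted above.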
The best way of overcoming the readout bandwidth limitation is not to read out the images in real time at all. By adding storage elements within the sensor, the capture rate is not compromised by waiting for the previous image to be removed from the sensor. Once a sequence of images has been captured, readout can proceed at modest rates, simplifying the overall system design.
An early approach to high-speed image sensors (using CCD devices) was to employ frame-transfer technology, but with strips of pixels along the readout axis optically masked. By shifting each pixel along the readout register, the acquired image was moved under the optical mask and stored. After a second exposure period, the next image could likewise be shifted along the readout axis and stored under the mask. In this way, 16 or more images could be stored on the chip before final readout at normal rates. This approach gives very high acquisition speeds for a relatively small number of frames, albeit at the expense of usable image pixels. Extending the concept to the CMOS domain using local storage has produced sensors that store up to 100 frames at each photosite. However, this again comes at the expense of light-sensitive area, and therefore sensitivity: the more storage nodes there are in each pixel, the less area is available for detecting light. Additionally, the number of pixels across the sensor has had to be reduced to accommodate the local storage and digitization elements.
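The sensitivity cost of in-pixel storage can be expressed as a fill-factor estimate: each storage node consumes pixel area that would otherwise sense light. The pixel and node areas below are assumed, illustrative values, not measurements of any real sensor.

```python
# Fill-factor estimate for in-pixel storage. All geometry numbers are
# assumed illustrative values, not specifications of a real device.

PIXEL_AREA_UM2 = 900.0  # e.g. a 30 um x 30 um pixel (assumed)
NODE_AREA_UM2 = 4.0     # area of one storage node (assumed)

def fill_factor(n_storage_nodes: int) -> float:
    """Fraction of the pixel left light-sensitive after adding storage."""
    sensitive = PIXEL_AREA_UM2 - n_storage_nodes * NODE_AREA_UM2
    return max(sensitive, 0.0) / PIXEL_AREA_UM2

for n in (0, 16, 50, 100):
    print(f"{n:>3} storage nodes -> fill factor {fill_factor(n):.0%}")
```

With these assumed numbers, 100 storage nodes per pixel leave just over half the pixel light-sensitive, illustrating why sensitivity falls as on-chip frame storage grows.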
Increasing the frame readout rate of image sensors is fundamentally limited by the physics of the devices: large-area arrays have high-capacitance output structures, which restrict the bandwidth available to analog readout techniques. High-speed digital readout is limited by the number of parallel paths that can be accommodated in the physical device and by the number of bits per frame that must be transmitted over each path.
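The digital readout bound stated above can be written directly: the aggregate bit rate of the parallel paths, divided by the bits per frame, caps the frame rate. The lane count, lane speed, and bit depth below are assumed illustrative values.

```python
# Digital-readout bound: frame rate is capped by the aggregate bit rate
# of the parallel output paths. All numbers here are assumed examples.

def max_frame_rate(n_lanes: int, lane_bps: float,
                   pixels_per_frame: int, bits_per_pixel: int) -> float:
    """Upper bound on frame rate given the digital output bandwidth."""
    bits_per_frame = pixels_per_frame * bits_per_pixel
    return (n_lanes * lane_bps) / bits_per_frame

# e.g. 64 LVDS lanes at 1 Gb/s each, 1-megapixel frames, 12-bit pixels
fps = max_frame_rate(64, 1e9, 1_000_000, 12)
print(f"{fps:,.0f} frames/s")  # ~5,333 frames/s with these assumptions
```

Even a generous 64 Gb/s of aggregate bandwidth limits full-megapixel readout to a few thousand frames per second, which is why continuous readout cannot reach the million-frame-per-second regime.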
The earliest ultra-high-speed image sensors used CCD technology to temporarily store a number of images within linear CCD structures. This, however, came at the expense of area within the pixel field, and readout was slow because of the serial nature of the CCDs themselves. An example of this approach is the Shimadzu HPV2 camera, whose CCD sensor consisted of 312 × 260 pixels, each with 100 frames of CCD storage. Depending on image content, moiré artifacts can be introduced by the pattern of exposed pixels, especially in areas with a lot of fine detail.
A later alternative to CCDs used capacitors as the storage element in a CMOS sensor: charge from the photodiode is converted to a voltage and stored in an array of CMOS capacitors. The disadvantage of this approach is the comparatively large area the capacitors require. The Shimadzu HPVX camera, for example, has 200 frames of storage for each pixel, organized into banks outside the photosensitive pixel field, which makes optical screening much easier. But to achieve this, as much as 50% of the total sensor area has to be sacrificed.
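As a rough illustration of that area cost, the sketch below picks a capacitor area and pixel area (both assumed values, not HPVX specifications) such that 200 storage frames per pixel consume about half the die, the fraction quoted above.

```python
# Die-area split between the photosensitive array and off-pixel capacitor
# storage banks. Capacitor and pixel areas are assumed illustrative
# values chosen for the example, not HPVX specifications.

PIXEL_AREA_UM2 = 1000.0  # photosensitive area per pixel (assumed)
CAP_AREA_UM2 = 5.0       # area of one storage capacitor (assumed)

def storage_area_fraction(frames_per_pixel: int) -> float:
    """Fraction of total die area taken by the per-pixel storage banks."""
    storage = frames_per_pixel * CAP_AREA_UM2
    return storage / (storage + PIXEL_AREA_UM2)

print(f"{storage_area_fraction(200):.0%}")  # prints 50% with these numbers
```

The point of the arithmetic is that storage area scales linearly with the number of stored frames, so deep burst memories inevitably claim a large share of the die even when the capacitors are moved outside the pixel field.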