Traditional digital cameras are built around an image sensor, typically either a Charge-Coupled Device (CCD) or, more commonly, a Complementary Metal-Oxide-Semiconductor (CMOS) sensor. In either case, the device integrates an array of photodiodes that convert photons to a current, which is then integrated over time and digitized. The sensing device is agnostic to the wavelength of the detected photons, as long as their energy is sufficient to create electron-hole pairs that can then be separated under an electric field.
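As a back-of-the-envelope illustration of that energy threshold, the longest detectable wavelength follows from λ = hc/E_g. The sketch below uses silicon's room-temperature bandgap of about 1.12 eV, a textbook value assumed for illustration rather than a figure from this article:

```python
# Cutoff wavelength above which photons lack the energy to create
# electron-hole pairs: lambda_c = h*c / E_g.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def cutoff_wavelength_nm(bandgap_ev: float) -> float:
    """Longest wavelength (in nm) a photodiode with this bandgap can detect."""
    return H * C / (bandgap_ev * EV) * 1e9

# Silicon (E_g ~ 1.12 eV) detects out to roughly 1.1 micrometers,
# which is why silicon sensors cover the visible and near-infrared.
print(f"{cutoff_wavelength_nm(1.12):.0f} nm")
```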
Oftentimes, it is desirable to know the wavelength of the collected photons. This information is typically inferred by applying an array of polymer color filters, known as a Color Filter Array (CFA), on top of the pixel array, such that the pixels within each group preferentially transmit different spectral bands of light. Typical “RGB” image sensors, as found in most cellphone or DSLR cameras, contain three such polymer types, which transmit red, green, and blue light in wide spectral bands. “False” colors are generated by measuring the relative intensity of light on neighboring pixels with different broad spectral sensitivities. Over the past few years, several companies have integrated larger collections of such polymer filters onto image sensors to form larger CFAs, such that, for example, 16 bands of light are collected for each pixel group. Such sensors are known as multispectral image sensors. They can typically discern a small number of spectral bands, and, while they can acquire images very quickly, they achieve spectral resolution at the cost of spatial resolution.
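The resolution trade-off can be seen in a minimal sketch of CFA sampling. The RGGB layout below is an illustrative assumption (one common Bayer arrangement), not a claim about any particular sensor:

```python
# Sketch of a Bayer-pattern CFA: each 2x2 pixel group samples red once,
# green twice, and blue once, so each color channel is sampled at a
# fraction of the full spatial resolution.
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Sample an H x W x 3 scene through an RGGB color filter array."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red pixels
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green pixels
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green pixels
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue pixels
    return mosaic

# A multispectral sensor extends the same idea: a 4x4 group with 16
# narrower filters yields 16 bands, but only 1/16 the sampling per band.
```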
Recently, a number of companies have introduced miniaturized spectrometers, and have even integrated them into cellphones. Such instruments are useful for interrogating the bulk properties of materials, but not the spatial distribution of those properties. For example, such a sensor may tell users that they are looking at a green object, but a hyperspectral imager will show them a zucchini, reveal whether parts of the zucchini are beginning to ripen, and whether a disease is progressing in one region of the vegetable (Figure 1).
A number of other applications require finer spectral resolution than can be achieved with multispectral imaging, often together with high spatial resolution. Examples include the emerging field of precision agriculture, where the health or irrigation status of specific areas in agricultural land is inferred from spectral information; medical imaging, where spectral information has been shown to correlate with certain pathologies; and anticounterfeiting, where the combination of fine spectral and spatial resolution can help identify fake goods.
Hyperspectral Imaging Architectures
Hyperspectral imaging can be achieved in three general architectures (Figure 2). In a whiskbroom architecture, a simple point-detector spectrometer, which typically contains a diffractive element, mechanically scans the field of view using either mechanical stages or digital mirrors. The spectral information from each point in the field of view is stored to form a three-dimensional data structure known as a hyperspectral data cube. Such imagers can achieve fine spectral resolution, but their spatial resolution comes at the cost of long acquisition times, which scale with the number of pixels. Achieving fine motion control also often adds weight, cost, and sensitivity to motion.
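The scaling argument above can be made concrete with a sketch of a whiskbroom scan. Here `measure_spectrum` is a hypothetical stand-in for one point-spectrometer readout:

```python
# Sketch of whiskbroom acquisition: a point spectrometer dwells at every
# (x, y) position, so the number of dwell steps equals the pixel count.
import numpy as np

def measure_spectrum(x: int, y: int, n_bands: int) -> np.ndarray:
    """Placeholder for a single point-spectrometer measurement."""
    return np.random.rand(n_bands)

def whiskbroom_scan(width: int, height: int, n_bands: int) -> np.ndarray:
    cube = np.empty((height, width, n_bands))  # the hyperspectral data cube
    for y in range(height):          # mechanical scan visits each point
        for x in range(width):       # in the field of view, one at a time
            cube[y, x, :] = measure_spectrum(x, y, n_bands)
    return cube

cube = whiskbroom_scan(width=32, height=32, n_bands=100)
# 32 x 32 = 1,024 dwell positions; doubling the spatial resolution in
# both axes quadruples the acquisition time.
```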
In pushbroom imagers, the diffractive element is extended in one dimension such that each point along it creates a spectral dispersion in the orthogonal dimension. In this way, a two-dimensional array of light is generated in which one axis corresponds to a physical dimension of the imaged object while the other axis contains spectral information. In order to generate a hyperspectral data cube, this linear sensor needs to scan the field of view. This sometimes occurs naturally, such as when the sensor is mounted on a satellite orbiting the Earth, or when the sensor images a production line with objects traversing its field of view. The benefit of such systems is their fast acquisition time per line and their ability to achieve high spectral resolution together with high spatial resolution in one dimension. However, when the object is stationary with respect to the camera, expensive scanning stages need to be built into the system, making it bulkier and more costly, with acquisition times that scale with the resolution along the scanning direction.
The third type of hyperspectral imager is known as a snapshot or staring imager. These devices acquire a hyperspectral image of the whole field of view at once, so their acquisition time does not depend on the spatial resolution. As such, they offer unique benefits to fields such as precision agriculture and medical imaging, where high resolution per image saves processing or transmission resources, which are often the most expensive elements of the system.
Snapshot Imagers for Mobile Hyperspectral Cameras
In a snapshot HSI system, a hyperspectral data cube is acquired by scanning a tunable filter in front of the staring sensor. A large number of tunable filters have been demonstrated to date. In liquid crystal tunable filters (LCTFs), a voltage alters the optical properties of a liquid crystal, thereby selecting which wavelengths are transmitted. Acousto-optic tunable filters use a crystal whose transmissivity can be modulated by an acoustic wave, and can typically be tuned faster than LCTFs. While these technologies offer some promise, their limited bandwidth, sensitivity to environmental conditions, and the non-linear behavior of the medium have proved to be obstacles to high-volume commercialization.
Interferometers have been used for many years to spectrally resolve light. They operate by splitting a light beam such that one part travels a different path length than the other. If the path length difference is an integer multiple of the wavelength, constructive interference takes place and that wavelength is transmitted. Notable interferometer configurations include the Michelson Interferometer, where light is split into two optical arms before interfering, and the Fabry-Perot Interferometer (FPI), where a pair of mirrors produces interference between beams that traverse the inter-mirror gap different numbers of times (Figure 3). The latter interferometer type can be miniaturized and has been deployed on space-borne vehicles without losing alignment.
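For an ideal Fabry-Perot cavity at normal incidence, that transmission condition reduces to the round trip 2d equaling an integer number of wavelengths, i.e. λ_m = 2d/m. The sketch below enumerates the transmitted peaks; the 600 nm gap and 400-1,000 nm band are illustrative values:

```python
# Transmission peaks of an ideal Fabry-Perot cavity at normal incidence:
# constructive interference occurs when 2 * gap = m * lambda for integer
# order m, so the transmitted wavelengths are lambda_m = 2 * gap / m.
def fp_peaks_nm(gap_nm: float, band=(400.0, 1000.0)) -> list:
    """Wavelengths (nm) within `band` transmitted by a cavity of gap `gap_nm`."""
    lo, hi = band
    peaks, m = [], 1
    while 2 * gap_nm / m >= lo:          # stop once peaks fall below the band
        lam = 2 * gap_nm / m
        if lam <= hi:
            peaks.append(round(lam, 1))
        m += 1
    return peaks

print(fp_peaks_nm(600.0))  # a 600 nm gap passes 600 nm (m=2) and 400 nm (m=3)
```

Scanning the gap shifts these peaks across the band, which is exactly how the cavity acts as a tunable filter; order-sorting filters are typically needed to isolate a single peak.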
In order to span a spectral range, the inter-mirror gap must be scanned. Several methods for achieving this have been demonstrated to date, including piezoelectric crystals and MEMS-membrane structures.
A major challenge in implementing such devices arises from the planarity and alignment required to achieve high spectral finesse. In order to limit transmission to a narrow spectral band, the mirrors must have high reflectivity, typically above 95% across the spectral band. This means that light beams undergo multiple reflections before exiting the cavity. This magnifies any imperfections, such as localized defects in the mirrors, as well as imperfect alignment of the mirrors. In terms of smoothness and flatness, typically 1-5 nm tolerances are required, which translate to λ/200 to λ/1000. Such smoothness specifications represent a major challenge in mirror manufacturing and polishing, especially for high-volume applications.
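The link between reflectivity and the number of round trips is captured by the cavity finesse, F = π√R/(1−R), a standard Fabry-Perot relation (not a formula from this article):

```python
# Finesse of a Fabry-Perot cavity versus mirror reflectivity R:
# F = pi * sqrt(R) / (1 - R). Higher finesse means narrower transmission
# peaks, but also more effective round trips, so every nanometer-scale
# mirror defect is sampled many times over.
import math

def finesse(r: float) -> float:
    return math.pi * math.sqrt(r) / (1.0 - r)

for r in (0.90, 0.95, 0.99):
    print(f"R = {r:.2f}  ->  finesse ~ {finesse(r):.0f}")
# At R = 0.95 the finesse is roughly 61, consistent with the multiple
# reflections and tight flatness tolerances described above.
```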
An even more severe challenge arises from the coplanarity requirements, which are also on the order of 1 nm. Typical aperture sizes in such devices are on the order of 10 mm, so these coplanarity requirements are 7 orders of magnitude smaller than the aperture size. In order to facilitate fast acquisition times for the data cube, inter-mirror gaps should be scanned at 100 to 1,000 gaps per second, all while maintaining 1 nm alignment. These demanding performance specifications have been demonstrated in recent years for a few high-end applications. Achieving them in a commercially viable, mechanically robust fashion has been the main hurdle to overcome before commoditizing hyperspectral imaging.
With those challenges in mind, TruTag Technologies developed and recently introduced a handheld, battery-operated hyperspectral imager which integrates a tunable filter and a 2.3-MPixel CMOS image sensor. The imager acquires 400 spectral bands at full 2.3-MPixel resolution in 2 seconds and handles the acquisition and processing of the data cubes internally with an embedded processor.
In addition to the challenges listed above, the camera needs to handle the tremendous amount of data generated during each acquisition. A camera with 2.3-MPixel spatial resolution, 10-bit depth, and 400 spectral bands will generate 9.2 Gigabits (1.15 Gigabytes) of data from a single scan. Moving this data into onboard storage and then processing it to extract the information of interest requires expensive components, draws power, and, importantly, takes time, which is undesirable in most applications. Various compression techniques can be employed to alleviate this data bottleneck. If a priori information exists on the objects of interest and they are sparse either spectrally or spatially, a real-time data minimization scheme can significantly reduce the amount of data that needs to be moved, processed, and stored.
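The raw data figure follows directly from multiplying pixels, bit depth, and band count, as the short check below reproduces:

```python
# Raw data volume of one hyperspectral scan: pixels x bit depth x bands.
def scan_bits(n_pixels: float, bit_depth: int, n_bands: int) -> float:
    return n_pixels * bit_depth * n_bands

bits = scan_bits(n_pixels=2.3e6, bit_depth=10, n_bands=400)
print(f"{bits / 1e9:.1f} Gigabits = {bits / 8 / 1e9:.2f} Gigabytes per scan")
# Matches the article's figure of 9.2 Gigabits (1.15 Gigabytes).
```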
The world of hyperspectral imaging is undergoing significant changes with the advent of new technologies in both the optical and data-processing domains. As performance envelopes expand and dimensions and costs shrink, new applications are becoming feasible. Monolithic integration of hyperspectral cameras into cellphones is now on the horizon, and, if applications arise that justify such developments, we are certain to hear more about this exciting field.
This article was written by Hod Finkelstein, Chief Technology Officer of TruTag Technologies (Emeryville, CA). For more information, contact Dr. Finkelstein at