A mere 20 years ago, machine vision system designers did all they could to avoid color imaging applications due to higher costs and greater system and software complexity. But as microprocessors increased their computational power, color applications became less daunting, although designers still preferred to create color applications using filters and monochrome cameras whenever possible.

As the cost of color cameras dropped, along with the cost per gigaflop of processing power, color vision became much more common. Today, industrial camera designers and machine vision lighting companies are working to add more channels to their standard RGB solutions, making multispectral applications financially viable for a much wider range of machine vision systems.

Multispectral and Channel Counts

For those new to the subject, “multispectral imaging” generally refers to a system capable of capturing between three and ten wide spectral bands: for example, blue (B), green (G), and red (R) channels spanning 400–500 nm, 500–600 nm, and 600–700 nm, respectively. By carefully analyzing image data within multiple spectral bands, machine vision systems can solve color applications that were beyond the capability of traditional RGB machine vision solutions. For example, companies are beginning to use multispectral imaging for fruit and vegetable inspection and evaluation, material recycling, pharmaceutical product sorting, and other applications.
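The wide bands in the example above can be expressed as a simple lookup table. This is only an illustrative sketch; the 100 nm boundaries come from the text, while the names are hypothetical:

```python
# Nominal wide spectral bands from the example: B, G, and R, each 100 nm wide.
BANDS = {"B": (400, 500), "G": (500, 600), "R": (600, 700)}

def band_for(wavelength_nm):
    """Return the label of the band containing the wavelength, or None."""
    for label, (low, high) in BANDS.items():
        if low <= wavelength_nm < high:
            return label
    return None
```

A multispectral system simply extends this table with additional, possibly narrower, bands such as NIR.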

Figure 1. For its area array cameras, JAI employs a prism and three imagers to capture individual R, G, and B channels. The company uses the same concept in area array cameras capable of simultaneously processing RGB images using both a Bayer-filtered and a monochrome imager for NIR. (Image courtesy JAI)

A number of different methods can be used to isolate specific wavelengths of light reflected from a product. Perhaps the simplest is to use multiple monochrome cameras, each with a different bandpass filter. While effective, image alignment issues, the cost of multiple cameras and interfaces, and timing and software requirements may make this option impractical. Because each sensor sits at a different location, such implementations suffer in applications where the multispectral images must be precisely registered to one another. This registration problem can be overcome by placing a prism in the optical path of the object to be imaged, allowing multiple image sensors to capture different wavelengths of the same scene.
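The registration burden of a multi-camera setup can be sketched as follows. The whole-pixel offsets and NumPy-based correction are a simplification (real systems calibrate subpixel warps), and all names here are illustrative:

```python
import numpy as np

def register_bands(bands, offsets):
    """Stack per-sensor band images into one multispectral cube, shifting
    each image by the (dy, dx) offset measured during calibration.
    Hypothetical calibration data; real systems use subpixel warps."""
    cube = np.zeros((len(bands),) + bands[0].shape, dtype=bands[0].dtype)
    for i, (img, (dy, dx)) in enumerate(zip(bands, offsets)):
        # Undo the sensor's displacement so all planes line up.
        cube[i] = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
    return cube
```

A prism-based camera avoids this step entirely, because all sensors view the scene through the same optical path.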

JAI (Glostrup, Denmark) has used this technique in both its area and line-scan cameras. JAI’s line-scan cameras can capture individual R, G, and B images plus a NIR channel. The prisms eliminate alignment issues such as off-axis viewing and the time-delay integration artifacts associated with tri-linear line-scan cameras, and reduce light loss due to photon absorption and reflectance by the filter itself (Figure 1).

Figure 2. Comparisons of the four spectrally independent R, G, B, and NIR channels used in the Piranha4 2k color and NIR line-scan camera from Teledyne Dalsa (a) and IMEC’s TDI-based CCD color line-scan sensor (b). (Images courtesy Teledyne Dalsa and IMEC)

The Piranha4 2k color and NIR line-scan camera from Teledyne Dalsa (Waterloo, Ontario) uses four spectrally independent R, G, B, and NIR channels with bandpass filters applied directly on top of the sensor to collect multispectral data at line rates up to 70 kHz (Figure 2a). The sensor registers each row of data in the time domain, using time delay and integration (TDI), rather than in the spectral domain as prism cameras do.

Another architecture that uses TDI places bandpass filters on each row of an area array, essentially turning an area array sensor into a multi- or hyperspectral “sweep” sensor. This approach, developed by IMEC (Leuven, Belgium), uses a TDI-based CCD color line-scan sensor, similar to the Teledyne DALSA approach (Figure 2b).

Illuminating Objects

While the discussion so far has focused on cameras, or receivers, every machine vision system begins with the illuminator and the photons it produces. For each of the camera architectures mentioned above, the traditional machine vision solution has been broadband spectral illumination, for example, white light, often from gas-discharge illuminators chosen for their flat spectral output. However, such lights are short-lived, their spectral output degrades significantly over time, and they consume considerable power while generating substantial heat, creating design and maintenance challenges.

Alternatively, the product itself can be illuminated at specific wavelengths to increase the contrast of particular reflected features. Just as filtering broadband light reflected from an object can highlight specific features, so can illuminating the object with discrete wavelengths and measuring the reflected light. Here again, a number of different lighting and camera configurations can be used.

Recognizing the need to drive numerous types of LED illuminators to accomplish this task, lighting vendors have developed new products. The RM140 multispectral light from Smart Vision Lights (Muskegon, Michigan) combines user-settable RGB illuminators plus NIR or SWIR chips in the same light. To manage the operation of multispectral illuminators, designers also need a light manager. Smart Vision Lights’ LLM Light Manager offers programmable control of multiple illuminators or wavelengths through an Ethernet-connected, browser-based interface used to set image sequences, program timing sequences, and reconfigure illumination modes. Because the LLM provides both NPN and PNP inputs, it can be driven directly by most of today’s smart cameras, delivering 2 A in continuous mode or 10 A in OverDrive™ mode.
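A programmed multispectral lighting sequence of the kind a light manager executes can be modeled as below. The channel list, exposure times, and the strobe/capture callbacks are hypothetical, not the LLM's actual interface, which is configured through its browser-based pages:

```python
# Hypothetical capture cycle: one LED channel strobed per frame.
SEQUENCE = [("R", 0.002), ("G", 0.002), ("B", 0.004), ("NIR", 0.008)]

def run_cycle(strobe, capture):
    """Pulse each channel for its exposure time and grab one frame per pulse."""
    frames = {}
    for channel, exposure_s in SEQUENCE:
        strobe(channel, exposure_s)            # fire a single LED channel
        frames[channel] = capture(exposure_s)  # expose the camera during the pulse
    return frames
```

In a real system the strobe and camera trigger would be synchronized in hardware, since software timing cannot guarantee the microsecond alignment a strobed multispectral capture needs.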

Lighting controllers and illumination sources capable of driving multiple channels, including infrared wavelengths, are also on the horizon. Smart Vision Lights already has an eight-channel controller in development and has shown how such a controller can be used with both visible and NIR/SWIR lighting to produce multispectral images. Such lighting controllers will be especially useful for controlling illumination in web applications. With cameras such as the Linea color line-scan camera from Teledyne Dalsa, different light sources, lighting angles, exposure times, and gains can be used to extract multispectral data in a single pass. Recently, Teledyne Dalsa announced a new, cost-effective Linea TL multispectral camera that uses time-domain separation instead of filters to separate each channel. Solutions like this will require precise light control to guarantee adequate throughput for a new class of cost-effective line-scan multispectral applications.
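Time-domain channel separation on a line-scan stream amounts to de-interleaving lines captured under a repeating strobe order. A minimal sketch, assuming a fixed order and a line count that is a whole multiple of the channel count (illustrative names only, not any vendor's API):

```python
import numpy as np

def deinterleave(lines, n_channels):
    """Split an interleaved line-scan stream, one lighting channel per line,
    into per-channel images. Assumes a fixed repeating strobe order and a
    line count that is a whole multiple of n_channels."""
    lines = np.asarray(lines)
    if lines.shape[0] % n_channels != 0:
        raise ValueError("incomplete strobe cycle in stream")
    # Row i, i+n, i+2n, ... all belong to channel i of the strobe cycle.
    return [lines[i::n_channels] for i in range(n_channels)]
```

The precise light control mentioned above is what keeps the strobe cycle locked to the line trigger, so that each row really does belong to the channel this slicing assumes.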

Multispectral machine vision systems are still in their infancy. However, with the numerous developments in camera designs, filters, lighting controllers, and lighting products, multispectral systems will be used in an increasing number of integrated machine vision systems. These may encompass not only traditional R, G, and B cameras but also those targeted toward ultraviolet light, NIR wavelengths, and polarized light. Similarly, the filters and illuminants required to image these different spectra will become more commonplace, with sophisticated multispectral LED lighting and lighting controllers offered in a number of different configurations.

This article was written by Matt Pinter, Co-Founder and Director of Engineering, Smart Vision Lights (Muskegon, MI).