Machine vision plays a key role in the automation of production processes. Since the advent of vision in industrial manufacturing, there have been many advances in camera and image sensor technology. From the first monochrome cameras to the latest hyperspectral cameras, machine vision has been vital in solving key challenges such as object identification and differentiation, feature and color recognition, and multi-spectral or hyperspectral differentiation based on an object's spectral footprint, i.e., wavelength-based analysis. The cameras used in machine vision can be broadly classified into single-sensor and multi-sensor cameras.

Single-sensor cameras may be monochrome or color. The latter place a mosaic color filter array on top of the silicon chip; this type of color sensor is known as a Bayer RGB sensor. In a Bayer sensor camera, each pixel records only one of the three colors, so the output of a single pixel cannot specify the two missing color values on its own. Full color information can only be recovered with debayering algorithms that interpolate the missing information from neighboring pixels and provide an estimate. This is depicted in Figure 1. Because interpolation is only an estimate, the debayering process can misrepresent image detail; there is no guarantee of good results, which reflects the basic nature of estimation algorithms, and there is an inherent tradeoff between interpolation speed and quality.
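The interpolation described above can be sketched in a few lines. Below is a minimal numpy implementation of bilinear debayering, the simplest such scheme, for an assumed RGGB mosaic layout; function names are illustrative, and real cameras use more sophisticated (edge-aware) algorithms.

```python
import numpy as np

def conv2(img, k):
    """Naive 'same'-size 2D filtering with zero padding (kernel is symmetric)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def debayer_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (H, W) -> (H, W, 3)."""
    h, w = raw.shape
    masks = np.zeros((3, h, w))
    masks[0, 0::2, 0::2] = 1          # R sites: even rows, even cols
    masks[1, 0::2, 1::2] = 1          # G sites
    masks[1, 1::2, 0::2] = 1          # G sites
    masks[2, 1::2, 1::2] = 1          # B sites: odd rows, odd cols
    k = np.array([[0.25, 0.5, 0.25],
                  [0.5,  1.0, 0.5 ],
                  [0.25, 0.5, 0.25]])
    rgb = np.empty((h, w, 3))
    for c in range(3):
        # Normalized convolution: average the sampled values that exist
        # in each 3x3 neighborhood to fill the missing pixel sites.
        rgb[..., c] = conv2(raw * masks[c], k) / conv2(masks[c], k)
    return rgb
```

Even on a perfectly flat scene this reconstruction is exact, but near edges the averaged estimates diverge from the true values, which is the source of the false-color artifacts discussed below.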

Figure 1. Working principle of a Bayer RGB camera (left) and a prism-based camera (right).

As there is no significant price difference between monochrome and Bayer sensors, the latter have succeeded in replacing monochrome cameras in many low-cost applications where color requirements are fairly simple. Camera manufacturers have taken full advantage of these developments, and today it is relatively easy to mount such a sensor in a simple camera housing and add signal processing and a data interface. As market demand for superior color image quality grows, significant effort has gone into adding intelligent image processing algorithms on the camera head to repair the artifacts arising from the Bayer pattern. However, this improvement comes at the cost of critical factors such as sharpness, color accuracy, image noise and speed.

In some cases, single-sensor cameras can also be hyperspectral cameras, with mosaic-pattern filters on top of traditional image sensors providing a spectral resolution of roughly 16 to 25 bands. From the viewpoint of high-speed machine vision applications, however, hyperspectral imaging still has a long way to go: achieving high spatial resolution, transmitting and handling the large data volumes in real time, and keeping costs down all remain challenges. Furthermore, most applications within the scope of machine vision do not require the channel count and complexity of a hyperspectral camera.

In contrast to Bayer cameras, multi-sensor cameras are prism-based. Assembling the sensors onto the prism block requires very high precision and demands in-depth know-how and skill. The advantage of this technology is superior image quality that requires no image repair.

The prism block consists of multiple prism elements equipped with hard dichroic coatings that separate the incoming light. Figure 1 shows the separation of white light inside the prism block. The short wavelengths, i.e., the blue region of the spectrum, are separated first, followed by red, while green passes through the prism block. Because the light is separated before it reaches the sensors, loss of signal strength is minimized. As wavelength and frequency are inversely proportional, the prism design is optimized to give the blue component a shorter propagation distance than the red. In the case of RGB cameras, an infrared cut-off filter on top of the prism block separates the visible and IR components; it also helps avoid artifacts in the blue and green channels.

The value of using a prism block can be broadly classified into six points. These are:

True color accuracy: True color accuracy comes naturally to prism-based cameras due to their optical construction. Multi-sensor prism-based cameras have one image sensor per color separation (3 monochrome sensors in the case of RGB, 4 in the case of RGB + NIR). Every pixel captures true color information at full bit depth. True color forms the basis for better color differentiation, avoiding false color representation, reducing metamerism and providing better image contrast. True color also leads to better accuracy when converting camera RGB into other three-dimensional color spaces such as sRGB, Adobe RGB, CIEXYZ, CIELAB, CIELUV, etc.
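The color space conversions mentioned above are typically linear 3x3 matrix transforms. As a sketch, the standard linear-sRGB-to-CIEXYZ matrix (D65 white point) can be applied as follows; the assumption here is that the camera RGB has already been color-corrected to the sRGB primaries, a calibration step not shown.

```python
import numpy as np

# Standard linear sRGB (D65) -> CIE XYZ matrix. Applying it assumes the
# camera's RGB values are already calibrated to sRGB primaries.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def linear_srgb_to_xyz(rgb):
    """Convert linear sRGB values of shape (..., 3) to CIE XYZ."""
    return np.asarray(rgb, dtype=float) @ SRGB_TO_XYZ.T
```

The article's point is that this kind of transform is only as accurate as its input: per-pixel true color from a prism camera feeds the matrix real measurements, whereas interpolated Bayer values propagate their estimation errors into the target color space.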

Higher spatial resolution: A single sensor per color channel is also an advantage when it comes to spatial resolution, which is driven by pixel size: the smaller the pixel, the higher the resolution. In Bayer-pattern cameras, the interpolation neighborhood used for debayering has to be taken into account when determining the effective pixel size and calculating the resolution. For a typical 2x2 Bayer-pattern sensor, spatial sampling decreases by a factor of 4 for the blue and red channels and by a factor of 2 for the green channel. This also gives rise to interference (moiré) patterns, which do not occur with multi-sensor prism-based cameras (Figure 2). Due to the size of the prism, multi-sensor cameras are limited in sensor size and thus typically come with smaller pixel counts, so their field of view (FOV) is often smaller than that of Bayer-pattern cameras. Yet despite the lower pixel count, spatial resolution is higher than in Bayer-pattern cameras with the same pixel size. Thanks to the multi-sensor prism design, one sensor per color channel covers the same FOV, so every color channel is imaged at the full pixel resolution of the sensor.
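The per-channel sampling argument above can be made concrete with a little arithmetic. The function names below are illustrative; the factors follow directly from the 2x2 RGGB mosaic (one red, two green, one blue site per quad).

```python
def bayer_channel_samples(total_pixels):
    """Per-channel sample counts for a 2x2 RGGB Bayer mosaic:
    1 red, 2 green, 1 blue site per 4-pixel quad."""
    return {"R": total_pixels // 4,
            "G": total_pixels // 2,
            "B": total_pixels // 4}

def prism_channel_samples(total_pixels):
    """A prism camera dedicates a full sensor to each channel."""
    return {c: total_pixels for c in "RGB"}
```

For a 4-megapixel sensor, the Bayer camera truly samples only 1 M red, 2 M green and 1 M blue points and must interpolate the rest, while a 4 MP-per-sensor prism camera measures all 4 M points in every channel.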

Figure 2. Comparison of line charts captured with prism-based camera (left) and Bayer RGB camera (right).

Better spectral differentiation and low color crosstalk: The color filter arrays (CFAs) of Bayer RGB sensors and several other color sensor types, such as trilinear sensors, are made of color dyes or pigments. Due to the very nature of these materials, the spectral response of the blue filter extends into green and red, green extends into blue and red, and red extends into green. For machine vision applications that demand sharp spectral differentiation, this overlap contaminates each channel with unwanted signal. In addition to the CFA, the very nature of CMOS sensors leads to color crosstalk when photons falling on one pixel are falsely sensed by the pixels around it. The effect of color crosstalk can become even more visible during the debayering process, where the false pixel values are used for interpolation.
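Crosstalk of this kind is often modeled as a linear mixing of the true channel signals, which can be partially undone in software by inverting a calibrated mixing matrix. The sketch below uses made-up numbers purely for illustration; a real matrix comes from a calibration measurement.

```python
import numpy as np

# Illustrative 3x3 mixing matrix: each row describes how much of the
# true R, G, B signal ends up in one measured channel. These values are
# invented for demonstration, not measured from any real sensor.
MIX = np.array([
    [0.85, 0.10, 0.05],   # measured R = mostly true R, plus leakage
    [0.08, 0.84, 0.08],   # measured G
    [0.05, 0.10, 0.85],   # measured B
])

def correct_crosstalk(measured):
    """Estimate the true per-channel signal by inverting the mixing."""
    return np.linalg.solve(MIX, np.asarray(measured, dtype=float))
```

The catch is that inverting the matrix amplifies noise along with the signal, so the stronger the spectral overlap, the noisier the corrected image; steep dichroic filters reduce the overlap optically and leave little to correct.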

Figure 3. Typical Bayer RGB spectral response (left) vs response of a prism-based camera (right).

In multi-sensor prism cameras, dichroic interference filters are used instead. These filters have steep spectral edges and provide more efficient filtering than pigment- or dye-based ones. Their lifetime is also much longer, because the color selectivity is intrinsic to the construction of the hard microscopic layers. Unlike Bayer RGB cameras, where the sensor response essentially describes the camera response, the spectral response of a prism-based camera is a function not only of the sensor but also of the transmission of light through the prism.

Gain and exposure control: In camera technology, gain is the amplification of the signal. In a prism-based multi-sensor camera, every color channel is equipped with a separate sensor, so analog gain can be optimized independently for each channel. Furthermore, multi-sensor cameras also allow the exposure time of each sensor to be adjusted separately. Each color channel can thus be balanced in both gain and exposure time to achieve an optimal signal-to-noise ratio. This is depicted in Figure 4. Bayer cameras offer per-channel adjustment only through gain, which often results in higher overall noise than when both methods are available.
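The balancing logic can be sketched as follows. This is an illustrative helper, not a real camera API: it assumes a linear sensor response, prefers exposure over gain (longer exposure collects more photons, while gain amplifies noise along with signal), and falls back to gain only when the exposure budget, e.g. the line period, is exhausted.

```python
def balance_exposures(channel_means, target, base_exposure_us, max_exposure_us):
    """Per-channel exposure (and residual gain) to hit a target grey level.

    channel_means: measured mean grey level per channel at base_exposure_us.
    Assumes a linear sensor response; names are illustrative.
    """
    settings = {}
    for ch, mean in channel_means.items():
        scale = target / mean                      # linear response assumed
        exposure = base_exposure_us * scale
        if exposure <= max_exposure_us:
            settings[ch] = {"exposure_us": exposure, "gain": 1.0}
        else:
            # Exposure capped (e.g. by the line rate): make up the
            # remainder with analog gain, at the cost of extra noise.
            settings[ch] = {"exposure_us": max_exposure_us,
                            "gain": exposure / max_exposure_us}
    return settings
```

A Bayer camera, with one global exposure for all channels, has only the gain branch available for per-channel balancing, which is the noise penalty the text describes.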

Figure 4. Optimizing grey levels with multi-sensor cameras (gain + exposure balanced or exposure balanced).

Flexibility: The addition of a prism allows more flexibility in the filtering of light and the selection of spectral bands than single-sensor cameras based on a Bayer RGB sensor. Different configurations are possible, such as 2 channels (RGB + NIR), 4 channels (R, G, B + NIR), or 3 channels (1 color and 2 NIR channels, for applications such as vegetation analysis that require specific bands at longer wavelengths). Custom coatings tailored to the requirements of an application can also be realized with the prism-based approach.

High speed: The latest multi-sensor prism-based line scan cameras can reach speeds of 66,000 lines/sec in a three-channel (RGB) configuration, and area scan cameras can run at up to 79 fps with a 3-channel, 1.6 MP resolution camera. These speeds are sufficient for challenging applications such as high-speed print inspection; foil, paper and banknote inspection; sorting of granular items such as rice and beans; and differentiating objects moving at unknown speeds, as in tea leaf, tobacco leaf and cotton sorting.
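Speeds like these translate into substantial data rates at the interface. A quick back-of-the-envelope check, taking the 66,000 lines/sec figure from the text and assuming, purely for illustration, a 2048-pixel line width and 8 bits per pixel per channel:

```python
def line_scan_data_rate(lines_per_sec, pixels_per_line, channels,
                        bits_per_pixel=8):
    """Raw data rate in MB/s for a multi-channel line scan camera."""
    bytes_per_sec = lines_per_sec * pixels_per_line * channels * bits_per_pixel / 8
    return bytes_per_sec / 1e6

# 66,000 lines/s from the text; line width and bit depth are assumptions.
rate = line_scan_data_rate(66_000, 2048, 3)   # ~405.5 MB/s
```

Roughly 400 MB/s of raw RGB data under these assumptions, which is why interface bandwidth and real-time handling matter as much as the line rate itself.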

As the traditional machine vision industry merges with intricate measurement technologies, consistent, reliable, high-fidelity color imaging is playing a key role in industrial quality control. In the midst of this convergence, prism-based technology realizes the true potential of imaging thanks to its unique advantages and added value. This article has outlined six key points that underline the benefits of using prism-based multi-sensor cameras.

This article was written by Paritosh Prayagi, Global Product Manager, Line Scan Portfolio, JAI (San Jose, CA).


This article first appeared in the May, 2018 issue of Photonics & Imaging Technology Magazine.
