With the proliferation of smart vision (SV) cameras and platforms, it is understandable that automation engineers begin to define vision technology capabilities based on the performance and features of these popular, easy-to-implement products. Although they are effective for a broad range of common vision applications and requirements, SV products make compromises to stay easy to use and inexpensive, the most obvious being reduced vision processing and performance capabilities.

Automated inspection of color print materials using an advanced machine vision system.
Vision system solution providers like Innovative Solutions often see situations where clients have tried SV-based solutions only to find they could not achieve their performance goals. In such situations, it is often possible to meet vision-performance goals with an advanced vision system. But as Figure 1 shows, a good machine vision system involves more than just the hardware.

Front-End Vision Components

Basic front end design begins with proper lighting. Effective lighting implementation can be an art as much as a science, so it is important to experiment with various light types and configurations. Don’t forget to test actual product samples when optimizing lighting design.

It is critical to pay attention to spectral output. For example, white LEDs are sometimes not effective for color analysis because their spectral output is spiked and shifted toward the blue part of the visible spectrum. Near-infrared (NIR) lighting, and lights used in combination with lens filters, can be effective for enhancing features of interest while suppressing others.

Optics is usually the technical area most neglected during machine vision system development. This is unfortunate because lenses have a huge impact on overall system performance. Optical performance is best defined with modulation transfer function (MTF) curves that show how resolution varies over the field of view, at different wavelengths, at different working distances, and at different aperture settings.
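MTF at a given spatial frequency is the modulation (Michelson contrast) of the imaged pattern relative to the modulation of the target itself. A minimal sketch of that calculation, using hypothetical intensity counts for illustration:

```python
def modulation(i_max, i_min):
    """Michelson contrast of a pattern: (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

def mtf(object_modulation, image_modulation):
    """MTF at one spatial frequency: image modulation relative to
    object modulation. A perfect lens would return 1.0."""
    return image_modulation / object_modulation

# A full-contrast (modulation = 1.0) bar target imaged with
# Imax = 200 and Imin = 50 counts at some spatial frequency:
m_image = modulation(200, 50)   # 150 / 250 = 0.6
print(mtf(1.0, m_image))        # MTF = 0.6 at that frequency
```

Repeating this measurement across spatial frequencies, field positions, wavelengths, and apertures is what builds up the MTF curves described above.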

Cameras increasingly incorporate larger sensors, and sensors with very small pixel structures, making selection of an adequately performing lens all the more critical to achieving maximum system performance. Make sure your lens is large enough to fit the entire sensor within the “sweet spot” at the center of the field of view (Figure 2).

Machine vision cameras are available with two basic image sensor technologies: CCD and CMOS. CCD-based cameras typically provide superior light sensitivity and have lower pixel response variations compared to CMOS cameras. CMOS cameras, on the other hand, provide faster frame rates, have greater capabilities for partial frame readout, do not bloom or smear under bright light conditions, are lower cost, and consume less power.

For both sensor types, dynamic range is the most common specification camera manufacturers provide to benchmark imaging performance. This metric provides insight into the ability of a single pixel to distinguish levels of gray, but it says nothing about intensity variations among the pixels when they are read out into an image frame. Flat field and pixel defect correction, incorporated in many camera designs, reduces those pixel-to-pixel output variations.
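The standard flat-field correction subtracts a dark frame from each pixel and rescales by that pixel's response to uniform illumination. A minimal sketch on a row of pixels, with made-up counts for illustration:

```python
def flat_field_correct(raw, dark, flat):
    """Per-pixel flat-field correction:
    corrected = (raw - dark) * mean(flat - dark) / (flat - dark),
    which normalizes each pixel's gain to the average response."""
    flat_sub = [f - d for f, d in zip(flat, dark)]
    mean_flat = sum(flat_sub) / len(flat_sub)
    return [(r - d) * mean_flat / fs
            for r, d, fs in zip(raw, dark, flat_sub)]

# Two pixels with unequal gain viewing the same scene radiance:
dark = [10, 10]
flat = [90, 130]   # dark-subtracted flat responses: 80 and 120 (mean 100)
raw  = [50, 70]    # dark-subtracted signals: 40 and 60
print(flat_field_correct(raw, dark, flat))  # [50.0, 50.0] - variation removed
```

After correction, both pixels report the same value, which is the point: the frame-wide variation that dynamic range alone does not capture is compensated out.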

When relative movement exists between the parts to be inspected and the camera, a line scan camera should be considered. Traditional line scan cameras are often light-starved, especially in fast-moving systems, because the time each pixel has for collecting light is minimal. High sensitivity line scan cameras use time delay integration (TDI), a technology pioneered by Teledyne DALSA, to greatly increase light sensitivity, opening up new application areas for line scan cameras.
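The sensitivity gain of TDI comes from summing charge from the same scene line across N sensor stages before a single readout: signal grows roughly linearly with N, while read noise is incurred only once. A simplified SNR model (shot noise plus read noise, perfect stage synchronization assumed; the electron counts are illustrative):

```python
import math

def tdi_snr(signal_e, read_noise_e, stages):
    """SNR of an N-stage TDI line under a simplified noise model:
    signal sums over stages, shot noise grows as sqrt(N * signal),
    and read noise is added once at readout."""
    total_signal = stages * signal_e
    noise = math.sqrt(total_signal + read_noise_e ** 2)
    return total_signal / noise

# 100 e- collected per single-line exposure, 10 e- read noise:
print(round(tdi_snr(100, 10, 1), 2))   # 7.07 for a conventional line scan
print(round(tdi_snr(100, 10, 96), 2))  # far higher with 96 TDI stages
```

This is why TDI pays off most in exactly the light-starved, fast-moving applications described above.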

Camera models incorporating the same sensor often differ in features and capabilities, in addition to image quality. The camera software will provide insight into available features and operational modes. A variety of digital interfaces are also available: some (e.g., USB2, USB3, GigE Vision) do not require a frame grabber, while high-bandwidth interfaces (e.g., Camera Link, Camera Link HS, and CoaXPress) do.

Back-End Vision Components

Figure 1. Designers need to plan system elements, from scene lighting to operational software, carefully to achieve optimum performance. All components need to fit together with compatible specifications. (Source: C.G. Masi)
Back-end vision components include processing and computer hardware, as well as image-processing development software. The latter is a software development kit (SDK) that allows engineers to build and modify the application software needed to ultimately run the system.

Frame grabbers, in addition to functioning as high-bandwidth camera interfaces, can provide features such as advanced I/O capability and embedded processing that are useful for high performance vision applications. Some frame grabbers incorporate powerful FPGAs that can preprocess image data, reducing the host processor's burden. Flat field correction, thresholding, color space conversion, and convolution calculations are examples of real-time processing effectively performed by frame-grabber FPGAs.
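Two of the operations named above, thresholding and convolution, map well onto FPGA pipelines because each output pixel depends only on a fixed local neighborhood of the streaming input. A minimal software sketch of what gets offloaded (pixel values are illustrative; a real FPGA implements these as hardware pipelines, not loops):

```python
def threshold(image, level):
    """Binary threshold: a fixed per-pixel operation an FPGA can
    apply in real time as pixels stream off the camera."""
    return [[255 if px >= level else 0 for px in row] for row in image]

def convolve3x3(image, kernel):
    """3x3 convolution over interior pixels (no padding): a
    neighborhood operation also well suited to streaming hardware."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            row.append(acc)
        out.append(row)
    return out

img = [[10, 10, 10],
       [10, 200, 10],
       [10, 10, 10]]
box = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # unnormalized box filter
print(threshold(img, 128))   # [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
print(convolve3x3(img, box)) # [[280]]  (sum of all nine pixels)
```

Performing these steps on the frame grabber means the host CPU receives data that is already reduced or enhanced, which is the processing-burden savings described above.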


This article first appeared in the May 2014 issue of Photonics Tech Briefs Magazine.
