With the proliferation of smart vision (SV) cameras and platforms, it is understandable that automation engineers begin to define vision technology capabilities based on the performance and features of these popular, easy-to-implement products. Although SV products effectively address a broad range of common vision applications and requirements, compromises are made to keep them easy to use and inexpensive, the most obvious being reduced vision processing power and performance.
Vision system solution providers, like Innovative Solutions, often see situations where clients have tried SV-based solutions only to find they could not achieve their performance goals. In such situations, it is possible to meet vision-performance goals with an advanced vision system. But as Figure 1 shows, a good machine vision system involves more than just the hardware.
Front-End Vision Components
Basic front-end design begins with proper lighting. Effective lighting implementation can be an art as much as a science, so it is important to experiment with various light types and configurations. Don’t forget to test actual product samples when optimizing lighting design.
It is critical to pay attention to spectral output. For example, white LEDs are sometimes not effective for color analysis because their spectral output is spiked and shifted toward the blue part of the visible spectrum. Using NIR lighting, and lights in combination with lens filters, can be effective for enhancing features of interest and suppressing others.
Optics is usually the technical area most neglected during machine vision system development. This is unfortunate because lenses have a huge impact on overall system performance. Optical performance is best defined with modulation transfer function (MTF) curves that show how resolution varies over the field of view, at different wavelengths, at different working distances, and at different aperture settings.
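The modulation an MTF curve plots at a given spatial frequency is, at its core, the Michelson contrast of the imaged line-pair pattern. The sketch below illustrates that calculation; the two intensity profiles are hypothetical 8-bit samples standing in for the same bar target imaged at the center and edge of the field of view, not measured data:

```python
import numpy as np

def modulation(profile):
    """Michelson contrast of an intensity profile across a line-pair target."""
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical 8-bit profiles of the same bar target imaged at the center
# and at the edge of the field of view.
center = np.array([30, 220, 30, 220, 30, 220], dtype=float)
edge = np.array([80, 170, 80, 170, 80, 170], dtype=float)

print(modulation(center))  # 0.76 -- high contrast at the center
print(modulation(edge))    # 0.36 -- contrast falls toward the edge
```

Plotting this modulation against line pairs per millimeter, across field positions and wavelengths, is what a full MTF characterization provides.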
Cameras increasingly incorporate larger sensors, and sensors with very small pixel structures, making selection of an adequately performing lens all the more critical to achieving maximum system performance. Make sure your lens is large enough to fit the entire sensor within the “sweet spot” at the center of the field of view (Figure 2).
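A quick sanity check on lens-to-sensor fit is to compare the sensor diagonal against the lens's rated image circle. In this sketch, the 8.8 × 6.6 mm dimensions are the nominal 2/3-inch sensor format; the 11 mm image circle is a hypothetical datasheet value:

```python
import math

sensor_w_mm, sensor_h_mm = 8.8, 6.6   # nominal 2/3-inch format sensor
sensor_diagonal = math.hypot(sensor_w_mm, sensor_h_mm)

lens_image_circle_mm = 11.0           # hypothetical lens datasheet value

print(sensor_diagonal)                          # 11.0 mm
print(sensor_diagonal <= lens_image_circle_mm)  # True: the sensor is covered
```

Note that merely covering the sensor is the minimum bar; corner resolution within the image circle still needs to be verified against the MTF data.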
Machine vision cameras are available with two basic image sensor technologies: CCD and CMOS. CCD-based cameras typically provide superior light sensitivity and have lower pixel response variations compared to CMOS cameras. CMOS cameras, on the other hand, provide faster frame rates, have greater capabilities for partial frame readout, do not bloom or smear under bright light conditions, are lower cost, and consume less power.
For both sensor types, dynamic range is the most common specification provided by camera manufacturers to benchmark imaging performance. This metric provides insight into the capability of a single pixel to distinguish levels of gray, but it provides no information about intensity variations among all the pixels when they are read out into an image frame. Flat field and pixel defect correction, incorporated in many camera designs, compensates for these pixel output variations.
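The classic flat-field correction uses a dark frame to remove fixed-pattern offset and a flat (uniformly illuminated) frame to divide out per-pixel gain differences. A minimal sketch with hypothetical 2×2 frames, where one pixel responds 20% more strongly than the others:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Subtract the fixed-pattern offset (dark frame), divide out per-pixel
    gain (flat frame), and rescale to the mean gain."""
    gain = flat - dark
    return (raw - dark) * gain.mean() / gain

# Hypothetical frames: the bottom-right pixel is 20% more sensitive.
dark = np.zeros((2, 2))
flat = np.array([[100.0, 100.0],
                 [100.0, 120.0]])
raw = np.array([[50.0, 50.0],
                [50.0, 60.0]])   # a uniform scene seen through that variation

corrected = flat_field_correct(raw, dark, flat)
print(corrected)  # all pixels equal (52.5): the gain variation is removed
```

In-camera implementations perform the same arithmetic per pixel using stored calibration frames.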
When relative movement exists between the parts to be inspected and the camera, using a line scan camera should be considered. Traditional line scan cameras are often light-starved, especially in fast-moving systems, because the amount of time a pixel has for collecting light is minimal. High sensitivity line scan cameras use time delay integration (TDI), a technology pioneered by Teledyne DALSA, to greatly increase light sensitivity, opening up new application areas for line scan cameras.
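The sensitivity gain from TDI can be sketched with a simplified, digital-summation model in which each stage's signal adds linearly while independent read noise adds in quadrature. The stage count and electron figures below are hypothetical, chosen only to illustrate the scaling:

```python
# Simplified digital-TDI model (hypothetical numbers): signal from each of
# N stages adds linearly, while independent read noise adds in quadrature.
stages = 96                 # TDI stages summed per output line
line_signal = 40.0          # electrons collected in one line period
read_noise = 10.0           # electrons RMS per stage readout

snr_single = line_signal / read_noise
snr_tdi = (stages * line_signal) / (read_noise * stages ** 0.5)

improvement = snr_tdi / snr_single
print(improvement)  # sqrt(96), roughly a 9.8x SNR improvement
```

Under this read-noise-limited assumption the improvement scales as the square root of the stage count; charge-domain TDI implementations can do even better because the accumulated charge is read out only once.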
Camera models incorporating the same sensor, in addition to having image quality variances, often have feature and capability differences. The camera software will provide insight into available features and operational modes. Additionally, a variety of digital interfaces are available: some (e.g., USB2, USB3, GigE Vision) do not require a frame grabber, while high-bandwidth interfaces (e.g., CameraLink, CameraLink HS, and CoaXPress) do.
Back-End Vision Components
Back-end vision components include processing and computer hardware, as well as image-processing development software. The latter is a software development kit (SDK) that allows engineers to build and modify the application software needed to ultimately run the system.
Frame grabbers, in addition to functioning as high-bandwidth camera interfaces, can provide features, like advanced I/O capability and embedded processing, that are useful for addressing high performance vision applications. Some frame grabbers incorporate powerful FPGAs that can be used for preprocessing image data, thereby reducing host processor burden. Flat field correction, thresholding, color space conversions, and convolution calculations are examples of the real-time processing effectively performed by frame-grabber FPGAs.
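Two of those operations, thresholding and a 3×3 neighborhood filter, are sketched below in plain NumPy so the per-pixel arithmetic is visible. An FPGA performs the same multiply-accumulate work in hardware as the frame streams in; the 4×4 test image is hypothetical:

```python
import numpy as np

def threshold(img, t):
    """Binary threshold -- a typical per-pixel FPGA pipeline stage."""
    return (img >= t).astype(np.uint8) * 255

def filter3x3(img, kernel):
    """Naive 3x3 neighborhood filter (cross-correlation form); an FPGA does
    the same multiply-accumulate per pixel as the frame streams through."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = (img[y:y + 3, x:x + 3] * kernel).sum()
    return out

img = np.array([[10, 10, 10, 10],
                [10, 90, 90, 10],
                [10, 90, 90, 10],
                [10, 10, 10, 10]], dtype=float)

binary = threshold(img, 50)                        # 90s -> 255, 10s -> 0
blurred = filter3x3(img, np.full((3, 3), 1 / 9))   # 3x3 box blur
print(binary)
print(blurred)
```

Offloading such stages to the frame grabber means the host processor receives data that is already corrected, binarized, or filtered.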
The type and number of host based processors incorporated in a vision system computer have a significant impact on performance. Graphic Processing Units (GPUs) can provide blazing processing speed for a variety of image processing operations.
Vision development software is perhaps the most defining component for what can be accomplished with an advanced vision development project. There is a learning curve associated with any advanced software product, and vision development software should not be assessed based solely on the project at hand, but with an eye toward future projects and requirements.
Software performance and overall capabilities are obvious SDK attributes to evaluate. Questions to explore with regard to performance include:
- Are multiple processor cores supported and seamlessly utilized?
- Does the software support GPU acceleration?
- How fast are the processing functions? (Pattern matching is a good test.)
- Does the SDK support embedded processing hardware, such as FPGAs?
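The speed question above is best answered empirically. The harness below times a vision operation best-of-N to filter out scheduler noise; `ncc_score` is a hypothetical stand-in workload (a single normalized cross-correlation score at one offset), not a real pattern-matching tool, which would search the whole image over position, angle, and scale:

```python
import time
import numpy as np

def benchmark(fn, *args, repeats=20):
    """Best-of-N timing to filter out scheduler noise."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - t0)
    return min(times)

def ncc_score(image, template, x, y):
    """Normalized cross-correlation score of the template at one offset."""
    patch = image[y:y + template.shape[0], x:x + template.shape[1]]
    p = patch - patch.mean()
    t = template - template.mean()
    return (p * t).sum() / np.sqrt((p * p).sum() * (t * t).sum())

rng = np.random.default_rng(0)
image = rng.random((480, 640))
template = image[100:132, 200:232].copy()

score = ncc_score(image, template, 200, 100)
print(score)  # ~1.0: exact match at the known location
elapsed = benchmark(ncc_score, image, template, 200, 100)
print(f"{elapsed * 1e6:.0f} microseconds per score")
```

Running the same benchmark against each candidate SDK's pattern-matching tool, on representative images, gives an apples-to-apples speed comparison.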
Ideally, a vision SDK includes mature tools and functions for implementing advanced image processing methodologies and algorithms, such as variation-model comparison, CAD file matching/comparison, classification, texture analysis, contour processing, localized segmentation, and 3D processing/analysis. Additionally, calibration capabilities and methodology should be examined, as calibration impacts both accuracy and measurement consistency.
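At its simplest, spatial calibration maps pixels to real-world units. The sketch below assumes a hypothetical dot-grid target with known 5 mm dot spacing; a least-squares fit recovers the scale and origin together, and the residuals expose how consistent the calibration is:

```python
import numpy as np

# Hypothetical dot-grid target: known dot positions (mm) vs. measured
# centroid positions (px) along one image axis.
known_mm = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
measured_px = np.array([12.0, 62.1, 111.9, 162.0, 212.0])

# Least-squares fit mm = a*px + b recovers scale and origin in one step.
a, b = np.polyfit(measured_px, known_mm, 1)
residual = known_mm - (a * measured_px + b)

print(a)                        # ~0.1 mm per pixel
print(np.abs(residual).max())   # worst-case fit error, in mm
```

Full SDK calibration tools go well beyond this linear model, correcting lens distortion and perspective as well, which is exactly why their methodology deserves scrutiny.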
Support for a broad array of hardware types, multiple operating systems and programming environments, and socket and serial port data communication, all increase a vision software product’s flexibility.
Sustainability and support are also important software considerations. Working with development software that is well documented and supported, includes extensive source code examples, and is continuously updated with operating-system and hardware-driver changes substantially reduces long-term costs for deployed systems.
It is imperative that an advanced vision development project be well defined from the beginning. Start by defining what needs to be measured, with what required accuracy, and at what required speed. Physical envelope and electrical limitations are important as well. These parameters, to a high degree, will determine the optics, lighting and camera selection.
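Those requirements translate directly into arithmetic that constrains component selection. As a sketch with hypothetical numbers, suppose a 100 mm field of view, a ±0.05 mm measurement tolerance, and a rule of thumb of about 4 pixels spanning the tolerance band:

```python
import math

# Hypothetical requirements: 100 mm field of view, +/-0.05 mm measurement
# tolerance, and a rule of thumb of ~4 pixels spanning the tolerance band.
fov_mm = 100.0
tolerance_mm = 0.05
pixels_per_tolerance = 4

mm_per_px_needed = tolerance_mm / pixels_per_tolerance     # 0.0125 mm/pixel
sensor_pixels_needed = math.ceil(fov_mm / mm_per_px_needed)
print(sensor_pixels_needed)  # 8000 pixels across the field of view
```

A result like 8000 pixels across the field immediately steers the design toward a high-resolution area camera or a line scan approach, before any hardware is purchased.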
Writing a clear specification of a system’s requirements is critical to ensuring successful development. An advanced vision project should be broken into development phases, with the first, and perhaps most important, being a feasibility analysis focused on proving the vision design concept from both a vision component and an imaging algorithm viewpoint.
Vision technology is advancing faster than ever, and its impact on manufacturing and automation is in its infancy. Acquiring knowledge and familiarity with advanced vision capabilities, beyond a smart vision product context, is a worthwhile and strategic endeavor. A thoughtful approach to advanced vision system development is the best way to ensure meeting your system goals at the most reasonable initial and lifetime cost.