Solving a machine vision application, whether it involves quality inspection, part verification or any number of other tasks, requires weighing several factors. The most important step is analyzing the target object and its inspection environment, and then specifying what distinguishes “good” parts from “bad” parts. From this information, one can choose the optimal lighting, vision sensor and lens for the application at hand.

Selecting Lighting

When the optimal vision lighting, sensor and lens are employed, applications such as reading an etched bar code on an IC chip become simple to solve.
Selecting the proper lighting is the most critical part of creating a working vision solution. Robust lighting simplifies the configuring and running of a vision system. Optimal lighting creates adequate contrast between the feature(s) of interest and the background (everything else in the camera’s field-of-view). To be effective, lighting must be consistent and light pollution (noise from changing ambient light levels) must be eliminated. Light selection is an art that involves analyzing the optical properties of the part to be inspected. The four main optical properties are shape, surface texture, color and translucency. The goal is to define, in terms of these optical properties, how the feature(s) of interest differ from their background, and then choose a lighting technique that takes advantage of these differences.

For example, when trying to read a dot-peened barcode on an otherwise smooth piece of metal, the optical property that separates the dot-peened marks from the flat background is surface texture. Some lighting techniques (e.g., low-angle ring light and on-axis light) are particularly good at generating contrast in these situations and should be tried first.

Common off-the-shelf machine vision lights are available in various sizes, LED colors, and housings. The technologies can be roughly broken down into these three categories: back-lighting, bright-field lighting (area, on-axis, linear array or camera-mount ring lights) and dark-field lighting (low-angle ring or area lights).

Backlighting, which places the light source behind the object so that it shines directly into the camera, creates silhouettes of opaque objects and is useful for analyzing shapes or inspecting for holes. Backlighting is also used when looking for defects in translucent objects, or for detecting the degree of translucency.

In bright-field lighting, the light source is aimed roughly perpendicular to the target, so smooth objects (like a mirror) appear bright in the camera’s image. Ring lights are the most common bright-field light; they conveniently mount directly onto the camera and surround its lens with a band of light. Ring lights are often used to detect label presence or inspect date or lot codes.

Dark-field lighting positions the source at a low angle so that the light bounces off smooth objects and away from the camera, making smooth areas of the target appear dark in the image. Meanwhile, rough surfaces wind up reflecting some of the dark-field light into the camera, making rough areas appear bright in the image. Dark-field lighting is a good technique for inspecting raised (or lowered) features and textures. Area lights positioned 45 to 90 degrees off the camera axis can distinguish between rough and smooth surfaces to detect notches in ceramic rings or dents in metal tubing.

Determining Capabilities

Vision sensors use various location, vision and analysis tools to detect features of interest.
One of the most important aspects to consider when selecting a vision sensor is its imager. The imager is a solid-state device positioned behind the vision sensor’s lens, with a surface containing thousands of photosites that capture and record light intensity. In effect, an imager chip is a collection of thousands of tiny light meters. Imager resolution is determined by the number of photosites physically on the chip, which in turn determines the number of pixels in the image. The more pixels, the more detailed the image, allowing you to inspect finer features in a larger field of view, or to measure with greater accuracy. Photosites that collect a lot of light appear as bright pixels in the image; photosites that collect little light appear as dark pixels. You can also select a color imager, which measures both the intensity and the wavelength of light. Grayscale imaging, because of its efficiency and simplicity, is more commonly applied in industrial machine vision than color.
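To make the “photosites as light meters” idea concrete, here is a minimal Python sketch (an illustration, not any vendor’s actual firmware) that maps raw photosite light readings onto the 0–255 values of an 8-bit grayscale pixel; the `full_scale` parameter and the readings themselves are hypothetical.

```python
def to_8bit(readings, full_scale):
    """Map raw photosite light readings onto 0-255 grayscale pixel
    values: the more light a photosite collects, the brighter the pixel.
    Readings at or above full_scale saturate at white (255)."""
    return [min(255, round(255 * r / full_scale)) for r in readings]

# One row of hypothetical photosite readings (arbitrary units).
row = [5, 120, 480, 1000, 1400]
print(to_8bit(row, full_scale=1000))  # → [1, 31, 122, 255, 255]
```

Note how the last reading, which exceeds the sensor’s full-scale value, simply saturates at 255; this is the same clipping that makes over-lit areas of a real image wash out to pure white.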

A vision sensor’s toolset comprises specific software algorithms used to analyze an image in order to determine whether a target object passes or fails an inspection. The toolset can include various location, vision and analysis tools. Location tools are used to overcome variations in part presentation, translating and rotating inspection points to account for part movement. Vision tool algorithms include those able to match patterns, measure light intensity, inspect blobs (groups of like-intensity pixels) or find edge locations. Analysis tools judge, measure, or perform mathematical functions on the vision tool results.
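The blob tool mentioned above can be sketched in a few lines of plain Python. This is only an illustration of the underlying idea (threshold the image, then group connected like-intensity pixels), not the algorithm any particular sensor ships; the toy image and threshold are made up for the example.

```python
from collections import deque

def find_blobs(image, threshold):
    """Group like-intensity pixels (here: pixels at or above a
    brightness threshold) into 4-connected blobs; return blob sizes."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected blob, counting its pixels.
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(size)
    return blobs

# Toy 8-bit grayscale image: two bright features on a dark background.
img = [
    [10, 10, 200, 200,  10],
    [10, 10, 200,  10,  10],
    [10, 10,  10,  10, 220],
    [10, 10,  10, 220, 220],
]
print(sorted(find_blobs(img, threshold=128)))  # → [3, 3]
```

An analysis tool would then judge these results, for example passing the part only if exactly two blobs of at least three pixels each are found.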

Many modern machine vision sensors and cameras have simple GUIs (graphical user interfaces), so the end-user needs no professional programming expertise. Specialty high-performance computer-based vision systems still exist, however, and often require sophisticated computer programming.

Results from a vision inspection can be as simple as a discrete go/no-go output, and vision sensors and cameras are often equipped with multiple discrete outputs. Most vision sensors can send these output signals over industrial Ethernet protocols like EtherNet/IP and Modbus/TCP. In addition, many vision sensors can send words and other data over TCP/IP via 10/100 Ethernet, or serially over RS-232.
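Sending a pass/fail result over plain TCP/IP is simple enough to sketch in Python. The example below is hypothetical (the `PASS`/`FAIL` message format, host and port are inventions for illustration, not any vendor’s protocol); a stand-in listener thread plays the role of the PLC or host computer that would normally receive the result.

```python
import socket
import threading

def send_result(host, port, passed):
    """Send a simple pass/fail inspection result as an ASCII string
    over TCP/IP, as a sensor might push results to a PLC or PC."""
    msg = b"PASS\r\n" if passed else b"FAIL\r\n"
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall(msg)

# Demo: a local listener stands in for the receiving PLC/host.
received = []
server = socket.socket()
server.bind(("127.0.0.1", 0))          # OS picks a free port
server.listen(1)
port = server.getsockname()[1]

def listener():
    conn, _ = server.accept()
    with conn:
        received.append(conn.recv(16))

t = threading.Thread(target=listener)
t.start()
send_result("127.0.0.1", port, passed=True)
t.join()
server.close()
print(received[0])  # → b'PASS\r\n'
```

Real deployments would more often use one of the industrial protocols named above (EtherNet/IP, Modbus/TCP), which add addressing and data-typing conventions on top of this raw transport.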

Lens Selection

Choosing a camera lens is another part of specifying a vision sensor, but a straightforward one. The lens is an optical device, made up of multiple glass elements, that collects and focuses reflected light onto the camera’s imager. Lenses are differentiated by their type, format and focal length. A common type of lens used in machine vision is the C-mount; another (less common) is the CS-mount. The type of lens required is determined by the camera used. The lens format describes the maximum size of imager chip for which the lens is suitable; this variable is also determined by the specifics of the camera in question. The lens focal length relates the size of the area to be inspected, or field of view (FOV), to the distance between the camera and the part to be inspected, or working distance. For a given lens, the FOV size is a linear function of working distance. The simplest way to determine the focal length lens needed for a specific FOV size (or a specific working distance) is to use a lens chart. Charts are specific to the type of lens and format of the imager chip, such as 1/3 or 1/5 inch.
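The linear relationship between FOV and working distance follows from similar triangles: FOV / working distance ≈ sensor width / focal length. A lens chart encodes this; the thin-lens approximation below shows the same calculation in Python. The 4.8 mm horizontal width used in the example is the nominal active width of a 1/3-inch format imager, and the working distance and FOV are made-up values.

```python
def focal_length_mm(sensor_mm, working_distance_mm, fov_mm):
    """Approximate required focal length from the similar-triangles
    relation  FOV / working_distance ~= sensor_size / focal_length.
    A thin-lens approximation; real lens charts account for more
    (lens distortion, flange distance, etc.)."""
    return sensor_mm * working_distance_mm / fov_mm

# Example: 1/3-inch imager (~4.8 mm horizontal active width),
# part 300 mm away, desired 90 mm wide field of view.
f = focal_length_mm(4.8, 300, 90)
print(round(f, 1))  # → 16.0
```

Because the relation is linear, doubling the working distance to 600 mm while keeping the same 90 mm FOV calls for roughly twice the focal length, about 32 mm; in practice you would round to the nearest stock lens.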

This article was written by Brent Evanger, Senior Applications Engineer for Vision Sensors, Banner Engineering, Minneapolis, MN. For more information, contact Banner at 888.373.6767.

Motion Control Technology Magazine

This article first appeared in the August 2008 issue of Motion Control Technology Magazine.
