It is hard to see how machine-vision camera manufacturers produce decent-quality products at reasonable prices. The multi-megapixel sensors at the heart of current machine-vision cameras are among the largest of VLSI (very-large-scale integration) semiconductor chips, and it is almost impossible to make them with the pixel-to-pixel uniformity required for high-precision imaging applications. Without some way of compensating for manufacturing variations across a given image sensor, many otherwise acceptable chips would have to be discarded. That would drastically reduce manufacturing yields and drive sensor-chip prices far beyond levels acceptable for many applications.
Many digital camera manufacturers use lookup tables (LUTs) to escape this quality/cost problem. An LUT is a matrix of values stored in non-volatile semiconductor memory that an on-board computer then uses to modify each image file before passing it to the output (see Figure 1).
Typically, camera manufacturers use this LUT system to rapidly compensate for variations in pixel sensitivity (defective pixel correction) and systematic intensity variations across the image due to optical vignetting (flat field correction). In its most basic implementation, the LUT carries one or two bytes of information representing a correction factor for each pixel. As the image sensor circuitry passes a digital value for each array pixel, the image processing computer looks up the correction factor for that pixel and applies it to the value before passing it on to the output circuitry. The LUT value for a given pixel would combine factors for both defective-pixel and flat-field correction. There are, of course, other ways to store these corrections, but looking up and applying a single factor from a stored table requires the least processing overhead.
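As a rough sketch of that lookup-and-apply step (not the camera's actual firmware), the following Python/NumPy fragment assumes an 8-bit monochrome sensor and illustrative gain values:

```python
import numpy as np

def apply_pixel_lut(raw: np.ndarray, gain_lut: np.ndarray) -> np.ndarray:
    """Apply one stored correction factor per pixel and clip back to 8 bits."""
    corrected = raw.astype(np.float32) * gain_lut
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Hypothetical 4x4 sensor: one pixel reads ~20% low, so its stored factor boosts it.
raw = np.full((4, 4), 100, dtype=np.uint8)
raw[2, 3] = 80                               # the weak pixel
gain_lut = np.ones((4, 4), dtype=np.float32)
gain_lut[2, 3] = 1.25                        # combined defective-pixel/flat-field factor
print(apply_pixel_lut(raw, gain_lut))        # the weak pixel is restored to 100
```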
While LUT systems are common in machine-vision cameras, most manufacturers do not give users access to them. When users do have access to the LUT system, they can improve the image-acquisition process by downloading modified LUTs that correct for variations caused by a particular application and/or hardware aging. For example, the manufacturer’s LUT compensates for the vignetting characteristics of the lens the manufacturer used to test the camera, which may or may not be similar to the lens the user employs in the application.
Similarly, non-linear sensitivity effects become important when the application lighting level differs from that of the manufacturer’s test, or even when environmental conditions, such as temperature, are different. In addition, there may be special application requirements that might lead the user to suppress or enhance pixel sensitivity in one part of the image over that of another. Of course, as the image-capture hardware ages, the ideal corrections might be expected to change. In such a situation, the ability to modify the LUT values would help the user maintain the system’s viability.
Imperx gives the end user LUT access through three camera system capabilities: auxiliary software to build LUTs, downloading of new LUTs to the camera, and LUT cascades. The auxiliary software runs on a host computer and provides a simple, intuitive interface through which the user can build new LUTs.
Downloading LUTs uses a Terminal Utility (TU) program, which may be part of the LUT editing package or a separate utility entirely. The TU provides a simple and intuitive interface whereby the user can move files between the camera’s non-volatile memory and the host computer’s file system.
LUT cascades make it possible for the user to load multiple LUTs into the camera, then program the image-processing computer to apply them in a given sequence.
FFC, Gamma, and Cascade
With LUTs, users can apply corrections to the signals coming from every pixel in the image. There are two distinct ways these corrections can work. In the first, and perhaps easiest to visualize, the LUT contains a separate correction value (essentially a gain factor) for each pixel. In the second, the LUT carries a separate correction for every possible signal level, which the computer applies to every pixel. Flat-field correction (FFC) is an example of the first type of correction, and gamma correction is an example of the second.
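The structural difference between the two types can be sketched as follows; the 8-bit depth and frame size are assumptions chosen only for illustration:

```python
import numpy as np

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # assumed 8-bit mono frame

# Type 1: one correction per pixel (e.g., flat-field gains), same shape as the sensor.
pixel_lut = np.ones((480, 640), dtype=np.float32)
out1 = np.clip(frame * pixel_lut, 0, 255).astype(np.uint8)

# Type 2: one correction per signal level (e.g., gamma), 256 entries applied to every pixel.
level_lut = np.arange(256, dtype=np.uint8)   # identity curve as a placeholder
out2 = level_lut[frame]                      # each pixel value indexes the table
```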
Vignetting is one phenomenon for which FFC compensates. Figure 2 shows an example of vignetting, which occurs when the image-forming lens cannot maintain image brightness all the way out to the edge of the field of view.
To compensate for vignetting, you must first measure it by aiming the camera at a uniformly lit scene. The optical standard is a 20% gray background; that is, a uniformly lit screen whose reflectance spectrum is (as nearly as possible) flat at 20% reflectance. Such a target works for both monochrome and color cameras.
Under these test conditions, the camera will provide a raw image similar to that in Figure 2. The signal level in each pixel will be proportional to the overall brightness at the target times the sensitivity of that pixel. From these levels in the raw image, an FFC utility in the auxiliary software package calculates a correction factor for every pixel such that applying those factors to the raw image produces a corrected image with the same level in every pixel. Subsequently, when the camera views a scene and applies the FFC LUT, it returns an image free of the artificial darkening at the edges.
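The article does not spell out the exact formula the FFC utility uses, but one plausible sketch, assuming 8-bit data and a synthetic vignetted flat-field capture, is:

```python
import numpy as np

def build_ffc_lut(flat_field: np.ndarray) -> np.ndarray:
    """Per-pixel gains that map a flat-field capture to a uniform level."""
    flat = flat_field.astype(np.float32)
    target = flat.max()                  # assumed target: raise the edges to the center level
    return target / np.maximum(flat, 1)  # avoid dividing by zero on dead pixels

# Hypothetical vignetted flat-field frame: bright center, darker corners.
yy, xx = np.mgrid[0:480, 0:640]
falloff = 1.0 - 0.4 * (((yy - 240) / 240) ** 2 + ((xx - 320) / 320) ** 2) / 2
flat_field = (200 * falloff).astype(np.uint8)

ffc_lut = build_ffc_lut(flat_field)
corrected = np.clip(flat_field * ffc_lut, 0, 255).astype(np.uint8)
print(corrected.min(), corrected.max())  # should now be (nearly) uniform
```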
Gamma correction is a nonlinear modification of the slope of the camera transfer function that results in the suppression or enhancement of certain image regions based on the amount of light reaching the camera from those regions. Gamma correction gets its name from the exponent, gamma, that sets the shape of the level-vs.-brightness curve; Figure 3 shows its effect. Gamma correction provides acceptable contrast in dark or dimly lit portions of the scene. Without it, details in shadowed areas may be lost entirely, undermining the machine-vision application.
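A per-level gamma table can be sketched as below; the 8-bit depth and the gamma value of 0.45 are assumptions for illustration (exponents below 1 lift the dark end of the curve):

```python
import numpy as np

def build_gamma_lut(gamma: float, depth: int = 256) -> np.ndarray:
    """256-entry table: output = full_scale * (input / full_scale) ** gamma."""
    levels = np.arange(depth, dtype=np.float32) / (depth - 1)
    return np.round((depth - 1) * levels ** gamma).astype(np.uint8)

gamma_lut = build_gamma_lut(0.45)
print(gamma_lut[[0, 16, 64, 128, 255]])   # dark input levels are boosted the most
# Applying it to a frame is a single indexing step: corrected = gamma_lut[frame]
```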
Of course, any function can be used to modify the signal level. Thresholding is a common application in machine vision: the function applied is a simple step function whose output is zero below a threshold input level and full intensity above it. One could even create a pseudo-color image in which different red, green, and blue levels are output based on the gray level at the input.
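Both the step function and a pseudo-color mapping are simply alternative per-level tables; the threshold value and color ramp below are arbitrary examples:

```python
import numpy as np

levels = np.arange(256, dtype=np.uint8)

# Thresholding: zero below the (assumed) threshold, full intensity above it.
threshold_lut = np.where(levels < 128, 0, 255).astype(np.uint8)

# Pseudo-color: map each input gray level to an (R, G, B) triple -- here a
# simple ramp from blue through green to red, chosen only for illustration.
pseudo_lut = np.zeros((256, 3), dtype=np.uint8)
pseudo_lut[:, 2] = np.clip(255 - 2 * levels.astype(int), 0, 255)    # blue fades out
pseudo_lut[:, 1] = 255 - np.abs(255 - 2 * levels.astype(int))       # green peaks mid-scale
pseudo_lut[:, 0] = np.clip(2 * levels.astype(int) - 255, 0, 255)    # red fades in

# Applying either one is a table index per pixel:
#   binary = threshold_lut[frame]          # monochrome output
#   rgb    = pseudo_lut[frame]             # H x W x 3 output
```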
The ability to apply multiple LUTs really shines when you want to apply both correction types. If, for example, you applied thresholding as part of a particle-sizing application, you would want to apply FFC first. Otherwise, you would likely find that the apparent particle size depended on where the particle happened to fall in the image frame.
Since FFC applies a separate correction factor to each pixel and thresholding applies the same level-dependent correction to every pixel, you couldn’t efficiently apply them both with the same LUT. You would first apply the FFC from one LUT and then the thresholding function from the second.
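A sketch of that two-stage cascade, using hypothetical tables and the same assumptions as the earlier fragments (the real camera applies its loaded LUTs in firmware):

```python
import numpy as np

def apply_cascade(frame: np.ndarray, ffc_lut: np.ndarray, level_lut: np.ndarray) -> np.ndarray:
    """Stage 1: per-pixel flat-field gains. Stage 2: per-level threshold table."""
    flattened = np.clip(frame.astype(np.float32) * ffc_lut, 0, 255).astype(np.uint8)
    return level_lut[flattened]

# Hypothetical particle near a dim corner of the frame: without FFC its signal
# would fall below the threshold and it would be sized incorrectly or missed.
frame = np.full((480, 640), 60, dtype=np.uint8)            # background level
frame[400:410, 600:610] = 110                              # particle in a vignetted corner
ffc_lut = np.ones((480, 640), dtype=np.float32)
ffc_lut[350:, 550:] = 1.3                                  # corner gain from the FFC step
threshold_lut = np.where(np.arange(256) < 128, 0, 255).astype(np.uint8)

result = apply_cascade(frame, ffc_lut, threshold_lut)
print(result[405, 605], result[100, 100])                  # particle kept, background rejected
```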
All of these corrections can, of course, be made while post-processing the images. The advantage of making them in the camera using LUTs is speed. Image-processing software algorithms are notoriously slow, while LUTs allow you to perform many of the same operations at hardware speeds. Experience has shown that systems performing these operations in post-processing software are often lucky to keep up with standard video rates, whereas systems doing the same operations in the camera with LUTs typically achieve speeds several times standard video rates.
This article was written by Petko Dinev, President of Imperx Inc., Boca Raton, FL. For more information, contact Dr. Dinev at