Moving Machine Vision to 64-Bits
- Sunday, 01 November 2009
A nearly geometric growth in the data requirements of many machine vision applications is pushing 32-bit processing to its limits. The challenge is not processing power, however, but addressing memory buffers as systems fill them with ever-increasing volumes of data. Moving vision systems to 64-bit operation can solve the data challenge, but it will require fully updated hardware and software support.
Not long ago the electronics industry asked, “Who would ever need more than 4 Gbytes of memory in a PC?” Yet that ceiling, set by the 32-bit limit that many operating systems place on the address space they will handle, is rapidly becoming an impediment to machine vision applications. Several factors are pushing machine vision data requirements against that ceiling, including image size, throughput demands, and the increasing use of color.
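A quick back-of-envelope calculation shows how fast high-resolution images consume a 32-bit address space. The camera dimensions below are illustrative only, not taken from any particular system:

```python
# The most a 32-bit pointer can address: 2^32 bytes = 4 Gbytes.
ADDRESS_SPACE_32BIT = 2 ** 32

# Hypothetical 4-megapixel monochrome camera, 8 bits per pixel.
width, height, bytes_per_pixel = 2048, 2048, 1
image_bytes = width * height * bytes_per_pixel   # 4 Mbytes per frame

# How many such frames fit in the entire 32-bit space?
# (In practice the OS reserves a large share for itself.)
max_frames = ADDRESS_SPACE_32BIT // image_bytes
print(max_frames)   # 1024 frames, at best
```

An inspection that needs thousands of views of a single PCB, as described above, overruns this budget even before the operating system takes its share.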
The image sizes needed for machine vision applications — in terms of memory requirement — are increasing for several reasons. One is simply larger objects to be inspected. Another is the need for multiple cameras and images. An inspection system that must examine a populated printed circuit board (PCB), for instance, may require thousands of images taken from different angles and different positions in order to inspect different aspects of the board, such as chip lead positions, printing and other markings, solder joint quality, and the like.
Size increases also stem from increasing demands for higher image resolution. Inspection of a flat-panel display, for instance, must be able to resolve objects that continue to shrink as panels evolve toward high definition. This requires more camera pixels per image inch, which compounds the image size growth that stems from increasing display panel sizes. Compounding is also at work in inspection systems that must make three-dimensional (3D) measurements, such as measuring the solder paste applied to a PCB. The thickness of paste on a PCB depends on the type of component to be mounted, so it varies across the board. To make accurate depth measurements, the image must have a resolution 10 to 100 times greater than the measurement accuracy required.

Along with increasing image size, the machine vision industry must address continual demands for faster inspection throughput. Because inspection throughput directly affects manufacturing productivity, time really is money. Thus, continuous inspection systems using line cameras must not only provide more pixels per line and more lines per inch, they must scan more inches per second, filling memory very quickly. Area cameras likewise need to capture larger images more quickly and rapidly move them into storage for processing.
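The bandwidth pressure from line-scan systems follows from simple arithmetic. The camera parameters below are hypothetical, chosen only to show how quickly the numbers grow:

```python
# Hypothetical line-scan camera -- not the specs of any specific product.
pixels_per_line = 8192      # pixels across the scanned web
line_rate_hz = 50_000       # lines captured per second
bytes_per_pixel = 1         # 8-bit monochrome

bytes_per_second = pixels_per_line * line_rate_hz * bytes_per_pixel
print(bytes_per_second / 1e6, "Mbytes/sec")   # 409.6 Mbytes/sec

# At that rate, a full 4-Gbyte (32-bit) address space fills in seconds.
seconds_to_fill = (2 ** 32) / bytes_per_second
print(round(seconds_to_fill, 1), "seconds")   # about 10.5 seconds
```

Doubling the resolution or the scan speed halves that time again, which is why buffer capacity, not raw compute, becomes the first wall such systems hit.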
Increasing industry interest in color images is further compounding the growth in data storage requirements. Color images typically add three components (red, green, and blue) to the single intensity component of monochrome images, resulting in a data requirement as much as four times that of comparable monochrome images. The vision system could derive the intensity information from just the three color components, but the computation required typically creates an unacceptable load on a system’s processing capacity.
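Both the fourfold storage growth and the cost of deriving intensity on the fly can be seen in a short sketch. The luma weights used here are the common ITU-R BT.601 coefficients, shown as a representative formula rather than the method of any particular vision system:

```python
# Per-pixel storage: monochrome keeps intensity only; a color system that
# also stores a precomputed intensity plane keeps four bytes per pixel.
mono_bytes = 1
color_bytes = 4   # R + G + B + intensity
print(color_bytes / mono_bytes)   # 4.0 -- the "four times" growth

# The alternative is to recompute intensity from R, G, B for every pixel,
# trading memory for three multiplies and two adds per pixel.
def intensity(r, g, b):
    """Approximate luma from RGB (BT.601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

white = intensity(255, 255, 255)   # close to 255 for a white pixel
```

Spread over millions of pixels per frame at production line rates, that per-pixel arithmetic is the processing load the article describes as unacceptable.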
Memory is the Barrier
The net effect of all these compounding factors is an exponential growth in data requirements for images that is pushing many systems beyond the addressable space of 32-bit operating systems. Performance is not as much of an issue. Today’s PCs use state-of-the-art processors that have as many as four processor cores on chip and are capable of handling data at rates of 600 to 700 Mbytes/sec. The advent of PCI Express gives system backplanes the capacity to transfer data at 5 Gbytes/sec. These speeds are typically high enough to handle images as fast as they are acquired.
The machine vision process, however, works with pixels in blocks rather than one at a time. Thus, vision system inspection rates are based on average rather than continuous processing speeds. The system acquires an object’s image, begins processing, and finishes while the next object moves into the inspection area. In a wafer inspection system, for example, the camera takes an image of the wafer under test, sends that image to the vision system, and loads a new wafer as the vision system continues processing. Ideally, the vision system will complete its processing in the time it takes the wafer handling system to move the new wafer into place so that the handling system does not have to pause.
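The averaging effect of this overlap can be modeled in a few lines. The stage times below are made up for illustration; the point is only that the slower of the two overlapped stages sets the cycle time:

```python
# Simplified model of a pipelined inspection cell: the handler loads
# wafer N+1 while the vision system processes the image of wafer N.
handling_time_s = 2.0     # hypothetical time to position the next wafer
processing_time_s = 1.6   # hypothetical vision processing time per image

# If processing fits inside the handling window, the handler never
# pauses, and the cell's cycle time is set by handling alone.
cycle_time_s = max(handling_time_s, processing_time_s)
print(cycle_time_s)          # 2.0 -- handling-limited
print(3600 / cycle_time_s)   # 1800.0 wafers per hour
```

If processing time exceeds handling time, the relationship inverts: the handler waits, throughput drops, and the averaged processing speed, not the peak speed, governs the line.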