A nearly geometric growth in the data requirements for many machine vision applications is pushing 32-bit processing to its limits. The challenge is not processing power, however, but addressing memory buffers as systems fill them with ever-increasing volumes of data. Moving vision systems to 64-bit operation can solve the data challenge, but it will require fully updated hardware and software support.
The image sizes needed for machine vision applications — in terms of memory requirement — are increasing for several reasons. One is simply larger objects to be inspected. Another is the need for multiple cameras and images. An inspection system that must examine a populated printed circuit board (PCB), for instance, may require thousands of images taken from multiple angles and positions in order to inspect different aspects of the board, such as chip lead positions, printing and other markings, solder joint quality, and the like.
Size increases also stem from increasing demands for higher image resolution. Inspection of a flat-panel display, for instance, must resolve features that continue to shrink as panels evolve toward high definition. This requires more camera pixels per inch of image, which compounds the image size growth that stems from increasing display panel sizes. Compounding is also at work in inspection systems that must make three-dimensional (3D) measurements, as in the case of solder paste applied to a PCB. The thickness of paste on a PCB depends on the type of component to be mounted, so it varies across the board. To make accurate depth measurements, the image must have a resolution 10 to 100 times finer than the measurement accuracy required.

Along with increasing image size, the machine vision industry must address continual demands for faster inspection throughput. Because inspection throughput directly affects manufacturing productivity, time really is money. Thus, continuous inspection systems using line-scan cameras must not only provide more pixels per line and more lines per inch, they must scan more inches per second, filling memory very quickly. Area cameras likewise need to capture larger images more quickly and rapidly move them into storage for processing.
Increasing industry interest in color images is further compounding the growth in data storage requirements. Color images typically add three components — red, green, and blue color saturation — to the intensity information of monochrome images, resulting in a data requirement as much as four times that of comparable monochrome images. The vision system could derive the intensity information from just the three color components, but the computation required typically creates an unacceptable load on a system’s processing capacity.
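To put rough numbers on these compounding factors, the short calculation below estimates how quickly a line-scan stream fills memory in monochrome and in color. The camera figures are illustrative assumptions, not specifications of any particular system.

```cpp
// Back-of-the-envelope data-rate arithmetic for a hypothetical line-scan camera,
// in monochrome and in color (R, G, B plus intensity). All figures are assumptions.
#include <cstdio>

int main() {
    const double pixels_per_line  = 8192;     // assumed sensor width
    const double lines_per_second = 70000;    // assumed line rate
    const double buffer_gbytes    = 3.5;      // assumed image buffer size

    for (double components : {1.0, 4.0}) {    // monochrome vs. R, G, B + intensity
        double bytes_per_second = pixels_per_line * lines_per_second * components;
        std::printf("%s: %.0f Mbytes/sec, fills %.1f Gbytes in %.1f sec\n",
                    components == 1.0 ? "Monochrome" : "Color",
                    bytes_per_second / 1.0e6,
                    buffer_gbytes,
                    buffer_gbytes * 1.0e9 / bytes_per_second);
    }
    return 0;
}
```

With these assumed figures the monochrome stream alone runs near 600 Mbytes/sec and fills a 3.5 Gbyte buffer in a few seconds; the color stream does so roughly four times faster.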
Memory is the Barrier
The net effect of all these compounding factors is an exponential growth in data requirements for images that is pushing many systems beyond the addressable space of 32-bit operating systems. Performance is not as much of an issue. Today’s PCs use state-of-the-art processors that have as many as four cores on chip and are capable of handling data at rates of 600 to 700 Mbytes/sec. The advent of PCI Express gives system backplanes the capacity to transfer data at 5 Gbytes/sec. These speeds are typically high enough to handle images as fast as they are acquired.
The machine vision process, however, works with pixels in blocks rather than one at a time. Thus, vision system inspection rates are based on average rather than continuous processing speeds. The system acquires an object’s image, begins processing, and finishes while the next object moves into the inspection area. In a wafer inspection system, for example, the camera takes an image of the wafer under test, sends that image to the vision system, and loads a new wafer as the vision system continues processing. Ideally, the vision system will complete its processing in the time it takes the wafer handling system to move the new wafer into place so that the handling system does not have to pause.
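That overlap between acquisition and processing amounts to a double-buffered pipeline: the camera fills one buffer while the processor works on the other. The sketch below is a minimal illustration of the pattern; acquire_image() and process_image() are hypothetical stand-ins for the camera readout and the inspection algorithm, not calls from any particular vision library.

```cpp
// Minimal double-buffered acquire/process pipeline (illustrative only).
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

using Image = std::vector<unsigned char>;

// Stand-in for the camera readout (hypothetical image size).
void acquire_image(Image& img) { img.assign(1024 * 1024, 0); }

// Stand-in for the inspection algorithm.
void process_image(const Image& img) {
    unsigned long sum = 0;
    for (unsigned char p : img) sum += p;
    (void)sum;
}

int main() {
    const int frames = 100;            // assumed number of objects to inspect
    Image buffers[2];                  // one buffer filling, one in processing
    bool filled[2] = {false, false};   // true while a buffer holds unprocessed data
    std::mutex m;
    std::condition_variable cv;

    // Processing thread: consumes buffers in the same ping-pong order they are filled.
    std::thread processor([&] {
        for (int i = 0; i < frames; ++i) {
            int idx = i & 1;
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return filled[idx]; });
            lock.unlock();
            process_image(buffers[idx]);   // overlaps with acquisition of the next frame
            lock.lock();
            filled[idx] = false;           // buffer is free for reuse
            lock.unlock();
            cv.notify_all();
        }
    });

    // Acquisition loop: fills whichever buffer is free while the other is processed.
    for (int i = 0; i < frames; ++i) {
        int idx = i & 1;
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return !filled[idx]; });   // wait until this buffer is free
        lock.unlock();
        acquire_image(buffers[idx]);
        lock.lock();
        filled[idx] = true;                            // hand the full buffer off
        lock.unlock();
        cv.notify_all();
    }
    processor.join();
    return 0;
}
```

Ideally, as in the wafer example, process_image() finishes before the next acquire_image() completes, so the handling system never has to pause.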
To achieve maximum processing efficiency, however, the vision processor must buffer data in its local memory (online) so that it does not have to wait for data to load. An external data storage device, such as a disk drive, is too slow to keep up with frame rate requirements, especially because such storage requires data to move twice — once to the drive, then later to the vision system. In addition, disk drives carry an overhead penalty because they use a file structure for data access rather than the first-in, first-out (FIFO) access that vision systems require. Finally, given the image size increases now occurring and the latency of image storage and retrieval, the drive would need to offer terabytes of storage in order to provide adequate buffering. Drive systems of that size would be cost prohibitive.
Online data buffering does not suffer these drawbacks. The system does not need to move information twice, and it is easily configured to store and retrieve data using FIFO access with no overhead penalty. Memory cost is also not a major issue. The performance of online storage is typically fast enough that buffering requirements reduce to two images at most (one incoming and one in processing), and with DRAM pricing now down to $10 to $12 per Gbyte, online storage is quite affordable.
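As a rough check of that affordability claim, doubling an assumed per-image size and applying the DRAM price range gives a dollar figure:

```cpp
// Back-of-the-envelope cost of online (DRAM) buffering, using the $10-$12/Gbyte
// figure from the text. The per-image size is an illustrative assumption.
#include <cstdio>

int main() {
    const double image_gbytes = 1.75;               // assumed size of one image
    const double buffers = 2;                       // one incoming, one in processing
    const double price_low = 10, price_high = 12;   // dollars per Gbyte of DRAM

    double gbytes = image_gbytes * buffers;
    std::printf("Buffer memory: %.1f Gbytes, roughly $%.0f to $%.0f of DRAM\n",
                gbytes, gbytes * price_low, gbytes * price_high);
    return 0;
}
```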
Solving Buffer Issues
The challenge that vision systems face with these large image requirements is not so much performance as storage space. Many current systems need buffer sizes as large as 3.5 Gbytes. This is perilously close to the 4 Gbyte memory addressing limit of 32-bit processing, leaving little room for other system storage needs, much less expansion in image size.
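A quick address-space budget, using the figures above, shows how little headroom remains under 32-bit addressing:

```cpp
// Address-space budget under 32-bit addressing: a 3.5 Gbyte image buffer leaves
// little of the 4 Gbyte (2^32 byte) space for code, libraries, and other data.
#include <cstdint>
#include <cstdio>

int main() {
    const std::uint64_t address_space = 1ULL << 32;             // 4 Gbytes
    const std::uint64_t image_buffer  = 3500ULL * 1000 * 1000;  // 3.5 Gbytes

    std::printf("32-bit address space:        %llu Mbytes\n",
                static_cast<unsigned long long>(address_space / 1000000));
    std::printf("Left after the image buffer: %llu Mbytes\n",
                static_cast<unsigned long long>((address_space - image_buffer) / 1000000));
    return 0;
}
```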
There are workarounds available — such as paging and virtual addressing — that extend the memory size a 32-bit system can handle. Such schemes use a two-step addressing system that calls first for selection of a “page” or block of memory to work within, followed by normal memory access within that block. Operating systems such as Windows Server 2003, particularly the Datacenter Edition, provide support for such address extensions. The problem with these memory extensions, however, is that they add software complexity and overhead to manage the page addressing when accessing data, especially when an access must cross page boundaries. The additional overhead works to limit vision system throughput.
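To see where that overhead comes from, the sketch below simulates the two-step access pattern in miniature: a small pool of pages stands in for extended memory, and any access that crosses a page boundary forces a remap of the working window. This models the access pattern only; it is not a particular operating system's extension API.

```cpp
// Illustration of two-step (paged) addressing: page selection first, then normal
// access within the page. Each boundary crossing pays a remap cost.
#include <cstdint>
#include <cstdio>
#include <vector>

class PagedBuffer {
public:
    PagedBuffer(std::size_t pages, std::size_t page_bytes)
        : page_bytes_(page_bytes),
          pool_(pages, std::vector<std::uint8_t>(page_bytes)) {}

    // Step 1: select the page; step 2: access within it.
    std::uint8_t& at(std::uint64_t logical_offset) {
        std::size_t page = static_cast<std::size_t>(logical_offset / page_bytes_);
        if (page != mapped_page_) {   // crossing a page boundary
            mapped_page_ = page;      // "remap" the working window
            ++remaps_;                // each remap is pure overhead
        }
        return pool_[page][logical_offset % page_bytes_];
    }

    std::uint64_t remaps() const { return remaps_; }

private:
    std::size_t page_bytes_;
    std::vector<std::vector<std::uint8_t>> pool_;
    std::size_t mapped_page_ = SIZE_MAX;
    std::uint64_t remaps_ = 0;
};

int main() {
    PagedBuffer buf(8, 64 * 1024);   // 8 pages of 64 Kbytes (assumed sizes)

    // Sequential access remaps only at page boundaries; access patterns that
    // straddle pages (e.g., neighborhood operations) force far more remaps.
    for (std::uint64_t i = 0; i < 8 * 64 * 1024; ++i) buf.at(i) = 0xFF;
    std::printf("Sequential pass: %llu remaps\n",
                static_cast<unsigned long long>(buf.remaps()));
    return 0;
}
```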
The other solution to the memory size limit is to move the system design to 64-bit addressing. With 64 bits, the directly addressable space increases from 4 Gbytes to more than 16 billion Gbytes (16 exabytes). This is an essentially infinite memory space for systems to work within, limited in practice only by the cost of populating that space with physical memory.
Moving to 64 Bits
Moving a system to 64-bit operation, however, affects the entire system design. First, the hardware must support 64-bit operation. Most high-performance processors can handle 64 bits, but peripheral devices must do so as well. The vision system camera, for instance, will need to support 64-bit addresses, although it does not have to use 64-bit data. Similarly, the frame grabber that buffers image data must allow 64-bit addressing. Legacy systems may thus need hardware upgrades in order to move to 64-bit operation.
Beyond the hardware, the system software must work with 64-bit addresses and data words. The software involved includes the operating system (OS), hardware drivers, image processing libraries, and user application code. The OS part is easy; Windows Vista is available in both 32-bit and 64-bit (Vista64) versions. The challenge lies in the other software elements.
For new designs the challenge is less significant as long as all the building blocks can be chosen to support 64-bit operation. If 32-bit legacy software is to be used, however, it will require some rework. Code that is written in a high-level language such as C can be ported to 64-bit operation by recompiling, a fairly painless transition. Drivers and libraries typically fall into this category, as does most applications code. Some legacy application software, however, is written in assembly language to maximize performance. Such hardware-specific custom code is the most difficult and expensive to migrate.
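As a simple illustration of the kind of issue such a recompile surfaces, the hypothetical fragment below sizes a large image buffer. A 32-bit-era habit of using unsigned int silently truncates once buffers grow past 4 Gbytes, while pointer-sized types work correctly on a 64-bit build:

```cpp
// A common 32-to-64-bit porting fix: buffer sizes and offsets should use
// pointer-sized types (size_t / ptrdiff_t) instead of 32-bit integers.
// The buffer dimensions are illustrative assumptions.
#include <cstddef>
#include <cstdio>

int main() {
    const std::size_t width  = 65536;   // assumed pixels per row
    const std::size_t height = 80000;   // assumed rows: > 4 Gbytes of pixels

    // 32-bit-era habit: unsigned int cannot hold sizes beyond 4 Gbytes.
    unsigned int bad_size = static_cast<unsigned int>(width * height);   // truncates

    // 64-bit-clean: size_t is pointer-sized, so on a 64-bit build the full
    // buffer can be sized and indexed directly.
    std::size_t good_size = width * height;

    std::printf("truncated size: %u bytes\n", bad_size);
    std::printf("correct size:   %zu bytes\n", good_size);

    // std::vector<unsigned char> image(good_size);  // allocate only on a machine
    //                                               // with enough physical memory
    return 0;
}
```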
Not every application will ever need the full power of 64-bit systems, and many that will eventually migrate do not need to do so now. The freedom to choose gives developers an opportunity to avoid the higher costs of 64-bit systems when they are not needed.
Conclusion
For most vision applications, increasing image size and throughput demands will ultimately push system memory requirements past the 4 Gbyte limit of 32-bit operation. When that happens, migration to 64-bit operation provides the simplest approach to handling large data sets as well as offering extensive growth room for system enhancement.
This article was written by Yvon Bouchard, Director Systems Architecture, DALSA (Billerica, MA). For more information, contact Mr. Bouchard at