
Moving Machine Vision to 64-Bits

To achieve maximum processing efficiency, however, the vision processor must buffer data in its local memory (online) so that it does not have to wait for data to load. An external data storage device, such as a disk drive, is too slow to keep up with frame rate requirements, especially because such storage requires data to move twice: once to the drive, and again to the vision system. In addition, disk drives carry an overhead penalty because they use a file structure for data access rather than the first-in, first-out (FIFO) access that vision systems require. Finally, given the image size increases now occurring and the latency of image storage and retrieval, the drive would need to offer terabytes of storage to provide adequate buffering. Drive systems of that size would be cost prohibitive.
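A quick back-of-envelope calculation shows why a disk drive cannot keep up. The camera resolution and frame rate below are hypothetical, chosen only to illustrate the arithmetic; substitute the values of the camera actually in use.

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // Hypothetical camera: 4096 x 4096 pixels, 8 bits per pixel, 60 frames/s.
    // These numbers are illustrative only, not taken from the article.
    const std::uint64_t width         = 4096;
    const std::uint64_t height        = 4096;
    const std::uint64_t bytesPerPixel = 1;
    const std::uint64_t framesPerSec  = 60;

    const std::uint64_t bytesPerFrame = width * height * bytesPerPixel;  // ~16 Mbytes
    const std::uint64_t bytesPerSec   = bytesPerFrame * framesPerSec;    // ~1 Gbyte/s

    std::cout << "Frame size:     " << bytesPerFrame / (1024.0 * 1024.0) << " Mbytes\n";
    std::cout << "Sustained rate: " << bytesPerSec / (1024.0 * 1024.0 * 1024.0) << " Gbytes/s\n";
    return 0;
}
```

Even this modest example produces a sustained stream of roughly 1 Gbyte per second, well beyond the sequential throughput of a single disk drive of the period, and the file-system and double-transfer overheads described above only widen the gap.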

Online data buffering does not suffer these drawbacks. The system does not need to move information twice, and it is easily configured to store and retrieve data using FIFO access with no overhead penalty. Memory cost is also not a major issue. The performance of online storage is typically fast enough that buffering requirements reduce to two images at most (one incoming and one in processing), and with DRAM pricing now down to $10 to $12 per Gigabyte, online storage is quite affordable.
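The two-image case reduces to classic double buffering: the frame grabber fills one buffer while the processor works on the other, and the roles swap at each frame boundary. The sketch below is a minimal, single-threaded illustration of that structure; the class name and interface are hypothetical, and a real system would add DMA setup and synchronization between acquisition and processing.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal double-buffering sketch: the frame grabber writes into one buffer
// while the other holds the frame currently being processed, and the roles
// swap once a new frame has been fully written.
class DoubleBuffer {
public:
    explicit DoubleBuffer(std::size_t frameBytes)
        : a_(frameBytes), b_(frameBytes), incomingIsA_(true) {}

    // Buffer the frame grabber fills next.
    std::uint8_t* incoming() { return (incomingIsA_ ? a_ : b_).data(); }

    // Buffer holding the frame available for processing.
    const std::uint8_t* processing() const { return (incomingIsA_ ? b_ : a_).data(); }

    // Call when the incoming frame is complete.
    void swap() { incomingIsA_ = !incomingIsA_; }

private:
    std::vector<std::uint8_t> a_, b_;
    bool incomingIsA_;
};

int main() {
    DoubleBuffer frames(4096u * 4096u);  // two 16 Mbyte slots, both in online memory
    // Acquisition loop (sketch): fill frames.incoming(), call frames.swap(),
    // then process frames.processing() while the next frame streams in.
    frames.swap();
    return 0;
}
```

Because each buffer is sized for one full frame and lives entirely in online memory, a frame is written once and read once, with no intermediate trip through a file system.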

Solving Buffer Issues

The challenge that vision systems face with these large image requirements is not so much performance as storage space. Many current systems need buffer sizes as large as 3.5 Gbytes. This is perilously close to the 4 Gbyte memory addressing limit of 32-bit processing, leaving little room for other system storage needs, much less for expansion in image size.

There are workarounds available, such as paging and virtual addressing, that extend the memory size a 32-bit system can handle. Such schemes use a two-step addressing system that calls first for selection of a “page” or block of memory to work within, followed by normal memory access within that block. Operating system support for such address extensions is available in products such as Windows Server 2003 Datacenter Edition. The problem with such memory extensions, however, is that they increase software complexity and add overhead to manage the page addressing on every data access, especially when an access must cross a page boundary. That additional overhead works to limit vision system throughput.
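The sketch below illustrates the two-step access pattern in simplified form. The WindowedBuffer class and its mapWindow() call are illustrative stand-ins for whatever page-selection mechanism a given platform provides, not a real operating system API; the point is the extra bookkeeping and remapping cost paid whenever an access lands outside the currently selected window.

```cpp
#include <cstdint>
#include <vector>

// Simplified illustration of two-step, windowed access to a logical buffer
// larger than a 32-bit address space. mapWindow() stands in for the
// platform's page-selection mechanism; it is not a real OS API.
class WindowedBuffer {
public:
    explicit WindowedBuffer(std::uint64_t totalBytes)
        : window_(kWindowBytes), mappedBase_(~0ull), totalBytes_(totalBytes) {}

    std::uint8_t readByte(std::uint64_t offset) {
        const std::uint64_t base = offset - (offset % kWindowBytes);
        if (base != mappedBase_) {
            mapWindow(base);      // step 1: select the page/window (extra overhead
            mappedBase_ = base;   // on every access outside the current window)
        }
        // step 2: ordinary access within the mapped window
        return window_[static_cast<std::size_t>(offset - base)];
    }

private:
    static const std::size_t kWindowBytes = 64 * 1024 * 1024;  // 64 Mbyte window

    void mapWindow(std::uint64_t /*base*/) {
        // A real implementation would reprogram the window mapping here.
    }

    std::vector<std::uint8_t> window_;
    std::uint64_t mappedBase_;
    std::uint64_t totalBytes_;
};

int main() {
    WindowedBuffer buf(8ull * 1024 * 1024 * 1024);              // 8 Gbyte logical buffer
    volatile std::uint8_t v = buf.readByte(6ull * 1024 * 1024 * 1024);  // forces a remap
    (void)v;
    return 0;
}
```

A single contiguous copy that straddles a window boundary has to be split into two separately mapped pieces, which is exactly the kind of overhead that erodes throughput.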

The other solution to the memory size limit is to move the system design to the 64-bit addressing level. With 64 bits the directly addressable space increases from 4 Gbytes to 16 exabytes, more than 16 billion Gbytes. This is an essentially infinite memory space for systems to work within, limited in practice only by the cost of populating that space with physical memory.
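In practical terms, a 64-bit build lets a buffer larger than 4 Gbytes be treated as a single flat allocation addressed with ordinary 64-bit offsets. The short program below uses an arbitrarily chosen 6 Gbyte buffer to sketch the difference; the size is hypothetical, and the code assumes the machine has enough physical memory to back it.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical 6 Gbyte image buffer, larger than any 32-bit address space.
    const std::uint64_t bufferBytes = 6ull * 1024 * 1024 * 1024;

    if (sizeof(void*) < 8) {
        std::printf("32-bit build: %llu bytes cannot even be addressed directly.\n",
                    static_cast<unsigned long long>(bufferBytes));
        return 1;
    }

    // On a 64-bit build this is one flat allocation; no paging or window
    // selection is needed to reach any byte of it.
    std::vector<std::uint8_t> buffer(static_cast<std::size_t>(bufferBytes));
    buffer[static_cast<std::size_t>(bufferBytes - 1)] = 0xFF;  // touch the last byte, ~6 Gbytes in
    std::printf("Allocated and touched %llu bytes.\n",
                static_cast<unsigned long long>(bufferBytes));
    return 0;
}
```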

Moving to 64-bits

Moving a system to 64-bit operation, however, affects the entire system design. First, the hardware must support 64-bit operation. Most high-performance processors can handle 64 bits, but peripheral devices must support it as well. The vision system camera, for instance, will need to support 64-bit addresses, although it does not have to use 64-bit data. Similarly, the frame grabber that buffers image data must allow 64-bit addressing. Legacy systems may thus need hardware upgrades in order to move to 64-bit operation.

Along with the hardware, the system software must work with 64-bit addresses and data words. The software involved includes the operating system (OS), hardware drivers, image processing libraries, and user application code. The OS part is easy; Windows Vista is available in both 32-bit and 64-bit (Vista64) versions. The challenge lies in the other software elements.

For new designs the challenge is less significant as long as all the building blocks can be chosen to support 64-bit operation. If 32-bit legacy software is to be used, however, it will require some rework. Code that is written in a high-level language such as C can be ported to 64-bit operation by recompiling, a fairly painless transition. Drivers and libraries typically fall into this category, as does most applications code. Some legacy application software, however, is written in assembly language to maximize performance. Such hardware-specific custom code is the most difficult and expensive to migrate.
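Even for code that only needs a recompile, a few well-known 32-bit habits have to be cleaned up first. The fragment below shows the most common one, storing a pointer or a large byte offset in a 32-bit integer; the function and variable names are hypothetical and chosen only for illustration.

```cpp
#include <cstdint>

// Typical pitfall in legacy 32-bit C/C++ that a recompile alone will not fix:
// pointers or large byte offsets stored in 32-bit integers get silently
// truncated or overflow on a 64-bit target.
void addressFrame(std::uint8_t* frame, std::uint32_t frameIndex) {
    // BAD: a pointer no longer fits in 32 bits on a 64-bit build.
    //   std::uint32_t addr = (std::uint32_t)frame;

    // BAD: offsets past 4 Gbytes overflow 32-bit arithmetic.
    //   std::uint32_t byteOffset = frameIndex * 4096u * 4096u;

    // GOOD: pointer-sized and 64-bit types survive the move unchanged.
    std::uintptr_t addr       = reinterpret_cast<std::uintptr_t>(frame);
    std::uint64_t  byteOffset = static_cast<std::uint64_t>(frameIndex) * 4096ull * 4096ull;

    (void)addr;
    (void)byteOffset;
}

int main() {
    std::uint8_t pixel = 0;
    addressFrame(&pixel, 300);  // a frame index whose byte offset exceeds 4 Gbytes
    return 0;
}
```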

Not every application will need the full power of 64-bit systems, and many that will eventually migrate do not need to do so now. The freedom to choose gives developers an opportunity to avoid the higher costs of 64-bit systems when they are not needed.

Conclusion

For most vision applications, increasing image size and throughput demands will ultimately push system memory requirements past the 4 Gbyte limit of 32-bit operation. When that happens, migration to 64-bit operation provides the simplest approach to handling large data sets as well as offering extensive growth room for system enhancement.

This article was written by Yvon Bouchard, Director Systems Architecture, DALSA (Billerica, MA). For more information, visit http://info.hotims.com/22930-200.