Machine vision systems have three basic components: a camera to acquire images; software to extract actionable information about the objects in the images; and a computer to run the image processing software. In the 1990s, the machine vision industry placed all three components into a single housing, calling the result a “smart camera,” to reduce the cost and size of the vision system and improve its usability for manufacturing process control and quality inspection.

Graphics processing units (GPUs) are specifically designed for computationally intensive tasks. Adding a GPU core to a die alongside the CPU greatly reduces the computational load on the CPU, increasing overall processing speed and decreasing latency. The addition of zero-copy transfers further enhances accelerated processing unit (APU) performance for industrial applications.
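
To make the offload concrete, the sketch below moves a simple per-pixel threshold onto the GPU with PyOpenCL. It is a minimal illustration under stated assumptions, not vendor code: it presumes an OpenCL-capable device and driver, and the kernel, image size, and threshold value are all placeholders.

```python
import numpy as np
import pyopencl as cl

# Build a context and queue on whatever OpenCL device is available
# (on an APU, this would be the on-die GPU core).
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

# Placeholder image: one 640x480 grayscale frame as float32.
img = np.random.rand(480, 640).astype(np.float32)
src = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=img)
dst = cl.Buffer(ctx, mf.WRITE_ONLY, img.nbytes)

# A trivially parallel per-pixel operation: binary threshold.
prg = cl.Program(ctx, """
__kernel void threshold(__global const float *src,
                        __global float *dst,
                        const float t) {
    int i = get_global_id(0);
    dst[i] = src[i] > t ? 1.0f : 0.0f;
}
""").build()

prg.threshold(queue, (img.size,), None, src, dst, np.float32(0.5))

out = np.empty_like(img)
cl.enqueue_copy(queue, out, dst)  # explicit copy back; zero-copy avoids this
```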

Unfortunately, smart cameras weren’t very smart. They had slow microprocessors, which limited the sensor sizes and options the camera could handle; they had limited memory; and because many buyers had no machine vision expertise, smart cameras could appear less than “user friendly.” As a result, smart cameras could not process high-resolution images fast enough for most industrial processes, run a standard PC operating system (OS) with full network and peripheral functionality, or offer a full set of image processing functions, limiting their ability to tackle complex machine vision applications.

Today, thanks to new low-power, high-speed, multicore processors, smart cameras can achieve processing speeds upwards of 90 gigaflops (Gflops), ten times the processing power of a Pentium M class single-core microprocessor. That is enough to support full image processing libraries, handle sensor resolutions of 5 megapixels (MP) or more, and run a standard PC OS with full peripheral and network functionality, simplifying system setup and remote support for all levels of machine vision expertise. This new class of “PC camera” truly places the full capabilities of an industrial PC inside a machine vision camera.

Slow Silicon

By 2006, microprocessor and compact flash memory technology had advanced to the point that smart cameras, such as Sony’s XCI-SX1 with its Geode processor performing at about 1000 megaflops (Mflops), could run a full Windows operating system and a full image processing library. Megahertz-speed microprocessors, however, meant the smart camera could still only process VGA-resolution images, even when using more efficient modern image processing algorithms, such as vector-based geometric pattern searches, in place of older pixel-to-pixel normalized correlation. The smart camera, therefore, had to have a very small field of view, or defects had to be relatively large to be visible in the VGA image; and the process usually had to be relatively slow, handling dozens of parts per minute rather than hundreds or thousands.
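
To see why VGA was the practical ceiling, consider the cost of the older approach. The sketch below is a minimal, unoptimized normalized cross-correlation search in NumPy (commercial libraries use far faster formulations); its cost grows with image area times template area, which is exactly what megahertz-class processors could not sustain at higher resolutions.

```python
import numpy as np

def ncc_search(image: np.ndarray, template: np.ndarray):
    """Exhaustive normalized cross-correlation template search.

    Slides the template over every position in the image and scores the
    match; cost is O(image area x template area), which is why this
    approach topped out at VGA resolution on megahertz-class processors.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = float((w * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```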

Until now, transistor budget constraints typically mandated a two-chip solution for CPU and GPU functions, forcing system architects to use a chip-to-chip crossing between the memory controller and either the CPU or GPU. These transfers affect memory latency and consume system power. The APU’s scalar x86 cores and SIMD engines share a common path to system memory to help avoid these constraints. Image courtesy of AMD.

So why didn’t smart camera makers use the same Pentium M class, gigahertz microprocessors as a PC and achieve performance parity at a lower cost? The answer is heat: PCs use fans to actively cool the microprocessor, which allows the PC’s “brain” to work faster and process more data. If the smart camera added a fan, it would lose several of its advantages over the PC-host system, namely its size and ruggedness. No moving parts translates to a longer mean time between failures (MTBF). Plus, adding a fan, vents, and drive electronics would make the smart camera considerably larger, a bad idea for customers and OEMs looking to build the smart camera into larger machines or retrofit existing equipment, while opening the device to industrial contaminants and increasing the chance of system failure.

An “Atomic” Game Changer

The shift came in 2008 when Intel announced the new Atom microprocessor, built on 45-nm lithography technology and designed for netbooks and Internet devices. By shrinking the size of the circuits on the microprocessor, Intel could achieve roughly half the performance of a Pentium M class processor (2 to 3 Gflops), an order of magnitude more than the Geode predecessors used in the first PC camera models.

But just as important as performance are power consumption and the associated heat generation. The Atom microprocessor consumes 20% less power than a Pentium M class at full speed, and considerably less during idle times, allowing the unit to shed heat faster and run cooler than previous models. Early this year, Intel added a GPU to its x86-based CPU, while AMD joined the fray with the Fusion APU, which, like the new Atom E6xx class microprocessor, places a GPU core on the same die as the CPU. Using Fusion’s 40-nm lithography technology, the latest PC cameras can now deliver up to 90 Gflops of processing power. Soon, PC camera makers that are early adopters of the new CPU/GPU processors will deliver up to 480 Gflops in a PC camera through AMD’s A-Series APU, announced in August 2011.

More than just computational power, the A-Series also delivers a zero-copy memory function on a single die, allowing the CPU and GPU to share values without runtime memory transfers across a PCI Express bus, speeding up computations and reducing system latency.
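
In OpenCL terms, this shows up as a host-visible allocation that both sides can address. The sketch below is a minimal PyOpenCL illustration, assuming an APU-class driver that treats CL_MEM_ALLOC_HOST_PTR buffers as genuinely shared memory; on a discrete GPU the same call may still copy behind the scenes.

```python
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

# Allocate a buffer the runtime may place in memory visible to both
# the CPU and the GPU; on an APU this can be truly zero-copy.
shape, dtype = (480, 640), np.float32
buf = cl.Buffer(ctx, mf.READ_WRITE | mf.ALLOC_HOST_PTR,
                size=int(np.prod(shape)) * np.dtype(dtype).itemsize)

# Map the buffer into host address space and write pixels directly into
# it; no separate host array and no explicit enqueue_copy transfer.
mapped, _ = cl.enqueue_map_buffer(queue, buf, cl.map_flags.WRITE,
                                  0, shape, dtype)
mapped[...] = 0.0  # the acquired frame would be written here
del mapped         # releasing the mapped array unmaps the buffer
# A GPU kernel can now read `buf` without a bus transfer.
```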

PC Inside

More than just running a full OS and image processing library, the additional processing speed has freed machine vision PC camera vendors from having to optimize their systems for a single image processing library or OS. XIMEA GmbH, for example, includes 25 application programming interfaces (APIs) with its CURRERA line, making the camera plug-and-play compatible with the major image processing libraries on the market, including Cognex Corp.’s VisionPro, Matrox’s MIL, National Instruments’ LabVIEW, MVTec Software’s HALCON, and more. The Leutron Vision CheckSight PC camera offers a C compiler that designers can use to develop their own APIs. This step toward greater compatibility is important for integrators because most are familiar with only a few image processing libraries, which can limit their hardware choices for a given machine vision application.
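
For an integrator, “plug-and-play compatible” in practice means the inspection logic can be written against a thin abstraction rather than one vendor’s SDK. The sketch below is purely hypothetical: the FrameSource interface and grab method are invented for illustration and belong to no vendor API.

```python
from typing import Protocol
import numpy as np

class FrameSource(Protocol):
    """Hypothetical adapter interface: each vendor SDK gets a small
    wrapper exposing this one method, so inspection code stays portable
    across cameras and image processing libraries."""
    def grab(self) -> np.ndarray: ...

def inspect(source: FrameSource) -> bool:
    """Placeholder acceptance test, independent of any camera API."""
    frame = source.grab()
    return float(frame.mean()) > 10.0  # illustrative pass/fail criterion
```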

The CURRERA-G PC camera has a series of available I/O interfaces, and the device has been designed to accommodate image processing libraries.

The smaller footprint of the PC camera allows data to get from the sensor to the processor much faster than on a comparable PC-host system, reducing latency and jitter (variation in that latency) between image acquisition and processing. The image transfer speed and data integrity from a remote camera to a PC or embedded vision system are limited by cable bandwidth, cable length, and electromagnetic interference (EMI).

Unlike standard PC-host machine vision systems, which come with consumer-based operating systems, PC cameras have the option of running a full operating system, as with the Leutron CheckSight smart camera, or an embedded version of Windows or Linux, as with the Matrox Iris GT and XIMEA CURRERA PC camera series. An embedded OS uses a componentized architecture that allows the PC camera maker to choose only those features that are necessary for system and network support. OS modules, such as legacy support for applications designed for older versions of the operating system, or the various APIs for Internet Explorer and other non-essential OS tasks, can be eliminated, further reducing latency and increasing the PC camera’s overall processing speed.

A machine vision system based on an industrial PC can lay the same claim to an embedded OS; however, an industrial PC paired with a multi-megapixel industrial camera costs more than a PC camera, while still using cables and bus interfaces that slow image transfer between camera and processor, complicate system integration with existing manufacturing equipment, and increase the chance of data loss during transfer.

Unfortunately, even an embedded OS is not a “real-time” operating system, which means that determinism, or the assurance that data will be at a certain point at a given time, varies depending on computational load and other factors. While determinism is improved by a PC camera architecture that puts all components in close proximity and uses on-board interfaces rather than cabling and backplanes, the additional processing power also allows PC camera makers to include real-time industrial fieldbus interfaces. Matrox’s Iris GT features Modbus functionality, while XIMEA’s CURRERA line includes an on-board PLC that secures nanosecond-level determinism when communicating between the PC camera and downstream ejectors and other industrial equipment.
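
As a generic illustration of the fieldbus side (not Matrox’s or XIMEA’s actual implementation), the sketch below issues a single Modbus TCP “Read Holding Registers” request using only Python’s standard library. The host address, unit ID, and register range are placeholders, and a real deployment would use a maintained Modbus library with proper error handling.

```python
import socket
import struct

def read_holding_registers(host: str, unit: int, start: int, count: int,
                           port: int = 502) -> list[int]:
    """Issue one Modbus TCP 'Read Holding Registers' (function 0x03) request.

    Frame layout: MBAP header (transaction id, protocol id = 0, remaining
    byte count, unit id) followed by the PDU (function, start, quantity).
    """
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 0x03, start, count)
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(request)
        # Response: 7-byte MBAP + function code + byte count + registers.
        # (A production client would loop on recv until complete.)
        header = sock.recv(9)
        payload = sock.recv(header[8])
    return list(struct.unpack(">%dH" % count, payload))

# Placeholder usage: poll two registers from a hypothetical ejector PLC.
# values = read_holding_registers("192.168.0.50", unit=1, start=0, count=2)
```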

Going the Distance

PC cameras, like all machine vision systems, are designed for industrial product lifetime support in excess of seven years, while consumer PCs’ hardware and software configurations change from week to week, creating a potential support nightmare for machine vision providers. At the same time, while industrial PC cameras will fail less often and perform better than consumer-based platforms because their software and hardware are better integrated and supported, troubleshooting PC cameras is more difficult because they are not designed to be disassembled by anyone except trained factory personnel.

Fortunately, the full OS capabilities of a PC camera provide an answer: full network, Internet, and browser support marks a major improvement over traditional smart camera remote support options. In today’s global economy, improved remote support is a “must” for machine vision providers and customers alike, whose lean operations cannot withstand periods of unexpected downtime.

In the future, PC camera makers could improve support by using “snap-in” modular designs that allow the user to replace a failed motherboard or network interface. This, combined with the growing processing power and low heat generation of accelerated core processors, would also help erase the last advantage of PC-host systems over PC camera solutions: the number, variety, and selection of sensor types available. Imagine being able to re-purpose a PC camera for a high-resolution operation simply by snapping out the sensor box and replacing it with a larger array. Science fiction? Just wait.

This article was written by Max Larin, CEO, XIMEA GmbH (Münster, Germany).