Machine vision systems have three basic components: a camera to acquire images; software to extract actionable information about the objects in the images; and a computer to run the image processing software. In the 1990s, the machine vision industry placed all three components into a single housing and called it a “smart camera,” aiming to reduce the cost and size of the vision system and to improve its usability for manufacturing process control and quality inspection.
Unfortunately, smart cameras weren’t very smart. They had slow microprocessors, which limited the sensor sizes and options a camera could handle, and limited memory; and because many buyers had no machine vision expertise, smart cameras could seem less than “user friendly.” As a result, smart cameras could not process high-resolution images fast enough for most industrial processes, run a standard PC operating system (OS) with full network and peripheral functionality, or offer a full set of image processing functions, limiting their ability to tackle complex machine vision applications.
Today, thanks to new low-power, high-speed, multicore processors, smart cameras can achieve processing speeds upwards of 90 gigaflops (Gflops), ten times the processing power of a Pentium M-class, single-core microprocessor — enough to support full image processing libraries and sensor resolutions of 5 megapixels (MP) or more, and to run a standard PC OS with full peripheral and network functionality, simplifying system setup and remote support for users at all levels of machine vision expertise. This new class of “PC camera” truly places the full capabilities of an industrial PC inside a machine vision camera.
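The practical impact of that processing budget can be seen with a rough back-of-envelope calculation. The gigaflop and megapixel figures come from the text above; the per-pixel operation count is an assumed, illustrative value for a moderately complex inspection pipeline, not a figure from the article:

```python
# Rough frame-rate estimate for a 5 MP sensor on a 90 Gflops camera.
# OPS_PER_PIXEL is an assumed, illustrative cost -- real pipelines
# vary widely depending on the inspection task.
GFLOPS = 90e9         # processing budget of the new camera class
PIXELS = 5e6          # 5 megapixel sensor
OPS_PER_PIXEL = 1000  # assumed per-pixel cost of the vision pipeline

frames_per_second = GFLOPS / (PIXELS * OPS_PER_PIXEL)
print(f"{frames_per_second:.0f} frames/s")  # prints "18 frames/s"
```

Even under this conservative assumption, a 90 Gflops budget comfortably supports full-resolution inspection at line-rate speeds, which a megahertz-era smart camera could not approach.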
By 2006, microprocessor technology and CompactFlash memory had advanced to the point that smart cameras, such as Sony’s XCI-SX1 with its Geode processor performing at about 1,000 megaflops (Mflops), could run a full Windows operating system and a full image processing library. Megahertz-speed microprocessors, however, meant the smart camera could still only process VGA-resolution images, even with more efficient modern image-processing algorithms, such as vector-based geometric pattern searches, in place of older pixel-to-pixel normalized correlation. The smart camera therefore had to have a very small field of view, or defects needed to be relatively large to be visible in the VGA images; and the process usually needed to be relatively slow, handling dozens of parts per minute rather than hundreds or thousands.
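For context, the older pixel-to-pixel normalized correlation mentioned above can be sketched in a few lines of NumPy. This is a minimal, unoptimized illustration — the function name and brute-force loops are ours, not from any particular vision library:

```python
import numpy as np

def normalized_correlation(image, template):
    """Slide a template over an image and return the normalized
    cross-correlation score at every valid offset (the classic
    pixel-to-pixel matching approach)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum()) * t_norm
            # Score is 1.0 at a perfect match, near 0 for no correlation.
            scores[y, x] = (p * t).sum() / denom if denom else 0.0
    return scores
```

Every candidate position touches every template pixel, so the cost grows with image size times template size — which is why a vector-based geometric pattern search, matching a sparse set of extracted edge features instead of raw pixels, scales so much better on a slow processor.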
So why didn’t smart camera makers simply use the same Pentium M-class, gigahertz microprocessors as PCs and achieve performance parity at a lower cost? The answer is heat: PCs use fans to actively cool the microprocessor, which allows the PC’s “brain” to work faster and process more data. If the smart camera added a fan, it would lose several of its advantages over the PC-host system, namely its size and ruggedness. No moving parts translates to a longer mean time between failures (MTBF). Plus, adding a fan, vents, and drive electronics would make the smart camera considerably larger — a bad idea for customers and OEMs looking to build the smart camera into larger machines or retrofit existing equipment — while opening the device to industrial contaminants and increasing the chance of system failure.