Over the decades, machine vision has adopted a number of industry standards, from the traditional Camera Link standard, which relies on frame grabbers, to the more mainstream USB 3.0 standard. To keep pace with changing camera, processing, and integration technologies, machine vision standards are constantly evolving, with each protocol serving a different industrial niche. This article discusses in detail the merits of the USB3 and GigE vision standards, how they keep pace with imaging technologies, and why ruggedizing the cabling is critical for optimal performance.
Fundamental Machine Vision Considerations
From defect detection to optical character recognition/verification (OCR/OCV), the choice of equipment and interface for a machine vision system depends on an array of parameters, including the field of view (FOV), working distance, depth of field, image resolution, and exposure time (Table 1). All of these vary with the given machine vision application. For instance, defect detection typically requires a much smaller region of interest than feature differentiation. The FOV and working distance for an imaging system attached to a robotic arm will be quite different from those of an in-line inspection configuration, so the respective optics, lighting technique, lens-camera combination, and interface would also differ.
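To illustrate how these parameters interact, the thin-lens approximation below relates sensor size, focal length, and working distance to FOV and object-space resolution. This is a hypothetical sketch; the sensor, lens, and pixel-count figures are illustrative assumptions, not values from the article.

```python
def horizontal_fov_mm(sensor_width_mm: float, focal_length_mm: float,
                      working_distance_mm: float) -> float:
    """Approximate horizontal FOV via the thin-lens model:
    FOV ≈ sensor_width * working_distance / focal_length."""
    return sensor_width_mm * working_distance_mm / focal_length_mm

def object_space_resolution_um(fov_mm: float, pixels_across: int) -> float:
    """Object-space distance covered by one pixel, in micrometers."""
    return fov_mm / pixels_across * 1000.0

# Illustrative example: 2/3" sensor (8.8 mm wide), 25 mm lens, 500 mm working distance
fov = horizontal_fov_mm(8.8, 25.0, 500.0)    # 176 mm field of view
res = object_space_resolution_um(fov, 2448)  # ~71.9 µm per pixel
```

A smaller region of interest for defect detection translates directly into a shorter working distance or longer focal length to shrink the FOV and improve per-pixel resolution.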
The shorter the camera's exposure time, the faster the machine vision system must transfer the captured image data to the frame grabber, PC, or embedded processor. Simpler image data can be processed faster, while more complex data takes longer to process. Processing capabilities have evolved, however, in ways that do not require investment in more expensive equipment. This is especially true of the Automated Imaging Association (AIA) GigE and USB 3.0/3.1 standards, which tailor popular commercial interfaces to better fit machine vision applications.
Choosing the Machine Vision Standard
The choice of camera bus depends upon the fundamental considerations listed in Table 1 as well as the installation requirements of a specific machine vision application. Speed and bandwidth are two major factors. For instance, high-speed web inspection with part rates on the order of 400 parts per minute (PPM) would typically require advanced line-scan cameras with high resolution and fast processing. It would also require a machine vision interface that minimizes CPU usage and provides tight system synchronization to control peripheral functions such as lighting or reject gates. A high-bandwidth application such as PCB inspection would require powerful, high-resolution cameras as well as a topology that mitigates the intensive CPU load that would otherwise occur during data transfer.
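As a back-of-the-envelope check of why such applications outgrow a 1 Gbps link, the sketch below estimates the raw data rate of a line-scan camera. The line rate, pixel count, and bit depth are illustrative assumptions for a high-speed web inspection, not figures from the article.

```python
def line_scan_data_rate_mbps(line_rate_khz: float, pixels_per_line: int,
                             bits_per_pixel: int = 8) -> float:
    """Raw line-scan data rate in megabits per second (no protocol overhead)."""
    return line_rate_khz * 1e3 * pixels_per_line * bits_per_pixel / 1e6

# Hypothetical example: 4096-pixel line-scan camera at a 40 kHz line rate, 8-bit mono
rate = line_scan_data_rate_mbps(40.0, 4096)
print(f"{rate:.2f} Mbps")  # 1310.72 Mbps, already beyond a 1 Gbps GigE link
```

Even before protocol overhead, a modest line-scan configuration can saturate 1 GigE, which is why such systems reach for 10GigE, USB3, or frame-grabber-based interfaces.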
Plug-and-Play Vision Standards: USB3 and GigE Vision
Released in 2006, the GigE vision standard started the trend of introducing commonly used commercial interfaces for machine vision applications. This allows for cost-effective commercial off-the-shelf (COTS) equipment that can be upgraded over time as Ethernet advances. For instance, the adoption of 10 gigabit Ethernet (10GigE) allows for the utilization of the established Ethernet infrastructure with switches, cabling, SFP+ transceiver modules, and their respective optics. Moreover, GigE vision can transmit both power and data over the same cable with power over Ethernet (PoE). Current GigE machine vision systems often have a 1 Gbps bandwidth, but this is quickly expanding to 5 Gbps- and 10 Gbps-enabled equipment for higher resolution cameras.
The USB3 vision standard is built on the common commercial USB 3.0 specification, with custom transport layers added to better fit machine vision applications. It is largely the go-to standard for machine vision because of its higher bandwidth and relatively lower CPU usage compared to the other “plug-and-play” standard, GigE vision (Table 2). Future iterations of USB3 vision include USB 3.1 Gen 2, where the effective data rate of roughly 400 MB/s more than doubles to about 900 MB/s, with power delivery of up to 100 watts over the USB cable, making this standard comparable to higher-speed vision standards such as CoaXPress without the need for external power for the camera equipment.
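The trade-off between camera output and interface bandwidth can be made concrete with a quick headroom check. The throughput figures below are rough effective rates (real-world values vary with protocol overhead, packet size, and host controller), and the camera parameters are illustrative assumptions:

```python
# Approximate usable throughputs in MB/s (illustrative, not authoritative).
INTERFACES = {
    "GigE Vision (1 GigE)": 115,
    "GigE Vision (10 GigE)": 1100,
    "USB3 Vision (USB 3.0)": 400,
    "USB3 Vision (USB 3.1 Gen 2)": 900,
}

def camera_data_rate_mb_s(width: int, height: int, fps: float,
                          bits_per_pixel: int = 8) -> float:
    """Raw camera output in megabytes per second."""
    return width * height * fps * bits_per_pixel / 8 / 1e6

def viable_interfaces(rate_mb_s: float) -> list[str]:
    """Interfaces with enough nominal headroom for the given camera data rate."""
    return [name for name, bw in INTERFACES.items() if bw >= rate_mb_s]

# Hypothetical example: 5 MP mono camera at 75 fps, about 376 MB/s
rate = camera_data_rate_mb_s(2448, 2048, 75)
options = viable_interfaces(rate)
```

In this sketch, 1 GigE falls short while USB 3.0, USB 3.1 Gen 2, and 10GigE all have headroom, which mirrors the bandwidth comparison in the text.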
USB3 and GigE vision have the major benefits of plug-and-play capability with software that can readily detect the camera on a number of host PCs. This differs from more custom interfaces that require the use of a frame grabber such as Camera Link. The lack of a frame grabber reduces system costs drastically since a standard API or GUI can be used on a PC or laptop.
The downside is larger overall CPU usage and more difficult I/O synchronization in high-speed applications, since the bulk of the image processing shifts to the host PC. There are, however, workarounds such as external trigger timing controllers as well as complex cameras with custom triggering capabilities such as pulse generation, I/O interactions, and advanced synchronization options.
Many vendor implementations of USB3 vision minimize CPU load through Linux kernel modifications that enable zero-copy image transfers, where image data is written directly into host memory without CPU involvement, or through on-camera image preprocessing, which frees host resources for the actual image analysis, an option found in both GigE and USB3 cameras. The flexibility and vendor diversity within these standards enable them to scale with advancements in machine vision.
Machine Vision Trends
Optics and Peripherals
Like any electromagnetic wave, light carries amplitude (intensity/brightness), wavelength (color), and polarization (contrast) information. Optical metrology has evolved mostly in the spectral domain: from gray-level cameras to RGB color cameras, and finally to multispectral and hyperspectral imaging systems for subsurface defect detection. Monochrome cameras use a single sensor to determine a grayscale value for each pixel, based on the exposure, or amount of available light. Multispectral cameras offer another level of visual detection by capturing red, green, and blue wavelengths, and in some cases infrared (IR). Hyperspectral imaging (HSI) divides an image into many more bands of the electromagnetic spectrum (up to 300 spectral channels).
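The jump in data volume from monochrome to hyperspectral imaging is easy to quantify. The sketch below compares uncompressed image sizes; the 1024 × 1024 frame size is an illustrative assumption, and the 300-channel case corresponds to the upper end mentioned above.

```python
def image_size_mb(width: int, height: int, channels: int,
                  bits_per_channel: int = 8) -> float:
    """Uncompressed image size in megabytes."""
    return width * height * channels * bits_per_channel / 8 / 1e6

mono = image_size_mb(1024, 1024, 1)    # ~1.05 MB grayscale frame
rgb  = image_size_mb(1024, 1024, 3)    # ~3.15 MB color frame
hsi  = image_size_mb(1024, 1024, 300)  # ~314.6 MB hyperspectral data cube
```

A 300-band data cube is two orders of magnitude larger than the monochrome frame, which helps explain why HSI demands so much more bandwidth and storage than conventional inspection cameras.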
Polarization data can provide additional insight into the surface of an object and is particularly valuable when the object under analysis is a single color. On-chip polarization sensors can reveal much more of the information a machine vision application requires, from material classification and imperfection (scratch, particle, etc.) detection to shape recognition. Visual inspection alone, however, typically reveals only a limited amount of information about the state of an object, even in color. And using color for visual inspection tends to add expense while reducing resolution. For this reason, monochrome cameras, often paired with polarization sensors, are used most frequently instead of expensive HSI alternatives.
Artificial intelligence (AI), machine learning, and deep learning offer potential alternatives to the highly complex inspections that traditionally rely on the human eye. The challenge is the intensive processing power required. That can be addressed with vision-based IoT platforms such as Ring and Nest Hello, where AI processing occurs in the cloud. However, progressive upgrades to video capture and compression are necessary to generate images of sufficient quality for deep-learning machine vision applications.
This requires the use of a technology that is both scalable and easily upgradeable over time. Using out-of-date or legacy systems runs the risk of needing an entire redesign, and therefore significant upfront investment, in order to keep pace with advancements in technology.
In terms of improving flexibility, scalability, transmission speeds, cost-effectiveness, and upgradeability, the USB3 and GigE machine vision standards rank fairly high. Naturally, there is no one-size-fits-all solution for any given industrial vision application. However, it is important to note that, in addition to the need for adjustments in the protocols for each of these standards to better fit machine vision scenarios, the physical cabling must also be robust enough to endure the environmental, mechanical, and electrical strains that they may undergo in an industrial setting.
Strengthening Ethernet and USB Cables for Industrial Applications
Ethernet cables and connector heads must be ruggedized to provide reliable service in industrial applications. This is especially true with the advent of industrial Ethernet (IE), now the most popular industrial communication standard, having overtaken fieldbus technology.
Where a typical Ethernet cable comes equipped with an RJ45 connector head and a PVC cable jacket, an industrial cable needs a much sturdier jacketing material and an M12 or screw-down RJ45 connector head (Figure 1). Although a standard RJ45 connector's simple clip mechanism is sufficient to form an electrical connection, it is not robust enough in the presence of vibration, shock, or mechanical strain.
Circular M12 connectors are typically IP-rated, with ingress protection levels at or beyond IP67 (entirely dust tight and able to withstand temporary immersion). Moreover, the threaded fitting prevents intermittent connections and unmating under strain. In some cases, screw-down RJ45 connector variants are sufficient, as they do not dislodge. Ethernet jacketing materials are a major consideration in environments where a cable is exposed to chemical, oil, or moisture ingress, as well as UV exposure. Various jacketing materials can be leveraged to mitigate the detrimental effects of harsh solvents, including polyurethane (PUR) and thermoplastic elastomer (TPE) materials. Often, cables are rated to withstand millions of flex cycles as well as torsional loads, to survive the frequent and rapid movements of robotic arms in industrial automation applications.
Industrial USB Cables
USB cables are not as prolific as Ethernet for industrial uses outside of machine vision and, therefore, there is a more limited variety of cable/connector designs. Still, they can be similarly ruggedized with robust jacketing materials and augmented connector heads.
Connectors can include die-cast metal shells for longer service life or thumbscrews for connections that will experience vibration. USB connectors can also be IP-rated, with molded back-shells for durability. Armoring a USB cable protects it from crushing or from excessive bending. Typically, armoring involves a helically wrapped metal strip with interlocked edges, yielding a corrugated tube that can withstand very high pressures (upwards of 1,500 psi).
The other inherent benefit of armoring is the additional protection it provides the underlying cable from chemical agents and UV. Moreover, the armoring prevents bends beyond the allowable bend radius of the cable, as well as any kinking that can occur during handling. This solution is often more practical for USB than for Ethernet because of the shorter cable lengths typically used with the USB3 vision standard compared to GigE vision.
The Ethernet and USB standards enable desirable, all-purpose, COTS solutions for machine vision applications. The modularity and technological maturity of both of these platforms allows them to keep pace with the intensive requirements of modern industrial applications. The cabling used in these cases requires a high enough level of ruggedization to function optimally over its lifetime in order to reduce the risk of factory downtime.