Machine vision dates back to the beginning of the modern industrial robot age in the 1980s. Augmenting collaborative robots (or ‘cobots’) with vision allows them to perform with higher precision, flexibility, and intelligence. However, integration is not a one-size-fits-all process as the specific requirements of each application can vary greatly.
In an electronics assembly line, for example, a cobot working on printed circuit board (PCB) assembly could use a machine vision system for two or more tasks with high precision requirements, such as locating the position of components on a PCB and inspecting the quality of solder joints post-assembly.
Meanwhile, a simple application, such as detecting whether a product has a sticker indicating that it has passed inspection, can be handled by a basic 2D camera-based system at a lower price point.
The global machine vision market grew by 10.7 percent between 2021 and 2022 to reach a value of $12 billion, according to MarketsandMarkets, and is projected to reach $17.2 billion by 2027. Major factors driving the adoption of machine vision in cobot systems include the increasing affordability of machine vision hardware, usability enhancements, and the incorporation of artificial intelligence. These improvements compound to boost performance, make new cobot applications possible, and reduce the total cost of ownership (TCO) of machine vision projects.
So, what are the main considerations to keep in mind when choosing a machine vision system for a cobot-based application?
1. Determine the Need for Machine Vision
First things first, are you sure that your application requires machine vision? Could the application be performed using traditional sensors or fixturing? Some cobots, for example, have built-in user-friendly palletizing wizards that can easily pick up parts placed in a grid pattern on a peg board, which would not require vision at all. Similarly, simple sorting and detection applications that don’t require high precision could be performed using traditional sensors.
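To illustrate why vision is unnecessary in the grid case: when parts sit in a known layout, the pick positions can be computed directly from the fixture geometry. The sketch below is a minimal, hypothetical Python example; the origin coordinates, pitch values, and grid dimensions are assumed values that would come from the actual peg board, not from any camera.

```python
# Hypothetical sketch: generating pick positions for parts laid out in a fixed
# grid, with no vision system involved. All dimensions are assumed values.
from dataclasses import dataclass

@dataclass
class PickPoint:
    x_mm: float
    y_mm: float
    z_mm: float

def grid_pick_points(origin_x_mm=100.0, origin_y_mm=50.0, z_mm=20.0,
                     pitch_x_mm=40.0, pitch_y_mm=40.0, rows=4, cols=6):
    """Return a pick position for every cell of a rows x cols peg-board grid."""
    points = []
    for row in range(rows):
        for col in range(cols):
            points.append(PickPoint(origin_x_mm + col * pitch_x_mm,
                                    origin_y_mm + row * pitch_y_mm,
                                    z_mm))
    return points

for p in grid_pick_points():
    print(p)  # in practice these coordinates would be sent to the cobot controller
```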
Nevertheless, a significant number of tasks require some sort of machine vision system. These include applications involving object recognition, variable object locations, quality inspection tasks, and safety.
2. Location, Inspection, or Safety?
Before you purchase a cobot arm with built-in vision, make sure that it is the right vision solution for your application. Cobots that provide seamless integration with a wide range of vision solutions from entry-level to advanced are a real plus here; broad compatibility provides more flexibility and helps to futureproof the initial cobot investment.
Most vision-based applications fall into one of three subdomains: location (including path planning), inspection, and safety. For part location applications, it’s important that the vision system is capable of accurate object recognition and pose estimation.
In quality inspection roles, the system should be able to detect minute defects. This requires high-resolution cameras and advanced image processing software.
For safety applications, such as detecting when a human has approached and/or entered the cobot’s workspace, a machine vision system will require real-time processing capabilities, robust object detection, and tracking functionality.
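To make the part-location case concrete, the sketch below shows one common approach: detecting a fiducial (ArUco) marker on a part with OpenCV and estimating its pose relative to a calibrated 2D camera. This is an illustrative example, not a specific vendor's method; it assumes OpenCV 4.7 or later, and that camera_matrix and dist_coeffs come from a prior calibration. The marker size and dictionary are placeholder choices.

```python
# Hypothetical sketch: estimate the pose of a part carrying a fiducial marker,
# assuming a calibrated 2D camera (camera_matrix, dist_coeffs from calibration).
import cv2
import numpy as np

MARKER_SIZE_MM = 40.0  # assumed marker edge length

# 3D corner coordinates of the marker in its own frame (Z = 0 plane),
# ordered to match OpenCV's ArUco corner convention
object_points = np.array([
    [-MARKER_SIZE_MM / 2,  MARKER_SIZE_MM / 2, 0],
    [ MARKER_SIZE_MM / 2,  MARKER_SIZE_MM / 2, 0],
    [ MARKER_SIZE_MM / 2, -MARKER_SIZE_MM / 2, 0],
    [-MARKER_SIZE_MM / 2, -MARKER_SIZE_MM / 2, 0],
], dtype=np.float32)

def locate_part(frame, camera_matrix, dist_coeffs):
    """Return rotation and translation of the marker relative to the camera."""
    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None:
        return None  # part not found in this frame
    image_points = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```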
3. Consider Your Lighting
Lighting strongly influences image quality and, as a result, has a major impact on the performance of vision systems. Some vision systems need consistent, high-contrast lighting conditions. Many machine vision systems come with their own illumination components to address this requirement.
Meanwhile, other machine vision solutions can cope with variable lighting conditions. Changes in ambient light levels over the course of a day might throw off some vision systems but not others. Even a change in the type of light bulb used in a factory, such as switching from fluorescent to LED bulbs, can cause some vision systems to fail.
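If the lighting itself cannot be controlled, some variability can also be absorbed in software. The sketch below is a minimal example of one common mitigation, applying adaptive histogram equalization (CLAHE) in OpenCV to reduce the effect of changing ambient brightness before inspection; the clip limit, tile size, and file names are illustrative assumptions, not tuned recommendations.

```python
# Minimal sketch: normalizing image contrast so that modest changes in ambient
# lighting have less effect on downstream processing.
import cv2

def normalize_lighting(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # illustrative settings
    return clahe.apply(gray)

frame = cv2.imread("part.png")  # placeholder image path
if frame is not None:
    cv2.imwrite("part_normalized.png", normalize_lighting(frame))
```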
4. Select a 2D or 3D Camera
Often, 2D robotic vision works well across a wide range of simple applications with high repeatability. Typical examples include barcode reading, label orientation checks, and print verification. However, 2D cameras only provide length and width information, not depth, which limits the number of applications they can handle.
In contrast, 3D cameras provide extremely accurate depth and rotational information, making them a good fit for applications that demand detailed information about an object’s precise location, along with its size, volume, surface angles, flatness, and other features. Similarly, advanced inspection applications that need to detect minute defects in a product’s finish will perform best with 3D camera systems.
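As a rough illustration of what the extra depth channel enables, the sketch below derives an object's height and surface flatness from a depth image. The depth array is assumed to come from whatever 3D camera SDK is in use; the region of interest and the synthetic values are placeholders for demonstration only.

```python
# Hypothetical sketch: measuring height and flatness from a depth image
# (a 2D array of distances in millimeters, as provided by a 3D camera SDK).
import numpy as np

def height_and_flatness(depth_mm, roi, table_depth_mm):
    """roi = (row_start, row_end, col_start, col_end) covering the part's top surface."""
    r0, r1, c0, c1 = roi
    surface = depth_mm[r0:r1, c0:c1]
    height = table_depth_mm - np.median(surface)     # part height above the table
    flatness = float(surface.max() - surface.min())  # peak-to-valley deviation
    return height, flatness

# Example with synthetic data standing in for a real depth frame
depth = np.full((480, 640), 500.0)   # table plane 500 mm from the camera
depth[200:280, 300:400] = 470.0      # a 30 mm tall part
print(height_and_flatness(depth, (200, 280, 300, 400), table_depth_mm=500.0))
```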
5. Determine Your Precision Requirements
Take time to closely examine the accuracy, repeatability, and tolerance requirements of your application before you select a machine vision system for your cobot.
Applications like microchip manufacturing and precision assembly require advanced vision systems with high-resolution cameras and complex image processing capabilities. Meanwhile, applications with more lenient tolerances can be performed easily and effectively by less expensive vision systems.
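One quick way to sanity-check precision requirements is to work out how many pixels the camera puts on the smallest feature you need to resolve. The sketch below shows the basic arithmetic; the field of view, minimum feature size, and pixels-per-feature figures are illustrative assumptions rather than vendor specifications.

```python
# Minimal sketch: estimating the sensor resolution needed for a given tolerance.
def required_pixels(fov_mm, min_feature_mm, pixels_per_feature=3):
    """Pixels needed across the field of view so the smallest feature is covered
    by a given number of pixels (3-5 is a common rule of thumb)."""
    return int(round(fov_mm / min_feature_mm * pixels_per_feature))

# Example: a 200 mm field of view and 0.1 mm features need roughly
# 200 / 0.1 * 3 = 6000 pixels across that axis.
print(required_pixels(fov_mm=200, min_feature_mm=0.1))  # -> 6000
```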
6. Consider Cycle Time
To ensure a successful cobot-based machine vision deployment, the processing speed of machine vision systems needs to align with the cycle time of the cobot operation. High-speed applications will require systems with fast image capture and processing capabilities — but be aware that processing all that visual information also takes time, and this impact on performance will need to be factored into any cycle time considerations.
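A simple way to verify this during evaluation is to time the vision pipeline per part and compare the worst case against the share of the cycle budgeted for vision. The sketch below is a generic timing harness; process_image and the 0.5-second budget are stand-ins for whatever capture-and-process step and cycle budget apply to the actual system.

```python
# Minimal sketch: checking whether vision processing fits within the cobot's
# cycle time. process_image is a stand-in for the real capture/processing step.
import time

def worst_processing_time(process_image, frames):
    """Measure the slowest per-frame vision processing time over a sample run."""
    worst = 0.0
    for frame in frames:
        start = time.perf_counter()
        process_image(frame)
        worst = max(worst, time.perf_counter() - start)
    return worst

# Example usage, assuming a 3 s pick cycle with 0.5 s budgeted for vision:
# worst = worst_processing_time(my_pipeline, sample_frames)
# print("fits budget:", worst <= 0.5)
```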
A New Alternative to Machine Vision
It’s also worth noting that there are alternatives to vision-based systems that have no lighting requirements whatsoever, instead using radio frequencies to provide high-precision microlocation and sub-millimeter path planning for cobot arms in real time, at a level most people would traditionally associate with the most advanced 3D systems.
One such system, specially developed for the automotive sector but with applications across numerous verticals, is being developed by Humatics. It enables extremely advanced applications, such as having two cobots work concurrently on an engine as it moves down a conveyor.
Radio frequency-based systems can even outperform vision on certain tasks, said Ron Senior, Vice President of Sales, Marketing and Business Development at Humatics. “The ability to process data in complex environments, such as an engine in motion, is well suited to the lightweight, highly rapid positional data that radio frequency provides. It’s very difficult to achieve the same performance with machine vision because of the image processing required,” he said.
This article was written by Mike De-Grace, UR+ Ecosystem Manager, Universal Robots (Ann Arbor, MI).