The development of a robotic random bin picking system that translates to a real-world factory application has received attention for more than 30 years, and has been called by some the “Search for the Holy Grail.” Random bin picking refers to a system where vision-enabled robots locate, grasp, and move single parts from a bin of jumbled or randomly piled parts. In recent years, advances in processing speed, new algorithms, and significant engineering have combined to solve limited versions of the problem, but the more general problem has remained unsolved.

Semi-Random vs. Really Random

Many of the limited solutions handle what the machine vision industry refers to as “semi-random” problems, where parts are not totally randomized but are loosely fitted in a bin. In some cases the parts are roughly stacked, are flat 2D parts, or arrive in a single layer in the bin, creating less variance in part orientation and height. Semi-random picking techniques typically include pattern matching, 2D or 2.5D part location, and a standard approach trajectory for grasping parts.

Truly randomized bins, where parts are situated underneath or on top of one another in any position, create an additional set of problems. Parts jumbled on top of one another increase the number of potential orientations of a part, reduce the visibility of part features used to recognize individual parts, and require additional potential paths for grasping parts. Real-world solutions need to overcome all of these obstacles, and in addition to solving the vision problem, ensure that the robot avoids colliding with other parts or the bin itself.

All vision guidance solutions tackle the extremely difficult problems posed by harsh lighting conditions. In addition, random bin picking must deal with increased shadowing and specularities caused by reflections from other parts, variation in the appearance of a part based on its pose, varying degrees of occlusion by other parts in the pile, and the lack of a large number of salient features for recognition, a typical characteristic of rough or unfinished parts that are inexpensive enough to toss together in a bin. Another particularly difficult problem is cascading layers of parts, in which parts partially lie on top of one another in a weaving formation, so a vision system must be able to recognize a part as safe to grasp even while it is under other parts.

In addition to vision issues, there are significant challenges in planning grasping paths for robot grippers that avoid collisions with the bin and with the rest of the pile within the robot’s range of movement. Robot tooling must be developed to grip parts from various positions, and must grasp a part without colliding with parts above, below, or next to it. The system must also complete the task as fast as or faster than current manual or semi-manual systems to be commercially viable to customers.
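As a rough illustration of the clearance problem described above, the following Python sketch tests whether a straight vertical approach to a candidate grasp point stays away from the bin walls and clears neighboring parts in a coarse height map of the pile. The bin extents, margins, and height-map format are hypothetical assumptions chosen for illustration, not a description of any particular commercial system.

# Hedged sketch: vertical-approach clearance test against bin walls and a
# coarse height map of the pile. All dimensions are illustrative assumptions.
def grasp_is_clear(grasp_xy, grasp_z, height_map, cell_size,
                   bin_extent=(0.0, 0.6, 0.0, 0.4),   # x_min, x_max, y_min, y_max (meters)
                   wall_margin=0.02, clearance=0.01):
    """Return True if a vertical approach to the grasp point looks safe."""
    x, y = grasp_xy
    x_min, x_max, y_min, y_max = bin_extent

    # Reject grasps too close to the bin walls for the gripper to fit.
    if not (x_min + wall_margin <= x <= x_max - wall_margin and
            y_min + wall_margin <= y <= y_max - wall_margin):
        return False

    # Anything in a neighboring height-map cell that rises above the grasp
    # height (plus a safety clearance) would be struck on the way down.
    i, j = int(x / cell_size), int(y / cell_size)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) == (0, 0):
                continue
            if height_map.get((i + di, j + dj), 0.0) > grasp_z + clearance:
                return False
    return True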

Methods and Approaches

Research focused on visually recognizing and locating 3D objects uses either 2D data from a single image or 3D data from stereo images or range scanners. Methods can be subdivided into model-based, appearance-based, and 3D data approaches.

The model-based approaches suffer from difficulties in feature extraction under harsh lighting conditions. Typical parts will not contain a large number of features, limiting the accuracy of a model-based fit to noisy image data. Appearance-based approaches have problems in segmenting out the object for recognition, have trouble with occlusion, and may not provide a 3D pose accurate enough for grasping purposes.

Approaches with 3D data face lighting effects that cause problems for stereo reconstruction, and specularities that create spurious data for both stereo and laser range finders. Once the 3D data is generated, there are the issues of image segmentation and representation. On the representation side, more complex models are often used than in the 2D case. These models contain a larger number of free parameters, which can be difficult to fit to noisy data.

A Vision Solution

Vision guidance solutions for random bin picking must deal with varying degrees of occlusion by other parts in a pile, and with parts that partially lie on top of one another in a weaving formation.

A solution that is showing great promise combines the strongest aspects of both 2D and 3D object recognition. First, a robot-mounted stereo camera is used to acquire images from multiple planned viewpoints above the bin in order to increase the probability of observing parts in desirable positions. This method of continually moving the camera to new viewpoints is called Active Vision, an intelligent sensing strategy that acquires and uses information from multiple images for a given task. Active Vision takes full advantage of a robot-mounted camera by moving from areas with too much reflection, shadowing, or occlusion to more suitable areas where better source images can be gathered to improve both the 2D and 3D algorithms.
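A minimal sketch of such a viewpoint-selection loop is shown below, assuming hypothetical robot, camera, and matcher interfaces: each planned viewpoint is visited, its image is scored by rewarding 2D feature matches and penalizing glare (saturated pixels), and the best view is kept. The scoring rule and weights are illustrative assumptions only.

# Hedged sketch of an Active Vision viewpoint-selection loop.
# The robot, camera, and matcher objects are hypothetical placeholders.
def score_view(pixels, match_count, saturation_threshold=250):
    """Higher is better: reward 2D matches, penalize glare (saturated pixels)."""
    saturated_fraction = sum(p >= saturation_threshold for p in pixels) / len(pixels)
    return match_count - 100.0 * saturated_fraction   # weight chosen for illustration

def best_viewpoint(robot, camera, matcher, viewpoints):
    """Visit each planned viewpoint above the bin and return the best-scoring one."""
    best = None
    for pose in viewpoints:
        robot.move_to(pose)                       # reposition the wrist-mounted camera
        pixels = camera.grab()                    # acquire one image of the pile
        matches = matcher.count_matches(pixels)   # 2D feature matches found in this view
        score = score_view(pixels, matches)
        if best is None or score > best[0]:
            best = (score, pose, pixels)
    return best                                   # (score, pose, image) of the best view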

Next, images from one of the robot-mounted stereo cameras are analyzed using 2D matching algorithms to locate potential part candidates. The 3D matching stage then compares a 3D model of the part, called a 3D point cloud, against the runtime 3D surface map of the pile in the area chosen by the 2D matching stage. The 3D surface map is computed using both cameras of the stereo pair. Once parts are located, the current best candidate for picking is chosen based on various criteria. Three-dimensional models of the robot arm, end-of-arm tool, and the bin are used to determine possible collisions with known part locations, and the 3D surface map in the region around the grasp points is tested to ensure there is nothing obstructing the gripper as it moves to grasp the part.
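Condensed into code, the described stages might look roughly like the sketch below: 2D matching proposes candidate regions, the stored 3D point-cloud model is aligned against the stereo surface map inside each region, candidates whose grasps would collide are rejected, and the best-scoring survivor is returned. Every function passed in (find_2d_candidates, fit_point_cloud, collision_free) is a hypothetical stand-in, not one of the actual system components.

# Hedged sketch of the 2D-then-3D candidate pipeline described above.
# The three callables are hypothetical stand-ins for the real components.
def pick_best_candidate(image, surface_map, part_model,
                        find_2d_candidates, fit_point_cloud, collision_free):
    candidates = []
    # Stage 1: 2D matching on one stereo image proposes likely part regions.
    for region in find_2d_candidates(image):
        # Stage 2: align the 3D point-cloud model of the part against the
        # runtime surface map in that region; yields an estimated pose and score.
        pose, score = fit_point_cloud(part_model, surface_map, region)
        # Stage 3: discard candidates whose grasp would collide with the bin,
        # the pile, or the arm/tool models.
        if pose is not None and collision_free(pose, surface_map):
            candidates.append((score, pose))
    # Return the current best candidate, or None if nothing is graspable.
    return max(candidates, key=lambda c: c[0], default=None)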

Random Bin Picking™ (RBP™) technology, developed by Braintech, considers only cases where there is a single part type to be recognized; however, the part may exhibit specularity and may contain only a small number of significant features useful for recognition. A focus on edge detection and the use of Active Vision make the system more resistant to changes in lighting. The system deals with occlusion simply by not requiring a complete match. The minimum number of part features that must be found is part-specific, but generally falls in the range of 60 to 70% of the total features.
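The partial-match rule reduces to a simple ratio test, as in the illustrative snippet below; the feature representation and the 0.65 default threshold are assumptions chosen only to reflect the 60 to 70% range quoted above.

# Illustrative partial-match test: accept a part when enough of its model
# features are found, so occluded parts can still be recognized.
def is_acceptable_match(model_features, found_features, threshold=0.65):
    """Accept the match if the found fraction of model features meets the threshold."""
    found = len(set(model_features) & set(found_features))
    return found / len(model_features) >= threshold

# Example: 7 of 10 model features located -> 0.70 >= 0.65, so the partially
# occluded part is still accepted for grasp planning.
print(is_acceptable_match([f"f{i}" for i in range(10)],
                          [f"f{i}" for i in range(7)]))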

Random bin picking streamlines assembly-line processes by removing the need for expensive custom crates, fixturing, and precision feeding mechanisms in the case of automated systems, and for ergonomically dangerous manual labor in the case of human-operated part feeding. As a result of reduced fixturing costs and improved ergonomics, RBP can deliver a significant return on investment.

This article was written by Kait Jones, Marketing Coordinator, and Jeff Beis, Robot Vision Scientist, at Braintech, Inc., Vancouver, BC, Canada. For more information visit http://info.hotims.com/15132-155.