Retrieving objects from a pile is a daunting task for a robot, as it requires complex reasoning about the pile and the objects within it. MIT researchers previously demonstrated a robotic arm that combines visual information and radio frequency (RF) signals to find hidden objects tagged with RFID tags, which reflect signals sent by an antenna. Building on that work, they have now developed a new system that can efficiently retrieve any object buried in a pile. As long as some items in the pile have RFID tags, the target item itself does not need to be tagged for the system to recover it.
The algorithms behind the system, known as FuseBot, reason about the probable location and orientation of objects under the pile. Then FuseBot finds the most efficient way to remove obstructing objects and extract the target item. This reasoning enabled FuseBot to find more hidden items than a state-of-the-art robotics system, in half the time.
This speed could be especially useful in an e-commerce warehouse. A robot tasked with processing returns could find items in an unsorted pile more efficiently with the FuseBot system, said Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science and director of the Signal Kinetics group in the Media Lab. "We were able to do this because we added multimodal reasoning to the system — FuseBot can reason about both vision and RF to understand a pile of items," said Adib.
A recent market report indicates that more than 90 percent of U.S. retailers now use RFID tags, but the technology is not universal, leading to situations in which only some objects within piles are tagged. This problem inspired the group’s research.
With FuseBot, a robotic arm uses an attached video camera and RF antenna to retrieve an untagged target item from a mixed pile. The system scans the pile with its camera to create a 3D model of the environment. Simultaneously, it sends signals from its antenna to locate RFID tags. These radio waves can pass through most solid surfaces, so the robot can “see” deep into the pile. Since the target item is not tagged, FuseBot knows the item cannot be located at the exact same spot as an RFID tag.
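The fusion step described above can be sketched in code. This is a hypothetical illustration, not the researchers' actual implementation: it assumes the camera scan yields a 3D grid of cells the camera cannot see into, and that RFID localization yields a list of grid cells occupied by tagged items. The function name `target_location_prior` and the grid representation are invented for this sketch.

```python
import numpy as np

def target_location_prior(occluded, tag_cells):
    """Score where an untagged target could hide.

    occluded: boolean 3D grid of cells the camera cannot see into.
    tag_cells: list of (x, y, z) indices where RFID tags were localized.
    Returns a normalized probability grid over the target's location.
    """
    prior = occluded.astype(float)  # hidden cells start out equally likely
    for cell in tag_cells:
        prior[cell] = 0.0           # the target cannot share a cell with a tag
    total = prior.sum()
    return prior / total if total > 0 else prior

# Toy example: an 8-cell hidden region with two localized RFID tags.
occluded = np.zeros((4, 4, 2), dtype=bool)
occluded[1:3, 1:3, :] = True
grid = target_location_prior(occluded, [(1, 1, 0), (2, 2, 1)])
```

After exclusion, the remaining six hidden cells share the probability mass equally; a richer model could also weight cells by how well the target's known size and shape fit there.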
Algorithms fuse this information to update the 3D model of the environment and highlight potential locations of the target item, whose size and shape the robot knows. Then the system reasons about the objects in the pile and the RFID tag locations to determine which item to remove, with the goal of finding the target item in the fewest moves. This reasoning, as well as its use of RF signals, gave FuseBot an edge over a state-of-the-art system that used only vision.
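One simple way to realize the "fewest moves" reasoning is a greedy policy: remove the reachable item that uncovers the largest share of the target's probability mass. The sketch below is an assumption-laden stand-in, not FuseBot's actual planner; the item footprints and probability map are illustrative data structures invented here.

```python
def next_item_to_remove(items, prob_mass):
    """Pick the item whose removal exposes the most target probability.

    items: dict mapping item name -> set of hidden cells it covers.
    prob_mass: dict mapping cell -> probability the target is there.
    """
    def exposed(name):
        return sum(prob_mass.get(cell, 0.0) for cell in items[name])
    return max(items, key=exposed)

# Toy pile: the book sits over 0.7 of the target's probability mass,
# so the greedy policy removes it first.
items = {"box": {(0, 0), (0, 1)}, "mug": {(1, 1)}, "book": {(2, 0), (2, 1)}}
prob = {(0, 0): 0.10, (0, 1): 0.15, (1, 1): 0.05, (2, 0): 0.40, (2, 1): 0.30}
choice = next_item_to_remove(items, prob)  # -> "book"
```

A full planner would also account for grasp feasibility and re-scan after each removal, but the greedy score captures the core idea of trading moves against information.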
FuseBot could be applied in a variety of settings because the software that performs its complex reasoning can be implemented on any computer. In the near future, the researchers are planning to incorporate more complex models into FuseBot so it performs better on deformable objects. Beyond that, they are interested in exploring different manipulations, such as a robotic arm that pushes items out of the way. Future iterations of the system could also be used with a mobile robot that searches multiple piles for lost objects.
For more information, contact Abby Abazorius at