A SimPLE Way to Teach Robots to Pick and Place

MIT MechE researchers introduce an approach — dubbed SimPLE (Simulation to Pick Localize and placE) — that teaches robots to pick, regrasp, and place objects using the object’s computer-aided design model. Watch this video to learn more.

“In this work we show that it is possible to achieve the levels of positional accuracy that are required for many industrial pick and place tasks without any other specialization,” says MIT visiting scientist Alberto Rodriguez.



Transcript

00:00:00 [MUSIC PLAYING] ANTONIA DELORES BRONARS: Manufacturing is one of the first places where robotics has become very important and very impactful. The general paradigm is one robot is capable of doing one job, interacting with one type of object, which makes it very expensive for companies to introduce new parts to their manufacturing lines.

00:00:23 Our solution is to reduce the burden of introducing new objects to make it so that robots can still interact precisely, but more flexibly. The approach that we have developed is called SimPLE. And it relies on interacting with objects in simulation to precisely pick up, localize, and place objects. We rely on the objects' CAD models for the type of domains that we're targeting. It's a compelling demonstration of the power of integrating vision and tactile sensing.
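The vision-plus-tactile idea described here can be illustrated with a simple precision-weighted fusion of two pose estimates. This is a hypothetical sketch, not the SimPLE implementation: the function name, the Gaussian-independence assumption, and the example variances are all illustrative.

```python
import numpy as np

def fuse_estimates(mu_vision, var_vision, mu_tactile, var_tactile):
    """Inverse-variance (precision-weighted) fusion of two independent
    Gaussian estimates of the same object position.

    Vision supplies a coarse, global estimate (large variance);
    tactile supplies a precise, local estimate (small variance)."""
    w_v = 1.0 / var_vision   # precision of the vision estimate
    w_t = 1.0 / var_tactile  # precision of the tactile estimate
    mu = (w_v * mu_vision + w_t * mu_tactile) / (w_v + w_t)
    var = 1.0 / (w_v + w_t)  # fused variance is below either input's
    return mu, var

# Illustrative numbers: vision is accurate to ~2 cm, tactile to ~1 mm.
mu, var = fuse_estimates(np.array([0.10, 0.20]), 4e-4,
                         np.array([0.102, 0.198]), 1e-6)
```

Because the tactile reading is far more precise, the fused estimate lands essentially on top of it, while the vision estimate is what made finding the object possible in the first place.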

00:00:54 One gives you a very global, high-level view of what's going on, while the other gives you a local but highly precise view of where the object is. One of the features of our system is that it's bimanual, meaning it has two arms. So, if necessary, it can decide to hand the object off to the other arm, which can lead to a higher success rate. Our lab recognized that there is a gap when it comes to transforming arrangements of objects from an unstructured set to a structured set, which is super

00:01:24 valuable in industry because it opens the door to a wider range of downstream tasks. This system could see applications in manufacturing, in hospitals, in laboratory settings, anywhere where the set of objects that the robot would interact with is relatively consistent over some horizon of time. I think the really cool thing about this work is it truly is an end-to-end system that puts together a lot of pieces from perception with multiple modalities

00:01:50 to planning to really explore the synergies between these different parts and how we can leverage the robot's knowledge of how well they work to come up with a plan that is robust and efficient. It's capable of understanding its own capabilities as well as limitations, which is a very humanlike form of intelligence. [MUSIC PLAYING]
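The planning idea above, choosing between acting directly and handing the object to the other arm based on how reliable each step is expected to be, can be sketched as picking the candidate plan with the highest expected success. The plan names and per-step probabilities below are invented for illustration; they stand in for reliability estimates a system like this might learn in simulation.

```python
from math import prod

# Hypothetical candidate plans: each is a sequence of steps with an
# estimated success probability per step (illustrative numbers only).
plans = {
    "direct_place":       [("pick", 0.95), ("place", 0.70)],
    "handoff_then_place": [("pick", 0.95), ("handoff", 0.90), ("place", 0.92)],
}

def expected_success(steps):
    # A plan succeeds only if every step succeeds (steps assumed independent).
    return prod(p for _, p in steps)

# The extra handoff adds a step, but its higher per-step reliability
# can still make the longer plan the better bet overall.
best = max(plans, key=lambda name: expected_success(plans[name]))
```

With these numbers the direct plan succeeds about 67% of the time, while regrasping via a handoff succeeds about 79%, so the planner chooses the handoff, mirroring how the system trades extra steps for robustness.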