Using a dual-arm robot equipped with visuotactile sensing, SimPLE employs task-aware grasping, perception by sight and touch (visuotactile perception), and regrasp planning. (Image: John Freidah/MIT Department of Mechanical Engineering)

Pick-and-place machines are a type of automated equipment used to place objects into structured, organized locations. These machines are used for a variety of applications, from electronics assembly to packaging, bin picking, and even inspection, but many current pick-and-place solutions are limited: they lack "precise generalization," the ability to solve many tasks without compromising on accuracy.

"In industry, you often see that [manufacturers] end up with very tailored solutions to the particular problem that they have, so a lot of engineering and not so much flexibility in terms of the solution," says Maria Bauza Villalonga Ph.D. ’22, a senior research scientist at Google DeepMind, where she works on robotics and robotic manipulation. "SimPLE solves this problem and provides a solution to pick-and-place that is flexible and still provides the needed precision."

SimPLE, an approach to object manipulation developed by Department of Mechanical Engineering researchers, aims to “reduce the burden of introducing new objects to make it so that robots can interact still precisely but more flexibly,” according to doctoral student Antonia Delores Bronars SM ’22. (Image: John Freidah/MIT Department of Mechanical Engineering)

A new paper by MechE researchers, published in the journal Science Robotics, explores pick-and-place solutions with greater precision. In precise pick-and-place, also known as kitting, the robot transforms an unstructured arrangement of objects into an organized arrangement. The approach, dubbed SimPLE (Simulation to Pick Localize and placE), learns to pick, regrasp, and place objects using the object’s computer-aided design (CAD) model, all without any prior experience with the specific objects.
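To make the idea concrete, here is a minimal, hypothetical sketch of such a kitting loop; the function names and the GraspCandidate fields are illustrative assumptions, not SimPLE's actual code or API. The key point it mirrors is that everything the robot needs can be prepared in simulation from the CAD model alone, so no real-world experience with the object is required before deployment.

```python
# A hedged, high-level sketch of a kitting loop in the spirit described above.
# All names here (plan_grasps_from_cad, GraspCandidate, kit_object) are
# illustrative stand-ins, not SimPLE's actual interface.
from dataclasses import dataclass


@dataclass
class GraspCandidate:
    score: float          # how stable/informative the grasp is expected to be
    needs_regrasp: bool   # whether a handover is needed to reach the place pose


def plan_grasps_from_cad(cad_model: str) -> list[GraspCandidate]:
    # In the approach described, candidate grasps and expected observations
    # are precomputed in simulation from the CAD model; here we return stubs.
    return [GraspCandidate(score=0.9, needs_regrasp=True),
            GraspCandidate(score=0.7, needs_regrasp=False)]


def kit_object(cad_model: str) -> None:
    best = max(plan_grasps_from_cad(cad_model), key=lambda g: g.score)
    print(f"pick the object with grasp score {best.score}")
    if best.needs_regrasp:
        print("hand the object between arms to reach the placement pose")
    print("estimate the object pose from sight and touch, then place it")


kit_object("example_part.step")  # hypothetical CAD file name
```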

"The promise of SimPLE is that we can solve many different tasks with the same hardware and software using simulation to learn models that adapt to each specific task," said Alberto Rodriguez, an MIT visiting scientist who is a former member of the MechE faculty and now associate director of manipulation research at Boston Dynamics. SimPLE was developed by members of the Manipulation and Mechanisms Lab at MIT (MCube) under Rodriguez's direction.

"In this work we show that it is possible to achieve the levels of positional accuracy that are required for many industrial pick-and-place tasks without any other specialization," Rodriguez said.

Using a dual-arm robot equipped with visuotactile sensing, the SimPLE solution employs three main components: task-aware grasping, perception by sight and touch (visuotactile perception), and regrasp planning. Real visual and tactile observations are matched against a set of simulated observations, using models trained with supervised learning, so that a distribution of likely object poses can be estimated and the placement completed accurately.
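As a rough illustration of that matching step, the sketch below (with hypothetical names, and random vectors standing in for real renderings and tactile imprints) scores a real observation against a bank of simulated observations and converts the scores into a distribution over candidate poses. This is an assumed simplification for clarity, not the authors' implementation.

```python
# Minimal sketch of pose estimation by matching a real observation against
# simulated observations. Feature vectors are random placeholders for what
# would, in practice, be learned embeddings of visual/tactile observations.
import numpy as np

rng = np.random.default_rng(0)

# One candidate pose per simulated observation (here, a coarse yaw sweep).
candidate_poses = [{"yaw_deg": yaw} for yaw in range(0, 360, 15)]
sim_features = rng.normal(size=(len(candidate_poses), 64))


def estimate_pose_distribution(real_feature, sim_features, temperature=1.0):
    """Score each simulated observation against the real one and return a
    softmax distribution over candidate poses (closer match -> higher mass)."""
    dists = np.linalg.norm(sim_features - real_feature, axis=1)
    logits = -dists / temperature
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()


# A stand-in "real" observation: a perturbed copy of one simulated feature.
real_feature = sim_features[7] + 0.1 * rng.normal(size=64)
probs = estimate_pose_distribution(real_feature, sim_features)
best = int(np.argmax(probs))
print(f"most likely pose: {candidate_poses[best]} (p={probs[best]:.2f})")
```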

In experiments, SimPLE demonstrated the ability to pick and place diverse objects spanning a wide range of shapes, achieving successful placements more than 90 percent of the time for six of the objects and more than 80 percent of the time for 11 objects.

"There’s an intuitive understanding in the robotics community that vision and touch are both useful, but [until now] there haven’t been many systematic demonstrations of how they can be useful for complex robotics tasks," said mechanical engineering doctoral student Antonia Delores Bronars. Bronars, who is now working with Pulkit Agrawal, assistant professor in the Department of Electrical Engineering and Computer Science (EECS), is continuing her Ph.D. work investigating the incorporation of tactile capabilities into robotic systems.

For more information, contact Anne Wilson at 617-715-2882.