Researchers at Columbia Engineering have demonstrated a highly dexterous robot hand that combines an advanced sense of touch with motor learning algorithms.
To demonstrate its skill, the team chose a difficult manipulation task: executing an arbitrarily large rotation of an unevenly shaped grasped object in hand while always maintaining the object in a stable, secure hold. Not only was the hand able to perform this task, it did so without any visual feedback, based solely on touch sensing. Because it requires no external cameras, the hand is also immune to lighting, occlusion, and similar issues.
“While our demonstration was on a proof-of-concept task, meant to illustrate the capabilities of the hand, we believe that this level of dexterity will open up entirely new applications for robotic manipulation in the real world,” said Associate Professor Matei Ciocarlie. “Some of the more immediate uses might be in logistics and material handling, helping ease up supply chain problems like the ones that have plagued our economy in recent years, and in advanced manufacturing and assembly in factories.”
The researchers designed and built a robot hand with five fingers and 15 independently actuated joints — each finger was equipped with the team’s touch-sensing technology. The next step was to test the ability of the tactile hand to perform complex manipulation tasks. To do this, they used a method called deep reinforcement learning, augmented with new algorithms that they developed for effective exploration of possible motor strategies.
The input to the motor learning algorithms consisted exclusively of the team’s tactile and proprioceptive data, without any vision. Using simulation as a training ground, the robot completed approximately one year of practice in only hours, thanks to modern physics simulators and highly parallel processors. The researchers then transferred this manipulation skill trained in simulation to the real robot hand, which was able to achieve the anticipated level of dexterity.
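The training setup described above can be illustrated with a toy sketch. The study's actual method (deep reinforcement learning with novel exploration algorithms, trained in highly parallel physics simulators) is far more sophisticated; this minimal REINFORCE loop in plain NumPy only conveys the key constraint that the policy observes tactile and proprioceptive signals, never camera pixels. All names, the toy environment, and the reward shaping are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(theta, action):
    """Toy stand-in for in-hand rotation: action 1 rotates the grasped
    object +0.1 rad, action 0 rotates it -0.1 rad. Reward is signed
    rotation progress, echoing the demo task of arbitrarily large rotation."""
    dtheta = 0.1 if action == 1 else -0.1
    return theta + dtheta, dtheta

def observe(theta):
    """Stand-in for proprioception (joint state) plus a tactile contact
    signal. Note there is no visual input anywhere in the observation."""
    proprio = np.array([np.sin(theta), np.cos(theta)])
    tactile = np.array([1.0])  # pretend constant contact reading
    return np.concatenate([proprio, tactile])

W = np.zeros((2, 3))  # linear softmax policy: 2 actions x 3 obs dims

def policy(obs):
    logits = W @ obs
    p = np.exp(logits - logits.max())
    return p / p.sum()

alpha = 0.5  # learning rate
for episode in range(200):
    theta, grads, rewards = 0.0, [], []
    for t in range(20):
        obs = observe(theta)
        p = policy(obs)
        a = rng.choice(2, p=p)
        theta, r = step(theta, a)
        # grad of log pi(a|obs) w.r.t. W for a linear softmax policy
        grads.append(np.outer(np.eye(2)[a] - p, obs))
        rewards.append(r)
    G = sum(rewards)  # episode return
    for g in grads:
        W += alpha * G * g  # REINFORCE: scale log-prob gradient by return
```

After training, the policy should strongly prefer the forward-rotation action from any observed state, having learned this purely from the simulated tactile and proprioceptive feedback.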
“The directional goal for the field remains assistive robotics in the home, the ultimate proving ground for real dexterity. In this study, we’ve shown that robot hands can also be highly dexterous based on touch sensing alone. Once we also add visual feedback into the mix along with touch, we hope to be able to achieve even more dexterity, and one day start approaching the replication of the human hand,” said Ciocarlie.
Ciocarlie observed that a physical robot that is useful in the real world needs both abstract, semantic intelligence and embodied intelligence. Large language models such as OpenAI's GPT-4 or Google's PaLM aim to provide the former, while the dexterity in manipulation achieved in this study represents a complementary advance in the latter.
For more information, contact Holly Evarts at