HERMES: Two-Legged Robot with Human Reflexes
Mechanical engineers from MIT have designed an interface that takes advantage of a human's split-second reflexes, allowing a humanoid to maintain its balance and complete tasks. A two-legged robot named HERMES, outfitted with load sensors, can punch through drywall, smash soda cans, and karate-chop boards in half, but its actions are not its own. A few feet away, MIT Ph.D. student Joao Ramos stands on a platform, wearing an exoskeleton of wires and motors. As Ramos mimes punching through a wall, the robot does the same. When the robot's fist hits the wall, Ramos feels a jolt at his waist. By reflex, he leans back against the jolt, causing the robot to rock back, effectively balancing the robot against the force of its punch. The exercises are meant to demonstrate the robot's unique balance-feedback interface.
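The balance-feedback loop described above can be summarized in a short sketch: the robot's load sensors detect how far its balance has been disturbed, that disturbance is rendered as a force at the operator's waist, and the operator's reflexive lean is sent back to the robot as a posture command. The code below is a minimal illustration of that idea, not the actual HERMES controller; the class names, gains, and numbers are assumptions made for the sake of the example.

```python
# Minimal sketch of the balance-feedback idea described in the article.
# All class names, gains, and rates here are illustrative assumptions,
# not details of the actual HERMES controller.

from dataclasses import dataclass

@dataclass
class RobotState:
    cop_offset_m: float   # forward/backward shift of the robot's center of pressure (meters)

@dataclass
class OperatorState:
    lean_m: float         # how far the operator has leaned on the force plate (meters)

WAIST_FORCE_GAIN = 400.0  # N of waist force per meter of CoP offset (assumed)
POSTURE_GAIN = 1.0        # robot lean commanded per meter of operator lean (assumed)

def waist_feedback_force(robot: RobotState) -> float:
    """Map the robot's balance disturbance to a force applied at the operator's waist.

    When the robot's fist hits a wall, its center of pressure shifts forward;
    the exoskeleton pushes on the operator's waist so they feel the same jolt.
    """
    return WAIST_FORCE_GAIN * robot.cop_offset_m

def posture_command(operator: OperatorState) -> float:
    """Map the operator's reflexive lean back into a lean command for the robot.

    The operator leaning back against the jolt rocks the robot back,
    counteracting the reaction force of its punch.
    """
    return POSTURE_GAIN * operator.lean_m

# One cycle of the loop: disturbance -> waist force -> reflexive lean -> robot command.
if __name__ == "__main__":
    robot = RobotState(cop_offset_m=0.05)     # punch pushes the CoP 5 cm forward
    force = waist_feedback_force(robot)       # operator feels a 20 N push at the waist
    operator = OperatorState(lean_m=-0.03)    # operator reflexively leans back 3 cm
    print(f"waist force: {force:.0f} N, "
          f"commanded robot lean: {posture_command(operator):+.2f} m")
```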
Transcript
00:00:05 HERMES is a humanoid platform that we've been trying to develop in order to deploy in disaster scenarios. You want to be able to deploy a human, but once it's too dangerous to send a human in, we want to be able to deploy something that can do the work a human would be able to do.
00:00:21 So the way I like to think about this project is that we're trying to put the human's brain inside the robot. We want to take advantage of what humans can do. Humans can learn and adapt in order to face new situations and challenges that we may not predict. For humanoid robots, or legged robots in general,
00:00:38 keeping balance is critical to being able to carry out any task. We've decided to tackle this head-on by feeding the balance sensations of the robot back to the human as forces on his waist. That way we can take advantage of the natural reflexes and the learning capability of the human to be
00:00:58 able to keep the robot balanced. We try to give the human as much freedom as possible. The suit is a full-body suit, so the human can move their arms and both legs, and the idea is that the robot is going to follow in exactly the same way. The human also has handle controllers with which you can push a couple
00:01:17 of buttons, and those buttons are responsible for controlling the hands of the robot. For grasping or releasing, we control the force with which the robot grasps an object, whether very firmly, loosely, or even letting it go. We also have a camera in the robot's head, where your head would be, and the vision that
00:01:38 the robot sees is fed back to the operator through vision goggles. When the human wants to do more delicate tasks, things that really require vision and precise positioning, he can use the goggles and do more precise manipulation with his hands. Currently, all the actions taken by the robot
00:02:00 are commanded by the human, but we know that may not be the ultimate solution to the problem, so we want to implement some intelligence in the robot. The human is still going to provide the creativity, the problem-solving, and the large-scale coordination of all the joints. But we've designed the
00:02:19 robot to be stronger than a person, so we imagine that in the future we'll want to merge some level of autonomous control with the human's intelligence.