In the 1980s, the humanoid robot race was picking up.
Japan’s Waseda University created the WABOT-2, which could read a musical score and play it back on a keyboard. Hitachi built its WHL-11, a biped robot that could walk on a flat surface at 13 seconds per step.
And in 1988, in Richland, Washington, a team at Pacific Northwest National Laboratory (PNNL) made a mannequin robot named, well, "Manny."
The full-scale anthropomorphic robot had 42 articulated joints, each a controllable degree of freedom. And Manny’s job? To test the shielding clothing used in hazardous environments.
Sponsored by the U.S. Army’s Dugway Proving Ground in Dugway, Utah, the project used the mannequin to simulate the motions a firefighter or other typical protective-gear wearer would go through, including crawling, turning the head, and stepping forward.
A four-inch-square support shaft came out of Manny’s back; the support arm helped the mannequin simulate motions like walking, bending, and squatting.
The advanced model resembled the human body in many ways, including in its size. Manny had a flexible plastic skin and an artificial respiratory system. By expanding and contracting the chest and injecting moist air at the nose and mouth, the robot could inhale and exhale.
Manny could even sweat. Perspiration was simulated by using narrow tubes to inject water at places on the skin surface.
Gordon Anderson, one of the original engineers on the “Manny” project, oversaw all of the computers and electronics required to control the different joints in the robot’s body. Anderson spoke with Tech Briefs about his time making Manny.
Tech Briefs: What was the inspiration behind the Manny project?
Gordon Anderson: The goal was to test protective clothing that you would use in some kind of a hazardous environment.
You may have seen a protective clothing test before where they dress mannequins in fire suits and shoot them with a flamethrower. The trouble with those kinds of tests, though, is that the mannequins are static, so they don’t do a good job of really testing where these suits might fail.
The idea with Manny is you can dress the mannequin in protective clothing and put sensors on his body. Then, you can expose the sensors to chemicals and fire, and find out where the suit fails. And you can have the mannequin move in a lifelike way.
Tech Briefs: What did Manny look like?
Gordon Anderson: All of the sensor leads and hydraulic cylinders that operated the mannequin ran out the back to an electronics array that was used to run everything, including control valves and computers. There were roughly 40 articulated joints, and each of those joints had a computer that was responsible for just controlling its position. These 40 computers talked to an intermediate “traffic cop” computer that then sent the information to a higher-level computer that took care of all the controls to make it do various types of motions like walking.
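The three-tier control architecture Anderson describes — per-joint computers, an intermediate "traffic cop" router, and a high-level motion supervisor — can be sketched in modern terms. The class and joint names below are hypothetical illustrations, not Manny's actual software:

```python
# Hypothetical sketch of Manny's hierarchy: one controller per joint,
# a "traffic cop" that fans commands out, and a supervisor on top.

class JointController:
    """Low level: holds and servos toward a target position for one joint."""
    def __init__(self, name):
        self.name = name
        self.position = 0.0
        self.target = 0.0

    def command(self, target):
        self.target = target

    def step(self):
        # Close half the remaining error each tick (a toy servo loop).
        self.position += 0.5 * (self.target - self.position)

class TrafficCop:
    """Intermediate level: routes supervisor commands to joint computers."""
    def __init__(self, controllers):
        self.controllers = {c.name: c for c in controllers}

    def route(self, commands):
        for joint, target in commands.items():
            self.controllers[joint].command(target)

class Supervisor:
    """High level: issues per-joint targets for a whole-body motion."""
    def __init__(self, cop):
        self.cop = cop

    def perform(self, pose):
        self.cop.route(pose)

# Roughly 40 joints, as in the original system.
joints = [JointController(f"joint_{i}") for i in range(40)]
sup = Supervisor(TrafficCop(joints))

# Command a pose and let the per-joint servo loops settle.
sup.perform({"joint_0": 1.0, "joint_1": -0.5})
for _ in range(20):
    for j in joints:
        j.step()
```

The point of the middle tier is isolation: the supervisor never addresses a joint computer directly, which is what made 40 independent controllers manageable with 1980s hardware.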
Tech Briefs: What actions were most important to simulate?
Gordon Anderson: A firefighter may be crawling into a building, so we had to be able to simulate crawling-type motions. You may have to lift things up or raise things above your head. We tried to simulate as many motions as a person would go through wearing protective clothing.
Tech Briefs: How did you feel as you were developing Manny in 1988?
Gordon Anderson: The project gave you huge respect for the human body — the range of motion, the complexity of the joints. Things get challenging from a mechanical-engineering perspective because your body is an amazing device in terms of all the joints.
Tech Briefs: What were some of your biggest design challenges?
Gordon Anderson: For all of our control electronics, probably our biggest challenge was establishing reliable communication between all the computers that needed to send information back and forth. The electronics were larger in volume than the robot itself was.
You’d go through a process we call debugging. You’d write test programs and let them run all night. In our case, we’d exercise the communications in kind of a stress test. We’d be sending a lot more data back and forth than we were going to need when we’d actually run the robot.
Let’s say we wanted the mannequin to walk. We would have to develop all of the motions for all the different limbs, then write a scripting language that we could send to the mannequin’s computers. We had 40 computers, one on each joint, and they all had to operate in a synchronized way to control all these joints.
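The keyframe-style "scripting language" Anderson describes can be illustrated with a short sketch: each keyframe names a target for every joint, and the frames are interpolated into lockstep ticks so all joint computers move in synchrony. The joint names and frame values are made up for illustration:

```python
# Hypothetical sketch of a motion script: keyframes of per-joint targets,
# expanded into synchronized per-tick commands by linear interpolation.

def interpolate(script, steps_per_frame):
    """Expand keyframes into one dict of joint targets per tick."""
    ticks = []
    for a, b in zip(script, script[1:]):
        for s in range(steps_per_frame):
            t = s / steps_per_frame
            ticks.append({j: a[j] + t * (b[j] - a[j]) for j in a})
    ticks.append(dict(script[-1]))  # land exactly on the final keyframe
    return ticks

# Two keyframes of a (made-up) step: hip and knee angles in degrees.
walk = [
    {"hip": 0.0, "knee": 0.0},
    {"hip": 30.0, "knee": 45.0},
]

for tick in interpolate(walk, 3):
    # In the real system, each target would be sent to that joint's
    # computer via the traffic cop; here we just print the targets.
    print(tick)
```

Because every tick carries targets for every joint, no joint can run ahead of the others, which is the synchronization property the 40 joint computers needed.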
Tech Briefs: Any good stories?
Gordon Anderson: One day in the lab, when we were just goofing around, we had Manny hold a Nerf ball in his hand, drop it, and kick it into a trash can that was a few feet away. It took us a few tries to get it, but it was part of the fun of having everything working. We could do stunts like that.
Tech Briefs: How does Manny compare to today’s humanoid robots?
Gordon Anderson: By today’s standards, it’s pretty crude. There are so many advancements that have been made in high-speed actuators, and the electronics and computer advancements are just mind blowing. You have much more computational power. The electronics we had on a card that was probably 4x6 inches could all be put in the actual joints today quite easily.
Our objective then wasn’t to try to make it walk. If [the U.S. Army’s Dugway Proving Ground] would’ve come to me in the 1980s and said we had to make this thing walk, I would’ve said, “Good luck with that.” But today they’re doing that.