Kim Hambuchen is currently building user interfaces for Valkyrie, a six-foot-two, 286-pound humanoid robot. The two-legged Valkyrie builds on NASA’s Robonaut, a robotic assistant onboard the International Space Station.

Kimberly Hambuchen, Deputy Project Manager, Human Robotics System Project, Johnson Space Center, Houston, TX

NASA Tech Briefs: What kinds of tasks can we expect Valkyrie to perform?

Kim Hambuchen: Valkyrie is a robot that we designed and developed for the DARPA Robotics Challenge, a 16-month venture. Right now we’re looking at sending a robot like Valkyrie ahead of a mission, to Mars or to the lunar surface, and having it actually set up habitats and bases before the crew arrives. We are also envisioning a robot like Valkyrie as an actual astronaut assistant. We know that our crews on future missions will be quite small. If we can have a robot that interacts with the humans and can use the tools that humans use, then we can possibly double the crew size.

NTB: What are Valkyrie’s technical capabilities?

Hambuchen: The biggest technical capability is the two legs. Robonaut 2, which is on the space station, has two legs, but they’re climbing legs used to move around on the handrails and seat tracks. We’re currently partnered with the Florida Institute for Human and Machine Cognition to put their very advanced walking software on Valkyrie. They’re doing so, and she’s walking quite beautifully right now. In the robotics community, there is an open source project called the Robot Operating System, or ROS. It allows everyone to have their robots communicate the same way. People can share sensory-processing algorithms, control algorithms, and different kinds of applications that they create with their robots. They can share that with everyone else who has a robot that speaks this ROS language. Robonaut has been converted to speak ROS, and Valkyrie speaks ROS.
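
"Speaking ROS" refers to ROS's publish/subscribe messaging, which lets software written by different groups exchange data in a common way. As a rough illustration only, the sketch below uses the rospy Python client library to publish messages on a made-up topic; the node and topic names are invented for this example and are not any of Valkyrie's actual code.

```python
# Minimal ROS publisher sketch (rospy). Topic name and message contents
# are illustrative, not Valkyrie's actual interfaces.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('status', String, queue_size=10)
    rospy.init_node('example_node', anonymous=True)
    rate = rospy.Rate(1)  # publish once per second
    while not rospy.is_shutdown():
        # Any other ROS node subscribed to 'status' receives this message,
        # regardless of who wrote it or what robot it runs on.
        pub.publish(String(data='status: nominal'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```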

NTB: What needs to be improved before we see fully autonomous robots?

Hambuchen: So much. The perception problem isn’t solved. We’ve come a lot further in a very short time, but trying to get a robot to understand an unknown environment is still very difficult. We currently use humans in the loop to help the robot understand things about its environment and how to direct its behavior. All of the interfaces that I’ve been working on directly involve the human to make the decisions, and then the human sends these high-level commands to the robot. For instance, with the DARPA Robotics Challenge, we had to turn three different valves. For the robot to do that fully autonomously, it has to understand what a valve is, it has to accurately locate the valve, and then understand how to turn the valve and recognize when the valve is shut. Most of that is very difficult to do. We told the robot where the valve is, how to turn the valve, and then gave it a high-level command to do what we instructed it to do. It’s not really fully autonomous, but we’re hoping that we can create tools to allow the robots to learn from that human assistance.
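
As a rough sketch of this supervised-autonomy pattern, the hypothetical Python example below has the human operator supply the valve's location and turning axis and then issue a single high-level command for the robot to execute. All class, field, and method names here are invented for illustration and are not NASA's actual interfaces.

```python
# Hypothetical sketch of supervised autonomy: the human does the perception
# and decision-making, then sends one high-level command for the robot to
# execute. Names and fields are illustrative, not NASA's interfaces.
from dataclasses import dataclass

@dataclass
class TurnValveCommand:
    valve_position: tuple   # (x, y, z) valve location, identified by the operator
    rotation_axis: tuple    # axis about which to turn, chosen by the operator
    turn_radians: float     # how far to turn the valve

class Robot:
    def execute(self, cmd: TurnValveCommand) -> None:
        # The robot handles only the low-level motion; the hard perception
        # and planning decisions were already made by the human operator.
        print(f"Turning valve at {cmd.valve_position} "
              f"about {cmd.rotation_axis} by {cmd.turn_radians:.2f} rad")

if __name__ == "__main__":
    # The operator locates the valve and specifies how to turn it...
    cmd = TurnValveCommand(valve_position=(1.2, 0.3, 0.9),
                           rotation_axis=(0.0, 0.0, 1.0),
                           turn_radians=6.28)
    # ...then sends the single high-level command for the robot to carry out.
    Robot().execute(cmd)
```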

To listen to this interview as a podcast: