RoboSimian, a limbed robot developed at NASA’s Jet Propulsion Laboratory (JPL), is designed to operate in environments too dangerous or difficult for human intervention, such as disaster areas and oil leak sites. After years of engineering in the lab, JPL’s researchers are preparing the tele-operated RoboSimian platform for use in new, complex environments, from deep waters to outer space.

What Makes RoboSimian Move

RoboSimian exits a vehicle as part of the June 2015 DARPA Robotics Challenge. (Image Credit: JPL-Caltech)

Though not meant for extremely dexterous tasks, the multi-jointed robot can grab objects, including human tools. RoboSimian achieves passively stable stances, connects to supports like ladders and railings, and braces itself during forceful handling operations. Four sensors, one at each limb’s combined wrist and ankle, allow the robot to “feel” the terrain as it walks.
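
As a toy illustration of that terrain sensing, the sketch below thresholds each limb’s force reading to decide whether it has made contact, then requires three supporting limbs before calling a stance supported. The threshold, function names, and three-limb rule are assumptions made for illustration, not JPL’s actual control logic.

```python
CONTACT_FORCE_N = 20.0  # invented threshold, in newtons

def limb_in_contact(force_z_n: float) -> bool:
    """Treat a sufficiently large normal force at a limb's end sensor as ground contact."""
    return force_z_n > CONTACT_FORCE_N

def stance_is_supported(limb_forces_n: list[float]) -> bool:
    """Crude stability check: require at least three of the four limbs in contact."""
    return sum(limb_in_contact(f) for f in limb_forces_n) >= 3
```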

Cameras and LIDAR provide a 3D map, which is sent back to the operator; the operator then decides RoboSimian’s direction. As currently outfitted, the robot carries seven stereo camera pairs, which offer depth perception for mobility and manipulation. The stereo pairs estimate the 3D geometry around RoboSimian for labeling objects and provide situational awareness for the operators.
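
As a rough picture of how a stereo pair turns two images into depth, the sketch below uses OpenCV’s block matcher. The focal length, baseline, and file names are placeholders for illustration, not RoboSimian’s actual calibration or vision pipeline.

```python
import cv2
import numpy as np

# Placeholder calibration for one stereo pair (not RoboSimian's actual values).
FOCAL_LENGTH_PX = 700.0  # focal length, in pixels
BASELINE_M = 0.07        # distance between the two cameras, in meters

# Load a rectified left/right image pair (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each left-image pixel, the horizontal shift
# (disparity) of the best-matching patch in the right image.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# Depth follows from similar triangles: Z = f * B / d.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]
```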

To address the limitations of stereo vision, such as shadows cast in images, an actuated 2D time-of-flight laser scanner has been integrated into the vision system. According to the JPL team, the laser scanner estimates depth robustly across a wider range of lighting conditions and object properties. By combining the laser data with imagery from a wide-angle color camera, object models can be fit more accurately and more autonomously for manipulation, with less operator intervention.
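
The fusion step can be pictured as projecting each 3D laser return into the color image to pick up a pixel color. Below is a minimal sketch under a simple pinhole camera model; the intrinsics K and the laser-to-camera transform (R, t) are invented for illustration.

```python
import numpy as np

# Hypothetical pinhole intrinsics for the wide-angle color camera.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical rigid transform from the laser frame to the camera frame.
R = np.eye(3)                    # rotation
t = np.array([0.05, 0.0, 0.10])  # translation, in meters

def color_laser_points(points_laser, image):
    """Attach an image color to each 3D laser return that lands in the frame."""
    h, w = image.shape[:2]
    pts_cam = points_laser @ R.T + t        # laser frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]  # keep points in front of the lens
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return pts_cam[inside], image[v[inside], u[inside]]
```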

“[The RoboSimian] has a ‘super-eye’ view,” said Chuck Bergh, RoboSimian Integration Lead at the Jet Propulsion Laboratory. “It can figure out where it is in a room by looking at the walls.”

The essential real-time data that human operators receive from RoboSimian includes the robot state and position, the angles of all the limbs’ joints, and error and status messages from the modules running on the robot. Additionally, the operators can request on-demand images from any of the system’s cameras and a 3D map of the local environment around the robot, estimated by each stereo pair.
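
A telemetry sample of this kind might be modeled as a simple record. The sketch below is a hypothetical schema, with field names chosen for illustration rather than taken from JPL’s actual messages.

```python
from dataclasses import dataclass, field

@dataclass
class RobotStateSample:
    """One telemetry sample of the kind an operator station might receive.

    Field names are illustrative; this is not JPL's actual message schema.
    """
    timestamp: float                                 # seconds since epoch
    position: tuple[float, float, float]             # estimated x, y, z in the world frame, meters
    orientation: tuple[float, float, float, float]   # unit quaternion (x, y, z, w)
    joint_angles: list[float]                        # one entry per joint, in radians
    status_messages: list[str] = field(default_factory=list)  # module errors/status
```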

The researchers use the data to construct a video-game-like interface in the operator control unit (OCU), which gives the operators a bird’s-eye view of the robot in its environment and the ability to preview motion plans.

RoboSimian’s Role

The RoboSimian (Image Credit: JPL-Caltech)

RoboSimian competed in the June 2015 DARPA Robotics Challenge, a contest consisting of several disaster-related tasks for robots to perform: driving and exiting a vehicle, opening a door, cutting a hole in a wall, opening a valve, crossing a field of debris, and climbing stairs. Such capabilities are especially valuable for response tasks too dangerous for human intervention, such as those posed by the 2011 Fukushima nuclear disaster in Okuma, Japan.

“If you think of a Fukushima event, the people that were directly involved with that said ‘If we could’ve just gone in and twisted this valve and flipped this switch, then maybe a lot of that damage wouldn’t have happened.’ We could’ve stemmed the disaster a lot earlier,” said Bergh.

A similar narrative applies to the 2010 Deepwater Horizon oil spill in the Gulf of Mexico, Bergh said, adding that a robotic platform could potentially have gone deep below the surface, spotted the problem, and turned a specific valve, preventing the leak of millions of barrels of oil.

“In just about every domain where we operate, we’re working on that paradigm – just putting eyes on the target and doing simple manipulation in real time, in a decoupled fashion, so that we can send commands to the robot, and the robot executes them without having a human in the loop,” Bergh said.

New Applications

The RoboSimian researchers at JPL are currently expanding the platform’s manipulation capabilities to include bimanual motions: actions that require two hands working together, where each hand takes a different role while being synchronized in time and space. Such functions are particularly helpful for jobs such as clearing rubble or assembling a construction truss.
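
One way to picture the synchronization constraint is to resample each hand’s trajectory onto a shared clock so that corresponding waypoints are reached at the same instants. The sketch below does exactly that with linear interpolation; it is a toy stand-in for a real bimanual planner, and all names and shapes are assumptions.

```python
import numpy as np

def synchronize_trajectories(traj_a, traj_b, duration_s, rate_hz=100.0):
    """Resample two limbs' joint-space trajectories onto a shared clock.

    traj_a, traj_b: arrays of shape (n_waypoints, n_joints), one per limb.
    Returns per-tick joint targets so both limbs pass through corresponding
    waypoints at the same instants (a toy stand-in for real bimanual planning).
    """
    ticks = np.linspace(0.0, 1.0, int(duration_s * rate_hz))
    resampled = []
    for traj in (traj_a, traj_b):
        knots = np.linspace(0.0, 1.0, len(traj))
        # Interpolate each joint channel independently against the shared clock.
        channels = [np.interp(ticks, knots, traj[:, j]) for j in range(traj.shape[1])]
        resampled.append(np.stack(channels, axis=1))
    return resampled[0], resampled[1]
```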

NASA is also developing waterproof versions of RoboSimian’s actuators, which would allow underwater and deepwater sampling for applications like an oil spill, where a remotely operated vehicle could go deep below the surface to perform simple manipulation tasks.

The researchers additionally plan to put RoboSimian to work in space, for possible use in exploration or on-orbit space assembly.

“Think of putting together very large space structures, or even maintaining those large space structures. RoboSimian would be well-suited for that because it could crawl around on the trusses and help put together and maintain them,” said Bergh.

The robotic platform could also be used on missions to Mars or to asteroids – one of many exciting possibilities, according to Bergh.

“We’re making robots that are truly useful tools to put out into the world,” he said.

This article was written by Billy Hurley, Associate Editor, NASA Tech Briefs magazine.