A software architecture for semi-autonomous mobile manipulation by highly dexterous robots under degraded communications was developed to enable remote operation of a mobile manipulation robot as a first responder in a disaster-response scenario. The architecture is adaptable at the lowest level and repeatable at the highest level. It balances autonomy against supervision, letting the robot exploit its strengths (repeatability, strength, precision) while operators exploit theirs (situational awareness, context, high-level reasoning).

As part of the DARPA Robotics Challenge Finals, the RoboSimian robot, developed at JPL, used this software architecture to drive and egress from a vehicle, open doors, turn valves, use power tools to cut holes in walls, and clear debris, all despite severe communication blackouts and delays. Low-level adaptability was achieved by coupling tactile measurements from force/torque sensors in the wrist with whole-body motion primitives; the term “behaviors” describes this low-level adaptability.

Each behavior is a contact-triggered state machine that autonomously executes short-duration manipulation and mobility tasks. At a higher level, a teach-and-repeat style of operation stores executed behaviors and navigation poses in an object/task frame for later recall. This enabled tasks to be performed with high repeatability on competition day while remaining robust to differences between practice runs and execution.
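The idea of a contact-triggered state machine can be illustrated with a minimal sketch in Python. The state names, thresholds, and wrench format below are hypothetical stand-ins, not RoboSimian's actual behavior code; the point is only that transitions are triggered by tactile (force/torque) events rather than timed or scripted steps.

```python
from enum import Enum, auto

class State(Enum):
    """Hypothetical phases of a valve-turn behavior."""
    APPROACH = auto()
    CONTACT = auto()
    TURN = auto()
    DONE = auto()

CONTACT_FORCE_N = 5.0   # assumed threshold for detecting tool-on-valve contact
TORQUE_LIMIT_NM = 8.0   # assumed torque spike indicating the valve is fully turned

def step(state, wrench):
    """Advance one tick of the behavior.

    `wrench` is (force_z, torque_z) as read from the wrist force/torque sensor.
    Each transition is gated by a measured contact event, so the behavior
    adapts to where the valve actually is rather than where it was expected.
    """
    force_z, torque_z = wrench
    if state is State.APPROACH and abs(force_z) >= CONTACT_FORCE_N:
        return State.CONTACT      # tool touched the valve: stop approaching
    if state is State.CONTACT:
        return State.TURN         # engage the turning motion primitive
    if state is State.TURN and abs(torque_z) >= TORQUE_LIMIT_NM:
        return State.DONE         # torque spike: valve seated, behavior complete
    return state                  # otherwise, keep executing the current phase
```

Because each phase ends on a sensed event, the same behavior can be recorded once and replayed later even when the object's exact pose differs from the teaching run.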

Features of the software include:

  1. Parsimonious object fitting: Fitting with annotations on stereo disparity instead of 3D point clouds eliminates the need to send large map messages across the network. This also enables the architecture to work across degraded and bandwidth-limited communications.
  2. Plan mirroring: By mirroring mobility and manipulation planners on both the remote (operator) side and the robot side, the architecture eliminates the need to send large plan messages across degraded communications.
  3. Whole-body motion planning: In the architecture, the robot makes the decisions about how to move its body. This eliminates the need for constant supervision from the operator, who can focus on task-level commands such as “open the door,” “turn the valve,” etc. This approach also reduces the required bandwidth for communications.
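Plan mirroring works because a deterministic planner, given the same inputs and the same random seed, produces the same plan on both ends of the link. The sketch below uses a toy stand-in planner (the function names, message fields, and five-step interpolation are illustrative assumptions, not JPL's planner): only a tiny goal-plus-seed message crosses the degraded link, and a short digest lets each side confirm the mirrored plans agree.

```python
import hashlib
import json
import random

def plan_path(goal, seed):
    """Toy deterministic planner: identical (goal, seed) yields an identical plan."""
    rng = random.Random(seed)          # seeded RNG makes sampling reproducible
    x, y = 0.0, 0.0
    plan = [(x, y)]
    gx, gy = goal
    for _ in range(5):                 # crude interpolation toward the goal
        x += (gx - x) * 0.5 + rng.uniform(-0.01, 0.01)
        y += (gy - y) * 0.5 + rng.uniform(-0.01, 0.01)
        plan.append((x, y))
    return plan

def plan_digest(plan):
    """Short checksum; comparing digests is far cheaper than sending the plan."""
    return hashlib.sha1(json.dumps(plan).encode()).hexdigest()[:8]

# Only this small message crosses the degraded link, not the plan itself.
msg = {"goal": (1.0, 2.0), "seed": 42}

operator_plan = plan_path(msg["goal"], msg["seed"])   # computed on the remote side
robot_plan = plan_path(msg["goal"], msg["seed"])      # recomputed on the robot
assert plan_digest(operator_plan) == plan_digest(robot_plan)
```

The design choice being demonstrated is that bandwidth is spent on intent (goal, seed) and verification (digest) rather than on the plan itself, which both sides can regenerate locally.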

Remote operation of robots under degraded communications is fundamental to space applications. Mobile manipulation robots will in the future perform routine maintenance, assembly, and manipulation tasks with limited input from operators on Earth.

This work was done by Sisir B. Karumanchi, Kyle D. Edelberg, Ian Baldwin, Jeremy Nash, Jason I. Reid, Paul G. Backes, Charles F. Bergh, and Brett A. Kennedy of Caltech; and Brian W. Satzinger for NASA’s Jet Propulsion Laboratory.

In accordance with Public Law 96-517, the contractor has elected to retain title to this invention. Inquiries concerning rights for its commercial use should be addressed to:

Technology Transfer at JPL
JPL
Mail Stop 321-123
4800 Oak Grove Drive
Pasadena, CA 91109-8099

Refer to NPO-49968.