Autonomous robotic manipulators have the potential to increase manufacturing efficiency, provide in-home care, and reduce the risk to humans in hazardous situations. The current challenge in autonomous robotic manipulation is to approach the capabilities of dedicated, one-off manipulators in known environments with versatile, inexpensive, and ubiquitous manipulator systems that can operate in a range of environments with only high-level human input.

A sensor-driven, model-based approach continually exploits environmental interactions to update the system state estimate, which in turn drives planning and control.
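To make the update step concrete, the following minimal Python sketch fuses one noisy visual measurement of an object's position into the current state estimate using a standard Kalman-filter correction. The state layout, noise covariances, and function name are illustrative assumptions, not the implementation described here.

```python
# Minimal sketch (illustrative assumptions, not the authors' code): fuse a
# noisy sensor measurement into the current state estimate with a standard
# Kalman-filter measurement update.
import numpy as np

def fuse_measurement(x, P, z, R, H):
    """x: state estimate, P: its covariance, z: measurement,
    R: measurement noise covariance, H: state-to-measurement map."""
    y = z - H @ x                          # innovation (measurement residual)
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y                      # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P   # reduced uncertainty
    return x_new, P_new

# Example: fuse a 3-D visual position fix into a prior object-position estimate.
x = np.array([0.50, 0.10, 0.75])   # prior object position (m), assumed values
P = np.eye(3) * 0.01               # prior covariance (loose)
z = np.array([0.52, 0.09, 0.76])   # visual measurement of the position
R = np.eye(3) * 0.001              # assumed vision noise covariance (tighter)
H = np.eye(3)                      # vision observes the position directly
x, P = fuse_measurement(x, P, z, R, H)
```

Each new interaction, whether visual, tactile, force, or strain, would call the same update with its own z, R, and H; this is what makes the approach sensor-driven.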

Figure: The ARM robot picking up a screwdriver.
Instead of using the traditional sense-plan-act paradigm for planning and control, this innovation continuously estimates the entire system state and uses each updated estimate to re-plan actions; actions are in turn executed deliberately so that they increase knowledge of the system state. This is done not for one specific task, but across all tasks.
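The self-contained loop below sketches this estimate/re-plan/act cycle under simplifying assumptions (a one-dimensional hand and object, a smoothing filter as the estimator); every name and number is hypothetical. The point it illustrates is structural: the estimate is refreshed and the next motion recomputed on every iteration, rather than executing a plan built once from an initial snapshot of the world.

```python
# Hypothetical 1-D sketch of continuous estimate/re-plan/act, not the
# authors' software: each loop iteration re-estimates the object position
# from a fresh measurement and recomputes the next hand motion from it.
import random

def measure(true_pos, noise=0.005):
    """Stand-in for a sensor: noisy observation of the object position (m)."""
    return true_pos + random.gauss(0.0, noise)

def estimate_replan_act(hand_pos, true_obj_pos, step=0.02, tol=0.01):
    est = measure(true_obj_pos)                        # initial estimate
    while abs(hand_pos - est) > tol:
        est = 0.7 * est + 0.3 * measure(true_obj_pos)  # refresh the estimate
        direction = 1.0 if est > hand_pos else -1.0
        hand_pos += direction * min(step, abs(est - hand_pos))  # re-planned move
    return hand_pos

print(estimate_replan_act(hand_pos=0.0, true_obj_pos=0.30))
```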

In general, the autonomy approach presented here conforms to a standard system decomposition: objects and the environment are first segmented, classified, and localized using vision. Based on the system state and models of the environment, optimal grasp sets, manipulation strategies, and collision-free motion paths can be computed. As environmental interaction occurs, additional sensors such as tactile, force, and strain sensors can be fused with visual sensing to update the system state. Real-time execution of task objectives, using feedback from sensors and state estimation, drives the system actuators. A single strategy is used for all tasks, including drilling, unlocking, opening, actuating, and grasping of various objects. Two diverse examples, picking up a screwdriver and unlocking a door, are described within this general manipulation framework (a code sketch of the phased sequence follows the list):

  1. Non-Contact Perception, in which only visual sensors are used to segment, classify, and localize objects in the scene, and determine their pose relative to environmental constraints (such as a table or wall/door plane).
  2. Approach, where, using the initial system state estimates, optimal arm, neck, and finger trajectories are planned to bring the manipulator near the object in a configuration suitable for manipulation. In the case of the screwdriver on the table, the grasp set will include a caging grasp, where widely spaced fingers and the table prevent object escape. For key insertion, the hand is brought near the door handle.
  3. Initial Contact/Relocation, where the manipulator is in the same field of view as the relevant objects, and estimation of the arm position (arm tracking) allows the controller to command more precise relative hand-to-object motions. For all tasks, a general contact strategy, in which parts of the manipulator are moved into contact with the environment, is used to provide further relative localization.
  4. Grasping or Manipulation, where, once sufficient relative object localization has been achieved, the primary specified task objective (drilling, grasping, compression, actuation, etc.) is executed under feedback control.
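As a structural illustration only, the sketch below arranges the four phases above as an ordered pipeline acting on a shared state estimate. The phase names follow the list; the handlers and the scalar "uncertainty" they shrink are hypothetical placeholders for the real perception, planning, and control modules.

```python
# Hypothetical sketch of the four-phase sequence as an ordered pipeline;
# the handlers stand in for real perception/planning/control modules and
# simply tighten a scalar uncertainty in the shared state estimate.
from enum import Enum, auto

class Phase(Enum):
    PERCEPTION = auto()    # 1. non-contact: segment, classify, localize
    APPROACH = auto()      # 2. plan arm/neck/finger trajectories to the object
    CONTACT = auto()       # 3. arm tracking and contact for relative localization
    MANIPULATION = auto()  # 4. execute the task objective (grasp, turn, drill)

def run_phases(phases, state):
    """Run each phase in order; every phase refines the shared state estimate."""
    for phase, handler in phases:
        state = handler(state)
        print(f"{phase.name}: state -> {state}")
    return state

handlers = [
    (Phase.PERCEPTION,   lambda s: {**s, "uncertainty": s["uncertainty"] * 0.5}),
    (Phase.APPROACH,     lambda s: {**s, "uncertainty": s["uncertainty"] * 0.8}),
    (Phase.CONTACT,      lambda s: {**s, "uncertainty": s["uncertainty"] * 0.3}),
    (Phase.MANIPULATION, lambda s: {**s, "uncertainty": s["uncertainty"] * 0.9}),
]
run_phases(handlers, {"uncertainty": 1.0})
```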

This work was done by Nicolas H. Hudson, Thomas M. Howard, Paul G. Backes, Abhinandan Jain, Max Bajracharya, Jeremy C. Ma, Joel W. Burdick, Paul Hebert, and Thomas F. Allen of Caltech for NASA’s Jet Propulsion Laboratory. NPO-48095