The term “System for Mobility and Access to Rough Terrain” (SMART) denotes a theoretical framework, a control architecture, and an algorithm that implements the framework and architecture, all intended to enable a land-mobile robot to adapt to changing conditions. SMART is intended to enable the robot to recognize adverse terrain conditions beyond its optimal operational envelope and, in response, to intelligently reconfigure itself (e.g., adjust suspension heights or baseline distances between suspension points) or adapt its driving techniques (e.g., engage in a crabbing motion as a switchback technique for ascending steep terrain). Although conceived originally for Mars rovers and similar autonomous or semi-autonomous mobile robots used in exploring remote planets, SMART could also be applied to autonomous terrestrial vehicles used for search, rescue, and/or exploration on rough terrain.

In SMART, controlling the motion of the robot, managing the “health” of the robot, and managing resources are treated as parts of a free-flow behavior hierarchy that autonomously adapts to changing conditions. Tasks that must be performed in the continuing development of SMART to provide for safe, adaptive mobility on highly sloped terrain include:

  • Determination of strategies for adaptive reconfiguration and driving that are nearly optimal with respect to safety and are computationally feasible for on-board implementation,
  • Determination of a representation for uncertainty in sensing and prediction of the state of the robot and its environment, and
  • Determination of resource-management strategies that mitigate such risks as those of the loss of battery power and/or drive motors.

Figure. The Free-Flow Action-Selection Hierarchy includes multiple behaviors at different levels. The numerical values shown at several places are examples of weights assigned to inputs of behavioral modes. In general, such weights are changed as needed to adapt to changing or previously unknown environmental conditions.

SMART is based largely on a prior architecture denoted Biologically Inspired System for Map-based Autonomous Rover Control (BISMARC), which, in turn, is based on a modified free-flow hierarchy. BISMARC has been used successfully in a number of different simulated mission scenarios, in which it has demonstrated capabilities for retrieving objects cached at multiple locations, tolerating faults on missions of long duration, and preparing terrain sites for habitation by humans. BISMARC includes provisions for all aspects of safety, self-maintenance, and achievement of goals, as needed to support a sustained presence on the surface of a remote planet.

BISMARC is organized as a two-level system. From stereoscopic images acquired by cameras aboard the robot, the first level generates hypotheses of motor actions. The second level processes these hypotheses, coupled with external and internal inputs, to generate control signals to drive the actuators on the robot.
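
As a rough illustration of this two-level organization, the following sketch (in Python) shows one way the data flow could be structured. The class and function names, the placeholder hypotheses, and the tilt-risk scoring rule are assumptions made for illustration only; they are not the actual BISMARC interfaces.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class MotorActionHypothesis:
        # A candidate motor action proposed by the first level (assumed structure).
        description: str
        confidence: float  # heuristic score from stereo terrain analysis

    def level_one(stereo_pair) -> List[MotorActionHypothesis]:
        # Level 1 (assumed interface): derive candidate motor actions from a
        # stereoscopic image pair. A real implementation would build a terrain
        # map from the images; placeholder hypotheses are returned here.
        return [
            MotorActionHypothesis("drive forward 0.5 m", 0.8),
            MotorActionHypothesis("crab left 15 deg", 0.6),
        ]

    def level_two(hypotheses: List[MotorActionHypothesis],
                  internal_state: Dict[str, float],
                  external_inputs: Dict[str, float]) -> str:
        # Level 2 (assumed interface): fuse the hypotheses with internal and
        # external inputs to choose the command sent to the actuators.
        def score(h: MotorActionHypothesis) -> float:
            # Penalize candidates when the robot reports a high tilt risk.
            return h.confidence - internal_state.get("tilt_risk", 0.0)
        return max(hypotheses, key=score).description

    # Example use with dummy inputs:
    command = level_two(level_one(stereo_pair=None),
                        internal_state={"tilt_risk": 0.2},
                        external_inputs={})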

The figure illustrates the free-flow action-selection hierarchy of BISMARC and SMART. The rectangular boxes represent behaviors, while the ovals represent sensory inputs (fixed, direct, or derived). At the top are the high-level behaviors, including Don’t Tip Over, Go to Goal, Avoid Obstacles, Preserve Motors, Warm Up, Get Power, and Sleep at Night. The intermediate-level behaviors (Change Center of Gravity, Avoid Obstacles, Rest, and Sleep) are designed to interact with both the short-term memory (which corresponds to perceived sensory stimuli) and the long-term memory (which encodes remembered sensory information). Control loops are prevented by use of temporal penalties, which constrain the system to repeat a given behavior no more than a predetermined number of times. The bottom-level behaviors (Tilt Arm, Change Shoulder Angles, Move, Rest, Stop, Sleep) fuse the sensory inputs and the activations of the higher-level behaviors in order to select actions appropriate for maintaining safety and achieving goals.
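
The weighted-sum fusion and the temporal-penalty mechanism described above can be sketched as follows. The node structure, the weight dictionaries, and the penalty schedule are illustrative assumptions, not the actual BISMARC/SMART data structures.

    class BehaviorNode:
        # One behavior in the free-flow hierarchy (illustrative sketch).
        def __init__(self, name, sensor_weights, parent_weights,
                     max_repeats=3, penalty=0.5):
            self.name = name
            self.sensor_weights = sensor_weights    # {sensor name: weight}
            self.parent_weights = parent_weights    # {higher-level behavior: weight}
            self.max_repeats = max_repeats          # allowed consecutive selections
            self.penalty = penalty                  # temporal penalty per extra repeat
            self.repeat_count = 0

        def activation(self, sensors, parent_activations):
            # Weighted sum of sensory inputs and higher-level activations.
            a = sum(w * sensors.get(s, 0.0)
                    for s, w in self.sensor_weights.items())
            a += sum(w * parent_activations.get(p, 0.0)
                     for p, w in self.parent_weights.items())
            # Temporal penalty: repeating a behavior too often lowers its
            # activation, which prevents control loops.
            if self.repeat_count >= self.max_repeats:
                a -= self.penalty * (self.repeat_count - self.max_repeats + 1)
            return a

    def select_action(nodes, sensors, parent_activations):
        # Bottom-level action selection: run the node with highest activation.
        best = max(nodes, key=lambda n: n.activation(sensors, parent_activations))
        for n in nodes:
            n.repeat_count = n.repeat_count + 1 if n is best else 0
        return best.name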

Inputs to the behavioral nodes are calculated as weighted sums. In BISMARC, the weights are fixed; consequently, BISMARC is not capable of adaptation to changing conditions or to environments outside an original world model. In contrast, SMART includes a learning mechanism that adapts the weights to changing and previously unanticipated conditions: An algorithm, known in the art as the maximize collective happiness (MCH) algorithm, adjusts the weights in such a manner as to maintain the health of the robot while ensuring progress toward the goal.
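
The MCH update rule itself is not given here, so the following sketch only illustrates, under assumed inputs normalized to the range 0 to 1, how a behavior’s input weights might be nudged on line according to a combined measure of robot health and goal progress.

    def adapt_weights(weights, inputs, health, goal_progress, learning_rate=0.05):
        # Illustrative on-line weight adjustment in the spirit of MCH (the actual
        # algorithm is not described here). 'weights' and 'inputs' map input names
        # to values; 'health' and 'goal_progress' are assumed normalized to [0, 1].
        # Treat the average of health and goal progress as the quantity to keep
        # high; reinforce inputs that were active when it exceeded a neutral level
        # of 0.5, and weaken them otherwise.
        happiness = 0.5 * (health + goal_progress)
        return {name: w + learning_rate * (happiness - 0.5) * inputs.get(name, 0.0)
                for name, w in weights.items()}

    # Example: weights drift upward when the robot is healthy and making progress.
    new_weights = adapt_weights({"tilt": 0.7, "goal_bearing": 0.4},
                                {"tilt": 0.2, "goal_bearing": 0.9},
                                health=0.9, goal_progress=0.8)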

This work was done by Terrance Huntsberger of Caltech for NASA’s Jet Propulsion Laboratory.

The software used in this innovation is available for commercial licensing. Please contact Karina Edmonds of the California Institute of Technology at (626) 395-2322. Refer to NPO-40899.