A recent study that evaluated issues associated with remote interaction with an autonomous vehicle, within the framework of grounding, found that missing contextual information led to uncertainty in the interpretation of collected data and thereby introduced errors into the command logic of the vehicle. As the vehicles became more autonomous through the activation of additional capabilities, more errors were made. This is an inefficient use of the platform, since the behavior of the remotely located autonomous vehicles did not coincide with the "mental models" of the human operators.

Figure: An example of the Inference Mechanism in a Rules-of-the-Road behavior, showing two boats approaching each other head-on. The left side shows the sensory inputs needed by the behaviors that compete with each other to control the actuators. The right side shows the behavior network, with four behaviors fed into the Arbitration module to produce the settings for the rudder (heading) and throttle (speed) of the vehicle. The mapping of the behavior network to an equivalent cost-calculus expression is shown at the bottom.
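The arbitration step in the figure can be sketched in a few lines. The code below is a minimal illustration, not the flight software: the behavior names, the weighted-average arbitration scheme, and all numeric values (ranges, weights, the 30-degree starboard offset) are assumptions chosen to show the idea of competing behaviors producing rudder and throttle settings.

```python
from dataclasses import dataclass

@dataclass
class Command:
    heading: float  # desired rudder heading, degrees
    speed: float    # desired throttle setting, knots
    weight: float   # behavior's urgency/priority

def transit(waypoint_bearing):
    """Steer toward the next waypoint at cruise speed."""
    return Command(heading=waypoint_bearing, speed=8.0, weight=1.0)

def avoid_head_on(contact_bearing, contact_range):
    """Head-on rule: alter course to starboard of the contact, slowing down."""
    urgency = max(0.0, 1.0 - contact_range / 2000.0)  # grows as range closes
    return Command(heading=contact_bearing + 30.0, speed=6.0,
                   weight=4.0 * urgency)

def arbitrate(commands):
    """Weighted-average arbitration over competing behavior commands."""
    total = sum(c.weight for c in commands)
    heading = sum(c.heading * c.weight for c in commands) / total
    speed = sum(c.speed * c.weight for c in commands) / total
    return heading, speed

# Head-on encounter: contact dead ahead at 500 m while transiting toward 090.
cmds = [transit(90.0), avoid_head_on(90.0, 500.0)]
heading, speed = arbitrate(cmds)
```

Because the avoidance behavior's weight grows as the contact closes, the arbitrated heading swings further to starboard the nearer the other boat gets, which is exactly the kind of intermediate state an operator cannot see without an explanation capability.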
One of the conclusions of the study was that there should be a way for the autonomous vehicles to describe what action they choose and why. Robotic agents with enough self-awareness to dynamically adjust the information conveyed back to the Operations Center based on a detail level component analysis of requests could provide this description capability. One way to accomplish this is to map the behavior base of the robot into a formal mathematical framework called a cost-calculus. A cost-calculus uses composition operators to build up sequences of behaviors that can then be compared to what is observed using well-known inference mechanisms.
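The composition idea can be made concrete with a toy sketch. The operators and cost values below are illustrative assumptions only; the actual cost-calculus defines a much richer set of composition operators and an optimization semantics. The sketch shows the essential point: behaviors carry costs, sequences accumulate them, and a choice operator selects the cheapest alternative, so an observed maneuver can be compared against the cost of each candidate expression.

```python
# Toy sketch of cost-calculus-style composition (illustrative only).

def atom(name, c):
    """Atomic behavior with an associated execution cost."""
    return ("atom", name, c)

def seq(*exprs):
    """Sequential composition: costs accumulate."""
    return ("seq", exprs, sum(cost(e) for e in exprs))

def choice(*exprs):
    """Cost choice: the agent is assumed to pick the cheapest alternative."""
    return ("choice", exprs, min(cost(e) for e in exprs))

def cost(expr):
    return expr[2]

# Two candidate maneuvers for a head-on encounter (costs are made up):
starboard_pass = seq(atom("turn_starboard", 2.0), atom("resume_course", 1.0))
stop_and_wait  = seq(atom("all_stop", 1.0), atom("hold_position", 4.0))
plan = choice(starboard_pass, stop_and_wait)
```

Under these assumed costs the starboard pass is the cheaper branch, so an observer matching the vehicle's actual maneuver against `plan` can infer which alternative it chose and why.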

The explanation system is broken up into three subsystems that address the principal developments needed:

1. An inference mechanism for the mapping of observed behaviors into the cost-calculus: The observation equivalence of behaviors on a single autonomous agent and between two or more agents is done through bi-simulation relations. An example of the inference mechanism at work in a Rules-of-the-Road behavior is shown in the figure.
2. A learning mechanism for the cost-expression generation for observed behaviors outside of the cost-calculus tactical behavior base: Reinforcement learning of observed behavior patterns is used for the common grounding of behavior sequences that were not previously observed, or that are not in the command dictionary of the autonomous agent.
3. Explanation capabilities for the system: A dynamic decision tree decomposition of the observed behaviors is used to generate a set of rules for explanation. An adaptive level of detail is automatically built into this process in that all of the sensory information that led to a behavior is available, and can be conveyed to the operator if the human/machine interface (HMI) has a detail level of request capability.
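The adaptive level of detail described in item 3 can be sketched as follows. This is a hypothetical illustration, not the JPL implementation: the `explain` function, the `decision` record, and its field names are all assumptions. The point is that every fired rule retains the sensory evidence that triggered it, and the operator's requested detail level controls how much of that evidence the HMI reports back.

```python
# Hypothetical sketch of detail-adaptive explanation generation: each
# decision record keeps the rule and sensory evidence that produced it,
# and the operator's detail-level request controls how much is reported.

def explain(decision, detail_level=0):
    """Render a behavior decision as an explanation string.

    detail_level 0 -> action only; 1 -> adds the rule; 2 -> adds evidence.
    """
    parts = [f"Action: {decision['action']}"]
    if detail_level >= 1:
        parts.append(f"Rule: {decision['rule']}")
    if detail_level >= 2:
        evidence = ", ".join(f"{k}={v}" for k, v in decision["sensors"].items())
        parts.append(f"Evidence: {evidence}")
    return "; ".join(parts)

decision = {
    "action": "alter course 30 deg to starboard",
    "rule": "head-on contact within 2000 m triggers avoidance",
    "sensors": {"contact_bearing": 90.0, "contact_range_m": 500.0},
}
```

At detail level 0 the operator sees only the action taken; raising the level progressively exposes the rule and then the raw sensor values, matching the "detail level of request" capability described above.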

This work was done by Terrance L. Huntsberger of Caltech for NASA’s Jet Propulsion Laboratory.

The software used in this innovation is available for commercial licensing. Please contact Daniel Broderick of the California Institute of Technology. Refer to NPO-46864.



This Brief includes a Technical Support Package (TSP).
Explanation Capabilities for Behavior-Based Robot Control

(reference NPO-46864) is currently available for download from the TSP library.





This article first appeared in the January 2012 issue of NASA Tech Briefs Magazine (Vol. 36 No. 1).



Overview

The document titled "Explanation Capabilities for Behavior-Based Robot Control" from NASA's Jet Propulsion Laboratory discusses the challenges and advancements in the interaction between operators and autonomous vehicles. It highlights a study that identified significant issues stemming from missing contextual information during remote operations, which can lead to misunderstandings and inefficiencies in how autonomous vehicles are commanded.

The core premise of the research is that as autonomous vehicles become more capable and independent, their behaviors may not align with the operators' mental models. This disconnect can hinder effective communication and operational efficiency. To address these challenges, the study proposes the development of explanation capabilities that allow autonomous agents to articulate their actions and reasoning. This self-awareness would enable the vehicles to adjust the information they relay back to the operations center based on the context of requests, thereby improving common ground between the operator and the vehicle.

The document outlines a structured approach to developing these explanation capabilities, which is divided into three main subsystems:

  1. Inference Mechanism: This subsystem focuses on mapping observed behaviors of the autonomous vehicle into the formal cost-calculus framework (also written $-calculus). It evaluates the speed of inference, the accuracy of behavior matching, and the error rates in the inference process.

  2. Learning Mechanism: This component is responsible for generating cost-expressions for behaviors that fall outside the predefined tactical behavior base. It emphasizes the importance of speed and accuracy in learning new behaviors.

  3. Explanation Capabilities: This subsystem aims to provide clear and effective explanations of the vehicle's actions. It assesses the complexity of explanations based on the number of statements made versus the complexity of the behavior being explained.

The document serves as a technical support package under NASA's Commercial Technology Program, aiming to disseminate findings that have broader technological, scientific, or commercial applications. It underscores the importance of enhancing communication between human operators and autonomous systems to improve operational effectiveness in aerospace and other fields.

Overall, the research presented in this document is pivotal for advancing the capabilities of autonomous vehicles, ensuring they can operate more effectively in complex environments while maintaining clear communication with their human operators.