A novel computational model allows robots to ask clarifying questions to soldiers, enabling them to be more effective teammates in tactical environments. (Image: 1st Lt. Angelo Mejia)

Future Army missions will have autonomous agents, such as robots, embedded in human teams making decisions in the physical world. One major challenge toward this goal is maintaining performance when a robot encounters something it has not previously seen, such as a new object or location. Robots will need to learn these novel concepts on the fly in order to support the team and the mission.

Researchers have created a computational model for automated question generation and learning. The model enables a robot to ask effective clarification questions based on its knowledge of the environment and to learn from the responses. This process of learning through dialogue works for learning new words, concepts, and even actions. Researchers integrated this model into a cognitive robotic architecture.
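The article does not include the model's code, so the following Python sketch is purely an illustration of the ask-and-learn cycle described above. Every name in it (ConceptStore, learn_through_dialogue, the property list) is hypothetical and not taken from the researchers' implementation.

```python
# Hypothetical sketch of learning a new concept through dialogue.
# None of these names come from the actual system; they only illustrate the
# ask-a-question / learn-from-the-answer cycle described in the article.

class ConceptStore:
    """Very small stand-in for the robot's knowledge of named concepts."""

    def __init__(self):
        self.concepts = {}          # concept name -> dict of learned properties

    def is_known(self, name):
        return name in self.concepts

    def add_fact(self, name, prop, value):
        self.concepts.setdefault(name, {})[prop] = value


def learn_through_dialogue(store, concept_name, ask):
    """Ask clarification questions about an unknown concept and store the answers.

    `ask` is any callable that poses a question to the human teammate and
    returns the reply as a string (speech interface, chat window, etc.).
    """
    if store.is_known(concept_name):
        return store.concepts[concept_name]

    # Question forms of the general kind observed in human-robot dialogue studies.
    for prop, question in [
        ("color", f"What color is the {concept_name}?"),
        ("location", f"Where is the {concept_name} usually found?"),
        ("shape", f"What does the {concept_name} look like?"),
    ]:
        answer = ask(question)
        if answer:
            store.add_fact(concept_name, prop, answer.strip().lower())
    return store.concepts.get(concept_name, {})


if __name__ == "__main__":
    store = ConceptStore()
    # Stand-in for a real dialogue channel: answers typed at the keyboard.
    learned = learn_through_dialogue(store, "torque wrench", ask=input)
    print("Learned about 'torque wrench':", learned)
```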

In previous research, the team conducted an empirical study to explore and model how humans ask questions when controlling a robot. This led to the creation of the Human-Robot Dialogue Learning (HuRDL) corpus, which contains labeled dialogue data that categorizes the form of questions that study participants asked. The HuRDL corpus serves as the empirical basis for the computational model for automated question generation.
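HuRDL's actual annotation scheme and file format are not described here; the fragment below is only a guess at how labeled dialogue data of this kind might be organized, to make "categorizing the form of questions" concrete. The category names and utterances are illustrative, not the corpus's real labels.

```python
# Hypothetical representation of labeled dialogue data in the spirit of HuRDL.
from collections import Counter

labeled_questions = [
    {"utterance": "What color is it?",         "category": "attribute-query"},
    {"utterance": "Is it on the table?",       "category": "location-confirmation"},
    {"utterance": "Do you mean the red one?",  "category": "disambiguation"},
    {"utterance": "What should I do with it?", "category": "action-query"},
]

# A model builder might count which question forms humans actually use and let
# those frequencies inform automated question generation.
form_counts = Counter(item["category"] for item in labeled_questions)
print(form_counts.most_common())
```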

The model uses a decision network, a probabilistic graphical model that enables a robot to represent world knowledge gathered from its various sensory modalities, including vision and speech. The robot reasons over these representations to select the questions most likely to fill the gaps in its knowledge about unknown concepts.

For example, if a robot is asked to pick up an object it has never seen before, it might try to identify the object by asking a question such as “What color is it?” or another of the question forms catalogued in the HuRDL corpus.
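The decision network itself is not published in this article. Under some assumed simplifications (a handful of candidate objects, questions about single properties, noiseless answers), the sketch below picks the clarification question whose answer is expected to shrink the robot's uncertainty about the referent the most, which is the flavor of expected-value reasoning described above; it is not the researchers' actual model.

```python
# Illustrative only: choose the clarification question expected to tell the
# robot the most about which object the human means.
import math
from collections import defaultdict

# Belief over candidate objects (e.g., from vision) and their known properties.
candidates = {
    "mallet":        {"color": "black",  "size": "large"},
    "screwdriver":   {"color": "red",    "size": "small"},
    "torque wrench": {"color": "silver", "size": "large"},
}
belief = {name: 1.0 / len(candidates) for name in candidates}   # uniform prior


def entropy(dist):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)


def expected_entropy_after(prop):
    """Expected remaining uncertainty after hearing the answer about `prop`."""
    # Group candidates by the answer they would produce.
    by_answer = defaultdict(dict)
    for name, props in candidates.items():
        by_answer[props[prop]][name] = belief[name]
    expected = 0.0
    for group in by_answer.values():
        p_answer = sum(group.values())
        posterior = {n: p / p_answer for n, p in group.items()}
        expected += p_answer * entropy(posterior)
    return expected


questions = {"color": "What color is it?", "size": "How big is it?"}
best_prop = min(questions, key=expected_entropy_after)   # biggest expected drop
print("Robot asks:", questions[best_prop])
```

With these toy candidates, color distinguishes all three objects while size does not, so the sketch selects “What color is it?”, the same kind of question given as the example above.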

The question generation model was integrated into the Distributed Integrated Affect Reflection Cognition (DIARC) robot architecture, developed by collaborators at Tufts University. In a proof-of-concept demonstration in a virtual Unity 3D environment, the researchers showed a robot learning through dialogue to perform a collaborative tool-organization task.

While prior research on soldier-robot dialogue enabled robots to interpret soldier intent and carry out commands, operating in tactical environments poses additional challenges. For example, a command may be misunderstood due to loud background noise, or a soldier may refer to a concept with which the robot is unfamiliar. As a result, robots need to learn and adapt on the fly if they are to keep up with soldiers in these environments.

The ability to learn through dialogue is beneficial to many types of language-enabled agents, such as robots and sensors, which can use this technology to adapt more readily to novel environments. Such technology can be employed on robots in remote collaborative tasks such as reconnaissance and search-and-rescue, or in co-located human-agent teams performing tasks such as transport and maintenance.

This research differs from existing approaches to robot learning in its focus on interactive, human-like dialogue as the means of learning. This kind of interaction is intuitive for humans and avoids the need to develop complex interfaces for teaching the robot. Another innovation of the approach is that, unlike many deep learning methods, it does not rely on extensive training data.

Finally, this research addresses the issue of explainability. Many commercial AI systems cannot explain why they made a decision. The new approach is inherently explainable in that questions are generated based on a robot’s representation of its own knowledge and lack of knowledge. The DIARC architecture supports this kind of introspection and can even generate explanations about its decision-making. Such explainability is critical for tactical environments, which are fraught with potential ethical concerns.
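DIARC's actual introspection and explanation mechanisms are more sophisticated than anything shown here; purely as a sketch of the idea, the snippet below builds an explanation string directly from a record of what the robot does and does not know about a referent, so that every question it asks can be justified in those terms. The function and its arguments are hypothetical.

```python
# Illustrative sketch: explanations derived from the robot's own record of
# known vs. unknown properties. Not DIARC code; it only shows why
# "ask because I don't know X" is explainable by construction.

def explain_question(concept, known, needed):
    """Return a human-readable reason for the next clarification question.

    known  -- properties the robot has already grounded, e.g. {"size": "large"}
    needed -- properties required to carry out the task
    """
    missing = [p for p in needed if p not in known]
    if not missing:
        return f"I have enough information to identify the {concept}."
    return (
        f"I asked about the {missing[0]} because I know the {concept}'s "
        f"{', '.join(known) or 'nothing yet'} but not its {', '.join(missing)}."
    )


print(explain_question("wrench", known={"size": "large"},
                       needed=["color", "size", "location"]))
# -> I asked about the color because I know the wrench's size but not its color, location.
```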

The next step is to improve the model by expanding the kinds of questions it can ask.

For more information, contact the Army Research Laboratory Public Affairs Office at 301-394-3590.