With Air-Guardian, a computer program tracks where a human pilot is looking (using eye-tracking technology) so it can understand what the pilot is focusing on, helping it make decisions that align with what the pilot is doing or intending to do. (Image: Alex Shipps/MIT CSAIL using the Midjourney AI image generator)

Imagine you're in an airplane with two pilots, one human and one computer. Both have their “hands” on the controls, but they're always looking out for different things. If they're both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses something, the computer quickly takes over.

Meet the Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots grapple with an onslaught of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive copilot: a partnership between human and machine, rooted in understanding attention.

But how does it determine attention, exactly? For humans, it uses eye-tracking; for the neural system, it relies on "saliency maps," which pinpoint where attention is directed. The maps serve as visual guides, highlighting key regions within an image and helping to interpret the behavior of complex algorithms. Rather than intervening only after a safety breach, as traditional autopilot systems do, Air-Guardian uses these attention markers to identify early signs of potential risk.
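To make this idea concrete, here is a minimal sketch (illustrative Python, not the authors' code) that compares a pilot's gaze-derived attention map against the network's saliency map and shifts control authority when the two diverge. The `attention_overlap` and `blend_controls` helpers, the cosine-similarity measure, and the 0.7 threshold are all assumptions made for illustration.

```python
import numpy as np

def attention_overlap(human_map: np.ndarray, machine_map: np.ndarray) -> float:
    """Cosine similarity between two non-negative 2-D attention maps."""
    h = human_map.ravel()
    m = machine_map.ravel()
    return float(h @ m / (np.linalg.norm(h) * np.linalg.norm(m) + 1e-8))

def blend_controls(human_cmd: np.ndarray, machine_cmd: np.ndarray,
                   overlap: float, threshold: float = 0.7) -> np.ndarray:
    """Defer to the pilot while attention agrees; hand authority to the
    machine as the attention maps diverge. The threshold and linear
    blend are placeholders, not the paper's actual arbitration rule."""
    pilot_weight = 1.0 if overlap >= threshold else overlap / threshold
    return pilot_weight * human_cmd + (1.0 - pilot_weight) * machine_cmd
```

With maps that mostly agree, the pilot's command passes through nearly unchanged; as gaze drifts away from what the network deems salient, the machine's command takes more weight.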

The broader implications of this system reach beyond aviation. Similar cooperative control mechanisms could one day be used in cars, drones, and a wider spectrum of robotics.

"An exciting feature of our method is its differentiability," said MIT CSAIL postdoc Lianhao Yin, a lead author on a new paper about Air-Guardian. "Our cooperative layer and the entire end-to-end process can be trained. We specifically chose the causal continuous-depth neural network model because of its dynamic features in mapping attention. Another unique aspect is adaptability. The Air-Guardian system isn't rigid; it can be adjusted based on the situation's demands, ensuring a balanced partnership between human and machine."

In field tests, both the pilot and the system made decisions based on the same raw images when navigating to the target waypoint. Air-Guardian’s success was gauged by the cumulative reward earned during flight and by the length of the path flown to the waypoint. The guardian reduced the risk level of flights and increased the success rate of navigating to target points.
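A hypothetical sketch of how such per-flight evaluation signals might be computed follows; the reward values themselves would come from the experiment's own reward function, which is not specified here.

```python
import numpy as np

def episode_metrics(positions: np.ndarray, rewards: np.ndarray):
    """positions: (T, 3) aircraft positions over an episode;
    rewards: (T,) per-step rewards from the experiment's reward function."""
    cumulative_reward = float(rewards.sum())
    # Total length of the flown path: sum of distances between steps.
    path_length = float(np.linalg.norm(np.diff(positions, axis=0), axis=1).sum())
    return cumulative_reward, path_length
```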

"This system represents the innovative approach of human-centric AI-enabled aviation," added Ramin Hasani, MIT CSAIL research affiliate and inventor of liquid neural networks. "Our use of liquid neural networks provides a dynamic, adaptive approach, ensuring that the AI doesn't merely replace human judgment but complements it, leading to enhanced safety and collaboration in the skies."

The true strength of Air-Guardian is its foundational technology. Its optimization-based cooperative layer fuses visual attention from human and machine, and its liquid closed-form continuous-time (CfC) neural networks, known for their prowess in deciphering cause-and-effect relationships, analyze incoming images for vital information. Complementing this is the VisualBackProp algorithm, which identifies the system's focal points within an image, ensuring a clear understanding of its attention maps.
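The VisualBackProp idea can be sketched roughly as follows: average each convolutional layer's feature maps, then carry the deepest average back toward the input, rescaling by each shallower layer's average along the way. This is a simplified NumPy rendition; the published algorithm uses deconvolutions for the upsampling step, whereas this version substitutes nearest-neighbor indexing.

```python
import numpy as np

def visual_backprop(feature_maps: list) -> np.ndarray:
    """feature_maps: per-layer conv activations, each (channels, H, W),
    ordered shallowest to deepest; returns an (H, W) saliency map at the
    shallowest layer's resolution, normalized to [0, 1]."""
    averaged = [fm.mean(axis=0) for fm in feature_maps]  # one (H, W) map per layer
    mask = averaged[-1]
    for avg in reversed(averaged[:-1]):
        # Upsample the running mask to this layer's resolution (nearest
        # neighbor), then rescale by the layer's averaged activation so
        # consistently active regions survive back to the input.
        rows = np.linspace(0, mask.shape[0] - 1, avg.shape[0]).astype(int)
        cols = np.linspace(0, mask.shape[1] - 1, avg.shape[1]).astype(int)
        mask = mask[np.ix_(rows, cols)] * avg
    return mask / (mask.max() + 1e-8)
```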

For future mass adoption, the human-machine interface will need refinement. Feedback suggests that a visual indicator, such as a bar, might be a more intuitive way to signal when the guardian system takes control.

Air-Guardian heralds a new age of safer skies, offering a reliable safety net for those moments when human attention wavers.

"The Air-Guardian system highlights the synergy between human expertise and machine learning, furthering the objective of using machine learning to augment pilots in challenging scenarios and reduce operational errors," said Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, director of CSAIL, and senior author on the paper.

"One of the most interesting outcomes of using a visual attention metric in this work is the potential for allowing earlier interventions and greater interpretability by human pilots," said Stephanie Gil, assistant professor of computer science at Harvard University, who was not involved in the work. "This showcases a great example of how AI can be used to work with a human, lowering the barrier for achieving trust by using natural communication mechanisms between the human and the AI system."
