A self-driving car needs to make quick decisions as it detects its surroundings. But can you trust a vehicle’s ability to make sound choices within fractions of a second — especially when conflicting information is coming from the car’s cameras, LiDAR, and radar?

"There is hope," begins the title of a USC engineering team's research paper.

A new modeling tool from USC indicates when predictions from A.I. algorithms are trustworthy.

The important aspect of the model isn't how confidently it reaches decisions; in fact, the model quantifies uncertainty.

“Even humans can be indecisive in certain decision-making scenarios, particularly in cases involving conflicting information,” said lead author Mingxi Cheng. “Why can’t machines tell us when they don’t know?”

Cheng and his team, which included Shahin Nazarian and Paul Bogdan of the USC Cyber Physical Systems Group, created a system called DeepTrust. The A.I. model quantifies a level of uncertainty and uses that information to determine when human intervention is necessary.

The team's research paper, "There Is Hope After All: Quantifying Opinion and Trustworthiness in Neural Networks," was featured in the publication Frontiers in Artificial Intelligence.

The DeepTrust tool employs what is known as subjective logic, which makes decisions using continuous belief and uncertainty parameters rather than discrete truth values alone.
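To make that concrete, here is a minimal sketch of the binomial opinions subjective logic works with, following Jøsang's standard formulation of belief, disbelief, uncertainty, and base rate. The class name and example values below are illustrative only, not DeepTrust's actual parameterization.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A subjective-logic binomial opinion about a single proposition."""
    belief: float        # evidence in favor of the proposition
    disbelief: float     # evidence against it
    uncertainty: float   # lack of evidence; belief + disbelief + uncertainty == 1
    base_rate: float = 0.5  # prior probability assumed when evidence is absent

    def expected_probability(self) -> float:
        # Standard subjective-logic expectation: E = b + a * u,
        # i.e. uncertainty contributes in proportion to the base rate.
        return self.belief + self.base_rate * self.uncertainty


# Illustrative values: a confident "obstacle ahead" reading
# versus one that is mostly uncertain.
confident = Opinion(belief=0.8, disbelief=0.1, uncertainty=0.1)
unsure = Opinion(belief=0.2, disbelief=0.1, uncertainty=0.7)
print(confident.expected_probability())  # ~0.85
print(unsure.expected_probability())     # ~0.55
```

Both opinions lean toward "obstacle," but the second one carries so much uncertainty that a system tracking it could flag the decision for human review rather than act on it.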

DeepTrust factors the uncertainty calculations into an assessment of the neural network's overall architecture rather than of the hundreds to thousands of individual data points.

"To our knowledge, there is no trust quantification model or tool for deep learning, artificial intelligence and machine learning," said Prof. Bogdan. "This is the first approach and opens new research directions."

In written responses to Tech Briefs, Bogdan and Cheng reveal more about why we can believe them when they say their model is the first of its kind.

Tech Briefs: What is valuable about a machine determining, like a human would, that it doesn’t know the answer? What is the value of a kind of not-knowing or indecisiveness?

Prof. Paul Bogdan: In most machine-learning, and especially deep-learning, classification problems, there are pre-defined outcomes that a machine should generate. However, in real-world applications the training data is often limited and incomplete, so the pre-defined outcomes cover only a small portion of the outcomes that are actually possible.

In this case, a pre-trained machine can only force the outcome into one of the known categories, which can result in serious problems. For example, in one of the fatal incidents involving a self-driving car, the autonomous driving system was trained with incomplete training data and generated a wrong decision during testing. In such scenarios, if the system has the option to say ‘I don’t know; please switch to human control,’ such incidents can be avoided.

Tech Briefs: What criteria from your model determine “noise” in a sample?

Lead author Mingxi Cheng: Our model takes the data’s trustworthiness into consideration, and that trustworthiness can come from multiple aspects of data quality, such as noise, distortion, and missing features. However, this kind of data-trustworthiness quantification is still missing from the literature.

In our work, we do not determine “noise” in a sample but treat it as an input that is known beforehand. One way to relax this requirement is to assume a maximum uncertainty for the data. We are working on follow-up research to quantify data trustworthiness.

One simple approach to determining ‘noise’ is to relate it to trustworthiness. If, during the training process, the model cannot generate high-trustworthiness results for a sample, we count that as negative evidence in the opinion calculation; otherwise, we count it as positive evidence. An evidence-to-trust mapping can then determine whether a sample is trustworthy or noise-free.
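As a hedged illustration of that evidence-to-opinion idea, the sketch below uses the standard subjective-logic mapping from positive and negative evidence counts to belief, disbelief, and uncertainty, with a non-informative prior weight of 2. The function name and counts are hypothetical, and DeepTrust's own mapping may be parameterized differently.

```python
def opinion_from_evidence(positive: int, negative: int, prior_weight: float = 2.0):
    """Map evidence counts to a (belief, disbelief, uncertainty) triple.

    Standard subjective-logic rule: more total evidence shrinks uncertainty,
    and the belief/disbelief split follows the evidence ratio.
    """
    total = positive + negative + prior_weight
    return positive / total, negative / total, prior_weight / total


# Hypothetical counts: a sample that yielded high-trust results in 9 of 10
# training passes ends up with high belief and low uncertainty ...
print(opinion_from_evidence(9, 1))  # ~(0.75, 0.08, 0.17)
# ... while a sample with little evidence either way stays mostly uncertain.
print(opinion_from_evidence(1, 1))  # (0.25, 0.25, 0.5)
```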

Tech Briefs: Can you take me through a cutting-edge application you envision that incorporates this model? Where do you see DeepTrust being used?

Mingxi Cheng: Applications that use deep neural networks as decision-making cores can all incorporate this model, especially those that collect data from multiple sources or sensors. Those applications may face the problem of handling conflicting information, as with a self-driving car’s motion-control system. If one sensor reports there is an obstacle and another sensor says the contrary, there has to be a good way to combine that conflicting information and make the right decision.
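To illustrate what combining conflicting sources can look like in this framework, the sketch below applies subjective logic's classic cumulative (consensus) fusion operator to two hypothetical sensor opinions about "there is an obstacle," assuming both opinions carry some uncertainty and share a base rate. This is not DeepTrust's actual combination rule, just a standard operator from the same framework.

```python
def cumulative_fusion(b1, d1, u1, b2, d2, u2):
    """Cumulative (consensus) fusion of two binomial opinions.

    Assumes both opinions have nonzero uncertainty (u1, u2 > 0) and share
    the same base rate, so only belief/disbelief/uncertainty are fused.
    """
    k = u1 + u2 - u1 * u2
    belief = (b1 * u2 + b2 * u1) / k
    disbelief = (d1 * u2 + d2 * u1) / k
    uncertainty = (u1 * u2) / k
    return belief, disbelief, uncertainty


# Hypothetical sensor opinions about "there is an obstacle":
# the camera mostly believes it, the radar mostly disbelieves it.
camera = (0.7, 0.1, 0.2)
radar = (0.1, 0.6, 0.3)
print(cumulative_fusion(*camera, *radar))  # ~(0.52, 0.34, 0.14)
```

Because the two sources disagree, neither belief nor disbelief dominates the fused opinion, which is exactly the kind of signal a system could use to hand control back to a human.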

Tech Briefs: Your press release mentioned that this model is the “first of its kind.” Can you say more about why?

Prof. Paul Bogdan: First of all, the definition of trustworthiness in deep learning, AI, and machine learning has been non-analytical; the mainstream definition comes from descriptive concepts. For deep neural networks especially, there is no formal mathematical model or trust metric to analytically calculate a trust value. Most robustness, safety, and interpretability work evaluates a neural network from certification or explanation perspectives, based on the output results and accuracy. Considering both the trustworthiness of the data and the inner workings of the neural network makes our model the “first of its kind.”

Secondly, we dig into why the output may not be trustworthy, and we find that the trustworthiness of the neural network is not necessarily correlated with its accuracy, which makes our model different from approaches based purely on accuracy.

What do you think? Can you make A.I. trustworthy?