As a former high-power electronics engineer, I designed and tested high-voltage power supplies for a wide variety of applications, from early prototype CAT scan systems to particle accelerators. I did the same with high-power microwave systems, for everything from simulators that test the effects of radar on sensitive aircraft electronics at one extreme to granola processing at the other. For more than 30 years, I handled tens of kilovolts of DC and tens of kilowatts of microwave power.

Everything had to be designed with safety margins as large as practical and with the goal of being fail-safe. That meant thinking in advance about the possible ways the system might fail and designing to minimize the chance that a failure would cause injury or damage. For example, we used electronic “crowbars” that would safely short-circuit the output within microseconds if a sensor signaled a sudden increase in load current. But that was an active system, which wouldn’t work in the event of a power failure. So we also used a backup mechanical crowbar, held up by an electromagnet, as fail-safe protection: if the input line power failed, a metal bar would drop across the output. This mattered because even without power there could be dangerous amounts of energy stored in the capacitor banks, something I once discovered the hard way when I received a shock from a power supply that I thought was safely off.
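For readers who think in code, here is a minimal sketch in Python of that two-layer protection scheme. Every name and number in it is an illustrative assumption rather than a real controller design; the point is simply that the active path depends on power and a working sensor, while the mechanical path shorts the output by default whenever power is lost.

```python
# Illustrative sketch only: a toy model of active plus fail-safe crowbar protection.
class CrowbarProtection:
    def __init__(self, overcurrent_threshold_amps):
        self.overcurrent_threshold_amps = overcurrent_threshold_amps
        self.active_crowbar_fired = False  # latches once triggered

    def active_crowbar(self, load_current_amps):
        """Active protection: fire if the sensor reports a sudden overcurrent."""
        if load_current_amps > self.overcurrent_threshold_amps:
            self.active_crowbar_fired = True
        return self.active_crowbar_fired

    def mechanical_crowbar(self, line_power_present):
        """Passive protection: the bar is held up only while the electromagnet
        is energized, so losing line power shorts the output by default."""
        return not line_power_present  # True means the bar has dropped

    def output_is_shorted(self, load_current_amps, line_power_present):
        return (self.active_crowbar(load_current_amps)
                or self.mechanical_crowbar(line_power_present))


protection = CrowbarProtection(overcurrent_threshold_amps=2.0)
print(protection.output_is_shorted(0.5, line_power_present=True))   # False: normal operation
print(protection.output_is_shorted(0.5, line_power_present=False))  # True: power lost, the bar drops
print(protection.output_is_shorted(5.0, line_power_present=True))   # True: active crowbar fires
```

Notice that the safest state is the default one: you have to supply power to keep the output live.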

It is with that mindset that I think about artificial intelligence (AI). I don’t worry that AI will displace human intelligence and turn us all into robots. But I do worry about relying on it too heavily, especially in at least two areas: safety applications in advanced driver assistance systems (ADAS) and autonomous vehicles, and disease diagnosis.

So it caught my attention when I read about “inherent limitations” in AI. Researchers from the University of Cambridge and the University of Oslo assert that the neural networks behind AI can be unstable under certain conditions, and that the instability can’t be fixed simply by adding more training data. According to the researchers, we need more theoretical work to better understand the mathematics of AI computing. To get more reliable results, you have to understand the particular source of error and change the AI method to fix it.
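To make that concrete, here is a toy example of my own (not the researchers’ work): a tiny hand-built network whose steep weights flip its decision when the input moves by a tenth of a percent. The weights and labels are invented purely for illustration, but they show the kind of sensitivity that more training data alone does not remove.

```python
import numpy as np

# Two-input network with deliberately steep weights, built by hand for illustration.
W1 = np.array([[80.0, -80.0],
               [-80.0, 80.0]])
b1 = np.zeros(2)
W2 = np.array([1.0, -1.0])

def classify(x):
    hidden = np.maximum(W1 @ x + b1, 0.0)   # ReLU layer
    return "class A" if W2 @ hidden > 0 else "class B"

x = np.array([0.500, 0.500])
x_nudged = x + np.array([0.001, -0.001])     # a 0.1 percent nudge to the input

print(classify(x))          # class B
print(classify(x_nudged))   # class A: the decision flips on an imperceptible change
```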

Researchers from the University of California, Berkeley, and the University of Texas at Austin noticed an issue when they failed to replicate the promising results of a medical imaging study. “After several months of work, we realized that the image data used in the paper had been preprocessed,” said study principal investigator Michael Lustig, UC Berkeley professor of electrical engineering and computer sciences.


That was the source of the trouble. “We wanted to raise awareness of the problem so researchers can be more careful and publish results that are more realistic,” said Lustig.

They discovered that the inaccuracy was caused by using a biased public database to train the system. The researchers coined the term “implicit data crimes” to describe the misleading results that arise when algorithms are developed using faulty methodology.
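A crude way to picture this kind of “data crime,” using made-up numbers rather than anything from the actual study: if the test data has already been through the same sort of preprocessing as the method under evaluation, the reported error can look far better than it would on raw data.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.normal(size=256)              # stand-in for one row of a raw image

def lowpass(signal, keep=64):
    """Crude 'preprocessing': keep only the lowest `keep` Fourier coefficients."""
    spectrum = np.fft.rfft(signal)
    spectrum[keep:] = 0
    return np.fft.irfft(spectrum, n=signal.size)

reconstruction = lowpass(truth)           # a naive low-pass "reconstruction" method

# Honest evaluation against raw data
raw_error = np.linalg.norm(reconstruction - truth) / np.linalg.norm(truth)

# "Data crime": the public data was itself already low-pass preprocessed,
# so the same naive method now looks almost perfect.
preprocessed_truth = lowpass(truth)
biased_error = (np.linalg.norm(lowpass(preprocessed_truth) - preprocessed_truth)
                / np.linalg.norm(preprocessed_truth))

print(f"error measured against raw data:          {raw_error:.3f}")    # substantial
print(f"error measured against preprocessed data: {biased_error:.3f}")  # misleadingly near zero
```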

In a Q&A with Billy Hurley, SAE Digital Editorial Manager, Professor Eckehard Steinbach of the Technical University of Munich (TUM) described potentially critical automotive situations that AI models “may not be capable of recognizing, or have yet to discover.” For example, a pattern of repeated braking might be ordinary driving in warm weather but could signal an impending disengagement if the roads are icy and slippery. Such patterns can be hard to recognize.

But on the bright side, Steinbach’s team developed safety technology that learns introspectively from its own previous mistakes. “If the car enters a situation that it has not been trained for, problems can arise,” said Steinbach. “Such novel scenes cause human intervention, which leads to those scenes being used as training data for our approach. While our method can then help to detect such a new challenging environment the next time it is encountered, detecting and correctly managing an entirely novel scene the first time it is encountered remains a challenging task.”
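As a rough picture of what such a loop might look like (my own simplification, not TUM’s actual method), the sketch below stores the scenes that forced a human takeover and flags similar scenes the next time they appear. The feature vectors, the distance measure, and the threshold are all placeholder assumptions.

```python
import numpy as np

class IntrospectiveMonitor:
    """Toy sketch: remember scenes that led to disengagement, flag similar ones later."""

    def __init__(self, similarity_threshold=1.0):
        self.failure_scenes = []                      # scenes that caused a human takeover
        self.similarity_threshold = similarity_threshold

    def record_disengagement(self, scene_features):
        """A human had to intervene: keep this scene as training data."""
        self.failure_scenes.append(np.asarray(scene_features))

    def looks_risky(self, scene_features):
        """Flag a scene that lies close to one that previously caused a takeover."""
        scene = np.asarray(scene_features)
        return any(np.linalg.norm(scene - known) < self.similarity_threshold
                   for known in self.failure_scenes)


monitor = IntrospectiveMonitor()
monitor.record_disengagement([0.9, 0.1, 0.8])   # first encounter: human takes over

print(monitor.looks_risky([0.85, 0.15, 0.75]))  # True: recognized the second time around
print(monitor.looks_risky([0.1, 0.9, 0.0]))     # False: an entirely novel scene, still the hard case
```

The second printout is exactly the hard case Steinbach describes: a scene unlike anything recorded so far has to be handled correctly the first time it appears.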

My takeaway from all of this is that AI can speed up and improve medical diagnosis. It can also help to make vehicles a lot safer on the road. But you have to pay careful attention to your methods.

It’s vital to think like a high-voltage engineer when you’re designing an AI system: think in advance of the possible ways the system might fail and design it with the goal of minimizing the chances for that to happen. And if a failure does occur, aim to reduce the chances for injury or damage.

Do you agree? Share your questions and comments below.

Read more from Ed's Blog: Designing from the Outside In vs the Inside Out