It’s important to understand how experienced operators have learned to control their system.

The title of a SLAC National Accelerator Laboratory press release, “A Day in the Life of a Human-In-the-Loop Engineer,” caught my attention for two reasons. First, I am in awe of that fantastic machine, the two-mile-long linear accelerator I encountered during my high-power electronics days. And second, I had no idea there was such a thing as a “human-in-the-loop engineer” job description.

The SLAC Performance Optimization for Human-in-the-Loop Complex Control Systems project, led by Dr. Wan-Lin Hu, has two main goals: first, to optimize the operation of the accelerator’s control system; and second, to study how experienced operators have learned to control their machine and use that knowledge to train novice operators to interact with these highly automated control systems.

In a previous blog I wrote about humans in the loop, but my take then was a warning that human intervention can make a system unstable. Dr. Hu, however, is using the interaction between people and system constructively, to improve performance.

She makes the important point that: “When people design automatic systems, like self-driving cars, they push very hard to make everything work automatically, like magic. But in reality, humans are still very important. Although humans can’t deal with the same amount of complex data as machines can, they’re much better at adapting to changing situations.”

For example, although advanced driver assistance systems (ADAS) for cars are generally quite reliable, when such a system encounters a situation it has never seen before, it might make a dangerous error that a human driver would not.

Looking Under the Hood

One of the great advantages of using a programmable logic controller (PLC) to automate a control system is that you can see inside of it and watch its behavior in real time. Then, you can more easily figure out what happened if something goes wrong with the system.
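To make that concrete, here is a minimal sketch of that kind of live visibility, using the open-source pycomm3 library to poll tags on an Allen-Bradley Logix controller. The IP address, tag names, and poll interval are hypothetical placeholders; a real installation would substitute its own.

```python
# Minimal sketch: watching a PLC's internal state in real time.
# Assumes an Allen-Bradley Logix controller reachable over EtherNet/IP.
# The address and tag names below are hypothetical placeholders.
import time
from pycomm3 import LogixDriver

TAGS = ["Tank_Level", "Pump_Running", "Fault_Code"]   # hypothetical tags

with LogixDriver("192.168.1.10") as plc:              # hypothetical address
    while True:                                       # stop with Ctrl-C
        for result in plc.read(*TAGS):
            # Each result carries the tag name, its live value, and any error,
            # so an operator (or a logger) can watch the logic as it runs.
            if result.error:
                print(f"{result.tag}: read error ({result.error})")
            else:
                print(f"{result.tag} = {result.value}")
        time.sleep(1.0)                               # one-second poll
```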

One of the significant challenges of AI is that it is like the proverbial “black box” — you can’t see what’s going on inside it; you can only look at its inputs and outputs. But that doesn’t mean you can’t learn a lot about an AI system; you just have to design innovative methods for working with it.
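One simple way to work with a black box from the outside is to probe it: hold the inputs at a baseline, nudge them one at a time, and watch how the output responds. The sketch below is a generic illustration of that idea, not anything specific to SLAC’s systems; the predict function and the input names are made-up stand-ins for an opaque model.

```python
# Minimal sketch: probing a black-box model through its inputs and outputs only.
# `predict` stands in for any opaque model; we never look inside it.
import math

def sensitivity_probe(predict, baseline, deltas):
    """Perturb one input at a time and record how much the output moves."""
    base_out = predict(baseline)
    effects = {}
    for name, delta in deltas.items():
        probe = dict(baseline)          # copy so the baseline stays untouched
        probe[name] += delta
        effects[name] = predict(probe) - base_out
    return effects

# Hypothetical stand-in for an opaque model; names are illustrative only.
def predict(x):
    return 3.0 * x["beam_current"] - 0.5 * x["magnet_temp"] + math.tanh(x["rf_phase"])

baseline = {"beam_current": 1.0, "magnet_temp": 40.0, "rf_phase": 0.1}
deltas = {"beam_current": 0.1, "magnet_temp": 1.0, "rf_phase": 0.05}
print(sensitivity_probe(predict, baseline, deltas))
```

Even this crude input-output bookkeeping tells you which inputs the black box is most sensitive to, without ever opening it up.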

While acknowledging the “black box problem,” Hu and her group are trying out methods for capitalizing on the strengths of both the AI and the system’s human operators. She’s studying a group of about 20 people who operate the Linac Coherent Light Source, a “complex and delicate machine,” watching what they do and asking them why they do it that way. Experienced operators have accumulated knowledge over years that they have difficulty putting into words, a state often described as flow.

Hu is using her observations to make the interface between the operators and the AI more user-friendly, “so operators can do a better job of interpreting the AI and step in when needed to get better results.”
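As a rough illustration of that “step in when needed” idea (and not Hu’s actual operator interface), here is a common human-in-the-loop pattern: an automated policy proposes an action along with a confidence score, and anything below a threshold is handed to the operator to accept or override. The threshold, signals, and function names are all hypothetical.

```python
# Minimal sketch of a human-in-the-loop decision gate (illustrative only;
# the threshold, signals, and actions below are hypothetical).
CONFIDENCE_THRESHOLD = 0.8          # assumed cutoff for fully automatic action

def ai_propose(state):
    """Stand-in for an AI controller: returns (proposed_action, confidence)."""
    if state["beam_loss"] > 0.5:
        return "reduce_rf_power", 0.6    # unusual condition -> low confidence
    return "hold_setpoints", 0.95

def decide(state, ask_operator):
    action, confidence = ai_propose(state)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action                    # AI acts on its own
    # Low confidence: show the proposal and let the human accept or override.
    prompt = f"AI suggests '{action}' ({confidence:.0%} confident)."
    return ask_operator(prompt, action)

def console_operator(prompt, proposed):
    answer = input(prompt + " Press Enter to accept or type an alternative: ").strip()
    return answer or proposed

print(decide({"beam_loss": 0.7}, console_operator))
```

The point of the gate is simply that the operator is consulted when the automation is least sure of itself, rather than never or always.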

The point is that just because we can’t know exactly how or why artificial intelligence has done something doesn’t mean we should shrug our shoulders and uncritically accept it. In fact, that would be very dangerous; there are all sorts of ways that AI can make bad mistakes, so it’s critical to keep “humans in the loop.” The danger arises when we believe that AI has more ability than it actually does.

As the computer scientist Yejin Choi said in a New York Times interview, “The truth is, what’s easy for machines can be hard for humans and vice versa. You’d be surprised how A.I. struggles with basic common sense.”

The Bottom Line

AI is neither good nor bad — it is a tool, a tool for use by us humans. So, it’s critical that we learn how to use that tool safely and get the best out of it that we can.