In my first blog, Designing from the Outside In vs. the Inside Out, I wrote about my long-standing design principle of starting a design with the user interface. But in those days, once the design was finalized, that was it: I design it, it gets built, and the user uses it.

Starting with the user interface still makes sense, but things are getting much more complicated. With devices that use artificial intelligence — I design it, it gets built, and it changes as it’s being used. As a designer I now must consider back-and-forth interactions between user and device — and that can be trickier than it may seem at first.

Picture a car and driver. In the old days, if I wanted air conditioning, I pressed a button; if I wanted music, I turned on the radio; if I wanted to center my car in the lane, I turned the steering wheel. Coming soon, according to the article AI-Based Machine Vision & the Future of Automotive In-Cabin Technologies, the car will be able to check me out and decide things like whether my body temperature is getting too high. If it is, the air conditioning will start up. Or if it seems like my attention is wandering, maybe it should turn on some of my favorite music. Or more importantly, if I am drifting out of my lane, the car should turn the steering to bring me back.

Putting People into the Feedback Control Loop

These all involve feedback loops. For example, if the cabin temperature system monitors the body temperature of the driver, or even the passengers, it might decide to turn on the AC. That will change the occupants' body temperatures, which will in turn react back on the system, closing the loop. But what if the driver decides they are enjoying the heat? Can they manually turn off the AC?

Optimizing a feedback loop was a daunting challenge in my engineering days, and I wonder what happens when a human, the driver, becomes part of the loop. There are standard engineering techniques, such as proportional-integral-derivative (PID) control, for optimizing the behavior of a closed-loop electro-mechanical system. They work well when properly tuned, but tuning them is quite difficult, even though the performance of each separate component of the loop is well-defined and predictable. The difficulty arises when these components interact.
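To make the tuning challenge concrete, here is a minimal sketch of a discrete PID loop regulating a toy cabin-temperature model. The gains and plant constants are invented for illustration and do not come from any real automotive system; a real tuning exercise is exactly the difficult part described above.

```python
class PID:
    """Textbook discrete PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Combine the three terms into one control output.
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(seconds=600, dt=1.0):
    """Run the closed loop: hot cabin (35 C) driven toward a 22 C setpoint."""
    temp = 35.0       # starting cabin temperature, deg C
    setpoint = 22.0   # desired cabin temperature, deg C
    pid = PID(kp=0.5, ki=0.02, kd=0.1, dt=dt)
    for _ in range(int(seconds / dt)):
        # The AC can only cool, so clamp the control effort at zero.
        cooling = max(0.0, -pid.update(setpoint, temp))
        # Toy plant: cabin drifts toward 30 C ambient; AC removes heat.
        temp += dt * (0.01 * (30.0 - temp) - 0.05 * cooling)
    return temp
```

With these made-up gains the loop settles near the setpoint after a few hundred steps; change any one gain and it can overshoot, oscillate, or crawl. Now imagine that the "plant" being controlled includes a person who keeps reaching for the temperature knob.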

I can’t imagine what happens when one of the major components is a human, whose reactions are only minimally predictable. You can assume, for example, that if a person is getting hot, you should cool them down. That might be fine for most people, but not all. Even for a given person, their preferences might differ at different times of the day. Suppose they got into the car after a jog around the park. Depending on the individual, they may want to cool down slowly or quickly. And although they jogged today, they may not tomorrow. The possibilities for unpredictable changes are practically endless.

Maybe the AI can become sophisticated enough to pick up on this kind of variable behavior, but it’s not going to be easy. My personal opinion is that there are so many unpredictable random factors (I had a fight with my spouse this morning; I got a job promotion) that once you insert AI into a human-computer interface, there will always be the possibility of unpredictable outcomes.

How About Relationships Between Humans and Robots?

An article entitled Intelligent Machines: Can You Build a Robot You Can Trust raises some interesting questions about humans and robots relating to each other.

In the Tech Briefs interview, Dr. Philipp Kellemeyer, a clinician at the Freiberg University Medical Center, laid out some prerequisites for a trustworthy human-robot physical rehabilitation relationship:

  • Safety of the robot's behavior
  • Shared intentionality/predictability of behavior
  • Mutual attunement, i.e., sensitivity of the robot toward fluctuations in the human counterpart's abilities (due to fatigue or lack of motivation, for example). The patient, however, also needs the capacity to attune to the robot's behavior.

This, in a nutshell, highlights some of the most important considerations. Are they insurmountable? Dr. Kellemeyer doesn’t think so. When asked if there will always be trust issues between a patient and a robot, he replied: “No. It will depend on the mutual ‘vibe’ between the patient and the robot — again, much like in human-human interactions, which also have the unfortunate tendency to go awry or fall apart because of misunderstandings or for petty reasons.”

Can Inserting a Robot Change the Way People Behave With Each Other?

In an April 2019 Atlantic magazine article, Nicholas A. Christakis, a physician and sociologist at Yale University, discusses the social effects of AI: “the ways AI could affect how we humans interact with each other.”

I was particularly taken with an experiment he ran in his laboratory. He set up groups of three people plus a humanoid robot to solve a problem. In the experimental groups, the robot was programmed to make occasional errors, but to acknowledge them and apologize. Christakis observed that these groups performed better than the control groups, in which the robot did not apologize. The groups with the apologetic robots “became more relaxed and conversational, consoling group members who stumbled and laughing together more often.”

When people in a group are less worried about making mistakes, they feel freer to take chances — and that’s where creativity comes from.

What designer would have thought in advance that, by acknowledging its mistakes, a humanoid robot could affect the way people relate to one another?

The Takeaway

When AI is designed into a closed-loop system that is intended to interact with humans, it’s very difficult to predict the system’s behavior with certainty. You can count on there always being unpredictable consequences. The problem is that the human becomes a node of the control loop, and as any economist can tell you, the behavior of humans is very hard to predict. True, AI is designed to account for that, but its ability to do so is not unlimited. Perhaps a solution is that in addition to the user being part of the loop, the designer needs to be in there as well — to tweak the design as feedback comes in from the field.

Do you agree? Share your questions and comments below.