An interesting shift is occurring in the robotics landscape.

While we often marvel at a Boston Dynamics robot that can do back-flips or a NASA-developed ‘RoboSimian’ that busts through doors, humanoids are increasingly being asked to have more brains than brawn.

Researchers are currently working on ways to get machines to understand intention and make intelligent decisions – a significant challenge for scientists like Dr. Jean Scholtz and Dr. Leslie Blaha.

Dr. Scholtz, a computer scientist at Pacific Northwest National Laboratory (PNNL), evaluates how humans interact with computers. Dr. Leslie Blaha, a PNNL mathematical psychologist, leverages cognitive models of human behavior to add a more “life-like” intelligence to robots, smartphones, and other form factors.

As we creep closer to creating robots that look and act like humans, there are plenty of questions to be answered:

  • Will we be able to trust artificial-intelligence technology – autonomous vehicles, for example – to make our decisions?
  • Can a robot companion offer the same emotional support as a human?
  • Could a chatbot write this blog post?

Dr. Blaha and Dr. Scholtz both spoke with Tech Briefs about how humans are learning about humanity through humanoids.

Tech Briefs: How has robot/human interaction changed the most?

Dr. Jean Scholtz: We’re switching from people doing these automated, physical tasks with robots to people now using automation for more mental efforts. And that’s very interesting because: Do you really feel comfortable letting an automated system make some of your decisions for you?

I’m currently looking at people’s trust in different forms of automation. We all have different levels of trust. How many times does a system have to be wrong before I don’t trust it anymore? How long does it take me to build up trust in the system?

Tech Briefs: How have user expectations changed?

Dr. Leslie Blaha: We have these expectations that “I won’t have to change my behavior to interact with that automation,” that it will get very smart and be able to do things that I’m capable of.

Dr. Scholtz: So, this is a challenge nowadays too – to make sure that we give these systems the right sets of data, that [robots] can take and process and learn characteristics that are sufficient for making these decisions accurately.

Tech Briefs: What robots do you think are on the cutting edge of innovation?

Dr. Scholtz: A robot named Baxter can do simple tasks, say, on an assembly line. You can demonstrate a simple task to Baxter – here’s how you work the blender, for example. I thought this was a very interesting way to go because not everybody in the world wants to sit around programming their robot to do something different around the house.

Dr. Blaha: Along those lines of “interactive learning,” a group from the University of Michigan has been teaching robots to ask clarifying questions about what they don’t understand. So, if you say, “Go down to the office,” they’ll ask you for more specifics about which office and which direction. The robot can learn rather than requiring you to program it.

A collaborative robot known as Baxter (Image Credit: Rethink Robotics)

Tech Briefs: Is it realistic to imagine a future with robotic assistants? Can we imagine Rosey from The Jetsons in our homes someday?

Dr. Blaha: As far as somebody like Rosey from The Jetsons, I don’t think we’ll get the robot to be quite that happy. Humor is very hard to capture in a program! There have been some excellent strides in language processing within machine learning to help that, but still not quite to the level of natural human communication yet.

Dr. Scholtz: There’s a lot of work being done in social robots, especially with elderly people who may be confined and need more interaction. In fact, one of my friends bought her mother a robotic cat. It keeps her company, it curls up in her lap, and meows, and she feels very good about it.

Dr. Blaha: With the humanoid factor in particular, we know how to have [the robots] make noises that we can interpret about emotion – happy and sad sounds, for example. Some of the robotics work will serve a purpose for building that affective connection – for helping the elderly, trauma patients, children with autism, and anyone with a need to connect human to human.

Tech Briefs: What is most exciting to you about the ways humans and “humanoids” interact?

Dr. Blaha: I am excited about how the work in robotics and machine intelligence actually tells us more about human nature. You find some really fascinating things, like very human, behavior-based avatars that are essentially virtual therapists. There are patients who will open up and talk to these machines more than they do with other people because they don’t think the machine is judging them the way other people do.

The more we work with robots, the more we learn about people. That makes our robots better, but it also helps people and their environments work more effectively for each other.

What do you think? How do you envision humans interacting with humanoids? Share your thoughts below.