Robots that respond to physical human-robot interaction (pHRI) traditionally treat such interactions as disturbances and resume their original behaviors when the interactions end. A new method enhances pHRI by allowing humans to physically adjust a robot's trajectory in real time.

In early studies, gentle feedback was used to train a robot arm to manipulate a coffee cup in real time. (Image: Andrea Bajcsy)

The method was refined so that robots can be trained by applying gentle physical feedback while they perform tasks. The goal is to simplify the training of robots that are expected to work efficiently side by side with humans.

Historically, robots have taken on mundane tasks in areas such as manufacturing, assembly lines, welding, and painting. As humans have become more willing to share personal information with technology, that technology has moved into embodied hardware as well.

At the heart of the method is the concept of impedance control: managing, quite literally, what happens when “push comes to shove.” A robot under impedance control responds to physical input by adjusting its programmed trajectory, but returns to its original trajectory when the input ends.
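To make that distinction concrete, here is a minimal sketch of a one-dimensional impedance controller in Python. The virtual mass, damping, and stiffness values are illustrative assumptions, not parameters from the study: a human push deflects the robot away from its nominal path, and the virtual spring-damper pulls it back once the push ends.

```python
# Illustrative virtual impedance parameters (assumed, not from the study)
M, B, K = 1.0, 8.0, 40.0      # virtual mass, damping, stiffness
dt = 0.01                     # control timestep in seconds

def desired_position(t):
    """Preprogrammed (nominal) trajectory: a slow constant-velocity move."""
    return 0.1 * t

def human_force(t):
    """A brief push applied by the human between t = 1.0 s and t = 1.5 s."""
    return 5.0 if 1.0 <= t < 1.5 else 0.0

e, e_dot = 0.0, 0.0           # deviation from the nominal trajectory and its rate
for step in range(300):
    t = step * dt
    # Impedance law: M * e_ddot + B * e_dot + K * e = F_human
    e_ddot = (human_force(t) - B * e_dot - K * e) / M
    e_dot += e_ddot * dt
    e += e_dot * dt
    x = desired_position(t) + e   # actual position = plan + temporary deviation
    if step % 50 == 0:
        print(f"t = {t:4.2f} s   deviation from plan = {e:+.4f} m")
```

Because the controller only regulates the deviation from the plan, the plan itself never changes, which is exactly the behavior the new method sets out to improve.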

The new algorithm builds on that concept by allowing the robot to adjust its path beyond the moment of input and calculate a new route to its goal, much as a GPS system recalculates the route to a destination when a driver misses a turn.

Researchers trained a robot arm and hand to deliver a coffee cup across a desktop, then used the enhanced pHRI to keep it away from a computer keyboard and low enough that the cup wouldn't break if dropped. The goal was to deform the robot's programmed trajectory through physical interaction: the robot starts with a plan, or desired trajectory, that describes how it thinks it should perform the task, and the real-time algorithm modifies, or deforms, the future portion of that desired trajectory in response to each interaction.
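As a rough illustration of what deforming the future desired trajectory could look like, the sketch below propagates a single downward push into the remaining waypoints of the plan while keeping the goal fixed. The waypoint representation, admittance-style gain, and sinusoidal deformation profile are assumptions made for this example, not the published algorithm.

```python
import numpy as np

def deform_future(waypoints, k, push, horizon=20, gain=0.02):
    """Fold a human push at waypoint k into the robot's *future* desired
    trajectory instead of discarding it as a disturbance.

    waypoints : (N, 2) planned positions, here (x along the desk, cup height)
    k         : index where the interaction occurred
    push      : (2,) estimated force applied by the human
    horizon   : number of future waypoints the deformation spreads over
    gain      : force-to-displacement scaling (assumed admittance-like value)
    """
    deformed = waypoints.copy()
    end = min(k + horizon, len(waypoints) - 1)   # keep the final goal point fixed
    n = end - k
    if n <= 0:
        return deformed
    # Smooth bump that rises at the contact point and tapers to zero near the
    # goal, so the path bends away from the keyboard but still reaches its target.
    profile = np.sin(np.linspace(0.0, np.pi, n))
    deformed[k:end] += gain * np.outer(profile, push)
    return deformed

# Nominal plan: carry the cup 1 m across the desk at a constant 0.3 m height.
N = 50
plan = np.stack([np.linspace(0.0, 1.0, N), np.full(N, 0.3)], axis=1)

# Human pushes the cup downward at waypoint 15, roughly over the keyboard.
new_plan = deform_future(plan, k=15, push=np.array([0.0, -4.0]))
print("cup height at waypoint 25:", plan[25, 1], "->", round(new_plan[25, 1], 3), "m")
```

The point of the sketch is only that the correction reshapes waypoints the robot has not yet reached, rather than just its state at the moment of contact.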

In impedance mode, the robot consistently returned to its original trajectory after an interaction. In learning mode, the feedback altered not only the robot's state at the time of interaction but also how it proceeded to the goal. If the user directed it to keep the cup from passing over the keyboard, for instance, it would continue to do so in the future. By replanning the robot's desired trajectory after each new observation, the robot was able to generate behavior that matched the human's preference.

Further tests used a rehabilitative force-feedback robot, the OpenWrist, to manipulate a cursor around obstacles on a computer screen and land on a blue dot. The tests first used standard impedance control and then impedance control with physically interactive trajectory deformation, an analog of pHRI that allowed the device to be trained to learn new trajectories.
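A toy version of the learning mode described above, again with invented details (the keyboard coordinates, the single "stay low over the keyboard" preference feature, and the update rate are all assumptions), might treat each downward push over the keyboard as evidence about the user's preference and fold it into every subsequent plan:

```python
import numpy as np

KEYBOARD = (0.4, 0.6)          # x-range of the keyboard on the desk (assumed)
NOMINAL_H, LOW_H = 0.30, 0.15  # nominal and preferred cup heights in meters

def replan(weight, n=50):
    """Generate the desired trajectory for the next delivery, given the
    current estimate of how strongly the user wants the cup kept low."""
    xs = np.linspace(0.0, 1.0, n)
    heights = np.full(n, NOMINAL_H)
    over_kb = (xs >= KEYBOARD[0]) & (xs <= KEYBOARD[1])
    # Blend toward the low height in proportion to the learned weight.
    heights[over_kb] = (1 - weight) * NOMINAL_H + weight * LOW_H
    return np.stack([xs, heights], axis=1)

def update_preference(weight, x_at_push, push_z, rate=0.5):
    """Strengthen the 'stay low over the keyboard' preference whenever the
    human pushes the cup downward while it is above the keyboard."""
    if KEYBOARD[0] <= x_at_push <= KEYBOARD[1] and push_z < 0:
        weight = min(1.0, weight + rate)
    return weight

weight = 0.0                                   # no preference learned yet
weight = update_preference(weight, x_at_push=0.5, push_z=-4.0)
next_plan = replan(weight)                     # replanned after the observation
print("height over the keyboard on the next delivery:",
      round(next_plan[25, 1], 3), "m  (learned weight =", weight, ")")
```

Unlike the impedance-only sketch above, the correction here changes the plan that is generated for every subsequent delivery, which is the persistence the learning-mode trials were designed to show.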

The results showed that trials with trajectory deformation were physically easier and required significantly less interaction to achieve the goal. The experiments demonstrated that physical interactions can program otherwise autonomous robots with several degrees of freedom, in this case flexing an arm and rotating a wrist. One current limitation is that pHRI cannot yet modify the amount of time it takes a robot to perform a task.

Watch a video demonstration of the robot-guiding technology on Tech Briefs TV. For more information, contact Mike Williams at 713-348-6728.