Researchers have created a low-cost method for soft, deformable robots to detect a range of physical interactions, from pats to punches to hugs, without relying on touch sensors at all. Instead, a USB camera inside the robot captures the shadows that hand gestures cast on the robot’s skin and classifies them with machine-learning software.

Touch is an important mode of communication for most organisms but has been virtually absent from human-robot interaction. One reason is that full-body touch sensing has traditionally required a massive number of sensors, making it impractical to implement.

The ShadowSense technology grew out of work to develop inflatable robots that could guide people to safety during emergency evacuations. Such a robot would need to communicate with humans in extreme conditions and environments. Imagine a robot physically leading someone down a noisy, smoke-filled corridor by detecting the pressure of the person’s hand.

Rather than installing a large number of contact sensors, which would add weight and complex wiring and would be difficult to embed in a deforming skin, the team took a counterintuitive approach: to gauge touch, they looked to sight. With a camera placed inside the robot, they can infer how a person is touching it, and what the person intends, from the shadow images alone.
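The article does not describe the imaging pipeline itself, but the capture side can be sketched in a few lines. The following is a minimal illustration, assuming a standard USB camera read through OpenCV (cv2) and an arbitrary 96 x 96 grayscale working resolution; none of these specifics come from the ShadowSense work itself.

    import cv2

    def capture_shadow_frame(device_index=0, size=(96, 96)):
        """Grab one frame from the robot's internal USB camera and reduce
        it to a small grayscale image for a gesture classifier. The
        resolution and preprocessing are illustrative assumptions."""
        cap = cv2.VideoCapture(device_index)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise RuntimeError("camera read failed")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        small = cv2.resize(gray, size)
        # Scale pixel values to [0, 1] so the classifier sees a
        # consistent range across lighting conditions.
        return small.astype("float32") / 255.0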

The prototype robot consists of a soft, inflatable nylon-skin bladder stretched around a cylindrical skeleton roughly four feet tall, mounted on a mobile base. Under the robot’s skin is a USB camera connected to a laptop. The researchers developed a neural-network-based algorithm that uses previously recorded training data to distinguish among six touch gestures: touching with a palm, punching, touching with two hands, hugging, pointing, and not touching at all. Depending on the lighting, classification accuracy ranges from 87.5 to 96 percent.
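The article says only that a neural-network-based algorithm performs the classification; the architecture is not described. As a hypothetical stand-in, a small convolutional network in PyTorch, operating on the 96 x 96 grayscale frames from the capture sketch above, might look like this:

    import torch
    import torch.nn as nn

    GESTURES = ["palm", "punch", "two_hands", "hug", "point", "no_touch"]

    class ShadowClassifier(nn.Module):
        """A small CNN over grayscale shadow images. This architecture is
        a placeholder; the article states only that a neural network
        distinguishes six gestures, not how the network is built."""
        def __init__(self, num_classes=len(GESTURES)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),   # 96 -> 48
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),   # 48 -> 24
            )
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 24 * 24, num_classes),
            )

        def forward(self, x):      # x: (batch, 1, 96, 96)
            return self.head(self.features(x))

    def classify(model, frame):
        """frame: a 96x96 float32 array in [0, 1] from the capture step."""
        x = torch.from_numpy(frame).view(1, 1, 96, 96)
        with torch.no_grad():
            logits = model(x)
        return GESTURES[int(logits.argmax(dim=1))]

Training such a model would follow the usual supervised recipe, minimizing cross-entropy over the six gesture classes on the recorded data.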

The robot can be programmed to respond to certain touches and gestures, for example by rolling away or issuing a message through a loudspeaker. The robot’s skin also has the potential to be turned into an interactive screen. And by collecting enough data, a robot could be trained to recognize a wider vocabulary of interactions, tailored to its task.
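Wiring the classifier’s output to behaviors can be as simple as a dispatch table. In the sketch below, roll_away and play_message are hypothetical placeholders for the robot’s actual motion and audio commands, which the article does not specify:

    def roll_away():
        print("driving mobile base away")          # stand-in for a base motion command

    def play_message():
        print("playing message over loudspeaker")  # stand-in for audio output

    # Illustrative gesture-to-behavior mapping; the article names rolling
    # away and loudspeaker messages as example responses.
    RESPONSES = {
        "punch": roll_away,
        "palm": play_message,
        "hug": play_message,
    }

    def respond(gesture):
        action = RESPONSES.get(gesture)
        if action is not None:
            action()
        # Unmapped gestures, such as "no_touch", are simply ignored.

    # End-to-end loop: capture, classify, respond.
    # gesture = classify(model, capture_shadow_frame())
    # respond(gesture)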

The robot doesn’t even have to be a robot. ShadowSense technology can be incorporated into other materials, such as balloons, turning them into touch-sensitive devices. In the future, the researchers will try using optical devices such as lenses and mirrors to enable additional form factors.

ShadowSense also offers privacy. Because the robot sees a person only as a shadow, it can detect what the person is doing without capturing high-fidelity images of their appearance. The shadow acts as a physical filter, providing both protection and psychological comfort.

The ability to physically interact and understand a person’s movements and moods could ultimately be just as important to the person as it is to the robot.

For more information, contact Jeff Tyson at 607-793-5769.