There are some tasks that traditional robots — the rigid and metallic kind — cannot perform. Soft-bodied robots may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a difficult task for a soft robot that can deform in an infinite number of ways.

Researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design: the system learns not only a given task but also how best to design the robot to solve it.

Creating soft robots that complete real-world tasks has been a challenge in robotics. Rigid robots have a built-in advantage: a limited range of motion. Their finite array of joints and limbs usually makes for manageable calculations by the algorithms that control mapping and motion planning. Soft robots are not so tractable.

Soft-bodied robots are flexible and pliant — they generally feel more like a bouncy ball than a bowling ball. Any point on a soft-bodied robot can, in theory, deform in any way possible. That makes it tough to design a soft robot that can map the location of its body parts. Past efforts have used an external camera to chart the robot’s position and feed that information back into the robot’s control program. But the researchers wanted to create a soft robot untethered from external aid.

They developed a novel neural network architecture that both optimizes sensor placement and learns to complete tasks efficiently. First, they divided the robot’s body into regions called “particles.” Each particle’s rate of strain serves as an input to the neural network. Through a process of trial and error, the network “learns” the most efficient sequence of movements to complete tasks, like gripping objects of different sizes. At the same time, it keeps track of which particles are used most often and culls the lesser-used particles from the set of inputs for the network’s subsequent trials.
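The article gives no implementation details, so the sketch below is only a rough illustration of that culling loop, written in PyTorch. The network shape, the use of a learnable per-particle gate as the “usage” signal, the pruning schedule, and the supervised stand-in for the trial-and-error training are all assumptions for illustration, not the researchers’ actual method.

```python
import torch
import torch.nn as nn

# Hypothetical setup: the robot body is discretized into N "particles",
# and at each step the network sees one strain-rate reading per particle.
N_PARTICLES = 64
N_ACTIONS = 8  # assumed size of the robot's action space

class SensorPolicy(nn.Module):
    """Maps per-particle strain rates to actions. A learnable per-particle
    gate doubles as the importance score used in the culling step below."""
    def __init__(self, n_particles, n_actions):
        super().__init__()
        self.gate = nn.Parameter(torch.ones(n_particles))
        self.net = nn.Sequential(
            nn.Linear(n_particles, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, strain, active):
        # Zero out culled particles; weight the survivors by the gate.
        return self.net(strain * self.gate * active)

policy = SensorPolicy(N_PARTICLES, N_ACTIONS)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
active = torch.ones(N_PARTICLES)  # every particle starts as a candidate site

for trial in range(1000):
    strain = torch.randn(32, N_PARTICLES)  # stand-in for simulated strain rates
    target = torch.randn(32, N_ACTIONS)    # stand-in for a task-success signal
    loss = nn.functional.mse_loss(policy(strain, active), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Periodically cull the least-used particles from the input set.
    if (trial + 1) % 200 == 0:
        importance = (policy.gate.abs() * active).detach()
        n_keep = int(active.sum().item() * 0.75)  # assumed: drop 25% per round
        keep = importance.topk(n_keep).indices
        active = torch.zeros(N_PARTICLES).scatter(0, keep, 1.0)
```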

By identifying the most important particles, the network also suggests where sensors should be placed on the robot to ensure efficient performance. In a simulated robot with a grasping hand, the algorithm might suggest that sensors be concentrated in and around the fingers, where precisely controlled interactions with the environment are vital to the robot’s ability to manipulate objects. While that may seem obvious, the algorithm’s suggestions vastly outperformed human intuition about where to site the sensors.
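Continuing the hypothetical sketch above, the placement suggestion falls out for free: the particles that survive culling, ranked by their learned gate values, are the recommended sensor sites.

```python
# Particles still active after culling are the suggested sensor sites;
# rank them by learned importance, highest first.
suggested = torch.nonzero(active).flatten()
ranked = suggested[policy.gate[suggested].abs().argsort(descending=True)]
print("Suggested sensor locations (particle indices):", ranked.tolist())
```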

The work could help to automate the process of robot design. In addition to developing algorithms to control a robot’s movements, designers need to think about where to place sensors and how that placement will interact with the rest of the system. Better sensor placement could have industrial applications, especially where robots are used for fine tasks like gripping.

For more information, contact Abby Abazorius at 617-253-2709.