By connecting tactile sensors with intelligent software, the robot hands control their strength for a fine-touch grip that won’t damage delicate objects. (Bielefeld University)

A new grasping system enables robotic hands to work with objects whose characteristics are not known in advance. The system, which learns by trial and error, was developed by researchers at Bielefeld University in Germany. It features two hands modeled on human hands in both shape and mobility. The robot brain driving the hands must learn how everyday objects such as pieces of fruit or tools can be distinguished by their color or shape, and what matters when grasping each one; a banana, for example, can be held, while a button can be pressed. The system learns to recognize such possibilities as characteristics of an object, and constructs a model for interacting with it and re-identifying it.
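The article gives no implementation details, but the trial-and-error idea behind the fine-touch grip can be illustrated with a minimal Python sketch. Everything here is an assumption made for illustration: the ObjectModel fields and the adapt_grip routine, whose set_force and contact_is_stable callbacks stand in for the real actuator and tactile-sensor interfaces.

from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    # Learned model of one everyday object (all fields hypothetical).
    name: str
    color: tuple                      # mean RGB seen by the cameras
    shape: str                        # e.g. "cylinder", "box"
    affordances: set = field(default_factory=set)  # e.g. {"hold", "press"}

def adapt_grip(set_force, contact_is_stable, step=0.05, max_force=1.0):
    # Trial-and-error loop: raise the grip force in small steps until the
    # tactile sensors report a stable contact, never exceeding a safe limit.
    force = step
    while force <= max_force:
        set_force(force)
        if contact_is_stable():       # tactile feedback closes the loop
            return force              # gentlest force that holds the object
        force += step
    raise RuntimeError("no stable grasp within the safe force range")

banana = ObjectModel("banana", color=(230, 200, 60), shape="cylinder",
                     affordances={"hold"})

The point of the sketch is the stopping condition: the loop settles on the smallest force that yields stable contact, which is what keeps delicate objects undamaged.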

To accomplish this, the researchers investigated which characteristics people perceive as significant in grasping actions. Their studies showed that humans rely mostly on shape and size to differentiate objects. They also examined how humans handle cubes that differ in weight, shape, and size.

The robot learns by familiarizing itself with new objects. A human researcher tells the robot hands which object on the table to inspect next, either by pointing to individual objects or by giving spoken hints about where an interesting object can be found (e.g., “behind, at left”). Two monitors display how the system, using color cameras and depth sensors, perceives its surroundings and reacts to instructions from humans. The robot’s head, called Flobi, complements the robot’s speech and actions with facial expressions; shown on one of the monitors, it follows the movements of the hands and reacts to the researchers’ instructions.
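The article does not specify how a spoken hint such as “behind, at left” is resolved against the depth sensors’ view of the table; the following sketch shows one plausible approach. The detections table, the HINTS mapping, and resolve_hint are all hypothetical names invented for this example.

import numpy as np

# Hypothetical detections: object name -> (x, y) table position in metres,
# as a depth camera might report them (x: left/right, y: near/far).
detections = {"banana": (0.2, 0.4), "cup": (-0.3, 0.6), "button": (0.0, 0.2)}

# Spoken hint words mapped to direction vectors on the table plane.
HINTS = {"left": (-1, 0), "right": (1, 0), "behind": (0, 1), "front": (0, -1)}

def resolve_hint(words, detections):
    # Combine the hint words into one direction, e.g.
    # "behind, at left" -> (-1, 1), then normalize it.
    direction = np.sum([HINTS[w] for w in words], axis=0)
    direction = direction / np.linalg.norm(direction)
    # Pick the object lying farthest along the hinted direction.
    return max(detections, key=lambda n: np.dot(detections[n], direction))

print(resolve_hint(["behind", "left"], detections))  # -> "cup"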

To understand which objects they should work with, the robot hands have to interpret not only spoken language but also gestures. The system must also be able to put itself in the human’s position and ask itself whether it has understood the instructions correctly.
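How the system combines speech with gestures, or decides when to ask back, is not described in the article; a simple sketch of such multimodal fusion with a confidence check might look like the following, with all scores, names, and the threshold invented for illustration.

def fuse(gesture_scores, speech_scores, w=0.5):
    # Combine per-object confidences from pointing and from speech.
    objects = set(gesture_scores) | set(speech_scores)
    return {o: w * gesture_scores.get(o, 0.0)
               + (1 - w) * speech_scores.get(o, 0.0) for o in objects}

def choose_or_ask(scores, threshold=0.6):
    # Commit to an object only when confident; otherwise ask for
    # confirmation, mirroring the system's self-check of its understanding.
    best = max(scores, key=scores.get)
    if scores[best] >= threshold:
        return f"Inspecting the {best}."
    return f"Did you mean the {best}?"

scores = fuse({"banana": 0.7, "cup": 0.4}, {"banana": 0.5, "cup": 0.6})
print(choose_or_ask(scores))  # banana scores 0.6 -> "Inspecting the banana."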

The project can benefit self-learning robots in industry, contributing to the future industrial use of complex, multi-fingered robot hands that today are too costly or too complex to deploy.
