A new grasp system with robotic hands works without prior knowledge of the objects' characteristics, learning instead by trial and error. The robot features two hands modeled on human hands in both shape and mobility. The robot brain that controls the hands has to learn how everyday objects such as pieces of fruit or tools can be distinguished on the basis of their color or shape, as well as what matters when attempting to grasp them; for example, a banana can be held, and a button can be pressed. The system learns to recognize such action possibilities as characteristics of the object, and constructs a model for interacting with and re-identifying it.
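As a rough illustration of this idea, the sketch below shows one way such an object model could be organized: visual features for re-identifying the object, plus per-action success statistics learned by trial and error. The names, data structures, and feature choices are illustrative assumptions, not the project's actual software.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectModel:
    """Illustrative object model: visual features plus learned affordances."""
    name: str
    color_hist: list            # e.g. a coarse color histogram from the cameras
    shape_descriptor: list      # e.g. size/curvature features from the depth data
    # Per-action success statistics, updated by trial and error
    affordances: dict = field(default_factory=dict)   # action -> (successes, attempts)

    def record_attempt(self, action: str, success: bool) -> None:
        successes, attempts = self.affordances.get(action, (0, 0))
        self.affordances[action] = (successes + int(success), attempts + 1)

    def success_rate(self, action: str) -> float:
        successes, attempts = self.affordances.get(action, (0, 0))
        return successes / attempts if attempts else 0.0

# Example: a banana affords holding; success statistics accumulate over attempts
banana = ObjectModel("banana", color_hist=[0.1, 0.8, 0.1], shape_descriptor=[0.20, 0.05])
banana.record_attempt("hold", success=True)
banana.record_attempt("hold", success=False)
print(banana.success_rate("hold"))   # 0.5 after one success in two attempts
```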

To accomplish this, the researchers investigated which characteristics people perceive as significant in grasping actions. They discovered that humans rely mostly on shape and size to differentiate objects; weight hardly plays a role. Studies were also done to determine how humans handle cubes that differ in weight, shape, and size.

Even though the robot's hands are strong enough to crush an apple, they modulate their force to achieve a fine-touch grip that won't damage delicate objects. This is made possible by connecting tactile sensors with intelligent control software. (Bielefeld University)
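The control idea can be sketched roughly as follows: increase grip force only until the tactile sensors report firm contact, and never beyond a safety limit. The sensor interface, thresholds, and force values below are assumptions chosen for illustration, not the project's actual controller.

```python
def grip(read_tactile_pressure, set_motor_force,
         contact_threshold=0.5, max_force=5.0, step=0.2):
    """Close the hand gently: ramp up force until tactile contact is stable.

    read_tactile_pressure() and set_motor_force() stand in for the real
    sensor and actuator interfaces; the numeric values are illustrative.
    """
    force = 0.0
    while force < max_force:
        set_motor_force(force)
        if read_tactile_pressure() >= contact_threshold:
            return force          # firm enough to hold, gentle enough not to crush
        force += step
    set_motor_force(max_force)
    return max_force

# Toy usage with a mocked sensor/actuator pair
pressure = {"value": 0.0}
def fake_sensor():                # pressure rises as the hand closes on the object
    return pressure["value"]
def fake_motor(force):
    pressure["value"] = min(1.0, force / 2.0)

print(grip(fake_sensor, fake_motor))   # stops well below max_force
```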

The robot was taught to “learn” by acquiring familiarity with new objects. A human researcher instructed the robot hands which object on the table should be inspected next, either by pointing to individual objects or by giving spoken hints about where an interesting object could be found (e.g., “behind, at left”). The system perceives its surroundings with color cameras and depth sensors, and two monitors display how it interprets the scene and responds to the human's instructions. The robot's head, called Flobi, complements the robot's speech and actions with facial expressions; shown on one of the monitors, Flobi follows the movements of the hands and reacts to the researcher's instructions.
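A spoken hint such as “behind, at left” only makes sense relative to what the robot is currently attending to. The sketch below shows one simple way such a relative hint could be resolved against objects detected by the cameras and depth sensors; the scene coordinates, hint vocabulary, and scoring rule are illustrative assumptions.

```python
import math

# Hypothetical scene: object offsets (x = left/right, y = near/far) in metres,
# relative to the object the robot is currently attending to.
scene = {"apple": (-0.30, 0.40), "banana": (0.25, 0.05), "cup": (-0.05, -0.20)}

# Direction each spoken hint implies, as a vector in the same frame.
HINTS = {"at left": (-1, 0), "at right": (1, 0), "behind": (0, 1), "in front": (0, -1)}

def resolve_hint(hint_words, objects):
    """Pick the object whose offset best matches hints such as 'behind, at left'."""
    dx = sum(HINTS[w][0] for w in hint_words)
    dy = sum(HINTS[w][1] for w in hint_words)
    norm = math.hypot(dx, dy) or 1.0

    def alignment(pos):
        px, py = pos
        return (px * dx + py * dy) / (norm * (math.hypot(px, py) or 1.0))

    return max(objects, key=lambda name: alignment(objects[name]))

print(resolve_hint(["behind", "at left"], scene))   # -> 'apple'
```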

In order to understand which objects they should work with, the robot hands have to be able to interpret not only spoken language but also gestures. The system also has to be able to put itself in the position of its human partner and ask itself whether it has understood the instructions correctly.
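One plausible way to realize this is to fuse the evidence from speech and gesture and, when the result is ambiguous, ask a confirming question rather than act. The sketch below illustrates that idea; the fusion weights and ambiguity margin are assumptions made purely for illustration.

```python
def choose_target(speech_scores, gesture_scores, margin=0.2):
    """Fuse speech and pointing cues; ask back if the choice isn't clear.

    speech_scores / gesture_scores map object names to how well each cue
    matches (0..1). The averaging rule and margin are illustrative only.
    """
    combined = {obj: 0.5 * speech_scores.get(obj, 0.0)
                     + 0.5 * gesture_scores.get(obj, 0.0)
                for obj in set(speech_scores) | set(gesture_scores)}
    ranked = sorted(combined, key=combined.get, reverse=True)
    best = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else None
    if runner_up and combined[best] - combined[runner_up] < margin:
        return f"Do you mean the {best} or the {runner_up}?"   # ask for clarification
    return f"Inspecting the {best}."

print(choose_target({"apple": 0.9, "cup": 0.3}, {"apple": 0.8, "cup": 0.4}))  # clear case
print(choose_target({"apple": 0.6, "cup": 0.5}, {"apple": 0.5, "cup": 0.6}))  # asks back
```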

The project can benefit self-learning robots in industry, paving the way for multi-fingered robot hands that are currently too costly and complex for industrial use.

For more information, contact Dr. Helge Ritter at +49 521 106-12123.