Wearing a sensor-packed glove while handling a variety of objects, researchers compiled a dataset that enables an AI system to recognize objects through touch alone. The information could help robots identify and manipulate objects and may aid in prosthetics design. The tactile sensing system could also be combined with traditional computer vision and image-based datasets to give robots a more human-like understanding of how to interact with objects.

A low-cost, sensor-packed glove captures pressure signals as humans interact with objects. The glove can be used to create high-resolution tactile datasets that robots can leverage to better identify, weigh, and manipulate objects. (MIT)

The low-cost knitted glove, called scalable tactile glove (STAG), is equipped with about 550 tiny sensors across nearly the entire hand. Each sensor captures pressure signals as humans interact with objects in various ways. A neural network processes the signals to “learn” a dataset of pressure-signal patterns related to specific objects. Then, the system uses that dataset to classify the objects and predict their weights by feel alone, with no visual input needed.

The dataset was compiled by using STAG to handle 26 common objects, including a soda can, scissors, a tennis ball, a spoon, a pen, and a mug. Using the dataset, the system predicted the objects’ identities with up to 76 percent accuracy and estimated the correct weights of most objects to within about 60 grams. Similar sensor-based gloves in use today typically contain only around 50 sensors and capture far less information.

The dataset was also used to measure how regions of the hand cooperate during object interactions; for example, when someone uses the middle joint of their index finger, they rarely use their thumb, but pressure at the tips of the index and middle fingers always corresponds to thumb usage. Prosthetics manufacturers could potentially use this information to choose optimal spots for placing pressure sensors and to customize prosthetics to the tasks and objects a person regularly handles.
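As a rough illustration of how such co-activation could be measured, the sketch below correlates the average pressure in different hand regions across a stack of tactile-map frames. The grid size, region masks, and data are placeholders for illustration; the researchers' actual analysis may differ.

```python
# Illustrative sketch: how strongly do hand regions co-activate across frames?
# Region masks and frame data are hypothetical stand-ins, not the real dataset.
import numpy as np

# frames: (N, 32, 32) stack of tactile-map frames (dummy data here)
frames = np.random.rand(1000, 32, 32)

# Assumed region masks: each maps a region name to a boolean grid mask.
regions = {
    "thumb_tip": np.zeros((32, 32), dtype=bool),
    "index_tip": np.zeros((32, 32), dtype=bool),
    "index_middle_joint": np.zeros((32, 32), dtype=bool),
}
regions["thumb_tip"][0:6, 0:6] = True
regions["index_tip"][0:6, 8:14] = True
regions["index_middle_joint"][8:14, 8:14] = True

# Mean pressure per region per frame, then pairwise correlation across frames.
activations = np.stack([frames[:, mask].mean(axis=1) for mask in regions.values()], axis=1)
corr = np.corrcoef(activations, rowvar=False)
print(dict(zip(regions, corr[0])))  # correlation of thumb tip with each region
```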

STAG is laminated with an electrically conductive polymer that changes resistance to applied pressure. The researchers sewed conductive threads through holes in the conductive polymer film, from fingertips to the base of the palm. The threads overlap in a way that turns them into pressure sensors. When someone wearing the glove feels, lifts, holds, and drops an object, the sensors record the pressure at each point. The threads connect from the glove to an external circuit that translates the pressure data into tactile maps, which are essentially brief videos of dots growing and shrinking across a graphic of a hand. The dots represent the location of pressure points, and their size represents the force — the bigger the dot, the greater the pressure.
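To make the idea of a tactile-map frame concrete, here is a minimal Python sketch that converts per-sensor resistance readings into a 2D pressure map. The grid resolution, sensor layout, and calibration constants are assumptions for illustration, not details of the researchers' hardware.

```python
# Hypothetical sketch: turning raw per-sensor resistance readings into a
# 2D pressure map (one "tactile map" frame). Layout and constants are assumed.
import numpy as np

GRID_SHAPE = (32, 32)   # assumed hand-grid resolution
N_SENSORS = 548         # "about 550" sensors, per the article

# Assumed lookup: sensor index -> (row, col) position on the hand grid.
sensor_positions = {i: (i // GRID_SHAPE[1], i % GRID_SHAPE[1]) for i in range(N_SENSORS)}

def resistance_to_pressure(resistance, r_min=1e2, r_max=1e5):
    """Map resistance (ohms) to a normalized pressure in [0, 1].

    The piezoresistive film's resistance drops as pressure rises, so lower
    resistance means higher pressure. The constants are placeholders.
    """
    r = np.clip(resistance, r_min, r_max)
    return (np.log(r_max) - np.log(r)) / (np.log(r_max) - np.log(r_min))

def build_tactile_frame(readings):
    """Assemble one tactile-map frame from a vector of sensor readings."""
    frame = np.zeros(GRID_SHAPE, dtype=np.float32)
    for idx, resistance in enumerate(readings):
        row, col = sensor_positions[idx]
        frame[row, col] = resistance_to_pressure(resistance)
    return frame

# Example: one frame built from simulated readings.
readings = np.random.uniform(1e2, 1e5, size=N_SENSORS)
frame = build_tactile_frame(readings)
print(frame.shape, frame.max())
```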

From those maps, the researchers compiled a dataset of about 135,000 video frames from interactions with 26 objects. Those frames can be used by a neural network to predict the identity and weight of objects and provide insights about the human grasp.
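A frame-level dataset like this could be organized for training in many ways; the following sketch shows one plausible arrangement, with each sample being a single tactile-map frame paired with an object label. The file layout, array shapes, and field names are assumptions, not the published data format.

```python
# Hypothetical sketch of a frame-level dataset: one tactile frame per sample,
# labeled with an object ID. Shapes and file layout are illustrative only.
import numpy as np
from torch.utils.data import Dataset

class TactileFrameDataset(Dataset):
    def __init__(self, frames_path, labels_path):
        # frames: (N, 32, 32) float array; labels: (N,) int array of object IDs
        self.frames = np.load(frames_path).astype(np.float32)
        self.labels = np.load(labels_path).astype(np.int64)

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        frame = self.frames[idx][None, ...]  # add channel dim -> (1, 32, 32)
        return frame, self.labels[idx]
```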

To identify objects, the researchers designed a convolutional neural network (CNN) — which is usually used to classify images — to associate specific pressure patterns with specific objects. The idea was to mimic the way humans can hold an object in a few different ways in order to recognize it without using their eyesight.
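In the same spirit, the sketch below shows a small convolutional network that maps a single tactile-map frame to one of 26 object classes. The layer sizes and overall architecture are assumptions for illustration and are not the authors' published model.

```python
# Minimal sketch of a CNN that classifies a tactile-map frame into one of
# 26 object classes. Architecture details are assumed, not the authors' design.
import torch
import torch.nn as nn

class TactileClassifier(nn.Module):
    def __init__(self, n_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):                                 # x: (batch, 1, 32, 32)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TactileClassifier()
logits = model(torch.randn(4, 1, 32, 32))                 # four dummy frames
print(logits.shape)                                        # torch.Size([4, 26])
```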

For weight estimation, the researchers built a separate dataset of around 11,600 frames from tactile maps of objects being picked up by finger and thumb, held, and dropped. The CNN wasn’t trained on any frames it was later tested on, so it couldn’t simply learn to associate a weight with a particular object. In testing, a single frame was fed into the CNN. Essentially, the CNN picks out the pressure around the hand caused by the object’s weight, ignores pressure caused by other factors such as hand positioning to keep the object from slipping, and then calculates the weight from the relevant pressures alone. The system could be combined with the sensors already on robot joints that measure torque and force to help robots better predict object weight.
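Framed as a regression problem, the weight estimator could look like a variant of the classifier sketched above, producing a single scalar (grams) from one frame. The architecture below is an illustrative assumption, not the published model, and an untrained network will of course output meaningless values.

```python
# Hedged sketch: a regression variant that maps one tactile frame to an
# estimated weight in grams. Architecture and calibration are assumed.
import torch
import torch.nn as nn

class TactileWeightRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, 1)   # scalar weight estimate (grams)

    def forward(self, frame):                   # frame: (batch, 1, 32, 32)
        return self.head(self.features(frame).flatten(1)).squeeze(-1)

regressor = TactileWeightRegressor()
single_frame = torch.randn(1, 1, 32, 32)        # one tactile-map frame
print(regressor(single_frame))                   # predicted weight (untrained)
```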

For more information, contact Abby Abazorius at 617-253-2709.