Could touch be a new avenue for communication? Researchers from MIT and Purdue University think so and are working on a “general-purpose” tactile system that delivers speech information through symbols felt on the skin.

Professor Hong Tan led the Purdue phoneme project, developing a buzzworthy way of receiving messages through the skin.

The yearlong project was conducted with researchers from Facebook and the Massachusetts Institute of Technology, including MIT’s Dr. Charlotte Reed.

In a study, subjects wore a cuff encircling the forearm from the wrist to below the elbow. The instrument, wrapped around the test subject’s non-dominant arm, featured 24 “tactors,” small vibrating actuators that, when activated, emitted vibrations against the skin at varying strengths and positions.

The communication system relies on mapping each of the 39 English-language phonemes, the sound units that distinguish one word from another, to a distinct tactile pattern.

The sounds of consonants such as K, P, and T, for example, initiated stationary sensations on different areas of the arm, while vowels were indicated by stimulations that moved up, down, or around the forearm.
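
To make that encoding concrete, here is a minimal sketch of how such a phoneme-to-tactile mapping might be represented in software. It is only an illustration: the HapticCode structure, the symbol names, and the pattern parameters are assumptions for this article, not the team's actual codes.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class HapticCode:
    """One tactile symbol: a static buzz for a consonant, a motion for a vowel."""
    static_site: Optional[Tuple[int, int]] = None        # (row, col) on the 4x6 tactor array
    motion_path: Optional[List[Tuple[int, int]]] = None  # ordered positions for vowels
    duration_ms: int = 200    # symbol length
    intensity: float = 1.0    # normalized vibration strength

# Hypothetical assignments; column 0 sits at the wrist, column 5 at the elbow.
PHONEME_CODES = {
    "P": HapticCode(static_site=(0, 0)),                 # stationary, near the wrist
    "T": HapticCode(static_site=(0, 3), intensity=0.8),  # stationary, mid-forearm
    "K": HapticCode(static_site=(0, 5)),                 # stationary, near the elbow
    "AE": HapticCode(motion_path=[(1, 0), (1, 2), (1, 4)],  # vowel: a sweep along the arm
                     duration_ms=300),
}
```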

“We used anything that can help you establish the mapping and recognize and memorize it,” said Prof. Tan in a June press release from Purdue University. “This is based on a better understanding of how to transmit information through the sense of touch in a more intuitive and effective manner.”

Twelve subjects learned haptic symbols through the phoneme-based method, picking up 100 words in about 100 minutes.

The research results were presented in June at the EuroHaptics 2018 conference in Pisa, Italy.

Although the system is intended for people with all levels of sensory ability, the researchers see it someday offering sensory substitution aids for the deaf and blind.

Prof. Tan and Dr. Reed answered questions, via email, regarding the kind of haptic future they envision.

Tech Briefs: What inspired this work?

Hong Tan, professor of electrical and computer engineering, Purdue University, and Dr. Charlotte Reed, MIT: This work was inspired first by the communication capabilities of persons with deafness and blindness. We have known for a long while that the skin is capable of communication, having documented the performance of persons with deaf-blindness who are able to communicate using natural methods of haptic communication.

In one of these methods, known as Tadoma, the deaf-blind receiver places one or both hands on the face of a person who is talking. By feeling the movements and actions of the face and neck during speech, Tadoma users are able to understand conversational speech with remarkable levels of performance. Our goal has been to translate this type of performance into a tactile device, which has not yet been achieved.

Tech Briefs: What does the technology look like?

Prof. Tan and Dr. Reed: Over the course of the past year, we have conducted a series of experiments with a tactile device consisting of a 4x6 array of vibrators fitted around the user’s forearm between the elbow and the wrist.

Postdoctoral student Yang Jiao communicates words to Jaeong Jung, an undergraduate student, using phoneme signals transmitted to the haptic device on his forearm. (Image Credit: Purdue University/Brian Huchel)

Tech Briefs: How are the words created in the system?

Prof. Tan and Dr. Reed: Our approach has been to develop a haptic code for each of the 39 phonemes of the English language. These phonemes can then be put together to form words and sentences. This approach assumes that we have a speech recognizer at the front end of our system, whose output will be a string of phonemes. The phonemes can then be presented as haptic codes on the tactile device.
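
A minimal sketch of that pipeline appears below. It reuses the hypothetical PHONEME_CODES table from the earlier sketch; the phonemes_for lookup stands in for the speech-recognizer front end the researchers describe, and play stands in for a real tactor driver.

```python
import time

# Assumes the HapticCode / PHONEME_CODES sketch shown earlier in this article.

def phonemes_for(word: str):
    # Stand-in for the recognizer/pronunciation front end; a real system
    # would consult a pronouncing dictionary or an ASR phoneme stream.
    lookup = {"cat": ["K", "AE", "T"], "pat": ["P", "AE", "T"]}
    return lookup[word.lower()]

def play(code, gap_ms: int = 50) -> None:
    # Stand-in for a tactor driver: render one symbol, then pause briefly
    # so adjacent symbols stay distinguishable.
    if code.static_site is not None:
        print(f"buzz tactor {code.static_site} for {code.duration_ms} ms")
    else:
        print(f"sweep {code.motion_path} over {code.duration_ms} ms")
    time.sleep((code.duration_ms + gap_ms) / 1000)

def transmit(word: str) -> None:
    """Word -> phoneme string -> haptic codes played in sequence on the arm."""
    for ph in phonemes_for(word):
        play(PHONEME_CODES[ph])

transmit("cat")  # K (elbow buzz), AE (sweep), T (mid-forearm buzz)
```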

Tech Briefs: What were some of the challenges you faced?

Prof. Tan and Dr. Reed: Our first challenge in this project was to develop a code that maps each phoneme as a symbol on the haptic device. The codes for consonants are static symbols which map some of the features of speech production to the tactile array. For example, sounds that are made at the front of the mouth such as P and B are presented near the wrist, and sounds made at the back of the mouth such as K and G are presented near the elbow. The codes for vowels all consist of moving patterns made on the vibrators at different locations and directions on the tactile array.
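
That layout suggests a simple rendering rule. The sketch below encodes it under assumed coordinates, with column 0 at the wrist and column 5 at the elbow; the consonant groupings and the vowel trajectory are illustrative guesses, not the published code set.

```python
ARRAY_ROWS, ARRAY_COLS = 4, 6   # the 24-tactor forearm sleeve

FRONT_OF_MOUTH = {"P", "B"}     # lip sounds: rendered near the wrist
BACK_OF_MOUTH = {"K", "G"}      # back-of-mouth sounds: rendered near the elbow

def consonant_column(phoneme: str) -> int:
    """Static consonant codes: place of articulation maps to forearm position."""
    if phoneme in FRONT_OF_MOUTH:
        return 0                 # wrist end
    if phoneme in BACK_OF_MOUTH:
        return ARRAY_COLS - 1    # elbow end
    return ARRAY_COLS // 2       # other places land mid-forearm (assumption)

def vowel_sweep(row: int = 1, toward_elbow: bool = True):
    """Moving vowel codes: successive tactors traced along the forearm."""
    cols = range(ARRAY_COLS) if toward_elbow else reversed(range(ARRAY_COLS))
    return [(row, c) for c in cols]
```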

Tech Briefs: What were the experiments like?

Prof. Tan and Dr. Reed: Our experiments have examined the ability of participants to identify the 39 haptic phonemes individually, and then to identify words and pairs of words composed of the haptic symbols. We have conducted tests with over 60 participants over the course of our work. Most participants are able to identify the 39 phonemes with a high degree of accuracy (less than 10% error rate) within several hours of training, and are also successful at identifying words.

For a set of 100 words, accuracy was greater than 90% correct. It becomes more difficult to identify words as the vocabulary increases. However, highly trained participants are able to identify 500 words with an accuracy of 85%. We feel that this research has been successful in advancing the development of a general-purpose haptic communication system.

Tech Briefs: What kind of training is required to understand the haptic feedback?

Prof. Tan and Dr. Reed: Training is crucial in the use of a haptic device. For one thing, most people are not accustomed to attending to tactile information for communication. In addition, users of the device must learn to associate the arbitrary haptic codes with the correct phonemes.

We took a number of steps to aid people in learning how to use the tactile device to understand speech. In the course of our studies, we found that it was helpful to introduce the phonemes in small groups and use them to form words, then gradually add more phonemes and words until the full set was available.

Within each stage, participants were first given the opportunity to select phonemes and words for presentation through the device and/or to see a visual display that corresponded to the location of the vibrators on their forearm. They were then given tests in which they had to identify the phoneme or word that had been presented, and were told whether their response was right or wrong. In the case of incorrect responses, they were told what the correct answer was, and given the opportunity to play both stimuli through the device.
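
Structurally, each such test block is a present-respond-feedback loop. Here is a sketch of one block as just described; present and get_response are hypothetical callbacks standing in for the device and the participant.

```python
import random

def training_block(symbols, present, get_response):
    """One identification block with feedback. present(symbol) plays a haptic
    code on the device; get_response() returns the participant's guess.
    Both callbacks are hypothetical stand-ins."""
    correct = 0
    for target in random.sample(list(symbols), len(symbols)):
        present(target)
        answer = get_response()
        if answer == target:
            correct += 1
        else:
            # On an error, reveal the answer and replay both stimuli so the
            # participant can feel the difference.
            print(f"Correct answer: {target} (you answered {answer})")
            present(target)
            present(answer)
    return correct / len(symbols)  # proportion correct for this block
```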

Tech Briefs: How much time is required to train an individual?

Prof. Tan and Dr. Reed: We found that the amount of time required to train participants in the use of the device was on the order of hours or tens of hours — a reasonable amount of time for someone who has a need for the device. We should also stress that the device can be useful for a small vocabulary size (e.g., 100 words) in as little time as about 100 minutes. As the user becomes more proficient, vocabulary size and communication rate continue to increase. So, a user’s ability to receive English through the skin grows with time.

Tech Briefs: What is most exciting to you about your work with haptic communications?

Prof. Tan and Dr. Reed: We have always known that communication through the skin is possible. We now have data as evidence that it can be done. Our work expands the horizons of communication for anyone, but it may also provide benefits to persons with profound sensory deficits by giving them greater access to the world around them.

What do you think? Do you see a future for haptic communication? Share your thoughts below.