People with amyotrophic lateral sclerosis (ALS) suffer from a gradual decline in their ability to control their muscles. As a result, they often lose the ability to speak, making it difficult to communicate with others. A team of researchers has designed a stretchable, skin-like device that can be attached to a patient’s face to measure small movements such as a twitch or a smile. Using this approach, patients could communicate a variety of sentiments with small movements that are measured and interpreted by the device.

The researchers hope that the device will allow patients to communicate in a more natural way, without having to deal with bulky equipment. The soft, disposable, wearable sensor is thin and can be camouflaged with makeup to match any skin tone, making it unobtrusive. In initial tests on two ALS patients, the device accurately distinguished three different facial expressions: a smile, an open mouth, and pursed lips.

The device consists of four piezoelectric sensors embedded in a thin silicone film. The sensors, which are made of aluminum nitride, detect mechanical deformation of the skin and convert it into a voltage that can be easily measured. All of these components are easy to mass-produce, so the researchers estimate that each device would cost around $10.
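As a rough illustration of that conversion, the sketch below turns raw sensor voltages into strain estimates by dividing by a sensitivity constant. The constant and names are hypothetical placeholders, not published specifications of the device.

```python
# Minimal sketch of reading the piezoelectric sensors: the voltage
# produced is roughly proportional to the strain, so dividing by a
# sensitivity constant recovers a strain estimate. The constant below
# is a hypothetical placeholder, not a specification of the device.

SENSITIVITY_V_PER_STRAIN = 250.0  # hypothetical: volts per unit strain

def voltages_to_strain(voltages):
    """Convert raw voltages from the four sensors into strain estimates."""
    return [v / SENSITIVITY_V_PER_STRAIN for v in voltages]

# Example readings, in volts, from the four sensors in the silicone film.
print(voltages_to_strain([0.012, 0.034, 0.005, 0.021]))
```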

The team used a process called digital image correlation on healthy volunteers to help them select the most useful locations to place the sensors. They painted a random black-and-white speckle pattern on the face and then took many images of the area with multiple cameras as the subjects performed facial motions such as smiling, twitching the cheek, or mouthing the shape of certain letters. Software then analyzed how the small dots move in relation to one another to determine the strain experienced in each region of the skin.
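The sketch below illustrates the core idea of digital image correlation under simplified assumptions: a small "subset" of the speckle pattern is matched between two frames by normalized cross-correlation to measure its displacement. Real DIC software does this densely across the image and differentiates the displacement field to obtain strain; the synthetic images here are stand-ins for camera frames of the speckled skin.

```python
import numpy as np
import cv2  # OpenCV stands in here for dedicated DIC software

rng = np.random.default_rng(0)
# Synthetic speckle pattern, lightly blurred so it resembles painted dots.
ref = cv2.GaussianBlur((rng.random((200, 200)) * 255).astype(np.uint8), (5, 5), 0)

# Hypothetical deformation: shift the whole pattern 3 pixels to the right.
deformed = np.roll(ref, 3, axis=1)

# Take a 31x31 subset from the reference frame and find where it best
# matches in the deformed frame via normalized cross-correlation.
y0, x0, half = 100, 100, 15
subset = ref[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
score = cv2.matchTemplate(deformed, subset, cv2.TM_CCORR_NORMED)
_, _, _, (xm, ym) = cv2.minMaxLoc(score)
print("measured displacement:", (xm - (x0 - half), ym - (y0 - half)))  # -> (3, 0)
```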

They also used the measurements of skin deformation to train a machine-learning algorithm to distinguish among a smile, an open mouth, and pursed lips. Testing the devices with two ALS patients, they achieved about 75 percent accuracy in distinguishing these movements; in healthy subjects, the accuracy was 87 percent.
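The article does not specify which model the team trained, so the sketch below stands in with an off-the-shelf support-vector classifier on synthetic four-sensor features; the data generation is purely illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Sketch of the classification step: features derived from the four
# sensors' strain signals are mapped to one of three expressions.
# Both the synthetic data and the choice of classifier are assumptions.

rng = np.random.default_rng(1)
LABELS = ["smile", "open mouth", "pursed lips"]

# Fake dataset: 90 trials x 4 features (e.g., peak strain per sensor),
# with each expression producing a different characteristic pattern.
centers = rng.normal(size=(3, 4))
X = np.vstack([c + 0.3 * rng.normal(size=(30, 4)) for c in centers])
y = np.repeat(np.arange(3), 30)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("prediction:", LABELS[clf.predict(X_test[:1])[0]])
```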

Based on these detectable facial movements, a library of phrases or words could be created, with each message corresponding to a different combination of movements; in principle, thousands of messages could be encoded this way. The information from the sensor is sent to a handheld processing unit that analyzes it with the algorithm trained to distinguish facial movements. In the current prototype, this unit is wired to the sensor, but the connection could also be made wireless for easier use.
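Here is a sketch of what such a phrase library might look like, mapping sequences of classified movements to messages. The sequences and phrases are invented examples, not the researchers' actual vocabulary.

```python
# Hypothetical phrase library: each sequence of detected movements
# decodes to a message. With longer sequences, the number of possible
# messages grows combinatorially.

PHRASES = {
    ("smile",): "Yes",
    ("pursed lips",): "No",
    ("open mouth",): "I need help",
    ("smile", "smile"): "Thank you",
    ("pursed lips", "open mouth"): "I'm thirsty",
}

def decode(movements):
    """Translate a sequence of classified movements into a phrase."""
    return PHRASES.get(tuple(movements), "<unrecognized sequence>")

print(decode(["smile", "smile"]))  # -> Thank you
print(decode(["open mouth"]))      # -> I need help
```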

In addition to helping patients communicate, the device could also be used to track the progression of a patient’s disease or to measure whether treatments they are receiving are having any effect.

For more information, contact Abby Abazorius at 617-253-2709.