Distinguishing Between Human Silhouettes Through Walls Using RF Signals
Since 2013, researchers at MIT's Computer Science and Artificial Intelligence Lab (CSAIL) have been developing technologies that use wireless signals to track human motion. The team has shown that it can detect gestures and body movements as subtle as the rise and fall of a person's chest from the other side of a house, allowing a firefighter to determine whether there are survivors inside a burning building. Now, by testing different human subjects and using metrics such as height and body shape to create concrete 'silhouette fingerprints' for each person, researchers can use wireless reflections to differentiate between individuals from the other side of a wall. By tracking the silhouette, the device can trace a person's hand as they write in the air and even distinguish between 15 different people through a wall with nearly 90 percent accuracy. The device works by transmitting wireless signals that traverse the wall and reflect off a person's body back to the device. The device captures these reflections and analyzes them in order to see the person's silhouette.
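The video and project page do not include code, but the idea of reducing a reconstructed silhouette to a "fingerprint" built from height and body-shape measurements, then matching it against known subjects, can be sketched roughly as below. The feature set, the nearest-centroid matching rule, and all names in this sketch are illustrative assumptions, not the team's actual pipeline.

```python
# Illustrative sketch (not the team's code): turn a reconstructed silhouette
# heatmap into a simple "fingerprint" (height and body-shape statistics) and
# match it against previously seen subjects with a nearest-centroid rule.
import numpy as np

def silhouette_fingerprint(heatmap, threshold=0.5, pixel_size_m=0.02):
    """heatmap: 2D array of reflection strength (rows = height, cols = width)."""
    body = heatmap > threshold                      # pixels that belong to the person
    rows = np.where(body.any(axis=1))[0]
    cols = np.where(body.any(axis=0))[0]
    height = (rows.max() - rows.min() + 1) * pixel_size_m
    width = (cols.max() - cols.min() + 1) * pixel_size_m
    # Crude shape descriptor: body width at each height, resampled to 10 bins.
    widths = body.sum(axis=1)[rows.min():rows.max() + 1] * pixel_size_m
    profile = np.interp(np.linspace(0, 1, 10),
                        np.linspace(0, 1, len(widths)), widths)
    return np.concatenate(([height, width], profile))

def train_centroids(labeled_heatmaps):
    """labeled_heatmaps: dict of subject name -> list of silhouette heatmaps."""
    return {name: np.mean([silhouette_fingerprint(h) for h in maps], axis=0)
            for name, maps in labeled_heatmaps.items()}

def identify(heatmap, centroids):
    """Return the known subject whose average fingerprint is closest."""
    f = silhouette_fingerprint(heatmap)
    return min(centroids, key=lambda name: np.linalg.norm(f - centroids[name]))
```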
Transcript
00:00:00 This video describes a device that can capture the human figure through walls using RF signals. We place our device behind a wall, and it can see the silhouette of a person who walks in an adjacent room. The device works correctly even if the room is completely closed. For example, if a person stands behind the wall, the device's output looks like this.
00:00:23 In particular, the output on the right shows the background in navy blue and the various human body parts in red, orange, and yellow. Here, we can see the person's head, chest, arms, and feet. How does this work? The device operates by transmitting wireless signals that traverse the wall, reflect off the human body, and come back. At every point in time, only a subset of human body parts reflect the signal back to the device.
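The transcript does not say which radio technique the device uses, but any reflection-based system ultimately maps the round-trip time of a returning signal to the distance of the reflecting body part. A minimal, purely illustrative calculation:

```python
# Illustrative only: a reflection that returns after a delay tau traveled to the
# body and back, so the reflector sits at roughly c * tau / 2 from the device.
C = 3e8  # speed of light in m/s (propagation through a wall is slightly slower)

def reflector_distance_m(round_trip_delay_s: float) -> float:
    return C * round_trip_delay_s / 2.0

# Example: a reflection arriving 26.7 nanoseconds after transmission
print(reflector_distance_m(26.7e-9))  # ~4.0 m from the device
```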
00:00:49 Here, we show the output of the device as the person walks. At different points in time, different parts of the human body reflect the signal, so the device captures multiple snapshots over time. It then combines these snapshots through a reconstruction algorithm that allows the device to recover the human silhouette through the wall. Here, we can see the person's head, chest, arms, and feet.
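The reconstruction algorithm itself is not described in the video. As a drastically simplified stand-in, one can think of each snapshot as a partial heatmap and keep the strongest reflection seen at each location across frames; the real system must also compensate for the person's motion between snapshots, which this toy version ignores.

```python
import numpy as np

def combine_snapshots(snapshots):
    """snapshots: iterable of 2D arrays, each showing only the body parts that
    reflected toward the device at one instant. Toy reconstruction: keep the
    strongest reflection observed at each pixel across all frames.
    (A real reconstruction must also align frames to compensate for motion.)"""
    stacked = np.stack(list(snapshots), axis=0)
    return stacked.max(axis=0)

# Toy demonstration: three partial views of a 4x4 scene combine into one.
a = np.zeros((4, 4)); a[0, 1] = 1.0                  # head visible in frame 1
b = np.zeros((4, 4)); b[1:3, 1:3] = 0.8              # chest visible in frame 2
c = np.zeros((4, 4)); c[3, 0] = 0.6; c[3, 2] = 0.6   # feet visible in frame 3
print(combine_snapshots([a, b, c]))
```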
00:01:15 The device can distinguish between different people behind the wall. So, for example, here we asked two different people to stand behind the wall, and this is the output of our device. By training on different subjects, we can use a classifier to distinguish between them. The device can also distinguish between certain human postures. If the person stands straight, the output looks like this, while when the person stands in other postures,
00:01:40 the device's output reflects those postures. The device can also track human limbs from behind the wall. Here, we show a scenario where a person draws a shape in the air, with the output of our device shown on the right. The device can trace the person's hand with high accuracy. We compared the device's output to that of a Kinect placed directly in front of the person, and we show the Kinect's output in red. In comparison to the Kinect, our median error
00:02:09 is around 2 centimeters. For more information on how the device works, check out our Project web page.
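Assuming the roughly 2-centimeter figure is a median over per-sample distances between the device's trace and the Kinect's trace, computing it from two time-aligned trajectories is straightforward. The alignment step and the numbers below are made up for illustration.

```python
import numpy as np

def median_tracking_error_m(device_xy, kinect_xy):
    """device_xy, kinect_xy: (N, 2) arrays of hand positions in meters, sampled
    at the same instants and expressed in the same coordinate frame
    (time synchronization and frame alignment are assumed to be done already)."""
    errors = np.linalg.norm(np.asarray(device_xy) - np.asarray(kinect_xy), axis=1)
    return float(np.median(errors))

# Example with made-up positions, offset by a few centimeters per sample.
device = np.array([[0.00, 0.00], [0.10, 0.05], [0.20, 0.12]])
kinect = np.array([[0.01, 0.01], [0.12, 0.05], [0.20, 0.10]])
print(median_tracking_error_m(device, kinect))  # 0.02 m, i.e. ~2 cm
```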

