Artificial intelligence (AI) has been used to teach wireless devices to sense people's postures and movement, even from the other side of a wall. RF-Pose uses a neural network to analyze radio signals that bounce off people's bodies, and creates a dynamic stick figure that walks, stops, sits, and moves its limbs as the person performs those actions.

The technology could provide a better understanding of disease progression and allow doctors to adjust medications accordingly. It could also help elderly people live more independently, while providing the added security of monitoring for falls, injuries, and changes in activity patterns. Other applications include new classes of video games in which players move around the house, or even search-and-rescue missions to help locate survivors.

One challenge the researchers had to address is that most neural networks are trained on data labeled by hand. A neural network trained to identify cats, for example, requires that people look at a big dataset of images and label each one as either “cat” or “not cat.” Radio signals, however, can't be easily labeled by humans. To get around this, the researchers collected examples using both the wireless device and a camera, gathering thousands of images of people doing activities like walking, talking, sitting, opening doors, and waiting for elevators.

The images from the camera were then used to extract the stick figures, which were shown to the neural network along with the corresponding radio signal. This combination of examples enabled the system to learn the association between the radio signal and the stick figures of the people in the scene.
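This cross-modal supervision can be sketched as a tiny teacher-student setup: a vision-based "teacher" labels each camera frame with 2D keypoints, and those labels become training targets for a "student" model that sees only the synchronized radio signal. The sketch below is purely illustrative, using synthetic data and a simple linear student; the shapes, the `camera_teacher` stand-in, and the training details are assumptions for demonstration, not RF-Pose's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two synchronized modalities: each "frame"
# yields an RF feature vector plus camera-derived 2D keypoints.
n_frames, rf_dim, n_keypoints = 500, 32, 14
true_map = rng.normal(size=(rf_dim, 2 * n_keypoints))  # hidden RF-to-pose relation
rf_features = rng.normal(size=(n_frames, rf_dim))

def camera_teacher(rf):
    # Stand-in for a vision model that labels each frame with keypoints
    # (x, y per joint); in reality the labels come from camera images.
    return rf @ true_map + 0.01 * rng.normal(size=(rf.shape[0], 2 * n_keypoints))

targets = camera_teacher(rf_features)  # "stick figure" labels from the camera

# Student: a linear model trained on RF features only, by gradient descent.
W = np.zeros((rf_dim, 2 * n_keypoints))
for _ in range(200):
    pred = rf_features @ W
    grad = rf_features.T @ (pred - targets) / n_frames
    W -= 0.05 * grad

# After training, the student predicts poses from RF alone -- no camera needed.
mse = np.mean((rf_features @ W - targets) ** 2)
print(f"training MSE: {mse:.4f}")
```

The key idea the toy example preserves is that the camera is needed only at training time to generate labels; once the association is learned, pose estimates come from the radio signal alone.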

After training, RF-Pose was able to estimate a person's posture and movements without cameras, using only the wireless reflections that bounce off people's bodies. Notably, since cameras can't see through walls, the network was never explicitly trained on data from the other side of a wall, yet it can still sense people there because radio signals pass through. Besides sensing movement, the wireless signals were used to accurately identify somebody 83 percent of the time out of a lineup of 100 individuals. This ability could be particularly useful in search-and-rescue operations, when it may be helpful to know the identity of specific people.
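To see what "identifying somebody out of a lineup of 100" means in practice, consider a simple nearest-template matcher: each person is enrolled with a characteristic feature vector, and a new observation is attributed to whichever template it lies closest to. This is a generic illustration with synthetic data, not the method used in the RF-Pose work; the feature dimensions and noise levels are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: each of 100 people has a characteristic "signature"
# (think body shape and gait cues); observations are noisy versions of it.
n_people, dim = 100, 16
signatures = rng.normal(size=(n_people, dim))

# Enrollment: store one noisy template per person.
templates = signatures + 0.1 * rng.normal(size=(n_people, dim))

def identify(observation):
    # Match a new observation to the nearest enrolled template.
    dists = np.linalg.norm(templates - observation, axis=1)
    return int(np.argmin(dists))

# Evaluate top-1 accuracy over one fresh, noisier probe per person.
probes = signatures + 0.3 * rng.normal(size=(n_people, dim))
correct = sum(identify(p) == i for i, p in enumerate(probes))
accuracy = correct / n_people
print(f"top-1 identification accuracy: {accuracy:.0%}")
```

Accuracy in such a setup depends on how distinctive the per-person features are relative to observation noise; the 83 percent figure reported above reflects what the real system achieved on actual radio reflections.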

In this work, the model outputs a 2D stick figure, but the team is also working to create 3D representations that could capture even smaller micromovements; for example, the system might detect whether an older person's hands are shaking regularly enough to warrant a checkup.

A key advantage of the technology is that people do not have to wear sensors or remember to charge their devices.

Watch a video demo of the technology on Tech Briefs TV here. For more information, contact Adam Conner-Simons.