Tech Briefs

By detecting nearly imperceptible changes in skin color, emerging imaging technologies have been able to extract pulse rate, breathing rate, and other vital signs from a person facing a camera. These video-based tools have struggled, however, to compensate for low-light conditions, dark skin tones, and movement.

Rice University’s Ashok Veeraraghavan, an assistant professor of electrical and computer engineering, sits in front of a webcam to have his pulse and breathing analyzed. (Image Credit: Jeff Fitlow/Rice University)
Rice University graduate student Mayank Kumar is currently leading a project to refine the video monitoring of vital signs. Kumar and his team have developed the DistancePPG algorithm to average skin-color change signals and track a subject’s entire face, including his or her nose, eyes, and mouth.

The DistancePPG software could ultimately end up on smartphones, allowing users to assess their health at any time.

Imaging Technology: How did the DistancePPG project come about?

Mayank Kumar: We visited Texas Children’s Hospital some time back and found multiple probes and patches that are used in neonatal wards to continuously monitor babies’ vital signs. These patches and probes damage the delicate skin of premature babies. That motivated us to think about developing new techniques for monitoring vital signs without touching the babies.

Rice University graduate student Mayank Kumar (Image Credit: Jeff Fitlow/Rice University)
DistancePPG uses a camera — an iPhone camera or even an ordinary webcam — to record video of the person facing it. From that video we can extract the vital signs: pulse rate as well as breathing rate.

We started by testing a few previous methods of using a camera to record vital signs. During testing, we found that these methods do not work for people with darker skin tones. They also do not work under low lighting conditions, such as ambient lighting, or when the person moves in front of the camera. Our algorithm solves these challenges, and it opens up avenues for many new applications.

Imaging Technology: How does the camera-based vital sign monitoring work?

Kumar: In camera-based methods, we look at the slight changes in skin color when the heart pumps blood. At each heartbeat, there is more blood in the face; the face becomes slightly redder. We cannot see these small color changes with the naked eye, but a normal camera can capture them, albeit with some difficulty, as the skin-color change due to blood flow is very small.
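To make this concrete, a minimal sketch of the general idea — not the DistancePPG code itself — tracks the mean green-channel intensity of a skin region over time and reads the pulse rate off the dominant peak of its spectrum. The function name and the 0.7–4 Hz pulse band are illustrative assumptions; here the camera signal is simulated rather than read from video.

```python
import numpy as np

def estimate_pulse_rate(green_means, fps):
    """Estimate pulse rate (BPM) from a time series of mean green-channel
    intensities of a skin region. Hypothetical helper, for illustration only."""
    x = green_means - np.mean(green_means)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(x))
    # Restrict to a plausible physiological band (~0.7-4 Hz, i.e. 42-240 BPM)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0  # beats per minute

# Simulated 10 s recording at 30 fps: a tiny 72 BPM color change buried in noise
rng = np.random.default_rng(0)
fps = 30
t = np.arange(fps * 10) / fps
green = 120 + 0.5 * np.sin(2 * np.pi * (72 / 60) * t) + 0.2 * rng.standard_normal(t.size)
print(estimate_pulse_rate(green, fps))  # ≈ 72 BPM
```

The key point of the sketch is the one Kumar makes: the pulse-induced color change is a fraction of an intensity level, so it is only recoverable by averaging over many pixels and looking in the frequency domain.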

The basic challenge was: How do we extract this small signal from a sea of “noise” reliably? Darker skin tones as well as low lighting conditions made the SNR (signal-to-noise ratio) even worse. We devised an algorithm to improve the signal-to-noise ratio.

Imaging Technology: What are the key innovations of this software/algorithm?

Kumar: There are three key innovations: First, the software uses a novel method to identify which regions in the face are better for estimating vital signs. As depth and density of arteries underneath the skin surface vary, the signal strength of skin-color change varies in different regions of the face. Our algorithm provides a “goodness” score for each facial region by directly analyzing the recorded video of the face, thus providing a way to reject not-so-good regions.
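A toy version of such a per-region "goodness" score — my illustration, not the published DistancePPG metric — might rate a region by how much of its spectral power sits near the pulse frequency versus elsewhere in the physiological band. The function name, band limits, and test signals are all assumptions.

```python
import numpy as np

def goodness_score(region_signal, fps, pulse_hz, half_band=0.1):
    """Toy per-region goodness score: spectral power near the pulse frequency
    divided by the remaining power in the physiological band."""
    x = region_signal - np.mean(region_signal)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)               # plausible pulse band
    near = band & (np.abs(freqs - pulse_hz) <= half_band)
    return power[near].sum() / (power[band & ~near].sum() + 1e-12)

# A strong region (e.g. forehead) vs. a weak, noisy one (e.g. near the eyes)
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0                                # 10 s at 30 fps
pulse = np.sin(2 * np.pi * 1.2 * t)                      # 72 BPM
strong = pulse + 0.1 * rng.standard_normal(t.size)
weak = 0.1 * pulse + 1.0 * rng.standard_normal(t.size)
print(goodness_score(strong, 30, 1.2) > goodness_score(weak, 30, 1.2))  # True
```

Regions whose score falls below some threshold would simply be rejected, as Kumar describes.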

From left, researchers developing the DistancePPG software include Ashutosh Sabharwal, Mayank Kumar, and Ashok Veeraraghavan. (Image Credit: Jeff Fitlow/Rice University)
Second, the new method combines the small skin-color change signals obtained from the different areas, using a weighted averaging algorithm to maximize the signal quality (signal-to-noise ratio) of the estimated signal and thereby improve the accuracy of vital sign estimation.
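The weighted-averaging step can be sketched in a few lines, again as an assumption-laden illustration rather than the actual implementation: regions with higher goodness scores contribute more to the combined signal, so noisy regions dilute it less than a plain mean would.

```python
import numpy as np

def combine_regions(region_signals, weights):
    """Weighted average of per-region skin-color change signals; weights
    (e.g. goodness scores) emphasize high-SNR regions. Illustrative sketch."""
    w = np.asarray(weights, dtype=float)
    return np.average(np.asarray(region_signals), axis=0, weights=w / w.sum())

rng = np.random.default_rng(1)
t = np.arange(300) / 30.0
pulse = np.sin(2 * np.pi * 1.2 * t)
clean = pulse + 0.1 * rng.standard_normal(t.size)   # high-goodness region
noisy = pulse + 2.0 * rng.standard_normal(t.size)   # low-goodness region

weighted = combine_regions([clean, noisy], weights=[10.0, 1.0])
plain = np.mean([clean, noisy], axis=0)
# The weighted estimate tracks the true pulse more closely than a plain mean
print(np.std(weighted - pulse) < np.std(plain - pulse))  # True
```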

Finally, the software uses an improved method for tracking the face during naturalistic motion to compensate for motion-related artifacts.

Imaging Technology: What are the more intelligent ways that DistancePPG is handling motion?

Kumar: To improve vital sign estimation accuracy under motion, one needs to work on two aspects: (i) tracking the person's facial movement in front of the camera, and (ii) compensating for changes in skin-surface reflection during motion, caused by changes in the angle between the face and the camera. We tackle the first challenge by using a deformable face model to divide the face into multiple regions, which are tracked separately. For the second aspect, we have used time and frequency filters to separate the small skin-color change signals from the large surface-reflection changes due to motion.
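The second aspect exploits a frequency separation: motion-induced reflection changes are large but slow, while the pulse signal lives in a higher band. A simple FFT-based band-pass filter — a stand-in for the time and frequency filtering Kumar describes, with an assumed 0.7–4 Hz pulse band — illustrates the idea.

```python
import numpy as np

def bandpass_pulse_band(signal, fps, low_hz=0.7, high_hz=4.0):
    """FFT-based band-pass filter: zero out frequencies outside an assumed
    pulse band, suppressing large, slow reflection changes caused by motion."""
    x = signal - np.mean(signal)
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

t = np.arange(300) / 30.0                     # 10 s at 30 fps
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)     # small 72 BPM color change
drift = 5.0 * np.sin(2 * np.pi * 0.2 * t)     # large, slow motion artifact
filtered = bandpass_pulse_band(pulse + drift, fps=30)
print(np.corrcoef(filtered, pulse)[0, 1] > 0.99)  # True
```

In practice a causal filter (e.g. a Butterworth band-pass) would be used on streaming video, but the separation principle is the same.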

