The wearable ultrasound sensor is flexible, allowing it to provide a continuous recording of cardiac activities before, during, and after exercise or movement. (Image: Xu Laboratory at UC San Diego)

A University of California San Diego-led team has developed a wearable ultrasound device — about the size of a postage stamp — that can assess both the structure and function of the human heart. The portable device can be worn for up to 24 consecutive hours and works even during strenuous exercise.

The goal is to make ultrasound more accessible to a larger population, as echocardiograms currently require highly trained technicians and bulky devices.

The wearable heart-monitoring system uses ultrasound to continuously capture images of the four heart chambers from different angles and analyzes a clinically relevant subset of the images in real time using custom-built AI technology. The work, described in Nature, builds on the team’s previous advances in wearable imaging technologies for deep tissues.

“The increasing risk of heart diseases calls for more advanced and inclusive monitoring procedures,” said Sheng Xu, Professor, UC San Diego. “By providing patients and doctors with more thorough details, continuous and real-time cardiac image monitoring is poised to fundamentally optimize and reshape the paradigm of cardiac diagnoses.”

“The device can be attached to the chest with minimal constraint to the subjects’ movement, even providing a continuous recording of cardiac activities before, during, and after exercise,” said Xiaoxiang Gao, Postdoctoral Researcher, UC San Diego.

The system gathers information through a wearable patch, measuring 1.9 cm (L) × 2.2 cm (W) × 0.09 cm (T), that is as soft as human skin. The patch sends and receives ultrasound waves, which are used to generate a continuous stream of real-time images of the heart's structure. The system can examine the left ventricle of the heart in separate bi-plane views, generating more clinically useful images than were previously available.

“A deep learning model automatically segments the shape of the left ventricle from the continuous image recording, extracting its volume frame-by-frame and yielding waveforms to measure stroke volume, cardiac output, and ejection fraction,” said Mohan Li, master’s student, UC San Diego.
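The three indices named in the quote follow directly from a per-frame left-ventricular volume waveform: stroke volume is the difference between the largest (end-diastolic) and smallest (end-systolic) volumes in a beat, ejection fraction is that difference as a fraction of the end-diastolic volume, and cardiac output multiplies stroke volume by heart rate. A minimal sketch of that arithmetic (function and variable names are illustrative, not from the paper):

```python
def cardiac_indices(volumes_ml, heart_rate_bpm):
    """Derive stroke volume (mL), ejection fraction (fraction), and
    cardiac output (L/min) from one cardiac cycle of left-ventricular
    volumes, e.g. as extracted frame-by-frame from segmented images."""
    edv = max(volumes_ml)  # end-diastolic volume: LV at its fullest
    esv = min(volumes_ml)  # end-systolic volume: LV at its emptiest
    stroke_volume = edv - esv                    # mL ejected per beat
    ejection_fraction = stroke_volume / edv      # fraction of EDV ejected
    cardiac_output = stroke_volume * heart_rate_bpm / 1000.0  # L/min
    return stroke_volume, ejection_fraction, cardiac_output

# Example: a cycle swinging between 120 mL (EDV) and 50 mL (ESV) at 70 bpm
sv, ef, co = cardiac_indices([120, 95, 70, 50, 80, 110], 70)
# sv = 70 mL, ef ≈ 0.58, co = 4.9 L/min
```

Running this over every detected beat in the continuous recording yields the per-beat waveforms of the three indices that the quote describes.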

“Specifically, the AI component involves a deep learning model for image segmentation, an algorithm for heart volume calculation, and a data imputation algorithm,” said Ruixiang Qi, master’s student, UC San Diego. “We use this machine learning model to calculate the heart volume based on the shape and area of the left ventricle segmentation.

“The imaging-segmentation deep learning model is the first to be functionalized in wearable ultrasound devices. It enables the device to provide accurate and continuous waveforms of key cardiac indices in different physical states, including static and after exercise, which has never been achieved before.”

To produce the wearable device itself, the researchers used a 1-3 piezoelectric composite bonded with Ag-epoxy backing as the transducer material for the ultrasound imager, which reduced risk and improved efficiency over previous methods. For the transmission configuration of the transducer array, the team achieved superior results with wide-beam compounding transmission. For machine-learning-based image segmentation, they evaluated nine popular models and chose FCN-32, which delivered the highest accuracy of the models tested.

Currently, the patch is connected via cables to a computer, which can download the data automatically while the patch is still on. The team has also developed a wireless circuit for the patch.

Xu aims to commercialize this technology through Softsonics, a company that he cofounded with engineer Shu Xiang. Xu recommends four immediate next steps: B-mode imaging, which allows more diagnostic capabilities involving different organs; the design of the soft imager, which allows researchers to fabricate large transducer probes that cover multiple positions simultaneously; miniaturization of the back-end system that powers the soft imager; and working toward a general machine learning model that fits more subjects.

For more information, contact Katherine Connor.