With the help of 12 transmitting antennas, Fabio da Silva's m-Widar system can spot — and image — humans and objects hidden behind a wall.
“Because we use radio signals, they go through almost everything, like concrete, drywall, wood and glass,” said physicist Fabio da Silva, who led the development of the technology while working at the National Institute of Standards and Technology (NIST). “It’s pretty cool because not only can we look behind walls, but it takes only a few microseconds of data to make an image frame.”
The new imaging method, described June 25 in Nature Communications, has a variety of potential applications beyond detecting a stranger in another room. Other possibilities for m-Widar include missile or space-debris tracking, vital-sign monitoring, and navigation help for firefighters.
The sampling happens at the speed of light, as fast as physically possible, says da Silva.
Da Silva has since applied for a patent and begun commercializing the device through the startup Wavsens LLC, based in Westminster, Colorado.
The NIST imaging method is a variation on standard radar, which sends an electromagnetic pulse, waits for the reflections, and measures the round-trip time to determine the distance to a target. Unlike typical multistatic radar, which features one transmitter and several receivers, m-Widar has the reverse setup: many transmitters and only one receiver.
“That way, anything that reflects anywhere in space, we are able to locate and image,” said da Silva.
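As a back-of-the-envelope illustration of that round-trip principle, the sketch below converts an echo delay into a distance; the 61-nanosecond figure is a hypothetical value chosen to match the roughly 30-foot demo distance mentioned later, not a measurement from the paper.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_roundtrip(t_seconds: float) -> float:
    """Standard radar ranging: the pulse travels out and back,
    so the one-way distance is half the round-trip path."""
    return C * t_seconds / 2.0

# A reflector about 30 feet (~9.1 m) away returns an echo in ~61 ns.
print(f"{range_from_roundtrip(61e-9):.1f} m")  # ≈ 9.1 m
```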
How m-Widar Works
The m-Widar technique is based on a computational method known as transient rendering, which has been used in machine learning and other forms of advanced imaging in recent years. Transient rendering employs a small sample of signal measurements to reconstruct images based on random patterns and correlations.
What kinds of patterns?
Each transmitter simultaneously emits a different pulse signature, a specific type of random sequence; the pulses interfere in space and time with those from the other transmitters and produce enough information to build an image.
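To make the idea concrete, here is a minimal correlation-imaging sketch in the spirit of transient rendering; the scene, pattern count, and ±1 random patterns are all illustrative assumptions, not the paper's actual waveforms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene (illustrative only): a 16 x 16 reflectivity map with two targets.
n = 16
scene = np.zeros((n, n))
scene[4, 5] = 1.0
scene[10, 12] = 0.8
x = scene.ravel()

# Each random pattern illuminates the whole scene; the single receiver
# records one number per pattern (the summed reflection).
m = 2000
patterns = rng.choice([-1.0, 1.0], size=(m, n * n))
measurements = patterns @ x

# Correlation reconstruction: weight each pattern by its measurement
# and average. For random +/-1 patterns this converges to the scene.
image = (patterns.T @ measurements / m).reshape(n, n)

peaks = np.unravel_index(np.argsort(image, axis=None)[-2:], image.shape)
print("brightest reconstructed pixels:", list(zip(*peaks)))  # the two targets
```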
The NIST team used the pattern-recognition method to reconstruct a scene from 1.5 billion samples per second, corresponding to an image frame rate of 366 kilohertz (366,000 frames per second), about 100 to 1,000 times more frames per second than a cellphone video camera.
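A quick consistency check on those figures, assuming (as the stated numbers imply) roughly one sample per pixel per frame:

```python
samples_per_second = 1.5e9   # 1.5 billion samples per second
pixels_per_frame = 4096      # the 4,096-pixel images described below
print(samples_per_second / pixels_per_frame)  # ≈ 366,000 frames/s (366 kHz)
```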
In a chamber, the NIST team demonstrated the technique by imaging a three-dimensional scene of a person moving behind drywall.
Images were made from a distance of about 30 feet through the wall, with transmitter power equivalent to 12 cellphones sending signals simultaneously.
With 12 antennas, the NIST system generated 4,096-pixel images with a resolution of about 10 centimeters across a 10-meter scene.
The transmitting antennas operated at frequencies from 200 megahertz to 10 gigahertz, roughly the upper half of the radio spectrum, which includes microwaves.
The receiver consisted of two antennas connected to a signal digitizer. The digitized data were transferred to a laptop computer and uploaded to a graphics processing unit (GPU) to reconstruct the images.
The image resolution could be improved, says da Silva, by upgrading the system using existing technology, including more transmitting antennas and faster random signal generators and digitizers.
According to da Silva, the current system also has a potential range of up to several kilometers. With some improvements, the range could extend much farther, limited only by transmitter power and receiver sensitivity.
In a short Q&A with Tech Briefs below, da Silva explains what got him "hooked" on the m-Widar technology, and what kinds of applications are possible as the system gets upgraded.
Tech Briefs: One application that I had trouble visualizing: Space junk. How can this tool be especially suited for finding orbital debris? How would it work?
Fabio da Silva: To answer this question, it helps to realize that the m-Widar works like a fast camera with ranging information embedded in each pixel. For space debris, it is important to acquire information such as the size, composition, position, and velocity of the debris particles.
So, for example, say the ISS needs to change its position to avoid a collision with some space debris that is predicted to intercept its orbit. To study the passing debris, one would need a fast camera to take a snapshot of the debris as it comes close to the ISS. This type of standoff imaging needs to be fast enough to freeze the particles in time and well enough resolved in space to analyze the debris. The m-Widar can take such images from a standoff position (say, a few kilometers away) and provide valuable information such as the size, composition, position, velocity, and even orientation of the debris particles.
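A toy illustration of how per-pixel ranging at a high frame rate yields velocity; every number here is an assumed placeholder, not data from the paper.

```python
frame_rate = 366e3        # frames per second, as demonstrated by NIST
range_frame_1 = 2_000.00  # meters to the debris in one frame (assumed)
range_frame_2 = 1_999.98  # meters one frame later (assumed)

dt = 1.0 / frame_rate
radial_velocity = (range_frame_2 - range_frame_1) / dt
print(f"radial velocity ≈ {radial_velocity:,.0f} m/s")  # ≈ -7,320 m/s (approaching)
```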
Tech Briefs: What are the components of m-Widar, and what technology component in m-Widar enables such speed?
Fabio da Silva: The components of the m-Widar are a signal source, a transmitter antenna array, a receiver antenna, a digitizer, and a signal-processing module. At first you may say that this resembles a radar design. In fact, the m-Widar is a variation on the multistatic radar system (in the m-Widar case: multiple transmitters and one receiver). In radar systems, a beam containing a pulse or pulse sequence is emitted, and the position of a reflecting object is calculated by measuring the round-trip time from the transmitter to the reflector and back to the receiver. This round-trip time is measured along one beam direction, and then that beam direction is scanned to map the volume of interest.
The m-Widar imaging algorithm modifies this radar design by scanning all relevant directions simultaneously with a very broad beam. The m-Widar also structures the beam to produce unique interference patterns at different points in space. Knowing the interference patterns allows the reflected-signal acquisition to be very short (a few microseconds for an area of a few tens of square meters).
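A minimal sketch of why the structured beam localizes reflectors: with several transmitters emitting independent random codes, each point in space sees a different combination of propagation delays, so nearby points produce distinguishable receiver signatures. The geometry, code length, and sample rate below are illustrative assumptions.

```python
import numpy as np

C = 3e8    # speed of light, m/s
FS = 1.5e9  # assumed receiver sample rate, samples/s

# Assumed geometry: 12 transmitters along a wall, one receiver at the origin.
tx_positions = np.stack([np.linspace(-3.0, 3.0, 12), np.zeros(12)], axis=1)
rx_position = np.zeros(2)

rng = np.random.default_rng(1)
codes = rng.choice([-1.0, 1.0], size=(12, 256))  # one random code per transmitter

def signature(point):
    """Receiver samples for a reflector at `point`: each transmitter's code
    arrives delayed by its transmitter -> point -> receiver path length."""
    out = np.zeros(codes.shape[1])
    for tx, code in zip(tx_positions, codes):
        path = np.linalg.norm(point - tx) + np.linalg.norm(rx_position - point)
        delay = int(round(path / C * FS))  # path delay in samples
        out[delay:] += code[: out.size - delay]
    return out

a = signature(np.array([1.0, 4.0]))
b = signature(np.array([1.5, 4.0]))  # a point half a meter away
# Low correlation means the two locations are easy to tell apart.
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```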
Tech Briefs: What application is most exciting for you to explore?
Fabio da Silva: We are currently exploring the use of the m-Widar for indoor localization. This application can help first responders and the military in mission-critical scenarios such as locating people in building fires, active-shooter situations, and collapsed structures. We are also considering applications in hypersonic platform detection and tracking, health care, nondestructive evaluation, and transportation.
Tech Briefs: What inspired you to try this single-pixel approach?
Fabio da Silva: I remember reading this article in Science (DOI: 10.1126/science.1234454) where the authors shined a number of random sampling patterns onto a 3D object and picked up the reflected light from four different angles using single-pixel detectors.
In the article you can see the image forming as the responses from the patterns accumulate. I immediately wrote a simple program to replicate the results and was blown away when I saw the image appearing on my screen. I used over 1 million sampling patterns in my program but later found out that there are algorithms, such as Compressive Sampling Matching Pursuit, that can reduce the number of samples to a few thousand. I was hooked!
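For a flavor of how such algorithms cut the sample count, here is a simplified greedy matching-pursuit sketch (a cousin of Compressive Sampling Matching Pursuit, not the full algorithm); the scene size, sparsity, and measurement count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n, k, m = 256, 3, 60  # pixels, bright pixels, measurements (far fewer than n)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = [1.0, 0.7, 0.5]

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sampling patterns
y = A @ x                                     # single-pixel measurements

# Greedily pick the pattern column that best matches the residual,
# then re-fit all picked columns by least squares.
support: list[int] = []
residual = y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

print(sorted(support), np.round(coeffs, 2))  # recovers the 3 bright pixels
```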
I also came to appreciate the fact that single-pixel cameras enable imaging at wavelengths where a multi-pixel camera sensor is too expensive or impractical. Translating this idea to radio-frequency (RF) wavelengths (in our publication, roughly between 10 cm and 100 cm) required a computational imaging model. First, I developed the model using Dirac delta functions and afterwards found out the different names it had in the literature. I ended up using the [proposed] term Transient Rendering.
Tech Briefs: What are you working on now, as it relates to m-Widar?
Fabio da Silva: At Wavsens, I am currently working on translating the laboratory version of the m-Widar into a portable device with greater range and spatial resolution. I am also working on analysis algorithms for material and geometry identification using machine learning tools.
This work was funded in part by the Public Safety Trust Fund, which provides funding to organizations across NIST, in areas including communications, cybersecurity, manufacturing, and sensors, for research on critical, life-saving technologies for first responders.