Engineers at Duke University have developed a way to extract a sequence of images from light scattered through a mostly opaque material — or even off a wall — from one long photographic exposure. The technique has applications in a wide range of fields, from security to healthcare to astronomy.
When light gets scattered as it passes through a translucent material, the emerging pattern of “speckle” looks as random as static on a television screen with no signal — but it isn't random. Because the light coming from one point of an object travels a path very similar to that of the light coming from an adjacent point, the speckle pattern from each looks very much the same, just shifted slightly.
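This shift-invariance is what makes the speckle computationally useful. One common way to exploit it in speckle-correlation imaging generally (a sketch of the broader idea, not necessarily the Duke team's specific algorithm) is to model the recorded pattern as the hidden object convolved with a random speckle "blur": the autocorrelation of the recorded pattern then approximates the autocorrelation of the object itself, which phase retrieval can turn back into an image. A toy sketch in Python, assuming NumPy and SciPy, with a random intensity pattern standing in for the scatterer:

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# Toy hidden "object": two small bright bars on a dark background.
obj = np.zeros((64, 64))
obj[28:36, 20:24] = 1.0
obj[28:36, 40:44] = 1.0

# Random intensity pattern standing in for the scatterer's speckle response.
psf = rng.random((64, 64))

# Memory-effect model: the camera sees the object convolved with that pattern.
camera = fftconvolve(obj, psf, mode="same")

def autocorr(x):
    # Autocorrelation = correlation of the zero-mean array with itself.
    x = x - x.mean()
    return fftconvolve(x, x[::-1, ::-1], mode="full")

# The camera image's autocorrelation approximates the object's autocorrelation
# (up to scale and noise); phase retrieval, not shown here, would then recover
# the object itself from that autocorrelation.
similarity = np.corrcoef(autocorr(camera).ravel(), autocorr(obj).ravel())[0, 1]
print(f"similarity of the two autocorrelations: {similarity:.2f}")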
Given enough images, astronomers once used this "memory-effect" phenomenon to create clearer pictures of the heavens through a turbulent atmosphere, as long as the object being imaged was sufficiently compact. The technique fell out of favor, however, with the development of adaptive optics, which do the same job by using adjustable mirrors to compensate for the scattering.
A few years ago, however, the memory-effect technique became popular with scientists again. Because modern cameras can record hundreds of millions of pixels at a time, only a single exposure is needed for statistical reconstruction to be workable. While the approach can reconstruct a scattered image, it has its limitations. The object has to remain motionless and the scattering medium has to be constant.
The new approach to memory-effect imaging breaks through these limitations by extracting a sequence of images from a single, long exposure. The trick is to use a coded aperture. Think of this as a set of filters that allows light to pass through some areas but not others in a specific pattern. As long as this pattern is known, scientists can computationally extract what the original image looked like.
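To see why knowing the pattern matters, consider a generic coded-aperture model (an illustration only, not the team's exact optical setup): if the camera records the scene blurred by a known binary mask, the known mask lets the blur be undone with a lightly regularized inversion. A toy Python sketch, assuming only NumPy:

import numpy as np

rng = np.random.default_rng(1)
n = 64

# Toy scene and a known random binary aperture pattern (1 = light passes).
scene = np.zeros((n, n))
scene[20:44, 30:34] = 1.0
mask = (rng.random((n, n)) > 0.5).astype(float)

# Generic coded-aperture forward model: the measurement is the scene
# circularly convolved with the known mask pattern.
measurement = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(mask)))

# Because the mask is known, a regularized Fourier-domain (Wiener-style)
# inversion recovers the scene from the coded measurement.
H = np.fft.fft2(mask)
eps = 1e-3
estimate = np.real(np.fft.ifft2(np.fft.fft2(measurement) * np.conj(H)
                                / (np.abs(H) ** 2 + eps)))

print("relative reconstruction error:",
      np.linalg.norm(estimate - scene) / np.linalg.norm(scene))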
The new technique uses a sequence of coded apertures to stamp which light is coming from which moment in time. But because all of the images pile up on a single, long photographic exposure, the resulting speckle ends up even more of a jumbled mess than usual. Since today's cameras have such extremely high resolution, however, there is still enough of a pattern to get a computational toehold and tease the images apart.
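A simplified way to picture the time-stamping (a toy forward model only, ignoring the scattering and using made-up frame and mask sizes) is that each moment in time gets its own known on/off pattern, and the camera sums all of the masked moments into one exposure; recovering the individual frames is then an inverse problem that the known masks, together with the statistical structure of the speckle, make tractable. In Python with NumPy:

import numpy as np

rng = np.random.default_rng(2)
T, n = 4, 32                       # four time slots, 32x32 toy frames

# Toy stand-ins for four backlit letters: a different bright block
# appears in each time slot.
frames = np.zeros((T, n, n))
for t in range(T):
    frames[t, 8:24, 4 + 6 * t : 8 + 6 * t] = 1.0

# A different known binary aperture pattern "stamps" each time slot.
masks = (rng.random((T, n, n)) > 0.5).astype(float)

# One long exposure: light from slot t reaches the sensor only where its
# mask is open, and the sensor sums everything it receives over time.
exposure = (masks * frames).sum(axis=0)

# One measured image now encodes T frames, so recovering them is an
# underdetermined inverse problem; the known masks plus a prior on the
# frames are what make the computational separation possible.
print("measured pixels:", exposure.size, " unknown pixels:", frames.size)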
In their experiment, a simple sequence of four backlit letters appeared one after the other behind a coded aperture and a scattering material. The shutter of a 5.5-megapixel CCD camera was left open for more than a minute during the sequence to gather the images.
While the best results were achieved with a 100-second exposure time, good results could still be obtained with much shorter exposures. After only a few seconds of processing, the computer successfully returned the individual images of a D, U, K and E from the sequence. The researchers then showed that the approach also works when the scattering medium is changed, and even when both the images and the scattering media are changing.
The best results were achieved when the letters appeared for 25 seconds each because the intensity of the backlight was not very high to begin with and was even further diminished by the coded aperture and scattering material. But with a more sensitive camera or a brighter source, according to the researchers, there's no reason the approach couldn't be used to capture live-action images.
In the medical arena, many light-based devices look to gather data through skin and other tissues — for example, a Fitbit capturing a person's pulse through their wrist. Light scattering as it travels through the skin and flowing blood cells, however, poses a challenge to more advanced measurements. This technique may provide a path forward.
The researchers are also looking to see if this approach can be used to separate different aspects of light, particularly color. For example, one could imagine using coded apertures to gain more information about a single image rather than using the technique to obtain a sequence of images.
For more information, contact Ken Kingery at