An inability to handle misty driving conditions has been one of the chief obstacles to the development of autonomous vehicular navigation systems that use visible light. These systems are preferable to radar-based systems for their high resolution and ability to read road signs and track lane markers.

The depth-sensing system was able to resolve images of objects and gauge their depth at a range of 57 centimeters. (Image: Melanie Gonick/MIT)

A system was developed that can produce images of objects shrouded by fog so thick that human vision cannot penetrate it, and can also gauge those objects' distance. The system was tested using fog produced by the vibrating motor from a humidifier immersed in a small tank of water. In fog so dense that human vision could penetrate only 36 centimeters, the system resolved images of objects and gauged their depth at a range of 57 centimeters.

The fog produced for the study is far denser than any that a human driver would have to contend with; in the real world, typical fog might afford a visibility of about 30 to 50 meters. The system performed better than human vision, whereas most imaging systems perform far worse.

The new system uses a time-of-flight camera that fires ultrashort bursts of laser light into a scene and measures the time it takes their reflections to return. On a clear day, the light's return time accurately indicates the distances of the objects that reflected it. But fog causes light to “scatter,” or bounce around in random ways. In foggy weather, most of the light that reaches the camera's sensor will have been reflected by airborne water droplets, not by the types of objects that autonomous vehicles need to avoid. And even the light that does reflect from potential obstacles will arrive at different times, having been deflected by water droplets on both the way out and the way back.
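As a rough illustration of the time-of-flight principle, the sketch below (not the authors' code; the function name and example timing are assumptions) converts a pulse's round-trip travel time into a one-way distance.

```python
# Minimal sketch of the time-of-flight distance calculation (illustrative only).
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """One-way distance to a reflector, in meters.

    The laser pulse travels out to the object and back, so the one-way
    distance is half the round-trip path length.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# A reflection arriving 4 nanoseconds after the pulse was fired corresponds to
# an object roughly 0.6 meters away, about the scale of the 57-centimeter
# range reported for the fog experiments.
print(distance_from_round_trip(4e-9))  # ~0.6 m
```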

The new system gets around this problem by using statistics. The patterns produced by fog-reflected light vary according to the fog's density. On average, light penetrates less deeply into a thick fog than it does into a light fog. No matter how thick the fog, the arrival times of the reflected light adhere to a statistical pattern known as a gamma distribution. Gamma distributions are somewhat more complex than Gaussian distributions, the common distributions that yield the familiar bell curve. They can be asymmetrical, and they can take on a wider variety of shapes. But like Gaussian distributions, they're completely described by two variables. The new system estimates the values of those variables on the fly and uses the resulting distribution to filter fog reflection out of the light signal that reaches the time-of-flight camera's sensor.
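The article does not say how the two parameters are estimated; one simple way to do it on the fly is a method-of-moments fit, sketched below with synthetic arrival times and assumed parameter values.

```python
import numpy as np

def estimate_gamma_params(arrival_times: np.ndarray) -> tuple[float, float]:
    """Estimate a gamma distribution's shape and scale from photon arrival times.

    Method of moments: for a gamma distribution, mean = shape * scale and
    variance = shape * scale**2, so shape = mean**2 / variance and
    scale = variance / mean.
    """
    mean = arrival_times.mean()
    var = arrival_times.var()
    return mean ** 2 / var, var / mean

# Synthetic "fog-scattered" arrival times in picoseconds; the recovered
# parameters should be close to the values used to generate the samples.
rng = np.random.default_rng(0)
samples = rng.gamma(shape=2.5, scale=400.0, size=10_000)
print(estimate_gamma_params(samples))  # roughly (2.5, 400.0)
```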

The system calculates a different gamma distribution for each of the 1,024 pixels in the sensor, enabling it to handle the variations in fog density that foiled earlier systems — it can deal with circumstances in which each pixel sees a different type of fog.
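As a minimal sketch of that per-pixel treatment, the snippet below (assuming a 32-by-32 pixel grid and synthetic data) applies the same method-of-moments fit independently to every pixel, so each pixel ends up with its own shape and scale parameters.

```python
import numpy as np

# Hypothetical data cube: photon arrival times (picoseconds) for each pixel of
# an assumed 32 x 32 sensor (1,024 pixels), with 5,000 detections per pixel.
rng = np.random.default_rng(1)
arrival_times = rng.gamma(shape=2.0, scale=350.0, size=(32, 32, 5_000))

# Method-of-moments gamma fit, vectorized over the pixel grid, so every pixel
# gets its own (shape, scale) pair and therefore its own fog model.
mean = arrival_times.mean(axis=-1)
var = arrival_times.var(axis=-1)
fog_shape = mean ** 2 / var   # 32 x 32 array of per-pixel shape parameters
fog_scale = var / mean        # 32 x 32 array of per-pixel scale parameters
```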

The camera counts the number of light particles, or photons, that reach it every 56 picoseconds, or trillionths of a second. The system uses those raw counts to produce a histogram — essentially a bar graph with the heights of the bars indicating the photon counts for each interval. Then it finds the gamma distribution that best fits the shape of the bar graph and simply subtracts the associated photon counts from the measured totals. What remain are slight spikes at the distances that correlate with physical obstacles.
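The sketch below puts those steps together for a single pixel, under assumed values (the number of bins, the detection threshold, and the synthetic photon data are illustrative choices, not figures from the study): it histograms arrival times into 56-picosecond bins, fits a gamma distribution, subtracts the expected fog counts, and reports the bins where residual spikes remain.

```python
import numpy as np
from scipy import stats
from scipy.signal import find_peaks

BIN_WIDTH_PS = 56.0  # camera time bin from the article; other values are assumed

def detect_obstacle_bins(arrival_times_ps: np.ndarray, n_bins: int = 256) -> np.ndarray:
    """Return indices of histogram bins whose counts exceed the fitted fog model."""
    edges = np.arange(n_bins + 1) * BIN_WIDTH_PS
    counts, _ = np.histogram(arrival_times_ps, bins=edges)

    # Fit a gamma distribution to the arrival times (the fog-scatter model)
    # and convert it into an expected photon count per 56-picosecond bin.
    shape, loc, scale = stats.gamma.fit(arrival_times_ps, floc=0.0)
    bin_probs = np.diff(stats.gamma.cdf(edges, shape, loc=loc, scale=scale))
    expected_fog = bin_probs * counts.sum()

    # Subtract the fog model; what remains should be small spikes in the bins
    # that correspond to light reflected by real obstacles.
    residual = counts - expected_fog
    peaks, _ = find_peaks(residual, height=3.0 * np.sqrt(expected_fog + 1.0))
    return peaks

# Synthetic example: mostly fog-scattered photons plus a small cluster of photons
# reflected by an object about 0.6 m away (round trip ~4,000 ps, i.e. bin ~71).
rng = np.random.default_rng(2)
fog_photons = rng.gamma(shape=2.0, scale=900.0, size=50_000)
object_photons = rng.normal(loc=4_000.0, scale=30.0, size=1_500)
times = np.concatenate([fog_photons, object_photons])
print(detect_obstacle_bins(times))
```

In practice the detection threshold and the peak-finding step would need tuning; the three-sigma rule used here is only a placeholder for whatever criterion the real system applies.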

The system was tested using a fog chamber a meter long. Inside the chamber, regularly spaced distance markers provided a rough measure of visibility, and a series of small objects was also placed: a wooden figurine, wooden blocks, and silhouettes of letters. The system was able to image these objects even when they were indiscernible to the naked eye.

For more information, contact Sara Remus at 617-253-2709.