The MIT Camera Culture group has presented a new approach to time-of-flight imaging that increases its depth resolution 1,000-fold. Resolution of that order could help make self-driving cars practical. The new approach could also enable accurate distance measurements through fog, which has proven a major obstacle to the development of self-driving cars.
At a range of 2 meters, existing time-of-flight systems have a depth resolution of about a centimeter. That's good enough for the assisted-parking and collision-detection systems on today's cars. But as the range increases, resolution goes down exponentially. Suppose a long-range scenario: you want your car to detect an object farther away so it has time to make a fast decision. You may have started at a resolution of 1 centimeter, but at a longer distance you could be down to a resolution of a foot or even 5 feet, and a mistake at that scale could lead to loss of life.
At a range of 2 meters, by contrast, the MIT researchers' system has a depth resolution of 3 micrometers. To simulate the power falloff incurred over longer distances, the researchers sent a light signal through 500 meters of optical fiber with regularly spaced filters along its length before feeding it to their system. Those tests suggest that at a range of 500 meters, the system should still achieve a depth resolution of 1 centimeter.
With time-of-flight imaging, a short burst of light is fired into a scene, and a camera measures the time it takes to return, which indicates the distance of the object that reflected it. The longer the light burst, the more ambiguous the measurement of how far it's traveled. So light-burst length is one of the factors that determines system resolution.
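The relationship described above is simple to state numerically. The sketch below is the editor's own illustration (not code from the researchers): a round-trip time converts to a distance because light covers the path twice, and the pulse duration sets a floor on depth precision.

```python
# Illustrative sketch of the time-of-flight relationship described above.
# These function names are the author's own, not from any published system.

C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to the reflector: light covers the path out and back."""
    return C * round_trip_s / 2.0

def depth_ambiguity_m(pulse_length_s: float) -> float:
    """Rough depth uncertainty contributed by a pulse of this duration."""
    return C * pulse_length_s / 2.0

# A ~13.3 ns round trip corresponds to a target about 2 meters away,
# and a 1 ns pulse alone blurs depth by roughly 15 cm.
```

Halving the round-trip time in the formula is the key step: the measured delay covers the distance to the object twice.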
The other factor, however, is detection rate. Modulators, which turn a light beam off and on, can switch a billion times a second, but today's detectors can make only about 100 million measurements a second. Detection rate is what limits existing time-of-flight systems to centimeter-scale resolution. There is, however, another imaging technique that enables higher resolution: interferometry, in which a light beam is split in two, and half of it is kept circulating locally while the other half — the sample beam — is fired into a visual scene. The reflected sample beam is recombined with the locally circulated light, and the difference in phase between the two beams — the relative alignment of the troughs and crests of their electromagnetic waves — yields a very precise measure of the distance the sample beam has traveled.
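The phase comparison at the heart of interferometry can be sketched in a few lines. This is a hedged illustration under a standard simplifying assumption: the extra distance the sample beam travels maps linearly onto the phase difference, and the measurement is ambiguous modulo one wavelength.

```python
import math

def path_difference_m(phase_shift_rad: float, wavelength_m: float) -> float:
    """Extra distance traveled by the sample beam, recovered from its phase
    difference with the reference beam (ambiguous modulo one wavelength)."""
    return (phase_shift_rad / (2.0 * math.pi)) * wavelength_m

# At a 1550 nm wavelength (a common telecom choice, assumed here for
# illustration), a half-cycle phase shift corresponds to a 775 nm path
# difference -- which is why interferometry resolves such fine depths.
```

The wavelength-scale ambiguity is also why interferometry alone can't range over meters; it pins down distance only within one wave cycle.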
But interferometry requires careful synchronization of the two light beams, which wouldn't be possible on a car because of vibrations. The team addressed that problem by combining ideas from interferometry and LIDAR with the notion of beat frequencies from acoustics.
If a time-of-flight imaging system is firing light into a scene at the rate of a billion pulses a second, and the returning light is combined with light pulsing 999,999,999 times a second, the result will be a light signal pulsing once a second — a rate easily detectable with a commodity video camera. That slow “beat” will contain all the phase information necessary to gauge distance.
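The arithmetic behind that beat is just the difference of the two frequencies, and the reason mixing produces it is a trigonometric identity. The sketch below is the editor's own illustration; the identity check uses scaled-down frequencies so floating-point cosines stay accurate.

```python
import math

f_signal = 1_000_000_000.0   # returning light pulsing at 1 GHz
f_local  =   999_999_999.0   # local modulation, 1 Hz lower
beat_hz  = abs(f_signal - f_local)  # 1 Hz: easily seen by a commodity camera

# Why mixing yields the beat: cos(a)*cos(b) = 0.5*[cos(a-b) + cos(a+b)].
# The difference term is the slow beat; the sum term is far too fast for
# any detector to follow. Verified here at scaled-down frequencies.
f1, f2, t = 1000.0, 999.0, 0.3
mixed    = math.cos(2*math.pi*f1*t) * math.cos(2*math.pi*f2*t)
expanded = 0.5*(math.cos(2*math.pi*(f1 - f2)*t) + math.cos(2*math.pi*(f1 + f2)*t))
assert math.isclose(mixed, expanded, abs_tol=1e-9)
```

The slow detector itself acts as the low-pass filter: it simply cannot respond to the roughly 2 GHz sum term, so only the 1 Hz beat survives.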
Rather than try to synchronize two high-frequency light signals, the researchers modulate the returning signal using the same technology that produced it in the first place. That is, they pulse the already pulsed light.
Gigahertz optical systems are naturally better at compensating for fog than lower-frequency systems. Fog is problematic for time-of-flight systems because it scatters light — it deflects the returning light signals so that they arrive late and at odd angles. Trying to isolate a true signal in all that noise is too computationally challenging to do on the fly.
With low-frequency systems, scattering causes a slight shift in phase that merely muddies the signal reaching the detector. But with high-frequency systems, the same delays produce phase shifts that are large relative to the signal's period, so scattered light signals arriving over different paths tend to cancel each other out. Theoretical analyses performed at the University of Wisconsin and Columbia University suggest that this cancellation will be widespread enough to make identifying a true signal much easier.
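The cancellation effect can be illustrated with a toy simulation. This is the editor's own sketch of the qualitative claim, not the Wisconsin/Columbia analysis: scattered contributions are modeled as unit phasors whose phases are randomly perturbed, and a wide phase spread (as at gigahertz modulation) drives their average toward zero.

```python
import cmath
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def mean_scattered_amplitude(n_paths: int, phase_spread_rad: float) -> float:
    """Average amplitude of n unit-strength scattered contributions whose
    phases are randomly perturbed with the given spread (std. deviation)."""
    total = sum(cmath.exp(1j * random.gauss(0.0, phase_spread_rad))
                for _ in range(n_paths))
    return abs(total) / n_paths

# Narrow phase spread (low modulation frequency): contributions pile up
# and muddy the detector. Wide spread (gigahertz modulation): random
# phasors nearly cancel, leaving the direct signal easier to pick out.
coherent  = mean_scattered_amplitude(10_000, 0.01)   # close to 1
scattered = mean_scattered_amplitude(10_000, 100.0)  # close to 0
```

The path counts and phase spreads here are arbitrary illustrative values; only the contrast between the two regimes matters.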