For drivers to trust self-driving cars, Stanford University researchers say, vehicles will need to “see” obstacles and hazards as well as – if not better than – a human can.

To improve autonomous driving capabilities, the university team is developing a laser-based imaging technology that allows cars to peek around corners and detect roadside objects like signs, safety vests, and street markers.

How the technology works: a laser, set next to a highly sensitive photon detector, fires pulses that reflect off a wall toward the hidden scene; the pulses strike the unseen objects around the bend and bounce back to the wall and then to the detector.

By measuring the light’s return time, the researchers then use a computationally efficient algorithm to sort the paths of the captured photons and recover the hidden object’s 3D structure.
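The measurement model described above can be sketched as a toy computation. The snippet below is a hypothetical illustration only – it is not the paper's light-cone-transform algorithm, and all names, geometry, and the 5 cm tolerance are invented for the example. It converts photon return times into path lengths (assuming the laser and detector aim at the same wall point) and locates a single hidden point with a naive spherical backprojection:

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def hidden_range(t_return, d_wall):
    """Convert a photon's round-trip return time into the distance
    between the wall point and the hidden object, assuming the path
    camera -> wall -> object -> wall -> camera."""
    return C * t_return / 2.0 - d_wall

def backproject(wall_points, ranges, grid):
    """Naive backprojection: each (wall point, range) pair votes for
    voxels lying on a spherical shell of that radius around the wall
    point; votes accumulate where the shells intersect."""
    volume = np.zeros(len(grid))
    for p, r in zip(wall_points, ranges):
        d = np.linalg.norm(grid - p, axis=1)
        volume += np.abs(d - r) < 0.05  # 5 cm shell tolerance (arbitrary)
    return volume

# Toy scene: one hidden point 1 m behind a 3x3 grid of wall points.
hidden = np.array([0.5, 0.5, 1.0])
wall_points = np.array([[x, y, 0.0] for x in (0.0, 0.4, 0.8)
                                    for y in (0.0, 0.4, 0.8)])
d_wall = 2.0  # camera-to-wall distance (m), simplified to a constant
t_returns = 2.0 * (d_wall + np.linalg.norm(wall_points - hidden, axis=1)) / C

# Search along a line of candidate voxels and pick the top-voted one.
grid = np.array([[0.5, 0.5, z] for z in np.linspace(0.5, 1.5, 101)])
votes = backproject(wall_points, hidden_range(t_returns, d_wall), grid)
print(grid[np.argmax(votes)])  # the top-voted voxel lies near the true point
```

Brute-force backprojection like this is what made earlier reconstructions so expensive; the efficient algorithm the researchers describe replaces it with a far faster closed-form inversion.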

“It sounds like magic but the idea of non-line-of-sight imaging is actually feasible,” said Gordon Wetzstein, assistant professor of electrical engineering and senior author of the paper describing the work, published March 5 in Nature.

At present, scan times range from two minutes to an hour, depending on conditions such as lighting and the hidden object’s reflectivity.

The team is currently finding ways to improve scan speeds and to fine-tune the system’s ability to handle the gamut of real-world conditions.

“We believe the computation algorithm is already ready for LIDAR systems,” said Matthew O’Toole, a postdoctoral scholar in the Stanford Computational Imaging Lab and co-lead author of the paper. “The key question is if the current hardware of LIDAR systems supports this type of imaging.”

O’Toole spoke with Tech Briefs about the future of autonomous driving – and what’s right around the corner.

Tech Briefs: What inspired your team to create this system?

Matthew O’Toole: The idea that it’s possible to image objects hidden from sight is technologically exciting, and involves an interesting mix of optics and computation. Our motivation for this project is to make non-line-of-sight imaging practical for real-world scenarios.

Tech Briefs: Quickly take me through the process: How is the technology able to “see the unseen?”

O’Toole: Our technology essentially turns walls into diffuse mirrors, by using the light reflecting off these surfaces to see areas hidden from view.

The process involves bouncing light off walls to illuminate hidden objects, and measuring the time it takes for light to come back in response. A reconstruction algorithm processes the measured signal to recover the 3D shape of these hidden objects.

Tech Briefs: Why has this capability been so challenging to enable?

O’Toole: In the past, the reconstruction algorithms have been extremely computationally expensive. We discovered a few computational and optical techniques to significantly reduce the computation time down from hours to seconds, making this type of imaging far more practical.

Tech Briefs: How does the system differentiate between obstacles and other objects?

O’Toole: Our system differentiates between the light interacting with visible and hidden objects by measuring the literal time it takes for individual photons to travel through an environment. The light bouncing off visible objects always comes back first, so we look for the secondary reflections that come later in time when imaging hidden objects.
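The time-gating idea O’Toole describes can be sketched in a few lines. This is a hypothetical toy model, not the actual system's pipeline: photons arriving within roughly the wall's round-trip time (plus an invented 200 ps guard band) are treated as direct wall reflections, and later arrivals are kept as the secondary bounces off hidden objects. The noise levels and counts are made up for illustration.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def gate_indirect(times, d_wall, guard=2e-10):
    """Split photon arrival times at the direct-reflection cutoff:
    anything after the wall's round-trip time (plus a small guard
    band) is treated as a secondary bounce off a hidden object."""
    cutoff = 2.0 * d_wall / C + guard
    times = np.asarray(times)
    return times[times <= cutoff], times[times > cutoff]

# Toy photon record: wall 2 m away; a hidden object adds ~1 m of extra
# path in each direction; arrival times jittered by sensor noise.
rng = np.random.default_rng(0)
d_wall = 2.0
direct = 2 * d_wall / C + rng.normal(0, 2e-11, 1000)
indirect = 2 * (d_wall + 1.0) / C + rng.normal(0, 2e-11, 20)
wall_hits, hidden_hits = gate_indirect(np.concatenate([direct, indirect]),
                                       d_wall)
print(len(wall_hits), len(hidden_hits))
```

Note how few indirect photons there are relative to the direct wall reflections – that weak secondary signal is exactly why, as O’Toole says below, acquisition currently takes minutes.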

Tech Briefs: How did you test the system?

O’Toole: We tested our system on a variety of objects and environments, including both ordinary objects and highly reflective ones like street signs. Once the measurements are captured, it takes less than a second to reconstruct the images.

Tech Briefs: When will this technology be road-ready, and what challenges stand in the way?

O’Toole: To make this technology road-ready, we still need to speed up the acquisition process. The light reflected by hidden objects tends to be very weak, and it currently takes several minutes to capture enough light for our reconstruction procedure. This issue can be addressed by upgrading our prototype with more powerful lasers and better sensors.

Tech Briefs: What other applications do you envision?

O’Toole: There are a number of potential applications for this type of technology, including in robotics and medical imaging. Specifically, it may be possible to use variations of this technology to help us see deeper into highly scattering tissue.

What do you think? Will laser-based imaging help self-driving cars someday see around corners? Share your comments below.



Transcript

00:00:00 [MUSIC PLAYING] Stanford University. The idea is that we want to image objects where we don't have direct line of sight. That is, we want to capture an image of an object where there's an occluder-- something blocking the direct view of that object. The technique works very similarly to the LIDAR systems in autonomous driving. You have a laser that shoots a very short pulse of light

00:00:30 into the scene. Some of the light is directly reflected. What we're looking for are indirect reflections where you shoot a short laser pulse into the scene, the light scatters outside the line of sight of the camera-- We're interested in this multiply scattered light. So as light reflects off the wall, interacts with this unknown object, and then comes back to our sensor, we are actually picking up information

00:00:52 about the geometry of this object that we can't directly observe. These are, at most, a few photons that we're recording, and they don't really resemble the shape of the scene that we're trying to recover-- this hidden scene. So we need to build computational reconstruction methods to try to resolve these shapes that we're looking for. We found a way to actually do this, a very memory-efficient, computationally efficient way that drastically lowers

00:01:17 the amount of resources required to perform this type of computation. So we go from basically hours to seconds. The applications of non-line-of-sight imaging in general are, of course, in autonomous driving. If your car could look around the corner, it could make decisions probably more reliably, and further ahead of time. A benefit of our algorithm, as well, is that it's compatible with existing scanning LIDAR

00:01:41 systems, so you can conceivably take our algorithm, apply it to these existing systems, and be able to perform this non-line-of-sight imaging. We're also thinking about, for example, rescue scenarios. You can think about microscopy, where you can look around structures that are very small, or aerial vehicles that could look through foliage or into buildings. So there's a lot of different applications where you want to be able to look outside the line of sight.

00:02:15 For more, please visit us at Stanford.edu.