Stanford University researchers know that, for drivers to trust self-driving cars, vehicles will need to “see” obstacles and hazards as well as – if not better than – a human can.

To improve autonomous driving capabilities, the university team is developing a laser-based imaging technology that allows cars to peek around corners and detect roadside objects like signs, safety vests, and street markers.

How the technology works: a laser, set next to a highly sensitive photon detector, fires pulses at a wall; the light reflects off the wall, strikes the unseen objects around the bend, and bounces back to the wall and then into the detector.

By measuring the light’s return time, the researchers then use a computationally efficient algorithm to sort the paths of the captured photons and recover the hidden object’s 3D structure.
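The round-trip timing at the heart of this process can be illustrated with a toy calculation (a simplified sketch with made-up numbers, not the team's actual reconstruction code): a photon's return time tells you the total path it traveled, and the extra delay of a photon that detoured via the hidden object reveals how far that object sits from the wall.

```python
# Toy time-of-flight sketch (illustrative only; not the Stanford algorithm).
# A photon's round-trip time fixes the total distance it traveled.

C = 299_792_458.0  # speed of light, m/s

def path_length(return_time_s: float) -> float:
    """Total distance traveled by a photon, given its round-trip time."""
    return C * return_time_s

def hidden_object_range(direct_time_s: float, bounce_time_s: float) -> float:
    """Estimate the wall-to-object distance from the extra delay of a
    photon that detoured via the hidden object, relative to a photon
    that bounced straight off the wall. The extra leg is traversed
    twice (wall -> object -> wall), so divide by two."""
    extra_path = C * (bounce_time_s - direct_time_s)
    return extra_path / 2.0

# A photon arriving 20 ns after the direct wall reflection traveled
# about 6 m farther, placing the hidden object roughly 3 m from the wall.
print(hidden_object_range(10e-9, 30e-9))
```

The real system repeats this measurement across many points on the wall and many photons per point, which is why the reconstruction algorithm's efficiency matters.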

“It sounds like magic, but the idea of non-line-of-sight imaging is actually feasible,” said Gordon Wetzstein, assistant professor of electrical engineering and senior author of the paper describing the work, published March 5 in Nature.

At present, scan times range from two minutes to an hour, depending on conditions such as lighting and the hidden object’s reflectivity.

The team is currently finding ways to improve scan speeds and to fine-tune the system’s ability to handle the gamut of real-world conditions.

“We believe the computation algorithm is already ready for LIDAR systems,” said Matthew O’Toole, a postdoctoral scholar in the Stanford Computational Imaging Lab and co-lead author of the paper. “The key question is if the current hardware of LIDAR systems supports this type of imaging.”

O’Toole spoke with Tech Briefs about the future of autonomous driving – and what’s right around the corner.

Tech Briefs: What inspired your team to create this system?

Matthew O’Toole: The idea that it’s possible to image objects hidden from sight is technologically exciting, and involves an interesting mix of optics and computation. Our motivation for this project is to make non-line-of-sight imaging practical for real-world scenarios.

Tech Briefs: Quickly take me through the process: How is the technology able to “see the unseen?”

O’Toole: Our technology essentially turns walls into diffuse mirrors, by using the light reflecting off these surfaces to see areas hidden from view.

The process involves bouncing light off walls to illuminate hidden objects, and measuring the time it takes for light to come back in response. A reconstruction algorithm processes the measured signal to recover the 3D shape of these hidden objects.

Tech Briefs: Why has this capability been so challenging to enable?

O’Toole: In the past, the reconstruction algorithms have been extremely computationally expensive. We discovered a few computational and optical techniques to significantly reduce the computation time down from hours to seconds, making this type of imaging far more practical.

Tech Briefs: How does the system differentiate between obstacles and other objects?

O’Toole: Our system differentiates between the light interacting with visible and hidden objects by measuring the literal time it takes for individual photons to travel through an environment. The light bouncing off visible objects always comes back first, so we look for the secondary reflections that come later in time when imaging hidden objects.
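The time-based separation O'Toole describes can be sketched as a simple gating step (an illustrative example with hypothetical arrival times, not the actual system code): photons arriving at or near the direct wall reflection are discarded, and only the later, secondary arrivals are kept as candidate returns from hidden objects.

```python
# Simplified time-gating sketch (illustrative; arrival times are made up).
# Light bouncing off visible surfaces comes back first, so photons that
# arrive later may have detoured via a hidden object.

def gate_photons(arrival_times_ns, direct_return_ns, guard_ns=1.0):
    """Keep only photon arrivals later than the direct wall reflection,
    plus a small guard interval to reject the direct pulse's tail."""
    cutoff = direct_return_ns + guard_ns
    return [t for t in arrival_times_ns if t > cutoff]

arrivals = [9.8, 10.1, 10.3, 24.5, 25.0, 31.2]  # ns, hypothetical
hidden_candidates = gate_photons(arrivals, direct_return_ns=10.0)
print(hidden_candidates)  # keeps the late arrivals: [24.5, 25.0, 31.2]
```

The guard interval here is a made-up parameter; in practice the width of the laser pulse and the detector's timing resolution set how cleanly the direct and secondary reflections can be separated.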

Tech Briefs: How did you test the system?

O’Toole: We tested our system for a variety of different objects and environments. This includes imaging both normal objects and highly reflective objects like street signs. Once the measurements are captured, it takes less than a second to reconstruct the images.

Tech Briefs: When will this technology be road-ready, and what challenges stand in the way?

O’Toole: To make this technology road-ready, we still need to speed up the acquisition process. The light reflected by hidden objects tends to be very weak, and it currently takes several minutes to capture enough light for our reconstruction procedure. This issue can be addressed by upgrading our prototype with more powerful lasers and better sensors.

Tech Briefs: What other applications do you envision?

O’Toole: There are a number of potential applications for this type of technology, including in robotics and medical imaging. Specifically, it may be possible to use variations of this technology to help us see deeper into highly scattering tissue.

What do you think? Will laser-based imaging help self-driving cars someday see around corners? Share your comments below.