A camera can, in fact, detect objects outside its line of sight.
A team at Stanford has experience doing just that. In March of 2018, the university's researchers designed a laser-based system that produced images of objects located around corners and hidden from view.
To be detected by the bouncing laser, however, an object needed to reflect light evenly and strongly; retroreflective objects, like safety apparel or traffic signs, were easily seen, while moving objects, like a bouncing ball or a running child, were harder to detect.
With an updated algorithm and a better hardware setup, the new-and-improved system from Stanford University captures light from a greater variety of surfaces, allowing it to image wider and more distant scenes than ever before, including around corners.
The camera's engineers hope the enhanced vision capabilities will support the next generation of autonomous cars, robots, and medical applications.
How It Works
The system's laser scans a wall opposite the hidden scene; the light bounces off the wall onto the hidden objects, back to the wall again, and finally returns to the camera's sensor.
By the time the laser light reaches the camera, only faint specks remain. The sensor captures each of these individual reflection points, and the scene is reconstructed from them.
The researchers drew inspiration from seismic imaging systems, which bounce sound waves off underground layers of Earth to learn about conditions beneath the surface, like deformation.
The Stanford team reconfigured its algorithm to likewise interpret bouncing light as waves emanating from the hidden objects. The algorithm is more robust to noise and can recover a wider variety of items, including those with glossy or mirror-like surface appearances.
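The article describes the wave-based reconstruction only at a high level. As a rough illustration of the time-of-flight principle it builds on, the toy backprojection below sums each measured photon histogram into the hidden-scene points whose round-trip light travel time matches the photon's arrival time. This is a sketch, not the team's algorithm: the function names, the confocal scan geometry, and the 10-picosecond bin resolution are all our own illustrative assumptions.

```python
import numpy as np

C = 3.0e8          # speed of light (m/s)
BIN_RES = 1e-11    # assumed histogram bin width: 10 ps per time bin

def backproject(transient, scan_xy, voxels):
    """Toy backprojection for confocal non-line-of-sight imaging.

    transient: (n_scan, n_bins) photon-arrival histogram per wall scan point
    scan_xy:   (n_scan, 2) laser scan positions on the wall (the z = 0 plane)
    voxels:    (n_vox, 3) candidate hidden-scene points behind the wall (z > 0)
    Returns a brightness score per voxel: bright voxels are likely surfaces.
    """
    n_scan, n_bins = transient.shape
    scores = np.zeros(len(voxels))
    for i, (sx, sy) in enumerate(scan_xy):
        # one-way distance from this wall point to every candidate voxel;
        # in a confocal setup the light makes that trip twice (out and back)
        d = np.sqrt((voxels[:, 0] - sx) ** 2
                    + (voxels[:, 1] - sy) ** 2
                    + voxels[:, 2] ** 2)
        bins = np.round(2.0 * d / C / BIN_RES).astype(int)
        valid = bins < n_bins
        # a voxel gains credit when a photon arrived at its predicted time
        scores[valid] += transient[i, bins[valid]]
    return scores
```

Backprojection is the simplest possible reconstruction; the wave-based formulation the team describes is faster and far more robust to noise and to glossy surfaces, but the underlying geometry, light's round trip from a wall point to a hidden point and back, is the same.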
The new hardware setup captures large hidden scenes outdoors, thanks to a more powerful laser and a more sensitive detector. The system scans at four frames per second and reconstructs the scene at 60 frames per second on a computer.
"The key behind the improved hardware setup is a fast pulsed laser that is 10,000 times brighter than our previous setup, and an improved single-photon detector that is more sensitive to the multiply scattered light," David Lindell, a lead researcher and graduate student in electrical engineering at Stanford University, told Tech Briefs.
"These are really significant improvements compared to our previous work, and move us towards enabling 'seeing around corners' in more practical applications, like for autonomous cars."
Beyond the self-driving car, Lindell spoke with Tech Briefs about what kinds of other practical applications are possible.
Tech Briefs: Can you bring us through a memorable test? I imagine this kind of work leads to some fun moments.
David Lindell: From the start of the project, we’ve had a goal of demonstrating non-line-of-sight imaging in more practical scenarios, and so one of the most memorable tests was actually taking the system outdoors.
We wanted to test if our system could use the side of a building as a mirror to see objects hidden around the corner. In the experiment, our laser illuminated the side of the building with incredibly short bursts of light, and our detector captured the echoes of light bouncing back to the side of the building from an assortment of hidden objects, including a large statue of a discus thrower. Our reconstruction algorithm then revealed an image of the hidden scene, uncovering the details of our discus thrower from what were essentially just images of the side of the building.
Tech Briefs: What kinds of objects can be detected with the technology?
Lindell: A main contribution of our work is an algorithm that can reveal hidden objects with different surface reflectances. While previous algorithms have been limited to imaging hidden objects that are diffuse (reflecting light in all directions), our approach is robust to the range of complex reflectances that we commonly encounter, including diffuse, glossy, and even mirror-like surfaces.
Using our new hardware setup, we’re able to capture large-scale hidden scenes of a few meters in size and moving hidden objects, like a person walking around. This is in part due to a much more powerful laser than previous hardware setups, and to a detector that is more sensitive to indirect echoes of light.
Tech Briefs: You said you want to build systems that go beyond autonomous cars and robots. Like what exactly? What kinds of applications do you envision?
Lindell: Autonomous cars may be one of the most practical applications of non-line-of-sight imaging because roadways have some of the easiest objects to detect: retroreflective objects. Street signs, lane markers, license plates, headlights, traffic cones, and construction worker vests all have retroreflective coatings that make these objects very visible in measurements captured for non-line-of-sight imaging.
We think that this technology could be very exciting for autonomous cars and robots, but also for other remote sensing applications and medical imaging. Fundamentally, our work is about trying to recover a clear image from light that has scattered and bounced around. This is a challenge that comes up in satellite-based imaging to capture the surface of the Earth, Moon, or other planets, in medical imaging to see through tissue to detect cancers or lesions, and in other domains. The most exciting thing is that we can apply these types of computational algorithms to see what was previously invisible.
Tech Briefs: Was there a kind of breakthrough moment? What was the biggest challenge to get this camera system working the way that you wanted it to, and when were you able to achieve the imagery you were looking for?
Lindell: I think the breakthrough moment was capturing the first usable measurements on the physical hardware. There's a lot of unnoticed effort that goes into getting the hardware working.
For example, I spent many hours just getting the system calibrated, including aligning the laser and detector, characterizing the detector, synchronizing the scanning system, and writing the software to capture the raw data and process it into the necessary format. When I finally got to the point where all the basic hardware modules were working together, I captured a scene with a variety of hidden objects, including a big glossy statue of a dragon and a disco ball. When I pulled up a visualization of the measurements, I saw this beautiful pattern of all the overlaid echoes of light, and it was really stunning.
Tech Briefs: How is the detector improved?
Lindell: Normally the multiply scattered light from the hidden object is difficult to observe because the detector is saturated by the very strong reflection of light from the visible wall. This causes the detector to miss much of the light returning from the hidden object which arrives later in time. The improved detector has a fast-gating capability, which essentially functions as an ultrafast electronic shutter.
So we can prevent saturation by keeping the detector turned off until the strong reflection of light from the visible wall has dissipated, and then turn the detector on just before the multiply scattered light from the hidden object arrives. Since the laser sends out short pulses of light 10 million times every second, this fast electronic switching occurs very rapidly so that we can ignore the direct reflection and capture the multiply scattered light from each laser pulse.
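The timing Lindell describes can be made concrete with a little arithmetic: at a 10 MHz repetition rate the pulses arrive 100 ns apart, while light's round trip to a wall a few meters away takes only tens of nanoseconds, so there is ample time within each pulse period to gate out the direct bounce. The sketch below is purely illustrative; the function name, the midpoint gate choice, and the distances are our own assumptions, not the Stanford system's parameters.

```python
C = 3.0e8  # speed of light (m/s)

def gate_schedule(wall_dist_m, extra_path_m, rep_rate_hz=10e6):
    """Illustrative fast-gating timeline for one laser pulse.

    wall_dist_m:  distance from the laser/detector to the visible wall
    extra_path_m: one-way distance from the wall to the hidden object
    Returns (pulse_period_s, direct_arrival_s, gate_open_s, hidden_arrival_s).
    """
    pulse_period = 1.0 / rep_rate_hz           # 100 ns at 10 MHz
    direct = 2.0 * wall_dist_m / C             # laser -> wall -> detector
    # multiply scattered light travels two extra legs: wall -> object -> wall
    hidden = direct + 2.0 * extra_path_m / C
    # keep the detector off during the strong direct bounce, then switch on
    # before the hidden-object echo arrives (midpoint chosen for illustration)
    gate_open = (direct + hidden) / 2.0
    return pulse_period, direct, gate_open, hidden
```

For a wall 2 m away and a hidden object 1 m around the corner, the direct bounce returns after about 13 ns and the hidden-object echo about 7 ns later, leaving most of the 100 ns pulse period for gated detection before the next pulse fires.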
Tech Briefs: What are the most challenging objects to detect with your system?
Lindell: The biggest challenge in non-line-of-sight imaging is the small amount of signal that we’re working with. After the light has bounced around a couple times, the number of photons that actually make it back to our detector is incredibly small. So the most challenging hidden objects to image are the ones that are dark or far away and reflect very little light back. Of course, if you’re going to deploy these systems on autonomous cars, this is something you have to handle. So we’re interested in techniques to image these hidden objects outdoors under low-signal conditions and trying to make these techniques work on the commercial LIDAR systems commonly seen on self-driving cars.