A new imaging sensor created by a team at Carnegie Mellon University and the University of Toronto allows depth cameras to operate effectively in bright sunlight. The researchers, including Srinivasa Narasimhan, CMU associate professor of robotics, developed a mathematical model to help cameras capture 3D information and eliminate unneeded light, or “noise,” that often washes out the signals necessary to detect a scene’s contours.

Srinivasa Narasimhan, CMU associate professor of robotics.

Sensor Technology: Why has capturing light been such a challenge?

Srinivasa Narasimhan: There’s no problem capturing light. In fact, the more light the better, in many cases, but here you have two competing sources of light. You have the light source that the sensor itself has, which is typically very low power, and you have the outdoor light, which is typically much higher power. You want to capture one, but not the other.

Sensor Technology: How does the sensor choose which light rays to capture?

Narasimhan: The sensor captures only the light that it is sending out, rather than anything else, by using a very low-power laser scanner or projector. It shines light on one row and captures that light. That allows us to be energy efficient.

Sensor Technology: What does your prototype look like?

Narasimhan: We have two prototypes. One has a laser projector plus a rolling shutter camera, which is the most common sensor that's out there today; most iPhone cameras use rolling shutters.

The depth-sensing camera technology captures 3D information in brightly lit scenes; a prototype senses the shape of a lit CFL bulb (above), without producing glare. (Image Credit: Carnegie Mellon University)

Rolling shutter means that you’re capturing light one row at a time. What we thought was: If a laser projector is projecting one line at a time, and the rolling shutter is capturing one line at a time, you can synchronize those two things. The camera will capture only the lines that the laser projector is sending out. That led us to this more important theory of energy efficiency. For example, with this prototype, you’re only capturing light rays that lie along a single plane. So you scan a sheet of light, and you just capture that same sheet of light.
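As a rough illustration of why that synchronization pays off, here is a toy calculation in Python. The photon rates, row count, and frame time are made-up assumptions, not values from the actual prototype; the sketch just compares a conventionally exposed row with one that is exposed only while the swept line crosses it.

```python
# Toy model of the synchronization described above. The photon rates, row
# count, and frame time are illustrative assumptions, not prototype values.

rows = 240                    # sensor rows / scan lines
ambient_rate = 500.0          # ambient (sunlight) photons per row per second, arbitrary units
laser_rate = 400.0            # laser-line photons per row per second while the line is on that row
frame_time = 1.0 / 30.0
row_time = frame_time / rows

# Conventional capture: each row is exposed for the whole frame, but the
# swept laser line only visits it for one row-time.
conventional_laser = laser_rate * row_time
conventional_ambient = ambient_rate * frame_time

# Synchronized capture: each row is exposed only while the laser line is on it.
synchronized_laser = laser_rate * row_time
synchronized_ambient = ambient_rate * row_time

print(f"conventional laser/ambient ratio: {conventional_laser / conventional_ambient:.4f}")
print(f"synchronized laser/ambient ratio: {synchronized_laser / synchronized_ambient:.2f}")
# Synchronizing the exposure to the projected line rejects ambient light by
# roughly a factor of `rows` without increasing the laser power.
```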

Sensor Technology: Why is it important for these sensors to be energy efficient?

Narasimhan: If you want to put this [technology] on a cell phone or a mobile robot, it can’t be really heavy. You really have to think about energy. Once you make light sources smaller, they have to be low power. It’s very hard to be very high power and very small. Of course, if you send a rover to the moon, a lot of the energy ends up being spent on sensing rather than on exploration or driving the robot itself. Therefore, you have to conserve energy.

Sensor Technology: How does the sensor enable 3D imaging?

Narasimhan: 3D imaging means that you’re capturing light rays that are intersecting or triangulating. If you wanted to capture only light rays that maybe bounce off three times, or two times, or ten times in the scene, those kinds of things are not easy to do now. With this kind of technology, you can capture and choose the particular light rays, even exotic ones, that you could not before. Ordinary cameras and ordinary illumination systems blast light everywhere, and then capture light from everywhere. That’s usually not a great thing to do.
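The triangulation Narasimhan mentions is the usual projector-camera geometry. Here is a minimal sketch, assuming a rectified setup with a horizontal baseline; the focal length, baseline, and pixel positions are purely illustrative numbers.

```python
# Minimal sketch of projector-camera triangulation. It assumes a rectified
# setup with a horizontal baseline; the focal length, baseline, and pixel
# coordinates below are illustrative assumptions, not values from the paper.

def triangulate_depth(x_cam_px: float, x_proj_px: float,
                      focal_px: float, baseline_m: float) -> float:
    """Depth from the camera column where the projected line is seen
    (x_cam_px) and the projector column it was emitted from (x_proj_px)."""
    disparity_px = x_cam_px - x_proj_px        # pixel offset between the two views
    return focal_px * baseline_m / disparity_px

# Example: a 600-pixel focal length, a 10 cm camera-projector baseline,
# and a 15-pixel disparity give a 4 m depth.
print(f"estimated depth: {triangulate_depth(315.0, 300.0, 600.0, 0.10):.2f} m")
```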

An experimental setup in full sunlight, with the depth-sensing camera on a tripod opposite the subject. (Image Credit: Carnegie Mellon University)

We can automatically remove all of the reflections from a disco ball, because we don’t capture any of them. Or, if we want, we can capture only those exotic reflections. Light bounces around in many interesting ways in a scene, and we now have a way of controlling what to project and what to capture in a very efficient way.

Sensor Technology: What kinds of applications are possible because of this kind of technology?

Narasimhan: You can take these sensors outdoors. So you can put them on mobile robots, you can put them on your cell phones, and you can create these outdoor Kinect-type sensors. We’re also now thinking of putting this on rovers that might go to distant planets or icy moons one day.

One of the challenges on the Earth’s moon is that, at the poles, there’s about 30 percent more sunlight because there’s no atmosphere. So you have to beat a really strong component of light, with very-low-power devices.

A three-dimensional capture of a subject’s face. (Image Credit: Carnegie Mellon University)

The same type of idea can also be used for medical imaging: seeing through skin to measure micro-circulation of vessels.

Sensor Technology: How can the technology be used with self-driving cars?

Narasimhan: Imagine if you wanted to do platooning of vehicles. You’re in a busy city street, and you have a lot of traffic signals. Usually traffic backs up 20 or 30 cars. You’re not able to react as quickly as possible when you’re stopping at a traffic signal.

But imagine if you go to a traffic light and you give up control of the vehicle. The cars are going at very low speeds, and all the vehicles there just start at the same instant. For that, you can use the sensor to estimate the distance to the vehicle in front. So just think of Kinect-type depth sensing of the vehicle in front of you. Because we have a lot of data from a single image to estimate depth, the depth quality will be much higher than with a single beam.
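A small sketch of that last point, with a hypothetical per-pixel noise level and pixel count: averaging the depth readings that cover the lead vehicle gives a far steadier gap estimate than any single beam would.

```python
import random

# Sketch of the depth-quality point: averaging the many per-pixel depth
# readings covering the back of the lead vehicle beats a single-beam reading.
# The noise level and pixel count are hypothetical, for illustration only.

random.seed(1)
true_gap_m = 2.0
per_pixel_noise_m = 0.05          # assumed per-pixel depth noise (standard deviation)
pixels_on_vehicle = 2000          # assumed number of depth pixels on the lead vehicle

single_beam = true_gap_m + random.gauss(0.0, per_pixel_noise_m)
samples = [true_gap_m + random.gauss(0.0, per_pixel_noise_m)
           for _ in range(pixels_on_vehicle)]
averaged = sum(samples) / len(samples)

print(f"single-beam estimate: {single_beam:.3f} m")
print(f"averaged estimate   : {averaged:.3f} m")
# Averaging N independent readings shrinks the error by roughly sqrt(N),
# here about a factor of 45.
```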

Today you have adaptive cruise control, which works at about 25-50 meter distances when you’re driving at about 50 miles per hour. Now we have a way of getting depth outdoors in bright sunlight, so you can make the cars 1 or 2 meters apart. That means you can have dense platoons of cars.
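Those spacing figures translate directly into time headway; here is a quick conversion using only the numbers quoted above.

```python
# Quick unit conversion behind the spacing figures: following distance
# expressed as time headway at 50 miles per hour. No assumptions beyond
# the numbers quoted in the interview.

speed_mps = 50.0 * 1609.344 / 3600.0        # 50 mph is about 22.35 m/s

for gap_m in (50.0, 25.0, 2.0, 1.0):
    print(f"{gap_m:5.1f} m gap -> {gap_m / speed_mps:.2f} s of headway")
# Adaptive cruise control distances leave one to two seconds of headway;
# a 1-2 m platoon gap leaves well under a tenth of a second, so the cars
# have to sense and react together.
```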

Sensor Technology: What is the state of the technology now? What are you working on?

The 3D shape of an illuminated light bulb, taken by the sensor and depth-sensing camera. (Image Credit: Carnegie Mellon University)

Narasimhan: We are trying to figure out how to build different form-factor sensors for different applications and range requirements. A Kinect-type sensor might just require a range of a few meters. If you want to build something at 10 meters, or if you wanted to build something at millimeter range, to look through skin, how would you build these kinds of prototypes? The mathematical models that we have should guide us in trying to figure out how to build the light source part and the light-sensing part.

Sensor Technology: What other work are you doing at Carnegie Mellon’s Robotics Institute?

Narasimhan: We build different camera and lighting prototypes. We’re building programmable headlights that allow you to use high beams without creating glare for anybody else on the road. You can see better in snowstorms and rainstorms when you’re driving at night. You can have better road visibility in your lane versus another lane.

We do a lot of research work in underwater imaging as well: How do you control lighting and imaging so that you see better in murky waters? We build different types of 3D sensors and scanners. My lab really is about co-design of hardware, optics, and software to hopefully do what was not possible before, or do something much, much better.

Sensor Technology: What are the sensor’s drawbacks?

Narasimhan: Power is still a big issue. This is not a silver bullet. There’s an idea here, but it has its limitations in terms of range. We may go to 7-10 meters, but if you want to go to 20 or 50 or 100, it’s going to cost much more power. We’re not really beating the most fundamental physics here. We’re just cleverly figuring out how to avoid wasting our signal and measuring unwanted noise.
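As a very rough back-of-envelope on that range limit: if one assumes the per-pixel return from a diverging projector falls off with the square of range, then holding the signal level fixed means projector power has to grow quadratically with range. The sketch below uses the quoted 7-meter figure as the reference point; the scaling model itself is a simplifying assumption.

```python
# Rough back-of-envelope for the range/power trade-off, assuming the
# per-pixel return from a diverging projector falls off with the square of
# range. The reference range comes from the quote; the model is a
# simplifying assumption.

reference_range_m = 7.0        # near the lower end of the quoted 7-10 m range

for target_range_m in (10.0, 20.0, 50.0, 100.0):
    power_scale = (target_range_m / reference_range_m) ** 2
    print(f"{target_range_m:5.0f} m -> roughly {power_scale:5.1f}x the projector power")
```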

To learn more, visit www.cmu.edu and www.utoronto.ca. Other researchers responsible for the sensor development included Kyros Kutulakos, University of Toronto professor of computer science; Supreeth Achar, a Carnegie Mellon Ph.D. student in robotics; and Matthew O’Toole, a University of Toronto Ph.D. student in computer science.


NASA Tech Briefs Magazine

This article first appeared in the November 2015 issue of NASA Tech Briefs Magazine.
