A new imaging sensor created by a team at Carnegie Mellon University and the University of Toronto allows depth cameras to operate effectively in bright sunlight. The researchers, including Srinivasa Narasimhan, CMU associate professor of robotics, developed a mathematical model to help cameras capture 3D information and eliminate unneeded light, or “noise,” that often washes out the signals necessary to detect a scene’s contours.

Srinivasa Narasimhan, CMU associate professor of robotics.

Sensor Technology: Why has capturing light been such a challenge?

Srinivasa Narasimhan: There’s no problem capturing light. In fact, the more light the better, in many cases, but here you have two competing sources of light. You have the light source that the sensor itself has, which is typically very low power, and you have the outdoor light, which is typically much higher power. You want to capture one, but not the other.

Sensor Technology: How does the sensor choose which light rays to capture?

Narasimhan: The sensor is only capturing the light that it is sending out, rather than anything else, by using a very low-power laser scanner or projector. It shines light in one row and captures that light. That allows us to be energy efficient.

Sensor Technology: What does your prototype look like?

Narasimhan: We have two prototypes. One has a laser projector plus a rolling shutter camera — the most common sensor that’s out there today. Most iPhone cameras have rolling shutters.

The depth-sensing camera technology captures 3D information in brightly lit scenes; a prototype senses the shape of a lit CFL bulb (above), without producing glare. (Image Credit: Carnegie Mellon University)

Rolling shutter means that you’re capturing light one row at a time. What we thought was: If a laser projector is projecting one line at a time, and the rolling shutter is capturing one line at a time, you can synchronize those two things. The camera will then capture only the lines that the laser projector is sending out. That led us to this more important theory of energy efficiency. For example, with this prototype, you’re only capturing light rays that are along a single plane. So you scan a sheet of light, and you just capture that same sheet of light.
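To make that synchronization concrete, here is a minimal sketch of the idea, not the team’s implementation: a loop that steps a projected laser line and a rolling-shutter row exposure together, one row at a time. The Projector and RollingShutterCamera classes, and all of the numbers, are hypothetical stand-ins, simulated so the loop actually runs.

```python
import numpy as np

# Minimal sketch of line-scan / rolling-shutter synchronization.
# Projector and RollingShutterCamera are hypothetical stand-ins for real
# hardware drivers; they are simulated here so the loop is runnable.

NUM_ROWS, NUM_COLS = 480, 640

class Projector:
    """Steers a single illuminated line (a plane of light) at a time."""
    def __init__(self):
        self.active_row = None

    def illuminate_row(self, row):
        self.active_row = row  # in hardware: steer the laser line to this row

class RollingShutterCamera:
    """Exposes and reads out one sensor row at a time."""
    def __init__(self, projector):
        self.projector = projector

    def read_row(self, row):
        # Only light from the currently projected plane reaches this row
        # during its short exposure, so ambient light is largely rejected.
        signal = np.zeros(NUM_COLS)
        if self.projector.active_row == row:
            signal += 1.0  # simulated return from the illuminated plane
        return signal

projector = Projector()
camera = RollingShutterCamera(projector)
image = np.zeros((NUM_ROWS, NUM_COLS))

# Step the projected line and the exposed row together, one row at a time.
for row in range(NUM_ROWS):
    projector.illuminate_row(row)      # send out one plane of light
    image[row] = camera.read_row(row)  # capture only that same plane

print("rows with signal:", int((image.sum(axis=1) > 0).sum()))  # 480
```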

Sensor Technology: Why is it important for these sensors to be energy efficient?

Narasimhan: If you want to put this [technology] on a cell phone or a mobile robot, it can’t be really heavy. You really have to think about energy. Once you make light sources smaller, they have to be low power. It’s very hard to be very high power and very small. Of course, if you send a rover to the moon, a lot of the energy is spent on sensing rather than on exploration or driving the robot itself. Therefore, you have to conserve energy.

Sensor Technology: How does the sensor enable 3D imaging?

Narasimhan: 3D imaging means that you’re capturing light rays that are intersecting or triangulating. If you wanted to capture only light rays that bounce two times, or three times, or ten times in the scene, those kinds of things are not easy to do now. With this kind of technology, you can capture and choose the particular light rays, even exotic ones, that you could not before. Ordinary cameras and ordinary illumination systems blast light everywhere, and then capture light from everywhere. That’s usually not a great thing to do.
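As a rough illustration of the triangulation he describes: under a simple rectified geometry, a projected ray and the camera pixel that sees it intersect at a depth given by the classic relation Z = baseline × focal length / disparity. The sketch below is not the prototype’s calibration model; the baseline, focal length, and pixel coordinates are invented for the example.

```python
# Illustrative triangulation for an active (projector + camera) system,
# assuming a simple rectified geometry. All numbers are made up.

BASELINE_M = 0.10   # projector-to-camera separation (assumed)
FOCAL_PX = 800.0    # focal length expressed in pixels (assumed)

def depth_from_disparity(x_cam_px, x_proj_px):
    """Z = baseline * focal / disparity for a rectified projector-camera pair."""
    disparity = x_cam_px - x_proj_px
    if disparity <= 0:
        raise ValueError("rays do not intersect in front of the sensor")
    return BASELINE_M * FOCAL_PX / disparity

# A projected ray observed 40 pixels away from where it was emitted
# triangulates to a point 2.0 meters away under these assumptions.
print(depth_from_disparity(140.0, 100.0), "m")  # 2.0 m
```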

An experimental setup in full sunlight, with the depth-sensing camera on a tripod opposite the subject. (Image Credit: Carnegie Mellon University)

We can automatically remove all of the reflections from a disco ball, because we don’t capture any of them. Or, if we want, we can capture only those exotic reflections. Light bounces around in many interesting ways in a scene, and we now have a way of controlling what to project and what to capture in a very efficient way.
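One way to picture that selectivity, purely as an illustration: with a line projector and a row-selective camera, the rows aligned with the projected plane see mostly direct, single-bounce light, while the other rows see only light that has bounced elsewhere in the scene first. The sketch below keeps one set or the other; the function name, tolerance, and simulated data are assumptions, not the published method.

```python
import numpy as np

# Illustrative light-path selection with a synchronized line projector.
# For each projected line we keep either the rows near that line ("direct"
# reflections) or every other row ("indirect", multi-bounce light).
# Dimensions, tolerance, and the simulated data are all invented.

NUM_ROWS, NUM_COLS = 64, 64

def select_paths(frames, projected_rows, mode="direct", tolerance=1):
    """frames: one image per projected line, shape (N, NUM_ROWS, NUM_COLS)."""
    result = np.zeros((NUM_ROWS, NUM_COLS))
    for frame, proj_row in zip(frames, projected_rows):
        distance = np.abs(np.arange(NUM_ROWS) - proj_row)
        keep = distance <= tolerance if mode == "direct" else distance > tolerance
        result[keep] += frame[keep]
    return result

# Simulated capture: faint stray light everywhere, plus a strong return on
# the row that was actually illuminated in each frame.
rng = np.random.default_rng(0)
rows = list(range(NUM_ROWS))
frames = rng.uniform(0.0, 0.002, (NUM_ROWS, NUM_ROWS, NUM_COLS))
for i, r in enumerate(rows):
    frames[i, r] += 1.0

direct = select_paths(frames, rows, mode="direct")
indirect = select_paths(frames, rows, mode="indirect")
print(direct.mean() > indirect.mean())  # True: direct light dominates
```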

Sensor Technology: What kinds of applications are possible because of this kind of technology?

Narasimhan: You can take these sensors outdoors. So you can put them on mobile robots, you can put them on your cell phones, and you can create these outdoor Kinect-type sensors. We’re also now thinking of putting this on rovers that might go to distant planets or icy moons one day.