Imagine if every police car, bus, and garbage truck gathered important infrastructure data as they made their usual routes through the city.
In field tests around the world, researchers from the Massachusetts Institute of Technology (Cambridge, MA) mounted a camera-and-sensor system to the tops of vehicles. As the cars traveled up and down streets, sophisticated software recognized and recorded the presence and strength of street lights. The software estimated pole heights and distinguished between a street lamp’s output and other points of illumination.
Using GPS, the MIT technology precisely determines a vehicle’s location, and can thus create a comprehensive map and accurate database of light sources. Much like how search giant Google creates its “Street View” from camera-mounted vehicles, Professor Sanjay Sarma hopes that the system will build a continuously updating map, one that efficiently spots lighting inefficiencies. Sarma, the Fred Fort Flowers and Daniel Fort Flowers Professor in Mechanical Engineering, led a team of students in the effort.
Photonics & Imaging Technology: How does your software system more efficiently monitor street lighting?
Professor Sanjay Sarma: Most cities don’t know what their street light infrastructure is. They don’t know where the street lights are; they don’t know if they have tall, short, or medium poles; they don’t know what kind of bulbs they have, whether they are fluorescent or LED; they don’t know if the bulb is working, or if, most importantly, it has been calibrated to provide enough light at the surface to be convenient, useful, and safe for the citizen.
We built a sensor, placed it on city vehicles like police cars and maintenance vehicles, and constructed maps of cities.
P&IT: How does the technology determine a failing light?
Sarma: The technology looks upward. If you scan from different positions, you can determine the height of the street light. If you scan at night, you can see whether the street light is putting out light or not.
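The two-position height estimate Sarma describes can be sketched with simple geometry. This is a minimal sketch, assuming the vehicle measures the lamp’s elevation angle at two points a known distance apart on a straight approach to the pole; the function name and all numbers are illustrative, not the team’s actual pipeline:

```python
import math

def pole_height(baseline_m, elev_near_deg, elev_far_deg):
    """Estimate lamp height from elevation angles measured at two
    points `baseline_m` apart on a straight approach to the pole.
    The nearer point sees the larger elevation angle."""
    t_near = math.tan(math.radians(elev_near_deg))
    t_far = math.tan(math.radians(elev_far_deg))
    # h = d * tan(near) and h = (d + baseline) * tan(far); solving
    # both for the unknown ground distance d and eliminating it:
    return baseline_m * t_near * t_far / (t_near - t_far)
```

In practice the real system triangulates from many positions at once (the SLAM formulation discussed below), which averages out angle noise; the two-point case is just the smallest instance of the idea.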
P&IT: What role does the software play in these kinds of determinations?
Sarma: It’s rather difficult to do. If you start off with no a priori information, you need to first detect these street lights in the sky. Then you need to triangulate them and get an altitude. It’s a form of simultaneous localization and mapping (SLAM): You’re building an image map of the street’s light infrastructure. Then, you’re checking whether the light is working or has changed.
You have to take location information that you get from GPS. The system is a combination of different technological elements: location, machine learning and classification, and a lot of geometry. Then, it’s matching different criteria, and picking up and sensing the type of light.
P&IT: For these types of applications, do you have to “train” the software?
Sarma: You need huge data sets to train the software. You need to train the software on street lights and roads. The training sets are created by human beings. The fact is: A lot of today’s street light inspection is done by human beings. They walk around with a notepad and a pen; they look at a street light; and they say, “That one doesn’t seem to be working very well.” With this system, it will be more of a matter of training the software.
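As a minimal stand-in for that training step, here is a sketch: given brightness readings labeled by human inspectors, pick a decision threshold separating working lamps from failing ones. Real training would use image features and far larger data sets; the names and numbers here are hypothetical:

```python
def fit_threshold(samples):
    """samples: (brightness, is_working) pairs labeled by inspectors.
    Returns the midpoint between the dimmest working lamp and the
    brightest failing lamp -- a one-feature 'classifier'."""
    working = [b for b, ok in samples if ok]
    failing = [b for b, ok in samples if not ok]
    return (min(working) + max(failing)) / 2

# Hypothetical human-labeled readings
labeled = [(220, True), (240, True), (35, False), (60, False)]
threshold = fit_threshold(labeled)
```

Any reading above the fitted threshold would then be flagged as a working lamp; the point is only that the human notepad-and-pen judgment becomes labeled data the software learns from.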
P&IT: How is this setup inexpensive?
Sarma: We want to go to very inexpensive sensors over time. My ultimate goal is to use a $50 Android tablet — pointed upwards with an LTE connection. Then, we’re doing this in real-time and can place one on every vehicle.
The problem is that the lights are too bright and saturate your typical camera. There is a combination of electrical, sensor, and computer software challenges to make the package work in a way that is scalable and inexpensive.
P&IT: What are some other challenges in making sure that the software has an accurate reading?
Sarma: It’s surprisingly complex. For example, trees and foliage can be a problem. If we built a really expensive technology, we could probably do this more easily. But we are also trying to make the system cheap and scalable.
Another challenge is inaccurate location, especially in urban canyons where the GPS signal is blocked. Even in the open, GPS is not especially accurate to begin with; it can be off by a few meters. An additional challenge is reflections from a glass building or window.
P&IT: Field tests were carried out in four cities: Cambridge, Massachusetts; Malaga and Santander in Spain; and Birmingham, UK. What did you learn from those field tests?
Sarma: Once we got the system working, the results were very interesting. You can actually see how — as you go away from a street light — the light that reaches the top of the car’s surface, where we do the sensing, dips and then increases again when you come to the next light. You can see when a light is broken. You can see — and this is the most important part — when there’s incident light from somewhere else.
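The dip-and-rise pattern Sarma describes can be reproduced with a toy point-source model. This is only a sketch, assuming inverse-square falloff with a cosine tilt factor and ignoring real luminaire optics; the pole spacing, height, and intensity are illustrative:

```python
def illuminance(x, poles, height=8.0, intensity=1000.0):
    """Horizontal illuminance at road position x (metres) from
    point-source lamps at the given positions, mounted at `height`
    metres: E = I * h / (h^2 + d^2)^(3/2) per lamp, summed."""
    return sum(intensity * height / (height**2 + (x - p)**2) ** 1.5
               for p in poles)

poles = [0.0, 30.0]                 # two lamps 30 m apart
under_lamp = illuminance(0.0, poles)
midway = illuminance(15.0, poles)   # dips between the poles
```

Plotting `illuminance` along x reproduces the dip between poles and the rise at the next lamp; a broken lamp or strong incident light from elsewhere shows up as a deviation from this expected curve.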
For example, a commercial district may not need a street light on if the street is getting enough spillover; maybe the light only needs to come on once the shops close. [During a field test], there was a certain area with a lot of lighting. We realized it was floodlights from tennis courts.
P&IT: How can this system help cities be more efficient with their lighting?
Sarma: The most important direction for this: cities are right now grappling with the question of whether to go with LEDs or not. The problem with LEDs is that they’re expensive; the upside is that you save an enormous amount of energy.
With this [MIT] system, we can figure out a much more favorable onramp for LEDs. We can say, for example, “Hey, listen; you’re poorly lit here. You need more street lamps. Why don’t you just go LED?” or “This bulb is broken. Go LED.”
Once you have LEDs, LEDs can be modulated much better than other lighting technologies. Now you can have the LED become brighter as the Sun sets, and dimmer as the Sun rises. We can say, “This commercial district of yours is overlit. Turn the LED off until 9 pm when the shops close.”
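A toy version of such a schedule, with the daylight hours and the shops-close rule purely illustrative (a real controller would use measured spillover and sun elevation rather than fixed clock times):

```python
def led_level(hour, shops_close=21):
    """Return LED output (0.0-1.0) for an overlit commercial street:
    off in daylight, off while storefront spillover lights the
    street, full brightness after the shops close. Hours are 0-23."""
    if 7 <= hour < 18:            # daylight: no street lighting needed
        return 0.0
    if 18 <= hour < shops_close:  # spillover from open shops suffices
        return 0.0
    return 1.0
```

Because LED output can be modulated continuously, the 0/1 levels here could just as easily be a smooth dimming curve.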
P&IT: What kinds of other valuable data can be collected from this system, and what other uses do you see for this technology?
Sarma: We’re all excited about self-driving cars; self-driving cars will need real-time information about roads. We have another demonstration, a system that looks like an iPad pointing at the street, and we use that to collect all sorts of information about the road: transient debris such as bottles, as well as cracks. This application provides road-condition crowdsourcing.
We can also point [the device] sideways and look at streets, to find everything from graffiti, to an overflowing trash can, to a bench that’s broken. This is part of a family [of technologies]. We also have a startup company that uses long-wave infrared technology to examine buildings in the night; we determine which buildings are leaking heat, so that homeowners can fix their homes. You can also put sensors on a car to look at air quality, and map the air quality in the city.
The basic idea is: When these cars are driving around, a lot of information can be crowdsourced from them.
P&IT: What is most exciting to you about this kind of technology?
Sarma: You can think of Google Street View as imaging done sideways from a car. This system generalizes that idea. This form of crowdsourcing of environmental and infrastructure information can revolutionize the way citizens participate in making our infrastructure better. Cities can be much more deliberate about fixing issues and improving the lives of their citizens.
The MIT team included Sumeet Kumar PhD ’14; Ajay Deshpande PhD ’08, who is now a researcher at IBM’s T.J. Watson Research Center; Stephen Ho PhD ’04, a research scientist with MIT’s Field Intelligence Laboratory; and postdoc Jason Ku. Sumeet Kumar, the lead author, is now a data scientist at Facebook. For more information, visit news.mit.edu.