Researchers from the University of Cambridge have designed two new, complementary systems for driverless cars. One identifies a user’s location and orientation in places where GPS does not function; the other identifies the various components of a road scene in real time using a regular camera or smartphone, performing the same job as sensors that cost far more.
The first system, called SegNet, can take an image of a street scene it hasn’t seen before and classify it, sorting objects into 12 different categories – such as roads, street signs, pedestrians, buildings and cyclists – in real time. It can deal with light, shadow and night-time environments, and currently labels more than 90% of pixels correctly. Previous systems using expensive laser- or radar-based sensors have not been able to reach this level of accuracy while operating in real time.
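The final step of a SegNet-style classifier can be sketched simply: the network produces a per-class score for every pixel, and each pixel is labelled with its highest-scoring class. The class names, scores and helper function below are illustrative assumptions, not taken from the actual SegNet model.

```python
# Illustrative per-pixel labelling: each pixel receives the class
# with the highest score. Class list is a hypothetical subset of
# SegNet's 12 categories.
CLASSES = ["road", "building", "pedestrian", "cyclist"]

def label_pixels(score_map):
    """score_map: H x W grid of per-class score lists -> H x W grid of class names."""
    return [
        [CLASSES[max(range(len(scores)), key=scores.__getitem__)]
         for scores in row]
        for row in score_map
    ]

# A toy 1x2 "image": the left pixel's scores favour road,
# the right pixel's favour pedestrian.
scores = [[[0.9, 0.05, 0.03, 0.02], [0.1, 0.2, 0.6, 0.1]]]
print(label_pixels(scores))  # [['road', 'pedestrian']]
```

In the real system this argmax runs over every pixel of a full-resolution camera frame, which is why reaching real-time performance on ordinary hardware is notable.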
The second system uses images to determine both precise location and orientation. Built on an architecture similar to SegNet’s, the localization system can localize a user and determine their orientation from a single color image of a busy urban scene. It is far more accurate than GPS and works in places where GPS does not, such as indoors, in tunnels, or in cities where a reliable GPS signal is not available.
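The output of such a localization system is a 6-degree-of-freedom pose: a 3-D position plus an orientation, commonly represented as a quaternion. The sketch below shows that output format and a simple positional-error check; the regressor itself is omitted, and all names and numbers are illustrative assumptions rather than details of the Cambridge system.

```python
# Hedged sketch: the pose an image-based localizer might regress
# from one color image, and how its positional error could be
# measured against ground truth. All values are illustrative.
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float          # position in metres
    y: float
    z: float
    qw: float         # orientation as a unit quaternion
    qx: float
    qy: float
    qz: float

def position_error(a: Pose, b: Pose) -> float:
    """Euclidean distance between two estimated positions, in metres."""
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))

estimate = Pose(10.0, 4.0, 0.0, 1.0, 0.0, 0.0, 0.0)
truth    = Pose(10.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0)
print(position_error(estimate, truth))  # 3.0
```

Because the pose includes orientation as well as position, such a system reports not only where the camera is but also which way it is facing, which GPS alone cannot provide.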