SLAM, or simultaneous localization and mapping, enables mobile autonomous robots to map their environments and determine their locations. SLAM can be used to improve object-recognition systems, a vital component of future robots that have to manipulate the objects around them in arbitrary ways. A new system developed at MIT uses SLAM information to augment existing object-recognition algorithms. And because a SLAM map is three-dimensional, it does a better job of distinguishing objects that are near each other than single-perspective analysis can.
One of the central challenges in SLAM is what roboticists call “loop closure.” As a robot builds a map of its environment, it may find itself somewhere it has already been, for instance by entering a room through a different door. The robot needs to recognize previously visited locations so that it can fuse mapping data acquired from different perspectives. Object recognition could help with that problem.
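To make the loop-closure idea concrete, here is a minimal sketch of one common approach: summarize each visited place as a feature descriptor and flag a loop closure when the current descriptor closely matches an older one. The function name, the cosine-similarity test, and the thresholds are illustrative assumptions, not the MIT system's actual method.

```python
import numpy as np

def detect_loop_closure(history, query, threshold=0.9, min_gap=10):
    """Return the index of an earlier place whose descriptor matches
    `query`, or None if no sufficiently similar place is found.

    Hypothetical sketch: places are summarized as fixed-length feature
    vectors, and similarity is measured with cosine similarity. The most
    recent `min_gap` entries are skipped, since frames the robot just saw
    will trivially resemble the current one.
    """
    best_idx, best_sim = None, threshold
    candidates = history[:-min_gap] if min_gap else history
    for i, d in enumerate(candidates):
        sim = float(np.dot(d, query) /
                    (np.linalg.norm(d) * np.linalg.norm(query) + 1e-12))
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx

# Toy example: 50 random place descriptors; the robot re-observes
# place 5 with a little sensor noise.
rng = np.random.default_rng(0)
history = [rng.standard_normal(64) for _ in range(50)]
query = history[5] + 0.05 * rng.standard_normal(64)
match = detect_loop_closure(history, query)
```

When a match is found, a real SLAM system would add a constraint between the two poses and re-optimize the map; recognizing a distinctive object in both views could serve the same role as the descriptor match here.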