A closed-loop pattern-recognition system is designed to provide guidance for maneuvering a small exploratory robotic vehicle (rover) on Mars to return to a landed spacecraft to deliver soil and rock samples that the spacecraft would subsequently bring back to Earth. The system could be adapted to terrestrial use in guiding mobile robots to approach known structures that humans could not approach safely, for such purposes as reconnaissance in military or law-enforcement applications, terrestrial scientific exploration, and removal of explosive or other hazardous items.

The system has been demonstrated in experiments in which the Field Integrated Design and Operations (FIDO) rover (a prototype Mars rover equipped with a video camera for guidance) is made to return to a mockup of a Mars-lander spacecraft. The rover's camera autonomously acquires an image of the lander from a distance of 125 m in an outdoor environment. Then, guided by an algorithm that fuses multiple line and texture features in the digitized camera images, the rover traverses the intervening terrain, using features derived from images of the lander's truss structure. Finally, using precise pattern matching to determine its position and orientation relative to the lander, the rover aligns itself with the bottom of ramps extending from the lander, in preparation for climbing the ramps to deliver samples.

Figure: Features of the ramps are used in close-range navigation.
The most innovative aspect of the system is a set of pattern-recognition algorithms that govern a three-phase visual-guidance sequence for approaching the lander. During the first phase, a multifeature fusion algorithm integrates the outputs of a horizontal-line-detection algorithm and a wavelet-transform-based visual-area-of-interest algorithm to detect the lander from a significant distance. The horizontal-line-detection algorithm determines candidate lander locations based on detection of a horizontal deck that is part of the lander. The wavelet transform is then performed on an acquired image, and a texture signature is extracted in a local window of the wavelet-coefficient space for each candidate lander location. The multifeature fusion algorithm eliminates false positives arising from terrain features. It is coupled with a three-dimensional visual-terminal-guidance algorithm that extracts and utilizes cooperative features of the lander to iteratively estimate the pose of the rover relative to the lander and then steer the rover to the bottom of the ramps.
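The article does not give the details of the wavelet-based texture signature, but the idea can be sketched as follows: take one level of a 2-D wavelet transform, measure the energy of each detail subband in a local window around every candidate location, and discard candidates whose signature differs too much from a reference signature of the lander. The Haar transform, window size, and tolerance below are illustrative assumptions, not the authors' actual parameters.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform.

    Returns the approximation (LL) and the three detail subbands
    (LH, HL, HH), whose local energies serve as texture features.
    """
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def texture_signature(img, center, win=8):
    """Mean energy of each detail subband in a window around a
    candidate location (coordinates given in the original image,
    so they are halved in the wavelet domain)."""
    _, lh, hl, hh = haar2d(img)
    r, c = center[0] // 2, center[1] // 2
    sig = []
    for band in (lh, hl, hh):
        w = band[max(r - win, 0): r + win, max(c - win, 0): c + win]
        sig.append(np.mean(w ** 2))
    return np.array(sig)

def prune_candidates(img, candidates, ref_sig, tol=0.5):
    """Keep candidates whose texture signature is close (relative
    Euclidean distance) to a reference signature of the lander."""
    kept = []
    for cand in candidates:
        sig = texture_signature(img, cand)
        if np.linalg.norm(sig - ref_sig) <= tol * np.linalg.norm(ref_sig):
            kept.append(cand)
    return kept
```

A strongly textured structure such as the lander truss concentrates energy in the detail subbands, whereas smooth terrain does not, which is what lets the fusion step reject terrain-induced false positives.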

The second phase begins when the rover has arrived at ≈25 m from the lander and ends at a distance of ≈5 m. Because the distance to the lander matters more than the rover's precise position during this phase, the system utilizes parallel line features extracted from a truss structure that is part of the lander. To distinguish the truss structure correctly, the deck, which appears as a set of nearly level lines, is detected first; any parallel lines below the deck are considered parts of the truss structure. The average distance between the parallel lines is then used to estimate the distance from the rover to the lander, and the centroid of the parallel lines is used to compute the heading from the rover to the lander.
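The range-and-heading step can be sketched with a pinhole-camera model: if truss members with known physical spacing S appear s pixels apart in an image taken with focal length f (in pixels), then the range is roughly Z ≈ f·S/s, and the centroid of the lines relative to the principal point gives the heading. The constants below are illustrative assumptions; the article does not state the FIDO camera's parameters or the truss geometry.

```python
import numpy as np

# Assumed constants (illustrative, not from the article):
FOCAL_PX = 800.0        # focal length in pixels
TRUSS_SPACING_M = 0.5   # physical spacing of adjacent truss members
IMAGE_CX = 320.0        # principal-point x-coordinate (image center)

def range_and_heading(line_x_positions):
    """Estimate range and heading from the image x-coordinates of the
    parallel truss lines: pixel spacing s ~= f * S / Z, so Z ~= f * S / s;
    the line centroid's offset from the image center gives the heading."""
    xs = np.sort(np.asarray(line_x_positions, dtype=float))
    spacing_px = np.mean(np.diff(xs))            # average pixel gap
    range_m = FOCAL_PX * TRUSS_SPACING_M / spacing_px
    centroid = xs.mean()                         # centroid of the lines
    heading_rad = np.arctan2(centroid - IMAGE_CX, FOCAL_PX)
    return range_m, heading_rad
```

Averaging the gaps over all detected line pairs makes the range estimate robust to a single mis-detected line, which is presumably why the average, rather than any single spacing, is used.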