A closed-loop pattern-recognition system is designed to provide guidance for maneuvering a small exploratory robotic vehicle (rover) on Mars to return to a landed spacecraft to deliver soil and rock samples that the spacecraft would subsequently bring back to Earth. The system could be adapted to terrestrial use in guiding mobile robots to approach known structures that humans could not approach safely, for such purposes as reconnaissance in military or law-enforcement applications, terrestrial scientific exploration, and removal of explosive or other hazardous items.
The system has been demonstrated in experiments in which the Field Integrated Design and Operations (FIDO) rover (a prototype Mars rover equipped with a video camera for guidance) is made to return to a mockup of a Mars-lander spacecraft. The FIDO rover camera autonomously acquires an image of the lander from a distance of 125 m in an outdoor environment. Then, under guidance by an algorithm that fuses multiple line and texture features in digitized camera images, the rover traverses the intervening terrain, using features derived from images of the lander truss structure. Finally, by use of precise pattern matching to determine the position and orientation of the rover relative to the lander, the rover aligns itself with the bottom of ramps extending from the lander, in preparation for climbing the ramps to deliver samples to the lander.
The most innovative aspect of the system is a set of pattern-recognition algorithms that govern a three-phase visual-guidance sequence for approaching the lander. During the first phase, a multifeature fusion algorithm integrates the outputs of a horizontal-line-detection algorithm and a wavelet-transform-based visual-area-of-interest algorithm for detecting the lander from a significant distance. The horizontal-line-detection algorithm determines candidate lander locations based on detection of a horizontal deck that is part of the lander. The wavelet transform is then performed on an acquired image, and a texture signature is extracted in a local window of the wavelet-coefficient space for each of the candidate lander locations. The multifeature fusion algorithm eliminates false positives arising from terrain features. It is coupled with a three-dimensional visual-terminal-guidance algorithm that extracts and utilizes cooperative features of the lander to accurately and iteratively estimate the pose of the rover relative to the lander and then steer the rover to the bottom of the ramps.
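As a rough illustration of this fusion step, the sketch below pairs a simple row-gradient test for horizontal edges with a one-level Haar-wavelet texture signature computed in a local window, rejecting candidates whose texture differs from a reference lander signature. All function names, thresholds, the single-level Haar decomposition, and the fixed central window are illustrative assumptions, not the flight algorithms.

```python
import numpy as np

def deck_row_candidates(img, grad_thresh=40.0):
    """Rows with strong mean vertical-gradient energy -- possible horizontal
    deck edges, i.e., candidate lander locations."""
    grad = np.abs(np.diff(img.astype(float), axis=0))
    return np.where(grad.mean(axis=1) > grad_thresh)[0]

def haar_signature(patch):
    """One-level Haar transform of a square patch; the texture signature is
    the energy in the two horizontal-detail subbands."""
    p = patch.astype(float)
    a = (p[0::2] + p[1::2]) / 2.0            # vertical low-pass
    d = (p[0::2] - p[1::2]) / 2.0            # vertical high-pass
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0     # horizontal detail of low-pass
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0     # horizontal detail of high-pass
    return np.array([(lh ** 2).mean(), (hh ** 2).mean()])

def fuse(img, ref_sig, window=16, max_dist=2000.0):
    """Keep only candidate rows whose local texture signature is close to a
    reference lander signature, suppressing terrain false positives."""
    h, w = img.shape
    c0 = (w - window) // 2                   # simplification: central column
    kept = []
    for r in deck_row_candidates(img):
        r0 = min(max(r - window // 2, 0), h - window)
        sig = haar_signature(img[r0:r0 + window, c0:c0 + window])
        if np.linalg.norm(sig - ref_sig) <= max_dist:
            kept.append(int(r))
    return kept
```

In this toy version a bright but untextured terrain band passes the line test yet fails the texture test, which is the essence of the fusion.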
The second phase begins when the rover has arrived at ≈25 m from the lander and ends at a distance of ≈5 m. Inasmuch as the distance is more important than the position of the rover during this phase, the system utilizes parallel line features extracted from a truss structure that is part of the lander. In order to correctly distinguish the truss structure, the deck, which appears as a set of nearly level lines, is detected first. Any parallel lines below the deck are considered to be parts of the truss structure. Average distances between parallel lines are then used to estimate the distance from the rover to the lander, and the centroid of the parallel lines is used to compute the heading from the rover to the lander.
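The range-from-spacing and heading-from-centroid computations can be sketched under a pinhole-camera assumption: the pixel spacing between truss members shrinks in proportion to range, and the centroid offset from the image center gives a bearing. The known truss spacing, focal length, and function names below are hypothetical, not values from the original system.

```python
import math

def truss_range_and_heading(line_xs, spacing_m, focal_px, image_cx):
    """Estimate range and heading from the image-column positions of parallel
    truss lines (hypothetical pinhole-camera simplification).
    line_xs:   detected column positions of the parallel lines, in pixels
    spacing_m: true spacing between adjacent truss members (assumed known)
    focal_px:  camera focal length in pixels; image_cx: principal-point column
    """
    xs = sorted(line_xs)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    mean_gap_px = sum(gaps) / len(gaps)
    # Pinhole model: pixel spacing = focal_px * spacing_m / range
    range_m = focal_px * spacing_m / mean_gap_px
    centroid_px = sum(xs) / len(xs)
    heading_rad = math.atan2(centroid_px - image_cx, focal_px)
    return range_m, heading_rad
```

Averaging over all gaps makes the range estimate tolerant of a single mislocated line, which matters more here than precise lateral position.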
During the third phase, a distinctive pattern of six stripes on the ramps is utilized in a close-range algorithm for positioning the rover within 10 cm of the bottom of the ramps. At the beginning of this phase, when the rover is ≈5 m from the lander, the rover circles around the lander. The close-range algorithm includes three major steps: feature extraction, feature matching, and pose estimation. The features are the six stripes, which are arranged so that any two of them constitute a topologically and spatially unique combination, greatly reducing uncertainty and computational complexity.
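The pairwise-uniqueness property can be illustrated with a small check: if every pair of stripes yields a distinct signature, a single confirmed pair identifies which stripes are in view. The (position, width) stripe descriptor and the pair signature below are an assumed encoding for illustration, not the actual stripe design.

```python
from itertools import combinations

def pairs_are_unique(stripes):
    """Return True if every pair of stripes forms a distinct signature.
    Each stripe is (position, width); the signature of a pair is the
    position-ordered (width_a, width_b, gap) triple -- a hypothetical
    encoding of the 'topologically and spatially unique' property."""
    sigs = set()
    for (p1, w1), (p2, w2) in combinations(sorted(stripes), 2):
        sig = (w1, w2, p2 - p1)
        if sig in sigs:
            return False       # two different pairs look identical
        sigs.add(sig)
    return True
```

A uniformly repeated stripe pattern fails this check, which is why the six stripes must be deliberately varied.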
An edge-detection subalgorithm is applied first, and then all straight-line segments are extracted. In order to find the stripes, the close-range algorithm looks for the ramps, which are defined by a set of long, straight, nearly parallel lines (see figure). When a single stripe is detected in the image, a linear affine transformation based on its four corners can be constructed. If the transformation provides a correct match with a known stripe, it can help to find the other matches. A search over all stripes is performed to find the best matches. Once the matches are found, the position and orientation of the rover relative to the lander are estimated by use of the outside corners of the stripes. A minimum of four stripes is used to ensure safe navigation.
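The corner-based affine construction can be sketched as a least-squares fit from a stripe's four model corners to its four detected image corners; the recovered transform can then predict where the other stripes should appear, supporting the match verification described above. The function names and coordinates are illustrative, and a full implementation would refine this into a true pose estimate rather than a planar affine map.

```python
import numpy as np

def affine_from_corners(model_pts, image_pts):
    """Least-squares 2-D affine map (A, t) taking the four model corners of
    one stripe onto its detected image corners."""
    M = np.asarray(model_pts, float)               # (4, 2) model corners
    P = np.asarray(image_pts, float)               # (4, 2) image corners
    X = np.hstack([M, np.ones((len(M), 1))])       # rows of [x  y  1]
    coeff, *_ = np.linalg.lstsq(X, P, rcond=None)  # (3, 2) stacked [A | t]
    A, t = coeff[:2].T, coeff[2]
    return A, t

def project(A, t, pts):
    """Predict image positions of other model points under the fitted map."""
    return np.asarray(pts, float) @ A.T + t
```

Four corners overdetermine the six affine parameters, so the residual of the fit itself gives a cheap consistency test for a proposed stripe match.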
This work was done by Terrance Huntsberger and Yang Cheng of Caltech for NASA's Jet Propulsion Laboratory.
The software used in this innovation is available for commercial licensing. Please contact Karina Edmonds of the California Institute of Technology at (626) 395-2322. Refer to NPO-41867.
This Brief includes a Technical Support Package (TSP): "Pattern-Recognition System for Approaching a Known Target" (reference NPO-41867), currently available for download from the TSP library.