An algorithm, and software to implement it, are being developed as a means of estimating the state (that is, the position and velocity) of an autonomous vehicle relative to a visible nearby target object, in order to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object would be a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications — for example, to guide underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines.

For the purpose of the algorithm, it is assumed that the vehicle would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an image-data-processing computer that would generate feature-recognition data products. Such products customarily include bearing angles of lines of sight from the camera(s) [and, hence, from the vehicle] to recognized features. The data products that are processed by the present algorithm are of the following types:

  • The Cartesian vector from the camera to a reference point on or in the target body;
  • Bearing angles from the camera to the reference point;
  • A landmark table (LMT);
  • A paired-feature table (PFT); and
  • A range point table (RPT).

The incorporation of the LMT and PFT is particularly important. LMT and PFT data are generated by typical computer-vision systems that could be used in the contemplated applications. In an LMT, a vision system recognizes landmarks from an onboard catalog and reports their bearing angles and associated known locations on the target body. In a PFT, a vision system reports bearing angles to features recognized as being common to two images taken at different times. Relative to the LMT, the PFT can be generated with less computation because it is necessary only to track features frame-to-frame; it is not necessary to associate the features with landmarks. However, it is more challenging to incorporate the PFT in a state-estimation algorithm for reasons discussed below. The LMT and PFT are complementary in the sense that the LMT provides position-type information while the PFT provides velocity-type information. However, the velocity-type information from the PFT is incomplete because it includes an unknown scale factor. A state-estimation algorithm must fuse the aforementioned data types to make an optimal estimate.
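To make the two tables concrete, the following sketch shows one plausible way to represent LMT and PFT entries; the class and field names are hypothetical and chosen only for illustration, not taken from the actual software.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LandmarkEntry:
    """One row of a landmark table (LMT); field names are hypothetical."""
    bearing: Tuple[float, float]          # (azimuth, elevation) to the landmark, radians
    location: Tuple[float, float, float]  # known landmark position on the target body

@dataclass
class PairedFeatureEntry:
    """One row of a paired-feature table (PFT); field names are hypothetical."""
    bearing_t0: Tuple[float, float]       # bearing at the earlier image time
    bearing_t1: Tuple[float, float]       # bearing at the later image time
    # No location field: the feature need not correspond to a cataloged landmark.
```

Note that a PFT entry carries no body-fixed location, which is why it can be produced with less computation but also why it contributes only scale-ambiguous, velocity-type information.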

The following three main challenges arise in this data-fusion problem:

  • The first challenge is posed by the large number of features (typically ≥50) that a typical computerized vision system can recognize during any given frame period. The large number of features imposes a heavy burden for real-time computation.
  • The second challenge is associated with the lack of range information when camera measurements are the only measurements available. Camera measurements consist only of bearing angles to specific feature points in images. The PFT data type is especially challenging inasmuch as recognized features do not necessarily represent known objects and do not carry location information.
  • The third challenge is posed by the fact that computer-vision information often relates to images taken in the past. For example, the PFT data type reports features that were recognized as being common to two images taken at earlier times. The need to update the current state estimate by use of information from the past presents a challenge because prior recursive state-estimation algorithms typically propagate only the current state.

The present algorithm addresses these challenges by incorporating the following innovations:

The first innovation is a preprocessing step, based on QR factorization (a particular matrix factorization, a description of which would exceed the scope of this article), that provides for optimal compression of LMT, PFT, and RPT updates that involve large numbers of recognized features. This compression eliminates the need for a considerable amount of real-time computation.
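The compression idea can be sketched for a stacked linear measurement model y = Hx + v with many more measurement rows than state components, as when 50 or more features are recognized in one frame. The matrix sizes and noise level below are illustrative assumptions, not values from the flight algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stacked linear measurement model: y = H x + v, with many more
# measurement rows (m) than state components (n).
m, n = 100, 6
H = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = H @ x_true + 0.01 * rng.standard_normal(m)

# QR factorization H = Q R compresses the m-row system to n rows:
# multiplying by Q^T yields the equivalent triangular system
# R x = Q^T y; the discarded m - n rows carry only residual noise.
Q, R = np.linalg.qr(H)      # "reduced" QR: Q is m x n, R is n x n
z = Q.T @ y                 # compressed measurement vector, length n

# Both the full and the compressed systems give the same
# least-squares state estimate, but the compressed one is tiny.
x_full = np.linalg.lstsq(H, y, rcond=None)[0]
x_comp = np.linalg.solve(R, z)
```

Processing n compressed rows instead of m raw rows is what removes the heavy real-time burden of large feature counts.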

The second innovation is a mathematical annihilation method for forming a linear measurement equation from the PFT data. The annihilation method is equivalent to a mathematical projection that eliminates the dependence on the unknown scale factor.
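In three dimensions, one concrete annihilator is the cross-product (skew-symmetric) matrix of the measured bearing direction, since it maps any vector parallel to that direction to zero. The sketch below, with made-up numbers, shows how multiplying by this matrix removes the unknown scale factor; it illustrates the general projection idea rather than the exact equations of the algorithm.

```python
import numpy as np

def skew(u):
    """Cross-product matrix: skew(u) @ v == np.cross(u, v)."""
    return np.array([[0.0,  -u[2],  u[1]],
                     [u[2],  0.0,  -u[0]],
                     [-u[1], u[0],  0.0]])

rng = np.random.default_rng(1)
u = rng.standard_normal(3)
u /= np.linalg.norm(u)   # measured unit bearing direction
s = 7.3                  # unknown scale (range), not observable from bearings
d = s * u                # true displacement, known only up to the scale s

# Annihilation: A @ u == 0, so A @ d == 0 for ANY scale s.  The
# projected equation A @ d = (noise) is linear in the state and
# contains no trace of the unknown scale factor.
A = skew(u)
```

The same construction extends to each paired-feature row of a PFT, turning scale-ambiguous bearing pairs into usable linear measurements.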

The third innovation is a state-augmentation method for handling PFT and other data types that relate states from two or more past instants of time. The state-augmentation method stands in contrast to a prior stochastic cloning method. State augmentation provides an optimal solution to the state-estimation problem, while stochastic cloning can be shown to be suboptimal.
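A minimal sketch of the augmentation step, assuming a simple linear filter: at an image time, the current state is appended ("cloned") to the state vector, so a later PFT measurement relating two image times can be written exactly in terms of the augmented state. The state dimension and transition matrix below are placeholders, not the actual filter design.

```python
import numpy as np

n = 6                     # placeholder state dimension (position + velocity)
x = np.zeros(n)           # current state estimate
P = np.eye(n)             # current state covariance

# Augment at an image time: the clone starts perfectly correlated
# with the current state, so every cross-covariance block equals P.
x_aug = np.concatenate([x, x])
P_aug = np.block([[P, P],
                  [P, P]])

# Propagation advances only the current-state block; the clone,
# pinned to the past image time, is held static by an identity block.
F = 1.01 * np.eye(n)      # hypothetical state-transition matrix
Phi = np.block([[F,                np.zeros((n, n))],
                [np.zeros((n, n)), np.eye(n)]])
P_aug = Phi @ P_aug @ Phi.T
```

Because the past state and its correlations with the present are carried exactly, a measurement referencing the past image time can be applied without the approximations that make stochastic cloning suboptimal.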

This work was done by David Bayard and Paul Brugarolas of Caltech for NASA's Jet Propulsion Laboratory.

The software used in this innovation is available for commercial licensing. Please contact Karina Edmonds of the California Institute of Technology at (626) 395-2322. Refer to NPO-41321.


This article first appeared in the September 2007 issue of NASA Tech Briefs Magazine.
