An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method.

Figure: Two Cameras Are Aimed at a pair of possibly moving objects, at least one of which is known. The positions and orientations of the cameras relative to the known object need not be known initially; instead, they are determined by means of photogrammetric computations.

In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models on the digitized camera images, parameters that characterize the position and orientation of each camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of the cameras; and how effectively matches are made to determine the rotation, scaling, and translation parameters (see the sketch below).
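
As an illustration of the overlay step, the rotation, scaling, and translation parameters can be recovered from corresponding model and image centroids by a least-squares (Procrustes) fit. The sketch below is a simplification under that assumption, not the authors' code, and the function name is illustrative:

    import numpy as np

    def fit_similarity_2d(model_pts, image_pts):
        # Least-squares fit of image ~ s * (R @ model) + t from
        # corresponding (N, 2) centroid arrays (Procrustes analysis).
        mu_m = model_pts.mean(axis=0)
        mu_i = image_pts.mean(axis=0)
        Mc = model_pts - mu_m
        Ic = image_pts - mu_i
        U, S, Vt = np.linalg.svd(Mc.T @ Ic)             # 2x2 cross-covariance
        d = 1.0 if np.linalg.det(Vt.T @ U.T) >= 0 else -1.0
        D = np.diag([1.0, d])                           # reflection guard
        R = Vt.T @ D @ U.T                              # rotation
        s = np.trace(np.diag(S) @ D) / (Mc ** 2).sum()  # scale
        t = mu_i - s * (R @ mu_m)                       # translation
        residual = np.linalg.norm(s * Mc @ R.T - Ic)    # match error
        return R, s, t, residual

In the actual method this two-dimensional fit would be combined with three-dimensional rotation and resizing of the CAD model; the sketch captures only the in-plane part of the search.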

The method involves use of the perspective camera model (also denoted the point camera model), one of several mathematical models developed over the years to represent the relationship between the external coordinates of objects and their coordinates on the image plane of a camera. The point camera model is implemented in a commercially available software system for three-dimensional graphics and animation used in television, film, industrial design, architecture, and medical imaging.
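
For reference, the perspective (point) camera model amounts, in camera-centered coordinates, to division by depth along the optical axis. A minimal sketch with a single focal-length parameter, ignoring lens distortion (the function name is illustrative):

    import numpy as np

    def perspective_project(points_cam, f):
        # points_cam: (N, 3) array in camera coordinates with +Z along
        # the optical axis; f: focal length. Returns (N, 2) image points.
        X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
        return np.column_stack((f * X / Z, f * Y / Z))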

The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in an effectively two-dimensional image. Using a technique common in photogrammetry as practiced in aerial surveying, depth information is obtained by combining image data acquired from two or more cameras. Synchronized image data from the cameras are combined following an error-minimization approach: precise measurements are obtained by synchronizing the data through linear interpolation and computing a dual-camera trajectory solution. Velocities of the objects are also estimated within this model.
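
Under the affine model, each camera maps a three-dimensional point x to an image point u = A x + b, with A a 2 × 3 matrix and b a 2-vector. Stacking the equations from two synchronized cameras gives an overdetermined linear system whose least-squares solution recovers the point, including depth, while linear interpolation aligns the two centroid tracks in time. The sketch below illustrates these steps under those assumptions; all names are illustrative, not taken from the original software:

    import numpy as np

    def triangulate_affine(A1, b1, A2, b2, u1, u2):
        # Solve the stacked system [A1; A2] x = [u1 - b1; u2 - b2]
        # in the least-squares (error-minimizing) sense.
        M = np.vstack((A1, A2))                   # (4, 3)
        r = np.concatenate((u1 - b1, u2 - b2))    # (4,)
        x, *_ = np.linalg.lstsq(M, r, rcond=None)
        return x

    def synchronize(t_ref, t_cam, track_cam):
        # Linearly interpolate one camera's (T, 2) centroid track onto
        # the reference camera's time stamps.
        return np.column_stack([np.interp(t_ref, t_cam, track_cam[:, k])
                                for k in range(track_cam.shape[1])])

Velocities can then be estimated from the recovered trajectory by differencing, e.g. np.gradient(positions, t_ref, axis=0) for a (T, 3) array of solved positions.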

The affine camera model does not require advance knowledge of the positions and orientations of the cameras. This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
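
Concretely, if the photogrammetric solution gives the known object's pose in a camera's frame as x_cam = R x_obj + t, inverting that rigid transform places the camera in the object's CAD coordinate system. A minimal sketch of this change of frame (the function name is illustrative):

    import numpy as np

    def camera_pose_in_object_frame(R, t):
        # Invert x_cam = R @ x_obj + t, so that x_obj = R.T @ (x_cam - t);
        # the returned pair is the camera's rotation and position
        # expressed in the object's CAD frame.
        return R.T, -R.T @ t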

Initially, the software developed to solve the equations of the affine camera model implemented a gradient-descent algorithm to find the solution of a matrix-vector equation that minimizes an error function. Whereas photogrammetric analyses typically entailed weeks of measurements and computations to obtain accurate results from a given set of images, this software yielded solutions in times on the order of minutes. A more recent version of the software solves the affine-camera-model equations directly by means of a matrix inversion, in a typical computation time on the order of a second.
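
The two strategies can be contrasted on a generic linear least-squares problem, minimizing ||M x - r||^2. The step size, iteration count, and use of the normal equations below are illustrative choices, not details taken from the original software:

    import numpy as np

    def solve_gradient_descent(M, r, iters=5000):
        # Iterative minimization of ||M x - r||^2; the step size must
        # stay below 1 / sigma_max(M)^2 for convergence.
        step = 0.5 / np.linalg.norm(M, 2) ** 2
        x = np.zeros(M.shape[1])
        for _ in range(iters):
            x -= step * 2.0 * (M.T @ (M @ x - r))
        return x

    def solve_direct(M, r):
        # Normal equations: x = (M^T M)^(-1) (M^T r), one matrix inversion.
        return np.linalg.solve(M.T @ M, M.T @ r)

Replacing the iteration with a single direct solve is consistent with the reported drop in computation time from minutes to about a second.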

This work was done by Steve Klinko, John Lane, and Christopher Nelson of ASRC Aerospace for Kennedy Space Center.

KSC-12665/3/705