A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. Pixel locations on the image-detector plane of the stationary camera are calibrated with respect to azimuth and elevation so that the approximate position of the target can be determined. This approximate position is used to initially aim the gimballed narrow-field-of-view camera at the target. Next, the narrow-field-of-view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with greater precision than is attainable with the stationary camera alone.

Figure 1. This Prototype Apparatus was built and tested, yielding the images shown in Figure 2.

Figure 1 shows a prototype of the apparatus. The stationary, wide-field-of-view camera includes a fish-eye lens that projects a full view of the sky (the full 360° of azimuth and the full 90° of elevation) onto a 512×512-pixel image detector of the active-pixel-sensor type. The gimballed narrow-field-of-view camera contains a charge-coupled-device (CCD) image detector. The apparatus also includes circuitry that digitizes the image-detector outputs and a computer that processes the image data and generates gimbal-control commands.
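The brief does not specify the lens's projection model; as a rough illustration, the sketch below assumes an ideal equidistant ("f-theta") fish-eye centered on the 512×512 detector, in which radial distance from the image center is proportional to zenith angle. The constants and the function name `pixel_to_az_el` are assumptions for illustration, not the prototype's actual calibration, which would be measured empirically.

```python
import numpy as np

# Illustrative pixel-to-(azimuth, elevation) mapping for the stationary
# camera, assuming an ideal equidistant ("f-theta") fish-eye projection
# centered on the 512x512 detector. A real lens would need measured
# distortion terms in place of this idealized model.

CENTER = np.array([255.5, 255.5])   # optical axis at detector center (assumed)
PIX_PER_RAD = 255.5 / (np.pi / 2)   # 90 deg of zenith angle spans the radius

def pixel_to_az_el(x, y):
    """Convert a detector pixel (x, y) to azimuth/elevation in degrees."""
    dx, dy = x - CENTER[0], y - CENTER[1]
    r = np.hypot(dx, dy)                  # radial distance from center, pixels
    zenith = r / PIX_PER_RAD              # equidistant model: r = k * zenith
    elevation = 90.0 - np.degrees(zenith)
    azimuth = np.degrees(np.arctan2(dx, -dy)) % 360.0  # 0 deg = "up" on chip
    return azimuth, elevation

# Example: a pixel halfway out along the detector radius sits at 45 deg
# elevation under this idealized model.
print(pixel_to_az_el(255.5 + 127.75, 255.5))  # -> (90.0, 45.0)
```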

Figure 2. Images of the International Space Station (ISS) were acquired by the prototype apparatus and used to track the ISS as it moved across the sky.

The stationary, wide-field-of-view camera repeatedly takes pictures of the sky. In processing the image data for each frame period, the immediately preceding frame is subtracted from the current frame, so that only what has changed between the two successive frames remains in the image. Hence, a moving luminous target manifests itself in the processed image as a bright spot on a dark background (see Figure 2). The moving target is detected computationally as a spot of pixels brighter than a set threshold level. The location of the target is determined, to within a fraction of a pixel, as a brightness-weighted average pixel location. By use of a straightforward transformation that utilizes the image-detector-plane calibration, the target location is converted to azimuth and elevation coordinates; then, by use of another calibrated transformation, the azimuth and elevation coordinates are converted to gimbal commands for initial aiming of the narrow-field-of-view camera.
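As a concrete illustration of the detection step, the following minimal sketch implements the frame differencing, thresholding, and brightness-weighted centroid described above, assuming 8-bit grayscale frames held in NumPy arrays. The threshold value and the function name are illustrative assumptions; the prototype's actual threshold and implementation are not given in the brief.

```python
import numpy as np

THRESHOLD = 30  # minimum brightness change to count as target (assumed value)

def detect_target(prev_frame, curr_frame):
    """Return the sub-pixel (row, col) of a moving bright spot, or None."""
    # Subtract the previous frame so only what changed remains bright.
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    diff = np.clip(diff, 0, None)            # keep brightening pixels only

    mask = diff > THRESHOLD
    if not mask.any():
        return None                          # no moving target this frame

    # Brightness-weighted average pixel location gives sub-pixel precision.
    rows, cols = np.nonzero(mask)
    weights = diff[rows, cols].astype(np.float64)
    row = np.average(rows, weights=weights)
    col = np.average(cols, weights=weights)
    return row, col
```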

Once the narrow-field-of-view camera has been initially aimed and has acquired an image of the target, the apparatus switches into a tracking mode. In this mode, the gimbal commands are formulated to move the image of the target toward the center of the CCD image plane.
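The brief does not state the control law used in tracking mode; a simple proportional correction, sketched below, conveys the idea of driving the target image toward the CCD center. The plate scale, gain, CCD dimensions, and sign conventions are all hypothetical and would depend on the actual camera and gimbal geometry.

```python
import numpy as np

CCD_CENTER = np.array([240.0, 320.0])  # (row, col) of an assumed 640x480 CCD
DEG_PER_PIXEL = 0.005                  # assumed narrow-field plate scale
GAIN = 0.5                             # proportional gain, < 1 for stability

def gimbal_correction(target_row, target_col):
    """Return (d_elevation, d_azimuth) in degrees to re-center the target."""
    err = np.array([target_row, target_col]) - CCD_CENTER  # pixel error
    # Drive the error toward zero; signs depend on camera/gimbal mounting.
    d_el, d_az = -GAIN * DEG_PER_PIXEL * err
    return d_el, d_az
```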

This work was done by Abhijit Biswas, Christopher Assad, Joseph M. Kovalik, Bedabrata Pain, Chris J. Wrigley, and Peter Twiss of Caltech for NASA’s Jet Propulsion Laboratory.

NPO-45237



This Brief includes a Technical Support Package (TSP). "Two-Camera Acquisition and Tracking of a Flying Target" (reference NPO-45237) is currently available for download from the TSP library.
