The Video Guidance Sensor (VGS) system is an optoelectronic sensor that provides automated guidance between two vehicles. In the original intended application, the two vehicles would be spacecraft docking together, but the basic principles of design and operation of the sensor are applicable to aircraft, robots, vehicles, or other objects that must be aligned with one another for docking, assembly, resupply, or precise separation.

The sensor head and the target assembly are mounted on two different vehicles. The sensor head generates data to guide the automated docking of the two vehicles, and the two units are positioned so that they are aligned with each other when docking is complete.

The system includes a sensor head containing a monochrome charge-coupled-device (CCD) video camera and pulsed laser diodes, mounted on the tracking vehicle, and passive reflective targets on the tracked vehicle. The lasers illuminate the targets, and the resulting video images of the targets are digitized. Then, from the positions of the digitized target images and known geometric relationships among the targets, the relative position and orientation of the vehicles are computed.

As described thus far, the VGS system is based on the same principles as those of the system described in "Improved Video Sensor System for Guidance in Docking" (MFS-31150), NASA Tech Briefs, Vol. 21, No. 4 (April 1997), page 9a. However, the two systems differ in the details of design and operation.

The VGS system is designed to operate with the target completely visible within a relative-azimuth range of ±10.5° and a relative-elevation range of ±8°. The VGS acquires and tracks the target within that field of view at any distance from 1.0 to 110 m and at any relative roll, pitch, and/or yaw angle within ±10°. The VGS produces sets of distance and relative-orientation data at a repetition rate of 5 Hz. The software of this system also accommodates the simultaneous operation of two sensors for redundancy.
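As a rough illustration, the acquisition envelope described above can be collected into a single predicate. The following Python sketch is not flight software; the function name and the units (degrees, meters) are assumptions made for this illustration.

# Illustrative check of the VGS acquisition envelope described above.
# The limits come from the text; the function and its units are
# assumptions for this sketch, not flight software.

AZIMUTH_LIMIT_DEG = 10.5      # relative azimuth, +/- degrees
ELEVATION_LIMIT_DEG = 8.0     # relative elevation, +/- degrees
ATTITUDE_LIMIT_DEG = 10.0     # relative roll, pitch, and yaw, +/- degrees
RANGE_MIN_M, RANGE_MAX_M = 1.0, 110.0

def target_in_envelope(azimuth_deg, elevation_deg, range_m,
                       roll_deg, pitch_deg, yaw_deg):
    """Return True if the target lies inside the VGS acquisition envelope."""
    return (abs(azimuth_deg) <= AZIMUTH_LIMIT_DEG
            and abs(elevation_deg) <= ELEVATION_LIMIT_DEG
            and RANGE_MIN_M <= range_m <= RANGE_MAX_M
            and all(abs(angle) <= ATTITUDE_LIMIT_DEG
                    for angle in (roll_deg, pitch_deg, yaw_deg)))

For example, target_in_envelope(2.0, -1.5, 45.0, 0.5, 3.0, -2.0) returns True, since every value lies within the stated limits.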

The figure depicts the sensor head and the targets. In addition to the sensor head, the active part of the VGS includes an electronics module. The sensor head must be mounted where it will be aligned with the target when the two vehicles are docked, and the electronics module could be mounted nearby, connected by a set of cables. In addition to the laser diodes and the video camera, the sensor head contains a sunlight-rejection filter, thermoelectric coolers (to keep the laser-diode temperatures below 25°C), heaters (to keep the entire sensor head above 3°C), and temperature sensors.
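The stated temperature limits suggest a simple two-point control loop. The Python sketch below is an assumed illustration of such a loop, not the actual MSFC controller; the hysteresis band is hypothetical, added only to show how rapid switching near a limit might be avoided.

# Bang-bang thermal-control sketch based on the limits in the text:
# thermoelectric coolers hold the laser diodes below 25 degrees C, and
# heaters hold the entire sensor head above 3 degrees C. The deadband
# value is an assumption, not a documented VGS parameter.

DIODE_MAX_C = 25.0   # cool the laser diodes above this temperature
HEAD_MIN_C = 3.0     # heat the sensor head below this temperature
DEADBAND_C = 1.0     # assumed hysteresis to prevent rapid switching

def thermal_control_step(diode_temp_c, head_temp_c, cooler_on, heater_on):
    """One control step; returns updated (cooler_on, heater_on) states."""
    if diode_temp_c > DIODE_MAX_C:
        cooler_on = True
    elif diode_temp_c < DIODE_MAX_C - DEADBAND_C:
        cooler_on = False
    if head_temp_c < HEAD_MIN_C:
        heater_on = True
    elif head_temp_c > HEAD_MIN_C + DEADBAND_C:
        heater_on = False
    return cooler_on, heater_on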

There are eight laser diodes: four operate at a wavelength of 800 nm and the other four at 850 nm. The laser outputs are fed through optical fibers to emission points arranged in a circle as close as possible to the video-camera lens. The camera is equipped with an infrared lens and a band-pass filter with two passbands, centered at 800 and 850 nm.

The electronics module (not shown) contains a power supply, temperature sensors, and five electronic-circuit cards, as follows: (1) a single-board computer, (2) a frame grabber/digital signal processor (DSP) card, (3) a camera-control card, (4) a laser-diode control card, and (5) a control card for the thermoelectric coolers. The frame grabber/digital signal processor card performs most of the work of the system, capturing images and processing the image data to calculate the relative positions and orientations.

The target assembly contains subassemblies of long-range and short-range targets, all of which are corner-cube retroreflectors. The targets are covered with filters that pass the 850-nm light and block the 800-nm light. The short-range targets are positioned around, and are smaller than, the long-range targets. The short-range targets are equipped with plano-concave lenses, which keep them visible to the camera over a range of angles at distances <1.5 m.

The operating cycle begins with the firing of the 800-nm lasers and the capture of the resulting frame of video data; this frame represents a background image because the target filters block the returns from the targets at the 800-nm wavelength. Then the 850-nm lasers are fired and another frame of video data is captured; this frame contains the target spots because the target filters pass the returns at the 850-nm wavelength. Because the camera operates at a standard 30-Hz video frame rate, the time between frames is short enough to reduce motion-induced noise to an acceptably low level.
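This alternating-wavelength cycle can be pictured as a short capture routine. In the Python sketch below, fire_lasers and grab_frame are hypothetical stand-ins for the laser-control and frame-grabber interfaces; the actual hardware interfaces are not described here.

# Sketch of one VGS measurement cycle as described above: fire the 800-nm
# bank and grab a background frame (target returns blocked by the target
# filters), then fire the 850-nm bank and grab a frame containing the
# target spots. fire_lasers() and grab_frame() are hypothetical stand-ins
# for the camera and laser hardware interfaces.

def capture_measurement_pair(fire_lasers, grab_frame):
    """Return (background, foreground) frames for one measurement cycle."""
    fire_lasers(wavelength_nm=800)     # targets filtered out
    background = grab_frame()          # background-only image
    fire_lasers(wavelength_nm=850)     # targets retroreflect
    foreground = grab_frame()          # background plus target spots
    return background, foreground

At the 30-Hz frame rate, the two frames are captured roughly 33 ms apart, which is what keeps motion-induced differences between the background and target images small.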

To remove the background (and thereby isolate the target images), the DSP subtracts the first frame of video data from the second and then subtracts a threshold level from the difference image. The DSP then groups the remaining illuminated pixels into spots and recognizes the targets by matching the patterns of spots against the known target patterns. The number of targets and their positions in the assembly are chosen so that the relative position and orientation of the sensor head and the target assembly can be computed by iterative numerical solution of the equations that relate the camera/sensor-head geometry to the positions of the target spots in the video image.
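The frame-differencing and spot-grouping steps can be sketched with standard array operations. In the following Python sketch, the threshold value and the use of scipy.ndimage for grouping are assumptions standing in for the equivalent operations the DSP performs in firmware.

import numpy as np
from scipy import ndimage

def extract_spot_centroids(background, foreground, threshold=40):
    """Subtract the background frame, subtract a threshold, group the
    remaining illuminated pixels into spots, and return the centroid
    (row, col) of each spot in pixel coordinates.

    The threshold value and the use of scipy.ndimage are assumptions
    for this sketch, not the actual DSP firmware.
    """
    # Difference image: target spots survive, common background cancels.
    diff = foreground.astype(np.int32) - background.astype(np.int32)
    # Subtract a threshold level and clip negatives, as described above.
    diff = np.clip(diff - threshold, 0, None)
    # Group the illuminated pixels into connected spots.
    labels, n_spots = ndimage.label(diff > 0)
    # Intensity-weighted centroid of each spot.
    return ndimage.center_of_mass(diff, labels, range(1, n_spots + 1))

The resulting spot centroids, together with the known target geometry, are what feed the iterative numerical solution for relative position and orientation.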

This work was done by Richard T. Howard, Thomas C. Bryan, and Michael L. Book of Marshall Space Flight Center and John L. Jackson of Micro Craft, Inc. For further information, contact Sammy Nabors, MSFC Commercialization Assistance Lead. MFS-31283