A proposed video guidance sensor (VGS) would be based mostly on the hardware and software of a prior Advanced VGS (AVGS), with some additions to enable it to function as a time-of-flight rangefinder (in contradistinction to a triangulation or image-processing rangefinder). It would typically be used at distances of the order of 2 or 3 kilometers, where a typical target would appear in a video image as a single blob, making it possible to extract the direction to the target (but not the orientation of the target or the distance to the target) from a video image of light reflected from the target.
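In a time-of-flight rangefinder, the distance follows directly from the round-trip travel time of the light: d = c·t/2. A minimal sketch of the relation (in Python; the numerical values are illustrative, not taken from the design):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def round_trip_time(distance_m):
    """Round-trip travel time of light to a target at the given distance."""
    return 2.0 * distance_m / C

def distance_from_time(t_s):
    """Target distance implied by a measured round-trip (time-of-flight) delay."""
    return C * t_s / 2.0
```

At the operating distances mentioned above (on the order of 2 or 3 kilometers), the round-trip time is roughly 13 to 20 microseconds.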

As described in several previous NASA Tech Briefs articles, an AVGS is an optoelectronic system that provides guidance for automated docking of two vehicles. In the original application, the two vehicles are spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In the prior AVGS upon which the now-proposed VGS is largely based, the tracked vehicle is equipped with one or more passive targets that reflect light from one or more continuous-wave laser diodes on the tracking vehicle. A video camera on the tracking vehicle acquires images of the targets in the reflected laser light; the video images are digitized, and the image data are processed to obtain the direction to the target.

The design concept of the proposed VGS does not call for any memory or processor hardware beyond that already present in the prior AVGS, but does call for some additional hardware and some additional software. It also calls for assignment of some additional tasks to two subsystems that are parts of the prior AVGS: a field-programmable gate array (FPGA) that generates timing and control signals, and a digital signal processor (DSP) that processes the digitized video images.

The additional timing and control signals generated by the FPGA would cause the VGS to alternate between an imaging (direction-finding) mode and a time-of-flight (range-finding) mode and would govern operation in the range-finding mode. In the direction-finding mode, the VGS would function as described above. In the range-finding mode, the laser diode(s) would be toggled between two programmed power levels, while the intensities of the outgoing and return laser beams would be sensed by two matched photodetectors. The outputs of the photodetectors would be sent to dedicated high-speed analog-to-digital converters, the outputs of which would be stored (buffered) for processing.
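The range-finding-mode capture chain described above can be pictured with a small simulation. The Python sketch below (all parameter values — sampling rate, modulation frequency, power levels, attenuation, noise — are illustrative assumptions, not specifications from the brief) generates a two-level modulated outgoing waveform with smoothed edges and the delayed, attenuated, noisy return, roughly as the matched photodetectors and ADCs would buffer them:

```python
import numpy as np

def simulate_capture(fs=1e9, f_mod=1e6, n_samples=4000,
                     p_low=0.2, p_high=1.0, delay_s=37.4e-9,
                     return_atten=0.3, noise=0.005, seed=0):
    """Simulate buffered ADC captures in range-finding mode: the laser toggles
    between two programmed power levels, and matched photodetectors sample the
    outgoing and return beams. Edges are smoothed with tanh to mimic slow,
    nonlinear laser transitions; the return is attenuated and noisy."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_samples) / fs
    mid, half = (p_high + p_low) / 2.0, (p_high - p_low) / 2.0
    wave = lambda tt: mid + half * np.tanh(5.0 * np.sin(2 * np.pi * f_mod * tt))
    outgoing = wave(t) + noise * rng.standard_normal(n_samples)
    ret = return_atten * wave(t - delay_s) + noise * rng.standard_normal(n_samples)
    return outgoing, ret
```

The two buffers returned here stand in for the stored outputs of the dedicated high-speed analog-to-digital converters; in the proposed VGS they would be handed to the DSP for timing extraction.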

The DSP would execute algorithms that would determine the time between corresponding transitions of the outgoing and return signals and hence the time of flight of the laser signal and, equivalently, the distance to the target. These algorithms would be modern ones capable of determining the time of flight to within a small fraction of the transition time between the two laser power levels, even if the outgoing and return laser waveforms were slow, nonlinear, or noisy. The DSP would also execute an algorithm that would determine the return signal level and would accordingly adjust the laser output and the gain of a programmable-gain amplifier.

This work was done by Thomas Bryan, Richard Howard, Joseph L. Bell, Fred D. Roe, and Michael L. Book of Marshall Space Flight Center. For further information, access the Technical Support Package (TSP) free on-line at www.techbriefs.com/tsp under the Electronics/Computers category.

This invention has been patented by NASA (U.S. Patent No. 7,006,203). Inquiries concerning nonexclusive or exclusive license for its commercial development should be addressed to Sammy Nabors, MSFC Commercialization Assistance Lead. Refer to MFS-31785-1.