An optoelectronic system senses rotational and translational misalignment between two objects. The system might be used in such diverse applications as aligning construction equipment, mating parts of prefabricated buildings, and aligning vehicles for docking. It could replace more-expensive alignment systems like laser theodolites.

In an experimental version of the system, a video camera is mounted on the end effector of a robot, which is to be aligned with a fixture to which a reflective target is attached (see figure). A light-emitting diode (LED) is positioned at the center of the camera lens, aimed away from the camera. The target includes an ordinary mirror and several retroreflectors, which reflect light from the LED back to the camera, regardless of the orientation of the target.

Figure: The video camera observes a target that includes an ordinary mirror plus retroreflectors. Processed video images are used to adjust the orientation and position of the camera with respect to the target.

Typically, the optical axis of the camera is not perpendicular to the mirror plane at the beginning of an alignment sequence. Such misalignment is sensed when the reflection of the LED in the mirror appears off center on a video monitor connected to the camera. To bring the optical axis into alignment with the perpendicular to the mirror surface, the robot is commanded to turn the camera until the video image of the LED appears at the center.
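The article does not give the control law used to center the LED reflection, but the step lends itself to a simple proportional correction. The sketch below is an illustrative assumption, not the documented implementation: the function name, gain, and pixel-to-angle scale are hypothetical. The factor of two reflects the fact that a plane mirror deviates a reflected ray by twice the angular misalignment.

```python
def angular_correction(led_px, center_px, rad_per_px, gain=0.5):
    """Proportional pan/tilt correction (radians) that drives the image
    of the LED reflection toward the center of the video frame.

    led_px     -- (x, y) pixel position of the LED reflection
    center_px  -- (x, y) pixel position of the image center
    rad_per_px -- camera angular resolution (radians per pixel), assumed known
    gain       -- proportional gain (hypothetical tuning value)

    The mirror doubles the angular error, so the per-pixel angle is
    halved when converting image offset to a camera rotation command.
    """
    pan = -gain * (led_px[0] - center_px[0]) * rad_per_px / 2.0
    tilt = -gain * (led_px[1] - center_px[1]) * rad_per_px / 2.0
    return pan, tilt
```

In use, the robot would apply the returned pan/tilt increments and repeat until the LED image sits at the frame center.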

Translational misalignment in a plane parallel to the mirror surface is corrected next. Such misalignment is sensed when the video image of the retroreflectors appears displaced from the video image of the LED. The robot arm moves the camera until the centroid of the image of the retroreflectors coincides with the centroid of the image of the LED.
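The translational step can be sketched the same way: a proportional move command computed from the offset between the two centroids. Again, the names, gain, and metric scale factor below are assumptions for illustration, not the system's documented control law.

```python
def translational_correction(retro_centroid, led_centroid, m_per_px, gain=0.5):
    """Proportional in-plane translation command (meters) that drives the
    centroid of the retroreflector image onto the centroid of the LED image.

    retro_centroid -- (x, y) pixel centroid of the retroreflector images
    led_centroid   -- (x, y) pixel centroid of the LED image
    m_per_px       -- image scale (meters per pixel), assumed calibrated
    gain           -- proportional gain (hypothetical tuning value)
    """
    dx = retro_centroid[0] - led_centroid[0]
    dy = retro_centroid[1] - led_centroid[1]
    return -gain * dx * m_per_px, -gain * dy * m_per_px
```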

In processing the video-image data to compute the centroids of the LED and retroreflector images, it is necessary to eliminate data on such background features as the manipulator and the camera lens. For this purpose, a picture is taken with the LED off, quickly followed by a second picture taken with the LED on. The data processor then effectively subtracts the first picture from the second picture and performs a binary threshold operation. Only the images of the LED and the retroreflectors remain and are used to compute the centroids.
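The frame-differencing and centroid steps described above can be sketched in a few lines of NumPy. This is a minimal illustration of the technique, not the system's actual processing code; the threshold value and function names are assumptions.

```python
import numpy as np

def isolate_targets(frame_led_off, frame_led_on, threshold=40):
    """Subtract the LED-off frame from the LED-on frame and binarize.

    Background features (manipulator, lens, ambient scene) appear in both
    frames and cancel in the difference; only the LED and retroreflector
    images survive the threshold.  Frames are 2-D grayscale arrays.
    """
    diff = frame_led_on.astype(np.int16) - frame_led_off.astype(np.int16)
    return (diff > threshold).astype(np.uint8)

def centroid(binary_image):
    """Centroid (row, col) of the nonzero pixels of a binary image."""
    ys, xs = np.nonzero(binary_image)
    return ys.mean(), xs.mean()
```

A typical use would threshold the differenced frames, then pass separate masks for the LED image and the retroreflector images to `centroid` and compare the results.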

This work was done by Leo Monford of Johnson Space Center and Robin Redfield, Michael Bradham, Louis Everett, and Jeffrey Pafford of Texas A&M Research Foundation. MSC-21977

NASA Tech Briefs Magazine

This article first appeared in the November 1999 issue of NASA Tech Briefs Magazine.