Stereoscopic Machine-Vision System Using Projected Circles
- Created: Monday, 01 March 2010
This system identifies obstacles in relatively short processing times.
A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras aimed forward to acquire images of the circles, (3) a frame grabber and digitizer that acquire image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems that would enable robotic vehicles (“rovers”) on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue.
This system is based partly on the same principles as a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light swept forward across the terrain. However, this system is designed to eliminate several undesirable features of the prior system: the need for a pan-and-tilt mechanism to aim the laser that generates the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and to process the data from the many images acquired during that time, and the difficulty of calibration caused by the narrowness of the stripe.
In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a template in processing terrain images.
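The article does not give the calibration algorithm itself; the following is a minimal sketch of how a flat-ground template might be built, assuming each projected circle has already been extracted from the calibration image as a set of (x, y) edge points. All function and field names here are illustrative, not from the original system.

```python
# Hypothetical sketch: build a calibration template from an image of the
# concentric-circle pattern projected onto a known flat-ground target.
# Each circle is assumed to arrive as a list of (x, y) edge points.

def bounding_rect(points):
    """Axis-aligned bounding rectangle (x, y, width, height) of a point set."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)

def build_template(circles):
    """Record the bounding-rectangle width, height, and aspect ratio of
    each projected circle; this template is kept in memory and later
    compared against terrain images."""
    template = []
    for pts in circles:
        x, y, w, h = bounding_rect(pts)
        template.append({"w": w, "h": h, "aspect": w / h})
    return template
```

In a real implementation the edge points would come from thresholding the laser wavelength in the camera frames; here they are simply taken as given.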
During operation on terrain, the images acquired by the left and right cameras are analyzed. The analysis includes (1) computation of the horizontal and vertical dimensions and the aspect ratios of rectangles that bound the circle images and (2) comparison of these aspect ratios with those of the template. Coordinates of distortions of the circles are used to identify and locate objects. If the analysis leads to identification of an object of significant size, then stereoscopic-vision algorithms are used to estimate the distance to the object. The time taken in performing this analysis on a single pair of images acquired by the left and right cameras in this system is a fraction of the time taken in processing the many pairs of images acquired in a sweep of the laser stripe across the field of view in the prior system.
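The comparison and ranging steps described above can be sketched as follows. The tolerance threshold and the specific pinhole-stereo disparity formula are assumptions for illustration; the article does not specify the actual decision rule or stereo algorithm.

```python
# Hypothetical sketch of the two analysis steps: (1) flag circles whose
# bounding-rectangle aspect ratio deviates from the flat-ground template,
# and (2) estimate distance to a flagged object from stereo disparity.

def detect_obstacles(template, observed, tol=0.15):
    """Return observed circles whose aspect ratio differs from the
    template's by more than the (assumed) relative tolerance."""
    flagged = []
    for ref, obs in zip(template, observed):
        if abs(obs["aspect"] - ref["aspect"]) / ref["aspect"] > tol:
            flagged.append(obs)
    return flagged

def stereo_distance(x_left, x_right, focal_px, baseline_m):
    """Distance from disparity using the standard pinhole-stereo relation
    Z = f * B / d, where d is the horizontal pixel disparity between the
    left- and right-camera images of the same feature."""
    disparity = x_left - x_right
    return focal_px * baseline_m / disparity
```

With a 0.2 m camera baseline and an 800-pixel focal length, a 20-pixel disparity corresponds to an 8 m range under this model.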
The results of the analysis include data on sizes and shapes of, and distances and directions to, objects. Coordinates of objects are updated as the vehicle moves so that intelligent decisions regarding speed and direction can be made. The results of the analysis are utilized in a computational decision-making process that generates obstacle-avoidance data and feeds those data to the control system of the robotic vehicle.
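Updating obstacle coordinates as the vehicle moves amounts to re-expressing each stored position in the vehicle's new reference frame. The article does not describe how this is done; the following is a minimal 2-D dead-reckoning sketch under the assumption that the vehicle's translation and heading change per update are known.

```python
import math

def update_obstacle(obstacle_xy, dx, dy, dtheta):
    """Re-express an obstacle position (x, y) in the vehicle frame after
    the vehicle translates by (dx, dy) and rotates by dtheta radians
    (counterclockwise). Assumes planar motion; a real rover would use
    full 3-D pose from its navigation system."""
    # Shift the point into the new origin, then rotate by -dtheta.
    x = obstacle_xy[0] - dx
    y = obstacle_xy[1] - dy
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return (c * x - s * y, s * x + c * y)
```

For example, after the vehicle drives 1 m straight ahead, an obstacle previously 5 m ahead is stored as 4 m ahead; after a 90° left turn, an obstacle that was to the vehicle's left appears straight ahead.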
This work was done by Jeffrey R. Mackey of ASRC Aerospace Corp. for Glenn Research Center. For more information, download the Technical Support Package (free white paper) at www.techbriefs.com/tsp under the Physical Sciences category.
Inquiries concerning rights for the commercial use of this invention should be addressed to NASA Glenn Research Center, Innovative Partnerships Office, Attn: Steve Fedor, Mail Stop 4–8, 21000 Brookpark Road, Cleveland, Ohio 44135. Refer to LEW-18320-1.