A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating).

The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera.
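The brief does not give implementation details for these front-end computations, but a minimal sketch of the stereo back-projection step, assuming a rectified pinhole camera pair with hypothetical focal length f (pixels), baseline b (meters), and principal point (cx, cy), might look like this (NumPy is used throughout the sketches below):

```python
# A minimal sketch (not JPL's implementation) of per-pixel 3D position
# from stereo, assuming a rectified pinhole pair with hypothetical
# focal length f, baseline b, and principal point (cx, cy).
import numpy as np

def backproject(disparity, f=600.0, b=0.12, cx=320.0, cy=240.0):
    """Back-project a dense disparity map to per-pixel 3D positions (X, Y, Z)."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = f * b / np.maximum(disparity, 1e-6)  # range along the line of sight
    x = (u - cx) * z / f                     # transverse, horizontal
    y = (v - cy) * z / f                     # transverse, vertical
    return np.stack([x, y, z], axis=-1)      # shape (h, w, 3)
```

Optical flow between successive frames supplies the complementary 2D measurement; dense estimators such as OpenCV's cv2.calcOpticalFlowFarneback are a common off-the-shelf choice, though the brief does not say which estimator JPL used.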

The challenge in designing this system was to use the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, to compute the motion of the camera pair in all six degrees of translational and rotational freedom, and to robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions:

The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
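The brief names only the least-squares approach, not its exact formulation. One common way to realize it, sketched here, is the instantaneous rigid-motion model (Longuet-Higgins and Prazdny): for a stationary point P seen from a camera moving with translation t and rotation rate w, the point's apparent velocity is -t - w x P, and its projected image flow is linear in the six unknowns, so each pixel contributes two linear equations. Everything beyond the least-squares idea itself is illustrative:

```python
# A hedged sketch of least-squares ego-motion from depth plus flow.
# Inputs are flattened 1-D arrays: centered pixel coordinates (u, v),
# depth z, and measured flow (du, dv); f is the focal length in pixels.
import numpy as np

def egomotion_lstsq(u, v, z, du, dv, f=600.0):
    """Estimate camera translation t and rotation rate w (6 DOF)."""
    n = u.size
    A = np.zeros((2 * n, 6))
    # Horizontal flow: du = (-f*tx + u*tz)/z + wx*u*v/f - wy*(f + u**2/f) + wz*v
    A[0::2, 0] = -f / z
    A[0::2, 2] = u / z
    A[0::2, 3] = u * v / f
    A[0::2, 4] = -(f + u**2 / f)
    A[0::2, 5] = v
    # Vertical flow: dv = (-f*ty + v*tz)/z + wx*(f + v**2/f) - wy*u*v/f - wz*u
    A[1::2, 1] = -f / z
    A[1::2, 2] = v / z
    A[1::2, 3] = f + v**2 / f
    A[1::2, 4] = -u * v / f
    A[1::2, 5] = -u
    b = np.empty(2 * n)
    b[0::2], b[1::2] = du, dv
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params[:3], params[3:]   # (tx, ty, tz), (wx, wy, wz)
```

Because the solve is a single linear least-squares step over all pixels, it remains cheap enough for real-time use so long as independently moving pixels are a minority, consistent with the condition stated above.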

In another subsystem, pixels representative of external transversely moving objects are detected by means of differences between (1) apparent transverse velocities computed from optical flow and (2) the corresponding relative transverse velocities estimated from visual odometry under the temporary assumption that all pixels belong to the stationary background.
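A hedged sketch of that transverse-motion test, reusing the motion model above: predict the flow each pixel would exhibit if it belonged to the stationary background, then flag pixels whose measured flow deviates by more than a threshold. The threshold value here is illustrative, not from the brief:

```python
# Flag transversely moving pixels by comparing measured optical flow with
# the flow predicted from the ego-motion estimate under the temporary
# assumption that every pixel is stationary background.
import numpy as np

def predicted_flow(u, v, z, t, w, f=600.0):
    """Background flow implied by ego-motion (t, w) at each pixel."""
    tx, ty, tz = t
    wx, wy, wz = w
    du = (-f * tx + u * tz) / z + wx * u * v / f - wy * (f + u**2 / f) + wz * v
    dv = (-f * ty + v * tz) / z + wx * (f + v**2 / f) - wy * u * v / f - wz * u
    return du, dv

def transverse_movers(u, v, z, du, dv, t, w, f=600.0, thresh=1.5):
    """True where the flow residual (pixels/frame) suggests an independent mover."""
    pu, pv = predicted_flow(u, v, z, t, w, f)
    residual = np.hypot(du - pu, dv - pv)
    return residual > thresh
```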

In yet another subsystem, pixels representative of radially moving objects are detected by means of differences between (1) changes in radial distance estimated from changes in stereoscopic disparities between successive image frames and (2) the corresponding relative radial velocities estimated from visual odometry under the temporary assumption that all pixels belong to the stationary background. However, radial motion is more difficult to detect than transverse motion, especially at large distances. This difficulty is addressed by several additional processing features, including a means of estimating rates of change of stereoscopic disparities, post-processing to prevent false alarms at low signal-to-noise ratios, and exploitation of the fact that radial-motion optical flow can sometimes be distinguished from transverse-motion optical flow at short distances.
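A minimal sketch of the radial test under the same assumptions: for rectified stereo, Z = f·b/d, so the measured range rate follows from the disparity rate as dZ/dt = -f·b·(dd/dt)/d², while the predicted rate for a stationary point comes from the ego-motion estimate. Scaling the detection threshold with the propagated disparity noise is one plausible way to suppress false alarms at low signal-to-noise ratios; the noise model here is illustrative:

```python
# Flag radially moving pixels by comparing the disparity-rate-derived
# range rate with the range rate predicted from ego-motion. Signs follow
# the rigid-motion model used in the ego-motion sketch; sigma_d is a
# hypothetical disparity-noise level (pixels) and k a detection factor.
import numpy as np

def radial_movers(x, y, disparity, disparity_rate, t, w,
                  f=600.0, b=0.12, sigma_d=0.25, k=3.0):
    """True where measured and predicted range rates disagree significantly."""
    tz = t[2]
    wx, wy = w[0], w[1]
    z_rate_meas = -f * b * disparity_rate / disparity**2   # from Z = f*b/d
    z_rate_pred = -tz - wx * y + wy * x                    # stationary background
    # Range-rate uncertainty grows as Z**2: propagate disparity noise.
    sigma_zdot = f * b * sigma_d / disparity**2
    return np.abs(z_rate_meas - z_rate_pred) > k * sigma_zdot
```

The noise-scaled threshold makes explicit why radial detection degrades with distance: the same disparity noise maps to an ever-larger range-rate uncertainty as disparity shrinks.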

This work was done by Ashit Talukder and Larry Matthies of Caltech for NASA's Jet Propulsion Laboratory.

NPO-40687



This Brief includes a Technical Support Package (TSP), "Vision System Measures Motions of Robot and External Objects" (reference NPO-40687), which is available for download from the TSP library.



This article first appeared in the November 2008 issue of NASA Tech Briefs Magazine (Vol. 32, No. 11).



Overview

The document is a Technical Support Package from NASA's Jet Propulsion Laboratory (JPL) that discusses a vision system designed to measure the motions of robots and external objects. It is part of the NASA Tech Briefs series, which disseminates aerospace-related developments with broader technological, scientific, or commercial applications.

The vision system utilizes advanced techniques in passive visual sensing, primarily through the use of stereo cameras. It focuses on detecting various types of motion, including translational and rotational movements of robots, as well as the motion of external objects. The document outlines different categories of motion:

  1. Translational Motion: movement of the robot (or an object) parallel or perpendicular to the image plane.
  2. Rotational Motion: angular movement of the robot about its axes.
  3. Transverse Motion: motion of an external object parallel to the image plane.
  4. Radial Motion: motion of an external object perpendicular to the image plane, i.e., approaching or receding from the camera.

The document emphasizes the importance of detecting both transverse and radial motions, which are crucial for applications in robotics and autonomous systems. It discusses the integration of visual odometry and optical flow techniques to estimate 3D motion from a moving robot platform. The system achieves accurate motion detection by combining data from stereo image sequences and by enforcing spatial-range consistency and optical-flow consistency, as sketched below.
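As a rough illustration of how these consistency checks could fit together per frame, the following sketch chains the hypothetical helpers defined earlier (all inputs are flattened 1-D arrays, with u and v centered on the principal point); none of this is JPL's actual pipeline:

```python
# An illustrative per-frame pass: ego-motion first, then transverse and
# radial consistency checks against the stationary-background prediction.
import numpy as np

def process_frame(u, v, disparity, disparity_rate, du, dv, f=600.0, b=0.12):
    """Return ego-motion (t, w) and a boolean mask of independently moving pixels."""
    z = f * b / np.maximum(disparity, 1e-6)
    # 1. Ego-motion from all pixels (movers assumed to be a small minority).
    t, w = egomotion_lstsq(u, v, z, du, dv, f)
    # 2. Transverse movers: measured flow vs. flow predicted for background.
    transverse = transverse_movers(u, v, z, du, dv, t, w, f)
    # 3. Radial movers: disparity-rate residuals against the same prediction.
    x, y = u * z / f, v * z / f
    radial = radial_movers(x, y, disparity, disparity_rate, t, w, f, b)
    return t, w, transverse | radial
```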

Additionally, the document highlights the challenges of distinguishing between the motion of the camera itself and the motion of objects within the scene. It presents various methods for segmentation and clustering to improve the accuracy of motion detection.

Overall, the vision system represents a significant advancement in the field of robotics, providing essential capabilities for navigation, obstacle avoidance, and interaction with dynamic environments. The document serves as a resource for researchers and developers interested in the applications of this technology, offering insights into the methodologies and potential uses in various fields, including aerospace and robotics.

For further inquiries or assistance, the document provides contact information for the Innovative Technology Assets Management at JPL.