Active-pixel integrated-circuit image sensors that can be programmed in real time to provide reconfigurable artificial vision on demand have been developed and demonstrated. In imitation of a natural eye, the image sensor is designed to offer high resolution only in a small region of interest (ROI) within its field of view (FOV). In this manner, the imager reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. Such vision systems are especially attractive for applications that involve recognition of objects and/or tracking of moving targets.

The reconfigurable-vision imager developed at JPL is more powerful than a natural eye or a conventional foveal vision system. A conventional foveal vision system is subject to a number of limitations, all of which have been overcome in the reconfigurable imager:

  • The multiresolution lattices are hardwired; thus, neither the location nor the size of the ROI, nor the resolution, can be changed.
  • The multiresolution lattices are typically of a log-polar configuration, which makes them very sensitive to alignment errors and incompatible with readily available image-data-processing algorithms.
  • Pixels must be read out serially.
  • The high-resolution windows are typically located near the centers of the image frames, effectively restricting their FOVs and requiring mechanical pointing for target acquisition and tracking. Size, power demands, and slow responses of mechanical aiming mechanisms make it difficult or impossible to realize low-power, compact, real-time, active vision systems.
Figure. A prototype programmable reconfigurable-vision imager contains an array of 256 × 256 pixels and can capture image data from as many as three windows; window 1 can overlap windows 2 and 3.
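The three-window capture described above can be illustrated with a minimal sketch. This is not the on-chip control logic; the `Window` record and its fields are hypothetical, and the example only shows how overlapping rectangular readout windows on a 256 × 256 array might be represented and checked for overlap.

```python
from dataclasses import dataclass

@dataclass
class Window:
    """Hypothetical model of one programmable readout window:
    (row, col) origin plus height and width, in pixels on the
    256 x 256 array."""
    row: int
    col: int
    height: int
    width: int

    def overlaps(self, other: "Window") -> bool:
        # Two axis-aligned rectangles overlap unless one lies entirely
        # beyond the other in either dimension.
        return not (self.row + self.height <= other.row or
                    other.row + other.height <= self.row or
                    self.col + self.width <= other.col or
                    other.col + other.width <= self.col)

# Illustrative configuration: window 1 overlaps window 2,
# while windows 2 and 3 are disjoint.
w1 = Window(0, 0, 128, 128)
w2 = Window(96, 96, 64, 64)
w3 = Window(200, 16, 32, 32)
```

The overlap test mirrors the separating-axis argument for axis-aligned rectangles; the specific window coordinates are arbitrary illustrations, not values from the prototype.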

The appropriate resolution for a given ROI is generated by use of on-focal-plane analog circuits described in “Active-Pixel Image Sensors With Programmable Resolution” (NPO-19510), NASA Tech Briefs, Vol. 20, No. 5 (May 1996), page 26. Column-parallel arrays of complementary metal oxide semiconductor (CMOS) integrated circuits residing at the bottom of the CMOS pixel array operate simultaneously on the outputs from a row of pixels, either to read out individual pixels or to combine and read out the outputs of a designated rectangular group of contiguous pixels (a superpixel). Unlike the previous imager, the reconfigurable imager (see figure) supports multiple ROIs, each independently programmable with respect to size, resolution, and location within the FOV. The imager also includes a new on-chip digital column processor that generates and sequences superpixels in real time. The column processor supports independent and fast reconfiguration of the ROIs in order to provide multiresolution data in real time. Six control bits per column are needed for each ROI to set superpixel size, resolution, and location, as well as to sequence the readout. The use of multiple output ports together with the column-processor design enables the imager to provide both high- and low-resolution images from the same FOV simultaneously.
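The effect of combining a rectangular group of contiguous pixels into a superpixel can be sketched in software. This is a minimal NumPy model, not the on-focal-plane analog circuitry: the function name, the averaging choice for combining pixel outputs, and the ROI tuple layout are all assumptions made for illustration.

```python
import numpy as np

def read_superpixels(frame, roi, superpixel):
    """Average contiguous pixel groups (superpixels) inside an ROI.

    frame: 2-D array of raw pixel values
    roi: (row, col, height, width) of the region of interest
    superpixel: (sh, sw) size of each superpixel in pixels
    """
    r, c, h, w = roi
    sh, sw = superpixel
    patch = frame[r:r + h, c:c + w]
    # Trim so the patch tiles evenly into whole superpixels.
    h, w = (patch.shape[0] // sh) * sh, (patch.shape[1] // sw) * sw
    patch = patch[:h, :w]
    # Combine each sh-by-sw block into one low-resolution output value.
    return patch.reshape(h // sh, sh, w // sw, sw).mean(axis=(1, 3))
```

For example, reading a 64 × 64 ROI with 4 × 4 superpixels yields a 16 × 16 low-resolution output, while a second call with 1 × 1 superpixels over a smaller ROI would return it at full resolution, loosely mirroring the imager's simultaneous high- and low-resolution readout.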

Unlike a traditional foveal vision system, the reconfigurable vision system adapts its acuity profile on a frame-by-frame basis to improve update rates and to eliminate mechanical gazing altogether. Targets are initially detected in a wide-FOV, fast-frame-rate, coarse-acuity configuration. Following detection, spatial resolution is increased only in the vicinity of the detected objects in order to better resolve targets without wasting system resources on irrelevant scene regions. This coarse-to-fine refinement is analogous to pyramid machine-vision techniques, except that the dynamically reconfigurable vision (DRV) system does not require generation of the complete pyramid data structure or incur the overhead of high-resolution, wide-FOV, uniform-acuity imagery.
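The coarse-to-fine cycle can be sketched as a short simulation: detect candidates in a binned, wide-FOV frame, then re-read only the neighborhoods of the detections at full resolution. The threshold-based detector, the parameter names, and the fixed fine-window size are illustrative assumptions, not the DRV system's actual control algorithm.

```python
import numpy as np

def coarse_to_fine(frame, threshold=0.5, coarse=8, fine_window=16):
    """Sketch of a DRV-style cycle: coarse-acuity detection over the
    full FOV, then high-resolution ROIs only around detections."""
    h, w = frame.shape
    # Coarse pass: bin the full FOV into (coarse x coarse) superpixels.
    binned = frame[:h - h % coarse, :w - w % coarse]
    binned = binned.reshape(h // coarse, coarse,
                            w // coarse, coarse).mean(axis=(1, 3))
    # Detection: any superpixel above threshold becomes a candidate.
    targets = np.argwhere(binned > threshold)
    rois = []
    for br, bc in targets:
        # Fine pass: program a high-resolution window centered (clipped
        # at the array edge) on the detected superpixel.
        r = max(0, int(br) * coarse - fine_window // 2)
        c = max(0, int(bc) * coarse - fine_window // 2)
        rois.append(frame[r:r + fine_window, c:c + fine_window])
    return rois
```

Only the small neighborhoods of detected objects are read at full resolution, so the cost per frame scales with the number of targets rather than with the full high-resolution image size, which is the point of the coarse-to-fine refinement.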

This work was done by Bedabrata Pain and Guang Yang of Caltech for NASA’s Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.nasatech.com/tsp under the Electronic Components and Circuits category.

In accordance with Public Law 96-517, the contractor has elected to retain title to this invention. Inquiries concerning rights for its commercial use should be addressed to

Intellectual Property Group
JPL
Mail Stop 202-233
4800 Oak Grove Drive
Pasadena, CA 91109
(818) 354-2240

Refer to NPO-20866, volume and number of this NASA Tech Briefs issue, and the page number.



This Brief includes a Technical Support Package (TSP). The TSP for “Real-Time-Programmable Reconfigurable-Vision Active-Pixel Sensors” (reference NPO-20866) is currently available for download from the TSP library.





This article first appeared in the January 2002 issue of NASA Tech Briefs Magazine (Vol. 26 No. 1).



Overview

The document presents a technical overview of real-time programmable reconfigurable vision active-pixel sensors developed by Bedabrata Pain and Guang Yang at NASA's Jet Propulsion Laboratory (JPL). This innovative imaging technology aims to enhance the capabilities of active vision systems, which are crucial for various applications, including autonomous vehicles, robotic exploration, and military operations.

The core novelty of this invention lies in its focal-plane architecture, which intelligently reduces the amount of redundant data that must be read out. This enables real-time search and tracking of multiple objects by providing user-programmed variable-resolution data from designated regions of interest (ROIs). The system can dynamically adjust the size, resolution, and location of these ROIs within a given field of view (FOV), allowing efficient data capture and processing.

Unlike traditional imaging systems that require mechanical movement to focus on different areas, this reconfigurable vision system adapts its acuity profile on a frame-by-frame basis. Initially, it detects targets in a wide FOV with a fast frame rate and coarse resolution. Once targets are identified, the system increases spatial resolution only in the vicinity of these objects, optimizing resource usage and enhancing target resolution without wasting processing power on irrelevant scene regions.

The document also describes the technical specifications of the imaging system, which features a 256 x 256 pixel array capable of capturing data from multiple overlapping windows simultaneously. Each window can be independently programmed, allowing for real-time adjustments to the imaging parameters. The use of on-chip digital column processors facilitates the generation and sequencing of superpixels, enabling the system to provide multiresolution data efficiently.

The potential applications of this technology are vast, ranging from commercial and industrial uses, such as mobile robots and automatic inspection systems, to advanced military applications, including smart weapons and missile defense systems. The document emphasizes the importance of this technology in future space missions, where autonomous systems will be essential for tasks like docking with space stations and planetary exploration.

In summary, the document outlines a significant advancement in imaging technology that enhances the efficiency and effectiveness of active vision systems, paving the way for improved autonomous operations across various fields.