A versatile, ultrafast, low-power machine vision system in the form of a single integrated-circuit chip has been proposed for use in military targeting, industrial robotics, and other applications in which there are requirements for utilizing visual information in real time. The conceptual design of the system takes advantage of recent advances in the design of integrated image-sensor/processor circuits, electronic neural networks, microprocessors, submicron very-large-scale integrated (VLSI) circuits, and massively parallel computation. The system could be characterized as an eye/brain machine (EBM) because the conceptual design is intended to mimic basic functions of biological vision systems. The system would be programmable to perform vision processing at all levels analogous to those of vision processing in the human eye and brain. The system would be capable of computation at the rate of 10^12 operations per second, about 100 times the rate achievable with state-of-the-art microcomputers and digital signal-processing chips.

Figure 1 depicts the computational aspect of the conceptual EBM design. Visual information would be processed in five stages: (1) collection of raw images from sensors, (2) generation of synthetic images that augment raw images with additional information, (3) fusion of all images, (4) analysis of fused images, and (5) semantic interpretation of fused images.
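The five-stage flow can be pictured, very loosely, as a chain of processing functions. The sketch below is purely illustrative (the proposed system would realize these stages in parallel hardware, not sequential software); all function names, the toy edge-map "synthetic image," and the threshold-based "interpretation" are invented for the example.

```python
import numpy as np

def collect_raw(sensor):
    """Stage 1: read a raw frame from a sensor (stubbed with random data)."""
    return np.random.rand(64, 64)

def generate_synthetic(raw):
    """Stage 2: augment the raw image with derived information (here, an edge map)."""
    gy, gx = np.gradient(raw)
    return np.hypot(gx, gy)

def fuse(images):
    """Stage 3: fuse all raw and synthetic images (here, a pixel-wise average)."""
    return np.mean(images, axis=0)

def analyze(fused):
    """Stage 4: extract features from the fused image (toy: mean intensity)."""
    return {"mean_intensity": float(fused.mean())}

def interpret(features):
    """Stage 5: map features to a semantic label (toy threshold rule)."""
    return "bright scene" if features["mean_intensity"] > 0.5 else "dark scene"

raw = collect_raw(None)
synthetic = generate_synthetic(raw)
fused = fuse([raw, synthetic])
label = interpret(analyze(fused))
```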

The EBM would comprise the following two major subsystems:

  • The EBM eye would be a compact optoelectronic subsystem that would integrate a variety of sensors with different geometric, radiometric, and spectral parameters chosen to satisfy requirements for a specific application.
  • The EBM brain would be a high-performance control and data-handling subsystem that would contain most of the computational resources needed to perform machine-vision tasks.
Figure 1. The Computational Aspect of the Proposed System is based on a simplified model of the human vision system, with five stages of processing of image data.

An earlier version of the EBM, called the "Viewing Imager/Gimballed Instrumentation Laboratory and Analog Neural Three-Dimensional Processing Experiment" (VIGILANTE), was described in "'Smart' Optoelectronic Sensor System for Recognizing Targets" (NPO-20357), NASA Tech Briefs, Vol. 22, No. 8 (August 1998), page 46. A prototype of the VIGILANTE was assembled largely from commercially available components. The conceptual design of the proposed system is based partly on lessons learned from the VIGILANTE prototype.

Figure 2 depicts the electronic aspect of the conceptual EBM design. An active-pixel-sensor (APS) camera, a "smart" window handler, a programmable neural computer, a microcomputer, and other subsystems would be put together on a single chip. All subsystems on the chip would be connected in a row/column-parallel image-data-flow architecture that would eliminate the data-bandwidth bottlenecks of older data-bus architectures.

The APS camera would constitute the array of sensors of the system. Under the control of the "smart" window handler, windowed image data from the APS camera would be fed to the neural computer.
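The windowing step amounts to selecting a rectangular region of interest from the full APS frame so that downstream stages process only the relevant pixels. A minimal sketch of that operation, with an invented `read_window` helper (the real window handler would be on-chip circuitry, not software):

```python
import numpy as np

def read_window(frame, row, col, height, width):
    """Return a rectangular window of the frame, clipped to the frame bounds."""
    r0, c0 = max(row, 0), max(col, 0)
    r1 = min(row + height, frame.shape[0])
    c1 = min(col + width, frame.shape[1])
    return frame[r0:r1, c0:c1]

frame = np.arange(16).reshape(4, 4)   # stand-in for a 4x4 APS frame
window = read_window(frame, 1, 1, 2, 2)
```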

Figure 2. All of the Subsystems of the proposed system would be put together on a single IC chip.

The neural computer would generate synthetic images, fuse all images, and analyze the fused images (stages 2, 3, and 4 of image-data processing) at a high speed that would be achieved by a combination of programmability and massively parallel computing structures. A prototype of the neural computer was described in "VLSI Neural Processors Based on Optimization Neural Networks" (NPO-19989), NASA Tech Briefs, Vol. 22, No. 1 (January 1998), page 58.
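As a toy illustration of neural image fusion (stage 3), one can imagine one "neuron" per pixel computing a weighted sum of the co-registered input images followed by a nonlinearity. The function name, weights, and sigmoid choice below are all assumptions for the sketch, not details of the actual neural-computer design:

```python
import numpy as np

def neural_fuse(images, weights):
    """Fuse a stack of co-registered images with per-image weights,
    then apply a sigmoid nonlinearity (one 'neuron' per pixel)."""
    images = np.asarray(images)              # shape: (n_images, H, W)
    w = np.asarray(weights).reshape(-1, 1, 1)
    z = (w * images).sum(axis=0)             # weighted sum across the stack
    return 1.0 / (1.0 + np.exp(-z))          # sigmoid activation

fused = neural_fuse([np.zeros((8, 8)), np.ones((8, 8))], [0.5, 0.5])
```

In the proposed chip this computation would run for all pixels simultaneously in massively parallel analog/digital hardware, which is where the speed advantage over a sequential processor comes from.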

The microcomputer would control the overall operation of the system and would perform the scene-interpretation functions (stage 5 of image-data processing). The system could communicate with a host computer via a multibus interface unit.

The following are the anticipated advantages of building the system as a single IC chip instead of assembling it from components that are now commercially available:

  • A higher degree of system-level integration could be achieved. System-level integration offers the capability to implement innovative parallel-processing architectures and ultrafast data-transfer structures while increasing the robustness of the system.
  • The power consumed by the proposed single-chip system would be about 10 W, whereas a version of the system capable of equal data throughput and assembled from commercially available components would consume about 100 W.
  • Integration of all subsystems onto a single chip would entail shorter interconnections with fewer contacts and driver circuits, enabling the proposed system to operate at greater frame rates and processing speeds.
  • The size of the proposed system would be about one-tenth that of the version of the system assembled from commercially available parts. The cost of mass-producing the system would be reduced accordingly.

This work was done by Wai-Chi Fang of Caltech for NASA's Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.nasatech.com/tsp under the Electronics & Computers category. NPO-20449
