Synthetic Foveal Imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT addresses the image-data-processing problem posed by proposed gigapixel mosaic focal-plane image-detector assemblies, which would provide very wide fields of view at high resolution for detecting and tracking sparse objects or events within narrow subfields of view. Without the dynamic adaptation afforded by SyFT, identifying and tracking such objects or events would require post-processing an image-data space of terabytes. Such post-processing would be time-consuming and, as a consequence, significant events could be missed entirely because of their time evolution, or could not be observed at the required fidelity without such real-time adaptations as adjusting focal-plane operating conditions or re-aiming the focal plane to track the events.
The basic concept of foveal imaging is straightforward: in imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. This basic concept is not new in itself: image sensors based on it have been described in several previous NASA Tech Briefs articles. One example is active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand. What is new in SyFT is a synergistic combination of recent advances in foveal imaging, computing, and related fields, along with a generalization of the basic foveal-vision concept to admit a synthetic fovea that is not restricted to one contiguous region of an image.
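The data-reduction idea behind foveal readout can be illustrated with a short sketch: transfer the ROI at full resolution and only a coarsely downsampled copy of the rest of the frame. The function name, frame size, and downsampling factor below are illustrative assumptions, not part of the JPL design.

```python
import numpy as np

def foveal_readout(frame, roi, periphery_factor=8):
    """Return a full-resolution ROI crop plus a coarsely downsampled
    copy of the whole frame, standing in for peripheral vision.

    frame: 2-D array of pixel values
    roi:   (row, col, height, width) of the region of interest
    """
    r, c, h, w = roi
    fovea = frame[r:r + h, c:c + w]                            # full resolution
    periphery = frame[::periphery_factor, ::periphery_factor]  # coarse sampling
    return fovea, periphery

# A 4096x4096 frame read this way transfers far fewer pixels than the
# ~16.8 megapixels of a full-resolution readout.
frame = np.zeros((4096, 4096), dtype=np.uint16)
fovea, periphery = foveal_readout(frame, roi=(1000, 1200, 256, 256))
print(fovea.size + periphery.size, "vs", frame.size)  # 327680 vs 16777216
```

The periphery here is simple stride-based decimation; a real sensor would typically bin or average pixels on-chip, but the bandwidth argument is the same.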
The figure depicts a mesh-connected SyFT architecture as applied to a focal-plane mosaic of homogeneous or heterogeneous image sensors. The architecture provides a networked array of reprogrammable controllers for autonomous low-level control with on-the-fly processing of image data from individual image sensors. Each image sensor in the mosaic focal plane is mapped to one of the controllers so that, taken together, the reprogrammable controllers constitute a conceptual (though not necessarily a geometric) image-processing plane corresponding to the mosaic focal plane. The controllers can be made versatile enough to control, and to process pixel data from, both charge-coupled-device (CCD) and complementary metal oxide/semiconductor (CMOS) image sensors in the mosaic focal plane. The image sensors can also have multiple pixel-data outputs, each with dedicated processing circuitry in its associated controller, to achieve high throughput with real-time processing for feature detection and processing.
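The sensor-to-controller mapping and the mesh connectivity described above can be sketched as follows. This is a minimal illustration under assumed names; the class and field names are not from the JPL design.

```python
# Each sensor tile in an R x C mosaic is paired with one controller,
# and controllers are linked to their four mesh neighbors, forming the
# conceptual image-processing plane described in the article.

class Controller:
    def __init__(self, row, col):
        self.row, self.col = row, col
        self.neighbors = {}   # direction -> neighboring Controller

def build_processing_plane(rows, cols):
    plane = [[Controller(r, c) for c in range(cols)] for r in range(rows)]
    for r in range(rows):
        for c in range(cols):
            ctrl = plane[r][c]
            if r > 0:
                ctrl.neighbors["north"] = plane[r - 1][c]
            if r < rows - 1:
                ctrl.neighbors["south"] = plane[r + 1][c]
            if c > 0:
                ctrl.neighbors["west"] = plane[r][c - 1]
            if c < cols - 1:
                ctrl.neighbors["east"] = plane[r][c + 1]
    return plane

plane = build_processing_plane(4, 4)
print(len(plane[1][1].neighbors))  # interior controller: 4 mesh links
print(len(plane[0][0].neighbors))  # corner controller: 2 mesh links
```

A mesh topology keeps each controller's wiring local, which is what allows the array to scale to different mosaic sizes without rewiring the whole plane.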
Each controller includes a routing processor to implement the network protocol and define the network topology for real-time transfer of raw pixel data and processed results between controllers. The network protocol, and the capability to implement it, are essential to realizing synthetic foveal imaging across the entire mosaic focal plane. The processing and networking capabilities of the controllers will enable real-time access to data from multiple image sensors, with application-level control of one or more ROIs within the mosaic focal-plane array for sharing of detected data features among controllers. These capabilities will effectively facilitate the equivalent of rewiring and reconfiguration with different sensors in the mosaic, with scalability to different mosaic sizes dictated by application requirements. Consequently, the mosaic focal plane is treated as an integrated ensemble of synthetic foveal regions that can traverse the entire mosaic for autonomous, intelligent feature detection and tracking. Unlike the current state of the art in image sensors, “SyFTing” enables intelligent sifting through vast amounts of image data by treating a mosaic focal plane of sensors as an integrated ensemble rather than a collection of isolated sensors.
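One consequence of application-level ROI control is that a synthetic fovea may straddle sensor boundaries, so the routing layer must identify every controller that shares a given ROI. The sketch below illustrates that bookkeeping; the tile size and function name are assumptions for illustration only.

```python
# Illustrative only: given an ROI in global mosaic pixel coordinates,
# find the (tile_row, tile_col) indices of the sensor tiles it covers.
# The controllers mapped to those tiles must cooperate on the ROI.

TILE = 1024  # pixels per sensor tile on a side (assumed)

def controllers_for_roi(row, col, height, width):
    """Return tile indices whose controllers share this ROI; the ROI
    may cross sensor boundaries and so involve several controllers."""
    first_r, last_r = row // TILE, (row + height - 1) // TILE
    first_c, last_c = col // TILE, (col + width - 1) // TILE
    return [(tr, tc)
            for tr in range(first_r, last_r + 1)
            for tc in range(first_c, last_c + 1)]

# An ROI straddling a tile corner involves four controllers:
print(controllers_for_roi(1000, 1000, 100, 100))
# -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

In the mesh architecture, those cooperating controllers are adjacent by construction, so the feature data for a moving ROI only ever travels between neighbors as the synthetic fovea traverses the mosaic.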
This work was done by Michael Hoenk, Steve Monacos, and Shouleh Nikzad of Caltech for NASA’s Jet Propulsion Laboratory. For more information, download the Technical Support Package (free white paper) at www.techbriefs.com/tsp under the Electronics/Computers category.
In accordance with Public Law 96-517, the contractor has elected to retain title to this invention. Inquiries concerning rights for its commercial use should be addressed to:
Innovative Technology Assets Management
JPL
Mail Stop 202-233
4800 Oak Grove Drive
Pasadena, CA 91109-8099
(818) 354-2240
Refer to NPO-44209, volume and number of this NASA Tech Briefs issue, and the page number.
This Brief includes a Technical Support Package (TSP).

Synthetic Foveal Imaging Technology (reference NPO-44209) is currently available for download from the TSP library.
Overview
The document outlines NASA's Synthetic Foveal Imaging Technology (SyFT), a cutting-edge system designed for real-time identification and tracking of targets across various applications. Developed by NASA's Jet Propulsion Laboratory, SyFT is particularly notable for its ultrahigh resolution and wide field of view (FOV) capabilities, making it suitable for diverse fields such as astronomy, military operations, and civil investigations.
SyFT employs a sophisticated architecture that includes a data storage plane, an image processing plane, and an imager plane. The data storage plane is responsible for managing raw pixel data, processed pixel data, and markers for detected features. The image processing plane consists of programmable imager controllers that facilitate event detection, feature identification, tracking, and intelligent data mining. This modular and scalable architecture allows for the integration of various imager types, including CCD and CMOS imagers, enabling flexibility in application.
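The three-plane split summarized above can be captured in a few lines of structural code. All class and field names here are illustrative assumptions, not the JPL design.

```python
# Minimal sketch of the SyFT architecture: an imager plane of sensors,
# an image-processing plane of programmable controllers, and a data
# storage plane holding raw pixels, processed pixels, and feature markers.
from dataclasses import dataclass, field

@dataclass
class ImagerPlane:
    sensors: list = field(default_factory=list)       # CCD and/or CMOS imagers

@dataclass
class ImageProcessingPlane:
    controllers: list = field(default_factory=list)   # programmable imager controllers

@dataclass
class DataStoragePlane:
    raw_pixels: list = field(default_factory=list)
    processed_pixels: list = field(default_factory=list)
    feature_markers: list = field(default_factory=list)  # tags for detected features

@dataclass
class SyFTSystem:
    imagers: ImagerPlane = field(default_factory=ImagerPlane)
    processing: ImageProcessingPlane = field(default_factory=ImageProcessingPlane)
    storage: DataStoragePlane = field(default_factory=DataStoragePlane)

system = SyFTSystem()
system.storage.feature_markers.append({"frame": 0, "roi": (10, 20, 32, 32)})
print(len(system.storage.feature_markers))  # -> 1
```

Keeping feature markers separate from pixel data is what makes the intelligent data mining mentioned above practical: detections can be searched without touching the raw-pixel store.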
The technology's applications are extensive. In the realm of NASA, SyFT can be utilized for observing supernovae, tracking dust devils, and identifying stars in cloudy fields. In military contexts, it aids in object identification in urban environments and enhances optical communication capabilities. Additionally, in civil and commercial sectors, SyFT can be instrumental in crime scene investigations, providing law enforcement with advanced tools for evidence analysis.
The document also highlights the concept of a Smart Imager Cell, which serves as a building block for the system. This cell features a re-programmable controller that allows for adaptive control and configuration, enhancing the system's responsiveness to various imaging needs. The block diagram of the re-programmable imager controller illustrates the intricate design that supports pixel data routing and processing logic, ensuring efficient operation.
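The adaptive behavior of the Smart Imager Cell can be sketched as a sensor paired with a controller whose processing routine is swappable at run time. This is a hypothetical stand-in for the re-programmable controller, with illustrative names throughout.

```python
# Hypothetical Smart Imager Cell: the controller's processing routine
# can be replaced on the fly, modeling reprogrammable, adaptive control.

class SmartImagerCell:
    def __init__(self, sensor_read):
        self.read = sensor_read              # callable returning a frame
        self.process = lambda frame: frame   # default routine: pass-through

    def reprogram(self, routine):
        """Load a new on-the-fly processing routine into the cell."""
        self.process = routine

    def step(self):
        """Read one frame and run the current routine on it."""
        return self.process(self.read())

# Reprogram the cell with a trivial detector that reports the
# brightest pixel value in the frame:
cell = SmartImagerCell(lambda: [[0, 5], [9, 2]])
cell.reprogram(lambda f: max(max(row) for row in f))
print(cell.step())  # -> 9
```

In hardware, "reprogramming" would mean reloading controller firmware or FPGA logic rather than swapping a Python callable, but the control flow, routing each frame through whatever routine is currently loaded, is the same idea.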
Overall, the Synthetic Foveal Imaging Technology represents a significant advancement in imaging capabilities, combining high-resolution imaging with intelligent processing to meet the demands of various scientific, military, and commercial applications. The document serves as a technical support package, providing insights into the technology's development and potential uses, while also emphasizing the importance of compliance with export regulations and proprietary information considerations.

