A novel cognitive computing architecture is conceptualized for processing multiple channels of multi-modal sensory data streams simultaneously and fusing the information in real time to generate intelligent reaction sequences. This unique architecture assimilates parallel data streams that may be analog or digital, synchronous or asynchronous, and can be programmed to act as a knowledge synthesizer and/or an “intelligent perception” processor. In this architecture, bio-inspired models of visual-pathway and olfactory-receptor processing are combined as processing components to achieve the composite function of “searching for a source of food while avoiding the predator.” The architecture is particularly suited for scene analysis from visual data and odorant signature identification in a heterogeneous environment.

The architecture comprises four basic blocks: input, output, processing, and storage. The input block consists of sensing devices, including IR, lidar, radar, visual, chemical, and biosensors, each at its own sampling data rate. Based on the application scenario, the input block sends selected sensory streams to the subsequent processing block in a fully parallel fashion. Feature data is extracted from the analog/digital sensory streams and accumulated in the storage block, enriching the “knowledge base” as a situation unfolds. Unlike the usual approach in current computer architectures, the incoming raw data is not stored as-is; if required, it is reconstructed in real time during processing. The output block sends output signals to various actuating interfaces, such as other machines, humans, or RF devices. The processing block consists of several mathematical constructs, including Principal Component Analysis (PCA), Independent Component Analysis (ICA), Neural Network (NN), and Genetic Algorithm (GA), and is controlled by a hierarchy of logical rules to enact reasoning, reconfiguring, and adapting as required when the target changes in the dynamic environment. Therefore, the processing block can dynamically select an architecture for each particular application as needed, while remaining compatible with a digital environment. The conceptualized architecture, capable of extracting knowledge from information and using that knowledge for reasoning, adapting, and reacting, therefore qualifies as a cognitive architecture for real-time data fusion in a dynamic environment. Furthermore, its dynamic autonomous reconfigurability makes it versatile as a “general-purpose” intelligent system to accomplish the “searching for a source of food while avoiding the predator” function.
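As a rough illustration of this four-block organization, the following Python sketch shows a processing block that selects a mathematical construct per sensory stream under a simple logical rule, and a storage block that accumulates extracted features rather than raw samples. All class names, the rule table, and the stream shapes are hypothetical, not taken from the Brief; this is a minimal sketch of the idea, not the Brief's implementation.

```python
import numpy as np

class StorageBlock:
    """Accumulates extracted features (not raw samples) as the
    'knowledge base'; raw data would be approximately reconstructed
    from stored features only if needed."""
    def __init__(self):
        self.features = []

    def accumulate(self, feat):
        self.features.append(feat)

class ProcessingBlock:
    """Selects a processing construct per stream via simple logical
    rules; a stand-in for the rule hierarchy described in the Brief."""
    def __init__(self):
        # Hypothetical rule table: sensor modality -> construct.
        self.rules = {"visual": self.pca_features,
                      "chemical": self.ica_features}

    def pca_features(self, window, k=2):
        # Batch PCA via SVD on a window of samples (rows = samples).
        centered = window - window.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:k].T          # k principal components

    def ica_features(self, window, k=2):
        # Placeholder: whitening only; a full ICA (e.g., FastICA)
        # would further rotate the whitened data toward independence.
        centered = window - window.mean(axis=0)
        u, _, _ = np.linalg.svd(centered, full_matrices=False)
        return u[:, :k] * np.sqrt(window.shape[0])

    def process(self, modality, window):
        return self.rules[modality](window)

# Input block: two parallel streams at different rates (simulated).
rng = np.random.default_rng(0)
visual = rng.normal(size=(100, 8))     # e.g., 8 visual channels
chemical = rng.normal(size=(40, 4))    # e.g., 4 chemosensor channels

storage, proc = StorageBlock(), ProcessingBlock()
for modality, window in (("visual", visual), ("chemical", chemical)):
    storage.accumulate(proc.process(modality, window))

# An output block would map fused features to actuation commands.
print([f.shape for f in storage.features])   # [(100, 2), (40, 2)]
```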

This work was done by Tuan A. Duong and Vu A. Duong of Caltech for NASA’s Jet Propulsion Laboratory.

In accordance with Public Law 96-517, the contractor has elected to retain title to this invention. Inquiries concerning rights for its commercial use should be addressed to:

Innovative Technology Assets Management
JPL
Mail Stop 202-233
4800 Oak Grove Drive
Pasadena, CA 91109-8099

NPO-46633



This Brief includes a Technical Support Package (TSP). “Real-Time Cognitive Computing Architecture for Data Fusion in a Dynamic Environment” (reference NPO-46633) is currently available for download from the TSP library.





This article first appeared in the January 2012 issue of NASA Tech Briefs Magazine (Vol. 36 No. 1).



Overview

The document presents a technical overview of a Real-Time Cognitive Computing Architecture developed by NASA's Jet Propulsion Laboratory (JPL) for processing and fusing multi-modal sensory data streams in dynamic environments. This architecture is designed to handle various types of data, including analog and digital signals, and is capable of real-time analysis to generate intelligent responses.

The architecture integrates a bio-inspired model of the visual cortex with Spatial Invariant Independent Component Analysis (SPICA). This combination allows the system to process sensory information much as biological systems do, particularly in tasks such as searching for food while avoiding predators. The architecture is structured to assimilate parallel data streams, making it suitable for applications that require rapid and efficient data processing.
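SPICA itself is not specified in this Brief; as a stand-in, the sketch below uses the standard FastICA algorithm (via scikit-learn) to illustrate the underlying idea of recovering independent sources, such as odorant signatures, from mixed chemosensor readings. The signal shapes and mixing matrix are invented for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA  # standard ICA as a SPICA stand-in

# Simulate two odorant source signals mixed at a 4-element sensor array.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
sources = np.c_[np.sin(40 * t), np.sign(np.sin(7 * t))]   # (500, 2)
mixing = rng.normal(size=(2, 4))                          # hypothetical mixing
observed = sources @ mixing                               # (500, 4) sensor data

# Recover statistically independent components (the source signatures,
# up to permutation and scale) from the mixture alone.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)                   # (500, 2)

# Each recovered component should correlate strongly (up to sign)
# with exactly one of the true sources.
corr = np.corrcoef(np.c_[recovered, sources].T)[:2, 2:]
print(np.round(np.abs(corr), 2))
```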

Key components of the architecture include Real-Time Principal Component Analysis (PCA) for feature extraction, a Cascade Error Projection neural network for processing, and a feedback network for analyzing color, shape, and edge features. These elements work together to enhance the system's ability to recognize visual patterns and detect chemical signatures in complex environments.
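The Cascade Error Projection network is the authors' own construction and is not reproduced here; the sketch below instead illustrates the real-time PCA idea with Oja's rule, a classic streaming estimator that tracks the leading principal component with one constant-cost update per sample and no stored raw data, consistent with the storage approach described above. The learning rate and stream statistics are illustrative assumptions.

```python
import numpy as np

def oja_pca(stream, dim, lr=0.01):
    """Track the leading principal component of a data stream with
    Oja's rule: one O(dim) update per sample, no raw data retained."""
    w = np.random.default_rng(2).normal(size=dim)
    w /= np.linalg.norm(w)
    for x in stream:
        y = w @ x                      # projection onto current estimate
        w += lr * y * (x - y * w)      # Oja update; keeps ||w|| near 1
    return w / np.linalg.norm(w)

# Stream whose dominant variance lies along a known direction.
rng = np.random.default_rng(3)
direction = np.zeros(8)
direction[:2] = 1 / np.sqrt(2)
samples = rng.normal(size=(5000, 1)) * direction \
          + 0.1 * rng.normal(size=(5000, 8))
w = oja_pca(samples, dim=8)
print(np.abs(w @ direction))   # close to 1: dominant axis recovered
```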

The document highlights two primary applications of the architecture: visual recognition in dynamic settings and odorant detection in mixtures. The results from simulations indicate that the integration of visual and olfactory data significantly improves the system's performance in identifying and responding to environmental stimuli.
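The Brief does not give the fusion rule; the following minimal sketch assumes a simple weighted late fusion of per-class confidence scores, just to show why combining the two modalities can disambiguate a decision that either modality alone leaves uncertain. All names, weights, and numbers are invented for illustration.

```python
import numpy as np

def fuse(visual_scores, olfactory_scores, w_visual=0.6):
    """Weighted late fusion of per-class confidence scores from the
    two modalities; the weight is a hypothetical tuning parameter."""
    return w_visual * visual_scores + (1 - w_visual) * olfactory_scores

# Per-class scores for {food, predator, neutral}: each modality is
# ambiguous on its own, but the fused scores single out 'food'.
visual    = np.array([0.45, 0.40, 0.15])
olfactory = np.array([0.55, 0.10, 0.35])
print(fuse(visual, olfactory))   # [0.49 0.28 0.23] -> 'food'
```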

The architecture's design addresses the limitations of traditional sequential computing systems, which can be inefficient for applications that require parallel processing. By leveraging cognitive computing principles, the architecture aims to provide faster and more effective solutions for real-time data fusion and analysis.

Overall, this research showcases the potential of cognitive computing to enhance sensor-fusion technologies, with implications for various fields, including robotics, environmental monitoring, and intelligent systems. The accompanying Technical Support Package provides further detail on the architecture's capabilities and its relevance to broader technological and scientific advancements.