Simple and effective learning functions and adaptive elements can be placed into small hardware systems such as instruments for space, bioimplantable devices, and stochastic observers.

This innovation is a method by which single- to multi-input, single- to multi-output system transfer functions can be estimated from input/output data sets. The method can run in the background while a system is operating under other means (e.g., through human operator effort), or can be applied offline to data sets created from observations of the system being estimated. It uses a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions create the output(s) of the estimator, and their coefficients are adjusted online by learning algorithms.
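
As a rough illustration of this structure, the sketch below (in C, reflecting the microcontroller target discussed later) evaluates triangular membership functions for two inputs, forms an activation for each combination of membership functions, and blends the outputs of the associated linear combiners according to those activations. The two-input/one-output sizing, the triangular membership shape, the activation-normalized sum, and all identifiers are illustrative assumptions, not details taken from the brief.

```c
/* Minimal sketch of the estimator's forward pass: triangular input
 * membership functions feeding linear combiners, one combiner per
 * combination of membership functions.  Sizes, shapes, and the
 * activation-normalized sum are illustrative assumptions only. */
#include <stdio.h>

#define N_INPUTS 2                 /* number of input variables        */
#define N_MF     3                 /* membership functions per input   */
#define N_RULES  (N_MF * N_MF)     /* all combinations for two inputs  */

/* Triangular membership function with center c and half-width w. */
static float tri_mf(float x, float c, float w)
{
    float d = (x > c) ? (x - c) : (c - x);
    return (d >= w) ? 0.0f : 1.0f - d / w;
}

/* Coefficients and constants of the linear combiners (normally
 * initialized to random values and adjusted by the learning rule). */
static float coef[N_RULES][N_INPUTS];
static float bias[N_RULES];

/* Membership-function centers spanning a [0,1] input space. */
static const float center[N_MF] = { 0.0f, 0.5f, 1.0f };
static const float width = 0.5f;

/* Estimator output for one input vector. */
static float estimate(const float x[N_INPUTS])
{
    float num = 0.0f, den = 0.0f;
    for (int i = 0; i < N_MF; i++) {
        for (int j = 0; j < N_MF; j++) {
            int r = i * N_MF + j;
            /* Rule activation: product of the two memberships. */
            float a = tri_mf(x[0], center[i], width) *
                      tri_mf(x[1], center[j], width);
            /* Linear combiner associated with this combination. */
            float y = coef[r][0] * x[0] + coef[r][1] * x[1] + bias[r];
            num += a * y;
            den += a;
        }
    }
    return (den > 0.0f) ? num / den : 0.0f;
}

int main(void)
{
    float x[N_INPUTS] = { 0.25f, 0.75f };
    printf("estimate = %f\n", estimate(x));
    return 0;
}
```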

This innovation has been demonstrated to create usable models of a variety of complex transfer functions, such as a continuous exclusive-OR function, a time-domain (slew-rate) filter, an automatic gain controller, a nonlinear algebraic function calculator, and more. It was created specifically for embedding within microcontrollers, allowing simple and effective placement of learning functions and adaptive elements into small hardware systems such as instruments for space, bioimplantable devices, and stochastic observers.
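
For concreteness, the short program below generates the kind of software-created data set mentioned above for a continuous exclusive-OR target. The particular surface f(x, y) = x + y - 2xy is one common smooth generalization of XOR on the unit square and is assumed here purely for illustration; the brief does not state which target functions or sampling grids were actually used.

```c
/* Illustrative software-generated training set for a continuous
 * exclusive-OR target.  The surface f(x,y) = x + y - 2xy is an assumed
 * smooth generalization of XOR, used only for demonstration. */
#include <stdio.h>

int main(void)
{
    /* 0.25 is exactly representable, so the loop hits 0, .25, ..., 1. */
    for (float x = 0.0f; x <= 1.0f; x += 0.25f) {
        for (float y = 0.0f; y <= 1.0f; y += 0.25f) {
            float d = x + y - 2.0f * x * y;     /* desired output */
            printf("%.2f %.2f %.4f\n", x, y, d);
        }
    }
    return 0;
}
```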

Small spaceflight (and other) instruments have been confined to simple systems built around a microcontroller core. Because learning algorithms typically reside on larger computational platforms and are rather complex (neural networks, for example), a simpler approach to self-learning, auto-adaptive systems is attractive for smaller embodiments. Fuzzy logic systems lend themselves well to microcontrollers, but adaptive fuzzy systems also require a good deal of computational power. Thus, the simpler components of the fuzzy system (the input membership functions) and of the back-error-propagation neural network (the linear combiner) were selected and fused into a simple two-layer system that can be easily embedded into common microcontrollers.
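
Because the membership-function layer is the piece that must run on the smallest processors, a fixed-point realization is a natural fit. The sketch below shows one way a triangular membership function might be evaluated in Q15 integer arithmetic with a precomputed reciprocal slope; the Q15 format, the triangular shape, and all identifiers are assumptions for illustration rather than details of the actual embedded realization.

```c
/* Hedged sketch of a triangular membership function in Q15 fixed point,
 * the kind of integer-only arithmetic common on small microcontrollers.
 * Format and precomputed reciprocal slope are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

#define Q15_ONE 32768L   /* 1.0 in Q15: [0,1] maps to [0,32768] */

/* Membership with center c and half-width w, all in Q15.
 * inv_w = (Q15_ONE * Q15_ONE) / w is precomputed once so the per-sample
 * evaluation needs only a subtract, a multiply, and a shift. */
static int32_t tri_mf_q15(int32_t x, int32_t c, int32_t w, int32_t inv_w)
{
    int32_t d = (x > c) ? (x - c) : (c - x);   /* |x - c|            */
    if (d >= w)
        return 0;                              /* outside the support */
    return Q15_ONE - ((d * inv_w) >> 15);      /* 1 - |x - c| / w     */
}

int main(void)
{
    int32_t c = 16384, w = 16384;              /* center 0.5, half-width 0.5 */
    int32_t inv_w = (Q15_ONE * Q15_ONE) / w;   /* reciprocal slope = 65536   */
    printf("mf(0.75) = %ld / 32768\n", (long)tri_mf_q15(24576, c, w, inv_w));
    return 0;
}
```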

The training method used is an LMS (least mean square) algorithm based on a modification to the Widrow-Hoff learning algorithm. Coefficients and constants for each linear combiner were initialized to random values. Training data, taken from observations of a user's input(s) to a system and the resultant output(s) in real time or a posteriori, or from software-generated data sets, were presented to the estimator; its outputs were compared with the observed outputs, and the resulting error drove the adjustment of the coefficients and constants. Once a system is learned, the coefficients and constants can be frozen and the algorithm embedded in an application.
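
The brief does not spell out the modification to the Widrow-Hoff rule, so the sketch below shows only a generic LMS step in which each combiner's coefficients and constant are nudged in proportion to the output error and to that combiner's activation from the membership layer. The activation-gated credit assignment, the learning-rate value, and the random initialization range are assumptions made for illustration.

```c
/* Hedged sketch of one LMS (Widrow-Hoff style) training step.  The
 * exact modification used in the innovation is not described in the
 * brief; the activation-gated update below is an assumption. */
#include <stdlib.h>

#define N_INPUTS 2
#define N_RULES  9
#define MU       0.05f            /* learning-rate constant (assumed) */

static float coef[N_RULES][N_INPUTS];
static float bias[N_RULES];

/* Initialize coefficients and constants to small random values. */
static void init_random(void)
{
    for (int r = 0; r < N_RULES; r++) {
        bias[r] = (float)rand() / RAND_MAX - 0.5f;
        for (int i = 0; i < N_INPUTS; i++)
            coef[r][i] = (float)rand() / RAND_MAX - 0.5f;
    }
}

/* One LMS step: x is the input vector, act[r] the activation of rule r
 * from the membership layer, y the estimator output, d the desired
 * (observed) output from the training pair. */
static void lms_step(const float x[N_INPUTS], const float act[N_RULES],
                     float y, float d)
{
    float err = d - y;                       /* output error              */
    for (int r = 0; r < N_RULES; r++) {
        float g = MU * err * act[r];         /* error gated by activation */
        bias[r] += g;                        /* constant term             */
        for (int i = 0; i < N_INPUTS; i++)
            coef[r][i] += g * x[i];          /* combiner coefficients     */
    }
}

int main(void)
{
    float x[N_INPUTS]  = { 0.25f, 0.75f };
    float act[N_RULES] = { 0 };
    act[4] = 1.0f;                           /* dummy single active rule  */
    init_random();
    lms_step(x, act, 0.1f, 0.6f);            /* one training presentation */
    return 0;
}
```

After training settles, the learned coefficients and constants would simply be stored as constants in the embedded application, consistent with the freezing step described above.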

This work was done by Michael J. Krasowski and Norman F. Prokop of Glenn Research Center.

Inquiries concerning rights for the commercial use of this invention should be addressed to NASA Glenn Research Center, Innovative Partnerships Office, Attn: Steven Fedor, Mail Stop 4–8, 21000 Brookpark Road, Cleveland, Ohio 44135. LEW-18887-1