An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers.] Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the system dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
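
As a concrete illustration, the following NumPy sketch implements a small network of this general kind: a single hidden layer with one-step-delayed self-feedback and cross-talk, trained by gradient descent in which the gradient of the prediction at step t+1 is recursed one step back to step t. This is not the authors' code; the layer sizes, learning rate, and noisy-sine example are all illustrative, and for brevity it predicts one step ahead (the multi-step case would feed the network's own predictions back as inputs).

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 2, 6, 1                  # illustrative layer sizes
    W_in = rng.normal(0, 0.3, (n_hid, n_in))      # input -> hidden weights
    W_rec = rng.normal(0, 0.3, (n_hid, n_hid))    # hidden -> hidden (one-step-delayed
                                                  # self-feedback and cross-talk)
    W_out = rng.normal(0, 0.3, (n_out, n_hid))    # hidden -> output weights
    b_h, b_o = np.zeros(n_hid), np.zeros(n_out)

    def forward(xs):
        """Run the RMLP over a sequence, keeping states for the gradient pass."""
        h = np.zeros(n_hid)
        hs, ys = [h], []
        for x in xs:
            h = np.tanh(W_in @ x + W_rec @ h + b_h)   # delayed feedback enters here
            hs.append(h)
            ys.append(W_out @ h + b_o)
        return hs, ys

    def gradient_descent_step(xs, targets, lr=0.05):
        """One update of weights and biases on squared prediction error."""
        hs, ys = forward(xs)
        gW_in, gW_rec = np.zeros_like(W_in), np.zeros_like(W_rec)
        gW_out = np.zeros_like(W_out)
        gb_h, gb_o = np.zeros_like(b_h), np.zeros_like(b_o)
        da_next = np.zeros(n_hid)          # direct-gradient term carried from step t+1
        for t in reversed(range(len(xs))):
            e = ys[t] - targets[t]                  # prediction error at step t
            gW_out += np.outer(e, hs[t + 1])
            gb_o += e
            dh_direct = W_out.T @ e                 # gradient through the output at t
            # One-step recursion: include the gradient of the step-(t+1)
            # prediction with respect to the step-t state, which flows
            # through the delayed feedback weights.
            dh = dh_direct + W_rec.T @ da_next
            da = dh * (1.0 - hs[t + 1] ** 2)        # tanh'(a) = 1 - tanh(a)^2
            gW_in += np.outer(da, xs[t])
            gW_rec += np.outer(da, hs[t])
            gb_h += da
            # Carry only the direct part back one step; carrying da itself
            # would extend the recursion to earlier time steps, the
            # conjectured extension mentioned above.
            da_next = dh_direct * (1.0 - hs[t + 1] ** 2)
        for p, g in ((W_in, gW_in), (W_rec, gW_rec), (W_out, gW_out),
                     (b_h, gb_h), (b_o, gb_o)):
            p -= lr * g                             # gradient-descent rule

    # Illustrative use: one-step-ahead prediction of a noisy sine wave.
    t = np.linspace(0, 8 * np.pi, 200)
    s = np.sin(t) + 0.05 * rng.normal(size=t.size)
    xs = [np.array([s[i], s[i + 1]]) for i in range(s.size - 2)]
    targets = [np.array([s[i + 2]]) for i in range(s.size - 2)]
    for _ in range(200):
        gradient_descent_step(xs, targets)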

This work was done by Alexander G. Parlos, Omar T. Rais, and Sunil K. Menon of Texas A&M University and Amir F. Atiya of Caltech for Johnson Space Center. For further information, contact:

Dr. Alexander G. Parlos
Dept. of Nuclear Engineering
Texas A&M University
College Station, TX 77843
Telephone No.: (409) 845-7092
Fax No.: (409) 845-6443

Refer to MSC-22893.