A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks.
The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often-unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white.
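As a point of reference, the predict/update cycle of the standard Kalman filter under these linear-Gaussian assumptions can be sketched as follows. This is a minimal scalar illustration; the function name and parameter choices are illustrative, not drawn from the article.

```python
def kalman_step(x, P, z, a, h, q, r):
    """One predict/update cycle of the standard Kalman filter for a
    scalar linear system x' = a*x + w, z = h*x + v, where the noises
    w ~ N(0, q) and v ~ N(0, r) are zero-mean, white, and Gaussian --
    exactly the assumptions the article notes are often unrealistic."""
    # Predict: propagate the estimate and its variance through the linear model.
    x_pred = a * x
    P_pred = a * P * a + q
    # Update: correct the prediction using the measurement residual.
    S = h * P_pred * h + r        # innovation variance
    K = P_pred * h / S            # Kalman gain
    x_new = x_pred + K * (z - h * x_pred)
    P_new = (1.0 - K * h) * P_pred
    return x_new, P_new
```

For example, starting from an estimate of 0 with unit variance and observing z = 1 through an identity measurement, one step moves the estimate about halfway toward the measurement and roughly halves the variance.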
In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; indeed, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering.
The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the fidelity of the process model is limited mainly by the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. Because they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
The figure schematically depicts an example of a process system connected to a neural-network state filter that is a hybrid of nonadaptive and adaptive parts. The nonadaptive part of the system is an available system model (predictor), which could be a conventional mathematical model or a neural network. This hybrid implementation is suitable if the inaccuracy in the system model is of a deterministic nature. The output of the system model is corrected by use of an error model implemented by a neural network. The inputs to the error model at the (t + 1)st sampling time are the past inputs u(t), the current output measurements y(t + 1), and the past outputs of the error model, ye(t|t). The output of the error model is the output-correction term ye(t + 1|t + 1), and the corrected system-model output yc(t + 1|t + 1) is the sum of the correction term and the predictor output. Connection weights and biases in the error-model and state-filter neural networks are updated in response to the output residual e(t + 1) = y(t + 1) − yc(t + 1|t + 1); the updates are generated by an on-line adaptation scheme implemented by algorithms that seek to minimize quadratic error measures by the gradient-descent method.
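The adaptation scheme described above can be sketched in code: an available predictor, a neural-network error model, and an on-line gradient-descent update driven by the output residual. This is a minimal illustration only; the network architecture, learning rate, and the toy predictor and plant below are hypothetical choices, not the authors' implementation.

```python
import math
import random

class ErrorModel:
    """One-hidden-layer network serving as the error model.
    All sizes and the learning rate are illustrative choices."""

    def __init__(self, n_in, n_hidden, lr=0.05, seed=0):
        rng = random.Random(seed)
        self.W1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
                   for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.W2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        # Cache the input and hidden activations for the adaptation step.
        self.x = x
        self.h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.W1, self.b1)]
        return sum(w * h for w, h in zip(self.W2, self.h)) + self.b2

    def adapt(self, e):
        # One gradient-descent step reducing the quadratic error measure
        # e**2 / 2, where e = y - yc is the output residual.
        for j, h in enumerate(self.h):
            g = e * self.W2[j] * (1.0 - h * h)   # backprop through tanh
            self.W2[j] += self.lr * e * h
            for i, xi in enumerate(self.x):
                self.W1[j][i] += self.lr * g * xi
            self.b1[j] += self.lr * g
        self.b2 += self.lr * e

def predictor(u):
    # Available (nonadaptive) system model -- hypothetical toy dynamics.
    return 2.0 * u

def plant(u):
    # Actual process: the model error (+0.7) is deterministic, the case
    # for which the hybrid implementation is said to be suitable.
    return 2.0 * u + 0.7

em = ErrorModel(n_in=3, n_hidden=4)
ye = 0.0
rng = random.Random(1)
for t in range(2000):
    u = rng.uniform(-1.0, 1.0)       # input u(t)
    y = plant(u)                     # measurement y(t + 1)
    ye = em.forward([u, y, ye])      # correction term ye(t+1|t+1)
    yc = predictor(u) + ye           # corrected output yc(t+1|t+1)
    em.adapt(y - yc)                 # residual e(t+1) drives the update
```

After a few thousand on-line updates the corrected output yc tracks the plant output closely, even though neither the predictor nor the adaptation law was given the form of the model error.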
This work was done by Alexander G. Parlos and Sunil K. Menon of Texas A&M University and Amir F. Atiya of Caltech for Johnson Space Center. For further information, contact:
Dr. Alexander G. Parlos
Dept. of Nuclear Engineering
Texas A&M University
College Station, TX 77843
Telephone No.: (409) 845-7092
Fax No.: (409) 845-6443
Refer to MSC-22895.