Hardware can be simplified.
Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) well suited to such applications as data compression and feature extraction. In comparison with a prior gradient-descent-based method of sequential PCA, DOGEDYN offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software; its main advantages over the prior method, however, are that it requires less computation and lends itself to simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
For the purposes of DOGEDYN, the input data are represented as a succession of vectors measured at sampling times t. The objective function [the error measure (also called “energy” in the art) that one seeks to minimize in gradient-descent iterations] is a sum of per-component error terms, where m is the number of principal components, k is the number of sampling-time intervals (the number of measurement vectors), x_t is the measured vector at time t, and w_i is the ith principal vector (equivalently, the ith eigenvector). Each term J_i(w_i) of the sum is the error contributed by the ith principal component.
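The displayed equations do not survive in this text. Under the standard deflation formulation of sequential PCA that the surrounding definitions suggest (an assumption, not necessarily the exact DOGEDYN equations), the objective and the expansion of J_i(w_i) would read:

```latex
J(w_1,\dots,w_m) = \sum_{i=1}^{m} J_i(w_i),
\qquad
J_i(w_i) = \sum_{t=1}^{k} \bigl\| y_t - \bigl(w_i^{\mathsf{T}} y_t\bigr)\, w_i \bigr\|^2,
\qquad
y_t = x_t - \sum_{j=1}^{i-1} \bigl(w_j^{\mathsf{T}} x_t\bigr)\, w_j ,
```

where y_t is the measurement vector after the projections on the previously extracted principal vectors have been removed.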
The learning algorithm in DOGEDYN involves sequential extraction of the principal vectors by means of a gradient descent in which only the dominant element is updated at each iteration. Omitting details of the mathematical derivation for the sake of brevity, an iteration updates a single element w_ij of the weight matrix by a step scaled by ζ, the dynamic initial learning rate, which is chosen to increase the rate of convergence by compensating for the energy removed through the previous extraction of principal components. The value of the dynamic learning rate is computed from E_0, the energy at the beginning of learning, and E_{i-1}, the energy of the (i-1)st extracted principal component.
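To make the procedure concrete, the steps above can be sketched in NumPy. The exact DOGEDYN update rule and rate formula are not reproduced in this text, so the sketch substitutes an Oja-style deflation update restricted to the dominant element, and approximates the dynamic rate as base_rate * E0 / E_remaining; both choices, and all names, are assumptions made for illustration.

```python
import numpy as np

def sequential_pca_sketch(X, m, sweeps=100, base_rate=0.005, seed=0):
    """Illustrative sequential PCA in the spirit of DOGEDYN.

    Assumptions (not taken from the source text):
      - an Oja-style deflation update, restricted to the dominant
        element, stands in for the exact DOGEDYN update rule;
      - the dynamic initial learning rate is approximated as
        base_rate * E0 / E_remaining, so it grows as earlier
        components remove energy, as the text describes.
    """
    rng = np.random.default_rng(seed)
    k, n = X.shape                       # k measurement vectors of dimension n
    E0 = float(np.sum(X ** 2))           # energy at the beginning of learning
    W = np.zeros((m, n))                 # extracted principal vectors (rows)
    Y = X.copy()                         # deflated data, the y_t of the text
    for i in range(m):
        E_rem = float(np.sum(Y ** 2))    # energy left after earlier extractions
        zeta = base_rate * E0 / max(E_rem, 1e-12)   # assumed dynamic rate
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(sweeps):
            for y in Y:
                phi = float(w @ y)                   # inner product of w_i and y_t
                delta = phi * (y - phi * w)          # Oja-style gradient step
                j = int(np.argmax(np.abs(delta)))    # dominant element only
                w[j] += zeta * delta[j]
            w /= np.linalg.norm(w)                   # keep w_i a unit vector
        W[i] = w
        Y = Y - np.outer(Y @ w, w)       # subtract projection on w_i (deflation)
    return W
```

On strongly anisotropic data the first extracted vector aligns with the highest-variance axis; the energy-ratio rate then enlarges the steps taken for later, lower-energy components.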
The figure depicts a hardware architecture for implementing DOGEDYN. The sum of the projections of the data on the previously extracted principal components is subtracted from the raw input data, here denoted x_j, to obtain y_j (which is equivalent to y_t as defined above, after appropriate changes in subscripts). The Σ box calculates the inner product of the vectors y and w_i. The output of the Σ box is combined with the previously computed product of y_j and w_ij, and the result is multiplied by the dynamic learning rate before w_ij is updated.
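Read as code, the datapath in the figure processes one input vector at a time. The sketch below mirrors that verbal description; the exact way the Σ-box output and the y_j·w_ij product are combined is not fully specified in the text, so an Oja-style combination is assumed, and all names are illustrative.

```python
import numpy as np

def update_element(x, W_prev, w, zeta):
    """One pass of the figure's datapath for a single input vector x.

    W_prev holds the previously extracted principal vectors (rows).
    Everything beyond the figure's verbal description, in particular
    the exact combination of the two products, is an assumption
    (an Oja-style update is used here).
    """
    # Subtract the data projected on the previous principal components.
    y = x - W_prev.T @ (W_prev @ x)
    # The "Sigma box": inner product of y and the current weight vector.
    sigma = float(w @ y)
    # Candidate per-element updates; only the dominant one is applied.
    delta = sigma * (y - sigma * w)
    j = int(np.argmax(np.abs(delta)))
    w = w.copy()
    w[j] += zeta * delta[j]              # scale by the dynamic learning rate
    return y, w
```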
Innovative Technology Assets Management
Mail Stop 202-233
4800 Oak Grove Drive
Pasadena, CA 91109-8099
Refer to NPO-40034, volume and number of this NASA Tech Briefs issue, and the page number.
This Brief includes a Technical Support Package (TSP).
Method of Real-Time Principal-Component Analysis (reference NPO-40034) is currently available for download from the TSP library.