Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited to such applications as data compression and extraction of features from sets of data. In comparison with a prior gradient-descent-based method of sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the fact that it requires less computation and can be implemented in simpler hardware. It should therefore be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.

In DOGEDYN, learning is based on minimizing a cost function of the form

J(w_1, \dots, w_m) = \sum_{i=1}^{m} \sum_{t=1}^{k} \left\| y_t - \left(w_i^T y_t\right) w_i \right\|^2,

where m is the number of principal components, k is the number of sampling time intervals (the number of measurement vectors), x_t is the measured vector at time t, and w_i is the ith principal vector (equivalently, the ith eigenvector). The term y_t in the above equation is further expanded as

y_t = x_t - \sum_{j=1}^{i-1} \left(w_j^T x_t\right) w_j,

that is, the measured vector with its projections onto the previously extracted principal components removed.
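As a concrete illustration (a sketch, not code from the brief; the names and data here are hypothetical), the deflation that produces y_t, i.e., removing from a measured vector its projections onto the previously extracted, unit-norm principal vectors, can be written as:

```python
import numpy as np

def deflate(x, prev_ws):
    """Subtract from x its projections onto previously extracted,
    unit-norm principal vectors, leaving the residual y."""
    y = x.astype(float).copy()
    for w in prev_ws:
        y -= (w @ x) * w
    return y

# Hypothetical example: one component already extracted.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
w1 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # assumed first principal vector
y = deflate(x, [w1])                       # y is orthogonal to w1
```

When the extracted vectors are orthonormal, this is equivalent to projecting x onto the orthogonal complement of the subspace they span.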
The learning algorithm in DOGEDYN involves sequential extraction of the principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Omitting details of the mathematical derivation for the sake of brevity, an iteration includes updating of an element of the weight matrix according to

\Delta w_{ij} = \zeta \left(w_i^T y_t\right) \left[ y_{tj} - \left(w_i^T y_t\right) w_{ij} \right],

where w_{ij} is an element of the weight matrix and \zeta is the dynamic initial learning rate, chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. The value of the dynamic learning rate is given by

where E_0 is the energy at the beginning of learning and E_{i-1} is the energy of the (i-1)st extracted principal component.
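The overall extraction loop can be sketched as follows. This is an illustration, not the authors' algorithm: it uses a plain Oja-type gradient update rather than DOGEDYN's dominant-element update, and the rescaling of a base rate eta0 by the ratio of initial to remaining energy is an assumed form of the dynamic initial learning rate. All names and parameters are hypothetical.

```python
import numpy as np

def sequential_pca(X, m, eta0=0.1, iters=500):
    """Extract m principal vectors, one at a time, by gradient descent.

    Illustrative sketch only: the dominant-element selection and the
    exact formula for the dynamic initial learning rate are not
    reproduced here.  The update is a standard Oja-type gradient step,
    and eta0 is rescaled by E0/E (an assumed form) before each
    extraction so that the step size grows as the remaining energy
    shrinks.
    """
    k, n = X.shape
    Y = X - X.mean(axis=0)              # zero-mean copy of the data
    E0 = np.sum(Y ** 2)                 # energy at the beginning of learning
    rng = np.random.default_rng(0)
    W = []
    for _ in range(m):
        E = np.sum(Y ** 2)              # energy remaining after deflation
        eta = eta0 * E0 / E             # dynamic initial learning rate (assumed)
        w = rng.normal(size=n)
        w /= np.linalg.norm(w)
        for _ in range(iters):
            a = Y @ w                   # a_t = w^T y_t for every sample
            w += eta * (Y.T @ a - (a @ a) * w) / k   # Oja-type update
            w /= np.linalg.norm(w)
        W.append(w)
        Y -= np.outer(Y @ w, w)         # deflate: remove extracted component
    return np.array(W)

# Hypothetical data whose principal axes are the coordinate axes:
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.3])
W = sequential_pca(X, 2)
```

On data whose variances differ strongly along the coordinate axes, the recovered vectors align with those axes; note how the effective step size grows after each deflation, which is the stated intent of the dynamic rate.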

The Hardware Represented by This Diagram can be regarded as a unit that can be cascaded to obtain as many parallel eigenvector extractors as needed in a given application. The cascaded, identical units can be fabricated on a single integrated-circuit chip.

The figure depicts a hardware architecture for implementing DOGEDYN. The sum of the projections of the input onto the previously extracted principal components is subtracted from the raw input datum, here denoted x_j, to obtain y_j (which is equivalent to y_t as defined above, after appropriate changes in subscripts). The Σ box computes the inner product of the vectors y and w_i. The output of the Σ box is summed with the previously computed product of y_j and w_ij, and the result is multiplied by the dynamic learning rate before w_ij is updated.

This work was done by Tuan Duong and Vu Duong of Caltech for NASA’s Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP), available free on-line under the Information Sciences category. In accordance with Public Law 96-517, the contractor has elected to retain title to this invention. Inquiries concerning rights for its commercial use should be addressed to:

Innovative Technology Assets Management
Mail Stop 202-233
4800 Oak Grove Drive
Pasadena, CA 91109-8099
(818) 354-2240

Refer to NPO-40034, volume and number of this NASA Tech Briefs issue, and the page number.

This Brief includes a Technical Support Package (TSP).
The TSP, “Method of Real-Time Principal-Component Analysis” (reference NPO-40034), is currently available for download from the TSP library.



This article first appeared in the January 2005 issue of NASA Tech Briefs Magazine.
