
# Method of Real-Time Principal-Component Analysis

- Created on Saturday, 01 January 2005

### Hardware can be simplified.

Dominant element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited to such applications as data compression and extraction of features from sets of data. In comparison with a prior gradient-descent-based method of sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method is that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.

For the purposes of DOGEDYN, the input data are represented as a succession of vectors measured at sampling times *t*. The objective function [the error measure (also called “energy” in the art) that one seeks to minimize in gradient-descent iterations] is defined by

*J* = Σ_{i=1…m} *J*(*w*_{i}),

where *m* is the number of principal components, *k* is the number of sampling time intervals (the number of measurement vectors), *x*_{t} is the measured vector at time *t*, and *w*_{i} is the *i*th principal vector (equivalently, the *i*th eigenvector). The term *J*(*w*_{i}) in the above equation is further expanded by

*J*(*w*_{i}) = Σ_{t=1…k} ‖*y*_{t} − (*w*_{i}·*y*_{t})*w*_{i}‖², with *y*_{t} = *x*_{t} − Σ_{j=1…i−1} (*w*_{j}·*x*_{t})*w*_{j},

in which *y*_{t} is the measurement vector with the projections on the previously extracted principal vectors removed.
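The deflation and energy computations just described can be sketched directly in code. This is a minimal reading of the brief (the original equation images are not reproduced in this text), assuming the standard deflation form of sequential PCA; the function names `deflate` and `energy` are mine, not the brief's:

```python
import numpy as np

def deflate(x, prev_ws):
    """Residual y_t: the measurement x_t minus its projections on the
    previously extracted principal vectors (assumed unit-length)."""
    y = x.copy()
    for w in prev_ws:
        y -= (w @ x) * w
    return y

def energy(X, w_i, prev_ws):
    """J(w_i): total squared error, over the k measurement vectors (rows
    of X), of reconstructing each residual y_t from its projection on w_i."""
    total = 0.0
    for x in X:
        y = deflate(x, prev_ws)
        r = y - (w_i @ y) * w_i   # reconstruction error along w_i
        total += r @ r
    return total
```

For unit-length *w*_{i}, minimizing this energy is equivalent to maximizing the variance captured by *w*_{i}, so each stage's minimizer is the next eigenvector of the data covariance.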

The learning algorithm in DOGEDYN involves sequential extraction of the principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Omitting details of the mathematical derivation for the sake of brevity, an iteration includes updating of an element *w*_{ij} of a weight matrix by a gradient-descent step scaled by ζ, the dynamic initial learning rate, chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. The value of the dynamic learning rate is computed from *E*_{0}, the energy at the beginning of learning, and *E*_{i−1}, the energy of the (*i*−1)st extracted principal component.
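A runnable sketch of this kind of sequential extraction follows. The brief omits the exact update rule and the formula for ζ, so this stands in an Oja-type gradient estimate and a simple energy-based rescaling of the learning rate; `extract_components`, `lr0`, and `epochs` are my names and defaults, not the brief's:

```python
import numpy as np

def extract_components(X, m, lr0=0.02, epochs=60):
    """Extract m principal vectors one at a time; per iteration, only the
    dominant (largest-magnitude) element of the gradient is updated."""
    rng = np.random.default_rng(0)
    n = X.shape[1]
    W = []
    E0 = np.sum(X * X)              # "energy" at the beginning of learning
    Y = X.copy()
    for i in range(m):
        # Stand-in for the dynamic initial learning rate zeta: rescale the
        # rate by the energy already removed by earlier extractions.
        lr = lr0 * E0 / max(np.sum(Y * Y), 1e-12)
        w = rng.normal(size=n)
        w /= np.linalg.norm(w)
        for _ in range(epochs):
            for y in Y:
                s = w @ y                 # projection of the sample on w
                g = s * (y - s * w)       # Oja-rule gradient estimate
                j = np.argmax(np.abs(g))  # dominant element only
                w[j] += lr * g[j]
                w /= np.linalg.norm(w)    # keep w unit-length
        W.append(w)
        Y = Y - (Y @ w)[:, None] * w[None, :]  # deflate for the next stage
    return np.array(W)
```

Updating a single matrix element per iteration is what makes the per-step arithmetic cheap enough for the simple hardware the brief describes.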

The figure depicts a hardware architecture for implementing DOGEDYN. The raw input data, here denoted *x*_{j}, are subtracted from the sum of the data previously projected on the previous principal components to obtain *y*_{j} (which is equivalent to *y*_{t} as defined above, after appropriate changes in subscripts). The **Σ** box calculates the inner product of vectors *y* and *w*_{i}. The output of the **Σ** box is summed with the previously computed product of *y*_{j} and *w*_{ij}, and the result is multiplied by the dynamic learning rate before updating of *w*_{ij}.
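One cycle of this datapath, as I read the figure description, can be simulated as follows. The exact combine step that feeds the multiplier is not spelled out in the brief, so an Oja-type term is used as a stand-in, and `update_cycle` is a hypothetical name:

```python
import numpy as np

def update_cycle(x, w_i, prev_ws, j, zeta):
    """One hardware cycle: deflate x, form the Sigma-box inner product,
    then update the single element w_i[j]."""
    y = x - sum((w @ x) * w for w in prev_ws)  # subtract previous projections
    s = w_i @ y                                # Sigma box: inner product
    delta = zeta * s * (y[j] - s * w_i[j])     # combine (stand-in), then scale
    w_new = w_i.copy()
    w_new[j] += delta
    return w_new, y, s
```

Because the cycle touches only one element of *w*_{i}, the datapath needs just one multiplier-accumulator beyond the inner-product unit, which is what makes a compact VLSI realization plausible.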

*This work was done by Tuan Duong and Vu Duong of Caltech for NASA’s Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.techbriefs.com/tsp under the Information Sciences category. In accordance with Public Law 96-517, the contractor has elected to retain title to this invention. Inquiries concerning rights for its commercial use should be addressed to:
Innovative Technology Assets Management
JPL
Mail Stop 202-233
4800 Oak Grove Drive
Pasadena, CA 91109-8099
(818) 354-2240
Refer to NPO-40034, volume and number of this NASA Tech Briefs issue, and the page number.*

### This Brief includes a Technical Support Package (TSP).

**Method of Real-Time Principal-Component Analysis** (reference NPO-40034) is currently available for download from the TSP library.
