A method for reducing the number of bits of quantization of synaptic weights during training of an artificial neural network involves the use of the cascade back-propagation learning algorithm. The development of neural networks of adequate synaptic-weight resolution in very-large-scale integrated (VLSI) circuitry poses considerable problems of overall size, power consumption, complexity, and connection density. Reduction of the required number of bits from the present typical value of 12 to a value as low as 5 could thus facilitate and accelerate development.

Figure: Learning trajectories of a neural network, plotted symbolically in a plane whose two perpendicular axes represent the many synaptic-connection weights. During learning by the present method, the size of the weight-update steps is reduced when the trajectory reaches the contour at error level B, and is reduced again upon reaching error level C. Learning is stopped upon reaching the contour at error level D.

In this algorithm, neurons are added sequentially to a network, and gradient descent is used to permanently fix both the input and output synaptic weights connected to each added neuron before proceeding further. Each added neuron has synaptic connections to the inputs and to the output of every preceding neuron; thus, each added neuron implements a hidden neural layer. The addition of each successive neuron provides an opportunity to further reduce the mean squared error. Because the average number of connections to a neuron is small, learning is quite fast.
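The growth procedure can be illustrated with a minimal Python sketch. It is not the JPL implementation; the network structure, learning rate, activation functions, and freezing schedule shown here are assumptions chosen only to make the idea concrete: hidden units are added one at a time, each new unit receives the network inputs plus the outputs of all previously frozen units, and only the new unit's weights (together with the output weights) are adjusted by gradient descent on the squared error before the unit is frozen.

```python
# Minimal sketch (illustration only; not the JPL code) of cascade growth:
# each new hidden unit sees the inputs plus all earlier, frozen unit outputs,
# and only the new unit's weights and the output weights are trained.
import numpy as np

rng = np.random.default_rng(0)

def train_cascade(X, y, n_hidden=8, epochs=2000, lr=0.5):
    """Grow a cascade network for a scalar target y in {0, 1}."""
    n = X.shape[0]
    frozen = []                       # (weights, bias) of frozen hidden units
    H = np.empty((n, 0))              # outputs of frozen hidden units

    for _ in range(n_hidden):
        F = np.hstack([X, H])         # inputs + outputs of earlier units
        w = rng.normal(0.0, 0.1, F.shape[1])      # new unit's input weights
        b = 0.0
        v = rng.normal(0.0, 0.1, F.shape[1] + 1)  # output weights over [F, h]
        c = 0.0

        for _ in range(epochs):
            h = np.tanh(F @ w + b)                  # new hidden unit
            Z = np.hstack([F, h[:, None]])
            p = 1.0 / (1.0 + np.exp(-(Z @ v + c)))  # logistic output
            delta = (p - y) * p * (1.0 - p)         # d(squared error)/d(logit)
            # gradient step on the output weights
            v -= lr * (Z.T @ delta) / n
            c -= lr * delta.mean()
            # gradient step on the new unit's input weights (through tanh)
            dh = delta * v[-1] * (1.0 - h ** 2)
            w -= lr * (F.T @ dh) / n
            b -= lr * dh.mean()

        frozen.append((w, b))         # permanently freeze the new unit
        H = np.hstack([H, np.tanh(F @ w + b)[:, None]])

    return frozen, v, c
```

For the 6-bit parity problem mentioned below, X would be the 64 possible binary input vectors and y their parities; each added unit then reduces the remaining mean squared error.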

To adapt the cascade back-propagation algorithm to neural-network circuitry with limited dynamic range (equivalently, coarse weight resolution) in the synapses, one reduces the maximum synaptic conductances associated with neurons added later. This effectively reduces the sizes of synaptic-weight quantization steps, so that in the later stages, the desired synaptic-weight resolution is ultimately achieved and the learning objective approached as closely as required, without having to increase the number of bits (see figure).
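The step-size argument can be seen in a short numerical sketch. The quantizer, the full-scale conductance, and the per-neuron shrink factor below are assumed values chosen for illustration, not parameters of the JPL hardware; only the principle comes from the text: the bit count stays fixed while the full scale assigned to later-added neurons shrinks, so the weight-quantization step shrinks with it.

```python
# Illustrative sketch: with a fixed number of bits, reducing the maximum
# synaptic conductance of later-added neurons reduces the quantization step.
# g_max0 and the shrink factor are assumptions, not hardware values.
import numpy as np

def quantize(w, g_max, bits=5):
    """Clip w to [-g_max, g_max] and round to the nearest representable level."""
    levels = 2 ** bits - 1                  # odd number of symmetric levels
    step = 2.0 * g_max / levels
    return np.clip(np.round(w / step), -(levels // 2), levels // 2) * step

bits = 5
g_max0 = 1.0      # assumed full-scale conductance for the first neuron
shrink = 0.5      # assumed reduction factor for each later-added neuron

for k in range(4):
    g_max = g_max0 * shrink ** k
    step = 2.0 * g_max / (2 ** bits - 1)
    print(f"neuron {k}: g_max = {g_max:.3f}, quantization step = {step:.5f}")
```

With 5 bits, the first neuron's weights move in coarse steps, while neurons added near the end of learning make the fine adjustments that would otherwise require additional bits.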

Both simulations and tests with analog complementary metal-oxide/semiconductor (CMOS) VLSI hardware have shown that, by use of this method, neural networks can learn such difficult problems as 6-bit parity with synaptic-weight quantization as coarse as 5 bits, in contrast with the 8 to 16 bits required by the older error-back-propagation and cascade-correlation neural-network-learning algorithms.

This work was done by Tuan A. Duong of Caltech for NASA's Jet Propulsion Laboratory. NPO-19565


This Brief includes a Technical Support Package (TSP), "Training neural networks with fewer quantization bits" (reference NPO-19565), which is available for download from the TSP library.





This article first appeared in the July 1998 issue of NASA Tech Briefs Magazine.
