Improved autoassociative neural networks, denoted nexi, have been proposed for use in controlling autonomous robots, including mobile exploratory robots of the biomorphic type. In comparison with conventional autoassociative neural networks, nexi would be more complex but more capable in that they could be trained to do more complex tasks. A nexus would use bit weights and simple arithmetic in a manner that would enable training and operation without a central processing unit, programs, weight registers, or large amounts of memory. Only a relatively small amount of memory (to hold the bit weights) and a simple logic application-specific integrated circuit would be needed.
A description of autoassociative neural networks is prerequisite to a meaningful description of a nexus. An autoassociative network is a set of neurons that are completely connected in the sense that each neuron receives input from, and sends output to, all the other neurons. (In some instantiations, a neuron could also send output back to its own input terminal.) The state of a neuron is completely determined by the inner product of its inputs with the weights associated with its input channels. Setting the weights sets the behavior of the network.
The neurons of an autoassociative network are usually regarded as comprising a row or vector. Time is quantized for most autoassociative networks in the sense that it proceeds in discrete steps. At each time step, the row of neurons forms a pattern: some neurons are firing, some are not. Hence, the current state of an autoassociative network can be described by a single binary vector. As time goes by, the network changes the vector; in effect, it moves the state vector across a high-dimensional landscape of possible patterns.
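The update rule described above can be sketched in a few lines. The following is a minimal illustration of one synchronous update step in a conventional autoassociative network, using the common ±1 state convention and Hebbian storage of a single pattern; these conventions are standard assumptions, not details from the article.

```python
import numpy as np

def update(state, weights):
    # Each neuron's next state is the sign of the inner product of the
    # current state vector with that neuron's row of weights.
    return np.where(weights @ state >= 0, 1, -1)

# Hebbian storage of one pattern, with self-connections zeroed.
pattern = np.array([1, -1, 1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# A corrupted copy of the stored pattern is pulled back toward it.
noisy = pattern.copy()
noisy[0] = -noisy[0]
recalled = update(noisy, W)
```

One update step here restores the flipped bit, illustrating the associative-recall behavior that fixed weights provide.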
The disadvantage of conventional autoassociative neural networks is that they are inefficient. The effect of training is to adjust the weights to values that are best for most patterns. At the end of training, all weights are fixed to reflect the majority of patterns. All the patterns that represent minorities (from the perspective of a single weight) are ignored. The performance of the network would be improved if the fixed weights were replaced with something more dynamic. This would be done in a nexus.
A nexus could be characterized as “deeper,” relative to a conventional autoassociative network, in that each weight of a conventional autoassociative network would be replaced by the output of a subnetwork. Whereas there are on the order of N^2 connections among N neurons in a conventional autoassociative network, the number of such connections in a nexus would be of order N^j (j > 2). In addition, the replacement of weights with subnetworks would introduce a capability for combining networks to form more complex networks.
A nexus would also differ from a conventional autoassociative neural network in the following ways:
- Synaptic subnetworks would be used throughout the network.
- Whereas a conventional autoassociative neural network changes all parts of a vector, a nexus would change only the effector part.
- Whereas the weights of a conventional autoassociative neural network are numbers stored in registers, the weights of a nexus would be binary and could be stored as memory bits.
- The only arithmetic operations in a nexus would be majority votes of binary inputs.
- Learning by a nexus would be governed by a simple algorithm that would use both positive and negative examples. (Conventional autoassociative neural networks are usually trained by use of negative examples only.)
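Since the only arithmetic in a nexus would be majority votes of binary inputs, a single neuron's update reduces to counting. The sketch below is an illustrative assumption about how such a vote might be gated by bit weights (an input counts only where its bit weight is 1); the article does not specify the gating scheme, and the function names are hypothetical.

```python
def majority_vote(inputs, bit_weights):
    # inputs and bit_weights are lists of 0/1 values of equal length.
    # An input participates in the vote only where its bit weight is 1.
    votes = [x for x, w in zip(inputs, bit_weights) if w == 1]
    ones = sum(votes)
    # Output 1 when strictly more than half of the counted inputs are 1.
    return 1 if 2 * ones > len(votes) else 0
```

Because the weights are single bits and the arithmetic is a count-and-compare, this operation needs only memory bits and simple logic, consistent with the article's claim that no CPU, programs, or weight registers would be required.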
As an example of a potential application, nexi could be used to control the gaits of a walking hexapod robot. More specifically, each of three nexi could learn a different gait (see figure), or a single nexus could learn all three gaits, albeit more slowly. Training could include positive feedback for forward progress and negative feedback for falling down.
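Training with both positive and negative examples could be as simple as flipping individual bit weights. The sketch below is one plausible reading of such a rule, not the article's algorithm: on a positive example (e.g., forward progress) a disagreeing bit weight is flipped toward the input, and on a negative example (e.g., falling down) an agreeing bit weight is flipped away from it, each with some probability. All names and the probability parameter are assumptions.

```python
import random

def train_bit(weight_bit, input_bit, positive, p_flip=0.5):
    # Positive example: move the bit weight toward agreement with the input.
    if positive and weight_bit != input_bit and random.random() < p_flip:
        return input_bit
    # Negative example: move the bit weight away from agreement.
    if not positive and weight_bit == input_bit and random.random() < p_flip:
        return 1 - input_bit
    return weight_bit
```

A rule of this form needs no multiplications or stored gradients, which is consistent with the article's point that a nexus could be trained without a central processing unit.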
This work was done by Charles Hand of NASA’s Jet Propulsion Laboratory.
In accordance with Public Law 96-517, the contractor has elected to retain title to this invention. Inquiries concerning rights for its commercial use should be addressed to
Intellectual Property Group
JPL Mail Stop 202-233
4800 Oak Grove Drive
Pasadena, CA 91109
(818) 354-2240
Refer to NPO-21224, volume and number of this NASA Tech Briefs issue, and the page number.
This Brief includes a Technical Support Package (TSP), “Improved Autoassociative Neural Networks” (reference NPO-21224), which is currently available for download from the TSP library.