An analog very-large-scale integrated (VLSI) circuit was designed and built to implement Hebbian synapses with an improved method of modifying and storing the synaptic weights, for use in neural-network circuits. (In Hebbian synapses, the synaptic weights are modified through Hebbian learning, which is a local unsupervised adjustment of the weight depending on the correlation of activity between pre- and post-synaptic neurons.) These circuits are intended, more specifically, for use with neural networks of the type that operate with spiking (as distinguished from steady) input and output signals.

Figure: A Hebbian synapse for spiking neurons.

The development of these circuits was prompted by a need to store and adjust on-chip synaptic weights using local Hebbian learning rules. The synaptic weights must be stored in the form of analog voltages (charges on capacitive nodes). Such storage is problematic because the charges tend to decay by leakage through reverse-biased active/well/substrate junctions. The present designs reduce the leakage currents to about one-sixth of those of conventional synaptic-weight-storage circuits, thereby making it possible to store the synaptic weights for correspondingly longer times.
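To make the payoff concrete, here is a minimal back-of-the-envelope sketch in Python: the droop rate of a storage node is dV/dt = I_leak/C, so cutting the leakage to one-sixth extends the usable retention time about sixfold. All numeric values below are assumptions for illustration; the brief gives no capacitance or leakage figures.

```python
# Illustrative only: retention time of a capacitive storage node as a
# function of leakage current. All numeric values are assumptions; the
# brief gives no capacitance or current figures.

C_STORE = 1e-12      # assumed storage capacitance: 1 pF
V_TOLERANCE = 0.01   # assumed tolerable droop before the weight is corrupted: 10 mV

def retention_time(i_leak):
    """Time to droop by V_TOLERANCE, from dV/dt = I_leak / C."""
    return C_STORE * V_TOLERANCE / i_leak

i_conventional = 6e-15               # assumed conventional junction leakage: 6 fA
i_improved = i_conventional / 6.0    # brief: about one-sixth the leakage

print(f"conventional: {retention_time(i_conventional):.2f} s")
print(f"improved:     {retention_time(i_improved):.2f} s")   # about 6x longer
```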

A circuit of the present type includes an analog-weight-storage subcircuit (depicted within the gray box in the figure) in which the transistor that passes charge onto the capacitive charge-storage node resides in a floating well. The floating well is driven by a voltage follower (VF in the figure) from the storage node, thereby shielding the storage node: the leakage current to the well is reduced, and the node holds its charge longer than an ordinary switched capacitor could. The voltage across the active/well junction is held to within the offset voltage of the follower, which typically results in a substantial decrease in leakage current relative to a conventional design in which the well is tied to the supply voltage of the well transistor.
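The mechanism can be illustrated with a toy junction-leakage model. The sketch below uses the ideal-diode expression I = Is(1 - exp(-V/VT)), which saturates for large reverse bias but falls toward zero as the junction voltage approaches zero. The saturation current, supply voltage, and follower offset are assumptions rather than values from the brief, and real junction leakage also includes generation components this model ignores, so take it as qualitative only.

```python
import math

# Toy model only: ideal-diode reverse leakage I = Is * (1 - exp(-V/VT)).
# Parameter values below are assumptions, not taken from the brief.

I_S = 1e-15   # assumed junction saturation current: 1 fA
V_T = 0.0259  # thermal voltage at ~300 K, volts

def junction_leakage(v_reverse):
    """Reverse-bias junction current for a reverse bias of v_reverse volts."""
    return I_S * (1.0 - math.exp(-v_reverse / V_T))

# Conventional: well tied to the supply rail, so the active/well junction
# sits at a large reverse bias (assume ~2 V) and leaks at nearly Is.
i_fixed_well = junction_leakage(2.0)

# Bootstrapped: the follower drives the floating well to track the storage
# node, so the junction sees only the follower's offset (assume 5 mV).
i_floating_well = junction_leakage(0.005)

print(f"fixed well:    {i_fixed_well:.3e} A")
print(f"floating well: {i_floating_well:.3e} A")
print(f"reduction:     {i_fixed_well / i_floating_well:.1f}x")
```

With these assumed numbers the reduction comes out near sixfold, broadly consistent with the brief's one-sixth figure, though that agreement depends on the assumed offset voltage.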

The charge-storage subcircuit is incorporated within a larger circuit that acts as a Hebbian synapse; that is, it takes pre- and post-synaptic spike signals as inputs, and increases the synaptic weight if the spikes occur simultaneously. The Hebbian-synapse circuit (see figure) contains two additional transconductance amplifiers (TAs): TA1 controls the learning rate by adjusting the current injected into the storage node, and TA2 converts the stored analog voltage value to a current value. The output of amplifier TA2 is gated by the pre-synaptic input spikes, and hence the final output consists of current pulses that are proportional to the stored voltage and injected into a post-synaptic neuron whenever the pre-synaptic neuron fires a spike. These output current pulses can be summed and integrated with the currents from other synapses in parallel and used to drive the spiking of the post-synaptic neuron. Hebbian learning is achieved by gating charge onto or off the storage node when the pre- and post-synaptic spikes are simultaneous or nonsimultaneous, respectively.
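A behavioral abstraction of this operation can be written as a short Python simulation: the weight stands in for the stored node voltage, the update stands in for the charge that TA1 gates onto or off the node, and the output is a pulse proportional to the weight whenever the pre-synaptic neuron fires. The learning rate, spike rates, and weight bounds are assumed values, not the circuit's.

```python
import random

# Behavioral sketch of the Hebbian-synapse circuit described above. This
# is a discrete-time abstraction, not the authors' circuit model: the
# learning rate, spike rates, and weight bounds are all assumed values.

LEARN_RATE = 0.02      # stands in for the charge TA1 gates per spike event
W_MIN, W_MAX = 0.0, 1.0

def step(weight, pre_spike, post_spike):
    """One time step: Hebbian weight update plus gated output current."""
    if pre_spike and post_spike:
        # Coincident spikes gate charge onto the storage node.
        weight = min(W_MAX, weight + LEARN_RATE)
    elif pre_spike or post_spike:
        # A lone spike gates charge off the storage node.
        weight = max(W_MIN, weight - LEARN_RATE)
    # TA2 converts the stored voltage to a current gated by the pre-synaptic
    # spike: the output is a current pulse proportional to the weight.
    i_out = weight if pre_spike else 0.0
    return weight, i_out

def run(echo_prob, steps=2000, weight=0.5, pre_rate=0.1, post_rate=0.05):
    """echo_prob = probability that a pre spike is accompanied by a post spike."""
    for _ in range(steps):
        pre = random.random() < pre_rate
        post = random.random() < (echo_prob if pre else post_rate)
        weight, _ = step(weight, pre, post)
    return weight

random.seed(0)
print(f"correlated pair:  weight -> {run(1.0):.2f}")   # drifts toward W_MAX
print(f"independent pair: weight -> {run(0.05):.2f}")  # drifts toward W_MIN
```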

This work was done by Christopher Assad and David Kewley of Caltech for NASA's Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.nasatech.com/tsp under the Electronics & Computers category.

NPO-20965




Overview

The document discusses the development of analog very-large-scale integrated (VLSI) circuits designed to implement Hebbian synapses for neural networks, specifically those that operate with spiking input and output signals. The work was conducted by Christopher Assad and David Kewley at NASA's Jet Propulsion Laboratory and aims to improve the methods of modifying and storing synaptic weights through local Hebbian learning rules.

Hebbian learning is a principle whereby the synaptic weight between two neurons is adjusted according to the correlation of their activity; if pre- and post-synaptic spikes occur simultaneously, the synaptic weight increases. The circuits described in the document store synaptic weights as analog voltages on capacitive nodes. Traditional storage methods, however, suffer from leakage currents that cause the stored charge to decay, making it difficult to maintain the weights over time.

The design presented in the document reduces leakage currents to about one-sixth of those of conventional synaptic-weight-storage circuits. The charge-storage subcircuit is embedded within a larger Hebbian-synapse circuit that contains two transconductance amplifiers (TAs): TA1 controls the learning rate by adjusting the current injected into the storage node, while TA2 converts the stored analog voltage to a current. The output of TA2 is gated by pre-synaptic spikes, so the circuit produces current pulses proportional to the stored voltage, which can then drive the spiking of the post-synaptic neuron.

A key aspect of the design is the use of a floating well for the transistor that passes charge to the capacitive storage node. This floating well is driven by a voltage follower, which helps shield the storage node and significantly reduces leakage current, enabling longer charge retention compared to standard switched capacitors.

Overall, the document highlights a significant advancement in the field of neural networks, providing a more effective means of storing and adjusting synaptic weights, thereby enhancing the performance and longevity of analog VLSI circuits used in Hebbian learning applications.