Inspired by brains, neural networks are composed of neurons (or nodes) and synapses, the connections between nodes. To train a neural network for a task, it is fed a large set of questions along with the answers to those questions. In this process, called supervised learning, the connections between nodes are weighted more heavily or more lightly so as to minimize the error between the network's output and the correct answer.
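The weight-adjustment idea can be sketched in a few lines of code. This is a minimal, illustrative example, a single "neuron" learning a toy question-and-answer set by gradient descent; the dataset, learning rate, and function names are all assumptions for demonstration, not anything from the study.

```python
# Minimal sketch of supervised learning: a single artificial neuron
# adjusts its weights to reduce the error between its answer and the
# known correct answer. All values here are illustrative.

def train(samples, epochs=200, lr=0.1):
    w, b = 0.0, 0.0  # connection weight and bias, initially untrained
    for _ in range(epochs):
        for x, target in samples:
            pred = w * x + b        # the network's current answer
            error = pred - target   # how far off it is
            w -= lr * error * x     # weight the connection more or less heavily
            b -= lr * error
    return w, b

# "Questions and answers": learn the rule y = 2x + 1 from examples.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

After enough passes over the examples, the weight and bias settle near the values that reproduce the answers, which is the "minimize the error" step described above.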
Once trained, a neural network can be tested on inputs whose answers it has never seen; for example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from the photos in its training set. Image recognition is a relatively simple problem, though, as it requires no information beyond a static image. More complex tasks, such as speech recognition, depend heavily on context and require neural networks to have knowledge of what has just occurred, or what has just been said.
When transcribing speech to text or translating languages, a word's meaning and even its pronunciation can differ depending on the syllables that came before it. This calls for a recurrent neural network, which incorporates loops within the network that give it a memory effect; training these recurrent networks, however, is especially expensive.
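The "loop" that gives a recurrent network its memory can be shown with a single recurrent unit. This sketch uses fixed, made-up weights (nothing is trained here) purely to show how an earlier input keeps influencing later outputs.

```python
import math

# Minimal sketch of a recurrent connection: the hidden state h loops
# back into the next step, so each output depends on everything the
# network has seen so far. The weights are fixed, illustrative values.

def rnn_step(x, h, w_in=0.8, w_rec=0.5):
    # The new state mixes the current input with the looped-back old state.
    return math.tanh(w_in * x + w_rec * h)

h = 0.0
for x in [1.0, 0.0, 0.0, 0.0]:  # a single "event" followed by silence
    h = rnn_step(x, h)
    print(round(h, 3))           # the event's influence fades but persists
```

Even after the input goes quiet, the state stays nonzero for several steps: the network "remembers" what has just been said.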
Memristors are a special type of resistive device that can both perform logic and store data. This contrasts with typical computer systems, where processors handle logic separately from memory modules. Memristors require less space and can be integrated more easily into existing silicon-based electronics. They also retain a fading memory of recent events: a memristor's state depends on what has happened to it in the near past.
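That fading short-term memory can be illustrated with a toy numerical model. The constants below are illustrative, not measured device parameters, and the update rule is a simplified stand-in for real memristor physics.

```python
# Toy model of a device with fading memory: each voltage pulse pushes
# its conductance g up, and g then decays back toward a rest value,
# so the device "remembers" only recent events. Constants are
# illustrative, not real device parameters.

def update(g, v, gain=0.3, decay=0.8, g_rest=0.1):
    return g_rest + decay * (g - g_rest) + gain * v

g = 0.1
history = []
for v in [1.0, 0.0, 0.0, 0.0, 1.0]:  # pulse, quiet steps, another pulse
    g = update(g, v)
    history.append(round(g, 3))
print(history)  # the first pulse's effect fades over the quiet steps
```

The conductance jumps with each pulse and relaxes in between, so recent events dominate the device's state while older ones fade away.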
Reservoir computing systems built with memristors can skip most of the expensive training process while still giving the network the ability to remember. That is because the most critical component of the system, the reservoir, does not require training.
When data is fed into the reservoir, the reservoir identifies important time-related features of the data and hands them off, in a simpler format, to a second network. Only this second network needs training, in the manner of simpler neural networks: the weights applied to the features the reservoir passes on are adjusted until the output reaches an acceptable level of error. Such a reservoir computing system could predict words before they are said during conversation, and help predict future outcomes based on the present.
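The division of labor described above, a fixed untrained reservoir plus a small trained readout, can be sketched in software. Everything here is an assumption for illustration: a small randomly wired reservoir, a toy sine-prediction task, and simple gradient descent for the readout. It is not the memristor hardware from the study.

```python
import math, random

# Hedged sketch of the reservoir computing idea: a fixed, untrained
# recurrent "reservoir" turns an input stream into rich internal
# states, and only a simple linear readout is trained on top of it.

random.seed(0)
N = 30  # reservoir nodes (illustrative size)
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W_res = [[random.uniform(-0.2, 0.2) for _ in range(N)] for _ in range(N)]

def reservoir_states(u):
    """Run the input sequence through the fixed reservoir; no training."""
    x = [0.0] * N
    states = []
    for ut in u:
        x = [math.tanh(W_in[i] * ut + sum(W_res[i][j] * x[j] for j in range(N)))
             for i in range(N)]
        states.append(x)
    return states

# Toy time-dependent task: predict the next sample of a sine wave.
u = [math.sin(0.3 * t) for t in range(200)]
X = reservoir_states(u[:-1])  # time-related features from the reservoir
y = u[1:]                     # the "answers": the next value each time

# Only the readout weights are trained (simple gradient descent).
w_out = [0.0] * N
for _ in range(300):
    for xt, yt in zip(X, y):
        err = sum(w * s for w, s in zip(w_out, xt)) - yt
        for i in range(N):
            w_out[i] -= 0.05 * err * xt[i]

pred = sum(w * s for w, s in zip(w_out, X[-1]))
print(round(pred, 3), round(y[-1], 3))  # prediction vs. true next value
```

Note that only `w_out` is ever adjusted; the reservoir's weights stay frozen at their random initial values, which is what makes the expensive part of training unnecessary.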
The reservoir computing concept was demonstrated with a test of handwriting recognition, a common benchmark for neural networks. Numerals were broken up into rows of pixels and fed into the computer as voltages, like Morse code: zero volts for a dark pixel and a little more than one volt for a white pixel. Using only 88 memristors as nodes to identify handwritten numerals, where a conventional network would require thousands of nodes for the task, the reservoir achieved 91 percent accuracy.
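The pixel-to-voltage encoding is simple to sketch. The 1.1 V "white" level below is an assumption standing in for "a little more than one volt", and the sample row is made up.

```python
# Sketch of the encoding described above: each row of a numeral image
# becomes a Morse-code-like train of voltage pulses. The 1.1 V white
# level is an assumed stand-in for "a little more than one volt".

DARK_V, WHITE_V = 0.0, 1.1  # volts

def row_to_pulses(row):
    """Map a row of pixels (0 = dark, 1 = white) to a voltage train."""
    return [WHITE_V if px else DARK_V for px in row]

# One row of a crude numeral: mostly dark with a white stroke.
row = [0, 0, 1, 1, 0]
print(row_to_pulses(row))  # [0.0, 0.0, 1.1, 1.1, 0.0]
```

Feeding rows in sequence turns a static image into a time-varying signal, which is exactly the kind of input the reservoir is built to handle.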
Reservoir computing systems are especially adept at handling data that varies with time, like a stream of data or words, or a function that depends on past results. To demonstrate this, the system was tested on a complex function that depended on multiple past results, the kind of function common in engineering fields. The reservoir computing system was able to model it with minimal error.
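To make "a function that depends on multiple past results" concrete, here is a made-up recurrence of that general kind. It is an illustrative stand-in, not the function used in the study.

```python
# Illustrative example of a series whose next value depends on
# several past results plus a fresh input. This recurrence is a
# made-up stand-in, not the function from the study.

def step(y1, y2, u):
    # The next value mixes the two previous outputs with the new input.
    return 0.4 * y1 + 0.3 * y2 + 0.5 * u * (1 - y1)

ys, u_seq = [0.0, 0.0], [0.1, 0.5, 0.2, 0.8]
for u in u_seq:
    ys.append(step(ys[-1], ys[-2], u))
print([round(y, 3) for y in ys[2:]])
```

Because each output feeds into the next two steps, predicting such a series requires memory of recent outputs, which is the capability the reservoir provides for free.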
In predictive analysis, the system could be used to take in noisy signals, like static from far-off radio stations, and produce a cleaner stream of data. It could also predict and generate an output signal even if the input stopped.
For more information, contact Dan Newman at