UCLA engineers have made major improvements to their design of an optical neural network — a device inspired by how the human brain works — that can identify objects or process information at the speed of light. The development could lead to intelligent camera systems that figure out what they are seeing from the patterns of light that pass through a 3D engineered material structure. The new design takes advantage of the parallelization and scalability of optical computational systems.

For example, such systems could be incorporated into self-driving cars or robots, helping them make near-instantaneous decisions, faster and with less power than computer-based systems, which need additional time to identify an object after it has been seen.

The technology was first introduced by the UCLA group in 2018. The system uses a series of 3D-printed wafers, or layers, with uneven surfaces that transmit or reflect incoming light — similar in look and effect to frosted glass. The layers contain tens of thousands of pixel points, essentially artificial neurons, which together form an engineered volume of material that computes all-optically. Each object type produces a unique light pathway through the 3D-fabricated layers. Behind the layers sit several light detectors, each assigned during computer-based training to deduce what the input object is from where the most light lands after traveling through the layers. For example, if the system is trained to recognize handwritten digits, most of the light from an image of a “5” will land on the detector assigned to “5” after passing through the layers.
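The forward pass of such a diffractive network can be illustrated numerically. The following is a minimal toy sketch, not the authors' implementation: it uses random phase masks in place of trained layers, an uncalibrated FFT-based propagation kernel, and simple detector regions formed by splitting the output plane into horizontal bands, one per class.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32          # grid size (pixels per layer side)
LAYERS = 3      # number of diffractive layers
CLASSES = 10    # e.g., handwritten digits 0-9

# Each layer is a phase mask; a trained system would learn these values.
phase_masks = [rng.uniform(0, 2 * np.pi, (N, N)) for _ in range(LAYERS)]

def propagate(field):
    """Free-space propagation between layers, modeled here as a toy
    angular-spectrum step with an arbitrary, uncalibrated distance factor."""
    F = np.fft.fft2(field)
    fx = np.fft.fftfreq(N)
    FX, FY = np.meshgrid(fx, fx)
    kernel = np.exp(-1j * np.pi * (FX**2 + FY**2) * 50)
    return np.fft.ifft2(F * kernel)

def forward(input_image):
    """Send an input pattern through the stack of phase layers and
    return the intensity arriving at the detector plane."""
    field = input_image.astype(complex)
    for mask in phase_masks:
        field = propagate(field) * np.exp(1j * mask)
    return np.abs(propagate(field)) ** 2

def classify(intensity):
    """Sum intensity over one detector region per class; brightest wins."""
    regions = np.array_split(intensity, CLASSES, axis=0)
    scores = np.array([r.sum() for r in regions])
    return int(np.argmax(scores))

image = rng.random((N, N))
prediction = classify(forward(image))  # index of the brightest detector region
```

In the real system the phase masks are optimized in a computer via deep learning and then 3D-printed, so the trained "weights" are physical surface features rather than stored numbers.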

The UCLA researchers have significantly increased the system’s accuracy by adding a second set of detectors, so each object type is now represented by two detectors rather than one. The goal is to increase the signal difference between the pair of detectors assigned to each object type. Intuitively, this is like weighing two stones at once, one in each hand, to tell whether their weights are similar or different.
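The decision rule behind this differential scheme can be sketched in a few lines. This is a hypothetical illustration with made-up detector readings, assuming the class score is simply the positive detector's signal minus the negative detector's signal:

```python
import numpy as np

def differential_classify(pos_signals, neg_signals):
    """Class score = (positive detector) - (negative detector).
    The predicted class has the largest signed difference."""
    scores = np.asarray(pos_signals) - np.asarray(neg_signals)
    return int(np.argmax(scores)), scores

# Hypothetical optical power readings for three classes:
pos = [0.80, 0.40, 0.55]   # light on each class's positive detector
neg = [0.30, 0.45, 0.50]   # light on each class's negative detector
pred, scores = differential_classify(pos, neg)
# pred == 0: class 0 has the largest difference (0.80 - 0.30 = 0.50)
```

Because the decision depends on a difference rather than a single absolute reading, variations that affect both detectors of a pair equally tend to cancel out, which is what makes the two-detector design more robust than the original single-detector one.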

Such a system performs machine-learning tasks through light-matter interaction and optical diffraction inside a 3D-fabricated material structure, at the speed of light and without extensive power beyond the illumination light and simple detector circuitry. According to the researchers, this advance could enable task-specific smart cameras that perform computation on a scene using only photons and light-matter interaction, making them extremely fast and power-efficient.

The researchers tested their system’s accuracy using image datasets of handwritten digits, items of clothing, and a broader set of vehicles and animals known as the CIFAR-10 image dataset. They found image-recognition accuracy rates of 98.6%, 91.1%, and 51.4%, respectively.

Those results compare very favorably to earlier generations of all-electronic deep neural networks. While more recent electronic systems perform better, the researchers suggest that all-optical systems offer advantages in inference speed and low power consumption, and can be scaled up to identify many more objects in parallel.

For more information, contact Amy Akmal.

Photonics & Imaging Technology Magazine

This article first appeared in the November 2019 issue of Photonics & Imaging Technology Magazine.
