Numerous devices in everyday life use computerized cameras to identify objects — think of automated teller machines that can “read” handwritten dollar amounts when you deposit a check, or Internet search engines that can quickly match photos to similar images in their databases. But those systems rely on a multistep process: a piece of equipment first “sees” the object with a camera or optical sensor, then converts what it sees into data, and finally runs computing programs to figure out what the object is.
Researchers have created a physical artificial neural network — a device modeled on how the human brain works — that can analyze large volumes of data and identify objects at the actual speed of light. Called a diffractive deep neural network, it uses the light scattered from the object itself to identify that object in as little time as it would take a computer to simply “see” the object. The device needs no advanced computing programs to process an image of the object and decide what it is after optical sensors pick it up. And it consumes no energy to run, because it relies only on the diffraction of light.
New technologies based on the device could be used to speed up data-intensive tasks that involve sorting and identifying objects; for example, a driverless car using the technology could react instantaneously — even faster than it does using current technology — to a stop sign. The car would “read” the sign as soon as light from it arrived, rather than waiting for its camera to image the sign and its computers to work out what the object is. Technology based on the invention could also be used in microscopic imaging and medicine, for example, to sort through millions of cells for signs of disease. It could be scaled up to enable new camera designs and unique optical components that work passively in medical technologies, robotics, security, or any application where image and video data are essential.
The process of creating the artificial neural network began with a computer-simulated design. The researchers then used a 3D printer to create very thin polymer wafers, each 8 centimeters square. Each wafer has uneven surfaces that help diffract light coming from the object in different directions. The layers look opaque to the eye, but the submillimeter-wavelength terahertz frequencies of light used in the experiments can travel through them. Each layer is composed of tens of thousands of artificial neurons — in this case, tiny pixels through which the light travels.
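The role of each printed pixel can be sketched numerically: a pixel of thickness t delays the transmitted wave by a phase of 2π(n − 1)t/λ, where n is the polymer's refractive index and λ the wavelength. The snippet below is a minimal illustration of this idea; the wavelength, refractive index, and 64×64 grid are assumed values, not the published design parameters.

```python
import numpy as np

# Model one 3D-printed layer as a grid of "neurons": each pixel's
# thickness t imposes a phase delay phi = 2*pi*(n - 1)*t / wavelength
# on the transmitted terahertz wave (n = refractive index of the polymer).
# All numbers here are illustrative assumptions.

wavelength = 0.75e-3   # metres; roughly a 0.4 THz wave
n_polymer = 1.7        # hypothetical refractive index of the printed plastic

rng = np.random.default_rng(0)
# Thicknesses chosen so the phase spans up to one full wave (0 .. 2*pi)
thickness = rng.uniform(0, wavelength / (n_polymer - 1), size=(64, 64))

phase = 2 * np.pi * (n_polymer - 1) * thickness / wavelength
layer = np.exp(1j * phase)   # complex transmission coefficient per pixel

# An incoming plane wave picks up a different phase at each pixel:
field_out = np.ones((64, 64), dtype=complex) * layer
```

Because the layer only delays the wave (it does not absorb it), every pixel's transmission has magnitude one; the "uneven surface" is entirely encoded in the phase pattern.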
A series of these pixelated layers functions as an “optical network” that shapes how incoming light from the object travels through them. The network identifies an object because the light coming from the object is mostly diffracted toward a single pixel assigned to that type of object. The researchers trained the network, using a computer, to identify objects placed in front of it by learning the pattern of diffracted light each object produces as its light passes through the device. The “training” used a branch of artificial intelligence called deep learning, in which machines “learn” through repetition and over time as patterns emerge.
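How diffraction alone can act as a classifier can be sketched in simulation: the light field is modulated by each layer, propagated to the next with a standard angular-spectrum (FFT) step, and whichever of a set of detector regions is brightest after the final layer gives the predicted class. Everything below (the grid size, spacings, random untrained layers, and ten column-shaped detector regions) is an illustrative assumption, not the researchers' actual design; in the real device the layer patterns are the result of training.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z in free space
    using the FFT-based angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

N, wavelength, dx, z = 64, 0.75e-3, 0.4e-3, 30e-3   # illustrative values
rng = np.random.default_rng(1)
# Five phase layers; random here, whereas the real device's are trained.
layers = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(5)]

field = np.zeros((N, N), dtype=complex)
field[24:40, 24:40] = 1.0   # a toy "object" illuminated by the source

for layer in layers:         # modulate, then diffract to the next layer
    field = angular_spectrum(field * layer, wavelength, dx, z)

intensity = np.abs(field) ** 2
# Ten detector regions, one per class; the brightest one is the prediction.
regions = np.array_split(np.arange(N), 10)
scores = [intensity[:, cols].sum() for cols in regions]
predicted = int(np.argmax(scores))
```

Training would adjust the layers' phase values so that, for every example of a class, most of the diffracted light lands on that class's detector region; the forward pass above is the part the physical device performs at the speed of light.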
The device could accurately identify handwritten numbers and items of clothing — both of which are commonly used tests in artificial intelligence studies. To test it, the researchers placed images in front of a terahertz light source; the device “saw” them through optical diffraction.
Because its components can be created with a 3D printer, the artificial neural network can be built with larger and more numerous layers, yielding a device with hundreds of millions of artificial neurons. Those bigger devices could identify many more objects at the same time or perform more complex data analysis. And the components can be made inexpensively — the device the researchers created could be reproduced for less than $50.