Researchers in the US have developed a scalable photonic neural network chip that can classify billions of images a second directly, without first storing the image data in memory.
The team at the University of Pennsylvania developed an integrated, end-to-end photonic deep neural network (PDNN) that performs sub-nanosecond image classification by directly processing the optical waves impinging on the on-chip pixel array as they propagate through layers of neurons.
In each neuron, the linear computation is performed optically. A 5×6 array of grating couplers acts as the input pixels; the image is divided into three overlapping 3×4 pixel sub-images, which are fed through nanophotonic waveguides into nine neurons spread across three layers.
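As a rough digital analogue of this first optical stage, the sketch below splits a 5×6 input array into three overlapping 3×4 sub-images and computes a weighted sum for each, standing in for the per-neuron linear computation. The window positions and weights are assumptions for illustration, not the chip's actual layout.

```python
import numpy as np

# Digital sketch of the chip's first linear stage: a 5x6 input array is
# split into three overlapping 3x4 sub-images, and each sub-image feeds a
# weighted sum (the linear part of a neuron). On the chip this happens
# optically; here plain floats stand in for optical power.

rng = np.random.default_rng(0)
image = rng.random((5, 6))           # intensities seen by the pixel array

# Assumed positions (top-left corners) of the three overlapping windows
windows = [(0, 0), (1, 1), (2, 2)]
sub_images = [image[r:r + 3, c:c + 4] for r, c in windows]

# One weight per pixel of each sub-image (on-chip: per-pixel attenuation)
weights = rng.random((3, 3, 4))

# Linear output of each first-layer neuron: weighted sum of its sub-image
linear_outputs = [float(np.sum(w * s)) for w, s in zip(weights, sub_images)]
print(linear_outputs)
```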
A microcontroller sends the clock and data signals to the serial DACs, whose outputs are connected to their corresponding drivers to drive the on-chip PIN attenuators, ring PN junctions and micro-ring thermal phase shifters.
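The weight-loading path can be pictured as follows; the DAC resolution, bit ordering and channel mapping below are hypothetical, sketched only to show how a microcontroller might clock weight codes into serial DACs that bias the on-chip attenuators and phase shifters.

```python
# Hypothetical sketch of the weight-loading path described above: a
# microcontroller clocks weight codes into serial DACs, and each DAC
# output drives one on-chip element (PIN attenuator, ring PN junction or
# thermal phase shifter). Bit width and ordering are assumptions.

DAC_BITS = 10  # assumed resolution of the serial DACs

def to_dac_code(weight: float, bits: int = DAC_BITS) -> int:
    """Map a normalised weight in [0, 1] to an integer DAC code."""
    weight = min(max(weight, 0.0), 1.0)
    return round(weight * (2 ** bits - 1))

def serialise(code: int, bits: int = DAC_BITS):
    """Yield the code MSB-first, as a microcontroller would clock it out."""
    for i in reversed(range(bits)):
        yield (code >> i) & 1

# Example: load three trained weights into three DAC channels
trained_weights = [0.25, 0.80, 0.55]
for channel, w in enumerate(trained_weights):
    bits = list(serialise(to_dac_code(w)))
    print(f"channel {channel}: code={to_dac_code(w):4d} bits={bits}")
```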
This leads to a classification time of under 570 ps, which is comparable with a single clock cycle of state-of-the-art digital platforms.
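That sub-nanosecond figure is where the headline throughput comes from; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the throughput claim: one classification
# every 570 ps corresponds to roughly 1.75 billion images per second,
# i.e. an equivalent "clock" of about 1.75 GHz.

classification_time_s = 570e-12                 # 570 ps per image
images_per_second = 1 / classification_time_s
equivalent_clock_ghz = images_per_second / 1e9

print(f"{images_per_second:.3e} images/s")      # ~1.754e+09
print(f"~{equivalent_clock_ghz:.2f} GHz equivalent clock rate")
```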
A uniformly distributed supply light provides the same per-neuron optical output range, allowing scalability to large-scale PDNNs.
As a proof of concept, the 9.3 square millimetre chip was tested on data sets containing either two or four types of handwritten characters, achieving classification accuracies higher than 93.8% and 89.8%, respectively.
“Our chip processes information through what we call ‘computation-by-propagation,’ meaning that unlike clock-based systems, computations occur as light propagates through the chip,” said Firooz Aflatouni, Associate Professor in Electrical and Systems Engineering. “We are also skipping the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and both of these changes make our chip a significantly faster technology.”
The chip’s ability to process optical signals directly lends itself to another benefit.
“When current computer chips process electrical signals they often run them through a Graphics Processing Unit, or GPU, which takes up space and energy,” said postdoctoral fellow Farshid Ashtiani. “Our chip does not need to store the information, eliminating the need for a large memory unit.”
“And, by eliminating the memory unit that stores images, we are also increasing data privacy,” said Aflatouni. “With chips that read image data directly, there is no need for photo storage and thus, a data leak does not occur.”
“We aren’t the first to come up with technology that reads optical signals directly,” said researcher Alexander Geers, “but we are the first to create the complete system within a chip that is both compatible with existing technology and scalable to work with more complex data.”
The chip requires training to learn and classify new data sets, similar to how humans learn. When presented with a given data set, the deep network takes in the information and classifies it into previously learned categories. This training needs to strike a balance: specific enough to yield accurate image classifications, yet general enough to remain useful when presented with new data sets. The engineers can “scale up” the deep network by adding more neural layers, allowing the chip to handle more complex images at higher resolution.
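As a rough illustration of what “scaling up by adding more neural layers” means in the digital domain, the sketch below trains a tiny fully connected network of configurable depth on a toy two-class set of 5×6 images. The data, layer widths and training loop are assumptions for illustration and do not reflect how the PDNN itself is trained.

```python
import numpy as np

# Generic digital analogue of the training/scaling idea: a small fully
# connected network whose depth is a parameter, trained on toy 5x6
# "images" of two classes. Not the chip's training procedure.

rng = np.random.default_rng(1)

def make_toy_data(n=200):
    """Two classes of 5x6 images: bright top rows vs bright bottom rows."""
    X = rng.random((n, 5, 6)) * 0.3
    y = rng.integers(0, 2, n)
    X[y == 0, :2, :] += 0.7   # class 0: bright top rows
    X[y == 1, 3:, :] += 0.7   # class 1: bright bottom rows
    return X.reshape(n, 30), y

def train_mlp(X, y, hidden_layers=2, width=8, epochs=300, lr=0.1):
    """Train a tiny MLP; 'hidden_layers' is the knob for scaling up depth."""
    sizes = [X.shape[1]] + [width] * hidden_layers + [2]
    Ws = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(b) for b in sizes[1:]]
    onehot = np.eye(2)[y]
    for _ in range(epochs):
        # forward pass: ReLU hidden layers, softmax output
        acts = [X]
        for W, b in zip(Ws[:-1], bs[:-1]):
            acts.append(np.maximum(acts[-1] @ W + b, 0))
        logits = acts[-1] @ Ws[-1] + bs[-1]
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # backward pass (cross-entropy gradient)
        grad = (p - onehot) / len(X)
        for i in reversed(range(len(Ws))):
            grad_prev = grad @ Ws[i].T
            Ws[i] -= lr * acts[i].T @ grad
            bs[i] -= lr * grad.sum(axis=0)
            if i > 0:
                grad = grad_prev * (acts[i] > 0)
    preds = p.argmax(axis=1)
    return (preds == y).mean()

X, y = make_toy_data()
for depth in (1, 2, 3):   # "scaling up" by adding layers
    print(f"{depth} hidden layer(s): train accuracy {train_mlp(X, y, depth):.2f}")
```

In the digital sketch, adding layers simply means more weight matrices; on the chip, the analogous step the researchers describe is adding more layers of neurons and waveguides.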
“What’s really interesting about this technology is that it can do so much more than classify images,” said Aflatouni. “We already know how to convert many data types into the electrical domain – images, audio, speech, and many other data types. Now, we can convert different data types into the optical domain and have them processed almost instantaneously using this technology.”
“Our next steps in this research will examine the scalability of the chip as well as work on three-dimensional object classification,” he said. “Then maybe we will venture into the realm of classifying non-optical data. While image classification is one of the first areas of research for this chip, I am excited to see how it will be used, perhaps together with digital platforms, to accelerate different types of computations.”
www.upenn.edu
