LEGO-like reconfigurable AI chip stacks sensing, processing layers
The new design is stackable and reconfigurable, so existing sensors and neural network processors can be swapped out or built upon. Such a design promises the ability to upgrade devices like cellphones and smartwatches with the latest sensors and processors, which would snap onto a device’s internal chip.
Such reconfigurable chipware, say the researchers, could keep devices up to date while reducing electronic waste. The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically.
Other modular chip designs employ conventional wiring to relay signals between layers. Such intricate connections are difficult, if not impossible, to sever and rewire, so those stackable designs cannot be reconfigured.
The researchers’ design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.
“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”
The researchers say they are eager to apply the design to edge computing devices — self-sufficient sensors and other electronics that work independently from any central or distributed resources such as supercomputers or cloud-based computing.
“As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”
The design is currently configured to carry out basic image-recognition tasks. It does so via a layering of image sensors, LEDs, and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the researchers previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on a chip, without the need for external software or an Internet connection.
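As a rough sketch of how a memristor array can act as a physical neural network, the snippet below models a crossbar in which input signals are applied as row voltages and each column’s output current sums the voltage-conductance products, following Ohm’s and Kirchhoff’s laws. The array size, voltages, and conductance values are hypothetical, not drawn from the researchers’ work.

```python
import numpy as np

# Hypothetical 3x3 letter image, flattened and applied as row voltages (volts).
input_voltages = np.array([0.1, 0.9, 0.1,
                           0.1, 0.9, 0.1,
                           0.1, 0.9, 0.1])

# Hypothetical memristor conductances (siemens): one column per output class.
# Each conductance plays the role of a trained synaptic weight.
conductances = np.random.uniform(1e-6, 1e-4, size=(9, 3))

# By Kirchhoff's current law, each column current is the sum of V * G terms,
# so the crossbar performs a vector-matrix multiply in the analog domain.
output_currents = input_voltages @ conductances
print("Column readout currents (A):", output_currents)
```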
In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters — in this case, M, I, and T. While a conventional approach would be to relay a sensor’s signals to a processor via physical wires, the researchers instead fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers, without requiring a physical connection.
“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”
The optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors constitute an image sensor that receives data, while the LEDs transmit data to the next layer. As a signal (for instance, an image of a letter) reaches the image sensor, the image’s light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.
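One way to picture the optical link is as a pipeline in which each layer re-encodes its result as a pattern of LED light that the next layer’s photodetectors read, with no wires in between. The sketch below is a conceptual model only; the function names, responsivity value, and weights are assumptions made for illustration, not the device’s actual behavior.

```python
import numpy as np

def photodetect(light_pattern):
    # Convert incoming light into photocurrents (assumed linear response).
    responsivity = 0.5  # amps per watt, assumed
    return responsivity * light_pattern

def synapse_layer(currents, weights):
    # Classify photocurrents with a (hypothetically trained) synapse array.
    return currents @ weights

def emit(scores):
    # Re-encode the result as normalized LED pixel intensities for the next layer.
    peak = scores.max()
    return scores / peak if peak > 0 else scores

# One block: light in -> photodetectors -> synapse array -> LEDs out.
incoming_light = np.random.rand(16)      # e.g. a 4x4 letter image, flattened
weights = np.random.rand(16, 3)          # assumed weights for three letters
scores = synapse_layer(photodetect(incoming_light), weights)
light_to_next_layer = emit(scores)       # carried upward optically, no wiring
```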
The researchers fabricated a single chip, with a computing core measuring about four square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixelated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the larger the chance that the image is indeed the letter that the particular array is trained to recognize.)
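Since each block reports a readout current and, per the parenthetical above, the largest current corresponds to the most likely letter, the decision rule amounts to an argmax over three currents. The values below are invented for illustration.

```python
# Hypothetical readout currents (microamps) from the M, I, and T blocks.
readout_currents = {"M": 3.2, "I": 7.8, "T": 4.1}

# The letter whose array produced the largest current is the prediction.
prediction = max(readout_currents, key=readout_currents.get)
print(prediction)  # -> "I"
```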
The researchers found that the chip correctly classified clear images of each letter, but it was less able to distinguish between blurry images, for instance between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better “denoising” processor, and found the chip then accurately identified the images.
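The layer swap can be thought of as replacing one stage in a modular processing stack while the rest stays put. The sketch below uses an invented mean-filter “denoiser” and a toy classifier purely to illustrate reconfiguring the stack; neither stands in for the researchers’ actual processors.

```python
import numpy as np

def denoise(image):
    # Placeholder denoiser: a simple moving-average filter over the flattened image.
    kernel = np.ones(3) / 3
    return np.convolve(image, kernel, mode="same")

def classify(image):
    # Placeholder classifier standing in for the trained synapse arrays.
    scores = {"M": image[:5].sum(), "I": image[5:10].sum(), "T": image[10:].sum()}
    return max(scores, key=scores.get)

blurry_image = np.random.rand(15)

# Original stack, then the same stack with a denoising layer "snapped in".
stack = [classify]
stack = [denoise, classify]

signal = blurry_image
for stage in stack:
    signal = stage(signal)
print(signal)  # predicted letter from the final stage
```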
“We showed stackability, replaceability, and the ability to insert a new function into the chip,” says MIT postdoc Min-Kyu Song.
The researchers say they plan to add more sensing and processing capabilities to the chip, and they envision its applications as boundless.
“We can add layers to a cellphone’s camera so it could recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” says Choi, who along with Kim previously developed a “smart” skin for monitoring vital signs.
Another idea, he says, is for modular chips, built into electronics, that consumers can choose to build up with the latest sensor and processor “bricks.”
“We can make a general chip platform, and each layer could be sold separately like a video game,” says Jeehwan Kim. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”
For more, see “Reconfigurable heterogeneous integration using stackable chips with embedded artificial intelligence.”