Microchip supplies memBrain for Korean analog edge AI

By Peter Clarke

Microcontroller and mixed-signal chip company Microchip Technology Inc. (Chandler, Ariz.) has teamed up with Intelligent Hardware Korea (IHWK) to develop an analog compute platform for edge AI/ML inferencing.

Intelligent Hardware Korea is a venture supported by the Korea Advanced Institute of Science and Technology and Yonsei University and is planning to create an SoC processor for neurotechnology devices. Microchip has agreed to provide an evaluation system for its flash-based memBrain via its Silicon Storage Technologies subsidiary.

The memBrain technology is based on flash nonvolatile memory (NVM) optimized to perform vector-matrix multiplication (see China startup Witinmem uses analog flash for compute-in-memory). Similar technologies have had problems with thermal variation, causing at least one company to revert to digital multiplication (see CEO interview: with Kurt Busch of ‘always-on’ startup Syntiant).
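The principle behind such analog compute-in-memory arrays can be sketched as follows. This is a simplified behavioral model, not Microchip's implementation: weights are held as cell conductances, input voltages drive the rows, and each column current is the weighted sum given by Ohm's law and Kirchhoff's current law. The optional noise term illustrates how thermal variation perturbs the effective weights.

```python
import random

def analog_vmm(conductances, voltages, thermal_sigma=0.0):
    """Behavioral model of an analog vector-matrix multiply.

    Weights are stored as cell conductances G; applying input voltages V
    to the rows sums currents on each column, so column current
    I_j = sum_i G[i][j] * V[i]. Optional Gaussian noise on each
    conductance models thermal variation in the cells.
    """
    rows, cols = len(conductances), len(conductances[0])
    currents = [0.0] * cols
    for j in range(cols):
        for i in range(rows):
            g = conductances[i][j]
            if thermal_sigma:
                g += random.gauss(0.0, thermal_sigma)  # thermal drift
            currents[j] += g * voltages[i]
    return currents

# Ideal (noise-free) multiply: a 2x2 weight array times a 2-vector.
G = [[1.0, 2.0],
     [3.0, 4.0]]
V = [0.5, 0.25]
print(analog_vmm(G, V))  # → [1.25, 2.0]
```

Re-running with a nonzero `thermal_sigma` shows the outputs drifting away from the ideal result, which is the accuracy problem the article alludes to.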

On its website IHWK claims: “Our APUs [analog processing units] will not only demonstrate true analog computing that exceed expectations but will have inferencing accuracies comparable to digital inferencing. These achievements will be made possible by applying new hardware technologies to memory devices and circuit designs in combination with optimized algorithms.”


The memBrain technology evaluation kit is designed to enable IHWK to demonstrate the power efficiency of its neuromorphic computing platform for running inferencing algorithms at the edge. The end goal is to create an ultra-low-power APU for applications such as generative AI models, autonomous cars, medical diagnosis, voice processing, security/surveillance and commercial drones.

Edge inferencing may require 50 million or more synapses, and therefore as many weight values, to process. Using external memory to store those weights creates bottlenecks and latencies that quickly erode energy efficiency.
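A back-of-envelope estimate makes the cost of external weight storage concrete. The energy figures below are illustrative assumptions, not from the article, based on commonly cited estimates of roughly 640 pJ per 32-bit off-chip DRAM access versus single-digit pJ for an on-chip access:

```python
# Back-of-envelope: energy to fetch 50 million 8-bit weights per inference.
# Per-access energies are assumed, commonly cited ballpark figures
# (~640 pJ per 32-bit off-chip DRAM access vs ~5 pJ on-chip).
NUM_WEIGHTS = 50_000_000
BITS_PER_WEIGHT = 8
DRAM_PJ_PER_32BIT = 640
ONCHIP_PJ_PER_32BIT = 5

accesses = NUM_WEIGHTS * BITS_PER_WEIGHT / 32   # 32-bit fetches needed
dram_uj = accesses * DRAM_PJ_PER_32BIT / 1e6    # picojoules -> microjoules
onchip_uj = accesses * ONCHIP_PJ_PER_32BIT / 1e6

print(f"DRAM: {dram_uj:.1f} uJ vs on-chip: {onchip_uj:.1f} uJ per inference")
```

Under these assumptions the off-chip fetch energy is over two orders of magnitude larger, which is the gap compute-in-memory designs aim to close by keeping weights where the arithmetic happens.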

The memBrain flash memory not only stores synaptic weights in on-chip floating-gate cells operated in sub-threshold mode but also uses the same memory cells to perform the computations. Compared with approaches based on digital processing and SRAM/DRAM storage, it delivers 10 to 20 times lower power usage per inference decision, Microchip claims.

To develop the APU, IHWK is working with KAIST for device development and Yonsei University, Seoul, for device design assistance. The final APU is expected to optimize system-level algorithms for inferencing and operate at between 20 and 80 TeraOPS per Watt.

“Our experts on nonvolatile and emerging memory have validated that Microchip’s memBrain product based on proven NVM technology is the best option when it comes to creating in-memory computing systems,” said Sanghoon Yoon, IHWK branch manager.

Related links and articles:

News articles:

China startup Witinmem uses analog flash for compute-in-memory

CEO interview: with Kurt Busch of ‘always-on’ startup Syntiant

Ex-Google engineers’ probabilistic, AI startup raises seed funding

Blumind startup pursues analog AI at the edge

