Researchers at ARM and IBM have developed a 14nm analog compute-in-memory chip for low-power, always-on machine learning.
Always-on perception tasks in IoT applications, collectively dubbed TinyML, require very high energy efficiency. Analog compute-in-memory (CiM) using non-volatile memory (NVM) promises high energy efficiency and self-contained on-chip model storage.
However, analog CiM introduces new practical challenges, including conductance drift, read/write noise and fixed analog-to-digital converter (ADC) gain. These must be addressed to achieve models that can be deployed on analog CiM with acceptable accuracy loss.
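Conductance drift in phase-change memory is commonly modelled as a power-law decay, G(t) = G(t0)·(t/t0)^(−ν), where ν is a small drift exponent. The sketch below illustrates that standard model; the exponent value and the per-device spread are assumed for illustration and are not taken from the paper.

```python
import numpy as np

def apply_pcm_drift(g, t, t0=1.0, nu=0.05, rng=None):
    """Illustrative power-law PCM drift: G(t) = G(t0) * (t/t0)**(-nu).

    nu (drift exponent) and the optional device-to-device spread are
    assumed values for demonstration, not figures from the paper.
    """
    if rng is not None:
        # optional per-device variation of the drift exponent (assumed)
        nu = rng.normal(nu, 0.01, size=np.shape(g))
    return g * (t / t0) ** (-nu)

g0 = np.array([1.0, 0.5, 0.25])          # normalized stored conductances
g_24h = apply_pcm_drift(g0, t=24 * 3600)  # 24 hours later (t0 = 1 s)
```

Under this model every conductance shrinks by the same relative factor, which is why drift compensation schemes often apply a single global gain correction at read-out time.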
Researchers from ARM and IBM Research Zurich looked at TinyML models for the popular always-on tasks of keyword spotting (KWS) and visual wake words (VWW). They designed the model architectures specifically for analog CiM and detail a comprehensive training methodology to retain accuracy in the face of analog non-idealities and low-precision data converters at inference time.
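Training for noisy analog hardware typically means exposing the model to hardware-like disturbances during the forward pass so that the learned weights become robust to them. The sketch below shows that general idea; the noise magnitude, the fixed-gain ADC model and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def noisy_forward(x, w, weight_noise=0.02, adc_bits=8,
                  rng=np.random.default_rng(0)):
    """One noise-aware matrix-vector product (illustrative sketch).

    weight_noise (relative Gaussian read noise) and the simple
    clip-and-quantize ADC are assumed models, not the paper's.
    """
    # Gaussian read noise on the stored weights
    w_noisy = w + rng.normal(0.0, weight_noise * np.abs(w).max(),
                             size=w.shape)
    y = x @ w_noisy
    # Fixed-gain ADC: clip to full scale, then quantize to adc_bits
    scale = np.abs(y).max() or 1.0
    levels = 2 ** (adc_bits - 1) - 1
    return np.round(np.clip(y / scale, -1.0, 1.0) * levels) / levels * scale

x = np.ones((2, 3))   # toy activations
w = np.ones((3, 4))   # toy weight matrix
y = noisy_forward(x, w)
```

In noise-aware training this noisy forward pass replaces the clean matrix multiply, so the optimizer sees the same distortions the analog array will produce at inference time.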
They developed AON-CiM, a programmable, minimal-area phase-change memory (PCM) analog CiM accelerator on a 14nm process technology. It processes the network layers serially, removing the cost of the complex interconnects associated with a fully pipelined design.
Evaluating the analog networks on a calibrated simulator, as well as on real hardware, showed that accuracy degradation is limited to 0.8%/1.2% after 24 hours of PCM drift (8-bit). Running the networks on the 14nm AON-CiM accelerator demonstrated 8.55/26.55/56.67 TOPS/W for keyword spotting (KWS) and 4.34/12.64/25.2 TOPS/W for visual wake words (VWW) with 8/6/4-bit activations respectively.
“Analog compute-in-memory techniques may be a good fit for ultra-low-power TinyML perception tasks, such as keyword spotting and visual wake words in edge computing applications,” said Paul Whatmough, Head of Machine Learning Research at ARM Research in the US.
“Our paper, in collaboration with the amazing IBM Research Zurich team, gives a deep dive on the co-design of machine learning model and analog hardware, covering model design and training for noisy hardware, as well as compact efficient (and practical) hardware for layer-serial compute in memory. We even test the models on real hardware.”
https://ieeexplore.ieee.org/abstract/document/9855854
Related analog in-memory compute articles
- ST hints at analog in-memory computing chip
- Analog-in-memory AI processor startup uses memristors
- Axelera shows DIANA analog in-memory computing chip
- 48 core neuromorphic AI chip uses resistive memory
- 1400 RISC-V cores for on-chip machine learning
Other articles on eeNews Europe
- Graph AI chip challengers drive AI as a service
- Dolphin Design teams for RISC-V Headphone 3.0 chips
- Researcher steals fingerprints by hacking smart lock
- First RISC-V chip optimised for motor control
- Morse Micro raises $95m for IoT WiFi chip, teams with MegaChips
