Three-layer neural network for on-chip microcontroller training
Rohm has developed an AI-enabled microcontroller that it says is the first with on-chip training.
The ML63Q253 and Q255 devices are intended for fault and anomaly prediction and degradation forecasting using sensing data in applications such as motors in industrial and consumer equipment.
The devices are based on a 32-bit Arm Cortex-M0+ core alongside a hardware AI accelerator core with a CAN FD controller, 3-phase motor control PWM, and dual A/D converters, and have a power consumption of approximately 40mW.
Edge AI systems typically depend on network connectivity and high-performance CPUs, which can be costly and difficult to install. Endpoint AI conducts training in the cloud and performs inference on local devices, so a network connection is still required, and inference is typically performed in software, necessitating the use of GPUs or high-performance CPUs.
Instead, Rohm uses a simple 3-layer neural network for its proprietary Solist-AI model, running on the AxlCORE-ODL accelerator core to perform learning and inference independently, without the need for cloud or network connectivity.
It supports unsupervised learning, training on the motor data to create the baseline and monitor deviations, as well as supervised learning that highlights potential anomalies.
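Rohm has not published the internals of the Solist-AI algorithm, but the unsupervised mode described above can be sketched with a minimal 3-layer autoencoder in NumPy: train on normal motor sensor data to learn a baseline, then flag samples whose reconstruction error deviates from it. The data shapes, network size, and threshold here are illustrative assumptions, not the device's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "normal" motor sensor data: 4 features per sample
# (e.g. phase current, temperature, two vibration axes) -- assumed, not from Rohm
normal = rng.normal(0.0, 1.0, size=(500, 4))

# Tiny 3-layer autoencoder: 4 inputs -> 2 hidden units -> 4 outputs
W1 = rng.normal(0, 0.1, (4, 2))
W2 = rng.normal(0, 0.1, (2, 4))
lr = 0.01
for _ in range(2000):
    h = np.tanh(normal @ W1)            # hidden layer
    out = h @ W2                        # reconstruction of the input
    err = out - normal
    # plain gradient descent on mean squared reconstruction error
    W2 -= lr * h.T @ err / len(normal)
    dh = (err @ W2.T) * (1 - h**2)
    W1 -= lr * normal.T @ dh / len(normal)

def anomaly_score(x):
    """Mean squared reconstruction error relative to the learned baseline."""
    h = np.tanh(x @ W1)
    return float(np.mean((h @ W2 - x) ** 2))

baseline = anomaly_score(normal)              # typical error on normal data
faulty = rng.normal(0.0, 1.0, size=4) + 5.0   # shifted distribution: a "fault"
print(anomaly_score(faulty) > 3 * baseline)   # deviation flagged: prints True
```

The network only ever sees normal data, so anything it cannot reconstruct well scores high, which is what makes the approach work without labelled fault examples.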
This combination allows the MCUs to independently carry out both learning and inference through on-device learning, cutting out the need for cloud training which can be complex and expensive. The on-chip learning also allows for flexible adaptation to different installation environments and unit-to-unit variations, even within the same equipment model.
An AI simulation tool called Solist-AI Sim allows users to evaluate the effectiveness of learning and inference before deploying the AI microcontroller. The data generated by this tool can also be used as supervised training data for the AI MCU, supporting validation and improving inference accuracy.
The AxlCORE-ODL hardware accelerator provides processing that is 1000x faster than previous software-based processing on MCUs running at 12MHz. This enables real-time detection and numerical output of anomalies that deviate from the measured baseline.
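That "numerical output of anomalies that deviate from the measured baseline" amounts to a running deviation score. One common way to produce such a score on a small MCU, sketched below, is a z-score against incrementally learned mean and variance (Welford's algorithm); this is an assumed illustration, not necessarily what AxlCORE-ODL computes.

```python
import math

class BaselineMonitor:
    """Track running mean/variance of a sensor signal (Welford's algorithm)
    and report how many standard deviations a new sample lies from baseline."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the running mean

    def learn(self, x: float) -> None:
        # On-device "training": update baseline statistics incrementally,
        # with O(1) memory -- no sample history needs to be stored
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def score(self, x: float) -> float:
        # Inference: numerical anomaly score (z-score vs. learned baseline)
        std = math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 1.0
        return abs(x - self.mean) / std if std > 0 else 0.0

mon = BaselineMonitor()
for v in [1.0, 1.1, 0.9, 1.05, 0.95]:   # assumed normal vibration readings
    mon.learn(v)
print(mon.score(1.0))        # near the baseline: score close to 0
print(mon.score(3.0) > 5)    # far from the baseline: flagged, prints True
```

The constant-memory update is the relevant design point: the same structure lets an endpoint device keep learning its own installation's baseline indefinitely, matching the unit-to-unit adaptation described above.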

Rohm plans 16 devices with different memory sizes, package types, pin counts, and packaging specifications. Mass production of eight models in the TQFP package began sequentially in February 2025, starting with two models featuring 256KB of Code Flash memory in taped packaging, and an evaluation board is available for the devices.
www.rohm.com/lapis-tech/product/micon/solistai-software