MLCommons has published the latest benchmarks for machine learning and signal analysis on low-cost embedded microcontrollers.
The MLPerf Tiny benchmark suite is intended for the lowest power devices and smallest form factors, such as deeply embedded, intelligent sensing, and internet-of-things applications. The second round of MLPerf Tiny results showed tremendous growth, with submissions from Alibaba, Andes, the hls4ml-FINN team, Plumerai, Renesas, Silicon Labs, STMicroelectronics, and Syntiant.
Collectively, these organizations submitted 19 different systems, with three times as many results as the first round and over half of the results incorporating energy measurements. The systems were benchmarked on machine learning models for visual wake words, image classification, keyword spotting and anomaly detection.
- Plumerai, TI team on embedded AI
- World’s fastest deep learning inference software for ARM
- XMOS and Plumerai partner on binarised neural networks
- STMicroelectronics acquires Cartesiam for edge AI tool
“The MLPerf Tiny benchmark confirms that our inference engine for ARM Cortex-M is the fastest in the world,” said UK-based inference engine developer Plumerai. “These have been verified by the MLCommons organization and its members.”
Plumerai submitted three systems for benchmarking using its inference engine on the ST Nucleo-L4R5ZI (STM32L4R5ZIT6U) ARM Cortex-M4 and DISCO-F746NG Cortex-M7 boards, as well as Infineon’s CY8CPROT0 PSoC 62 (Cortex-M4) system. The benchmarks showed the lowest latency in the first three categories, at 59.4ms, 65.1ms and 19.5ms.
This compared to ST’s own algorithms on its Nucleo Cortex-M4X, M33 and M7 systems. The ST M7 Nucleo board and algorithm had the lowest latency for anomaly detection at 2.4ms.
Renesas also benchmarked its EK-RA6M4 (RA6M4) Cortex-M33 system and its RX65N system, and Syntiant benchmarked its NDP120 Cortex-M0 chip.
Other benchmarks include the Andes D25F RISC-V core with AI extensions running on several Xilinx/AMD FPGA systems. Silicon Labs also previewed its G24-DK2601B kit with the EFR32MG24 Cortex-M33 Gecko chip.
The benchmarks are at mlcommons.org/en/inference-tiny-07/