
Microcontroller AI development tool adds deeply quantized neural networks
STMicroelectronics has released what it says is the first artificial-intelligence (AI) development tool from a microcontroller vendor to support ultra-efficient deeply quantized neural networks.
Researchers from ST and the University of Salerno in Italy worked on deeply quantized neural networks (DQNNs). DQNNs use only low bit-width weights (from 1 to 8 bits) and can contain hybrid structures in which only some layers are binarized while others use a higher bit-width floating-point quantizer.
The research showed which hybrid structures offer the best results with the lowest RAM and ROM footprint.
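As an illustration of this kind of hybrid structure, the sketch below builds a small Keras model with the open-source Larq library, mixing a higher-precision first layer with binarized (1-bit) layers. The layer sizes, input shape, and quantizer choices are illustrative assumptions, not ST's reference architecture.

```python
import tensorflow as tf
import larq as lq

# Quantizer settings shared by the fully binarized layers (1-bit weights and activations)
binary_kwargs = dict(
    input_quantizer="ste_sign",
    kernel_quantizer="ste_sign",
    kernel_constraint="weight_clip",
)

# Hybrid DQNN: the first layer keeps higher-precision weights, later layers are binarized
model = tf.keras.Sequential([
    lq.layers.QuantConv2D(32, (3, 3), use_bias=False, input_shape=(28, 28, 1)),  # not binarized
    tf.keras.layers.BatchNormalization(scale=False),
    lq.layers.QuantConv2D(64, (3, 3), use_bias=False, **binary_kwargs),          # 1-bit layer
    tf.keras.layers.BatchNormalization(scale=False),
    tf.keras.layers.Flatten(),
    lq.layers.QuantDense(10, use_bias=False, **binary_kwargs),                   # 1-bit layer
    tf.keras.layers.Activation("softmax"),
])
```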
The latest version of the STM32Cube.AI tool, 7.2.0, is a direct result of this research. It converts pretrained neural networks into optimized C code for STM32 microcontrollers. It is an essential tool for developing cutting-edge AI implementations that make the most of the constrained memory sizes and computing power of embedded products.
Moving AI to the edge, away from the cloud, delivers substantial advantages to the application. These include privacy by design, deterministic and real-time response, greater reliability, and lower power consumption. It also helps optimize cloud usage.
The tool adds support for deeply quantized model formats from frameworks such as QKeras and Larq, allowing developers to further reduce network size, memory footprint, and latency. This opens up applications such as self-powered IoT endpoints that deliver advanced functionality and performance with longer battery runtime.
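As a sketch of what such an input might look like, the following QKeras model quantizes weights and activations to 4 and 8 bits before being saved for import into the tool. The layer sizes, bit widths, and file name are assumptions for illustration.

```python
import tensorflow as tf
from qkeras import QDense, QActivation, quantized_bits, quantized_relu

# Small dense network with 4-bit hidden weights and 8-bit output weights
model = tf.keras.Sequential([
    QDense(32,
           kernel_quantizer=quantized_bits(4, 0, 1),
           bias_quantizer=quantized_bits(4, 0, 1),
           input_shape=(64,)),
    QActivation(quantized_relu(4)),          # 4-bit activations
    QDense(10,
           kernel_quantizer=quantized_bits(8, 0, 1),
           bias_quantizer=quantized_bits(8, 0, 1)),
    tf.keras.layers.Activation("softmax"),
])

model.save("dqnn_model.h5")  # hypothetical file name for import into STM32Cube.AI
```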
STM32Cube.AI version 7.2.0 also adds support for TensorFlow 2.9 models, kernel performance improvements, new scikit-learn machine learning algorithms, and new Open Neural Network eXchange (ONNX) operators.
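For the classical machine-learning side, a typical route is to train a scikit-learn model and export it to ONNX before bringing it into the tool. The sketch below uses the skl2onnx converter on an iris classifier; the model choice, input name, and file name are illustrative assumptions rather than a prescribed workflow.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Train a small classifier on the iris dataset (illustrative example)
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Convert to ONNX; the input name and shape are assumptions for this sketch
onnx_model = convert_sklearn(
    clf, initial_types=[("input", FloatTensorType([None, X.shape[1]]))]
)

with open("rf_classifier.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```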
STM32Cube.AI is delivered as part of X-CUBE-AI, a software package containing the libraries that convert pre-trained neural networks into optimized code for STM32 microcontrollers.
