
Silicon Labs details hardware ML on wireless chip

Technology News
By Nick Flaherty



Silicon Labs has benchmarked the performance of the hardware machine learning accelerator in its latest wireless chips.

The MG24 and BG24 include a purpose-built matrix vector execution unit optimised for convolutional neural networks. The accelerator has native support for Google's TensorFlow Lite for Microcontrollers (TFLM) framework, offloading matrix calculations from the Arm Cortex-M33 microcontroller core in the chips.
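As a rough sketch only, and not Silicon Labs code, the work being offloaded is dominated by quantised matrix-vector arithmetic of the kind below; on the MG24 and BG24 this class of inner loop runs on the accelerator rather than on the Cortex-M33:

// Illustrative only: the int8 matrix-vector product at the heart of a
// quantised CNN layer. Function and parameter names are made up here.
#include <cstdint>

void matvec_int8(const int8_t* weights, const int8_t* input,
                 int32_t* output, int rows, int cols) {
  for (int r = 0; r < rows; ++r) {
    int32_t acc = 0;  // 32-bit accumulator to avoid int8 overflow
    for (int c = 0; c < cols; ++c) {
      acc += static_cast<int32_t>(weights[r * cols + c]) * input[c];
    }
    output[r] = acc;  // requantisation back to int8 omitted for brevity
  }
}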

This gives a low energy benchmark of 1721 µJ for image processing with a latency of 186 ms, and 45 µJ with a latency of 5 ms for anomaly detection, in the MLPerf Tiny benchmarks (see the link below for the other results). In both cases that works out at an average draw of roughly 9 mW while the inference runs.

The low power comes from the efficiency of the matrix vector accelerator subsystem and from allowing the rest of the chip to shut down while the ML calculations are made.

Silicon Labs now supports TFLM on all its Series 1 and Series 2 SoCs via the CMSIS-NN APIs, and provides a set of APIs for the accelerated kernels on the MG24 platform. It is also working on its own frameworks.
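A minimal TFLM application has the same shape whichever kernels are linked in: the reference, CMSIS-NN or accelerated implementations are selected when the library is built, not in application code. The sketch below is hedged accordingly; the model array g_model, the arena size and the op list are placeholders for a real project, and the interpreter constructor varies slightly between TFLM versions:

// Hedged sketch of a TFLM inference call; g_model and kArenaSize are assumed.
#include <cstdint>
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model[];  // flatbuffer built offline (assumed)
constexpr int kArenaSize = 32 * 1024;  // tensor working memory (assumed)
static uint8_t tensor_arena[kArenaSize];

int ClassifyInt8(const int8_t* samples, int count) {
  const tflite::Model* model = tflite::GetModel(g_model);

  // Register only the ops the network uses; a small CNN typically needs these.
  // Whether the linked Conv2D kernel is the reference, CMSIS-NN or an
  // accelerated variant is a build-time decision.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < count; ++i) input->data.int8[i] = samples[i];

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  // Return the index of the highest-scoring output class.
  TfLiteTensor* output = interpreter.output(0);
  int classes = output->dims->data[output->dims->size - 1];
  int best = 0;
  for (int i = 1; i < classes; ++i)
    if (output->data.int8[i] > output->data.int8[best]) best = i;
  return best;
}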

“There are customers who want to develop their own ML solutions, and we see TFLM as the most widespread framework for these Cortex-M4+ systems,” said Tamas Daranyi, product manager for IoT & AI/ML.

“We are actively working on extending our ecosystem and bringing in third parties and partners who enable ready-made libraries for complete solutions. We also provide a set of open source Python code to help people who want to develop their own machine learning models. We are new to this area, so other solutions are coming,” he said.

Several companies have been using samples of the MG24 and BG24 since January. The chips support multiple wireless protocols, including Matter, Zigbee, Wi-Fi and Bluetooth, via a separate radio sub-system.

Edge Impulse is using the chips for an embedded ML platform for companies building AI-aware products, with automated data labelling, pre-built digital signal processing and ML blocks. This enables live classification testing and digital twins that are less complex, more contextual and easier to develop.

“Integrating Edge Impulse with the built-in machine learning accelerator on the BG24 and MG24 enables up to 4x faster processing of machine learning algorithms with up to 6x lower power consumption while offloading the main CPU for other applications – enabling smarter and faster edge devices with long battery life and new potential workloads,” said Zach Shelby, CEO and co-founder at Edge Impulse. “By minimizing latency and traffic over the internet for time-sensitive applications, we are strengthening privacy and security, taking full advantage of MG24 and BG24 right at the edge.”

SensiML has ported its AI tools to use the built-in AI/ML accelerator in the MG24 and BG24 for acoustic event detection, motion analysis, gesture and keyword recognition, anomaly detection, predictive maintenance and other time series sensor signal processing. SensiML’s software tool automates the upfront development complexity and optimizes the resulting firmware to deliver accurate results with the smallest memory and power footprint possible. 

www.silabs.com/
