The growth, says the firm, will be propelled by the increasing focus on low latency, advanced automation, and the availability of low-cost, ultra-power-efficient artificial intelligence (AI) chipsets – a category also known as “very edge” AI or embedded AI. These chipsets perform AI inference almost entirely on board, while continuing to rely on external resources – such as gateways, on-premise servers, or the cloud – for training.

As enterprises start to look for AI solutions in areas such as voice activation, image or video screening, people tracking, and ambient tracking, says the firm, end users struggle with the constraints of battery-powered sensors and embedded modules that run on the limited computational resources of general-purpose microcontrollers. Edge sensors and devices often need to handle large amounts of data, but because they are low-powered they cannot sustain high computing performance and high data throughput, which leads to latency issues.

“Since AI is deployed to make immediate critical decisions such as quality inspection, surveillance, and alarm management, any latency within the system may result in machine stoppage or slowdown causing heavy damages or loss in productivity,” says Lian Jye Su, Principal Analyst at ABI Research. “Moving AI to the edge mitigates potential vulnerability and risks such as unreliable connectivity and delayed responses.”

Featuring quantized AI models, TinyML chipsets enable smart sensors to perform data analytics on hardware and software dedicated to low-power systems, typically in the milliwatt range, using algorithms, networks, and models of 100 kB and below. ARM and CEVA have both launched chipset IP solutions that support low-power AI inference, together with supporting software libraries, toolchains, and models.
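To make concrete what “quantized” means in this context, the sketch below (illustrative only, not taken from the report; the layer size and values are arbitrary) maps float32 weights to 8-bit integers with a scale and zero point – the basic step that shrinks a model roughly fourfold so it can fit within a microcontroller-class memory budget.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine int8 quantization: w ~= scale * (q - zero_point)."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0
    zero_point = int(round(-w_min / scale)) - 128
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate float32 weights."""
    return scale * (q.astype(np.float32) - zero_point)

# A toy layer of 25,000 float32 weights (~100 kB) shrinks to ~25 kB as int8.
w = np.random.randn(25_000).astype(np.float32)
q, scale, zp = quantize_int8(w)
print(f"float32: {w.nbytes / 1024:.0f} kB -> int8: {q.nbytes / 1024:.0f} kB")
print(f"max reconstruction error: {np.abs(dequantize(q, scale, zp) - w).max():.4f}")
```

Production toolchains apply the same idea per layer or per channel and pair it with integer-only kernels, so that inference can run entirely in integer arithmetic on milliwatt-class hardware.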

Low-power AI chipset vendors including GreenWaves Technologies, Lattice Semiconductor, Rockchip, Syntiant, and XMOS launched embedded AI chipsets in 2019. Realizing the potential of TinyML in machine vision, says the firm, CMOS image sensor vendors such as Sony and Himax are also integrating TinyML chipsets into their CMOS sensors.

“This means the market will soon start to see multiple AI chipsets in a single device at sensor and device level,” says Su.

More importantly, says the firm, it is not just hardware development that is accelerating the democratization of TinyML. Open-source software development from Google, through TensorFlow Lite for Microcontrollers, and proprietary solutions from the likes of SensiML offer developer-friendly tools and libraries, allowing more AI developers to create models that can support very edge applications.
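As a minimal sketch of that workflow (the saved-model path, input shape, and file names below are hypothetical placeholders), TensorFlow Lite's post-training quantization converts a trained model into an 8-bit .tflite file small enough for a microcontroller:

```python
import tensorflow as tf

# Hypothetical path to a trained model; in practice this is the model you trained.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Enable post-training quantization and force full-integer (int8) execution.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

# A representative dataset lets the converter calibrate activation ranges.
def representative_data_gen():
    for _ in range(100):
        # Placeholder 96x96 grayscale input; replace with real sensor samples.
        yield [tf.random.normal([1, 96, 96, 1])]

converter.representative_dataset = representative_data_gen

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} kB")
```

The resulting file is typically embedded in firmware as a C array (for example via xxd -i model_int8.tflite) and executed on the device by the TensorFlow Lite for Microcontrollers interpreter.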

TinyML chipset manufacturers, says the firm, must focus on developing their AI developer ecosystem or be part of existing ecosystems, embrace open source, and focus on articulating their unique selling points and target markets to end users. Without these conditions, chipset suppliers will struggle to generate scale for their products in what is expected to be a very competitive market.

“At the moment most of these solutions are still in the early stages of commercial deployment in smart cities and smart manufacturing, mainly used for asset tracking and anomaly sensing, and yet to achieve large-scale adoption,” says Su. “While able to offer better processing capabilities, sensors with TinyML are often much more expensive. End users will also need to design and introduce a new set of procedures and protocols to leverage the information and insights derived from these sensors.”

For more, see “Very Edge AI Chipsets for TinyML Applications.”

 

Related articles:
Edge AI chipset market to surpass that of cloud in 2025
AI at the edge and beyond – whitepaper
Edge AI partnership combines neural sensor SoC, TinyML platform

 
