
Ambiq looks to go Atomiq with $9bn IPO

Business news |
By Nick Flaherty


Ambiq Micro in the US is developing a new family of ultra-low-power edge AI chips as it eyes a valuation of up to $9bn in a public offering.

The initial public offering (IPO) price is expected to be between $22.00 and $25.00 per share, with 356m shares issued; the number of shares available in the offering has yet to be announced. The company had previously looked at an IPO in 2021.

The company saw revenue of $76m in 2024 and $65m in 2023. Its Apollo ARM-based, near-threshold CMOS microcontrollers currently power over 270 million devices, with over 42 million units shipped in 2024, 40% of them running AI algorithms.

It is integrating the SPOT near-threshold technology into additional chip products for AI in medical, digital health, industrial, security, smart home and buildings, robotics, and automotive markets.

The first Atomiq edge AI microcontroller, currently in development, is expected to feature a full neural processing unit (NPU) for high-performance AI acceleration along with a new memory design for minimum power and maximum performance on AI model execution at the edge.

The company has also launched two AI runtime tools for its Apollo microcontrollers in battery-powered and wearable designs.

HeliosRT is a performance-enhanced implementation of LiteRT (formerly TensorFlow Lite for Microcontrollers) that is tailored for energy-constrained environments and is fully compatible with existing TensorFlow workflows. Custom AI kernels are optimised for the Apollo510 vector acceleration hardware, with improved numeric support for audio and speech processing models and a 300% improvement in inference speed and power efficiency over standard LiteRT implementations.
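To give a flavour of what such a runtime optimises, the sketch below is a toy Python model of a LiteRT-style quantized fully-connected kernel: real values are mapped to int8 via an affine scheme, the dense layer accumulates in wide integers, and the result is rescaled back to int8. This is purely illustrative, not Ambiq or LiteRT code; kernels of this shape are what vector acceleration hardware speeds up.

```python
# Toy sketch (not Ambiq/LiteRT code): the int8 fully-connected kernel is the
# kind of operation a runtime such as HeliosRT accelerates on-device.
# LiteRT-style quantization maps a real value r to int8 q via
# r = scale * (q - zero_point).

def quantize(values, scale, zero_point):
    """Map float values to clamped int8 using affine quantization."""
    q = [round(v / scale) + zero_point for v in values]
    return [max(-128, min(127, x)) for x in q]

def dequantize(q_values, scale, zero_point):
    """Map int8 values back to the real-valued domain."""
    return [scale * (q - zero_point) for q in q_values]

def fully_connected_int8(inputs, weights, bias, in_scale, in_zp,
                         w_scale, out_scale, out_zp):
    """Toy int8 dense layer: accumulate in wide ints, rescale to int8."""
    outputs = []
    for row, b in zip(weights, bias):
        acc = sum((x - in_zp) * w for x, w in zip(inputs, row)) + b
        real = acc * in_scale * w_scale       # back to real-valued domain
        outputs.append(round(real / out_scale) + out_zp)
    return [max(-128, min(127, o)) for o in outputs]
```

On a real microcontroller the rescaling step is done with fixed-point multipliers rather than floats; the float scales here just keep the sketch short.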

HeliosAOT introduces a ground-up, ahead-of-time compiler that transforms TensorFlow Lite models directly into embedded C code for edge AI deployment. This provides a 15–50% reduction in memory footprint versus traditional runtime-based deployments with granular memory control, enabling per-layer weight distribution across Apollo's memory hierarchy. Direct integration of generated C code into embedded applications provides greater flexibility for resource-constrained systems.
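The ahead-of-time idea can be illustrated with a hypothetical toy (not Ambiq's actual compiler, and the section names are invented): the tool walks the model layer by layer and emits each layer's weights as C arrays placed into named linker sections, so hot layers can be pinned into fast on-chip memory and cold layers left in flash.

```python
# Hypothetical illustration (not HeliosAOT output): an AOT model compiler
# emits each layer's int8 weights as a C array in a named linker section,
# letting the linker script place hot layers in fast memory.

def emit_layer_c(name, weights, section):
    """Emit one layer's int8 weights as a C array in a named section."""
    body = ", ".join(str(w) for w in weights)
    return (f'__attribute__((section("{section}")))\n'
            f"const int8_t {name}[{len(weights)}] = {{{body}}};\n")

def compile_model_to_c(layers):
    """layers: list of (name, weights, section) tuples -> one C source string."""
    lines = ["#include <stdint.h>\n"]
    lines += [emit_layer_c(*layer) for layer in layers]
    return "\n".join(lines)

model = [
    ("dense1_weights", [12, -7, 33, 5], ".tcm"),    # hot layer: fast memory
    ("dense2_weights", [1, 0, -4],      ".flash"),  # cold layer: flash
]
print(compile_model_to_c(model))
```

Because the weights become plain C constants rather than a runtime-loaded flatbuffer, the interpreter and its scratch allocations disappear, which is where the claimed memory-footprint reduction would come from.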

“The intersection of developer experience and power efficiency is our north star,” said Carlos Morales, VP of AI at Ambiq. “HeliosRT and HeliosAOT are designed to integrate seamlessly with existing AI development pipelines while delivering the performance and efficiency gains that edge applications demand. We believe this is a major step forward in making sophisticated AI truly ubiquitous.”

HeliosRT is available now in beta via the neuralSPOT SDK, with general release expected in Q3 2025, while HeliosAOT is currently available as a technical preview for select partners, with wider availability planned for Q4 2025.

www.ambiq.com
