
Expedera raises $18m for edge AI accelerator IP

Business news
By Nick Flaherty



Californian AI accelerator designer Expedera has raised $18m to boost development of its deep learning accelerator IP.

The Series A funding round was led by Dr. Sehat Sutardja and Weili Dai, founders of Marvell Technology Group, together with other prominent semiconductor industry investors. This brings the total raised to over $27m and will enable Expedera to accelerate product development and expand sales and marketing to meet demand for its high-performance, energy-efficient deep learning accelerator (DLA) IP.

“We expect shipments of AI-enabled edge devices to grow from about 600 million units in 2020 to 2 billion units in 2025, representing 26% annual growth,” said Linley Gwennap, Principal Analyst at The Linley Group. “Smartphones, a market where Expedera already has traction, represent about half of these units.”

“This financing underscores the success that Expedera has had so far and will enable us to expand our portfolio and team to meet the market needs,” said Da Chuang, CEO of Expedera. “We are incredibly happy to have Weili Dai and Sehat Sutardja lead this round. As highly respected veterans of the semiconductor industry, they have a unique understanding of the market and customer needs. I look forward to a long partnership.”

Expedera’s deep learning accelerator IP provides the industry’s highest performance per watt, and is scalable up to 128 TOPS with a single core and to PetaOps with multi-core. The first products deliver 18 TOPS/W at 7nm, which Chuang says is up to ten times more than competitive offerings while minimizing memory requirements.

“We’ve taken a novel approach to AI acceleration inspired by the team’s extensive background in network processing,” said Chuang. “We’ve created an AI architecture that allows us to load the entire network model as metadata and run it natively using very little memory. If you plot performance in terms of TOPS/W or ResNet-50 IPS/W you’ll see that all other vendors hit a wall around 4 TOPS/W or 550 IPS/W. However, we can break through the wall with 18 TOPS/W or 2000 IPS/W. As our hardware processes the model monolithically, we are not constrained by memory bandwidth and can scale up to over 100 TOPS.”

This makes the IP well suited to AI inference applications at the edge. Expedera's Origin IP and software platform support popular AI frontends including TensorFlow, ONNX, Keras, MXNet, Darknet, CoreML and Caffe2 through Apache TVM.
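For readers unfamiliar with that flow, the sketch below shows how a trained model typically reaches an accelerator through Apache TVM: the model is imported via one of the supported frontends (ONNX here) and compiled with TVM's Relay build pipeline. This is a minimal, generic example, not Expedera's toolchain; the model file name and input shape are placeholders, and a standard "llvm" CPU target stands in for any vendor-specific backend, which is not publicly documented.

# Minimal sketch of the standard Apache TVM import/compile flow.
# Assumptions: "resnet50.onnx" and the input name/shape are placeholders,
# and the generic "llvm" target stands in for a vendor backend.
import numpy as np
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Load a trained model exported from one of the supported frontends (ONNX here).
onnx_model = onnx.load("resnet50.onnx")

# Tell Relay the name and shape of the model's input tensor.
shape_dict = {"input": (1, 3, 224, 224)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile the whole graph; a vendor code generator would plug in at this stage.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run the compiled module on the host CPU to sanity-check the import.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
print(module.get_output(0).shape)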

By licensing its technology as semiconductor IP, Expedera enables any chip designer to add AI functionality to their designs.

expedera.com
