
Expedera raises $18m for edge AI accelerator IP
Californian AI accelerator designer Expedera has raised $18m to boost development of its deep learning accelerator IP.
The Series A funding round was led by Dr. Sehat Sutardja and Weili Dai, founders of Marvell Technology Group, along with other prominent semiconductor industry investors. It brings the total amount raised to over $27m and will enable Expedera to speed product development and expand sales and marketing to meet demand for its high-performance, energy-efficient deep learning accelerator (DLA) IP.
“We expect shipments of AI-enabled edge devices to grow from about 600 million units in 2020 to 2 billion units in 2025, representing 26% annual growth,” said Linley Gwennap, Principal Analyst at The Linley Group. “Smartphones, a market where Expedera already has traction, represent about half of these units.”
“This financing underscores the success that Expedera has had so far and will enable us to expand our portfolio and team to meet the market needs,” said Da Chuang, CEO of Expedera. “We are incredibly happy to have Weili Dai and Sehat Sutardja lead this round. As highly respected veterans of the semiconductor industry, they have a unique understanding of the market and customer needs. I look forward to a long partnership.”
Expedera’s deep learning accelerator IP is claimed to provide the industry’s highest performance per watt, scaling up to 128 TOPS with a single core and to PetaOps with multiple cores. The first products deliver 18 TOPS/W at 7nm, which Chuang says is up to ten times more than competing offerings while minimizing memory requirements.
“We’ve taken a novel approach to AI acceleration inspired by the team’s extensive background in network processing,” said Chuang. “We’ve created an AI architecture that allows us to load the entire network model as metadata and run it natively using very little memory. If you plot performance in terms of TOPS/W or ResNet-50 IPS/W you’ll see that all other vendors hit a wall around 4 TOPS/W or 550 IPS/W. However, we can break through the wall with 18 TOPS/W or 2000 IPS/W. As our hardware processes the model monolithically, we are not constrained by memory bandwidth and can scale up to over 100 TOPS.”
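As a rough sanity check, the headline figures above can be related with simple arithmetic. The sketch below uses only the numbers quoted in the article; the core count and the assumption that peak efficiency holds at peak throughput are illustrative, not Expedera specifications.

```python
# Back-of-the-envelope arithmetic from the figures quoted above.
# Core count and constant-efficiency assumption are illustrative only.

single_core_tops = 128       # quoted single-core peak throughput
tops_per_watt = 18           # quoted efficiency at 7nm
petaops_in_tops = 1000       # 1 PetaOps expressed in TOPS

cores_for_petaops = petaops_in_tops / single_core_tops
print(f"~{cores_for_petaops:.0f} cores to reach 1 PetaOps")           # ~8 cores

power_at_peak = single_core_tops / tops_per_watt
print(f"~{power_at_peak:.1f} W for one 128 TOPS core at 18 TOPS/W")   # ~7.1 W
```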
This makes the IP suitable for AI inference applications at the edge. Expedera’s Origin IP and software platform support popular AI frontends including TensorFlow, ONNX, Keras, MXNet, Darknet, CoreML and Caffe2 through Apache TVM.
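Expedera has not published details of its toolchain, but Apache TVM’s standard Relay frontend flow illustrates how a trained model from one of those frameworks is typically imported and compiled. The model file name, input shape and LLVM target below are placeholders, not part of Expedera’s platform.

```python
# Minimal sketch of importing and compiling an ONNX model with Apache TVM.
# File name, input shape and target are illustrative placeholders.
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("resnet50.onnx")                    # any ONNX model
shape_dict = {"data": (1, 3, 224, 224)}                    # network input shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)   # a vendor backend would replace "llvm"

lib.export_library("resnet50_compiled.so")                 # deployable artifact
```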
By licensing its technology as semiconductor IP, Expedera enables any chip designer to add AI functionality to their designs.
Related articles
- Blaize raises $71m to take on Nvidia from Europe
- Quad core IP for edge AI data processing
- Automotive edge AI chip designer Recogni raises $49m
- Quad PCIe card with in-memory computation for edge AI
