Axelera launches Metis in memory computing AI edge cards

By Nick Flaherty


Dutch AI startup Axelera is launching a PCI Express card for its Metis accelerator chip and plans to have a smaller M.2 format card next month.

“It’s a true European product with cutting edge performance and efficiency,” Fabrizio Del Maffeo, CEO and co-founder of Axelera tells eeNews Europe.

The $149 PCIe board is a key part of the strategy to provide AI computer vision for edge applications.

“Delivering a chip is not a scalable solution,” said Del Maffeo. “Nvidia is not selling a chip. You want a customer to have a smooth and fast integration experience, so from day one we said we would design an acceleration card. So after just two years we have scaled the company to 140 people and taped out two chips with tens of customers. This is a very important step, as getting the hardware into customers’ hands is key.”

“We are not just a semiconductor company. We have IP with the in-memory computing and RISC-V, but also at the algorithm level with quantisation, network optimisation and network architecture search to offer higher performance with lower latency and lower training time.”

The Metis chip is the second chip from the company and uses a digital in-memory computing architecture to reduce power consumption, reaching 214 TOPS within a power envelope of 3 to 16W for embedded applications. It is built on TSMC’s 12nm process.

“Thetis was the test chip to validate the digital in-memory compute engine that sits at the core of the accelerator,” said Paul Neil, VP of Product Management at Axelera. “Then we built a full SoC around four instances of the AI accelerator engine with the matrix engine and a scalar engine, and that is Metis. Each core is controlled by a RISC-V core in a data flow architecture.”

The chip has 32MB of L2 cache on the chip to run AI networks in memory. “By themselves the cores are capable of processing a wide variety of AI frameworks,” said Neil. “The 32MB L2 cache allows the cores to communicate, with independent networks on each core, or multiple cores running the same network, or temporal and spatial pipelining.”
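The spatial pipelining Neil mentions can be pictured with a small scheduling sketch. This is a hypothetical illustration, not the Axelera SDK: it assumes each core owns one stage of a network and frames stream through the cores, so that after a short fill phase all cores are busy on different frames.

```python
def pipeline_schedule(num_frames, stages):
    """Illustrative only: return, per time step, which (core, stage, frame)
    triples are active when each core owns one pipeline stage and frames
    stream through (spatial pipelining across accelerator cores)."""
    steps = []
    # A frame enters stage 0 at its own index and moves one stage per step.
    for t in range(num_frames + len(stages) - 1):
        active = []
        for core, stage in enumerate(stages):
            frame = t - core  # frame currently at this stage, if any
            if 0 <= frame < num_frames:
                active.append((f"core{core}", stage, f"frame{frame}"))
        steps.append(active)
    return steps

# Stage names are assumptions for illustration.
for t, active in enumerate(pipeline_schedule(3, ["stem", "backbone", "head"])):
    print(t, active)
```

By step 2 all three cores are occupied on three different frames, which is the throughput benefit of splitting one large network across cores rather than replicating it.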

There is an LPDDR4x memory interface “for larger models that bust the 32MB,” said Neil.

The PCIe 3.0 card is being integrated into an edge vision server from Advantech, and there are plans for a card with four chips next year that would reach 856 TOPS.

Just as selling a chip is not enough, so the software is key for adoption by engineers who are not data scientists. The aim is for the Metis chip to be an accelerator alongside a host processor.

“We have a fully featured SDK that goes along with the system products and this has a number of components,” said Neil. An optimising compiler framework called Voyager, built on TVM, compiles an AI neural network to the chip, while the Axelera model zoo maps frameworks to the in-memory compute architecture.

“A lot of our customers have well developed vision pipelines running on the host and they want to link to the runtime and run the acceleration on the card,” he said.  
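That host-plus-accelerator split can be sketched in a few lines. This is a hypothetical illustration of the integration pattern described above, not Axelera's actual runtime API: the function names and data shapes are invented stand-ins, with capture and post-processing staying on the host and only the inference stage handed to the card.

```python
# Hypothetical host-side pipeline sketch -- not the Axelera runtime API.
# Function names and payloads are illustrative assumptions.

def capture_frame(frame_id):
    """Host side: stand-in for camera capture / video decode."""
    return {"id": frame_id, "pixels": [0] * 8}  # dummy image data

def accelerate_inference(frame):
    """Stand-in for the call into the card's runtime; on real hardware this
    would submit the frame to the Metis accelerator and collect results."""
    return {"id": frame["id"], "detections": ["object"]}

def postprocess(result):
    """Host side: draw boxes, publish events, feed the application logic."""
    return f"frame {result['id']}: {len(result['detections'])} detection(s)"

# The existing host pipeline keeps its structure; only the middle
# stage is offloaded.
for i in range(3):
    print(postprocess(accelerate_inference(capture_frame(i))))
```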

“We have low code development. Engineers can import pretrained models using the YAML language to prototype and develop with all the tools and that gives a light touch entry point into the tool chain. This low code approach will be augmented with a no code cloud based graphical composer environment by the end of Q1 2024.”
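The article does not show Voyager's actual YAML schema, so the following is only a hedged sketch of what such a low-code model description might look like; every field name and value here is an illustrative assumption.

```yaml
# Hypothetical sketch only -- Voyager's real YAML schema is not shown in
# the article; all field names here are illustrative assumptions.
model:
  name: my-detector
  source: ./weights/pretrained-detector.onnx   # pretrained model to import
  task: object-detection
quantisation:
  precision: int8              # quantised for the in-memory compute cores
  calibration-data: ./calib-images/
deploy:
  target: metis-pcie           # assumed device identifier
  cores: 4
```

The point of such a file is the "light touch entry point" Neil describes: an embedded engineer declares what to run rather than writing compiler-level code.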

The tool chain has been designed to cater for every class of user, says Neil, from innovators and early adopters with deep machine learning skills who can exploit the full performance of the Voyager compiler, through to embedded software engineers who simply want to solve a business problem.

“With the first phase of the early access programme we are showing off our ability to deliver optimised models.”

Transformer frameworks

Many edge AI chip developers are porting transformer frameworks to their architectures. This is also something that Axelera is looking at, says Neil.

“AI for computer vision is a well understood and mature technology,” said Neil. “What we observe with vision transformers is typically a hybrid framework with vision transformer elements, so future support will be at the primitive level, supporting a wide range of transformer primitives natively in hardware.”

The company is also working with two integration partners, Advantech and Seco. “We sell directly or they sell directly,” said Del Maffeo. “We are not planning to go to the end user, but to help the integrator and partner for specific software for applications, whether that is for crop monitoring or inspection systems.”

Silicon roadmap

“Our design is a vanilla digital process at 12nm, so we can take advantage of performance and cost improvements very easily; this is a volume solution,” said Neil. “There are a number of ways we can explore the architecture space. It is designed for CNNs for vision, and there are a number of vectors that we can follow – architecture scalability, architecture innovation, device scalability and process scalability.”

“But we are an aggregator, not an integrator. We sit as a node in the network and take multiple streams and process them alongside a host that can be scaled and provisioned for the application.”

www.axelera.ai

 
