
SemiDynamics aims to future-proof embedded AI chip designs


Technology News
By Nick Flaherty



Spanish AI IP supplier SemiDynamics has combined its RISC-V core and AI accelerator blocks into a single package to future-proof embedded AI chip design.

SemiDynamics has combined its RISC-V core, vector engine and tensor engines with its interconnect into a single ‘all-in-one’ element with an open source software layer. This will be launched at the Embedded World show next week in Germany.

“Our solution protects you from whatever the AI guys invent next, so we think this element approach is quite a change; it’s balanced. It’s subtle but super important,” said Roger Espasa, CEO and founder of SemiDynamics in Barcelona.

“Talking to customers, we realised they were buying the core, NPU or GPU from different people but didn’t seem to be able to put the three parts of the solution together,” he tells eeNews Europe.

“I’m offering a solution with a convolution engine and a flexible vector engine that is the equivalent of the GPU, and I guarantee I can run any framework, any Turing-computable algorithm, now or in the future, because it is fully programmable, so the investment is protected.”

The integration saves on memory, and therefore on die size. The vector storage and the tensor storage are the same, with a single cache in front of the element rather than three separate memories for the different blocks. These are linked by the Gazillion non-blocking interconnect.

The single element also has a single development environment. “With us you get the three elements together under one RISC-V programming environment so the programming environment is homogeneous rather than heterogeneous from different vendors, and it’s a very simple software environment with the ONNX runtime,” he said.

“The second point is that the element is scalable, so with one design you can go from one TOPS to as many as you want by just adding more elements to get the right performance to fit the power envelope. The element can be defined at, for example, 2 TOPS or 4 TOPS, and these can scale.”

“We will have a 20 TOPS version and a 40 TOPS version, and if the customer doesn’t want to know the details we will solve this for them, for example the maximum clock, so we will tune it in different ways. Lower frequency requires more area; higher frequency requires more power.

“We have line of sight to 100 to 200 TOPS,” he said.
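The scaling model described above can be sketched as simple arithmetic: a target performance is reached by tiling identical elements. The numbers below are illustrative only, not SemiDynamics specifications.

```python
# Hypothetical sizing sketch: performance scales by adding identical
# elements of a fixed rating (e.g. 2 or 4 TOPS each, per the article).
import math

def elements_needed(target_tops: float, tops_per_element: float) -> int:
    """Smallest number of identical elements that reaches the target."""
    return math.ceil(target_tops / tops_per_element)

# A 40 TOPS design built from 4-TOPS elements:
print(elements_needed(40, 4))   # 10
# The 100-200 TOPS "line of sight" with the same element:
print(elements_needed(200, 4))  # 50
```

In a real design the per-element rating would also trade off against clock frequency, area and power, as Espasa notes.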

The flexibility is key for an embedded AI chip. “Sometimes there are just a fixed number of models, perhaps a transformer mixed with a CNN, and we look at what they want to achieve.”

The ONNX layer enables a range of AI framework tools.

“All of the frameworks such as PyTorch or TensorFlow Lite have an ONNX export, and the runtime is a nice piece of engineering,” he said. “The runtime is well thought out, and that helps us provide a minimal but important layer, and we can optimise this for new operators in software.”

The company provides its IP directly to companies designing their own chips, but also to third-party design houses.

www.semidynamics.com

 

 


