
SiFive adds matrix engine for RISC-V AI processor IP

SiFive has launched its first RISC-V processor IP to combine a matrix engine with custom extensions, targeting edge AI applications.
The XM Series IP from SiFive includes a highly scalable AI matrix engine for edge AI and IoT chips, as well as consumer devices, next-generation electric and autonomous vehicles, and data centers.
SiFive also announced its intention to open source a reference implementation of its SiFive Kernel Library (SKL).
The XM integrates scalar, vector, and matrix engines to address the memory bandwidth issues of AI designs. Each cluster contains four X-Cores, each with dual vector units, and a cluster can deliver 16 TOPS (INT8) or 8 TFLOPS (BF16) per GHz. These cores can execute all other layers, such as activation functions, and add new custom instructions for exponential acceleration.
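
As a rough illustration of how those per-GHz figures scale (the 2 GHz clock and four-cluster configuration below are assumed values for the example, not SiFive specifications), a simple calculation in Python:

    # Per-cluster rates quoted for the XM Series
    TOPS_INT8_PER_GHZ = 16     # INT8 TOPS per cluster, per GHz
    TFLOPS_BF16_PER_GHZ = 8    # BF16 TFLOPS per cluster, per GHz

    def throughput(clock_ghz: float, clusters: int = 1) -> dict:
        """Scale the quoted per-GHz figures by clock speed and cluster count."""
        return {
            "INT8 TOPS": TOPS_INT8_PER_GHZ * clock_ghz * clusters,
            "BF16 TFLOPS": TFLOPS_BF16_PER_GHZ * clock_ghz * clusters,
        }

    # Hypothetical 2 GHz design with four clusters
    print(throughput(2.0, clusters=4))   # {'INT8 TOPS': 128.0, 'BF16 TFLOPS': 64.0}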
The new matrix instructions are fetched by the scalar unit, with source data coming from the vector registers and the results sent to the matrix accumulators.
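
That dataflow resembles an outer-product accumulate: operands are read from vector registers and partial results build up in a wider matrix accumulator. The NumPy sketch below is only a loose analogy of that pattern; the register width and tile shape are invented for illustration and are not SiFive's actual parameters.

    import numpy as np

    VLEN = 16  # illustrative vector length, not an XM Series parameter

    # Operands as they might sit in two vector registers (INT8)
    a_vreg = np.random.randint(-128, 128, VLEN, dtype=np.int8)
    b_vreg = np.random.randint(-128, 128, VLEN, dtype=np.int8)

    # Matrix accumulator holds widened (INT32) partial sums
    acc = np.zeros((VLEN, VLEN), dtype=np.int32)

    # One conceptual matrix operation: outer-product accumulate of the two operands
    acc += np.outer(a_vreg.astype(np.int32), b_vreg.astype(np.int32))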
Each XM Series cluster has 1 TB/s of sustained memory bandwidth and can access memory either through a high-bandwidth port to SRAM for model data or through a CHI port for coherent memory access.
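
Taken together with the compute figures above, that bandwidth implies a break-even arithmetic intensity of roughly 16 INT8 operations per byte moved for a cluster to stay compute-bound. A back-of-envelope check (the 1 GHz clock is assumed only to keep the arithmetic simple):

    # Per-cluster figures quoted for the XM Series
    bandwidth_bytes_per_s = 1e12          # 1 TB/s sustained
    int8_ops_per_s = 16e12 * 1.0          # 16 TOPS per GHz at an assumed 1 GHz

    # Minimum ops per byte needed before memory bandwidth becomes the limit
    min_intensity = int8_ops_per_s / bandwidth_bytes_per_s
    print(f"{min_intensity:.0f} INT8 ops per byte")   # -> 16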
“Many companies are seeing the benefits of an open processor standard while they race to keep up with the rapid pace of change with AI. AI plays to SiFive’s strengths with performance per watt and our unique ability to help customers customize their solutions,” said Patrick Little, CEO of SiFive.
“RISC-V was originally developed to efficiently support specialized computing engines including mixed-precision operations,” said Krste Asanovic, SiFive Founder and Chief Architect. “This, coupled with the inclusion of efficient vector instructions and the support of specialized AI extensions, are the reasons why many of the largest datacenter companies have already adopted RISC-V AI accelerators.”
