General purpose neural inferencing engine targets DSP acceleration

April 08, 2020 // By Julien Happich
Embedded FPGA (eFPGA) IP provider Flex Logix Technologies has released key benchmarking data for its new nnMAX architecture, a general-purpose neural inferencing engine that also excels at accelerating key DSP functions.

For FIR (finite impulse response) filters, the company says nnMAX can process up to 1 gigasample per second with hundreds or even thousands of “taps”, or coefficients. FIR filters are widely used across commercial and aerospace applications. Cheng Wang, Flex Logix’s senior VP of engineering and co-founder, disclosed these benchmarks and more at the online Linley Spring Processor Conference in a presentation titled “DSP Acceleration using nnMAX.”
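For readers unfamiliar with the workload being benchmarked: an FIR filter computes each output sample as a weighted sum of the current and recent input samples, one multiply-accumulate (MAC) per tap, which is exactly the kind of operation a MAC-array inference engine can parallelize. A minimal illustrative sketch in Python (not Flex Logix code; the function name and coefficients are hypothetical examples):

```python
# Direct-form FIR filter: y[n] = sum_k taps[k] * x[n - k].
# Each output sample costs one multiply-accumulate per tap, so a
# 1,000-tap filter at 1 Gsample/s implies ~1e12 MACs per second.

def fir_filter(x, taps):
    """Apply an FIR filter with the given coefficients ("taps") to x."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(taps)):
            if n - k >= 0:  # samples before the start are treated as zero
                acc += taps[k] * x[n - k]
        y.append(acc)
    return y

# Example: a 4-tap moving-average filter smooths a step input.
taps = [0.25, 0.25, 0.25, 0.25]
signal = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
print(fir_filter(signal, taps))  # -> [0.0, 0.0, 0.25, 0.5, 0.75, 1.0]
```

In production DSP code the inner loop would be vectorized (e.g. via `numpy.convolve`) or offloaded to hardware; the point here is only to show where the per-tap MAC count comes from.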

“Because nnMAX is so good at accelerating AI inference, customers started asking us if it could also be applied to DSP functions,” said Geoff Tate, CEO and co-founder of Flex Logix. “When we started evaluating their models, we found that it can deliver similar performance to the most expensive Xilinx FPGAs in the same process node (16nm), and is also faster than TI’s highest-performing DSP – but in a much smaller silicon area than both those solutions. nnMAX is available now for 16nm SoC designs and will be available for additional process nodes in 2021.”

