Embedded memory IP specialist sureCore has developed a high-performance, multi-port embedded memory for a RISC-V-based tensor processing chip.
Semidynamics, based in Barcelona, Spain, is developing a high-bandwidth vector processing unit optimised for tensor processing in AI applications. Its vector compute units must be tightly coupled to a high-performance register file. While this can be implemented with a standard synthesised flow, the result is often inefficient in terms of power and area. The sureCore design delivers optimal, predictable performance across all key parameters, including a 2.5GHz operating speed.
“Using a standard, synthesised physical implementation flow would deliver the performance we need, but at the expense of power and area,” said Roger Espasa, CEO of Semidynamics. “Achieving the optimal configuration of power, performance and area (PPA) is always the key goal in realising a chip design with the right market value proposition.
“This is why we went to sureCore to exploit their embedded memory expertise, as this is fundamental to meeting our performance goals,” he said. “In fact, a central feature of our highly parallel architecture is that it requires multiple instantiations of the high-performance register file to deliver the performance. Effectively, we de-risked a critical part of the design.”
“Usually companies come to us for ultra-low power, embedded memory solutions. What we were able to demonstrate in this case is that our architectures could instead be optimised for performance and still deliver compelling power and area characteristics,” said Paul Wells, CEO of sureCore.