
Processor-in-memory DRAM benchmarked on Xeon server
Upmem SAS (Grenoble, France) has demonstrated DDR4-compatible DIMMs populated with processor-in-memory (PIM) chips running in x86 Xeon servers and delivering improved performance. Typical results on performance-critical applications showed PIM-based operation was ten times more efficient than a server fitted with standard DRAM.
Processing-in-memory offloads compute into the memory chips where the data resides, saving on data movement, latency and power consumption. Upmem's PIM accelerators reduce data movement while leveraging existing server architectures and memory technology.
Applications in genomics mapping and index searching validate the potential of PIM to improve speed and save power by an order of magnitude, said Upmem.
Offloading the full human genome analysis (mapping and variant analysis) for a GATK-compliant pipeline proves more than ten times faster when the Xeon server is equipped with Upmem PIM modules instead of the same amount of standard DRAM. It also consumes six times less energy per unit of throughput.
By sending an index search request to hundreds of PIM-DRAM units in parallel, instead of relying on the main CPU, a server with PIM modules delivers 11 times the throughput of the same server with the same amount of standard DRAM. It also cuts the time to get the result back: on average 35 times faster. Server-level energy consumption per unit of throughput is six times lower, without any hardware modification, and the approach can be extended further.
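The parallel dispatch described above can be sketched conceptually: the index is sharded across many memory-resident compute units, each unit searches only its local shard, and the host merely broadcasts the query and merges the small result sets. The sharding scheme, unit count and function names below are illustrative assumptions for this sketch, not UPMEM's actual SDK or API.

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual model of PIM-style index search: compute moves to where
# the data lives, so only the query and the (small) hits cross the
# memory bus. All names here are hypothetical, not the UPMEM SDK.

def build_shards(index_entries, n_units):
    """Split the index round-robin across n_units PIM units."""
    shards = [[] for _ in range(n_units)]
    for i, entry in enumerate(index_entries):
        shards[i % n_units].append(entry)
    return shards

def search_shard(shard, query):
    """Stands in for the code running inside one PIM unit:
    it scans only its local shard of the index."""
    return [entry for entry in shard if query in entry]

def pim_search(shards, query):
    """Host side: broadcast the query to all units in parallel,
    then gather and merge the per-unit hits."""
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        results = pool.map(search_shard, shards, [query] * len(shards))
    return [hit for partial in results for hit in partial]

if __name__ == "__main__":
    index = [f"record-{i:04d}" for i in range(1000)]
    shards = build_shards(index, n_units=8)
    print(sorted(pim_search(shards, "record-004")))
```

The host's work is independent of index size; it scales with the number of units and the result volume, which is why fan-out to hundreds of units raises throughput rather than saturating the CPU.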
“The early benchmarks reflect the benefits that PIM can bring when stopping most of the off-chip data movement between the memory and the processing cores, while unleashing the available data bandwidth,” said Upmem CTO and co-founder Fabrice Devaux. “We expect dozens of evaluations from partners and large research labs to show market adoption and potential, in all geographies, with a larger percentage in the US market.”
Related links and articles:
News articles:
Western Digital backs processor-in-memory startup
Processing-in-Memory architecture takes embedded AI to 8.8 TOPS/W
Startup plans to embed processors in DRAM
