February 18, 2021 // By Nick Flaherty
Samsung embeds AI accelerator in memory chip
The HBM-PIM architecture cuts AI power consumption by 70 percent and doubles processing performance

The HBM-PIM chip is now being tested in a range of datacentre AI accelerators, expected to come from Nvidia and AMD, both of which support HBM2 interfaces, with all validations due to be completed within the first half of this year. One of the first deployments of the chip will be in the supercomputer used by the Argonne lab in the US.

“I’m delighted to see that Samsung is addressing the memory bandwidth/power challenges for HPC and AI computing. HBM-PIM design has demonstrated impressive performance and power gains on important classes of AI applications, so we look forward to working together to evaluate its performance on additional problems of interest to Argonne National Laboratory,” said Rick Stevens, Argonne’s Associate Laboratory Director for Computing, Environment and Life Sciences.

www.samsung.com
