
Marvell partners with Micron, Samsung, SK Hynix on custom HBM


By Peter Clarke



Marvell has said it has been working with the three leading memory chip makers – Micron, Samsung and SK Hynix – on custom HBM that improves the performance of its XPU AI accelerators.

When the custom HBM dies are included in an XPU, they enable access to 33 percent more memory and 25 percent more performance while improving power efficiency, Marvell said.

At the same time, Marvell has upgraded its memory controller logic and advanced packaging to create the new XPU designs.

Marvell said it is collaborating with its cloud customers and the leading HBM manufacturers – Micron, Samsung Electronics and SK Hynix – to define and develop further custom HBM DRAMs for next-generation XPUs.


Marvell, like AI-GPU market leader Nvidia, uses high-bandwidth memory (HBM) DRAMs within its AI processors, built as multi-die components with advanced packaging and high-speed interfaces. However, the scaling of XPU performance has been limited by the current standard interface-based architecture, Marvell said. The Marvell custom HBM compute architecture introduces tailored interfaces to optimize performance, power, die size and cost for specific XPU designs.

The key here is that, rather than taking off-the-shelf HBM from DRAM vendors, Marvell is co-designing the logic, the stacked HBM DRAMs, the interfaces and the packaging.

The Marvell custom HBM compute architecture enhances XPUs by serializing and speeding up the I/O interfaces between its internal AI compute accelerator silicon dies and the HBM base dies. This results in greater performance and up to 70 percent lower interface power compared with standard HBM interfaces, Marvell said.
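
As a rough sketch of what that power claim implies, the back-of-envelope calculation below multiplies an assumed energy cost per bit by aggregate bandwidth. The energy-per-bit and bandwidth figures are illustrative placeholders, not Marvell data; only the 70 percent reduction is taken from the company's claim.

```python
# Back-of-envelope HBM interface power comparison: a hypothetical
# sketch, not Marvell design data. Energy and bandwidth values are
# illustrative assumptions.

STANDARD_PJ_PER_BIT = 1.0  # assumed energy cost of a standard wide HBM interface
CUSTOM_PJ_PER_BIT = 0.3    # assumed serialized interface (70% lower, per Marvell's claim)
BANDWIDTH_GBPS = 1_000     # assumed aggregate HBM bandwidth, gigabytes per second

def interface_power_watts(pj_per_bit: float, bandwidth_gbps: float) -> float:
    """Interface power = energy per bit x bits transferred per second."""
    bits_per_second = bandwidth_gbps * 1e9 * 8
    return pj_per_bit * 1e-12 * bits_per_second

standard = interface_power_watts(STANDARD_PJ_PER_BIT, BANDWIDTH_GBPS)
custom = interface_power_watts(CUSTOM_PJ_PER_BIT, BANDWIDTH_GBPS)
print(f"standard interface: {standard:.1f} W, custom: {custom:.1f} W "
      f"({1 - custom / standard:.0%} lower)")
```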

The serialized interfaces also reduce the required die area, allowing HBM support logic to be integrated onto the base die. The die area released can be used to enhance compute capabilities, add features and support up to 33 percent taller HBM stacks, increasing memory capacity per XPU. These improvements boost XPU performance and power efficiency while lowering the total cost of ownership for cloud operators, said Marvell.
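
For a sense of the capacity arithmetic, a 33 percent taller stack translates directly into 33 percent more memory per XPU. The sketch below works through hypothetical numbers; the per-die capacity, stack heights and stacks-per-XPU count are assumptions for illustration, not Marvell specifications.

```python
# Hypothetical illustration of how taller HBM stacks raise memory
# capacity per XPU. All figures below are assumed for the sketch.

GB_PER_DRAM_DIE = 3        # assumed capacity of one HBM DRAM die (24 Gb)
STANDARD_STACK_HIGH = 12   # assumed standard stack height, dies per stack
CUSTOM_STACK_HIGH = 16     # 33% taller stack, per Marvell's claim
HBM_STACKS_PER_XPU = 8     # assumed number of HBM stacks per XPU

standard_gb = GB_PER_DRAM_DIE * STANDARD_STACK_HIGH * HBM_STACKS_PER_XPU
custom_gb = GB_PER_DRAM_DIE * CUSTOM_STACK_HIGH * HBM_STACKS_PER_XPU
print(f"standard: {standard_gb} GB, custom: {custom_gb} GB "
      f"(+{custom_gb / standard_gb - 1:.0%})")
```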

“Enhancing XPUs by tailoring HBM for specific performance, power, and total cost of ownership is the latest step in a new paradigm in the way AI accelerators are designed and delivered,” said Will Chu, general manager of the compute and storage business group at Marvell, in a statement.

In the same statement Harry Yoon, Samsung’s head of products and solutions planning for the Americas region, said: “Optimizing HBM for specific XPUs and software environments will greatly improve the performance of cloud operators’ infrastructure and ensure efficient power use.”

Related links and articles:

www.marvell.com

News articles:

Marvell, AWS partner for EDA-in-the-cloud

Samsung, SK Hynix partner on processing-in-memory

SK Hynix prepares 16-layer HBM3e DRAM for 2025

Nvidia asks SK Hynix to supply HBM4 earlier


