3D DRAM breakthrough promises major AI inference performance gains

New Products | By eeNews Europe
d-Matrix and Alchip have announced a joint effort to develop what they call the world’s first 3D DRAM-based datacenter inference accelerator, aimed at easing the performance, cost, and scalability limits constraining today’s AI infrastructure. The companies say the collaboration pairs Alchip’s ASIC design experience with d-Matrix’s digital in-memory compute platform.

For eeNews Europe readers, this development points to a potential shift in how datacenters approach scale-out AI inference, especially as generative and agentic AI workloads continue to grow. Engineers evaluating next-generation inference accelerators will be interested in monitoring how 3D DRAM architectures compare in practice with advanced HBM-based solutions.

New architecture for high-speed inference

At the core of the announcement is d-Matrix’s 3DIMC, a 3D-stacked DRAM implementation designed to break traditional memory bandwidth bottlenecks. According to d-Matrix, the technology has already been validated on Pavehawk test silicon in the company’s labs. The company claims 3DIMC will deliver up to 10× faster inference than solutions built around HBM4, marking what could be a significant architectural shift in accelerator design.

The first commercial appearance of 3DIMC will be on the forthcoming d-Matrix Raptor inference accelerator, positioned as the successor to the company’s Corsair platform. Raptor targets generative AI, agentic AI, and other compute-intensive inference workloads that require increasingly specialized silicon solutions.

The companies say the joint engineering approach combines compute-memory integration with advanced ASIC capabilities to reach new levels of inference throughput and energy efficiency.

Building on prior platforms

d-Matrix frames the collaboration as a continuation of the compute-memory integration philosophy established with Corsair. Extending that architecture into 3D DRAM is described as the next logical step in supporting hyperscalers and enterprises facing rapidly growing inference demands.

“This collaboration combines our unique compute-memory integration technologies with Alchip’s ASIC design innovation capabilities to deliver the world’s first 3D DRAM inference solution,” d-Matrix said in a statement. “Together, we’re engineering a breakthrough that makes AI not only faster, but more cost-effective and sustainable at scale. 3DIMC represents the next logical step in our roadmap toward delivering efficient inference architectures that keep pace with the exponential growth of generative and agentic AI.”

For hardware architects, chip designers, and systems engineers, the promised cost efficiency and performance gains will be the key points of interest. As HBM4-based solutions push power and cost envelopes higher, the prospect of 3D DRAM as a scalable alternative may reshape discussions about datacenter inference roadmaps.
