Samsung Electronics has developed the industry’s first memory module supporting the new Compute Express Link (CXL) interconnect standard. Integrated with Samsung’s Double Data Rate 5 (DDR5) technology, this CXL-based module will enable server systems to significantly scale memory capacity and bandwidth, accelerating artificial intelligence (AI) and high-performance computing (HPC) workloads in data centres.
Unlike conventional DDR-based memory, which has limited memory channels, Samsung’s CXL-enabled DDR5 module can scale memory capacity to the terabyte level, while dramatically reducing system latency caused by memory caching.
Samsung has included several controller and software technologies, such as memory mapping, interface conversion and error management, which allow CPUs or GPUs to recognize the CXL-based memory and use it as main memory.
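The article does not detail Samsung's software stack, but CXL Type-3 memory expanders are commonly exposed to the operating system as CPU-less (memory-only) NUMA nodes, which the host can then use as additional main memory. Purely as an illustrative sketch (the helper `list_numa_nodes` is hypothetical, not Samsung's or any vendor's API), system software might identify such nodes on Linux by inspecting sysfs:

```python
from pathlib import Path

def list_numa_nodes(sysfs_root="/sys/devices/system/node"):
    """List NUMA nodes, flagging CPU-less (memory-only) nodes.

    On Linux, CXL-attached memory expanders typically surface as
    NUMA nodes with an empty 'cpulist', i.e. memory with no local CPUs.
    """
    nodes = []
    root = Path(sysfs_root)
    if not root.exists():
        return nodes
    for entry in sorted(root.glob("node[0-9]*")):
        node_id = int(entry.name[len("node"):])
        cpulist = (entry / "cpulist").read_text().strip()
        nodes.append({
            "node": node_id,
            "cpus": cpulist,
            # An empty cpulist marks a memory-only node, the usual
            # appearance of a CXL memory expander.
            "memory_only": cpulist == "",
        })
    return nodes
```

Allocation on such a node would then typically go through standard NUMA policy interfaces (e.g. `numactl` or libnuma), so existing applications can use the expanded capacity without modification.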
The rise of AI and big data has been driving the need for multiple processors to work in parallel on massive volumes of data. CXL is an open, industry-supported interconnect based on the PCI Express (PCIe) 5.0 interface that enables high-speed, low-latency communication between the host processor and devices such as accelerators, memory buffers and smart I/O devices, while expanding memory capacity and bandwidth. Samsung has been working with several data centre, server and chipset manufacturers as part of the CXL Consortium since 2019.
“This is the industry’s first DRAM-based memory solution that runs on the CXL interface, which will play a critical role in serving data-intensive applications including AI and machine learning in data centers as well as cloud environments,” said Cheolmin Park, vice president of the Memory Product Planning Team at Samsung Electronics. “Samsung will continue to raise the bar with memory interface innovation and capacity scaling to help our customers, and the industry at-large, better manage the demands of larger, more complex, real-time workloads that are key to AI and the data centres of tomorrow.”
“Data centre architecture is rapidly evolving to support the growing demand and workloads for AI and ML, and CXL memory is expected to expand the use of memory to a new level,” said Dr. Debendra Das Sharma, Intel Fellow and Director of I/O Technology and Standards at Intel.
Dan McNamara, senior vice president and general manager, Server Business Unit, AMD, added, “AMD is committed to driving the next generation of performance in cloud and enterprise computing. Memory research is a critical piece to unlocking this performance, and we are excited to work with Samsung to deliver advanced interconnect technology to our data centre customers.”
Samsung says the module has been validated on next-generation server platforms from Intel. The company is also working with data centre and cloud providers around the world on big data applications including in-memory database systems.