
Chip makers back CXL 3.0 for data centre memory interconnect

Technology News
By Nick Flaherty

The CXL Consortium has released version 3.0 of its Compute Express Link (CXL) specification, doubling the bandwidth of memory systems in data centres, with support from across the industry.

CXL 3.0 uses the latest version of PCI Express, PCIe 6.0, to double the data rate to 64 GT/s with no added latency over CXL 2.0, and adds peer-to-peer memory interconnect.
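The doubling works out directly from the per-lane signalling rate. As a rough sketch (the figures below are general PCIe 6.0 characteristics, not taken from this article): PCIe 6.0 signals at 64 GT/s per lane, and in FLIT mode one transfer carries roughly one bit of payload per lane, so raw link bandwidth before protocol overhead is straightforward arithmetic:

```python
# Back-of-the-envelope bandwidth for a CXL link, assuming one transfer
# carries one bit per lane (approximately true for PCIe 6.0 FLIT mode,
# which drops the older 128b/130b line encoding). Protocol overhead is
# ignored, so real throughput is somewhat lower.

def raw_bandwidth_gbs(gt_per_s: float, lanes: int) -> float:
    """Raw unidirectional bandwidth in GB/s for a link."""
    return gt_per_s * lanes / 8  # 8 bits per byte

# A x16 link at 64 GT/s (CXL 3.0 / PCIe 6.0):
print(raw_bandwidth_gbs(64, 16))  # 128.0 GB/s per direction
# The same link at 32 GT/s (CXL 2.0 / PCIe 5.0) is half that:
print(raw_bandwidth_gbs(32, 16))  # 64.0 GB/s per direction
```

This is why the specification can double bandwidth without a latency penalty: the per-lane symbol rate doubles while the protocol path stays the same.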

Verification IP has been launched by Avery Design Systems, while Synopsys and Cadence Design Systems have controller IP that supports the new specification. This allows a mix of memory to be easily added to data centre servers to support the increased memory requirements of machine learning (ML) and artificial intelligence (AI), as well as a mix of CPUs, GPUs and dedicated AI accelerator chips.

ARM, Intel, Marvell, Rambus and Samsung Electronics are also supporting the technology as well as memory makers SK hynix and Micron and test equipment maker Teledyne LeCroy.

“CXL 3.0 is a significant step forward in enabling heterogeneous computing,” said Kevin Krewell, principal analyst, TIRIAS Research. “With its expanded features for coherent memory sharing and new fabric capabilities, CXL 3.0 adds new levels of flexibility and composability required by present day and future data centres. The CXL Consortium has made exceptionally fast progress in delivering this important spec to the industry.”

The release of version 3.0 follows significant consolidation of similar specifications from OpenCAPI and Gen-Z.

The CXL Consortium yesterday signed a deal to take over the development of OpenCAPI, a specification that allows any microprocessor to attach to coherent user-level accelerators and advanced memories, and is agnostic to the processor architecture. The OpenCAPI Consortium (OCC) also developed the Open Memory Interface (OMI), a serial-attached near-memory interface that provides low-latency, high-bandwidth connections for main memory, similar to CXL.

Earlier this year the Gen-Z Consortium also transferred its specifications and assets to the CXL Consortium.

“Standardization remains critical to delivering the compute required for increasingly complex datacentre workloads. ARM’s collaboration with consortium members on the CXL 3.0 specification will enable more performance and flexibility, allowing customers to build and deploy scalable, heterogeneous systems that best match their needs,” said Andy Rose, chief system architect and fellow, Architecture and Technology Group at ARM in Cambridge, UK.

“The CXL Consortium has introduced the CXL 3.0 specification to meet the needs of increasingly compute-intensive AI and ML applications. Cadence has been an active member of the CXL Consortium, demonstrating the industry’s first complete CXL IP solution in silicon. Cadence IP implements the CXL 3.0 features enabling 64GT/s transfers at low latency with the latest IDE specification for security while maintaining backward compatibility. This enables customers to build robust, high-performance solutions while lowering risk and reducing development cost,” said Rishi Chugh, vice president, product marketing and management, IP Group at Cadence.

“The CXL 3.0 specification’s new capabilities address data-intensive workloads in high-performance computing applications that require greater bandwidth, scalability and security. As an active contributor of the CXL Consortium, Synopsys is already enabling leading customers to integrate the standards-compliant Synopsys CXL 3.0 PHY, controller, IDE security module, and verification IP, helping them get an early start on their advanced chip designs,” said John Koeter, senior vice president of Marketing and Strategy, Synopsys Solutions Group.

“Modern datacentres require heterogenous and composable architectures to support compute intensive workloads for applications such as Artificial Intelligence and Machine Learning – and we continue to evolve CXL technology to meet industry requirements,” said Siamak Tavallaei, president of the CXL Consortium. “Developed by our dedicated technical workgroup members, the CXL 3.0 specification will enable new usage models in composable disaggregated infrastructure.”

“Intel is excited to see the CXL Consortium launching the CXL 3.0 Specification, which will open many new innovative usages around disaggregated compute architectures with pooled and shared resources such as memory and accelerators at the Rack level and beyond. CXL has become the focal point for the advancement of high-speed IO with coherency and memory enhancements and Intel remains committed to working with the industry to continue to drive the CXL technology through the CXL Consortium,” said Dr. Debendra Das Sharma, senior fellow, and co-GM of Memory and I/O Technologies at Intel and CXL Consortium Technical Task Force Co-Chair.

“CXL 3.0 will play a significant role in delivering on the promise of fully composable infrastructure for the cloud, bringing increased memory performance and optimal resource utilization to next-generation data centers. CXL is an integral component to Marvell’s industry-leading cloud portfolio which spans compute, electro-optics, memory, networking, security and storage. We applaud the CXL Consortium on this new specification and the ripple-effect it will have in advancing industry innovation,” said Shalesh Thusoo, VP of CXL Product Development at Marvell.

CXL takes over OpenCAPI

“We are pleased to see the industry coming together around one organization driving open innovation and leveraging the value OpenCAPI and Open Memory Interface provide for coherent interconnects and low latency, near memory interfaces. We expect this will yield the best business results for the industry as a whole and for the members of the consortia,” said Bob Szabo, OpenCAPI Consortium President.

“We are excited about this opportunity to focus the industry on specifications residing under one organization moving forward. This is the right time for our mutual members to work together to advance a standard high-speed coherent interconnect/fabric for the benefit of the industry. Assignment of OCC assets will allow the CXL Consortium to freely utilize what OCC has already developed with OpenCAPI/OMI,” said Siamak Tavallaei, CXL Consortium President.

www.computeexpresslink.org/; opencapi.org


Other articles on eeNews Europe 
