Oracle and AMD are expanding their long-running partnership with a next-generation AI supercluster designed to deliver massive scalability and performance for artificial intelligence workloads. Beginning in Q3 2026, Oracle Cloud Infrastructure (OCI) will become the first hyperscaler to publicly offer an AI computing cluster powered by 50,000 AMD Instinct MI450 Series GPUs, with expansion planned into 2027.
This announcement marks another major move in the competitive cloud AI infrastructure space, highlighting how chipmakers and hyperscalers are teaming up to meet unprecedented demand for compute power. The partnership also underscores the growing importance of open standards, energy-efficient designs, and scalable architectures for Europe’s data center and AI ecosystem.
Scaling for the next wave of AI
AI workloads are growing beyond the limits of today’s clusters, demanding flexible, high-performance compute infrastructure. Oracle’s upcoming AI supercluster will use AMD’s new “Helios” rack design, integrating the Instinct MI450 Series GPUs, next-generation EPYC CPUs (codenamed Venice), and Pensando networking (codenamed Vulcano). This vertically optimized setup aims to deliver extreme performance and scalability while minimizing power use.
“Our customers are building some of the world’s most ambitious AI applications, and that requires robust, scalable, and high-performance infrastructure,” said Mahesh Thiagarajan, executive vice president of Oracle Cloud Infrastructure. “By bringing together the latest AMD processor innovations with OCI’s secure, flexible platform and advanced networking powered by Oracle Acceleron, customers can push the boundaries with confidence.”
Forrest Norrod, executive vice president and general manager of AMD’s Data Center Solutions Business Group, added: “AMD and Oracle continue to set the pace for AI innovation in the cloud. With our AMD Instinct GPUs, EPYC CPUs, and advanced AMD Pensando networking, Oracle customers gain powerful new capabilities for training, fine-tuning, and deploying the next generation of AI.”
Inside the Helios architecture
The new MI450 Series GPUs will feature up to 432 GB of HBM4 memory and 20 TB/s of bandwidth — enabling training of models 50% larger than those handled by previous generations. The dense, liquid-cooled Helios racks will house 72 GPUs each and integrate UALink and UALoE open interconnects to reduce latency and streamline communication between accelerators.
Each GPU can connect through up to three 800 Gbps Pensando “Vulcano” AI-NICs, providing high-speed, lossless networking aligned with emerging Ultra Ethernet Consortium (UEC) standards. The architecture will also include next-gen EPYC CPUs supporting confidential computing and enhanced data security.
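To put the per-rack figures above in perspective, a simple back-of-envelope calculation multiplies them out. The inputs are the numbers quoted in this article (72 GPUs per rack, up to 432 GB of HBM4 and 20 TB/s per GPU, up to three 800 Gbps NICs per GPU); the aggregate totals are illustrative products of those figures, not specifications published by AMD or Oracle.

```python
# Illustrative rack-level aggregates derived from the figures quoted above.
# Inputs come from the article; the totals are simple products, not
# AMD- or Oracle-published specifications.

GPUS_PER_RACK = 72          # Helios rack density
HBM4_PER_GPU_GB = 432       # up to 432 GB HBM4 per MI450 GPU
HBM4_BW_PER_GPU_TBS = 20    # 20 TB/s memory bandwidth per GPU
NICS_PER_GPU = 3            # up to three Pensando "Vulcano" AI-NICs
NIC_SPEED_GBPS = 800        # 800 Gbps per NIC

rack_hbm4_tb = GPUS_PER_RACK * HBM4_PER_GPU_GB / 1000            # ~31.1 TB
rack_hbm4_bw_tbs = GPUS_PER_RACK * HBM4_BW_PER_GPU_TBS           # 1440 TB/s
rack_nic_bw_tbps = GPUS_PER_RACK * NICS_PER_GPU * NIC_SPEED_GBPS / 1000  # 172.8 Tbps

print(f"HBM4 per rack:         {rack_hbm4_tb:.1f} TB")
print(f"Memory BW per rack:    {rack_hbm4_bw_tbs} TB/s")
print(f"Scale-out BW per rack: {rack_nic_bw_tbps:.1f} Tbps")
```

At these maximums, a single Helios rack would hold roughly 31 TB of HBM4 and present about 173 Tbps of scale-out network bandwidth, which is the scale at which the UALink/UALoE and Ultra Ethernet choices become material.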
Expanding OCI’s AI portfolio
Alongside the MI450 rollout, Oracle also announced the general availability of OCI Compute instances using AMD Instinct MI355X GPUs, available on its zettascale Supercluster platform scaling to 131,072 GPUs. These new shapes aim to give enterprises and research organizations flexible, open-source-compatible solutions for training, inference, and high-performance computing workloads at massive scale.
With this latest expansion, Oracle and AMD are betting big on an open, energy-efficient future for large-scale AI infrastructure — one designed to handle the trillion-parameter models of tomorrow.