OpenAI adds AI capacity in the Oracle Cloud
Key Points
- OpenAI, Microsoft, and Oracle have partnered to extend the Azure AI platform to Oracle Cloud Infrastructure (OCI), providing OpenAI with additional capacity for AI development and deployment.
- According to Oracle, the OCI Supercluster can scale up to 64,000 NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips for training large language models, connected by an ultra-low latency cluster network.
- OpenAI emphasized that its strategic cloud relationship with Microsoft remains unchanged and that all pre-training of its frontier models will continue to take place on supercomputers built jointly with Microsoft. The Oracle partnership primarily adds capacity for inference and other workloads.
OpenAI is partnering with Microsoft and Oracle to extend the Azure AI platform to Oracle Cloud Infrastructure (OCI), providing additional capacity for AI development and deployment.
This move aims to help OpenAI keep up with the growing demand for its generative AI services like ChatGPT, which boasts over 100 million monthly users.
OpenAI CEO Sam Altman stated, "OCI will extend Azure's platform and enable OpenAI to continue to scale."
The OCI Supercluster can scale up to 64,000 NVIDIA Blackwell GPUs or GB200 Grace Blackwell Superchips connected by an ultra-low latency cluster network and a choice of HPC storage for training large language models (LLMs), according to Oracle.
OpenAI's frontier models continue to be trained in the Microsoft cloud
Following Oracle's announcement, OpenAI clarified that its strategic cloud relationship with Microsoft remains unchanged. The partnership with OCI allows OpenAI to use the Azure AI platform on OCI infrastructure for inference and "other needs," while all pre-training of OpenAI's frontier models will continue on supercomputers built in partnership with Microsoft.
Microsoft is OpenAI's most significant infrastructure partner and investor, having supported the AI pioneer with billions of dollars. In return, Microsoft gains access to OpenAI technologies, which it integrates into its products like Copilot.
Training these massive AI models requires the specialized high-performance infrastructure of Microsoft's customized supercomputers, and Microsoft likely wants to maintain control over this crucial part of the value chain.