What the OpenAI Deal Means for AMD in AI Factory Compute

Earlier this year, when the Stargate announcement at the White House kicked off a wave of eye-popping, hundreds-of-billions-of-dollars data center investments, we might have been justified in assuming that it would be the tech story of the year.

Then just a few weeks ago, NVIDIA stunned the technology world by investing $5 billion in Intel, buoying the former chip industry leader with a financial shot in the arm from a company that has played a major role in diminishing Intel's market position over the last few years. Intel couldn't ask for a better partner than NVIDIA for aggressively entering the AI data center server chip market. So as of a week ago, we might have guessed NVIDIA-Intel would be the tech story of the year.

But now we have a deal valued at more than $95 billion between ChatGPT monster OpenAI and AMD, which has been seeking a means of competing with NVIDIA in the GPU market. Yes, there was a deal between AMD and Oracle, announced in June, calling for Oracle to deploy 130,000 AMD MI355X GPUs on the Oracle Cloud Infrastructure, and in March, Oracle during an earnings call discussed intentions to deploy 30,000 AMD MI355X processors. And then there are AMD GPUs driving two of the first three U.S.-based exascale supercomputers, currently ranked nos. 1 and 2 on the TOP500 list of the most powerful HPC systems.

But this is much bigger and gives AMD added credibility in the AI compute market, which NVIDIA dominates with more than a 90 percent share.

Of all the days since the last “AMD winter” (i.e., sometime before Lisa Su took over as CEO in 2014), yesterday is among the most memorable and, the company believes, significant (perhaps it’s time to put to rest recollections of the old boom-and-bust AMD, which has executed admirably for the past decade).

The AMD-OpenAI deal calls for OpenAI to utilize 6 gigawatts of computing capacity built on AMD chips over the next several years. The chip in the spotlight is AMD's upcoming MI450 rack-scale AI processor, scheduled to ship in the second half of 2026. It will go up against the next-gen Vera Rubin, which NVIDIA has said will deliver 3x the performance of the company's current flagship Blackwell chips. Vera Rubin also is scheduled to ship in Q3 or Q4 of next year.

So it may be that NVIDIA finds itself, for the first time, in a genuine GPU horse race – assuming both companies deliver their next-gen chips on schedule.

Lisa Su

As chip industry analyst Dr. Ian Cutress explains it, “Built on the CDNA4 architecture, MI450 is expected to be the company’s first product line optimized for true rack-scale deployment, integrating compute, interconnect, and memory bandwidth at system level rather than device level. AMD’s Helios platform, its new reference architecture for large AI clusters, will combine MI450 GPUs with Zen 6-based CPUs, high-bandwidth memory, and advanced interconnect fabric into pre-engineered rack-scale systems designed to compete directly with NVIDIA’s HGX and DGX SuperPODs.”

In short, AMD is entering rarefied company in AI compute, and the competition it faces is reminiscent of AMD taking on Intel's seemingly impregnable lead in CPUs eight years ago.

A key to the potential rack-scale prowess of the MI450 is AMD's purchase in August 2024 of ZT Systems, a server and data center infrastructure manufacturer, for nearly $5 billion, according to an article in CRN, which quoted Su last June saying "the acquisition would give it 'world-class systems design and rack-scale solutions expertise' to 'significantly strengthen our data center AI systems and customer enablement capabilities.'"

One might reasonably ask why OpenAI chose to partner with AMD rather than NVIDIA. The answer is that OpenAI is GPU-agnostic; it needs a multi-vendor supply chain to meet its massive demand for AI compute and, of course, it already has a partnership with NVIDIA, announced late last month, one that's if anything bigger than the AMD agreement.

Under the NVIDIA deal, OpenAI will deploy at least 10 gigawatts of AI data centers with NVIDIA systems representing millions of GPUs for OpenAI's next-generation AI infrastructure. For its part, NVIDIA will invest up to $100 billion in OpenAI progressively as each gigawatt is deployed. And those NVIDIA-based systems are scheduled to roll out at the same time as OpenAI's AMD-based deployments.

To realize the financial rewards of the OpenAI deal, AMD now must execute at an extremely high level. As Cutress said, “Delivering six gigawatts of data center GPU capacity requires consistent foundry allocation, substrate availability, and high-bandwidth memory supply over multiple years. This was … (when) … AMD cited its multi-year investment in supply chain stability. Any disruption in wafer starts or packaging capacity, particularly as AMD competes with NVIDIA and others for TSMC’s CoWoS capacity, could delay fulfillment.”

But then, AMD has executed consistently well since its bad old days ended more than a decade ago, so long ago they are fading from memory.