Speaking Tuesday at the UBS Global Technology and AI Conference in Scottsdale, Nvidia EVP and CFO Colette Kress told investors that the much-hyped OpenAI partnership is still at the letter-of-intent stage.
“We still haven’t completed a definitive agreement,” Kress said when asked how much of the 10-gigawatt commitment is actually locked in.
Kress’s comments suggest something more tentative than the deal’s blockbuster framing, even months after the framework agreement was announced.
In a lengthy “Risk Factors” section of its latest filing, Nvidia spells out the fragile architecture underpinning megadeals like this one. The company stresses that the AI demand story is only as real as the world’s ability to build and power the data centers required to run its systems. Nvidia must order GPUs, high-bandwidth memory (HBM), networking gear, and other components more than a year in advance, often via non-cancelable, prepaid contracts. If customers scale back, delay financing, or change direction, Nvidia warns it may end up with “excess inventory,” “cancellation penalties,” or “inventory provisions or impairments.” Past mismatches between supply and demand have “significantly harmed our financial results,” the filing notes.
The biggest swing factor seems to be the physical world: Nvidia says the availability of “data center capacity, energy, and capital” is critical for customers to deploy the AI systems they’ve verbally committed to. Power build-out is described as a “multiyear process” that faces “regulatory, technical, and construction challenges.” If customers can’t secure enough electricity or financing, Nvidia warns, it could “delay customer deployments or reduce the scale” of AI adoption.
Nvidia also admits that its own pace of innovation makes planning harder. It has moved to an annual cadence of new architectures—Hopper, Blackwell, Vera Rubin—while still supporting prior generations. It notes that a faster architecture pace “may magnify the challenges” of predicting demand and can lead to “reduced demand for current generation” products.
The company also pointed explicitly to past boom-and-bust cycles tied to “trendy” use cases like crypto mining, warning that new AI workloads could create similar spikes and crashes that are hard to forecast and can flood the gray market with secondhand GPUs.
Despite the absence of a signed deal, Kress stressed that Nvidia’s relationship with OpenAI remains “a very strong partnership” stretching back more than a decade. OpenAI, she said, considers Nvidia its “preferred partner” for compute. But she added that Nvidia’s current sales outlook does not rely on the new megadeal.
OpenAI “does want to go direct,” Kress said. “But again, we’re still working on a definitive agreement.”
On competitive dynamics, Kress was unequivocal. Markets have lately been cheering Google’s TPU, a more specialized chip that handles a narrower range of workloads than the GPU but draws less power, as a potential competitor to Nvidia. Asked whether such application-specific chips, known as ASICs, are narrowing Nvidia’s lead, she responded: “Absolutely not.”
“Our focus right now is helping all different model builders, but also helping so many enterprises with a full stack,” she said. Nvidia’s defensive moat, she argued, isn’t any individual chip but the entire platform: hardware, CUDA, and a constantly expanding library of industry-specific software. That stack, she said, is why older architectures remain heavily used even as Blackwell becomes the new standard.
“Everybody is on our platform,” Kress said. “All models are on our platform, both in the cloud as well as on-prem.”



