By customizing the processors, OpenAI said it will be able to embed what it has learned from developing AI models and services “directly into the hardware, unlocking new levels of capability and intelligence.” The hardware rollout should be completed by the end of 2029, according to the companies.
For Broadcom, the move provides deeper access to the booming AI market. Monday’s agreement confirms an arrangement that Broadcom Chief Executive Officer Hock Tan had hinted at during an earnings conference call last month.
Investors sent Broadcom shares up as much as 11% on Monday, betting that the OpenAI alliance will generate hundreds of billions of dollars in new revenue for the chipmaker. But the details of how OpenAI will pay for the equipment aren’t spelled out. While the AI startup has shown it can easily raise funding from investors, it’s burning through wads of cash and doesn’t expect to be cash-flow positive until around the end of this decade.
As AI and cloud companies announce large projects every few days, it’s often not clear how the efforts are being financed. The interlocking deals also have boosted fears of a bubble in AI spending, particularly as many of these partnerships involve OpenAI, a fast-growing but unprofitable business.
While purchasing chips from others, OpenAI has also been working on designing its own semiconductors. They’re mainly intended to handle the inference stage of running AI models — the phase after the technology is trained.
There’s no investment or stock component to the Broadcom deal, OpenAI said, making it different from the agreements with Nvidia and AMD. An OpenAI spokesperson declined to comment on how the company will finance the chips, but the underlying idea is that more computing power will let the company sell more services.
In announcing the agreement, OpenAI CEO Sam Altman said that his company has been working with Broadcom for 18 months.
The startup is rethinking technology starting with the transistors and going all the way up to what happens when someone asks ChatGPT a question, he said on a podcast released by his company. “By being able to optimize across that entire stack, we can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models.”
When Tan referred to the agreement last month, he didn’t name the customer, though people familiar with the matter identified it as OpenAI.
“If you do your own chips, you control your destiny,” Tan said in the podcast Monday.
By tapping Broadcom’s networking technology, OpenAI is hedging its bets. Broadcom’s Ethernet-based options compete with Nvidia’s proprietary technology. OpenAI also will be designing its own gear as part of its work on custom hardware, the startup said.
Broadcom won’t be providing the data center capacity itself. Instead, it will deploy server racks with custom hardware to facilities run by either OpenAI or its cloud-computing partners.
A single gigawatt is about the capacity of a conventional nuclear power plant. Still, 10 GW of computing power alone isn’t enough to support OpenAI’s vision of achieving artificial general intelligence, said OpenAI co-founder and President Greg Brockman.
“That is a drop in the bucket compared to where we need to go,” he said.
Getting to the level under discussion isn’t going to happen quickly, said Charlie Kawwas, president of Broadcom’s semiconductor solutions group. “Take railroads — it took about a century to roll it out as critical infrastructure. If you take the internet, it took about 30 years,” he said. “This is not going to take five years.”