The first was the announcement of a major new strategic partnership between OpenAI and chipmaker AMD. The deal will see AMD supply its Instinct MI450 graphics processing units (GPUs) to OpenAI’s data centers beginning in the second half of 2026. OpenAI will use these chips primarily for inference—that is, running its AI products such as ChatGPT, Sora, and its API, rather than for training new models.
While part of the logic of the AMD deal is to ensure OpenAI is not completely beholden to Nvidia, much of it is simply about securing enough computing power to serve OpenAI’s existing user base as it consumes more and more tokens, thanks to features like the ability to use apps directly within ChatGPT. Altman went out of his way on social media to explain that the AMD partnership was “incremental to our work with Nvidia” and that “the world needs much more compute.”
Of course, not every telecom operator came out of that decade well—there were several bankruptcies, and the debt needed to build all that 4G infrastructure was a major driver of widespread consolidation in the industry. Meanwhile, much of the value of the 4G shift accrued to the social media companies and apps, not to the telecom providers, which emerged from the infrastructure boom as arguably less healthy businesses than they had been going in: the average debt-to-equity ratio in the industry more than doubled, for instance.
But most companies survived, and the infrastructure did get used. It wasn’t Tulip Mania. We’ll see what happens this time.
With that, here’s more AI news.