The battle over AI inference is, of course, a battle over economics. Once a model is trained, every useful thing it does—answering a query, generating code, recommending a product, summarizing a document, powering a chatbot, or analyzing an image—happens during inference. That’s the moment AI turns from a sunk cost into a revenue-generating service, with all the accompanying pressure to cut costs, shrink latency (how long you wait for an answer), and improve efficiency.
That pressure is exactly why inference has become the industry’s next battleground for potential profits—and why Nvidia, in a deal announced just before the Christmas holiday, licensed technology from Groq, a startup building chips designed specifically for fast, low-latency AI inference, and hired most of its team, including founder and CEO Jonathan Ross.
“People think that inference is one shot, and therefore it’s easy. Anybody could approach the market that way,” Huang said. “But it turns out to be the hardest of all, because thinking, as it turns out, is quite hard.”
Nvidia’s move on Groq underscores that belief, and signals that even the company that dominates AI training is hedging on how the economics of inference will ultimately shake out.
“That’s the part that most people haven’t completely internalized,” Huang said. “This is the industry we were talking about. This is the industrial revolution.”
Huang’s confidence helps explain why Nvidia is willing to hedge aggressively on how inference will be delivered, even as the underlying economics remain unsettled.
Freund said Nvidia’s Groq deal could lift the entire category. “I’m sure D-Matrix is a pretty happy startup right now, because I suspect their next round will go at a much higher valuation thanks to the [Nvidia-Groq deal],” he said.
Other industry executives say the economics of AI inference are shifting as AI moves beyond chatbots into real-time systems like robots, drones, and security tools. Those systems can’t afford the delays that come with sending data back and forth to the cloud, or the risk that computing power won’t always be available. Instead, they favor specialized chips like Groq’s over centralized clusters of GPUs.
Behnam Bastani, founder and CEO of OpenInfer, which focuses on running AI inference close to where data is generated—such as on devices, sensors, or local servers rather than distant cloud data centers—said his startup is targeting these kinds of applications at the “edge.”
The inference market, he emphasized, is still nascent, and the Groq deal is Nvidia’s bid to corner it. Rather than betting on a single architecture, he said, Nvidia is trying to position itself as the company that spans the entire inference hardware stack.
“It positions Nvidia as a bigger umbrella,” he said.