The AI industry is on a power trip, literally, and it's getting desperate. Data centers already account for roughly 4% of U.S. electricity use, a share expected to more than double by 2030 as training and running AI models increasingly requires gigawatts of power. Analysts project that global data-center power demand could rise as much as 165% by the end of the decade, even as new generation and transmission infrastructure lags years behind need. In response, hyperscalers are scrambling: cutting deals to build their own gas plants, exploring small nuclear reactors, and searching for power wherever they can find it.
Against that backdrop, it’s not surprising that some of the industry’s biggest players are starting to look to outer space for a solution.
However, while Musk and other bulls argue that space-based AI computing could become cost-effective relatively quickly, many experts say anything approaching meaningful scale remains decades away. Constraints around power generation, heat dissipation, launch logistics, and cost still make it impractical—and for now, the overwhelming share of AI investment continues to flow into terrestrial infrastructure. Small-scale pilots of orbital computing may be feasible in the next few years, they argue, but space remains a poor substitute for Earth-based data centers for the foreseeable future.
The problem is that everything else, from building massive solar arrays to lowering launch costs, moves far more slowly than today's AI hype cycle. Still, Thornburg said, the energy pressures driving interest in orbital data centers are unlikely to disappear in the long run. "Engineers will find ways to make this work," he said. "Long term, it's just a matter of how long is it going to take us."
With that, here’s more AI news.
Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman



