The secret wasn’t more power plants; it was smarter design, and specifically energy efficiency. Now we’re about to watch that story unfold again, with AI in the starring role.
Energy-efficiency innovations are uniquely powerful at fueling growth because their benefits apply to both existing and future systems, lowering current and future energy demand in one stroke.
AI efficiency innovations are happening on three fronts: chips, connections, and AI architecture itself.
Another front where innovation will reduce AI’s energy needs is the connections between chips. Even as transistors have gotten smaller and allowed a given space to pack more “punch,” chips are only as fast as the connections between them. Today’s most advanced chip circuitry still relies on copper-based electrical wiring, which means GPUs running AI workloads can spend more than half their time sitting idle, waiting for data to arrive. That is why researchers are replacing those copper links with optical ones, such as co-packaged optics, which move data with light instead of electricity.
Lastly, there are exciting opportunities to redesign AI itself, often spurred forward by open-source AI communities. Techniques like “knowledge distillation” let us create sleeker, more efficient AI models by having them learn from larger ones. Think of it as passing down wisdom through generations. Low-rank adaptation (LoRA) lets us fine-tune massive models by training only a small set of added parameters, turning general-purpose LLMs into specialized models without the energy cost of retraining from scratch.
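For readers who want to see what that looks like in practice, here is a minimal, purely illustrative sketch of the LoRA idea in PyTorch. It is not drawn from any particular product, and the layer sizes, rank, and scaling factor below are hypothetical; the point is simply that the large pretrained weights stay frozen while only two small matrices are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (illustrative only)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the big pretrained weights stay fixed
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)   # small "down" projection
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)  # small "up" projection
        nn.init.zeros_(self.lora_b.weight)   # start as a no-op, so training begins from the base model
        self.scale = alpha / rank

    def forward(self, x):
        # Base output plus the low-rank correction; only lora_a and lora_b receive gradients.
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Hypothetical usage: wrap one projection of a model and fine-tune only the small matrices.
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable:,} of {total:,}")  # a tiny fraction of the full layer
```

With these made-up dimensions, the trainable update is well under 1% of the layer’s parameters, which is why fine-tuning this way costs so much less energy than retraining the whole model.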
Perhaps the most elegant solution is the mixture-of-experts approach. Instead of activating one giant model for every request, it routes each input to a small set of specialized “expert” sub-networks and leaves the rest of the model idle. It’s the difference between powering up an entire office building versus just lighting the room you need.
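Again as a purely illustrative sketch, with all sizes and expert counts invented, a toy mixture-of-experts layer looks like this: a small gating network scores the experts for each input, and only the top-scoring ones actually run.

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Toy mixture-of-experts layer: route each input to its top-k experts (illustrative only)."""
    def __init__(self, dim: int = 64, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)   # the "router" that scores experts per input
        self.top_k = top_k

    def forward(self, x):                          # x: (batch, dim)
        scores = self.gate(x)                      # how relevant each expert is to each input
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run; the rest stay dark, which is where the savings come from.
        for slot in range(self.top_k):
            for expert_id in range(len(self.experts)):
                mask = indices[:, slot] == expert_id
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * self.experts[expert_id](x[mask])
        return out

moe = TinyMoE()
print(moe(torch.randn(4, 64)).shape)  # torch.Size([4, 64])
```

Production systems route far more cleverly than this toy, but the energy logic is the same: most of the model’s parameters sit unused on any given request.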
These are just a handful of the innovations underway to make AI more efficient, and they are not around-the-edges improvements.
Take co-packaged optics alone, which can cut the energy used to train a large language model by roughly 80%, a saving equal to running two small data centers for an entire year. Now take several innovations at once, across chips, connections, and the models themselves, and introduce them throughout the world: you can imagine how the savings might stack up to the output of not just Three Mile Island but many nuclear power plants, at a fraction of the cost or risk.
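To show why savings from independent sources stack the way they do, here is a back-of-the-envelope sketch. Every percentage in it is invented for illustration; the only idea taken from the argument above is that gains from different layers multiply rather than merely add.

```python
# Hypothetical illustration of how independent efficiency gains compound.
# None of these percentages is a real measurement; they are placeholders.
gains = {
    "more efficient chips": 0.30,        # assumed 30% saving
    "optical interconnects": 0.50,       # assumed 50% saving
    "leaner model architectures": 0.40,  # assumed 40% saving
}

remaining = 1.0
for source, saving in gains.items():
    remaining *= 1.0 - saving
    print(f"after {source:>28}: {remaining:.0%} of baseline energy")

# With these made-up figures the stack uses ~21% of the baseline,
# i.e. an overall saving of roughly 79%.
```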
The last year has been one of AI excitement, adoption, and, yes, massive costs. But foundation models are like reusable rockets. The upfront costs of research, engineering, and more can be staggering, but every additional use of the model amortizes those costs across one more outcome. And foundation models are a lot more reusable than rockets.
Raising a flag over AI’s energy use makes sense. It identifies an important challenge and can help rally us toward a collective solution. But we should balance the weight of the challenge with the incredible, rapid innovation that is happening.
For businesses, the flag should have two words written on it: Be intentional! At every layer of the AI stack. Companies are already moving toward smaller, cheaper, task-specific models, and as these innovations are commercialized, costs and energy use will fall even further.
We should remember what happened with the earlier cycle of computing and energy use—and lend all our support to repeating it.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.