In the 1990s, the expansion of distributed computing power and vast corporate purchases of it led to new claims that massive increases in productivity would soon be unleashed. Except they didn't materialize. It took a long time, and associated changes in how work was organized, to drive productivity improvements.
Experts have a long history of torturing us with predictions about how technology will wipe us out, first our jobs and then just getting rid of us altogether because humans are a bother. The AI panic around Large Language Models over the last three years is no exception.
We are back in panic mode in 2026, brought on by new claims about the dangers of AI, even though we don't yet see evidence of these changes.
Do you see a pattern here? Scientists and developers are rightly excited about a new innovation, and they are happy to imagine out loud how the new tools could be used. Then vendors rise up to sell those tools, pushing the claims hard. This is the beginning of the hype cycle. They aren't thinking about whether those uses would be practical: What will it cost? What other changes are required for it to work? Does anyone need the tools in the first place?
First, it is expensive to introduce. The LLM companies are not in the business of giving these tools away, and the really good ones cost a lot to use. The bet that they will inevitably get cheaper is not obvious. While there are tons of vendors offering LLM tools, they are almost all built on core LLM technology from six vendors who already control almost 80% of the market. Computer time is not getting that much cheaper and the electricity to power it is jumping in price.
But the biggest cost is the time and energy needed to configure them in your own organization and keep them up to date. Most of those costs need to be front-loaded. We still need some human back-up to solve the problems that the LLMs can’t, and productivity improvements that could lead to fewer workers come much later. Selling an expensive, front-loaded project with substantial and continuing IT costs to a CFO looking for a return on investment is difficult when the benefits are uncertain and only show up years later.
Second, related to the ROI challenge, there is the misplaced focus on eliminating low-skill work. Two lessons here. The first is that we don't save much money by cutting a bunch of minimum-wage jobs, especially when we still need employees to monitor and troubleshoot the AI tools. The second is that simple white-collar jobs are simple because they don't require much judgment and tend to be binary: identify which form this is and put it in the right pile. But they have to be right every time. Those are perfect tasks for machine learning, but machine learning is also a lot more expensive than using LLMs because it has to be built for each task and monitored and adjusted almost constantly.
Third, LLMs can take over tasks in more complicated jobs where the output just has to be reasonably good, not perfect. They are cheaper to use than machine learning, but they still require monitoring and checking. A typical human job includes a large number of discrete and complicated tasks that cannot be automated, or at least not yet.
LLMs can really help with programming tasks, for example, but computer programmers spend as much as 70% of their time on tasks other than programming, which mainly involve dealing with other employees. If, say, LLMs can take over the 20% of the time that school principals spend preparing reports, we can’t cut 20% of each principal. But we can have them do something new.
The real benefit of LLMs, I believe, won't come from cost savings; rather, it will come from allowing us to do new things we haven't thought of yet. For an analogy, look back at the introduction of search engines, which massively cut the time needed to do research and get answers. I've never heard it claimed that search engines caused massive job losses. Instead, they created new businesses, new ways of working, and new jobs. Most businesses, for example, are awash in data that has been too difficult to organize for them to even look at. If Anthropic's latest Claude tool can do as much with analysis as is claimed, it could spend a few years just making sense of all that data.
Maybe we should stop fixating on what AI is cutting (headcount reduction) and focus instead on what it is growing: all the new products and new solutions that AI may let us do.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.