Hello and welcome to Eye on AI. In this edition…the U.S. Census Bureau finds AI adoption declining…Anthropic reaches a landmark copyright settlement, but the judge isn’t happy…OpenAI is burning piles of cash, building its own chips, producing a Hollywood movie, and scrambling to save its corporate restructuring plans…OpenAI researchers find ways to tame hallucinations…and why teachers are failing the AI test.
This might be a blip. The Census Bureau also asks another question about AI adoption, querying businesses on whether they anticipate using AI to produce goods or services in the next six months. And here, the data don’t show a dip—although the percentage answering “yes” seems to have plateaued at a level below what it was back in late 2023 and early 2024.
As I argue in the piece, many of the factors that contributed to previous AI winters are present today. The past hype cycle perhaps most similar to the current one took place in the 1980s around “expert systems”—though those were built on a very different kind of AI technology from today’s models. What’s most strikingly similar is that Fortune 500 companies were excited about expert systems and spent big money to adopt them, and some found huge productivity gains from using them. But ultimately many grew frustrated with how expensive and difficult this kind of AI was to build and maintain—and with how easily it could fail in real-world situations that humans handled with ease.
The situation is not that different today. Integrating LLMs into enterprise workflows is difficult and potentially expensive. AI models don’t come with instruction manuals, and integrating them into corporate workflows—or building entirely new ones around them—requires a ton of work. Some companies are figuring it out and seeing real value. But many are struggling.
And just like the expert systems of that era, today’s AI models are often unreliable in real-world situations—although for different reasons. Expert systems tended to fail because they were too inflexible to deal with the messiness of the world. In many ways, today’s LLMs are far too flexible—inventing information or taking unexpected shortcuts. (OpenAI researchers just published a paper on how they think some of these problems can be solved—see the Eye on AI Research section below.)
Some are starting to suggest that the solution may lie in neurosymbolic systems, hybrids that try to integrate the best features of neural networks, like LLMs, with those of rules-based, symbolic AI, similar to the 1980s expert systems. It’s just one of several alternative approaches to AI that may start to gain traction if the hype around LLMs dissipates. In the long run, that might be a good thing. But in the near term, it might be a cold, cold winter for investors, founders, and researchers.
With that, here’s more AI news.