Lloyd Blankfein spent decades at Goldman Sachs learning how to manage risk at scale. He watched the firm navigate the 1987 crash, the dot-com bust, the 2008 financial crisis, and the post-crisis regulatory overhaul that reshaped Wall Street. So when the Goldman senior chairman and former CEO says something worries him about AI, it’s worth paying attention to what, exactly, that thing is.
It’s not superintelligence or autonomous weapons. It’s a much more mundane — and in some ways more frightening — problem.
Alluding to AI in particular but technological advancement in general, he said, “everything is whirring behind the scenes,” and you don’t really get a close look at the thought process of the technology on which you’re relying. “Now you can leave a piece of software, [and it] could go out and do 70,000 transactions,” he said, explaining that when he started on the trading floor, everyone could hear every mistake, and the room would get quiet at the smallest slip-up.
This simple explanation may be the most precise articulation yet of why Wall Street — despite spending billions deploying AI across trading, compliance, and back-office operations — remains deeply reluctant to hand autonomous agents the keys to anything that actually matters.
The financial industry has long understood that speed creates leverage, and leverage cuts both ways. A well-timed trade amplifies gains. A mistaken one — executed at machine speed, across thousands of positions, before a human can intervene — amplifies losses just as fast.
The data bears out Blankfein’s instinct in striking detail. A January 2026 Wakefield Research study found that only 14% of CFOs completely trust AI to deliver accurate accounting data on its own — yet the vast majority of those same firms are already using AI tools. Ninety-seven percent said human oversight remains critical for accuracy, and most had already encountered at least one instance of hallucinated or inaccurate AI output.
Blankfein also offered a pointed observation about how Goldman historically approached system transitions: running legacy and new systems in parallel for years before making a full switch. It’s a discipline, he noted, that most technology companies don’t share — and one increasingly at odds with the “move fast” culture defining the AI deployment wave sweeping through finance.
The implicit warning: the firms most aggressively deploying AI agents are also the least likely to have stress-tested what happens when those agents are wrong.
“We always had to do things twice,” Blankfein said about the old way of working. “We had to run things 50 times and be perfect the last 49 times before we could go that way.” That means it could be a long, long time before AI agents are fully trusted to get it right every time out of the gate.