In a recent interview with the Financial Times, Gary Gensler, chair of the U.S. Securities and Exchange Commission (SEC), expressed significant concerns about the impact of artificial intelligence (AI) on the financial industry. Gensler believes that AI, if left unchecked, could trigger a financial crash within the next decade.
Gensler’s primary focus is on the use of AI models within the banking sector on Wall Street. He has called for stringent regulations to govern the implementation of these models, highlighting the pressing need for oversight.
The financial sector has eagerly embraced AI technologies, with notable examples like Morgan Stanley introducing a chatbot advisor built on OpenAI’s GPT-4. This technological shift has prompted Gensler to raise alarm bells regarding AI’s potential to disrupt financial stability.
Gensler predicts that AI could play a pivotal role in triggering a financial crisis, with a likely timeframe being the late 2020s or early 2030s. He has expressed concerns that excessive reliance on models developed by tech companies could lead to economic turmoil, emphasizing the need for proactive measures to avert such a scenario.
He stated, “I do think we will in the future have a financial crisis . . . [and] in the after-action reports people will say ‘Aha! There was either one data aggregator or one model . . . we’ve relied on.’ Maybe it’s in the mortgage market. Maybe it’s in some sector of the equity market.”
Gensler’s call to action extends to both the regulation of the underlying AI models created by tech companies and how these models are utilized by banks on Wall Street. He characterizes this as a “cross-regulatory challenge” that requires coordinated efforts to address the issue effectively.

“It’s a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it’s just in the nature of what we do,” he explained to the Financial Times. “And this is about a horizontal [matter whereby] many institutions might be relying on the same underlying base model or underlying data aggregator.”
While Wall Street banks have eagerly embraced generative AI technologies such as ChatGPT, they have simultaneously begun to restrict their use. For instance, Goldman Sachs, Deutsche Bank, and Bank of America have prohibited their employees from using chatbots at work, even as they explore the capabilities of this emerging technology.
Notably, Morgan Stanley launched an AI assistant based on OpenAI’s GPT-4 model to give employees faster access to market information. Rival JPMorgan has reportedly filed a patent application for an AI model called ‘IndexGPT’ aimed at helping traders select securities for investment.
As the financial industry continues to grapple with the implications of AI integration, the SEC’s approach to regulating this evolving landscape remains a critical point of discussion. The potential impact of AI on financial markets will shape the industry’s future, and managing this risk has become a priority for regulators. The SEC did not immediately respond when contacted by Insider, likely because the request was made outside of regular working hours.