With one new AI capability after another entering the mainstream, it’s tempting to give each one the same cursory consideration. But some merit more attention than others.
Consider AI deepfakes. Scammers can now use generative-AI tools to create voices, or even live video fakes, that sound or look like specific people—and request money transfers. As such, there’s a “significant” risk of such capabilities “breaking the trust and identity systems upon which our entire economy relies,” said Emily Chiu, CEO of Miami-based fintech startup Novo, at Fortune’s Most Powerful Women summit in Riyadh, Saudi Arabia, last week.
Yet as sophisticated as the AI technology behind such scams is, it’s relatively easy to access and use. In one widely reported 2024 case, a finance employee in Hong Kong was duped into transferring roughly $25 million after joining a video conference call populated by deepfake recreations of senior executives and colleagues. Arup, a U.K. engineering firm, later confirmed that it had been the victim in the attack.
Chiu said the Hong Kong incident shows that “we’re going to run into a world where our ability to really trust and validate what’s real—the system of trust upon which commerce relies, upon which fintech relies—is going to be a real challenge.”
Of course, that presents opportunities for companies that can come up with effective solutions to this problem, “but it’s not a solved situation yet,” Chiu said. “So, it’s something I would be on the lookout for…even if you’re outside of fintech.”