New York Assemblymember Alex Bores, a Democrat now running for Congress in Manhattan’s 12th District, argues that one of the most alarming uses of artificial intelligence—highly realistic deepfakes—is less an unsolvable crisis than a failure to deploy an existing fix.
Rather than training people to spot visual glitches in fake images or audio, Bores said policymakers and the tech industry should lean on a well-established cryptographic approach similar to what made online banking possible in the 1990s. Back then, skeptics doubted consumers would ever trust financial transactions over the internet. The widespread adoption of HTTPS—using digital certificates to verify that a website is authentic—changed that.
“That was a solvable problem,” Bores said. “That basically same technique works for images, video, and for audio.”
“The challenge is the creator has to attach it and so you need to get to a place where that is the default option,” Bores said.
In his view, the goal is a world where most legitimate media carries this kind of provenance data, so that if “you see an image and it doesn’t have that cryptographic proof, you should be skeptical.”
Bores said thanks to the shift from HTTP to HTTPS, consumers now instinctively know to distrust a banking site that lacks a secure connection. “It’d be like going to your banking website and only loading HTTP, right? You would instantly be suspect, but you can still produce the images.”
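Bores doesn’t spell out an implementation, but the approach he describes resembles content-provenance schemes such as C2PA: the creator attaches a cryptographic proof to the media at creation time, and anyone can later verify it. Here is a minimal sketch in Python, using an HMAC as a stand-in for the asymmetric signature and certificate chain a real scheme would use — the key and function names are illustrative, not drawn from any standard:

```python
import hmac
import hashlib

# Illustrative creator key. A real provenance scheme (e.g., C2PA) would use
# an asymmetric keypair with a certificate chain, much like HTTPS, so that
# verifiers never need the creator's secret.
CREATOR_KEY = b"example-secret-key"

def attach_proof(media_bytes: bytes) -> bytes:
    """Compute a cryptographic tag over the media at creation time."""
    return hmac.new(CREATOR_KEY, media_bytes, hashlib.sha256).digest()

def verify_proof(media_bytes: bytes, proof: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(CREATOR_KEY, media_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

image = b"...raw image bytes..."
proof = attach_proof(image)

print(verify_proof(image, proof))                    # authentic media verifies
print(verify_proof(image + b"tampered", proof))      # altered media fails
```

The policy point maps onto the last two lines: media whose proof verifies can be trusted by default, while media lacking a valid proof — like a banking page served over plain HTTP — warrants skepticism.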
On Odd Lots, Bores said cryptographic content authentication should anchor any policy response to deepfakes. But he also stressed that technical labels are only one piece of the puzzle. Laws that explicitly ban harmful uses—such as deepfake child sexual abuse material—are still vital, he said, particularly while Congress has yet to enact comprehensive federal standards.
“AI is already embedded in [voters’] lives,” Bores said, pointing to examples ranging from AI toys aimed at children to bots mimicking human conversation.
You can watch the full Odd Lots interview with Bores below: