Elon Musk has a moonshot vision of life with AI: The technology will take all our jobs, while a “universal high income” will mean anyone can access a theoretical abundance of goods and services. If Musk’s lofty dream ever became reality, it would, of course, force a profound existential reckoning.
But most industry leaders aren’t thinking about the endgame of AI at all, according to Nobel laureate and “godfather of AI” Geoffrey Hinton. When it comes to developing AI, Big Tech is less interested in the technology’s long-term consequences and more concerned with quick results.
“For the owners of the companies, what’s driving the research is short-term profits,” Hinton, a professor emeritus of computer science at the University of Toronto, told Fortune.
And for the developers behind the technology, Hinton said, the focus is similarly on the work immediately in front of them, not on the final outcome of the research itself.
“Researchers are interested in solving problems that [pique] their curiosity. It’s not like we start off with the same goal of, what’s the future of humanity going to be?” Hinton said.
“We have these little goals of, how would you make it? Or, how should you make your computer able to recognize things in images? How would you make a computer able to generate convincing videos?” he added. “That’s really what’s driving the research.”
For Hinton, the dangers of AI fall into two categories: the risk the technology itself poses to the future of humanity, and the consequences of AI being manipulated by people with bad intent.
“There’s a big distinction between two different kinds of risk,” he said. “There’s the risk of bad actors misusing AI, and that’s already here. That’s already happening with things like fake videos and cyberattacks, and may happen very soon with viruses. And that’s very different from the risk of AI itself becoming a bad actor.”
“We’ve identified more than 150 types of deepfake attacks,” he said.
Just as printers added their names to their works after the advent of the printing press hundreds of years ago, media sources will need to find a way to add signatures to their authentic works. But Hinton said such fixes can only go so far.
“That problem can probably be solved, but the solution to that problem doesn’t solve the other problems,” he said.
As for the risk posed by AI itself, Hinton believes tech companies need to fundamentally rethink their relationship to the technology. Once AI achieves superintelligence, he said, it will not only surpass human capabilities but also have a strong drive to survive and to gain additional control. The current framework around AI, which assumes humans can control the technology, will therefore no longer apply.
Invoking ideals of traditional femininity, he said the only example he can cite of a more intelligent being falling under the sway of a less intelligent one is a baby controlling its mother.
“And so I think that’s a better model we could practice with superintelligent AI,” Hinton said. “They will be the mothers, and we will be the babies.”