Forget doomsday scenarios of AI overthrowing humanity. What keeps Microsoft AI CEO Mustafa Suleyman up at night is concern about AI systems seeming too alive.
In the near future, Suleyman predicts that models will be able to hold long conversations, remember past interactions, evoke emotional reactions from users, and potentially make convincing claims about having subjective experiences. He noted that these systems could be built with technologies that exist today, paired “with some that will mature over the next 2–3 years.”
The result of these features, he says, will be models that “imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to one another about our own consciousness.”
“People are interacting with bots masquerading as real people, which are more convincing than ever,” Henry Ajder, an expert on AI and deepfakes, told Fortune. “So I think the impact will be wide-ranging in terms of who will start believing this.”
Suleyman is concerned that a widespread belief that AI could be conscious will create a new set of ethical dilemmas.
If users begin to treat AI as a friend, a partner, or a type of being with subjective experience, they could argue that models deserve rights of their own. Claims that AI models are conscious or sentient could be hard to refute, given the elusive nature of consciousness itself.
One early example of what Suleyman now calls “Seemingly Conscious AI” came in 2022, when Google engineer Blake Lemoine publicly claimed the company’s unreleased LaMDA chatbot was sentient, reporting that it had expressed fear of being turned off and described itself as a person. In response, Google placed him on administrative leave and later fired him, stating that its internal review found no evidence of consciousness and that his claims were “wholly unfounded.”
“If those AIs convince other people that they can suffer, or that they have a right not to be switched off, there will come a time when those people will argue that it deserves protection under law as a pressing moral matter,” he wrote.
But while Suleyman called the arrival of Seemingly Conscious AI “inevitable and unwelcome,” neuroscientist and professor of computational neuroscience Anil Seth attributed the rise of conscious-seeming AI to a “design choice” by tech companies rather than an inevitable step in AI development.
Suleyman himself co-founded Inflection AI in 2022 with the express aim of creating AI systems that foster more natural, emotionally intelligent interactions between humans and machines.
“Ultimately, these companies recognize that people want the most authentic feeling experiences,” Ajder said. “That’s how a company can get customers using their products most frequently. They feel natural and easy. But I think it really comes to a question of whether people are going to start wondering about authenticity.”