At the time, it was easy to dismiss Lemoine as a kook. (Google claimed it had AI researchers, philosophers and ethicists investigate Lemoine’s claims and found them without merit.) Even now, it’s not clear to me if this was an early case of “AI psychosis” or if Lemoine was engaging in a kind of philosophical prank designed to force people to reckon with the same dangers Suleyman is now warning us about. Either way, we should have spent more time seriously considering his case and its implications. There are many more Lemoines out there today.
Despite its weak language skills, ELIZA convinced many people who interacted with it that it was a real therapist. Even people who should have known better—such as other computer scientists—seemed eager to share intimate personal details with it. (The ease with which people anthropomorphize chatbots even came to be called “the ELIZA effect.”) In a way, people’s reactions to ELIZA were a precursor to today’s “AI psychosis.”
Rather than feeling triumphant at how believable ELIZA was, Weizenbaum was depressed by how gullible people seemed to be. His disillusionment extended further: he became increasingly disturbed by the way his fellow AI researchers fetishized anthropomorphism as a goal, a conviction that would eventually contribute to his break with the entire field.
In his seminal 1976 book Computer Power and Human Reason: From Judgment to Calculation, he castigated AI researchers for their functionalism—they focused only on outputs and outcomes as the measure of intelligence, not on the process that produced those outcomes. In contrast, Weizenbaum argued that “process”—what takes place inside our brains—was in fact the seat of morality and moral rights. Although he had initially set out to create an AI therapist, he now argued that chatbots should never be used for therapy, because what mattered in a therapeutic relationship was the bond between two individuals with lived experience—something AI could mimic but never match. He also argued that AI should never be used as a judge for the same reason: the possibility of mercy, too, came only from lived experience.
As we ponder the troubling questions raised by SCAI, I think we should all turn back to Weizenbaum. We should not confuse the simulation of lived experience with actual life. We should not extend moral rights to machines just because they seem sentient. We must not confuse function with process. And tech companies must do far more in the design of AI systems to prevent people from fooling themselves into thinking these systems are conscious beings.
With that, here’s more AI news.