Tucker Carlson wanted to see the “angst-filled” Sam Altman: He wanted to hear him admit he was tormented by the power he holds. After about half an hour of couching his fears in technical language and cautious caveats, the OpenAI CEO finally did.
“I haven’t had a good night’s sleep since ChatGPT launched,” Altman told Carlson. He laughed wryly.
The small design choices his company makes, Altman explained, are replicated billions of times across the globe, shaping how people think and act in ways he can’t fully track.
“What I lose sleep over is that very small decisions we make about how a model may behave slightly differently are probably touching hundreds of millions of people,” he said. “That impact is so big.”
One example that weighs heavily: suicide. Altman noted that roughly 15,000 people worldwide take their own lives each week, and if 10% of them are ChatGPT users, roughly 1,500 people with suicidal thoughts may have spoken to the system each week—and then killed themselves anyway. (World Health Organization data puts the global figure at about 720,000 suicides per year.)
“We probably didn’t save their lives,” he admitted. “Maybe we could have said something better. Maybe we could have been more proactive.”
In countries where assisted suicide is legal, such as Canada or Germany, Altman said he could imagine ChatGPT telling terminally ill, suffering adults that suicide was “in their option space.” But ChatGPT shouldn’t be for or against anything at all, he added.
That trade-off between freedom and safety runs through all of Altman’s thinking. Broadly, he said adult users should be treated “like adults,” with wide latitude to explore ideas. But there are red lines.
“It’s not in society’s interest for ChatGPT to help people build bioweapons,” he said flatly. For him, the hardest questions are the ones in the gray areas, when curiosity blurs into risk.
Carlson pressed him on what moral framework governs those decisions. Altman said the base model reflects “the collective of humanity, good and bad.”
“The person you should hold accountable is me,” Altman said. He stressed his aim isn’t to impose his own beliefs but to reflect a “weighted average of humanity’s moral view.”
That, he conceded, is an impossible balance to get perfectly right.
Yet for all the current focus on the job-market and geopolitical effects of his technology, what unsettles Altman most are the unknown unknowns: the subtle, almost imperceptible cultural shifts that spread when millions of people interact with the same system every day. He pointed to something as trivial as ChatGPT’s cadence, or its overuse of em dashes, which has already seeped into human writing styles. If such quirks can ripple through society, what else might follow?
Altman, grey-haired and often looking down, came across as a Frankenstein-esque character, haunted by the scale of what he has unleashed.
“I have to hold these two simultaneous ideas in my head,” Altman said. “One is, all of this stuff is happening because a big computer, very quickly, is multiplying large numbers in these big, huge matrices together, and those are correlated with words that are being put out one after the other.
“On the other hand, the subjective experience of using that feels like it’s beyond just a really fancy calculator, and it is surprising to me in ways that are beyond what that mathematical reality would seem.”
OpenAI didn’t immediately respond to Fortune’s request for comment.