If there is one thing that Ilya Sutskever knows, it is the opportunities—and risks—that stem from the advent of artificial intelligence.
An AI safety researcher and one of the top minds in the field, he served for years as the chief scientist of OpenAI. There he had the explicit goal of creating deep learning neural networks so advanced they would one day be able to think and reason just as well as, if not better than, any human.
Artificial general intelligence, or simply AGI, is the official term for that goal. It remains the holy grail for researchers to this day: a chance for mankind at last to give birth to its own sentient lifeform, even if it's silicon- rather than carbon-based.
“We’re definitely going to build a bunker before we release AGI,” Sutskever told his team in 2023, months before he would ultimately leave the company.
Sutskever reasoned that his fellow scientists would need protection by that point, since the technology would be too powerful not to become an object of intense desire for governments around the world.
“Of course, it’s going to be optional whether you want to get into the bunker,” he assured fellow OpenAI scientists, according to people present at the time.
Written by former Wall Street Journal correspondent Karen Hao and based on dozens of interviews with some 90 current and former company employees either directly involved in or with knowledge of the events, Empire of AI reveals new information about the brief but spectacular coup that ousted Sam Altman as CEO in November 2023 and what it meant for the company behind ChatGPT.
Neither OpenAI nor Sutskever responded to a request for comment from Fortune made outside of normal working hours. Mira Murati could not be reached.
“There is a group of people—Ilya being one of them—who believe that building AGI will bring about a rapture,” said one researcher quoted by Hao, who was present when Sutskever revealed his plans for a bunker. “Literally a rapture.”