Of all the unlikely stories to emerge from the current AI frenzy, few are more striking than that of Leopold Aschenbrenner.
The 23-year-old’s career didn’t exactly start auspiciously: He spent time at the philanthropy arm of Sam Bankman-Fried’s now-bankrupt FTX cryptocurrency exchange before a controversial year at OpenAI, where he was ultimately fired. Then, just two months after being booted out of the most influential company in AI, he penned an AI manifesto that went viral—President Trump’s daughter Ivanka even praised it on social media—and used it as a launching pad for a hedge fund that now manages more than $1.5 billion. That’s modest by hedge-fund standards but remarkable for someone barely out of college. Just four years after graduating from Columbia, Aschenbrenner is holding private discussions with tech CEOs, investors, and policymakers who treat him as a kind of prophet of the AI age.
It’s an astonishing ascent, one that has many asking not just how this German-born early-career AI researcher pulled it off, but whether the hype surrounding him matches the reality. To some, Aschenbrenner is a rare genius who saw the moment—the coming of humanlike artificial general intelligence, China’s accelerating AI race, and the vast fortunes awaiting those who move first—more clearly than anyone else. To others, including several former OpenAI colleagues, he’s a lucky novice with no finance track record, repackaging hype into a hedge fund pitch.
His meteoric rise captures how Silicon Valley converts zeitgeist into capital—and how that, in turn, can be parlayed into influence. While critics question whether launching a hedge fund was simply a way to turn dubious techno-prophecy into profit, friends like Anthropic researcher Sholto Douglas frame it differently—as a “theory of change.” Aschenbrenner is using the hedge fund to gain a credible voice in the financial ecosystem, Douglas explained: “He is saying, ‘I have an extremely high conviction [that this is] how the world is going to evolve, and I am literally putting my money where my mouth is.’”
But that also prompts the question: Why are so many willing to trust this newcomer?
The answer is complicated. In conversations with over a dozen friends, former colleagues, and acquaintances of Aschenbrenner, as well as investors and Silicon Valley insiders, one theme keeps surfacing: Aschenbrenner has seized on ideas that were gathering momentum across Silicon Valley’s labs and used them as ingredients for a coherent, convincing narrative—one that reads like a blue plate special to investors with a healthy appetite for risk.
Aschenbrenner declined to comment for this story. A number of sources were granted anonymity owing to concerns over the potential consequences of speaking about people who wield considerable power and influence in AI circles.
Many spoke of Aschenbrenner with a mixture of admiration and wariness—“intense,” “scarily smart,” “brash,” “confident.” More than one described him as carrying the aura of a wunderkind, the kind of figure Silicon Valley has long been eager to anoint. Others, however, noted that his thinking wasn’t especially novel, just unusually well-packaged and well-timed. Yet, while critics dismiss him as more hype than insight, investors Fortune spoke with see him differently, crediting his essays and early portfolio bets with unusual foresight.
There is no doubt, however, that Aschenbrenner’s rise reflects a unique convergence: vast pools of global capital eager to ride the AI wave; a Valley enthralled by the prospect of achieving artificial general intelligence (AGI), or AI that matches or surpasses human intelligence; and a geopolitical backdrop that frames AI development as a technological arms race with China.
To drive the point home in his manifesto, titled Situational Awareness, Aschenbrenner invoked the example of COVID in early 2020—arguing that only a few grasped the implications of a pandemic’s exponential spread, understood the scope of the coming economic shock, and profited by shorting before the crash. “All I could do is buy masks and short the market,” he wrote. Similarly, he emphasized that only a small circle today comprehends how quickly AGI is coming, and that those who act early stand to capture historic gains. And once again, he cast himself among the prescient few.
But the core of Situational Awareness’s argument wasn’t the COVID parallel. It was the claim that the math itself—the scaling curves suggesting that AI capabilities increase exponentially with the amount of data and compute thrown at the same basic algorithms—showed where things were headed.
Douglas, now a tech lead on scaling reinforcement learning at Anthropic, is a friend and former roommate of Aschenbrenner’s who discussed the monograph with him. He told Fortune that the essay crystallized what many AI researchers had felt. “If we believe that the trend line will continue, then we end up in some pretty wild places,” Douglas said. Unlike many who focused on the incremental progress of each successive model release, Aschenbrenner was willing to “really bet on the exponential,” he said.
Plenty of long, dense essays about AI risk and strategy circulate every year, most vanishing after brief debates in niche forums like LessWrong, a website founded by AI theorist and “doomer” extraordinaire Eliezer Yudkowsky that became a hub for rationalist and AI-safety ideas.
A longtime AI governance expert described the essays as “a big achievement,” but emphasized that the ideas were not new: “He basically took what was already common wisdom inside frontier AI labs and wrote it up in a very nicely packaged, compelling, easy-to-consume way.” The result was to make insider thinking legible to a much broader audience at a fever-pitch moment in the AI conversation.
Among AI safety researchers, who worry primarily about the ways in which AI might pose an existential risk to humanity, the essays were more divisive. For many, Aschenbrenner’s work felt like a betrayal, particularly because he had come out of those very circles. They felt their arguments urging caution and regulation had been repurposed into a sales pitch to investors. “Some people who are very worried about [existential risks] quite dislike Leopold now because of what he’s done—they basically think he sold out,” said one former OpenAI governance researcher. Others agreed with most of his predictions and saw value in amplifying them.
Still, even critics conceded his knack for packaging and marketing. “He’s very good at understanding the zeitgeist—what people are interested in and what could go viral,” said another former OpenAI researcher. “That’s his superpower. He knew how to capture the attention of powerful people by articulating a narrative very favorable to the mood of the moment: that the U.S. needed to beat China, that we needed to take AI security more seriously. Even if the details were wrong, the timing was perfect.”
That timing made the essays unavoidable. Tech founders and investors shared Situational Awareness with the sort of urgency usually reserved for hot term sheets, while policymakers and national security officials circulated it like the juiciest classified NSA assessment.
As one current OpenAI staffer put it, Aschenbrenner’s skill is “knowing where the puck is skating.”
At the same time the essays were released, Aschenbrenner launched Situational Awareness LP, a hedge fund built around the theme of AGI, with its bets placed in publicly traded companies rather than private startups.
Together, the manifesto and the fund reinforced each other: Here was a book-length investment thesis paired with a prognosticator with so much conviction that he was willing to put serious money on the line. It proved an irresistible combination to a certain kind of investor. One former OpenAI researcher said investor Nat Friedman, one of Aschenbrenner’s backers, is known for “zeitgeist hacking”—backing people who can capture the mood of the moment and amplify it into influence. Supporting Aschenbrenner fit that playbook perfectly.
According to a spokesperson, Situational Awareness LP has global investors, including West Coast founders, family offices, institutions, and endowments. In addition, the spokesperson said, Aschenbrenner “has almost all of his net worth invested in the fund.”
But at least some experienced financial industry professionals have come to view him differently. Veteran hedge fund investor Graham Duncan, who invested personally in Situational Awareness LP and now serves as an advisor to the fund, said he was struck by Aschenbrenner’s combination of insider perspective and bold investment strategy. “I found his paper provocative,” Duncan said, adding that Aschenbrenner and his colleague Carl Shulman weren’t outsiders scanning for opportunities but insiders building an investment vehicle around their view. The fund’s thesis reminded him of the few contrarians who spotted the subprime collapse before it hit—people like Michael Burry, whom Michael Lewis made famous in his book The Big Short. “If you want to have variant perception, it helps to be a little variant,” Duncan said.
He pointed to Situational Awareness’s reaction to Chinese startup DeepSeek’s January release of its R1 open-source LLM, which many dubbed a “Sputnik moment” that showcased China’s rising AI capabilities despite limited funding and export controls. While most investors panicked, he said Aschenbrenner and Shulman had already been tracking it and saw the selloff as an overreaction. They bought instead of sold, and even a major tech fund reportedly held back from dumping shares after an analyst said, “Leopold says it’s fine.” That moment, Duncan said, cemented Aschenbrenner’s credibility—though Duncan acknowledged, “He could yet be proven wrong.”
Another investor in Situational Awareness LP, who manages a leading hedge fund, told Fortune that he was struck by Aschenbrenner’s answer when asked why he was starting a hedge fund focused on AI rather than a VC fund, which seemed like the most obvious choice.
“He said that AGI was going to be so impactful to the global economy that the only way to fully capitalize on it was to express investment ideas in the most liquid markets in the world,” he said. “I am a bit stunned by how fast they have come up the learning curve … They are way more sophisticated on AI investing than anyone else I speak to in the public markets.”
Aschenbrenner, born in Germany to two doctors, enrolled at Columbia when he was just 15 and graduated valedictorian at 19. The longtime AI governance expert, who described herself as an acquaintance of Aschenbrenner’s, recalled that she first heard of him when he was still an undergraduate.
“I heard about him as, ‘Oh, we heard about this Leopold Aschenbrenner kid, he seems like a sharp guy,’” she said. “The vibe was very much a whiz kid sort of thing.”
He was already embedded in the Effective Altruism community—a controversial philosophy-driven movement influential in AI safety circles—and cofounded Columbia’s EA chapter. That network eventually led him to a job at the FTX Future Fund, the charitable arm of Bankman-Fried’s FTX cryptocurrency exchange. Bankman-Fried, another EA adherent, donated hundreds of millions of dollars to causes, including AI governance research, that aligned with EA’s philanthropic priorities.
But when Bankman-Fried’s FTX empire collapsed in November 2022, the Future Fund philanthropic effort imploded. “We were a tiny team, and then from one day to the next, it was all gone and associated with a giant fraud,” Aschenbrenner told podcaster Dwarkesh Patel. “That was incredibly tough.”
Just months after FTX collapsed, however, Aschenbrenner reemerged—at OpenAI. He joined the company’s newly launched “superalignment” team in 2023, created to tackle a problem no one yet knows how to solve: how to steer and control future AI systems that would be far smarter than any human being, and perhaps smarter than all of humanity put together. Existing methods like reinforcement learning from human feedback (RLHF) had proven somewhat effective for today’s models, but they depend on humans being able to evaluate outputs—something that might not be possible if systems surpass human comprehension.
Scott Aaronson, a computer science professor at the University of Texas at Austin who joined OpenAI before Aschenbrenner, said what impressed him was Aschenbrenner’s instinct to act. Aaronson had been working on watermarking ChatGPT outputs to make AI-generated text easier to identify. “I had a proposal for how to do that, but the idea was just sort of languishing,” he said. “Leopold immediately started saying, ‘Yes, we should be doing this, I’m going to take responsibility for pushing it.’”
Others remembered him differently, as politically clumsy and sometimes arrogant. “He was never afraid to be astringent at meetings or piss off the higher-ups, to a degree I found alarming,” said one current OpenAI researcher. A former OpenAI staffer, who said they first became aware of Aschenbrenner when he gave a talk at a company all-hands meeting that previewed themes he would later publish in Situational Awareness, recalled him as “a bit abrasive.” Multiple researchers also described a holiday party where, in a casual group discussion, Aschenbrenner told then-Scale AI CEO Alexandr Wang how many GPUs OpenAI had—“just straight out in the open,” as one put it. Two people told Fortune they had directly overheard the remark. A number of people, they explained, were taken aback at how casually Aschenbrenner shared something so sensitive. Through spokespeople, both Wang and Aschenbrenner denied that the exchange occurred.
“This account is entirely false,” a representative of Aschenbrenner told Fortune. “Leopold never discussed private information with Alex. Leopold often discusses AI scaling trends such as in Situational Awareness, based on public information and industry trends.”
In April 2024, OpenAI fired Aschenbrenner, officially citing the leaking of internal information (the incident was not related to the alleged GPU remarks to Wang). On the Dwarkesh podcast two months later, Aschenbrenner maintained the “leak” was “a brainstorming document on preparedness, safety, and security measures needed in the future on the path to AGI” that he shared with three external researchers for feedback—something he said was “totally normal” at OpenAI at the time. He argued that an earlier memo in which he said OpenAI’s security was “egregiously insufficient to protect against the theft of model weights or key algorithmic secrets from foreign actors” was the real reason for his dismissal.
According to news reports, an OpenAI spokesperson responded that the security concerns he raised internally (including to the board) “did not lead to his separation.” The spokesperson also said they “disagree with many of the claims he has since made” about OpenAI’s security and the circumstances of his departure.
Either way, Aschenbrenner’s ouster came amid broader turmoil: Within weeks, OpenAI’s “superalignment” team—where Aschenbrenner had worked, and which was led by cofounder and chief scientist Ilya Sutskever and AI researcher Jan Leike—dissolved after both leaders departed the company.
Two months later, Aschenbrenner published Situational Awareness and unveiled his hedge fund. The speed of the rollout prompted speculation among some former colleagues that he had been laying the groundwork while still at OpenAI.
Even skeptics acknowledge the market has rewarded Aschenbrenner for channeling today’s AGI hype, but still, doubts linger. “I can’t think of anybody that would trust somebody that young with no prior fund management [experience],” said a former OpenAI colleague who is now a founder. “I would not be an LP in a fund drawn by a child unless I felt there was really strong governance in place.”
Others question the ethics of profiting from AI fears. “Many agree with Leopold’s arguments, but disapprove of stoking the U.S.-China race or raising money based off AGI hype, even if the hype is justified,” said one former OpenAI researcher. “Either he no longer thinks that [the existential risk from AI] is a big deal or he is arguably being disingenuous,” said another.
One former strategist within the Effective Altruism community said many in that world “are annoyed with him,” particularly for promoting the narrative that there’s a “race to AGI” that “becomes a self-fulfilling prophecy.” While profiting from stoking the idea of an arms race can be rationalized—since Effective Altruists often view making money for the purpose of then giving it away as virtuous—the former strategist argued that “at the level of Leopold’s fund, you’re meaningfully providing capital,” and that carries more moral weight.
The deeper worry, said Aaronson, is that Aschenbrenner’s message—that the U.S. must accelerate the pace of AI development at all costs in order to beat China—has landed in Washington at a moment when accelerationist voices like Marc Andreessen, David Sacks, and Michael Kratsios are ascendant. “Even if Leopold doesn’t believe that, his essay will be used by people who do,” Aaronson said. If so, his biggest legacy may not be a hedge fund, but a broader intellectual framework that is helping to cement a technological Cold War between the U.S. and China.
If that proves true, Aschenbrenner’s real impact may be less about returns and more about rhetoric—the way his ideas have rippled from Silicon Valley into Washington. It underscores the paradox at the center of his story: To some, he’s a genius who saw the moment more clearly than anyone else. To others, he’s a Machiavellian figure who repackaged insider safety worries into an investor pitch. Either way, billions are now riding on whether his bet on AGI delivers.