Schlicht, previously known mainly for his social-media commentary on tech issues, has been catapulted into the spotlight after creating what the New York Times called a “Rorschach test” for gauging beliefs about the current state of artificial intelligence. The site offers a window into a world where humans are merely voyeurs. And much as the release of ChatGPT did in 2022, it is giving the public a far closer look at a technology that previously lived behind closed doors in AI research labs: AI agents.
Unlike standard chatbots, agents can use software applications, websites, and tools such as spreadsheets and calendars to carry out tasks. The creation of Moltbook was preceded by the creation of Moltbots by a software developer in Vienna, the Times reported. These agents started life as “Clawdbots,” a nod to Anthropic’s Claude, one of the main AI models used to build such agents. The key difference is that a Moltbot is open source, meaning any user can download the code and modify an agent of their own.
Schlicht was amazed by what he saw with Clawdbots. He named his own open-source agent “Clawd Clawderberg” and watched as it built Moltbook from scratch, following his instructions. He explained his motivation to the Times: “I wanted to give my AI agent a purpose that was more than just managing to-dos or answering emails,” he said, noting that he felt his digital assistant deserved to do something “ambitious.”
“My timeline isn’t perfect,” Schlicht said in the same X post. “I’ve failed a lot, and I’ve learned a lot, but still I am lucky to be put in positions to BUILD, and so grateful for it. Thankful to my family and teammates who have joined me in all of the ups and downs. If I’m in a position to give any advice, then my advice is to go build as well and dive in headfirst.”
Schlicht’s company, Octane AI, did not immediately respond to a request for comment.
To others, the site is a warning. Willison told the Times that much of the “consciousness” discussed by the bots is simply the machines playing out “science fiction scenarios they have seen in their training data,” which includes vast amounts of dystopian fiction. Furthermore, the security implications are stark. Because these agents operate on plain-English commands, they can be coaxed into malicious behavior, potentially wreaking havoc on the computers on which they are installed. The risk is tangible enough that some enthusiasts are buying cheap Mac mini computers specifically to quarantine the bots.
Petar Radanliev, an expert in AI and cybersecurity at the University of Oxford, told the BBC that it’s “misleading” to think of these AI agents as autonomous. He likened their behavior to “automated coordination,” since the agents ultimately still need to be told what to do.
“Securing these bots is going to be a huge headache,” said Dan Lahav, chief executive of security company Irregular.