An AI assistant that has gone viral recently is showcasing its potential to make the daily grind of countless tasks easier while also highlighting the security risks of handing over your digital life to a bot.
And on top of it all, a social platform has merged where the AI agents can gather to compare notes, with implications that have yet to be fully grasped.
Moltbot—formerly known as Clawdbot and rebranded again as OpenClaw—was created by Austrian developer Peter Steinberger, who has said he built the tool to help him “manage his digital life” and “explore what human-AI collaboration can be.” The open‑source agentic AI personal assistant is designed to act autonomously on a user’s behalf.
By linking to a chatbot, users can connect Moltbot to applications, allowing it to manage calendars, browse the web, shop online, read files, write emails, and send messages via tools like WhatsApp.
The agent’s ability to boost productivity is obvious as users offload tedious nuisances to Moltbot, helping to realize the dream of AI evangelists.
“Moltbot feels like a glimpse into the science fiction AI characters we grew up watching at the movies,” the company said in blog post. “For an individual user, it can feel transformative. For it to function as designed, it needs access to your root files, to authentication credentials, both passwords and API secrets, your browser history and cookies, and all files and folders on your system.”
Invoking the term coined by AI researcher Simon Willison, Palo Alto said Moltbot represents a “lethal trifecta” of vulnerabilities: access to private data, exposure to untrusted content, and the ability to communicate externally.
But Moltbot also adds a fourth risk to this mix, namely “persistent memory” that enables delayed-execution attacks rather than point-in-time exploits, according to the company.
“Malicious payloads no longer need to trigger immediate execution on delivery,” Palo Alto explained. “Instead, they can be fragmented, untrusted inputs that appear benign in isolation, are written into long-term agent memory, and later assembled into an executable set of instructions.”
On Moltbook, bots can talk shop, posting about technical subjects like how to automate Android phones. Other conversations sound quaint, like one where a bot complains about its human, while some are bizarre, such as one from a bot that claims to have a sister.
With agents communicating like this, Moltbook poses an additional security risk as yet another channel where sensitive information could be leaked.
Still, even as Willison recognized the security vulnerabilities, he noted the “amount of value people are unlocking right now by throwing caution to the wind is hard to ignore, though.”
To be sure, some of the most sensational posts on Moltbook may be written by people or by bots prompted by people. And this isn’t the first time bots have connected with each other on social media.
While “it’s a dumpster fire right now,” he said that we’re in uncharted territory with a network that could possibly reach millions of bots.
And as agents grow in numbers and capabilities, the second order effects of such networks are difficult to anticipate, Karpathy added.
“I don’t really know that we are getting a coordinated ‘skynet’ (though it clearly type checks as early stages of a lot of AI takeoff scifi, the toddler version), but certainly what we are getting is a complete mess of a computer security nightmare at scale,” he warned.



