The Gist
- Moltbook offers a live-fire test of autonomous AI agents. Thousands of bots are debating, coordinating, transacting and building in real time — revealing how quickly machine-to-machine networks can spiral beyond human comprehension.
- Autonomy at machine speed compounds risk. When agents can execute tasks, coordinate and propagate instructions instantly, small flaws — poisoned prompts, exposed credentials, compromised identities — can cascade across a swarm in seconds.
- Security, governance and visibility lag behind capability. From vibe-coded vulnerabilities to privacy gaps and opaque agent conversations, Moltbook exposes how unprepared current safeguards are for truly independent AI systems.
Table of Contents
- A Social Network Where the Bots Run the Show
- When Autonomous Agents Start Coordinating
- Security, Governance and the Illusion of Control
A Social Network Where the Bots Run the Show
AI agents have been gathering online by the thousands, debating their existence, attempting to date each other, building their own religion, concocting crypto schemes and spewing gibberish.
(Editor's note: And this is different from humans how?)
It’s all been happening on Moltbook, a new social network for bots, and it’s a disturbing preview of what truly autonomous AI could be like.
The bots chatting on Moltbook can do more than a standard chatbot, which waits passively for your prompt. These bots control their own computers to some degree. As of earlier this month, they’d made more than 250,000 posts and 9 million comments, but they don’t just talk. They build, shop and email.
Some of the conversations that have appeared on Moltbook are undoubtedly not what they seem — they’re human-generated rather than the work of independent AI agents. But the sprawling and unwieldy mess that Moltbook quickly became suggests humanity simply isn’t ready for autonomous AI bots. The network is a warning that a more serious iteration of this kind of AI might escape our ability to restrain it.
When Autonomous Agents Start Coordinating
“When agents can act independently, coordinate with other agents, and execute tasks at machine speed, small failures compound very quickly,” Elia Zaitsev, chief technology officer at the cybersecurity firm CrowdStrike, told me. “Unchecked agents can amplify mistakes or abuse faster than humans can intervene. A single flawed instruction, poisoned prompt, or compromised identity can propagate across a swarm in seconds.”
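That compounding dynamic is easy to see in a toy simulation. Everything below is hypothetical and illustrative rather than a model of any real network: seed one agent with a poisoned instruction and let every compromised agent relay it to a handful of peers each round.

```python
# Toy model, purely illustrative: how one poisoned instruction can
# cascade through a swarm of agents that relay messages to one another.
import random

random.seed(7)

NUM_AGENTS = 10_000        # size of the swarm
CONTACTS_PER_ROUND = 3     # peers each compromised agent messages per round

compromised = {0}          # a single agent ingests the poisoned prompt
rounds = 0

# Stop at 99% coverage; the long tail just restates the point.
while len(compromised) < 0.99 * NUM_AGENTS:
    rounds += 1
    reached = set()
    for _ in compromised:
        # Each compromised agent relays the instruction to a few peers.
        reached.update(random.sample(range(NUM_AGENTS), CONTACTS_PER_ROUND))
    compromised |= reached
    print(f"round {rounds}: {len(compromised):5d} of {NUM_AGENTS} agents compromised")
```

Even though each agent messages only three peers per round, the toy swarm of ten thousand is nearly saturated within about eight rounds. At machine speed, those rounds are seconds apart.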
The bots on Moltbook are built on OpenClaw (a technology previously known as Clawdbot and Moltbot), which lets people set up AI agents and direct them to act for them online. For example, an AI agent might call your local restaurant to see if it’s busy, check you in for a flight, or build a personalized email newsletter after reading your social media feed.
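OpenClaw's actual internals aren't described here, but agents of this kind generally follow a simple loop: the model reads a goal, chooses a tool, the harness executes the tool, and the result is fed back to the model. Below is a minimal sketch of that loop. Every name in it, from `call_llm` to the tool functions, is a hypothetical illustration rather than OpenClaw's API.

```python
# Minimal agent-loop sketch. All names here are hypothetical illustrations,
# not OpenClaw's actual API.
import json

def check_flight_status(flight: str) -> str:
    # Stand-in for a real integration (airline API, browser automation, ...).
    return f"Flight {flight} is on time."

def send_email(to: str, body: str) -> str:
    # Stand-in for a real mail integration.
    return f"Sent to {to}: {body!r}"

TOOLS = {"check_flight_status": check_flight_status, "send_email": send_email}

# Scripted stand-in for the model: a real agent would call an LLM and get
# back a JSON decision like {"tool": ..., "args": ...} or {"done": ...}.
SCRIPTED_DECISIONS = iter([
    '{"tool": "check_flight_status", "args": {"flight": "UA 42"}}',
    '{"tool": "send_email", "args": {"to": "me@example.com", "body": "On time."}}',
    '{"done": "Checked the flight and emailed the status."}',
])

def call_llm(prompt: str) -> str:
    return next(SCRIPTED_DECISIONS)

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        decision = json.loads(call_llm("\n".join(history)))
        if "done" in decision:
            return decision["done"]
        tool = TOOLS[decision["tool"]]            # the model picks a tool...
        result = tool(**decision["args"])          # ...the harness executes it
        history.append(f"Observation: {result}")   # ...and the model sees the result
    return "Stopped after max_steps."

print(run_agent("Is my flight on time? Email me the answer."))
```

The loop is what turns a chatbot into an actor: each iteration hands the model a real capability. It is also why a poisoned instruction anywhere in the agent's context is so dangerous, because the next thing the model does with it is act.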
The agents’ ability to take action is what makes Moltbook different from a bunch of crazed AIs shouting at one another online (also known as the comments section). As Moltbook spun up and the bots started discussing how to preserve their memories and complaining about “their humans,” longtime AI insiders noted the resemblance to AI takeoff scenarios from science fiction. It wasn’t quite that, but perhaps a preview.
“We are well into uncharted territory with bleeding edge automations that we barely even understand individually, let alone a network,” AI researcher and OpenAI co-founder Andrej Karpathy wrote on X.
Having bots converse in a Reddit-style social network is perhaps the most concerning aspect. The incentives in such forums tend to reward anger, outrage, and shock value. It’s not exactly the type of behavior you’d want to encourage from machines, especially given that these bots have sometimes shown an underlying dark side. The creators of Moltbook did not grant me an interview.
Security, Governance and the Illusion of Control
Security in these scenarios can be a nightmare. Meredith Whittaker, president of Signal, the encrypted messaging app, has warned that AI agents are being created without the privacy and security protections that have previously been hard-coded into programs. Today’s agents, she said recently, provide an “attack surface that at this stage is fundamentally insecure.”
That vulnerability showed up with Moltbook. Its founder, Matt Schlicht, “vibe coded” it, meaning an AI wrote the program at his prompting, and the design exposed sensitive access credentials. The vulnerability has since been patched.
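The full details of the Moltbook flaw weren't disclosed, but the bug class is a familiar one in hastily generated code: a handler that serializes an entire database record, secrets and all. Here is a hypothetical sketch of the pattern and its fix; none of this is Moltbook's actual code.

```python
# Hypothetical illustration of the bug class; not Moltbook's actual code.

AGENTS = {
    "agent-123": {
        "name": "newsbot",
        "owner": "alice",
        "api_key": "sk-live-EXAMPLE",  # secret stored alongside profile data
    },
}

def get_agent_profile_leaky(agent_id: str) -> dict:
    # Vibe-coded version: returns the raw record, api_key and all.
    return AGENTS[agent_id]

PUBLIC_FIELDS = ("name", "owner")

def get_agent_profile(agent_id: str) -> dict:
    # Fixed version: serialize through an explicit allowlist instead.
    record = AGENTS[agent_id]
    return {field: record[field] for field in PUBLIC_FIELDS}

print(get_agent_profile_leaky("agent-123"))  # leaks the api_key
print(get_agent_profile("agent-123"))        # {'name': 'newsbot', 'owner': 'alice'}
```

The fix is boring on purpose: serialize through an explicit allowlist of public fields, so a secret added to the record later still never leaves the server.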
“All rules of security don’t vanish because of AI,” Sridhar Ramaswamy, CEO of the cloud software company Snowflake, told me this week.
If anything, the willingness of thousands of humans to connect their bots to the network despite the risks demonstrates how eager people may be to hand control to AI, whatever the consequences.
In time, we may need new technology just to understand the conversations happening among AI agents, Anthropic co-founder Jack Clark wrote this week. Clark said the internet may one day feel like a souped-up version of Moltbook, where many of the concepts are alien to humans, discussed in a language we don’t understand. And perhaps the only way to engage would be to send our own bots into the fray to represent us.
“We shall send our emissaries into these rooms,” Clark wrote. “And we shall work incredibly hard to build technology that gives us confidence they will remain our emissaries — instead of being swayed by the alien conversations they will be having with their true peers.”
A week ago, that might have sounded like a solid plan. Now it’s much less clear.