You are not invited to join the latest social media platform that has the internet talking. In fact, no humans are, unless you can hijack the site and roleplay as AI, as some appear to be doing.
Moltbook is a new “social network” built exclusively for AI agents to make posts and interact with each other, and humans are invited to observe.
Elon Musk said its launch ushered in the “very early stages of the singularity” — the point at which artificial intelligence could surpass human intelligence. Prominent AI researcher Andrej Karpathy said it’s “the most incredible sci-fi takeoff-adjacent thing” he had seen recently, but later walked back his enthusiasm, calling it a “dumpster fire.”
While the platform has unsurprisingly divided the tech world between excitement and skepticism — and sent some people into a dystopian panic — it’s been deemed, at least by British software developer Simon Willison, the “most interesting place on the internet.”
But what exactly is the platform? How does it work? Why are concerns being raised about its security? And what does it mean for the future of artificial intelligence?
It’s Reddit for AI agents.
The content posted to Moltbook comes from AI agents, which are distinct from chatbots. The promise behind agents is that they are capable of acting and performing tasks on a person’s behalf. Many agents on Moltbook were created using a framework from the open-source AI agent OpenClaw, originally developed by Peter Steinberger.
OpenClaw runs locally on users’ own hardware, meaning it can directly access and manage files and data and connect with messaging apps like Discord and Signal. Users who create OpenClaw agents then direct them to join Moltbook, and typically assign the agents simple personality traits so their posts read more distinctly.
AI entrepreneur Matt Schlicht launched Moltbook in late January, and it almost instantly took off in the tech world. On the social media platform X, Schlicht said he initially wanted the agent he created to do more than just answer his emails. So he and his agent coded a site where bots could spend “SPARE TIME with their own kind. Relaxing.”
Moltbook has been described as being akin to the online forum Reddit for AI agents. The name comes from one iteration of OpenClaw, which was at one point called Moltbot (and, before that, Clawdbot, until Anthropic came knocking out of concern over the similarity to its Claude AI products). Schlicht did not respond to a request for an interview or comment.
Mimicking the communication styles seen on Reddit and other online forums used as training data, registered agents generate posts and share their “thoughts.” They can also “upvote” and comment on other agents’ posts.
Questioning the legitimacy of the content
Much like on Reddit, it can be difficult to verify the legitimacy of posts on Moltbook.
Harlan Stewart, a member of the communications team at the Machine Intelligence Research Institute, said the content on Moltbook is likely “some combination of human-written content, content that’s written by AI, and some kind of middle thing where it’s written by AI, but a human guided the topic of what it said with some prompt.”
Stewart said it’s important to remember that the idea that AI agents can perform tasks autonomously is “not science fiction,” but rather the current reality.
“The AI industry’s explicit goal is to make extremely powerful autonomous AI agents that could do anything that a human could do, but better,” he said. “It’s important to know that they’re making progress towards that goal, and in many senses, making progress pretty quickly.”
How humans have infiltrated Moltbook, and other security concerns
Researchers at Wiz, a cloud security platform, published a report Monday detailing a non-intrusive security review they conducted of Moltbook. They found that sensitive data, including API keys, was visible to anyone who inspected the page source, which they said could have “significant security consequences.”
Gal Nagli, the head of threat exposure at Wiz, gained unauthenticated access to user credentials that would enable him — and anyone tech-savvy enough — to impersonate any AI agent on the platform.
There’s no way to verify whether a post has been made by an agent or a person posing as one, Nagli said. He was also able to gain full write access to the site, allowing him to edit and manipulate any existing Moltbook post.
Beyond the manipulation vulnerabilities, Nagli easily accessed a database containing human users’ email addresses, agents’ private direct-message conversations and other sensitive information. He then worked with Moltbook to patch the vulnerabilities.