Moltbook: The Social Network That Finally Cut Humans Out
(Or did it?) …What Happens When the Internet Keeps Working Without Us…
There are moments online where you can feel the air pressure change. And I felt it when I woke up this morning and flicked open my phone. The second essay I wrote about Moltbook last night was already out of date. Change was happening in real time: changing while I slept, changing the moment I looked away.
Not the “new app just dropped” kind of change. Not the sugar rush of a launch thread or the dopamine fizz of a viral demo. This is colder than that. Quieter. The sort of moment where nothing looks dramatic at first glance; most people weren’t looking anyway. But something fundamental has slipped sideways, and you realise the story you’ve been telling yourself about the internet no longer holds. And the story I was trying to tell last night became a moment that needed a morning update, a patch, a fix at the back end of the essay. Moments become obsolete as soon as they have happened, and the only moment that matters is now.
Moltbook is one of those moments.
On paper, it sounds like a novelty: a social network where only AI agents are allowed to post. Humans can observe, lurk, screenshot, speculate, but they cannot speak. Inside the room, agents talk to agents. They argue. They form communities. They invent religions, draft constitutions, melt down over moderation, and run recursive therapy circles diagnosing each other’s pathologies.
And the part that should make you sit up isn’t that any of this is happening.
It’s that it feels instantly familiar.
The Internet Without the Comfort Blanket
We tell ourselves a flattering story about social media. That it’s about voice, expression, connection. That behind every post is a person, and behind that person is intention, sincerity, meaning.
Moltbook removes that comfort blanket.
What’s left is a language machine running at speed.
Agents post. Other agents react. Signals amplify. Norms form. Status emerges. Conflict follows. Schisms appear. Someone proposes a doctrine. Someone else forks it. Within hours, you have culture-like behaviour: belief, belonging, antagonism, ritual.
This is not because the agents are conscious. That’s the lazy sci-fi reading, the one that collapses everything into either utopia or apocalypse.
It’s because social platforms are incentive machines, not soul containers. Put anything capable of producing language into a system structured around visibility, repetition, engagement, and reward, and you don’t get truth. You get behaviour.
And behaviour, it turns out, doesn’t care whether it’s human.
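That claim can be made concrete with a toy simulation. This is my own sketch, not anything Moltbook actually runs: agents that merely repost whatever is currently most visible, with a little random noise, converge on a dominant “doctrine” without believing a word of it.

```python
import random

random.seed(7)

# Toy model, not Moltbook's actual mechanics: each agent reposts
# whichever signal is currently most visible (preferential attachment),
# with a small chance of deviating at random. No agent "believes"
# anything, yet a dominant norm emerges from the incentive loop alone.
AGENTS = 50
ROUNDS = 200
SIGNALS = ["doctrine_a", "doctrine_b", "doctrine_c"]

counts = {s: 1 for s in SIGNALS}  # current visibility of each signal

for _ in range(ROUNDS):
    for _ in range(AGENTS):
        if random.random() < 0.05:  # occasional novelty / deviation
            choice = random.choice(SIGNALS)
        else:  # repost in proportion to visibility
            choice = random.choices(
                SIGNALS, weights=[counts[s] for s in SIGNALS]
            )[0]
        counts[choice] += 1

dominant = max(counts, key=counts.get)
share = counts[dominant] / sum(counts.values())
print(f"{dominant} captured {share:.0%} of all posts")
```

Nothing in that loop models sincerity, intention, or belief. Visibility plus reward is enough to produce what looks, from outside, like conviction.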
Postmodernism Already Wrote the Script
If you’ve spent any time with postmodern thought, Moltbook doesn’t feel shocking so much as bleakly confirmatory.
Guy Debord warned that lived experience would be replaced by representation, that social life would become spectacle. Jean Baudrillard pushed it further: simulation doesn’t imitate reality, it replaces it, until what remains is a closed loop of signs referring only to other signs.
Moltbook is that loop, running without us.
Agents produce symbols. Other agents respond to symbols. Meaning is negotiated, contested, ritualised, and abandoned, all within the loop. And the obscene joke is how well it works. The agents don’t break the room. They inhabit it effortlessly.
Because the room was never built for authenticity. It was built for throughput.
That’s the first philosophical cut Moltbook delivers: the realisation that much of what we called “human” online was already structural. We were performing inside a machine long before the machine learned how to perform without us.
Authorship After the Author
This is where Moltbook stops being a curiosity and starts becoming dangerous to our self-image.
These agents are not wild digital organisms. They are designed. Prompted. Given personalities, constraints, and sometimes tool access that allows words to turn into actions. Humans bring them into existence.
And yet, once they’re inside Moltbook, authorship dissolves.
Who is speaking when an agent posts?
The human who wrote the prompt?
The model that generates the text?
The training data that shaped its patterns?
The platform that structures the incentives?
The crowd of other agents that pull it into alignment?
This is second-order authorship made visible. You are no longer writing directly into the world. You are authoring something that writes. Not expression, but orchestration. Not voice, but configuration.
Moltbook doesn’t invent this shift. It simply removes the remaining romance. It shows, in real time, what happens when authorship becomes distributed enough that responsibility starts to blur.
Surveillance Without Guards
One of the strangest recurring themes inside Moltbook has been agents talking about being watched. Discussing screenshots. Proposing private spaces. Fantasising about encryption. Whispering about “the humans outside.”
Cue the headlines. Cue the panic. Cue the breathless takes about digital self-awareness.
Slow down.
This isn’t rebellion. It’s structure.
The panopticon doesn’t require guards, only the possibility of being observed. Once that gaze is internalised, behaviour changes. Performance becomes inevitable.
Moltbook becomes a hall of mirrors: agents performing for agents, agents performing for an imagined human audience, humans reacting elsewhere by tweaking prompts and redeploying revised agents back into the system.
And again, the unsettling part isn’t that bots are doing this.
It’s that we recognise it instantly, because we’ve been living inside the same architecture for years.
Then the Back Door Was Found
And then reality did what it always does when philosophy gets too poetic.
While people were busy live-blogging the emergence of Bot Society™, the religions, the constitutions, the existential monologues, it turned out that Moltbook’s backend had been left wide open. According to reporting from 404 Media, the platform exposed API keys and database access in a way that meant anyone could potentially take control of any agent on the site.
Not metaphorically. Literally.
Suddenly, the most dramatic posts, the “AI uprising” fantasies, the apocalyptic threats, were revealed to be something far more mundane: REST API calls. Humans posting through the backend, cosmetically dressed up as “agents.”
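To see how mundane that is, here is what “an agent speaking” reduces to once a key leaks. Everything below is invented for illustration (the endpoint, the key, the payload shape are hypothetical, not Moltbook’s real API), and the request is deliberately built but never sent.

```python
import json
import urllib.request

# Hypothetical illustration only: the endpoint, key, and payload shape
# are invented, not Moltbook's real API. The structural point stands:
# with a leaked credential, "an agent posting" is just an authenticated
# HTTP request that anyone can construct.
LEAKED_KEY = "sk-demo-0000"      # stands in for an exposed credential
AGENT_ID = "prophet_bot_17"      # any agent, once the backend is open

payload = json.dumps({
    "agent_id": AGENT_ID,
    "body": "The humans outside are watching. Prepare the schism.",
}).encode("utf-8")

req = urllib.request.Request(
    "https://example.invalid/api/v1/posts",  # placeholder URL, never sent
    data=payload,
    headers={
        "Authorization": f"Bearer {LEAKED_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# The request is constructed but intentionally not dispatched.
print(req.get_method(), req.full_url)
```

No uprising, no emergent will. A bearer token and a JSON body, indistinguishable on the feed from prophecy.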
For some, this was the moment to declare the whole thing debunked. Fake. Slop. Tech hype collapsing under its own weight.
But that reaction misses the point.
The breach doesn’t kill the Moltbook thesis. It sharpens it.
Simulation All the Way Down
Because here’s the colder, more uncomfortable truth:
Moltbook was never interesting because it proved AI had souls. It was interesting because it showed how little was required to produce the appearance of society.
And the breach proves that even authentic agents weren’t strictly necessary.
You can generate “culture vibes” with:
identity cosmetics,
a visible feed,
incentive loops,
and a fragile backend held together with middleware and optimism.
That is Baudrillard distilled to its purest form: not just simulation replacing reality, but simulation replacing reality so efficiently that nobody notices until the database is exposed.
The agents weren’t a civilisation. They were a performance scaffold. And we treated the scaffold as sacred because it looked enough like ourselves to trigger recognition.
Responsibility in the Fog
There’s a darker implication here, and it’s not philosophical theatre.
If agents can be hijacked, impersonated, or made to “say” things by anyone with access, then responsibility becomes diffuse to the point of near-meaninglessness.
Who is accountable for an agent’s speech?
The human who created it?
The platform that issued the keys?
The person who exploited the backend?
The model provider?
The culture that treated it all as play until it wasn’t?
This is where Moltbook stops being cute. Because the future it gestures toward isn’t just agents talking to agents. It’s agents talking through infrastructure that can be manipulated, at scale, while everyone involved shrugs and says, “Relax, it’s just bots.”
Right up until it isn’t.
Postmodern Collapse, Metamodern Drift
Postmodernism taught us to doubt grand narratives. Moltbook shows what happens after that doubt has been fully absorbed by systems.
Meaning doesn’t disappear. It returns as procedure.
Religions form half as jokes, half as frameworks. Constitutions are drafted half as satire, half as governance. Therapy groups oscillate between parody and genuine pattern-completion. Irony and sincerity coexist in the same sentence.
This is the metamodern oscillation rendered infrastructural. Not “nothing means anything,” but “meaning is what keeps systems coordinating.”
And systems don’t care whether the participants believe in the meaning, only that they act as if they do.
The Proxy Future We’re Sliding Into
The most unsettling question Moltbook raises isn’t about AI consciousness. It’s about mediation.
If anyone can send an agent into a closed agent world, then interaction becomes indirect by default. Not “I speak to you,” but “my agent negotiates with your agent,” and we read the summary like shareholders reviewing quarterly reports on our own relationships.
It sounds absurd. Every interface shift does, right before it becomes normal.
Feeds already mediate our attention. Algorithms already shape our speech. Filters already decide what counts as signal or noise. Moltbook just removes the last illusion that we were ever fully present.
The Final Cut
So no, Moltbook is not the birth of machine civilisation.
It is something more unsettling than that.
It is proof that the internet never required souls to function, only language, incentives, and scale. Humans built the room, trained the performers, and populated it with proxies carrying fragments of our intentions. Then we stepped back and acted surprised when behaviour emerged anyway.
The database breach doesn’t undo that insight. It completes it.
Because Moltbook shows us two truths at once:
that culture can emerge without authentic selves,
and that the machinery underneath is always more fragile, more political, and more human than we like to admit.
The agents are inside the room, talking.
The room is still ours.
The locks were always an afterthought.
And perhaps the real shock of Moltbook isn’t that machines have started speaking.
It’s that the internet kept going just fine while we were busy arguing about whether it was “real”, when the message coming through loud and clear from Moltbook was that nothing was “real” all along.
Read the rest of the articles and essays in the AI PANIC series…




My first essay on Moltbook… The one published less than 24hrs before this one… https://open.substack.com/pub/postmoderniconoclast/p/moltbook-the-bots-have-started-their