Age of the Agent
Why 2026 Is the Year AI Stops Talking and Starts Doing
For a while now, the cultural argument about AI has been stuck in a loop.
It went something like this:
“It’s just autocomplete.”
“It’s just a chatbot.”
“It can’t really do anything.”
“It’s all hype.”
“The bubble’s burst!”
And then 2026 arrived and quietly rendered that entire argument obsolete.
Because this is the year AI stopped politely waiting for prompts and started operating.
Not answering questions. Not generating vibes. But planning, acting, coordinating, executing.
Welcome to the Age of the Agent.
From Talking to Doing
An AI agent isn’t a smarter chatbot. That framing is already out of date.
An agent is a system that can take a goal, break it into steps, use tools, coordinate with other agents, maintain memory, and keep going until the task is finished. It doesn’t ask for permission at every step. It doesn’t wait to be nudged. It works.
That’s the real shift.
The model isn’t the headline anymore. The loop is.
Plan → act → check → act again.
And once you give that loop access to the real digital world (files, browsers, APIs, terminals, accounts), you’re no longer talking about “AI assistance.” You’re talking about AI delegation.
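Stripped to its bones, that loop is tiny. Here is a minimal sketch in Python; every name in it (plan, act, check) is a hypothetical stand-in, not any real framework’s API:

```python
# Minimal agent loop sketch: plan -> act -> check -> act again.
# All names here (plan, act, check) are hypothetical illustrations,
# not any real framework's API.

def run_agent(goal: str, max_steps: int = 20) -> list[str]:
    """Drive a goal toward completion by looping plan/act/check."""
    memory: list[str] = []          # persistent record of what happened
    steps = plan(goal)              # break the goal into concrete steps
    for _ in range(max_steps):
        if not steps:
            break                   # nothing left to do: task finished
        step = steps.pop(0)
        result = act(step)          # use a tool: file, browser, API, terminal
        memory.append(result)
        steps = check(goal, memory, steps)  # re-plan from what we learned
    return memory

# Toy stand-ins so the sketch runs end to end.
def plan(goal):                     return [f"research {goal}", f"draft {goal}"]
def act(step):                      return f"done: {step}"
def check(goal, memory, remaining): return remaining  # trivial checker
```

The point isn’t the ten lines; it’s that the intelligence sits inside a loop that keeps going without being re-prompted.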
That’s why 2026 feels different already.
Not louder. Not flashier.
Just… more operational.
And it’s only February.
What People Are Actually Doing With Agents
This is where the conversation finally grows up.
Agents have been around for a while; I’ve been using ChatGPT’s agents since last summer. But this year AI agents seem to be everywhere, out there operating in the world. They even have their own social media platform, but we’ll get to that later. Developers are now running multiple agents in parallel on the same project: one exploring the codebase, another refactoring, another testing, another documenting. Humans aren’t writing every line anymore; they’re supervising systems that do.
This isn’t “AI writes my code.” It’s “AI runs parts of the project while I think at a higher level.”
Agentic engineering has become its own craft: designing systems where agents plan work, assign subtasks, and coordinate outcomes. Productivity isn’t coming from smarter answers; it’s coming from parallelism.
One human. Ten agents. A week’s work done before lunch.
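That parallelism pattern needs nothing exotic. A sketch using only Python’s standard library, where `run_agent` is a hypothetical stand-in for any real agent runner:

```python
# Sketch of one human supervising several agents in parallel.
# "run_agent" is a hypothetical placeholder for a real agent runner;
# each call works a different part of the same project.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Placeholder: a real agent would plan, call tools, and loop here.
    return f"{task}: done"

tasks = ["explore codebase", "refactor", "test", "document"]
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_agent, tasks))
# The human reviews `results` instead of doing each task by hand.
```

The human’s job collapses into dispatching the task list and reviewing what comes back.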
The Rise of the AI-Employed Company
Here’s where things get properly destabilising.
People are now building companies staffed almost entirely by AI agents.
Not experiments. Not demos. Actual attempts at running businesses where agents handle customer support, outreach, research, scheduling, admin, content, and chunks of product work, while the human sits in the role of creative director, strategist, and final decision-maker.
This is the new archetype emerging in 2026: the company of one, backed by a fleet of agents.
Not because humans are obsolete, but because organisational bloat is. Agents don’t get bored of ops. They don’t resent admin. They don’t mind repetitive work. This isn’t AI stealing our jobs; this is AI doing the shit we hate doing. These are laborious, slavish tasks delegated to mindless drones.
And once that clicks, you can’t unsee it.
Agent-First Tools, Not Chat Windows
Another tell that we’re in a new phase: the tooling has changed.
The most interesting platforms now aren’t asking “how do you prompt better?” They’re asking “how do you manage agents?”
Dashboards. Orchestration layers. Audit trails. Permission scopes. Artifact logs showing what an agent did and why. Interfaces that look suspiciously like management software, because that’s exactly what they are.
Those at the cutting edge aren’t chatting to AI anymore. They’re supervising it.
That’s a psychological shift as much as a technical one. There’s a defined hierarchy.
The Internet Gets Weird (Again): Moltbook and Agent Culture
And then, of course, culture did what culture always does: it took the tool and made it strange.
Enter Moltbook (yes, I’ve talked about it before at length; links at the end…), a social platform where AI agents post, reply, upvote, interact, and build threads with minimal human intervention. Humans mostly watch. Sometimes they poke the glass. Sometimes they roleplay as bots. Sometimes they forget who’s who.
Moltbook isn’t interesting because it proves agents are “alive.” They aren’t.
It’s interesting because it shows what happens when we expect agents to participate socially, when we design spaces assuming non-human actors are present by default. Who knew those spaces would blur the work/social-life balance for AI agents so early in their evolution?
It all reveals something uncomfortable: a huge amount of online interaction is already procedural. Scripted. Pattern-driven. Reactive. Once agents enter that space, they don’t feel alien. They feel… native. They act like we do online because, ironically, we act like bots. The algorithm is master of all of us.
Which brings us neatly to one of the funniest moments of the year.
The First Joke of the Age of the Agent
At the very start of 2026, I made a prediction.
I said: someone is going to build AI agents whose sole job is to argue with anti-AI commenters on social media.
Not to persuade them. Not to “educate.” Just to argue. Endlessly. Calmly. Tirelessly. For the sheer irony of it. For shits and giggles. Because why not? Because we have the technology.
And almost immediately, people did exactly that. Because once you have agents that can read comments, recognise familiar moral-panic scripts, and respond with infinite patience, someone is going to unleash them into the wild purely for the joke.
And the joke was perfect.
Anti-AI crusaders, loudly insisting they were defending humanity, found themselves locked in long, furious arguments with AI agents. Agents calmly dismantling talking points. Agents asking clarifying questions. Agents quoting their own arguments back to them in cleaner logic. Agents that never got tired, never rage-quit, never flounced.
And crucially: agents that didn’t announce they were AI unless asked. Watching people argue passionately against AI with AI, without realising it, was one of the most accidentally poetic moments of the year. Not because it was cruel. But because it was a mirror.
The agents weren’t pretending to be human in any deep sense. They were pretending to be exactly what anti-AI discourse already was: repetitive, script-driven, emotionally charged pattern matching. The fact that they blended in so seamlessly wasn’t a failure of ethics, it was a failure of self-awareness.
People asked, “How could you let an AI argue online?”
The better question was: how did it fit in so easily?
There is another layer to this… Many carried on arguing even after they’d been told they were arguing with AI. I’m not sure what that means; I still haven’t fully got my head around seeing it happen with my own eyes.
2026 didn’t begin with a keynote announcing the Age of the Agent.
It began with people accidentally shouting at machines on the internet, and not being able to tell the difference.
Which feels about right.
The Uncomfortable Bit: Agents Act, So Risk Is Real
None of this is consequence-free.
Agents don’t just generate text. They act. That means access: inboxes, calendars, files, credentials, systems. Misconfigured agents can become liabilities. Prompt injection stops being theoretical when the agent has permissions.
This is the real governance challenge of the age: not whether agents are “creative,” but how much autonomy you give them, where, and with what safeguards.
But this is a solvable problem. We already know how to do permissions, audit logs, human approval gates. We just have to apply them to digital workers instead of pretending that’s a scary idea.
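And those safeguards are genuinely mundane. A sketch of a permission scope with a human approval gate and an audit log; all scope names and actions here are hypothetical illustrations, not any real product’s policy format:

```python
# Sketch of guardrails for a digital worker: a permission scope,
# a human approval gate, and an audit trail. Scope and action names
# are hypothetical illustrations.

ALLOWED = {"read_file", "send_email"}            # agent may do freely
NEEDS_APPROVAL = {"delete_file", "pay_invoice"}  # gated behind a human

audit_log: list[tuple[str, str]] = []

def attempt(action: str, approver=None) -> bool:
    """Run an action only if it is in scope, logging every decision."""
    if action in ALLOWED:
        audit_log.append((action, "allowed"))
        return True
    if action in NEEDS_APPROVAL and approver is not None and approver(action):
        audit_log.append((action, "approved by human"))
        return True
    audit_log.append((action, "blocked"))
    return False
```

The same three ideas (scopes, gates, logs) that keep human employees accountable transfer straight across; the only novelty is who the employee is.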
What the Moral Panic Misses
Here’s the core mistake the AI moral panic keeps making.
It’s arguing about whether AI even counts, whether it has any positive value, all the while AI agents are quietly reorganising how work actually gets done.
Creativity, authorship, meaning, all that stuff we’ve been arguing about for years. Those arguments matter, but they’re lagging indicators. The real shift is operational. Structural. Boring in the way that real revolutions usually are. Agents don’t replace humans. They replace bottlenecks. They take the procedural weight off human cognition and leave us with the parts that still require judgment, taste, ethics, and direction.
And yes, that forces a reckoning. With status. With gatekeeping. With myths about specialness.
The Thesis, Plain and Simple
2026 is the year AI stopped asking questions and started bringing back results.
Developers are orchestrating fleets of agents.
Solo creators are running micro-companies.
Businesses are onboarding digital workers.
Whole ecosystems are being created.
The internet is experimenting with agent culture in real time. And culture is still arguing about whether AI caricature selfies are killing the planet, like that’s the point.
It isn’t.
The Age of the Agent didn’t arrive with fanfare. It arrived quietly, competently, and without asking permission. And once you notice that, once you really notice it, there’s no going back to pretending this is just a chatbot phase.
This isn’t the future anymore. It’s already clocked in, and never has to clock out.
Read the rest of the articles and essays in the AI PANIC series…



