Misconceptions of AI Agents
Most people still think of AI agents as fancy chatbots. This is like thinking of the early web as fancy brochures. They're not wrong in a literal sense: early websites were indeed digital brochures, and current AI agents are indeed sophisticated chat interfaces. But they're wrong in the same way people in 1995 were wrong about the internet.
The mistake isn't in describing what these things are now. The mistake is in assuming that what they are now tells you much about what they'll become.
Traditional software is deterministic. You write code, and it does exactly what you tell it to do. Even sophisticated algorithms follow preset paths. The code might be complex, but it's predictable.
AI agents break this pattern in a fundamental way. They have what I call "fuzzy agency." They don't just execute commands; they interpret them. They don't just follow paths; they explore possibilities. This is a bigger shift than most people realize.
Consider what happened with websites. In 1995, a website was something you put up to show people information about your business. That seemed to be their natural role. The idea that websites would become the primary way people interact with businesses would have seemed strange. The idea that the biggest retailers in the world would essentially be websites would have seemed absurd.
We're at a similar point with AI agents. Right now, they help you write emails or code or essays. That seems to be their natural role. But that's probably as accurate as saying the natural role of websites was to be digital brochures.
The really interesting developments will probably come from agent-agent interactions. Just as the web became more about software talking to other software (APIs) than about humans reading pages, the most profound applications of AI agents might come from them working with other agents rather than directly with humans.
This becomes obvious when you think about what agents actually are. They're not just interfaces to language models. They're software that can understand context, retain information, and make decisions. In other words, they're software that has agency.
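To make that claim a little more concrete, here is a minimal sketch of what "software with agency" looks like structurally: a loop that keeps context, consults a model, and decides between acting and answering. This is an illustration under assumptions, not any particular product's design; the `call_model` function is a hypothetical placeholder for whatever language model you'd actually use.

```python
# Minimal sketch of an agent loop: context + memory + decisions.
# `call_model` is a hypothetical placeholder for a real language-model call.

from dataclasses import dataclass, field


def call_model(prompt: str) -> str:
    """Placeholder: swap in a real LLM call here."""
    return "ANSWER: (model output would go here)"


@dataclass
class Agent:
    goal: str
    memory: list[str] = field(default_factory=list)  # retained context

    def step(self, observation: str) -> str:
        # Interpret the observation in light of the goal and prior context,
        # rather than executing a fixed, predetermined branch.
        self.memory.append(observation)
        prompt = (
            f"Goal: {self.goal}\n"
            f"Context so far: {self.memory}\n"
            "Reply with either 'ACT: <tool call>' or 'ANSWER: <reply>'."
        )
        decision = call_model(prompt)

        if decision.startswith("ACT:"):
            # The agent chose an action; a real system would dispatch it
            # to a tool and feed the result back into memory.
            self.memory.append(decision)
            return decision
        return decision  # the agent chose to answer directly


if __name__ == "__main__":
    agent = Agent(goal="Triage incoming support email")
    print(agent.step("New email: 'My invoice total looks wrong.'"))
```

The structure is the point: the state it carries and the choice it makes each step are what separate this from a script that merely executes commands.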
The implications of this are enormous, and mostly unexplored. When software can understand context and make decisions, entire categories of human work become different. Not necessarily automated away (that's too simplistic a view) but transformed in fundamental ways.
Think about what happened to retail. The web didn't just automate stores; it created entirely new forms of retail that weren't possible before. Amazon isn't just an automated Walmart; it's something qualitatively different.
The same thing will happen with AI agents, but probably in an even more profound way. Because while the web gave us new ways to move and organize information, agents give us new ways to process and act on it.
What will this look like? It's hard to say specifically, just as it would have been hard to predict Instagram from looking at early websites. But there are some hints if you look carefully.
One pattern I've noticed is that agents are particularly good at tasks that require understanding context and making judgment calls, but aren't worth a human's full attention. Things like deciding when to follow up on an email, or how to prioritize information, or when to escalate an issue.
These tasks are too nuanced for traditional automation but too numerous for dedicated human attention. This is a new category of work that couldn't really be addressed before. It's not about replacing humans or automating processes; it's about handling tasks that previously just didn't get done at all.
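A concrete way to picture this category: a small routine that decides whether an email thread deserves a follow-up, a task too fuzzy for a hard-coded rule but too frequent for a person to track. The sketch below fakes the judgment call with a placeholder `judge` function; in a real agent that is where a model would weigh the context.

```python
# Sketch of a "judgment call" task: deciding whether to follow up on a thread.
# `judge` is a placeholder for a model-backed decision; the rest is ordinary code.

from datetime import datetime, timedelta


def judge(thread_summary: str) -> bool:
    """Placeholder: a real agent would read the thread and weigh context here."""
    return "waiting on reply" in thread_summary.lower()


def needs_follow_up(thread_summary: str, last_message: datetime) -> bool:
    # Cheap deterministic filter first: nothing is overdue within three days.
    if datetime.now() - last_message < timedelta(days=3):
        return False
    # Then the fuzzy part: is this the kind of thread worth nudging?
    return judge(thread_summary)


if __name__ == "__main__":
    print(needs_follow_up(
        "Sent a quote, waiting on reply from the vendor.",
        datetime.now() - timedelta(days=5),
    ))  # True under these assumptions
```

The deterministic check is the part we could always automate; the `judge` call is the part that used to require a person, and therefore usually didn't happen at all.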
Another interesting pattern is emerging in software development. Programmers are starting to write code differently when working with AI agents. Instead of writing detailed implementations, they're writing high-level descriptions and letting agents handle the details. This isn't just automation; it's a fundamental shift in how software gets created.
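Here is a hedged sketch of what that workflow can look like: the programmer writes a specification in prose plus a few checks, and hands the implementation to an agent. The `generate_code` call is a stand-in for whatever agent or model API you'd actually use, and the hard-coded return value stands in for its output.

```python
# Sketch of spec-driven development with an agent:
# the human writes the intent and the checks; the agent fills in the body.
# `generate_code` is a hypothetical stand-in for a real code-generating agent.

SPEC = """
Write a Python function `slugify(title: str) -> str` that lowercases the title,
replaces runs of non-alphanumeric characters with single hyphens,
and strips leading and trailing hyphens.
"""

TESTS = [
    ("Hello, World!", "hello-world"),
    ("  Already--slugged  ", "already-slugged"),
]


def generate_code(spec: str) -> str:
    """Placeholder: a real agent would return source code satisfying the spec."""
    return (
        "import re\n"
        "def slugify(title):\n"
        "    return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')\n"
    )


def accept(spec: str, tests: list[tuple[str, str]]) -> bool:
    # The human's role shifts from writing the implementation
    # to writing the spec and verifying the result.
    namespace: dict = {}
    exec(generate_code(spec), namespace)  # run the generated source
    return all(namespace["slugify"](x) == y for x, y in tests)


if __name__ == "__main__":
    print(accept(SPEC, TESTS))  # True if the generated code passes the checks
```

The particular acceptance check doesn't matter; what matters is that the artifact the human produces is the spec and the tests, not the implementation.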
The really interesting question isn't "What can AI agents do?" but rather "What becomes possible when software has agency?" This is similar to how the interesting question about the web wasn't "How can we put brochures online?" but "What becomes possible when information can flow freely?"
I suspect we'll see entire new categories of applications that only make sense in a world where software can understand context and make decisions. Just as social networks only made sense in a world of ubiquitous internet connectivity.
If you're thinking about AI agents, don't focus too much on what they can do today. Instead, think about what becomes possible when software can understand what you mean rather than just what you say. That's where the interesting opportunities will be.
The funny thing is, we'll probably still call them "agents" long after they've evolved into something much different, just like we still "dial" phone numbers and "rewind" videos. The words will remain the same, but their meaning will have changed completely.
The firms that win in this space won't be the ones that build better chatbots. They'll be the ones that figure out what new things become possible when software has agency. Just like the firms that won the web weren't the ones that built better brochures; they were the ones that figured out what new things became possible when information became fluid.
It's still early. Most of what we think about AI agents will probably seem quaint in a few years. But that's exactly why it's interesting. The biggest opportunities are usually in the spaces where current thinking is most likely to be wrong.