I keep reading about the future of AI agents - specialized models for research, copywriting, design, each requiring its own prompt, its own review cycle, its own corrections. The demos show people orchestrating three different agents to produce a single PowerPoint slide. Twenty minutes of coordination for ten minutes of direct work.
This is the future they’re selling us: not the elimination of work, but its transformation into meta-work. Every knowledge worker becoming a middle manager, coordinating teams of specialized AI agents rather than doing the actual thinking. We’re witnessing what Dan Shipper recently described as a shift to an “allocation economy” - though I’d argue it’s more perverse than he suggests. We’re not just becoming resource allocators; we’re becoming middle managers in hierarchies of our own making, orchestrating intelligence we increasingly don’t understand.
The irony cuts deep. We built these tools to eliminate the coordination overhead that plagues modern organizations. Too many meetings, too much process, too many layers between idea and execution. Yet here we are, adding another layer - the human-to-AI translation layer, where every task requires its own little management hierarchy.
Consider what we’re actually doing when we use Claude Code or Cursor or any of these agent systems. We’re not programming anymore; we’re writing specifications for programmers. We’re not designing; we’re reviewing design proposals. We’re not writing; we’re editing drafts from entities that never quite understand what we meant. The promise was direct manipulation of ideas through natural language. The reality is indirect manipulation through an intermediary that requires constant supervision.
The shift reveals something uncomfortable about the nature of expertise. What we called “skill” was really two things intertwined: knowing what to do and knowing how to do it. AI agents are surprisingly good at the “how” - they can write syntactically correct code, grammatically proper sentences, structurally sound arguments. But the “what” remains stubbornly human. So we’ve split the atom of expertise, and in doing so, created a new kind of work: the endless specification of what we want, the constant course correction of agents that can execute brilliantly in the wrong direction.
This isn’t entirely new. Software development has been moving this way for decades. We went from assembly to C to Python to no-code platforms, each step abstracting away the “how” and focusing on the “what.” But there’s something different about delegating to AI. When you use a higher-level programming language, you’re still thinking in computational terms. When you delegate to an AI agent, you’re thinking in management terms: How do I specify this task? How do I quality-check the output? How do I provide feedback that will improve the next iteration?
The allocation economy Shipper describes isn’t just about using AI tools. It’s about a fundamental shift in what human work means. We’re all becoming prompt managers, our value measured not by what we can produce but by how effectively we can direct production. The carpenter becomes the foreman. The writer becomes the editor. The programmer becomes the architect who never touches code.
There’s a deeper anxiety here about what happens to craft. When you delegate the actual making to AI agents, you lose the feedback loop between hand and mind that defines mastery. The potter who never touches clay, directing an AI to shape it instead, loses something essential - not just the physical skill, but the way that skill informed their aesthetic judgment. How do you develop taste without developing technique? How do you know what’s possible without knowing what’s difficult?
Perhaps this is temporary. Perhaps we’re in an awkward adolescent phase where the tools are too unreliable to work without supervision, yet too useful to abandon. Perhaps future AI agents will be so capable that management becomes unnecessary - they’ll intuit what we want without elaborate prompting, course-correct without feedback, produce excellence without revision.
But I doubt it. Management exists not because human workers are incompetent, but because the gap between intention and execution is irreducible. Someone has to hold the vision. Someone has to make the trade-offs. Someone has to say “good enough” or “try again.” These aren’t failures of capability; they’re the essence of judgment.
So we’re all becoming middle managers, whether we wanted to or not. The question isn’t whether this is good or bad - it’s already happening. The question is what we do with this new role. Do we become better at articulating vision? Do we develop new languages for specifying intent? Do we find ways to maintain craft knowledge even as we delegate the crafting?
Or do we just accept that everyone gets a team now, even if that team is made of code - that the future of work is an endless series of status meetings with entities that never get tired, never push back, and never quite understand what we really meant?
Watch the demos carefully and you’ll notice something: they never save the prompts. Each interaction starts fresh, the entire management process repeated from scratch. Even our meta-work is becoming disposable.
P.S. Writing this required its own little management hierarchy - prompting Claude, reviewing drafts, redirecting when anecdotes rang false. The medium is the message: even critiquing the allocation economy requires allocating. There’s no outside anymore, only different depths of delegation.