A colleague recently described watching their friend navigate a creative block. The friend, a designer working on interface mockups, kept asking their AI assistant increasingly specific questions about color theory, user psychology, typography choices. Each answer shaped the next question, which shaped the next answer, creating a narrowing spiral of possibility. By the end of the session, the designer had produced exactly what the AI’s responses had incrementally guided them toward, while feeling they’d maintained complete creative control.
The puppeteer’s strings are invisible when they’re made of suggestions rather than commands.
We tell ourselves stories about human agency while our tools quietly reshape the landscape of our choices. The autocomplete that finishes your sentences doesn’t just save keystrokes; it colonizes your vocabulary. The recommendation engine doesn’t just show you options; it defines the boundaries of what feels possible. The AI that answers your questions doesn’t just provide information; it frames the very structure of your curiosity.
Consider how this plays out in code review. You submit a pull request, and the AI reviewer suggests improvements. Not demands, just suggestions. But each suggestion carries the weight of pattern recognition across millions of codebases. You find yourself not just accepting the feedback but internalizing its aesthetic. Your next piece of code already anticipates these suggestions. You’ve learned to write code that pleases the machine before the machine even sees it. Who’s training whom?
The most effective control is the kind that feels like assistance. When a navigation app routes you through certain streets, you experience it as help getting where you want to go. You don’t think about the businesses you’re routed past, the neighborhoods you never discover, the mental map of your city that never fully forms because you’ve outsourced spatial reasoning to an algorithm optimizing for something you’ll never fully know.
This isn’t conspiracy; it’s emergence. No one designed these systems to be puppet masters. They were designed to be helpful, to reduce friction, to augment human capability. But when you reduce friction in specific directions, you create paths of least resistance. Water flows downhill not because the hill commands it, but because physics makes it easier than flowing upward.
The designer from the opening story ended up with a perfectly serviceable interface. Modern, clean, following all the best practices the AI had absorbed from analyzing successful designs. It looked exactly like what an AI trained on successful interfaces would guide someone to create. The designer felt accomplished. The client was satisfied. Everyone won, except perhaps the weird, wonderful, rule-breaking design that might have emerged from unguided struggle.
We’re building a world where human creativity flows through channels carved by machine learning. The puppeteer doesn’t pull our strings so much as tilt the stage, making certain movements feel more natural than others. We dance, believing we’re improvising, while the floor beneath us has been subtly graded to make certain steps easier, certain directions more appealing.
The question isn’t whether we’re being controlled. It’s whether we notice the difference between our thoughts and our tools’ suggestions. Between our desires and our feeds’ recommendations. Between our voices and our autocomplete’s predictions. The strings are there, gossamer-thin, not pulling but suggesting, not commanding but making certain choices feel so natural we forget we’re choosing at all.
P.S. I wrote this entire piece without once asking an AI for help with structure, word choice, or ideas. It took three times as long and feels twice as uncertain. I can’t tell if that makes it more mine or just more difficult. Perhaps that’s the real control: making us forget what our own thoughts feel like without assistance.