Taste is not a preference. Taste is a preference that has been tested against consequences.
This is the distinction that AI fluency in aesthetic domains obscures. An AI system can produce prose that flows, images that are visually coherent, music that has structure and variation. It can tell you what makes a good opening sentence, identify the principles that distinguish elegant code from cluttered code, and apply them consistently across millions of examples. None of this is taste. It's pattern application: sophisticated, fast, often technically correct, yet missing something that becomes obvious when you watch someone with actual taste make a difficult call.
The person with taste has been wrong before, in a context where being wrong cost them something. The journalist who wrote the bad headline and watched a story fail. The designer who chose the wrong typeface and heard from the client. The engineer who chose the elegant-but-slow data structure and had to explain the production incident. These experiences don't just add information — they reshape judgment at a level that observation alone doesn't reach. The feedback loop that creates taste is the feedback loop that creates accountability.
AI systems have none of this. They’ve processed millions of examples of human judgment, including the judgments of people with real taste, but they haven’t inhabited the moment of making a consequential call under uncertainty and finding out what happened. They’ve studied the map but never walked the terrain.
This matters more in some domains than others. In tasks where the criterion for success is explicit and verifiable — does this code pass the tests, does this translation preserve the meaning, does this summary capture the main points — the absence of stakes doesn’t meaningfully limit AI performance. The feedback loop that would create taste isn’t the relevant one. The evaluation criteria are sufficient.
In tasks where the criterion for success is implicit and contextual — does this design convey the right emotion, does this argument land with this specific audience, does this product feel trustworthy — the absence of stakes becomes the limiting factor. These are the domains where having been wrong before, in a way that mattered, shapes judgment in ways that pattern application from examples doesn’t replicate.
The practical implication for how AI fits into creative and strategic work is not that AI can’t contribute to taste-dependent decisions — it clearly can, by surfacing options, testing variations, and applying consistent principles at scale. It’s that the consequential judgment still requires someone who has skin in the game. Not as a review step grafted onto an AI workflow, but as a genuine locus of accountability. The person whose aesthetic judgment gets tested against outcomes, who updates based on those outcomes, and who develops taste through that update process.
This also suggests something about the kinds of creative work that are most resistant to AI substitution. Work where taste is the primary differentiator — where the criterion for success is whether it’s right in a way that only someone with accumulated skin in the game can evaluate — is the category where AI amplifies human capability most clearly and substitutes for it least cleanly.
Not because the AI lacks intelligence. Because it lacks the specific kind of knowledge that comes from having made consequential calls and lived with the results.
P.S. The clearest test for whether a creative or strategic judgment requires taste: could you specify, in advance, the criteria for evaluating whether it was a good call? If you can, AI can probably apply those criteria as well as a human. If you can’t — if “you’ll know it when you see it” is the most accurate description — taste is doing the work, and taste requires stakes.