I ran an experiment recently where I asked the same question to five AI models and to a single model working alone. The five-model council produced the same answer as the solo model every time. No divergence. No unique insight from the crowd. But here’s what caught me: I trusted the council’s answer more. Five independent intelligences converging on the same conclusion — that feels like something. It feels like truth getting triangulated.
It wasn’t. It was the same training data, refracted through slightly different architectures, producing slightly different prose around identical reasoning. The appearance of independent confirmation without the substance. I wrote about the experiment elsewhere, but the finding that stuck with me wasn’t about AI tooling. It was about how I process information in general.
Every input I encounter — a model’s analysis, a colleague’s opinion, an article, a podcast — shifts my prior beliefs. That’s how thinking is supposed to work. Bayesian updating. Rational, even. But there’s a failure mode that’s almost invisible from the inside: accumulating inputs that all share the same blind spot. Each one nudges your confidence higher without actually testing the underlying belief. You feel more certain. You are not more correct.
The five-model council is a perfect laboratory for this effect because the mechanism is so clean. Same training corpus, same RLHF patterns, same Western-educated-internet-user worldview baked in. But the same thing happens with human sources. Read five articles by authors who all cite the same three studies. Talk to five colleagues who all trained at the same institutions. The agreement feels like convergence. It’s actually just correlated noise.
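The correlated-noise point can be made concrete with a toy simulation. This is a minimal sketch with made-up numbers: five "independent" sources estimating the same quantity, each inheriting the same systematic bias plus a little private noise. Averaging them shrinks the noise, so the reports agree tightly and feel certain, while the shared bias survives intact.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 10.0   # the quantity the sources are trying to estimate
SHARED_BIAS = 3.0   # the blind spot every source inherits (hypothetical)
NOISE = 0.5         # small independent variation per source

def biased_source():
    """One 'independent' opinion: same systematic bias, different noise."""
    return TRUE_VALUE + SHARED_BIAS + random.gauss(0, NOISE)

reports = [biased_source() for _ in range(5)]
consensus = statistics.mean(reports)
spread = statistics.stdev(reports)

# Averaging shrinks the independent noise, so the reports agree tightly...
print(f"spread across sources: {spread:.2f}")
# ...but the shared bias is untouched: the confident consensus is still off.
print(f"consensus: {consensus:.2f}  vs  truth: {TRUE_VALUE}")
```

The tight spread is what agreement feels like from the inside; the gap between consensus and truth is what it's worth. Adding more sources drawn from the same pool narrows the spread further without moving the consensus an inch.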
The question I keep coming back to is: when does an input genuinely update my understanding versus merely reinforce what I already believe? And the uncomfortable answer is that I usually can’t tell the difference in the moment. Confirmation feels identical to discovery. Both arrive as a satisfying click of “yes, that’s right.”
There’s a useful partition that helps. Some questions have ground truth. Does this code run? Did the payment go through? Is this date correct? For these, the answer is always the same: stop discussing and test it. Run the code. Check the bank. No amount of confident opinion substitutes for contact with reality. The pathology of asking AI whether your code works instead of running it is obvious, but I catch myself doing versions of it all the time — seeking reassurance when I should be seeking evidence.
Then there are questions where ground truth doesn’t exist. Should I take this job? Is this the right architecture? Should I organise my reading by topic or processing mode? These are the dangerous ones, because they feel like they should have correct answers. Smart people discuss them as if there’s a right answer waiting to be uncovered. But what you’re really choosing is a framing — which lens is most useful, not which lens is most true. For these, the counterargument that survives sustained attack is more valuable than the consensus that accumulates undisturbed. The surviving counterargument has been tested. The consensus might just be five people who never thought to question the same assumption.
I’ve started noticing a specific sensation: the moment I feel certain about a strategic question. Not evidence-based certainty (“the test passed”) but opinion-based certainty (“this is clearly the right approach”). That feeling, I’m learning, is almost always my prior talking. It’s the accumulated weight of inputs that all pointed the same direction, creating a gravitational pull that feels like insight but is really just momentum.
The antidote isn’t more opinions. It’s not asking a sixth model, or reading a sixth article, or consulting a sixth person. It’s action. A decision tested against reality — shipped code, a conversation had, a strategy executed — teaches more in an afternoon than a year of deliberation. Not because deliberation is useless, but because deliberation without contact with reality is just a more sophisticated way of staying still.
Trust contact with reality over any number of opinions. Including this one.