Someone recommended I read Gerd Gigerenzer’s Risk Savvy. The thesis: simple heuristics often outperform complex analysis, because complex models overfit to noise when data is sparse. The less data you have, the simpler your decision rule should be.
I was sold immediately. It matches everything I’ve seen — in machine learning, in medicine, in career decisions. With millions of data points, let XGBoost find the signal. With fifty cases and fifty variables, a one-variable rule wins. Most human decisions — career moves, health choices, relationship calls — are small-sample, high-noise problems. Simple rules are not a compromise. They’re the correct strategy.
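The fifty-cases, fifty-variables claim is easy to check with a quick simulation. This is my own sketch, not an example from the book: fit ordinary least squares on all fifty variables versus a one-variable rule that just picks the predictor most correlated with the outcome, then compare held-out error.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, p = 50, 1000, 50

def make_data(n):
    """One real signal (variable 0), forty-nine distractors, heavy noise."""
    X = rng.normal(size=(n, p))
    y = 2.0 * X[:, 0] + rng.normal(scale=2.0, size=n)
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

# Complex model: least squares on all 50 variables.
# With n == p it fits the training data almost exactly, noise included.
beta_full, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
mse_full = np.mean((X_te @ beta_full - y_te) ** 2)

# Simple rule: regress on the single variable most correlated with y.
best = np.argmax(np.abs(np.corrcoef(X_tr.T, y_tr)[-1, :-1]))
slope = (X_tr[:, best] @ y_tr) / (X_tr[:, best] @ X_tr[:, best])
mse_simple = np.mean((slope * X_te[:, best] - y_te) ** 2)

print(f"full model test MSE:  {mse_full:.1f}")
print(f"one-variable test MSE: {mse_simple:.1f}")
```

On this setup the one-variable rule's held-out error sits near the irreducible noise level, while the full model's error is far larger: it has memorised the noise. The particular numbers depend on the seed, but the ordering is robust in this regime.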
So I asked: do I need to read the book?
The honest answer is that reading it would give me more confidence. More studies, more examples, more evidence that the thesis holds across domains. But confidence in this particular thesis is confidence that I should stop gathering more evidence and act on the simple version. The book’s own argument suggests I don’t need the book.
There’s a pattern here. Some books are containers for a transferable idea — you can extract it in a conversation, capture it in a note, and move on. The insight travels without the vehicle. Other books are experiences. The accumulated weight of evidence, the author’s voice, the slow build of a worldview — these can’t be summarised because the journey is the point.
Gigerenzer might be the second kind for some people. If you currently believe that more analysis always leads to better decisions, you need three hundred pages of counterexamples to dislodge that prior. A summary won’t do it. You’ll nod, agree in principle, and go right back to building spreadsheets.
But if you already act on simple rules — if you already treat your doctor’s prescription as a black box, delegate to AI agents and verify with tests instead of reading every line, default to honesty because it’s cheaper than managing lies — then the book is confirming what you already do. It gives you the vocabulary and the citations, but it doesn’t change your behaviour.
The vocabulary matters in exactly one context: when you need to explain your approach to someone who respects references. “This aligns with Gigerenzer’s work on ecological rationality” lands differently than “I just think simple rules work.” But that’s a communication tool, not a thinking tool.
I captured the thesis, the mechanism, and the boundary condition in a note. The bias-variance tradeoff explains why it works: simple rules accept some bias in exchange for much lower variance, and in small noisy samples the variance term dominates the error. The domain conditions explain when it breaks. That’s the extractable content. The rest is evidence I’m already persuaded by.
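For the record, the tradeoff that note leans on has a standard form. This is the textbook squared-error decomposition, sketched in my own notation rather than anything from the book:

```latex
\mathbb{E}\big[(\hat{f}(x) - y)^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathrm{Var}\big(\hat{f}(x)\big)}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{noise}}
```

Complex models shrink the bias term but inflate the variance term, and variance scales badly when samples are few and noise is high. A simple rule minimises the sum precisely by refusing to chase the bias term.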
The meta-lesson might be this: knowing which books to skip is itself a heuristic. And like all good heuristics, it saves you time precisely because it ignores most of the available information.