The Knowledge Mining Gap

Most people use LLMs the way they used Google: type a question, get an answer. Some use them as writers: draft this email, summarise this document. Almost nobody uses them the way you’d use a subject matter expert you’ve hired for an hour.

The difference matters. A search gives you a fact. A draft gives you prose. But a structured debrief gives you a decision framework — the kind of thing that takes a consultant three weeks and a deck to deliver.

Here’s what I mean. My two-year-old refused to go to school for three days running. I know nothing about child developmental psychology. The search-engine approach would give me a parenting blog post titled “10 Things To Do When Your Toddler Won’t Go To School.” The writer approach would give me a reassuring paragraph. Instead, I ran a structured extraction:

  1. Probe — what actually drives school refusal at this age?
  2. Push past the first answer — “separation anxiety” is too broad. What are the distinct mechanisms?
  3. Find the bones — what’s the taxonomy? What are the failure modes? Where do parents make it worse?
  4. Distill — compress into a reusable reference I can consult at 7am when my son is crying.

Two minutes later I had five distinct mechanisms that look identical from the outside but need different responses, a drop-off protocol grounded in attachment theory, a table distinguishing normal phases from warning signs, and a developmental timeline by age. Not a blog post — a framework.
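If you'd rather script the debrief than type it out, here's a minimal sketch of the four steps above run as a single conversation. It assumes an OpenAI-style chat-completions client; the model name, system prompt, and question wording are placeholders of mine, not a recipe.

```python
# A sketch of the four-step debrief as one scripted conversation.
# Assumes an OpenAI-compatible chat endpoint; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{
    "role": "system",
    "content": "You are a domain expert being debriefed. "
               "Answer with structure: mechanisms, taxonomies, failure modes.",
}]

# The four steps, written as literal prompts.
PROBES = [
    "What actually drives school refusal in two-year-olds?",
    "'Separation anxiety' is too broad. What are the distinct mechanisms?",
    "Give me the taxonomy, the failure modes, and where parents make it worse.",
    "Compress everything above into a one-page reference I can scan at 7am.",
]

answer = ""
for probe in PROBES:
    history.append({"role": "user", "content": probe})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})

print(answer)  # the last reply is the distilled reference
```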

The shape of this problem is everywhere in knowledge work:

A manager handling their first performance issue. They need employment law heuristics, conversation frameworks, documentation requirements. What they actually do: read one HR article and wing it.

An engineer evaluating a vendor. They need procurement heuristics, contract red flags, total cost of ownership frameworks. What they actually do: build a feature comparison spreadsheet.

Anyone negotiating anything. They need BATNA structure, concession sequencing, anchoring research. What they actually do: recall that they read Getting to Yes once.

The common shape: you need a decision framework in a domain you visit infrequently. Not a fact. Not prose. A structured way to think about a class of problem. Books exist but take hours. Blog posts give you one person’s perspective. The model gives you the compressed structure of the field.

Why isn’t everyone doing this? Because the mental model is wrong. “Ask AI a question” is a search interaction. What I’m describing is an interview interaction — you’re treating the model as someone who’s read everything in the field and you’re debriefing them. You probe, you push, you look for the bones underneath the first answer.

The first answer is always too general. The second and third layers are where the structure lives. “What distinguishes this from that?” and “Where does this break down?” are the questions that surface the distinctions practitioners know but textbooks bury in chapter 7.
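To see the pattern outside the parenting example, here's a hypothetical set of domain-agnostic follow-ups, written as plain templates rather than any real library; the wording is mine, not a prescribed prompt set.

```python
# Hypothetical follow-up templates for the second and third layers of a debrief.
# Plain strings, no API: the point is the question shapes, not the tooling.
FOLLOW_UPS = [
    "Your first answer is too general. What distinct mechanisms sit underneath it?",
    "What distinguishes {a} from {b} in practice?",
    "Where does this break down? What are the common failure modes?",
    "What do practitioners know here that the textbooks bury in a late chapter?",
]

def probe(template: str, **slots) -> str:
    """Fill a follow-up template with the two things you want pulled apart."""
    return template.format(**slots)

# Example: pushing past an answer that lumped two ideas together.
print(probe(FOLLOW_UPS[1], a="the first mechanism", b="the second"))
```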

Can the model hallucinate? Yes. Are the specific numbers approximate? Sometimes. But the structure — the taxonomy, the failure modes, the key distinctions — reflects the weight of the literature. And for decisions where the cost of no framework exceeds the cost of an imperfect one (which is most decisions), the imperfect framework wins.

This has implications for consulting, which is largely the business of selling domain expertise to people who need it infrequently. If a client can mine 80% of the framework themselves in an afternoon, the value of the consultant shifts from “I know things you don’t” to “I’ve seen this go wrong in ways the literature doesn’t cover.” Field experience, pattern recognition across engagements, organisational context — that’s the 20% you can’t extract from a model. The 80% that’s in textbooks and papers? That’s already in the weights.

The gap isn’t in the technology. It’s in how people think about what these models are for. Stop searching. Start debriefing.