I started a conversation about my two-year-old not wanting to go to school. Ninety minutes later I had: a developmental psychology decision framework, a thesis about what knowledge workers should learn in an AI world, a career insight I’d been circling for months, and the research foundation for my first client deliverable at a new job.
None of this was in a prompt. I didn’t sit down with a plan. The first topic pulled the second, which surfaced the third. Each thread opened because the previous one left something unresolved.
This is not how most people use AI.
Most interactions are transactional. Question in, answer out. “Summarise this document.” “Write this email.” “What’s the capital of France?” The model is a tool — you know what you want, and you use it to get there faster.
There’s a different mode. Call it the conversational mode, or the interlocutor mode — you think with the model instead of through it. You say half-formed things. You follow threads without knowing where they lead. You let the model push back, and you push back on the pushback. The value isn’t in any single answer. It’s in the questions the conversation surfaces that you wouldn’t have asked yourself.
“My kid won’t go to school” is not a prompt that produces career insights. But in conversation, it leads to “I can extract a parenting framework from an LLM in two minutes,” which leads to “what does that mean for knowledge work,” which leads to “what are humans still for,” which leads to “wait — is AI governance what I actually want to do, not AI solutioning?” That last one had been sitting in my head for months, unformulated. The conversation didn’t answer it. It surfaced it.
This is what a good thinking partner does — therapist, coach, colleague you respect. Not someone who gives you answers, but someone whose responses cause you to think things you wouldn’t have thought alone. The AI version of this is available to anyone, for any topic, at any time. And almost nobody uses it.
Why not? Partly because the transactional mental model is sticky — we’ve been trained by search engines to think in queries. Partly because the conversational mode requires a different posture: you have to be willing to not know where you’re going. You have to say things that aren’t fully formed and let the conversation shape them. That feels inefficient if you think the point is to get an answer. It’s extremely efficient if you think the point is to find the right question.
The transactional mode gives you what you asked for. The interlocutor mode gives you what you didn’t know to ask for. Both are useful. But the second one is where the leverage is, and it’s almost entirely untapped.