This started with my two-year-old refusing to go to school.
I mined an LLM for developmental psychology frameworks, got a structured decision guide in two minutes, and realised the output was good enough to act on. Then I asked what’s left for humans when the knowledge layer is commoditised, and landed on field experience, judgment, accountability, taste.
Reassuring. For about ten minutes.
Because there’s no reason any of those are permanent. Field experience is pattern recognition accumulated across many contexts; models with agency and long memory are already building it. Judgment under ambiguity is calibration plus context weighting: computable in principle. Taste is predicting what will resonate with an audience: solvable with enough data. Accountability (someone has to be fired when it goes wrong) might stay human for legal and political reasons long after the actual judgment is better done by a machine.
Every layer humans retreat to, AI follows. Not today, not all at once, but the direction is clear and there’s no obvious floor.
Which brings me back to my son.
He’s two and a half. He’ll enter the workforce around 2045. What does that world look like? I have no idea. Nobody does. But I can see enough of the trajectory to know this: anything I teach him as a specific skill — coding, analysis, writing, even “critical thinking” as it’s currently framed — has a shelf life that’s probably shorter than his career.
So what do you optimise for when you can’t predict which skills will hold value?
I think the answer is something like: the capacity to adapt faster than the environment changes. Not any particular knowledge, but the meta-skill of reading new rules quickly, abandoning old ones without an identity crisis, and finding the seam of value in whatever landscape exists at the time.
Concretely, for a toddler, that probably means:
Comfort with not knowing. Most education optimises for having the answer. The more valuable skill is being functional while not having it — tolerating ambiguity long enough to figure out which question actually matters.
Agency over compliance. “Do what you’re told” is training for a world where humans are the executors. In a world where AI executes, the value is in deciding what to do. Kids who are allowed to choose, fail, and course-correct develop something that kids who follow instructions well don’t.
Reading the room as it changes. Not social skills in the “be likeable” sense — more like situational awareness. What’s valued here, right now, by these people? That’s a moving target and the ability to track it is more durable than any fixed competence.
Identity that isn’t anchored to what you do. If your sense of self is “I’m a data scientist” or “I’m a consultant,” you’re fragile to the thing being automated. If it’s something more fundamental — curiosity, integrity, craft in whatever medium is available — you can let go of the specific role without falling apart.
I don’t know if these are right. I’m reasoning from first principles about a future I can’t see, which is exactly the kind of thing I’d normally tell people not to do. But the alternative — raising a child as if the world will look roughly like this — seems worse.
The honest version is: I don’t know how to prepare him. I just know that optimising for any particular skill is a bet on a future that’s changing faster than the skill can be acquired. So I’m trying to optimise for the thing underneath the skills — the ability to find footing in a world that won’t stop moving.
He’s two. He doesn’t want to go to school. The research on school refusal says this is normal and will pass in two weeks. The AI research says the world he’s being prepared for might not exist by the time he gets there.
Both of these things can be true. You still walk him to the gate, say the same two sentences, and leave without looking back.