Two job ads. Same bank. Same week. Both with “AI” prominently in the title. Completely different jobs.
The first: Head of AI Adoption. Requirements listed IT programme management experience, stakeholder management, change delivery frameworks. No mention of machine learning, model risk, or technical AI work. This was a change management role dressed in AI language. Change management is a real and legitimate function — banks need people who can manage the operational and cultural change that AI adoption requires. But calling it an “AI” role without that clarity attracts the wrong candidates and repels the right ones.
The second: Head of AI and Data Applications. Requirements included data science leadership, ML platform experience, model deployment in production. This was a technical leadership role. The “AI” in its title was accurate, but it sat alongside the previous role’s “AI” in a way that could easily mislead.
Same institution. Same week. The AI job market has a labelling problem.
The problem runs in multiple directions. For candidates, it means that searching for “AI” roles surfaces a range of positions so varied in actual content that each one requires independent analysis to understand what it is. The title doesn’t carry information about whether the role requires technical depth, programme management capability, governance expertise, or something else entirely. You have to read the requirements list, not the title — and even then, the requirements sometimes mix technical and non-technical signals in ways that reflect internal confusion about what the role actually is.
For hiring managers, the inflation is self-defeating. “AI” in a job title now competes with so many other “AI” titles that the signal value of the designation has collapsed. The title that was meant to signal that this is a strategic, forward-looking role now just signals that the role has something to do with AI — which covers roughly forty percent of new job postings in financial services. The distinctive becomes generic.
For organisations, there’s a deeper problem underneath the hiring inefficiency. The fact that banks are posting externally for AI adoption programme managers is itself a signal: they have a mandate but no internal owner for the organisational change work. That’s a different problem from not having the technical capability. It suggests that in many institutions, the AI strategy has been treated primarily as a technology question, and the organisational change question — how do you actually get humans to work differently in an AI-augmented environment? — has been treated as downstream of the technical deployment, when it’s often upstream.
A useful categorisation that the AI job market hasn’t settled on: there are at least three genuinely different functions that get called “AI” roles. There’s the build function (ML engineering, model development, data science leadership). There’s the deploy function (AI product management, implementation, infrastructure). And there’s the change function (adoption, transformation, governance, training). All three are necessary. All three require different skills. All three are being hired under the same label.
The organisations that are hiring most effectively for AI capability tend to be the ones that have gotten clear about which of these functions they actually need and can describe the role accordingly. The ones posting vague “AI” titles are usually the ones still figuring out what they’re trying to do.
For candidates navigating this market: the requirements section is the job description. The title is marketing. Look at what’s being evaluated, not what it’s being called. And notice when the requirements list is internally inconsistent — mixing technical depth requirements with programme management experience requirements in ways that suggest the hiring manager hasn’t resolved what the role actually is. That inconsistency often persists into the role itself.
P.S. The most reliable signal that an organisation knows what it’s hiring for in AI: they can answer the question “what does success in this role look like in twelve months?” with something specific. “Driving AI adoption” is not specific. “Three production model deployments, with governance frameworks signed off by the model risk committee” is. The specificity of the success criteria predicts the specificity of the role.