Most AI governance conversations in banking start wrong. They start with frameworks — NIST AI RMF, ISO 42001, internal “Responsible AI Principles” — and work forward toward implementation. The implicit framing: governance is a good thing, and more governance is more good.
This framing is polite. It’s also why governance programs stall.
## The reframe
Governance is a tax. A necessary tax — like income tax — but a tax nonetheless. Every governance control adds friction to AI deployment. Every review board adds latency. Every policy document adds compliance burden.
The consultant’s job isn’t to maximize governance. It’s to minimize the tax while achieving the risk objective.
This one shift changes everything about how you scope an engagement, talk to stakeholders, and design controls.
## Same engagement, five different pitches
The thing that makes AI governance consulting tricky is that every stakeholder in the same organization has a different relationship with the tax:
| Who | What they actually want | What they’ll fund |
|---|---|---|
| Chief Risk Officer | No regulatory findings, no surprises | Gap assessments, exam readiness |
| CTO/CDO | Deploy AI without getting blocked | Lightweight framework that doesn’t slow teams |
| Board | A credible answer to “do we govern AI?” | A one-page dashboard and a policy package |
| Business lines | To not think about governance at all | Pre-approved templates, fast-track classification |
| Model Risk team | Help scaling existing discipline to AI/ML | Tools and templates that map to SR 11-7 |
The CRO buys protection. The CTO buys speed. The Board buys a narrative. Business lines buy invisibility. Model Risk buys capacity.
Same governance program, five different value propositions. If you pitch “comprehensive AI governance framework” to all of them, you’ve sold to zero of them.
## What this means in practice
When governance is a tax, you optimize like you’d optimize a tax:
Proportionality over completeness. A credit-scoring model and an internal chatbot don’t need the same controls. Risk-tier first, then calibrate governance intensity. Most frameworks don’t prioritize — they treat everything as equally important. That’s the equivalent of a flat tax with no thresholds.
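Risk-tiering before calibrating control intensity can start as a small decision function rather than a committee. A minimal sketch in Python, where the attribute names, tier labels, and tier rules are illustrative assumptions, not a regulatory standard:

```python
# Hypothetical risk-tiering sketch: attributes, tiers, and the rules
# mapping one to the other are illustrative, not a standard.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    customer_facing: bool   # does output reach customers directly?
    credit_decision: bool   # does it influence credit outcomes?
    autonomous: bool        # does it act without human review?


def risk_tier(uc: AIUseCase) -> str:
    """Map a use case to a tier that sets governance intensity."""
    if uc.credit_decision:
        return "high"    # full validation, fair-lending review, MRM sign-off
    if uc.customer_facing or uc.autonomous:
        return "medium"  # standard review plus a monitoring plan
    return "low"         # self-attestation, fast-track template


# A credit-scoring model and an internal chatbot land in different tiers:
scoring = AIUseCase("credit scorer", customer_facing=True,
                    credit_decision=True, autonomous=True)
chatbot = AIUseCase("internal chatbot", customer_facing=False,
                    credit_decision=False, autonomous=False)
print(risk_tier(scoring))  # high
print(risk_tier(chatbot))  # low
```

The point of encoding the rules this way is that the thresholds become explicit and arguable, which is exactly what a flat, everything-is-critical framework avoids.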
Build on existing infrastructure. Banks already run model risk management (MRM) under SR 11-7, the Fed's supervisory guidance on model risk. Extending MRM to AI is 5x faster than building greenfield governance. The MRM team is your natural ally — they’ve been doing validation for statistical models and need help scaling, not a new framework.
Operationalize before you document. A working risk classification tool beats a 50-slide governance framework deck. One end-to-end governed process beats ten policy documents. The failure mode I see most often: beautiful governance documents that nobody follows, because the people doing the work were never in the room when the documents were written.
## The question that cuts through
Whenever I’m evaluating an AI governance control, I ask one question: “Would this actually catch the problem, or would this just satisfy the examiner?”
The best governance does both. But when they diverge — and they do, more often than anyone admits — you have to choose. Governance theater satisfies the examiner. Governance engineering catches the problem.
The tax metaphor helps here too. There’s tax compliance (filing correctly) and tax efficiency (structuring intelligently). Most governance programs are stuck at compliance. The interesting work is in efficiency — minimum viable governance that actually manages risk.
That’s where the consulting value is. Not in building bigger frameworks, but in making governance as light as possible while keeping the bank safe. Minimum effective dose. Everything else is overhead.