Your AI vendor just became a political statement.
For the last several years, enterprise AI vendor selection was primarily a technical and commercial decision: capability benchmarks, pricing, API reliability, support quality. The values questions — what does this company believe, what will it refuse to build — were present in principle but rarely determinative in practice.
That’s changing, and the change is structural rather than episodic.
When a major AI provider refused a significant government contract on grounds of incompatibility with its responsible AI commitments, and a competing provider accepted it, the alignment schism in frontier AI became a procurement-visible event. The two leading providers now have clearly different answers to the question “what will you build for military applications?” That’s not a philosophical abstraction for enterprise risk managers — it’s a due diligence question with a concrete answer that maps onto institutional ESG commitments, reputational risk appetite, and regulatory obligations.
For banks specifically, the implications compound. Regulatory frameworks in several jurisdictions now require AI vendor risk management that includes assessment of the vendor’s usage policies and their stability over time. MAS guidance on AI risk management explicitly addresses vendor dependency and operational resilience in ways that go beyond traditional SLA coverage. A vendor whose ethical red lines shift — or who accepts work that creates reputational exposure for its enterprise clients — poses a category of vendor risk that procurement frameworks weren’t designed to evaluate.
There are three distinct risk categories that standard vendor due diligence is currently ill-equipped to evaluate.
The first is geopolitical availability risk. A vendor can become operationally inaccessible through government action — licensing restrictions, sanctions, forced divestiture — on timescales that don’t allow for orderly migration. The probability of any specific restriction is low; the consequence of being caught mid-deployment when it happens is severe. Enterprise AI deployments that depend on a single provider without a migration plan carry this risk implicitly. Most current risk assessments don’t quantify it.
The second is total cost of ownership repricing risk. Current AI pricing reflects a period of aggressive market-share capture. The companies providing frontier model access are, in many cases, pricing below the long-run sustainable cost in order to drive adoption. A significant pricing correction — possible when competitive dynamics shift or capital allocation pressures increase — could make current deployment economics unviable faster than deployments can be redesigned. Switching costs, once an enterprise has integrated AI deeply into workflows, can be substantial. Locking in at current prices without contractual protections on repricing is a risk that looks small today and could look large in eighteen months.
The third is the alignment schism itself. As the leading AI providers increasingly differentiate on the basis of what they will and won’t build — not just what they can build — enterprise clients are implicitly co-signing those positions through their procurement decisions. For institutions with explicit values commitments — sustainability policies, human rights due diligence requirements, responsible technology frameworks — this is no longer a hypothetical. It’s a procurement decision with ESG implications that procurement teams and risk committees may not have fully registered.
The response isn’t to freeze vendor decisions while waiting for clarity that won’t arrive. It’s to add a layer to vendor due diligence that wasn’t there before: explicit evaluation of the vendor’s ethical commitments, its track record of maintaining them under commercial pressure, and the organisation’s own values compatibility with those commitments. This assessment won’t produce a clean score. But an organisation that has done it, and documented it, is positioned to defend its vendor decisions to regulators, boards, and stakeholders who will increasingly ask about them.
AI vendor selection was a technical decision. It’s still a technical decision. It’s also now a values decision, and institutions that haven’t updated their due diligence frameworks accordingly are making the values decision implicitly rather than explicitly.
P.S. The most useful framing for the internal conversation: “If our vendor publicly takes a position we would not take, what is our exposure?” That question — asked before signing rather than after — surfaces the values alignment issue in terms that procurement, legal, and risk functions can engage with.