The governance gap in humanitarian AI: the structural divide between global frameworks and operational reality

The first instalment of SAFE AI – the Standards and Assurance Framework for Ethical AI

Helen McElhinney (Founding architect, SAFE AI / CDAC Network), Anjali Mazumder (The Alan Turing Institute) & Michael Tjalve (Humanitarian AI Advisory)

AI is becoming operational infrastructure in humanitarian action. It determines who receives aid, who is flagged, who is excluded.

The governance architecture that should oversee those decisions does not yet exist at sector level.

This briefing establishes the nature and scale of that gap, why it matters, and why individual agency policies cannot close it alone. It sets the analytical foundation for the SAFE AI Framework, arriving May 2026.

No one has yet asked, on behalf of affected populations, what humanitarian AI is for, who approved its use, or against what standard that approval was made. That question will come. The sector should be ready to answer it.
— The Governance Gap in Humanitarian AI

The gap is structural

Global AI frameworks – the EU AI Act, the NIST AI Risk Management Framework, the OECD AI Principles – provide important foundations, but they were not designed for fragile and conflict-affected environments: settings with adversarial information ecosystems, limited avenues for redress, and no visibility of the systems that influence decisions about you.

Individual agency policies cannot close this gap. The most governance-capable organisation in the sector cannot independently verify its own claims, provide assurance to external partners, or resolve accountability gaps when systems are procured from vendors whose architecture is not fully visible. If the most capable actor cannot close the gap alone, the gap is structural.

The briefing identifies the specific factors that compound the gap – from dual-use data risk and ungoverned adversarial information environments to deepening accountability asymmetry and responsibility without control – which together show that this is not a failure of intent or design but a structural problem no single actor can resolve.

When those decisions cannot be explained or contested, impartiality weakens. When data flows are governed elsewhere, independence narrows. When there is no route to redress, humanity becomes conditional.
— The Governance Gap in Humanitarian AI

Why independent assurance is the answer

Aircraft certification involves regulators and third-party inspectors. Drug safety requires independent clinical review. Credibility depends on separation between those providing assurance and those whose systems are assessed.

The same logic applies here. 

Independent humanitarian AI assurance requires:

  • Evaluation against the failure modes that matter in these specific contexts.

  • Transparency obligations that apply before deployment, not after.

  • Proportionate human oversight calibrated to risk tier.

  • A guaranteed ‘right to know’ that an AI system is being used, that it is influencing decisions about you, and that there is a route to challenge it. Where that does not exist, the audit must say so.


Why this investment is necessary now

  • For innovation. Governance infrastructure gives organisations permission to act rather than reasons to stall, and positions them to integrate the next wave of AI capabilities within a framework that already exists.

  • For risk management. No agency should carry sole accountability for high-stakes AI deployment. Independent assurance distributes that accountability across a shared architecture. When something goes wrong, the question shifts from “Why did you do this?” to “Did you follow the standard?” That distinction matters most at the moment of failure.

  • For political accountability. Aid spending is under sustained parliamentary and public scrutiny, sceptical of perceived waste or opacity in decision-making. A sector that cannot answer questions about its own AI governance is exposed on two fronts simultaneously: the use of AI itself, and the absence of oversight.

  • Because smaller organisations can’t do this alone. Without shared assurance infrastructure, AI capability will track funding and amplify existing power asymmetries. With the right shared architecture, localisation commitments can actually be delivered.

The SAFE AI Framework launches May 2026

This briefing establishes the governance problem. The SAFE AI Framework, launching May 2026, provides the deployment-level tools to address it. But the framework alone is not the destination.

The governance task the sector must now invest in is building independent assurance infrastructure: adequately resourced, structurally separate from the organisations selling or deploying AI, and capable of verifying claims rather than merely receiving them.

That is what SAFE AI Phase 2 builds: the humanitarian AI assurance and auditing hub the sector currently lacks.

To engage directly on Phase 2, contact our founding architect at Helen.McElhinney@cdacnetwork.org

The principles are there. The framework is arriving. Whether the governance gap closes now depends on the sector investing in making its commitments verifiable.

Register your interest using the form below and we’ll notify you when the SAFE AI Framework launches.
