The governance gap in humanitarian AI: addressing the structural divide between global frameworks and operational reality
The first instalment of SAFE AI – the Standards and Assurance Framework for Ethical AI
Helen McElhinney (CDAC Network), Anjali Mazumder (The Alan Turing Institute) & Michael Tjalve (Humanitarian AI Advisory)
Why this matters: decisions affecting the world’s most vulnerable people are increasingly shaped by AI systems, yet there is no sector-wide governance architecture to ensure safety, accountability and legitimacy
AI is now operational infrastructure in humanitarian action. It determines who receives aid, who is flagged, who is excluded. Yet the governance architecture to ensure these systems operate safely – in contexts defined by vulnerability, power asymmetry and limited avenues for redress – does not exist at sector level.
This is the central finding of SAFE AI’s inaugural briefing paper. The humanitarian sector has reached a tipping point: adoption is accelerating while governance infrastructure lags. Global AI frameworks provide important foundations, but they were designed for states, regulators and technology companies. They do not translate into the operational realities of organisations working in fragile and conflict-affected environments.
This briefing establishes the nature and scale of that gap, why it matters and why individual agency policies cannot close it alone. It sets the analytical foundation for the SAFE AI framework, arriving May 2026.
“No one has yet asked, on behalf of affected populations, what humanitarian AI is for, who approved its use, or against what standard that approval was made. That question will come. The sector should be ready to answer it.”
What this briefing establishes
The governance gap is structural. It cannot be addressed by individual agency policies, however rigorous. It requires infrastructure that operates above the agency level.
Global frameworks don’t translate. The EU AI Act, the NIST AI Risk Management Framework and the OECD AI Principles, for example, provide important foundations, but they were not designed for humanitarian operational conditions.
Verification is the critical weakness. Organisations are responsible for decisions influenced by systems they cannot independently scrutinise, test or explain.
Shared assurance is the solution. In a resource-constrained sector, shared infrastructure is the only realistic way to make governance feasible in practice without duplicating effort across agencies.
Incentives determine uptake. The main barrier to stronger AI governance is not a lack of guidance. It is whether governance aligns with existing incentives across donors, agencies and vendors. Governance must reduce burden, not add to it.
Where existing frameworks fall short
The briefing identifies four structural challenges that existing governance frameworks do not adequately address in humanitarian contexts:
AI intensifies an existing dual-use problem. Systems used for registration, targeting and needs analysis may rely on data and model infrastructure that can serve commercial, intelligence, or military purposes. In active conflict environments, the distinction between humanitarian and military AI is increasingly difficult to maintain – placing neutrality and independence at structural risk.
Adversarial information environments are ungoverned. AI systems in humanitarian contexts interact with contested information ecosystems where misinformation, manipulation and limited verification capacity are baseline conditions. No existing framework was designed for contexts where technical infrastructure is contested and the civilian/military boundary is unstable.
Accountability asymmetry is deepening. Humanitarian organisations are structurally more accountable to donors than to affected populations. AI systems risk entrenching this imbalance further: decisions are shaped by systems that are neither transparent nor contestable by those they affect. This asymmetry has documented, differentiated impacts on those already facing barriers, including women and girls.
Responsibility without control. Organisations remain accountable for AI decisions they did not design and cannot fully inspect. Proprietary systems, external platforms and concentrated cloud infrastructure create structural asymmetries that procurement alone cannot resolve.
“When those decisions cannot be explained or contested, impartiality weakens. When data flows are governed elsewhere, independence narrows. When there is no route to redress, humanity becomes conditional.”
The case for independent assurance: internal governance cannot verify itself
An organisation that designs, deploys, and assesses its own AI systems faces the same problem as a company that audits its own accounts. The output may be accurate, but without independence, there is no reliable way to tell. Drug safety relies on independent clinical review. Aircraft certification involves regulators and third-party inspectors, not manufacturers. In each case, credibility depends not only on the rigour of the process, but on the separation between those providing assurance and those whose systems are being assessed.
Even the sector’s most governance-capable organisation cannot independently verify its own claims, cannot provide assurance to external partners and donors, and cannot resolve accountability gaps when systems are procured from vendors whose architecture is not fully visible. If the most governance-capable actor cannot close that gap alone, the gap is structural.
This is the shift SAFE AI is designed to enable: from governance as process to governance as assurance.
What humanitarian AI auditing involves
Independent assurance requires a methodology built for humanitarian deployment conditions – not adapted from general-purpose frameworks. Four components constitute the core:
Context-specific evaluation: Testing against the failure modes that matter in these contexts – performance across low-resource languages, reliability in adversarial information environments and behaviour in the specific functions for which a system is deployed.
Transparency obligations: Access to training data provenance, documented failure modes, update policies and supply chain disclosure. Where a developer cannot provide this, the audit cannot proceed and the system cannot be recommended for humanitarian deployment.
Human oversight architecture by risk tier: From documented internal processes for lower-risk applications, to mandatory human authorisation at each decision point for systems influencing eligibility, prioritisation or protection (illustrated in the sketch below).
Community accountability mechanism: A meaningful pathway for affected people to raise concerns about system outputs, and for those concerns to result in adjustment, limitation or withdrawal. Where none exists, the audit finding must name its absence.
These components apply at deployment – and upstream, to whether systems are appropriate for humanitarian use before they are integrated into applications at all. This is the direction SAFE AI's Phase 2 auditing hub is designed to pursue.
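Purely as an illustration, and not as part of the SAFE AI framework itself, the risk-tiered oversight logic described above could be encoded as a simple checklist structure that an organisation checks a proposed use case against. The tier names, control fields and example use case below are hypothetical assumptions for this sketch; a real implementation would take its tiers and required controls from the framework when it launches.

```python
# Illustrative only: a hypothetical encoding of risk-tiered oversight rules.
# Tier names, control fields and the example use case are assumptions for this
# sketch; they are not drawn from the SAFE AI framework.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOWER = "lower"        # e.g. internal drafting or summarisation aids
    ELEVATED = "elevated"  # e.g. needs analysis feeding planning decisions
    HIGH = "high"          # e.g. eligibility, prioritisation or protection


# Controls echoing the four audit components, expressed as per-tier requirements.
OVERSIGHT_REQUIREMENTS = {
    RiskTier.LOWER: {
        "documented_internal_process": True,
        "human_authorisation_each_decision": False,
        "independent_audit": False,
        "community_feedback_pathway": True,
    },
    RiskTier.ELEVATED: {
        "documented_internal_process": True,
        "human_authorisation_each_decision": False,
        "independent_audit": True,
        "community_feedback_pathway": True,
    },
    RiskTier.HIGH: {
        "documented_internal_process": True,
        "human_authorisation_each_decision": True,
        "independent_audit": True,
        "community_feedback_pathway": True,
    },
}


@dataclass
class UseCase:
    name: str
    tier: RiskTier
    controls_in_place: set = field(default_factory=set)

    def missing_controls(self) -> list[str]:
        """Return the controls required at this tier that are not yet in place."""
        required = OVERSIGHT_REQUIREMENTS[self.tier]
        return [control for control, needed in required.items()
                if needed and control not in self.controls_in_place]


# Example: a high-risk use case lacking independent audit and a feedback pathway.
registration_triage = UseCase(
    name="registration triage scoring",
    tier=RiskTier.HIGH,
    controls_in_place={"documented_internal_process",
                       "human_authorisation_each_decision"},
)
print(registration_triage.missing_controls())
# -> ['independent_audit', 'community_feedback_pathway']
```

Representing the requirements as data rather than logic keeps the tier definitions themselves inspectable, which mirrors the point of the methodology: the rules an organisation is held to should be visible and auditable, not embedded in any one deployment.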
What SAFE AI provides: governance infrastructure for AI deployments in humanitarian contexts
✓ A governance architecture to translate global AI norms into operational tools for humanitarian contexts
✓ A risk-tiered framework for humanitarian AI use cases – proportionate, not one-size-fits-all
✓ Actionable controls across procurement, deployment, monitoring, and accountability
✓ The foundation for independent, third-party assurance of high-risk deployments
✓ A minimum standard achievable now – regardless of organisational capacity
SAFE AI is not a voluntary ethical code, a single organisation’s internal policy, a technology platform, a mandatory regulatory regime or a substitute for legal compliance or independent critical thinking.
The SAFE AI Framework is coming soon
This briefing paper establishes the problem. The SAFE AI Framework, launching May 2026, provides the architecture to address it: governance tools designed for the operational realities of humanitarian action.
Four reasons to invest in independent humanitarian AI assurance
To enable innovation. Governance infrastructure gives organisations permission to act rather than reasons to stall, and positions them to integrate new AI capabilities within a framework that already exists.
To share the risk. Independent assurance means the question shifts from “Why did you do this?” to “Did you follow the standard?” at precisely the moment the sector can least afford to be without an answer.
To relieve political pressure. Aid spending is under sustained political scrutiny. A sector that cannot answer questions about its own AI governance is exposed on both fronts at once: how it spends, and how it governs the technology it deploys.
To deliver on localisation commitments. Shared infrastructure and independent assurance are the only structural counterweights to AI capability concentrating where funding concentrates and amplifying existing power asymmetries.
Register your interest using the form below and we’ll notify you when the SAFE AI Framework launches.