SAFE AI
Standards and Assurance Framework for Ethical AI
SAFE AI gives humanitarian organisations the governance infrastructure to deploy AI with confidence – managing risk, distributing accountability, and keeping communities in the loop. Free to use. Built for every humanitarian, not just AI engineers.
WHY NOW
Built for organisations that want to act – not stall
AI is already reshaping humanitarian operations: needs assessments, cash transfer targeting, anticipatory action systems that trigger financing before a crisis hits. For organisations navigating this fast-moving landscape, the question is no longer whether to engage with AI – it’s how to do so responsibly, without carrying the full weight of that risk alone.
Right now, there is no shared infrastructure to help you do that. Global frameworks like the EU AI Act were not designed for fragile and conflict-affected environments. Individual agency policies can only go so far. And as funding pressure mounts, the cost of getting AI wrong – operationally, reputationally, politically – is rising.
SAFE AI changes that calculus.
- No agency should carry sole accountability for high-stakes AI deployment. SAFE AI distributes that accountability across a shared architecture. When something goes wrong, the question shifts from “Why did you do this?” to “Did you follow the standard?”
- Governance infrastructure gives organisations permission to move forward rather than reasons to stall, and positions them to integrate the next wave of AI capabilities within a framework that already exists.
- Aid spending faces sustained parliamentary and public scrutiny. A sector that cannot answer questions about its own AI governance is exposed on two fronts at once: the use of AI, and the absence of oversight. SAFE AI closes both gaps simultaneously.
THE FRAMEWORK
How SAFE AI works
SAFE AI is an end-to-end assurance process that takes you through the four stages of implementing AI in a humanitarian context: (1) problem definition and concept; (2) design; (3) development; and (4) deployment and monitoring. It is scalable: a small organisation and a large agency can both use it, at whatever level of depth their risk profile requires. Each stage creates a traceable governance record.
Community in the Loop – embedded throughout
This is not a checkbox. Community in the Loop means affected communities have had meaningful input into the design, oversight and adjustment of AI processes – at every stage of the lifecycle. It is the governance principle that makes SAFE AI distinctively humanitarian.
GROUNDED IN HUMANITARIAN PRINCIPLES
SAFE AI doesn’t bolt humanitarian values onto a generic AI governance framework – it builds from them. Humanity, impartiality, neutrality and independence are not constraints on what AI can do in humanitarian contexts: they are the architecture of how it must be governed.
RESOURCES
SAFE AI in practice
The full framework launches 20 May 2026. Explore the research and guidance developed alongside it.
LAUNCHING 20 MAY 2026
The SAFE AI Framework
The full four-component governance framework for ethical AI in humanitarian action. Free to use. Available in English, French and Arabic. Scales to fit any organisation.

New to humanitarian AI? Start here
GLOSSARY
Plain-language definitions to help organisations build better, safer and more effective AI partnerships. No technical background needed.

The governance gap in humanitarian AI: addressing the structural gap between global frameworks and operational reality
BRIEFING PAPER
The analytical foundation for SAFE AI – establishing the nature and scale of the structural governance gap, and why individual agencies cannot fill it alone.

From experimentation to engagement
ACADEMIC PAPER
Evidence on the paradox of participatory AI and power in contexts of forced displacement and humanitarian crises.

Co-designing AI solutions with crisis-affected communities
HOW-TO NOTE
Practical guidance for meaningful community co-design of AI in humanitarian contexts.

Addressing power dynamics in participatory AI for crisis-affected communities
POLICY BRIEF
Research on the structural challenges of participation and power when AI meets forced displacement.

Co-design vs. user-centred design for AI solutions
FACTSHEET
A clear breakdown of the difference – and why it matters for responsible humanitarian AI.
THE CDAC STORY
Our journey to SAFE AI
CDAC’s commitment to community voice in humanitarian response predates AI – it is the foundation of everything we do.
As AI began reshaping humanitarian operations, we asked: what happens to participation and accountability when decisions are made by systems communities cannot see or contest? These videos trace that arc.
Perspectives on AI from Kakuma refugee camp
The film that started it all – we consulted crisis-affected communities in Kenya directly about AI governance, asking whether this moment can disrupt power imbalances rather than entrench them.
Dr Abeba Birhane: AI For Good?
A critical challenge to the sector at a watershed moment: AI for genuine social good must be built, controlled and owned by the communities it serves.
Nyalleng Moorosi: Data, power and participation
The full stakes laid out – AI promises participation at scale, but without ethical governance it risks deepening the inequalities and harms humanitarian action seeks to address.
OUR POSITION
What we advocate for
SAFE AI is a framework – but frameworks need backing. These are our three key asks of donors, governments and the sector to make humanitarian AI safe, accountable and ethical.
SAFE AI contributors

Helen McElhinney
Founding architect, SAFE AI
& Executive Director, CDAC Network

Anjali Mazumder
Research Director, AI, Accountability, Inclusion & Rights, The Alan Turing Institute

Michael Tjalve
Founder, Humanitarian AI Advisory

Sarah Spencer
Founding architect, SAFE AI

Suzy Madigan
CDAC Network expert consultant
Shruti Viswanathan
CDAC Network expert consultant
BE FIRST TO KNOW
The SAFE AI Framework launches
May 2026
Register to be notified when the framework goes live. Free for all humanitarian organisations.
This project has been funded by UK International Development from the UK government.