SAFE AI
Standards and Assurance Framework for Ethical AI
AI has the potential to accelerate safe innovation and change humanitarian action for the better. Yet there is a real risk that an underfunded, overstretched humanitarian sector will rush toward unsafe uses of AI to cut costs, with serious unintended consequences for vulnerable populations.
CDAC Network, The Alan Turing Institute and Humanitarian AI Advisory have partnered to launch the SAFE AI project: Standards and Assurance Framework for Ethical Artificial Intelligence. This initiative, funded by the UK Foreign, Commonwealth & Development Office (FCDO), will develop a practical, usable foundational framework for enabling responsible AI in humanitarian action.
We're creating practical AI compliance and regulation guidelines; developing technical assurance tools to check whether AI systems are fair and trustworthy; ensuring affected communities can participate (community-in-the-loop) and have a real say in how AI is used; and engaging with humanitarian organisations to build solutions that address their actual needs.
The governance gap in humanitarian AI: addressing the structural gap between global frameworks and operational reality
The first instalment of the SAFE AI Framework establishes the nature and scale of the humanitarian AI governance gap, why it matters and why individual agency policies cannot close it alone. It sets the analytical foundation for the full framework, arriving May 2026.
Click to read the briefing paper, or explore our other tools below.
THE FRAMEWORK
How SAFE AI works
SAFE AI is an end-to-end assurance process. It is scalable: a small organisation and a large agency can both use it, at whatever level of depth their risk profile requires. Each stage creates a traceable governance record.
Onboarding & Readiness Checklist
Is your organisation ready to deploy AI responsibly? What governance foundations need to be in place before you begin?
AI Impact Assessment
A structured assessment of risks, dependencies and potential harms, calibrated to the specific context and population affected.
AI Transparency Card
A plain-language record of what an AI system does, how it was assessed, and what safeguards apply.
AI Technical Assurance
Tools for evaluating whether AI systems are fair, reliable and trustworthy in your specific operational context – not just in theory.
Community in the Loop – embedded throughout
This is not a checkbox. Community in the Loop means affected communities have had meaningful input into the design, oversight and adjustment of AI processes – at every stage of the lifecycle. It is the governance principle that makes SAFE AI distinctively humanitarian.
RESOURCES
SAFE AI in practice
The full framework launches 19 May 2026. Explore the research and guidance developed alongside it.
LAUNCHING 20 MAY 2026
The SAFE AI Framework
The full four-component governance framework for ethical AI in humanitarian action. Free to use. Available in English, Spanish and Arabic. Scales to fit any organisation.
The governance gap in humanitarian AI: addressing the structural gap between global frameworks and operational reality
BRIEFING PAPER
The analytical foundation for SAFE AI – establishing the nature and scale of the structural governance gap, and why individual agencies cannot fill it alone.
From experimentation to engagement
ACADEMIC PAPER
Evidence on the paradox of participatory AI and power in contexts of forced displacement and humanitarian crises.
Humanitarian AI terms
GLOSSARY
Plain-language definitions to help organisations build better, safer and more effective AI partnerships.
Co-designing AI solutions with crisis-affected communities
HOW-TO NOTE
Practical guidance for meaningful community co-design of AI in humanitarian contexts.
Addressing power dynamics in participatory AI for crisis-affected communities
POLICY BRIEF
Research on the structural challenges of participation and power when AI meets forced displacement.
Co-design vs. user-centred design for AI solutions
FACTSHEET
A clear breakdown of the difference – and why it matters for responsible humanitarian AI.
THE CDAC STORY
Our journey to SAFE AI
CDAC’s commitment to community voice in humanitarian response predates AI – it is the foundation of everything we do.
As AI began reshaping humanitarian operations, we asked: what happens to participation and accountability when decisions are made by systems communities cannot see or contest? These videos trace that arc.
Perspectives on AI from Kakuma refugee camp
The film that started it all – we consulted crisis-affected communities in Kenya directly about AI governance, asking whether this moment can disrupt power imbalances rather than entrench them.
Dr Abeba Birhane: AI For Good?
A critical challenge to the sector at a watershed moment: AI for genuine social good must be built, controlled and owned by the communities it serves.
Nyalleng Moorosi: Data, power and participation
The full stakes laid out – AI promises participation at scale, but without ethical governance it risks deepening the inequalities and harms humanitarian action seeks to address.
OUR POSITION
What we advocate for
SAFE AI is a framework – but frameworks need backing. These are our three key asks of donors, governments and the sector to make humanitarian AI safe, accountable and ethical.
A universal right to know
People affected by AI-influenced decisions should be told when automation has materially shaped an outcome affecting them, and should have a named route and a named person responsible for contesting it.
Sector-wide sight of SAFE AI
Donors and humanitarian organisations should be aware of and supported to use the SAFE AI Framework. Donor conditionality is one pathway from voluntary use to sector standard.
Independent AI assurance for the sector
The sector needs assurance infrastructure structurally separate from those deploying or selling AI – capable of verifying claims, not just receiving them. This is what SAFE AI Phase 2 builds.
SAFE AI contributors
Helen McElhinney
Founding architect, SAFE AI
& Executive Director, CDAC Network
Anjali Mazumder
Research Director, AI, Accountability, Inclusion & Rights, The Alan Turing Institute
Michael Tjalve
Founder, Humanitarian AI Advisory
Sarah Cotton
Membership & Delivery Lead, CDAC Network
Suzy Madigan
CDAC Network expert consultant
Shruti Viswanathan
CDAC Network expert consultant
Get in touch.
We want to hear your ideas. If you want to be involved, have a project or research we should know about, or have feedback or tips, please contact us via the online form or at info@cdacnetwork.org.
This project has been funded by UK International Development from the UK government.