Strengthening Sudan's humanitarian information environment with AI
Sudan's humanitarian responders are operating in one of the world’s most dangerous conflicts – more than 120 aid workers have lost their lives since the war began. Disinformation and misinformation, fuelled by AI, have become weapons of war, directly obstructing aid delivery, inciting violence and shaping perceptions of humanitarians.
CDAC Network’s latest research and Local Lifelines project revealed the depth of the information warfare targeting aid workers – and its devastating, often life-threatening, impact on communities.
Now, an innovative new collaboration between CDAC Network and tech partner Valent is moving from diagnosis to response. Building on CDAC’s recent research, we're co-designing AI-supported digital tools with local actors to respond to harmful information that threatens humanitarian responders, undermines aid delivery and puts communities at risk.
Why do we need an ‘information in crisis’ response?
Sudanese responders and communities are facing an information battlefield. The information domain is polluted, politicised, and weaponised by armed groups and regional actors, creating an ecosystem where truth is a casualty and aid workers are targets.
CDAC's recent research on ‘Sudan’s information war’ documented just how dangerous this environment has become. Grassroots responders and Emergency Response Room (ERR) volunteers face escalating insecurity, political targeting and surveillance. In 2024, misinformation spread on Facebook had fatal consequences for a communal kitchen in Khartoum set up to serve communities affected by the conflict: within hours of false claims circulating that the kitchen had fed soldiers from one of the warring factions, the local responders running it were targeted and killed. In the same year, CDAC member NRC faced false accusations on social media of smuggling weapons and supporting parties to the conflict in Darfur, disrupting its ability to deliver lifesaving aid.
This research also found that information barriers – such as weak local media and the destruction of telecommunications networks – fuel mistrust, delay critical aid delivery, and expose responders to coordinated smear campaigns and physical violence. One clear recommendation from the report was to treat the information environment as a strategic and critical part of humanitarian response, and to urgently invest in communication infrastructure that supports trust and rapid clarification. This project aims to do exactly that.
This new project will bring together CDAC Network's three key priority areas – harmful information, ethical AI (using the SAFE AI framework) and community participation – in a co-designed pilot that supports local resilience and humanitarian outcomes.
What will the project aim to do?
This collaboration between CDAC Network and Valent will address the impact of misinformation, disinformation and hate speech on communities through four interconnected strategies:
Detection and monitoring: We're deploying a safe, mobile-first AI tool that identifies when narratives are being manipulated and may be harmful to communities and humanitarians. It also detects when disinformation campaigns are using AI to artificially increase their reach and impact, for example, by using bots to spread hatred or incite violence. Using machine learning, the system detects patterns humans might miss – providing early warning of emerging threats.
Protection and response: Local responders and other humanitarian actors will receive narrative analysis and practical counter-content strategies to act quickly when harmful information emerges.
Community resilience: Arabic-language resources to dispel myths, safeguard ERR volunteers, and reduce the panic, anger and fear that harmful narratives deliberately provoke. Co-creating these resources with local responders and community champions will help make sure they are targeted and usable.
System learning: Through our Sudan CDAC Community of Practice – connected to CDAC's global network – we're creating space for donors, UN agencies, international NGOs and local actors to share lessons and strengthen responses to information threats.
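To give a flavour of the kind of pattern detection described in the monitoring strategy above, the sketch below shows one simple, classic signal of coordinated amplification: the same message reposted verbatim by many distinct accounts within a short window. This is a hypothetical simplification for illustration only – the data format, field names and thresholds are assumptions, not the project's actual tool.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def detect_amplification(posts, min_accounts=5, window=timedelta(minutes=30)):
    """Flag texts reposted by many distinct accounts in a short time span.

    `posts` is an assumed format: dicts with "text", "account" and "time"
    keys. Real monitoring tools combine many such signals; this shows
    only one, as an illustrative toy.
    """
    by_text = defaultdict(list)
    for post in posts:
        # Light normalisation so trivial whitespace/case edits
        # don't hide verbatim duplicates
        key = " ".join(post["text"].lower().split())
        by_text[key].append(post)

    flagged = []
    for text, group in by_text.items():
        group.sort(key=lambda p: p["time"])
        accounts = {p["account"] for p in group}
        span = group[-1]["time"] - group[0]["time"]
        # Many distinct accounts posting the same text in a tight
        # window is one signature of bot-driven amplification
        if len(accounts) >= min_accounts and span <= window:
            flagged.append({"text": text,
                            "accounts": len(accounts),
                            "span": span})
    return flagged
```

In practice a system like the one described would layer several such heuristics with machine-learned classifiers; the value of a simple rule like this is that it is transparent and easy for local responders to audit.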
This project will:
Give local actors greater protection from online targeting, alongside AI-powered tools to verify, counter and safely share information in real time – taking the technology out of the hands of big tech companies and putting it into the hands of communities.
Provide Cash Consortium members and other response actors with analysis from social media data that allows them to adapt quickly to emerging narratives that could undermine operations or endanger staff.
Give communities access to trusted, clear and timely information in Arabic that supports safer decision-making amid displacement and crisis.
This pilot is being independently monitored and analysed in real time by a third party, providing crucial evidence and lessons that strengthen localisation, accountability and resilience – in Sudan and beyond.
Co-designing AI solutions with local responders
On 10–11 October, we hosted a co-design workshop in Kampala with members of the Emergency Response Rooms and Sudanese NGOs working in Sudan. The workshop was a hugely important step for the project, ensuring that the tools developed – including the AI used to assess social media data and report on it – are enriched by contextual expertise, designed in a way that works best for local responders, and produce analysis that is useful and informs action.
For more information about this project or CDAC Network's work in Sudan, please visit our website www.cdacnetwork.org or contact stijn.aelbers@cdacnetwork.org.