Everyone’s problem, no one’s job: the humanitarian response to harmful information

An AI-generated image circulates online, falsely suggesting that a school in Iran is being used for military purposes. The next day, a real school is struck.

Those of us working to improve information integrity knew this sort of thing could happen, but when it did on 28 February 2026, it felt like the world had crossed a threshold.

The changing information environment has real-life consequences for crisis-affected communities: it shapes whether people trust aid, whether they access services, and whether humanitarians can operate safely. In some contexts, it is already affecting who receives assistance and who doesn’t. And now, it seems, harmful information can even determine who lives and who dies.

Yet, despite growing recognition of these risks, the aid sector still struggles to play a proactive and coordinated role in the information domain.

At CDAC Network’s Information Integrity Community of Practice call on 20 April, we will hear from Witness and colleagues about how the information environment is impacting the humanitarian situation in Iran – and what options we have for addressing the harms.

Before we get there, it’s worth reflecting on what we learned at our recent panel discussion at Humanitarian Networks and Partnerships Weeks (HNPW), because it helps explain why this call is needed and why the sector’s response to harmful information still falls short.

A ‘wicked problem’: why harmful information is so hard to address

One explanation for the sector’s slow response is that we’re grappling with what DW Akademie has called a ‘wicked problem’, where there are no clear and definitive solutions. The problem is so inextricably connected to other complex, unwieldy issues across psychology, media studies, political economy and more, that the causal chain is frustratingly difficult to establish.

Humanitarian organisations are used to counting things they can see: destroyed infrastructure, displaced families, disease outbreaks, convoy delays, attacks on staff. Harmful information behaves differently: it travels invisibly; it is absorbed before it is observed; it mutates across contexts. It appears online and then produces effects offline that may not be immediately legible as information problems at all.

The scale of impact also exceeds individual instances of misinformation. When images of the aftermath of the school bombing emerged, they were dismissed as fake. The problem is not just the circulation of false content, but something more destabilising: a collapse of shared reality, where even the basic question of whether something is real is increasingly difficult to answer. The conditions that make trust possible are eroding, and humanitarians are not yet organised to respond to that reality.

What the panel at HNPW told us

CDAC Network’s panel at HNPW 2026

In an effort to untangle this web of problems, CDAC hosted a panel on harmful information at HNPW 2026 in Geneva, bringing together experts from the Red Cross Red Crescent Movement, media development and conflict security analysis to talk about one of the defining challenges of modern crisis response: when the information environment itself becomes a source of risk.

One of the most striking things about the conversation was how speakers emphasised that this is not a new issue. Sacha Meuter of Fondation Hirondelle pointed to the organisation’s origins in the aftermath of the Rwandan genocide, when the catastrophic role of hate media became impossible to ignore. Philippe Stoll, an independent expert on technologies, humanitarian action and conflict who spent more than two decades at the ICRC, traced his own thinking back to operational communication work in the West Bank in the early 2000s. Even before algorithmic feeds and synthetic content, there was a clear need to explain humanitarian action, answer suspicion and respond to public anger in real time.

What has changed is the scale, speed and machinery through which harmful information now travels. These shifts have amplified both its immediate and more indirect or delayed effects, such as the ‘liar’s dividend’: where growing awareness of misleading content enables public figures to dismiss genuine, incriminating evidence as fabricated, fostering widespread scepticism and eroding civil society’s ability to distinguish truth from falsehood.

A cross-cutting risk, not a communications problem

The IFRC’s World Disasters Report 2026, focused on harmful information, offers perhaps the clearest sign yet that this issue has moved from the margins to the centre of humanitarian concern. And yet as Charlotte Lindsey Curtet, the report’s lead author and editor, described, one of the key inhibitors to progress is that harmful information is still too often treated as a communications problem rather than a risk factor that cuts across humanitarian response.

That distinction matters. A communications problem can be delegated to the communications team. A risk factor must be integrated into analysis, planning, security, programme design and community engagement. It has to be budgeted for, tracked and owned.

Christina Wille of Insecurity Insight brought this into sharp focus from the perspective of aid security. What keeps her up at night, she said, is not only false content in the abstract but the way harmful narratives increasingly target the purpose and principles of aid itself. Aid is portrayed as interference or espionage; humanitarian workers are accused of wrongdoing and doxxed; narratives that might once have remained fringe now circulate rapidly and cross borders with ease.

Too often, she argued, aid agencies only monitor mentions of their own brand or acronym, treating harmful information as a reputational issue alone. But online audiences rarely distinguish between one agency and another: harmful narratives attach not to a single logo but to the entire humanitarian enterprise.

No agency, therefore, can fully protect itself alone, and the problem cannot be solved by better brand management. A collective response is needed: shared analysis, shared language and a shared defence of humanitarian principles. Yet humanitarians, the panel observed, still tend to underestimate the extent to which silence creates a vacuum and are often hesitant to respond publicly to challenges. In highly contested information environments, refusing to speak does not necessarily preserve neutrality; sometimes, it just leaves room for others to define what you stand for.

Trust is built through relationships, not messaging

Aarni Kuoppamäki of DW Akademie and Sacha Meuter both argued that the humanitarian sector still too often misunderstands communication itself. Communication is not just public relations but is bound up with protection, accountability, access, participation and local ownership. When done badly, or not at all, everything else becomes harder.

This is where our panel diverged from the usual calls for better messaging. What emerged was a stronger argument: that resilience to harmful information is built not through polished content but through trusted relationships and credible local information ecosystems.

If the strongest defence against harmful information is trust, then our response cannot be reduced to fact-checks or campaigns. It requires deeper investment in community engagement, public-interest media, information needs assessments, and local platforms where affected communities are active participants, not passive recipients of humanitarian messaging.

It also means facing an uncomfortable possibility: some humanitarian organisations are more vulnerable to harmful information because they have weak local relationships to begin with. One audience member made this point bluntly, arguing that agencies with real, organic ties to communities often prove more resistant to rumours than those that parachute in with little local legitimacy. Harmful information spreads through existing frustrations, grievances, exclusions and power imbalances. It finds traction where relational weaknesses already exist.

A more honest starting point

The HNPW panel did not offer a neat solution, because there isn’t one. The funding environment is bleak. Humanitarian organisations are under pressure to prioritise the immediately visible. Communications and community engagement remain underfunded. Shared learning is weak. Institutional memory is patchy. Teams rotate quickly. Local media outlets close every day. Context shifts faster than strategy.

And all the while, the information environment continues to mutate. How do you preserve and learn from an evidence base when so much harmful content appears briefly and vanishes? How do you adapt when information is no longer only mediated through journalists, broadcasters or social platforms, but increasingly through generative AI systems that remix and surface material in new ways?

The issue is no longer just the spread of false or manipulative information. It is that the entire architecture through which people encounter and interpret information is shifting at speed – and humanitarian institutions are struggling to keep pace.

What our panel did offer was a more honest starting point. Yes, humanitarians must build stronger relationships with digital rights experts and technologists to understand and respond to these information threats. But safeguarding trust in modern crises will require more than improved messaging. It requires us to build the conditions that make trust possible: proximity, local legitimacy and the courage to treat information not as an afterthought but as a fundamental part of aid.

These are the issues that will underscore our Information Integrity Community of Practice call on 20 April. Witness, an organisation with deep expertise in how visual evidence is documented, verified and protected in human rights crises, brings a critical perspective to this landscape.

We hope you will join us. Email info@cdacnetwork.org for your invite.
