In the age of AI, how do we scale digital opportunities and secure safer information landscapes for people caught in conflict?

CDAC Network’s 2023 Public Forum convened a panel of multidisciplinary experts to explore the trade-offs associated with digital opportunities for information integrity in conflict. Chaired by CDAC Board member Dr Gozibert Kamugisha, the panel featured Briana Orr (Information Services Advisor, International Rescue Committee); Mike Walton (Chief of Digital Service, UNHCR); Stijn Aelbers (Senior Humanitarian Advisor, Internews); and Suzy Madigan (Founder, The Machine Race).

Listen to their conversation on SoundCloud.

Key takeaways

Artificial intelligence (AI) has huge potential – if crisis-affected communities are central to its design. To ensure its capabilities are harnessed for positive impact, communities must be decision-makers in the design, deployment and governance of AI. We’ve learned from the movement to decolonise aid that consultation does not go far enough; communities must be empowered to drive decisions around aid. The same is true for AI. Humanitarian and civil society actors have an important role to play in brokering this engagement and promoting ethical approaches to AI development and deployment in crises, informed by humanitarian values and commitments.

Let’s not lose the ‘human’ in humanitarian. While leveraging digital solutions, human intervention will remain critical in addressing the complexities encountered in humanitarian settings. Digital technologies can bring efficiencies and scale, but human insight, empathy and judgement cannot be replaced by an algorithm. Maximising the benefits of digital solutions means finding a complementary balance between human and machine.

Humanitarians aren’t afraid of taking risks in the ‘real world’ to reach people in need; similarly, digital risks are real but can be managed with the right skills. The panel stressed the need for strategic and intentional engagement in the digital sphere, while recognising that this can open up greater cybersecurity concerns, possible surveillance and data exploitation risks. Managing these risks means identifying them, understanding their implications and working to reduce negative impacts. Digital and AI literacy will be critical to achieving this.

Media must meet people’s information needs in the age of AI and mis/disinformation. Media actors should place the greatest emphasis on providing verifiable information, nurturing a culture of scrutiny and collaborating with digital and civil society actors to create safer, more trustworthy information ecosystems.

Humanitarians must communicate, even when facts are unclear in conflict settings. They should engage in dialogue with communities affected by the crisis, respond thoughtfully to queries, and provide information even amid uncertainty. Not doing so creates a vacuum in which misinformation, confusion and mistrust can proliferate.
