Who shapes global narratives in today’s AI-enabled world – and what do conflict-affected communities and humanitarians need to know?

Artificial intelligence (AI) is rapidly becoming a critical factor in global governance, international relations and the information environment, and it is vital that humanitarians understand its implications in conflict contexts.

CDAC Network’s 2023 Public Forum convened a panel of experts to explore these issues from a macro perspective. Chaired by Helen McElhinney (Executive Director, CDAC Network), the panel featured Amil Khan (CEO, Valent), Jonathan Tanner (Founder, Rootcause) and Kristin Bergtora Sandvik (Professor of Legal Sociology, University of Oslo; Research Professor in Humanitarian Studies, Peace Research Institute Oslo). 

Listen to the full conversation on SoundCloud.

Key takeaways 

Why are we focusing on AI? Does this even matter for people in crises who may have other priorities or lack digital access?

2023 was the breakout year for generative AI, raising public consciousness and compelling us to confront the opportunities and risks of emerging technologies. The panel acknowledged that these debates could feel very far removed from the needs of people during acute crises. However, they made the case for urgent attention to the links with mis- and disinformation and the increasing use of AI within the aid sector.

Developments in AI have enabled sophisticated disinformation campaigns to be deployed at speed, scale and relatively low cost. As a result, information environments are increasingly polluted, overloaded and manipulated – particularly in conflict-affected contexts – undermining people’s ability to easily access reliable information and make critical decisions.

This becomes even more dangerous in contexts such as Sudan, where official media channels have long been distrusted and people rely instead on social media. Whoever controls the messages that dominate social media determines the public debate. Indeed, several recent conflicts have served as testing grounds for wide-scale disinformation campaigns enabled by AI.

Even those without internet access are affected by the messages shaping public narratives. These can determine how communities, including minorities, are perceived; how resources are distributed; and whether politicians or humanitarians are held accountable.

The impact of AI on the information environment is a critical issue in humanitarian settings – yet communities affected by crises and civil society actors are largely absent from global policy and governance conversations

The recent UK-led AI Safety Summit at Bletchley Park kicked off an AI ‘global governance carousel’ with further meetings planned in South Korea and then France. The momentum being generated is welcome, as is the inclusion of discussions on known harms of racial, cultural and linguistic biases within large language models. 

But no representatives of global south civil society were present, nor were the views of communities affected by crises represented. Whether due to perceptions that AI is too ‘complex and hard to understand’, time pressures, or commercial and strategic incentives, these voices have been largely absent from the debate – yet they have ‘something incredibly valuable and important to add’.

So far, much of the global focus has been on regulation, but this ‘plays into the hands of technology companies’ because it frames the conversation as solely technical rather than ‘values-based’.

Instead, we should focus on ways to ‘infuse’ these conversations with the values of humanitarian, civil society and community-based organisations confronting the impacts of AI in crises, and on framing an agenda for the types of regulation that will serve communities.

Humanitarian agencies should also be specific about the kinds of donor and foreign affairs strategies that would be beneficial in this field and seek to understand the future implications for their work.

We must urgently update our collective media and digital literacy to be AI-aware

The panel emphasised the need for investment in journalism that can hold tech companies with ‘entrenched power’ accountable – transparent and informed reporting on these emerging issues will be critical to upholding democracy and enabling the global public to have a stake in the conversation.

There was also recognition that media literacy programming must go further to upskill people to recognise AI-generated disinformation and navigate this shifting digital landscape. This capacity building should also be a priority across all levels of the humanitarian sector, since communication is ‘the business of every single humanitarian worker’.

Ultimately, the direction of AI development – currently at the forefront of global policy agendas – will impact everyone, not least those living through crisis. The time is now for us to collectively determine ‘the future we want and how we shape it’.


As part of CDAC’s commitment to advancing this agenda, the Network has submitted a pledge to the Global Compact on Refugees to ‘advance collective understanding, practices and influencing on how to mitigate and prevent online hate speech, mis- and disinformation targeting crisis affected populations, host communities and response actors’.
