In 2022, #BookTok on TikTok boosted book sales in the US, local commerce via WhatsApp expanded small businesses across Latin America, and Facebook voter-outreach campaigns increased youth and first-time-voter participation in Indian regional elections. That same year, social media amplified ethnic tensions during the Kenyan elections, spread disinformation in the war in Ukraine, and incited violence during protests in Iran. As in the years before and after, social media shaped culture, politics, and conflict in multiple, often contradictory ways. But 2022 also marked a turning point: global time spent on social platforms began to decline, especially among young people. Could this signal that social media's influence on society is starting to wane?
Answering that question requires understanding the mechanisms by which social media influences societies and, of special interest to this monograph, shapes conflict. Fundamentally, social media trends express a consensus held by a specific group. In the peacebuilding field, we worry about how misinformation erodes truth, damages institutions, and leads to violence. But it is essential to dig a little deeper into the causes of this erosion of truth.
Truth in a post-modern sense is not static; it is dialogical: the reflection of a societal consensus that provides stability and the common ground needed for disagreement, until that consensus is itself constructively challenged. Social media has impacted truth and stability by undermining societal consensus, and the key characteristic of this undermined consensus is not so much misinformation (that is a signal or, at most, a tactic) as affective polarisation. Affective (or toxic) polarisation is distinct from issue-based or idea polarisation: it refers to situations in which one believes or dismisses people for who they are, not for what they say. The more powerful social media trends and digital sub-groups become, the more affective polarisation grows, and the more a broad societal consensus on the truth splinters into sub-group consensuses. It's not so much that truth has been eroded; it's more that it has been fragmented.
In this issue, Ahmad Qadi explores in depth how this fragmentation is impacting conflict, looking at how the “triad of disinformation, hate speech, and polarisation” on social media both reflects and actively fuels tensions, deepens divisions, and ultimately sustains conflict. He calls this — and I would agree — “an essential component of modern information warfare”. He describes how Israel has waged this psychological warfare on Palestinians, using AI-generated content and bot farms to amplify anti-Palestinian sentiment and promote Israel’s military actions across Western digital spaces. Sanjana Hattotuwa’s article further explores the impact of these manipulated online sentiments on a specific scenario of importance to building peace: closed-door negotiation rooms and mediation processes. The interview with Stephanie Williams tells a similar story, looking at how online hate speech and foreign interference during the UN peace efforts in Libya between 2019 and 2020 not only deepened divisions on the ground but also endangered women peacebuilders and negotiators (including Williams herself).
I would love to think that the decline in social media use that began in 2022 could also mean a slow return to a less fragmented, less polarised world where conflicts are less likely to escalate and become intractable; one where we need to worry less about the risks that Qadi and Hattotuwa explore in their articles. However, the rise of AI over the past few years not only continues the trend that social media started but also complicates it in ways that further intensify conflict.
In her article on AI risks for sustainable peace, Evelyne Tauchnitz explores this from the perspective of relational peace, asking how AI diminishes the capacity of individuals and their networks (societies) to turn disagreement into constructive deliberation rather than destructive or violent conflict. She shines a light on how AI undermines dignity (through data colonialism and extraction, by mining and responding to our emotional states, through digital surveillance, and by eroding our practical and moral agency) and how it hollows out trust in institutions. In her view, AI systems erode the conditions that make societies resilient to conflict.
Building on Tauchnitz's exploration, two specific risks of AI to peace deserve special attention. First, AI can undermine truth-as-consensus by impoverishing journalism and research. Machines cannot provide the deeply contextual, human-driven nuance and instinct that reporters and researchers offer through practices such as interviewing witnesses to events. In his article, Wahbi Abdelrahman weaves a beautiful, sorrowful story of how a digital archive is preventing culturcide in Sudan's war, and how embedding digitisation skills across local institutions is a form of resilience against erasure. The piece highlights that the digital archive is a socio-technical process, a human-machine collaboration that AI cannot replace. Ironically, this diminishing of human reporting simultaneously threatens the future development of AI itself. Today's Large Language Models (LLMs) rely on massive amounts of data, and there is some concern, though a debated one, that if AI depletes the quality human-generated content it needs for training (quality journalism, archiving, and research) it may face a data dead end.
Second, AI can weaken deliberative processes. There are some excellent use cases of AI for deliberation. Luke Thorburn's article explores how we can design ranking algorithms (the computer programs that decide what content we see first or most, in any digital medium) to foster connection, for example by surfacing statements on which those who otherwise disagree can agree, and how the Alliance for Middle East Peace has put this into practice. Stephanie Williams describes how using AI to summarise positions in the UN-run Libya digital dialogues helped put pressure on the political class to acknowledge popular opinion on difficult issues. As with research, these use cases are embedded in socio-technical processes that retain human connection. Where AI is used to represent constituencies without human participants, it erodes the essential human processes of listening and deliberation that are fundamental to democracy and peace. Deliberation is central to public debate and legislative processes, enabling groups to challenge biases, solicit information, and forge compromises that lead to more defensible and durable decisions. Over-relying on AI-driven summarisation or modelling fails to capture this vital, messy, lived experience.
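For readers curious about the mechanics, the sketch below illustrates one way such a "bridging" ranking rule can work: each statement is scored by its lowest approval rate across opinion groups, so only statements endorsed across the divide rise to the top. The vote format, group labels, and scoring rule are illustrative assumptions for this sketch, not Thorburn's actual design or any platform's algorithm.

```python
from collections import defaultdict

def bridging_scores(votes, groups):
    """Score each statement by its lowest approval rate across groups,
    so a statement ranks highly only if every group tends to endorse it."""
    # tallies[statement][group] -> list of boolean approval votes
    tallies = defaultdict(lambda: defaultdict(list))
    for user, ballots in votes.items():
        for statement, approves in ballots.items():
            tallies[statement][groups[user]].append(approves)
    scores = {}
    for statement, by_group in tallies.items():
        rates = [sum(v) / len(v) for v in by_group.values()]
        # min() rewards cross-group agreement; a plain mean would
        # simply reward whichever group happens to be larger.
        scores[statement] = min(rates)
    return scores

# Hypothetical example: "s1" draws support from both groups, while "s2"
# is unanimously backed by group A but rejected by group B.
votes = {
    "ana":  {"s1": True,  "s2": True},
    "ben":  {"s1": True,  "s2": True},
    "carl": {"s1": False, "s2": True},
    "dina": {"s1": True,  "s2": False},
    "eli":  {"s1": True,  "s2": False},
}
groups = {"ana": "A", "ben": "A", "carl": "A", "dina": "B", "eli": "B"}

ranked = sorted(bridging_scores(votes, groups).items(), key=lambda kv: -kv[1])
print(ranked)  # [('s1', 0.666...), ('s2', 0.0)]: the bridging statement wins
```

The design choice worth noticing is the min(): a feed optimised for raw engagement would promote "s2", the statement one side loves, whereas a bridging rule promotes the statement both sides can live with.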
Neither social media nor AI is an inevitable force. They are business endeavours. What makes social media harmful to society is that its business model is premised on capturing attention. For a moment, it looked like the AI business model might centre on cloud infrastructure rather than on advertising and attention capture, but earlier this year it became clear that this is changing: OpenAI, one of the most prominent players in the AI space, has chosen advertising as its primary business model, and others will follow suit. Here we go again. As long as the design of the internet, digital technologies, and AI is steadfastly focused on maximising attention, it will worsen conflict: via fragmentation, via polarisation, and through the erosion of opportunities for our societies to deliberatively arrive at a consensus that spells out a sustainable, pluriversal peace.
What’s more, if we don’t apply conflict-sensitive principles to the design and governance of technology, it can—and will—be weaponised for control by powerful actors. Nerima Wako brings to life this risk of capture in her passionate piece on cyberactivism in East Africa—where, she says, the internet is not just a medium, but the movement—and the repressive digital backlash against this new (primarily youthful) democratic activity. Her piece is a cautionary tale about how regulation can mean repression, but also a hopeful and powerful call to defend a free and trusted internet that offers spaces for deliberation when offline spaces are under attack. In her own words, “A digitally peaceful society is one where activists don’t need burner phones; where laws protect speech and where we can disagree loudly, passionately, safely.”
People may have had enough of digital spaces that are simultaneously repressive and polarising; maybe the 2022 decline in social media use is a harbinger of things to come. None of these technologies are unstoppable forces of nature. We can stop using them. We can use them differently. We can regulate them. We can redesign them. The six articles in this monograph also point to hope.
Having painted a bleak picture of how digital warfare has been waged on Palestinians, and of how this exemplifies broader trends in modern information warfare, Ahmad Qadi's article lists a plethora of avenues for turning from division to dialogue. Similarly, Evelyne Tauchnitz's article ends on a hopeful note, arguing that AI can contribute to peace if we align its design principles not only with improving efficiency and robustness but also with safeguarding human dignity and freedom. Adding to these broader visions of hope, Wahbi Abdelrahman's and Luke Thorburn's articles examine concrete, practical examples of how digital technologies can contribute to peace. Stephanie Williams describes how the negative impact of social media on the Libyan peace process sparked a conversation within the United Nations about how digital dialogues could be used to increase transparency in mediation processes.
How do we get peace-supporting, pro-social technology design, such as Thorburn describes, into the mainstream? Here is one possible answer: Sanjana Hattotuwa’s article ends with a call for mediators and peacebuilders not just to become more adept at navigating “today’s adversarial digital commons,” but also to influence how they are designed.
Hattotuwa’s conclusion is also the direction I think we should be walking in. There are as many challenges to peace from the internet, digital technology, and AI as there are opportunities – the articles in this issue offer an overview of what we should worry about in a digitally mediated world and how we might turn the tide towards peace. But crucially, this monograph also makes the case that addressing these challenges is not only a technical question reserved for those of us who know how to run social media campaigns to counter online hate or deploy AI-sensemaking for a consultative process. It is a moral and political question that appeals to anyone who is working towards peace in the digital age.
This issue of the “Peace in Progress” e-magazine (number 43) is a co-edition by ICIP and Build Up. The collaboration stems from the co-organisation of Build Peace 2025 in Catalonia, Build Up's annual conference on technology, innovation, and peacebuilding, which took place from November 21 to 23 in Santa Coloma de Gramenet (Barcelona).
Photography
Symbolic representation of collaboration between humans and digital technologies. Author: Rawpixel.com (Shutterstock).