Peace is often misunderstood as the mere absence of war or overt violence – a state of order and stability. But this narrow conception of what Galtung has called “negative peace”[1] misses the deeper relational, systemic, and ethical foundations that make peace sustainable over time. In contrast, sustainable peace is not a static achievement or institutional end state, but a dynamic condition that emerges from the quality of relationships, the resilience of social networks, and the presence of human dignity and freedom.
To grasp peace more fully, it is therefore necessary to move beyond abstract ideals or treaty frameworks and attend to how people, groups, and institutions relate to one another. Peace is not a substance or a structure, but a positive pattern of interaction, closely tied to human behaviour. It is a phenomenon that happens “in between the nodes”: in the connections, exchanges, and mutual recognitions that bind people together through positive human experiences. These linkages can be mapped as networks, where each node – whether a person, group, or institution – carries capacities and vulnerabilities, and each link between nodes reflects a particular quality of relationship, a unique mix of trust and care, but also of conflict or neglect. This networked view of peace shifts the focus from static indicators to the dynamics of connection: how relationships are built, strained, or broken; how power circulates; and how peace and violence coexist on a continuum. Networks can be robust, bridging divides and supporting cooperation – or brittle, marked by fear, exclusion, and fragmentation. Peace is thus not a given but, like violence, an emergent property of the conditions that define the system – always at risk of tipping, especially when disrupted by crises, injustices, or technological transformations such as AI.
At the heart of peaceful networks lie the ethical principles of human dignity and freedom. Dignity is not only intrinsic worth, but a relational quality, sustained when people are recognized in their uniqueness, included, and treated as persons with voice and value. Freedom is not just freedom from coercion, but the ability to participate meaningfully in shaping one’s life and relationships. As Hannah Arendt[2] and Amartya Sen[3] have argued, freedom is agency-in-context, exercised within structures that either support or suppress it. When dignity and freedom are systematically denied – through surveillance, exclusion, or disinterest – the relational infrastructure of peace begins to unravel. Trust erodes, participation fades, and violence emerges not so much from hatred as from systemic breakdown – the “banality of evil,” as Arendt warned, where ordinary actors follow harmful norms in dysfunctional systems. The genocide in Gaza and the increased settler violence in the West Bank in the aftermath of the Hamas attack of October 7, 2023, provide a tragic example.
While dignity and freedom are the ethical anchors of peace, resilience is its operational backbone – the set of capacities that enable individuals, communities, and systems to withstand disruption and recover or transform without descending into violence. Crises and even conflicts will always happen; but whether violence erupts or whether, on the contrary, individuals, communities, and institutions manage to handle crises sustainably and transform conflicts through non-violent means into states of peaceful co-existence depends on their levels of resilience. Resilience is not just about survival or bouncing back. It is about navigating crises in ways that preserve core values and relationships, adapt to changing conditions, and open space for positive change and renewal.
Four interdependent pillars of resilience can be identified:
- Resources: Access to food, shelter, healthcare, knowledge, emotional meaning – all the material and symbolic tools needed to cope with crises.
- Social Capital: Trust, solidarity, and networks of mutual support that enable people to share burdens, access resources, and coordinate responses.
- Adaptive Capacity: The ability to learn, innovate, and adjust strategies when old ones no longer work.
- Enabling Environments: Institutions and policies that ensure fairness, protect human rights, and provide avenues for peaceful change.
Together, these factors determine whether peace can endure under pressure. When resilience is strong, networks flex but do not break; when it is weak, crises can push systems past tipping points where spirals of violence become self-reinforcing (see, for example, the outbreaks of tribal violence in Syria in 2025). These tipping points occur when social norms shift toward fear or aggression, resources become scarce, trust erodes, or opportunistic actors exploit conflicts. As in network theory, a small rupture in one part of the system can cascade – especially if relational ties are already strained; the simple threshold model sketched below illustrates this dynamic. The next section looks at the disruptive force of AI and its potential impact on resilience, which shapes the capacity of networks to safeguard peace.
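To make the idea of tipping points concrete, the following minimal sketch (in Python) simulates a toy threshold model of cascading disruption in a small relational network. Everything in it – the node names, the ties, the threshold values – is a hypothetical illustration, not empirical data: a node “tips” into distress when the share of its disrupted neighbours exceeds its own resilience threshold, so the same local shock stays contained in a resilient network but cascades through a fragile one.

```python
# Illustrative toy model only: cascading disruption in a small relational network.
# Nodes, ties, and threshold values are hypothetical, not empirical data.

def cascade(ties, resilience, initially_disrupted):
    """Spread disruption: a node tips once the share of its disrupted
    neighbours exceeds its own resilience threshold."""
    disrupted = set(initially_disrupted)
    changed = True
    while changed:
        changed = False
        for node, neighbours in ties.items():
            if node in disrupted or not neighbours:
                continue
            share = sum(n in disrupted for n in neighbours) / len(neighbours)
            if share > resilience[node]:
                disrupted.add(node)
                changed = True
    return disrupted

# A small, fully hypothetical community network (node -> its relational ties).
ties = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

resilient = {n: 0.6 for n in ties}  # strong resilience: the shock stays contained
fragile = {n: 0.2 for n in ties}    # weak resilience: the same shock cascades

print(cascade(ties, resilient, {"A"}))  # -> {'A'}: one node disrupted, the network holds
print(cascade(ties, fragile, {"A"}))    # -> all five nodes tip into distress
```

Real social systems are of course immeasurably more complex, but the sketch captures the structural point: whether a local rupture remains local or engulfs the whole network depends on the resilience of each node and the pattern of ties between them.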
AI as a Risk to Networks’ Resilience and Their Capacity to Safeguard Peace
Artificial intelligence is reshaping how people work, communicate, and are governed – a transformation often framed in terms of innovation, efficiency, or optimization. But when seen through the lens of relational peace, AI reveals a deeper risk: it diminishes the resilience of individuals and societies to handle crises and conflicts peacefully.
Understanding AI’s impact on peace thus requires moving beyond overt threats such as autonomous weapons systems or novel forms of cyberattack. Through the lens of resilience, AI’s effects on resources, social capital, adaptive capacity, and enabling environments must be examined, as these elements together form the relational infrastructure of sustainable peace.
AI and the Disruption of Resources: Dignity Through Livelihood and Emotional Integrity
Resources are the foundation of resilience in any network – not just material goods like food or shelter, but also symbolic, emotional, and informational capacities. AI technologies increasingly control the flow and distribution of these resources, and in doing so, reshape the conditions under which a life in human dignity is affirmed or denied.
On a structural level, AI-powered tools are increasingly taking over repetitive administrative tasks traditionally performed by back-office clerks – for example, processing invoices, updating payroll records, or verifying electronic forms. While these systems may not fully replace entire roles overnight, they significantly reduce the demand for human labour, leading to fewer entry-level opportunities. Similar trends are visible in other sectors: AI chatbots now manage a large share of customer service inquiries; computer vision systems in warehouses guide autonomous robots in picking, sorting, and packing goods, replacing or reducing manual handling roles; in finance, algorithmic trading systems execute trades and optimize portfolios once handled by teams of junior analysts.
While some new jobs emerge – for example, in AI model training, data labelling, or maintaining and overseeing automated systems – the nature of these opportunities is often narrowly defined. Many are tied to specific functions within the AI development pipeline, such as annotating datasets for machine learning, fine-tuning language models, monitoring algorithmic outputs for errors, or servicing autonomous machinery. Others involve supervisory roles in which humans oversee automated processes, intervening only when the system encounters exceptions or failures. In addition, these new opportunities are often temporary or geographically concentrated, or require advanced skills that many displaced workers cannot easily acquire.
The net effect of AI on jobs is therefore likely to be uneven and disruptive, for three reasons. First, automation tends to benefit highly skilled or capital-rich actors, while workers in the Global South, in precarious labour markets, and in the routine occupations where many people are employed face the highest risk of dispossession. Second, the transition periods between job loss and new employment can be long and destabilizing, eroding economic security. Third, even when AI increases efficiency and lowers the cost of goods, this does not automatically translate into better livelihoods. Gains are often distributed unevenly: while some individuals and regions benefit from cheaper products, improved services, and new market opportunities, others experience job loss, wage stagnation, or weakened labour protections. These disparities can widen existing inequalities, both within and between countries, as highly skilled workers and capital owners capture a disproportionate share of the benefits. The result for many is a loss of economic dignity: the ability to support oneself and one’s family with purpose and security. These technological disruptions provide a strong argument in favour of an unconditional basic income – a guaranteed, regular cash payment to all citizens, grounded in the right to a life of dignity in which everyone’s basic needs are met.
Moreover, algorithmic hiring and performance systems – such as automated CV screening tools or AI productivity tracking software – reduce people to metrics, leading to discrimination (for instance, rejecting candidates due to gaps in employment) and undervaluing the human dimensions of work like creativity, care, and collaboration.
At the global scale, AI contributes to what scholars have called “data colonialism.”[4] Vast quantities of data are extracted – often without consent – from individuals, communities, and devices across the world, only to be analysed, monetized, and controlled by a handful of powerful firms and states. This process mirrors historical patterns of resource extraction and domination, with the added twist that the “resource” in question is the relational and behavioural trace of human lives. Dignity is compromised not only because consent is bypassed, but because people are rendered legible and exploitable without reciprocity or recourse.
Even emotional and psychological resources are affected. AI-enhanced platforms – such as TikTok’s For You feed, Instagram’s Explore page, Facebook’s News Feed, X’s (formerly Twitter’s) timeline algorithm, or YouTube’s autoplay and suggested videos – use machine learning to predict and amplify the content most likely to capture and hold a user’s attention. In social media especially, these mechanisms intensify comparison, competition, and performativity, as users are continually exposed to curated portrayals of others’ achievements, lifestyles, and opinions.
By mining attention and emotion for profit, these systems erode people’s sense of interior stability and self-worth – essential dimensions of resilience. Surveillance systems further degrade these resources by producing constant low-level anxiety, especially among marginalized populations and civil society actors who already face disproportionate monitoring, whether through predictive policing algorithms, workplace monitoring, automated facial recognition in public spaces, or AI-driven monitoring of online activism.
Taken together, AI reshapes resource flows in ways that may improve access to information or certain services, yet it also undermines the dignity that comes from having one’s labour valued, one’s needs met, and one’s boundaries respected.
AI and the Erosion of Social Capital: Fracturing the Trust That Sustains Peace
Social capital – the web of trust, norms, and informal relationships that bind people together – is a core pillar of resilience in any community. It enables coordination, mutual aid, and collective action. Where trust and empathy circulate freely, networks tend to bend rather than break under stress. Where they are thin or brittle, crises more easily tip into blame, fear, and fragmentation.
AI systems increasingly intervene in the relational web that underpins peace. Most visibly, social media algorithms, optimized for engagement, prioritize content that elicits outrage, affirmation, or other strong emotions. This shifts the balance of what circulates within and between networks, giving greater visibility to divisive or emotionally charged material while crowding out content that fosters deliberation or nuance. Over time, such algorithmic filtering encourages the formation of polarized echo chambers – tightly knit clusters within broader networks where information flows mainly among like-minded members. These clusters fragment the overall network, weakening bridging social capital – the connections that link people across different backgrounds or perspectives – while reinforcing bonding social capital within homogenous groups.
As in-group identities strengthen, so too does toxic (affective) polarization: a form of division in which individuals not only disagree but develop deep contempt for those with opposing views, alongside intense loyalty to their own group. This “us versus them” mentality narrows the range of perspectives people encounter and undermines the trust, reciprocity, and shared norms that sustain cooperation. While polarization is not new, AI accelerates and amplifies it – often invisibly – shifting the network dynamics that support understanding and mutual recognition.
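The underlying mechanism can be illustrated, in deliberately oversimplified form, with the short sketch below. The posts, the scores, and the assumption that emotionally charged content attracts higher predicted engagement are hypothetical; production recommender systems are vastly more sophisticated, but the optimization target – predicted engagement – is analogous.

```python
# Purely illustrative sketch: how an engagement-only ranking objective
# surfaces divisive content. Posts, scores, and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # expected clicks, replies, shares (a model's output)
    bridging_value: float        # rough proxy for deliberative, divide-crossing content

posts = [
    Post("Nuanced explainer on local water policy", 0.08, 0.80),
    Post("THEY are destroying everything you love!", 0.35, 0.05),
    Post("Invitation to a cross-community dialogue", 0.05, 0.90),
]

def rank(posts, weight_bridging=0.0):
    """Order the feed by predicted engagement, optionally also valuing bridging content."""
    return sorted(
        posts,
        key=lambda p: p.predicted_engagement + weight_bridging * p.bridging_value,
        reverse=True,
    )

print([p.text for p in rank(posts)])                       # engagement only: the outrage post comes first
print([p.text for p in rank(posts, weight_bridging=0.5)])  # valuing bridging content reorders the feed
```

The contrast between the two orderings makes the essential point: what a platform chooses to optimize determines which kinds of content – and, by extension, which kinds of relationships – its feed amplifies.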
Beyond shaping what we see online, AI increasingly shapes how we feel and behave. AI technologies – deployed in social media, retail, education, workplaces, and public administration – often use facial analysis, voice tone detection, and biometric data to infer emotions in real time. Once detected, these emotional states can be used to tailor content or responses: a frustrated customer might receive a calming tone from a chatbot; a student flagged as disengaged by an AI learning platform might be pushed more stimulating material; an employee whose voice is deemed insufficiently “positive” might be prompted to adopt a more upbeat tone in calls.
While often presented as tools to improve service quality or workplace efficiency, the same techniques for reading and influencing emotion can be repurposed in political or social contexts. For instance, political campaigns or activist movements could use emotional AI to identify and target individuals already primed for anger or fear, pushing content that reinforces grievances and strengthens in-group solidarity. Both effects deepen existing divides, making it harder for individuals to engage constructively across differences. When combined with AI-driven surveillance, the effects are magnified. Constant awareness of being watched or profiled encourages self-censorship and suppresses dissent. This erodes the authenticity, trust, and openness needed for resilient peace, replacing them with guarded, calculated interactions that weaken the very networks on which social cohesion depends.
In authoritarian regimes such as Russia or China, AI-driven surveillance and social credit systems formalize this breakdown of trust. When people fear that any social tie might make them vulnerable to state control or reputational damage, they begin to withdraw from civic and communal life. The social fabric frays not through direct violence, but through relational corrosion – a slow collapse of intermediating institutions and everyday solidarities.
AI Undermining Adaptive Capacity: Locking Societies into Harmful Practices of the Past
Adaptive capacity is perhaps the most subtle yet essential form of resilience. It is what allows individuals and institutions to navigate novelty, respond to feedback, and change course when old strategies no longer work. Without adaptive capacity, systems become fragile – unable to recover or transform under pressure. In a world shaped by AI, this capacity is increasingly constrained.
Many AI systems are designed to optimize for specific outcomes based on historical data. But in doing so, they often lock in certain assumptions, metrics, or patterns that resist contestation or adaptation. For example, predictive policing systems may perpetuate historical bias based on ethnicity or socioeconomic background. Moreover, individuals affected by AI decisions often lack the capacity to understand, question, or resist those systems. This is especially true for marginalized populations who may be subjected to algorithmic welfare assessments, automated visa denials, or digital surveillance – all without access to explanation, redress, or opportunities for participation. This produces not only epistemic injustice but practical disempowerment, frustration, and eventual resignation: the denial of one’s ability to learn from and respond to the systems that shape one’s life.
Perhaps most importantly, AI can discourage moral agency and the capacity to make exceptions – both central to adaptive capacity. Where people once relied on their own judgment and common sense, they are now often expected to comply with algorithmic protocols, even when those protocols do not fit the lived reality of the situation. In the criminal justice system, for example, risk assessment algorithms may recommend harsh sentencing for unemployed youth from marginalized neighbourhoods without considering the root causes of criminality, such as unemployment – potentially linked to automation – or the absence of meaningful prospects for the future. The ability to make exceptions, to recognize potential for rehabilitation, offer second chances, and weigh factors not captured in the data, is crucial not only for the long-term prospects of individuals’ lives, but also for the social cohesion of communities.
Similarly, in social welfare systems, automated eligibility checks may deny support to families in acute need if their circumstances do not match pre-set categories, leaving frontline workers with little discretion to override the decision. In both cases, rigid adherence to algorithmic outputs narrows the space for contextually sensitive moral judgment and pragmatic problem-solving – the very qualities that enable societies to adapt constructively to changing circumstances over time.
AI and the Breakdown of Enabling Environments: Hollowing Out Democratic Institutions
The final pillar of resilience – the enabling environment – includes legal systems, political institutions, and governance frameworks that manage risk, protect rights, and ensure fairness. These institutions play a vital role in structuring relationships, mediating conflict, and upholding the dignity and freedom of all members of a community. Yet AI is developing in ways that increasingly outpace, bypass, or undermine these structures.
The opacity of many AI systems – often protected by intellectual property law or trade secrecy – makes it difficult for affected individuals or even regulators to understand how governance and administrative decisions are made. This undermines accountability and procedural justice. The fact that AI often learns from biased or incomplete data further exacerbates this challenge: when systemic injustices are coded into algorithmic logics, injustice is naturalized and rendered invisible.
Moreover, the development and deployment of AI are currently dominated by a handful of powerful corporate actors and state agencies. These actors often operate transnationally, with minimal democratic oversight. In many contexts, the use of AI in surveillance, border control, or protest monitoring has actively eroded civil liberties, especially for already vulnerable groups. In China’s Xinjiang region, for instance, advanced facial recognition systems have been deployed to monitor and detain Uyghur Muslims – a clear violation of freedom of movement and religious practice. In Greece, EU-funded drones, cameras, and AI-powered systems monitor migrant movements at land borders with Turkey, often in non-transparent ways that raise serious human rights concerns around forced displacement and the criminalization of migration. In the occupied Palestinian territories, Israeli authorities have deployed AI-driven facial recognition systems, such as the “Blue Wolf” and “Red Wolf” programs, to identify and track Palestinians across checkpoints and in public spaces. Elsewhere, during protests in countries such as Russia, India, and Iran, AI-based surveillance tools – including automated identification systems such as facial recognition and gait recognition, which analyses an individual’s walking patterns – have been used to identify and target demonstrators, severely undermining the rights to freedom of expression and assembly.
The use of algorithmic systems to monitor, profile, or target groups without nuanced context or recourse undermines both human rights and social resilience. As enabling environments falter under these pressures, the networks that once sustained peaceful interaction become increasingly prone to rupture. Trust in democratic institutions diminishes. Recourse to redress becomes elusive. People lose faith that the system can protect them – and, in the absence of peaceful pathways for change, may turn instead to resistance, withdrawal, or violence.
Conclusion: Reweaving Peace in the Age of AI
While not inherently violent, AI systems can erode the very conditions – resources, social capital, adaptive capacity, and enabling environments – that make societies resilient to conflict and crisis. By disrupting access to resources, fragmenting networks, constraining problem-solving capacity, and weakening democratic institutions, AI risks hollowing out the relational fabric that holds communities together. These risks rarely unfold through sudden shocks; they emerge quietly, through disconnection, disempowerment, and dehumanization. But they are not inevitable.
AI is not a single, deterministic force. Its impacts depend on the purposes it serves, the values embedded in its design, and the contexts in which it is deployed. As the following positive examples show, AI can also be used across the different pillars of resilience to promote peace rather than undermine it:
Equitable access to resources: In contexts where tensions between communities are heightened – such as between herders and farmers competing for scarce land – AI-powered climate forecasting and land-use planning tools can help anticipate droughts, optimize grazing rotations, and reduce disputes over resources. In Kenya, for example, platforms like Virtual Agronomist and PlantVillage deliver localized farming advice in multiple languages, enabling smallholders to improve yields.
Building social capital across communities: AI-assisted dialogue platforms, such as Pol.is, which has been used in Taiwan, Canada, Singapore, the Philippines, Finland, Spain, and other countries, have shown how technology can bridge divides by mapping areas of agreement across large and diverse groups. In deeply polarized contexts, such tools can strengthen bridging social capital by fostering connections between people who might otherwise never meet across ethnic, religious, or socioeconomic lines.
Enhancing adaptive capacity through inclusive planning: In Brazilian cities like Porto Alegre and Belo Horizonte, AI-enhanced participatory budgeting allows residents – including those from marginalized areas – to propose, debate, and vote on public investment priorities. This not only strengthens local problem-solving capacity but also builds the confidence and skills needed for communities to adapt to future challenges.
Strengthening enabling environments through digital democracy: Civic platforms such as Decidim in Barcelona offer transparent, open-source systems for citizens to co-create policies, draft proposals, and hold institutions accountable. Such tools, when managed well and in an inclusive manner, can revitalize trust in governance and enable more direct, participatory forms of democracy.
By aligning AI’s use with principles that safeguard human dignity and freedom, we can help transform networks toward trust rather than mistrust, and toward cooperation rather than polarization. In doing so, AI could become not a threat to sustainable peace, but an active contributor to it. The task, then, is not merely to make AI systems safer or more robust and efficient. It is to re-anchor them in a vision of peace: a vision that prioritizes human dignity and freedom, especially for the most vulnerable. This may include ethics-by-design and co-creation approaches, but also defining AI-free spaces reserved for humans only. In any case, two ethical frameworks prove indispensable:
First, the ethics of care reminds us that peace is sustained not through control, but through attentiveness to need, vulnerability, and context. Care-oriented approaches encourage the design of systems that are inclusive, responsive, and grounded in human relationships. They invite us to value not only outcomes, but the quality of interactions that get us there.
Second, a human rights approach ensures that peace is not left to benevolent intention alone. Rights frameworks embed dignity and freedom into legal norms and institutional practices, protecting individuals from systemic abuse and enabling meaningful participation in shaping technological futures. They provide structural guardrails such as transparency, accountability, and non-discrimination that are essential for preserving peace under conditions of rapid change and crises.
In this sense, peace in the age of AI is not a technical or regulatory problem alone. It is a moral and political project. It requires rethinking not only the tools we build, but the values we encode and the futures we make possible. If we are to preserve peace in an intelligent machine age, we must begin by defending – and designing for – what makes us human.
[1] Galtung, J. 1969. “Violence, Peace, and Peace Research.” Journal of Peace Research 6 (3): 167–91.
[2] Arendt, H. 2006 [1963]. Eichmann in Jerusalem: A Report on the Banality of Evil. Penguin Publishing Group.
[3] Sen, A. 2014 [1999]. “Development as Freedom.” In The Globalization and Development Reader: Perspectives on Development and Global Change, 525.
[4] See for example Couldry, N., and U. A. Mejias. 2019. The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press.
Photography
Visual representation of the concept of artificial intelligence and digital connection. Author: Pixel-Shot (Shutterstock).