Online activity can lead to violence, including genocide. As Nuredin Ali writes, effective content moderation can help.
How Tech Platforms Fail to Protect Societies in Conflict Regions
From our series Democratizing AI for the Global Majority
By Nuredin Ali
January 21, 2026
While social media platforms have become an integral part of people's lives around the world, from developed countries to rural villages, the way these platforms are used varies significantly across regions. The harms they cause likewise depend on context: in some places, such as Europe, Central Asia, and Canada, concerns revolve primarily around mental health, while in others, these platforms have fueled incitement to hate and violence.
The Tigray genocide (2020–2022) is a central example of such failures. The deadliest armed conflict of the 21st century, the genocide claimed approximately 600,000 lives, and more than 100,000 women were victims of rape. (In 2024, the NewLines Institute confirmed that genocide had been committed against Tigrayans.) Located in northern Ethiopia, the Tigray region comprises approximately six percent of the country's population; Ethiopia is the second most populous country in Africa and the most populous landlocked country in the world. During the genocide, social media served as a battleground: platforms such as Facebook and Twitter (now X) hosted widespread incitement to hate and violence that directly led to the loss of individual lives and eventually contributed to the genocide. Yet despite platforms' central role in this crisis, research on it remains scarce.
How does online activity lead to genocide? Consider a violent post calling for murder or harm. People may ignore an individual post on any given day. But when such posts multiply, circulate widely, and gain traction, and users are exposed to similar content again and again, violence can become psychologically normalized and the audience radicalized. Perceptions can shift toward viewing an individual or group as a legitimate target and an imminent danger. This phenomenon is particularly rampant in countries with deep ethnic divisions like Ethiopia, where platforms' role in facilitating violence during the genocide has been extensively documented. In 2021, for example, Meareg Amare, a Tigrayan professor, was murdered by militiamen after multiple Facebook posts labeled him a traitor and called for his killing; his son and other victims are currently suing Facebook for $2 billion, seeking justice and accountability from the platform.
Social media platforms can and must act to protect vulnerable societies from such harms, through measures ranging from adopting flexible policies to investing more resources in human and technological moderation, both of which play central roles in platform safety. Because platforms seek to increase advertising revenue, they have an incentive to moderate content effectively: advertisers don't want their products displayed alongside toxic or violent content. Yet these platforms' approach to moderation is insufficient. They commonly outsource moderation to specialized business process outsourcing (BPO) companies. While some content, like adult content, typically doesn't require contextual expertise to identify and take down, in multi-ethnic societies where social media is used to incite hate and violence, specific cultural and contextual expertise is necessary.
In “The Role of Expertise in Effectively Moderating Harmful Social Media Content,” my colleagues at the Distributed AI Research Institute (DAIR) and I examined what effective moderation requires in conflict settings. We collaborated with seven contextual experts (activists, journalists, and a former content moderator) and interviewed 15 current content moderators, including some who had handled genocidal content.
We discovered stark differences between the expertise platforms currently screen for in content moderators and what our group of experts and moderators identified as valuable. Platforms and their outsourcing companies currently prioritize language skills (in other words, whether a potential moderator speaks the relevant language), superficial cultural awareness (such as "Do you know current affairs in Ethiopia?" or "Are you familiar with current affairs in the Oromo region?"), and the results of mental health resilience assessments.
We grouped the expertise and skills that are valuable for effective content moderation into eight categories, including specific knowledge of dialects, in-depth understanding of cultural contexts, teamwork skills, and familiarity with specific networks and social media platforms. Understanding the cultural context in which a particular term is used, for instance, can make a significant difference. The term "ወይነ/ወያኔ" or "Woyane" can be interpreted differently across Tigray, Eritrea, and Ethiopia. In Tigrinya, it means "revolutionary," while in Eritrea and the rest of Ethiopia it may be used to refer to the Tigray People's Liberation Front (TPLF), a political party. This distinction can significantly impact moderation decisions, because political parties may not receive the same protections as an ethnic group: a post attacking "Woyane" reads very differently depending on whether it targets the TPLF or Tigrayan society as a whole. Effective teamwork is also essential for handling certain types of posts, as no one person can be proficient in evaluating all content. Within our group of experts labeling potentially hateful posts over multiple rounds, there was initially disagreement in as many as 71 percent of cases; through five rounds of deliberation, that figure fell to 40 percent, underscoring the importance of collaborative decision-making.
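To make that measurement concrete: disagreement here simply means that not every annotator assigned the same label to a given post. The minimal Python sketch below shows how such a rate could be tracked round by round; the function, labels, and toy data are hypothetical illustrations, not the study's actual pipeline.

```python
# Hypothetical sketch (not the study's pipeline): track how often a panel
# of annotators disagrees on post labels across deliberation rounds.
from typing import Dict, List

def disagreement_rate(labels_per_post: List[List[str]]) -> float:
    """Fraction of posts on which the annotators did not all agree."""
    disagreed = sum(1 for labels in labels_per_post if len(set(labels)) > 1)
    return disagreed / len(labels_per_post)

# Toy data: 4 posts, 3 annotators, labels recorded before and after deliberation.
labels_by_round: Dict[int, List[List[str]]] = {
    1: [
        ["hate", "hate", "neutral"],     # disagreement
        ["hate", "neutral", "neutral"],  # disagreement
        ["hate", "hate", "hate"],        # agreement
        ["neutral", "hate", "hate"],     # disagreement
    ],
    2: [
        ["hate", "hate", "hate"],
        ["neutral", "neutral", "neutral"],
        ["hate", "hate", "hate"],
        ["neutral", "hate", "hate"],     # still unresolved
    ],
}

for round_num in sorted(labels_by_round):
    rate = disagreement_rate(labels_by_round[round_num])
    print(f"Round {round_num}: disagreement on {rate:.0%} of posts")
# Round 1: disagreement on 75% of posts
# Round 2: disagreement on 25% of posts
```

In the study's deliberation process, the drop from 71 to 40 percent reflects exactly this kind of round-over-round convergence as experts discussed contested posts.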
Exploitative working conditions and rigid organizational hierarchies also undermine effective content moderation. Recently, some moderators have been speaking out about these conditions through the Data Workers' Inquiry project. Moderators often have little voice in shaping policies, and they risk wage reductions and job insecurity if they take issue with how certain content is handled. These dynamics harm societies and can fuel conflict when the small number of moderators covering a region are unable to influence policy decisions. This is to say nothing of the broader problem of resource scarcity: a handful of moderators are tasked with reviewing millions of users' posts daily. As a result, moderators are worn down not only by the disturbing content they must review, but also by the oppressive organizational structures in which they operate.
These failures are systemic, harming not only content moderators but also the societies where the content circulates. Platforms' incentives are tied to revenue, and when revenue and advertiser pressure in a region are low, investment in safety often lags, with real human costs.
The Tigray genocide was, in part, actively facilitated via social media platforms. Content moderation is difficult, and it may not be possible to take down every piece of harmful or violent content. Still, current efforts remain profoundly inadequate, and the people who run these platforms are well aware of the problem. To prevent another platform-facilitated genocide and to incentivize safety, social media platforms and their outsourcing companies must enact structural and systemic change, enforced by policymakers and governments. To that end, our study recommends some practical steps: updating recruitment to ensure dialectal diversity among moderators who speak the same language; removing punitive measures against moderators; establishing and enforcing standards for responsible and ethical outsourcing; and publishing detailed annual reports that disclose companies' moderation practices (as they already do for the European Union under the Digital Services Act). At DAIR, my team continues to investigate these problems in order to develop better alternatives.
Nuredin Ali is a PhD candidate at the University of Minnesota and a researcher at the Distributed AI Research Institute (DAIR). His research focuses on social media and mental health. He explores how we can design and develop approaches to effectively understand the mental well-being of humans working in high-exposure digital environments.