Facebook’s hate-speech-promoting algorithm has led to genocide once, and it could do it again

Safa Ahmed

Over the past few years, Facebook has made itself the centre of media attention. In 2018, it was partly because of the Cambridge Analytica scandal, and partly because Mark Zuckerberg almost nailed his impersonation of a human being during his testimony before the United States Congress. In 2021, after white supremacists stormed the US Capitol, Facebook came under fire for failing to regulate inflammatory content on its platform, paving the way to insurrection.

Now, in 2021, Frances Haugen has landed Facebook in the spotlight yet again, but this time with the most serious allegation yet: that the platform has been used, with Facebook’s knowledge, to commit violent atrocities against vulnerable groups worldwide. 

Haugen blew the whistle on Facebook in the late summer of 2021, leaking tens of thousands of pages of internal Facebook documents to the United States Congress and the Securities and Exchange Commission. For all its filtered pictures and perky, almost psychedelic Meta ads, Facebook far more closely resembles a cesspool - one where fake news, political propaganda, and hate speech go virtually unmonitored, images of dead bodies circulate freely, and anti-minority sentiment online leads to real blood spilled offline.

In the United States, the consequences of Facebook misinformation have been vast, ranging from the annoying (Pizzagate) to the frightening (a failed but nonetheless damaging white supremacist attack on the US Capitol) to the life-threatening (anti-vax and anti-mask propaganda). That is a lot of incessant, widespread, and potentially dangerous misinformation, happening despite the fact that the United States is the only country where Facebook actually puts real money and effort into monitoring fake news and hate speech.

According to the Economic Times, Facebook spent 87% of its 2020 budget for combating misinformation in the United States alone, despite the fact that only 10% of its active users live there. On top of that, the company takes action against only 3-5% of hate speech and a minuscule 0.6% of violent and inciting content, allowing the vast majority of such content to circulate freely.

What, then, happens in countries where the budget for combating hate speech is low or even nonexistent, especially in a time when nationalism is on the rise worldwide? Haugen emphasised this to lawmakers in the United Kingdom, stating that Facebook is “unquestionably making hate worse,” especially in countries where the barrier between speech online and action offline has become frighteningly thin due to an increase in extremism. 

“My fear is that without action, divisive and extremist behaviours we see today are only the beginning,” said Haugen in her testimony before the US Congress. “What we saw in Myanmar and are now seeing in Ethiopia are only the opening chapters of a story so terrifying, no one wants to read the end of it.”

“Terrifying” only barely describes the types of atrocities inflicted on Myanmar’s Rohingya minority by the state military. The first attacks on the minority began in 2017; by 2018, an estimated 25,000 people had been killed and 700,000 had fled. Villages were torched, people were slaughtered by bullets and knives, women were gang-raped, and children were assaulted, to the point where the United Nations accused Myanmar of genocide.

“Kill all you see, whether children or adults,” were the official military orders. But similar calls had been made for years leading up to the genocide, many of them posted to Facebook. One post from 2013 read, “We must fight them the way Hitler did the Jews, damn Kalars [a derogatory term for Rohingya people].” 

Other posts claimed that Islam is a threat to Buddhism and encouraged people to fight back against “jihadi attacks.” Time and again, over a period of several years, Facebook did nothing. 

In December of 2021, Rohingya refugees based in North America and the United Kingdom sued Facebook for £150 billion for its “widely recognised and reported” role in the genocide. 

A class action complaint states that Facebook was “willing to trade the lives of the Rohingya people for better market penetration in a small country in south-east Asia… In the end, there was so little for Facebook to gain from its continued presence in Burma, and the consequences for the Rohingya people could not have been more dire. Yet, in the face of this knowledge, and possessing the tools to stop it, it simply kept marching forward.”

Even as Facebook scrambles to make what few amends it can in the face of one genocide, the company continues to ignore its role in calls for mass violence brewing in other countries.

Like the Rohingya in Myanmar, the Tigrayan ethnic group has been demonised as a whole in Ethiopia. Since November of 2020, the nation has been embroiled in a civil war between the central government and rebel groups from the ethnically defined Tigray region.

While atrocities have been committed by both the rebels and the military, there is a clear power imbalance: the Ethiopian military has been called out by Human Rights Watch and Amnesty International for targeted atrocities against Tigrayan civilians.

CNN reports that thousands of people have died in the fighting; refugee camps have been razed; homes have been looted; and on multiple occasions, the central government has subjected the region to internet blackouts to put a chokehold on communication. 

Sexual violence, extrajudicial killings, and massacres have also become rampant. Since the beginning of the conflict, over 2 million people have been forced to flee their homes. The violence is severe enough that last year, the US State Department prepared a declaration labelling it a genocide against the Tigrayan people.  

As in Myanmar, the violence isn’t actualised through weapons alone. Fear-mongering and hateful messages against the Tigrayan minority have flooded Ethiopian Facebook, normalising violent rhetoric for its 6 million users.

It’s not just everyday citizens, either. State media commentator Muktar Ousman celebrated the deaths of two Tigrayan university professors on Facebook, where he has 210,000 followers. Dejene Assefa, an activist who is known for his television appearances in Ethiopia, posted a fear-mongering message for his 120,000 followers: “The war is with those you grew up with, your neighbour. If you can rid your forest of these thorns … victory will be yours.” 

Prime Minister Abiy Ahmed himself has lashed out on social media, calling Tigrayan rebels “cancerous” and “weeds” - language that is echoed by other government-backed Facebook accounts. 

Timnit Gebru, a former Google data scientist and expert on bias in AI, was interviewed by the publication Rest of World about the very real consequences of Facebook-circulated hatred in Ethiopia. She described the content of Ethiopian Facebook as “some of the most terrifying I’ve ever seen anywhere.”

“It was literally a clear and urgent call to genocide,” she said. “This is reminiscent of what was seen on Radio Mille Collines in Rwanda.”

She added, “It was not one random person with 10 or 100 or 1,000 followers. It was a group of leaders, with hundreds of thousands of followers, clearly instructing people what to do. The most shocking part was the urgency, and the horrifying way in which the words were designed to make people act now.”

Terrifying, horrifying, shocking. In the eyes of most rational people, this type of content would be seen as unacceptable on a platform that claims to be all about bringing people together. Facebook, however, has a very different idea of what constitutes “good” and “bad” content, as Haugen described in a 60 Minutes interview. 

“There were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimise for its own interests, like making more money,” she said. 

She was referring specifically to Facebook’s engagement-based algorithms, which promote the sharing and spread of popular content. The problem is that inflammatory content is far more likely to be shared than positive content - something Facebook knows, but doesn’t change. From a business standpoint, a decrease in divisive content means a decrease in engagement, which means a decrease in ad revenue.

In other words, Facebook quite literally profits off of hate. 

“The dangers of engagement-based ranking are that Facebook knows that content that elicits an extreme reaction from you is more likely to get a click, a comment, or a reshare,” said Haugen.
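To make that mechanism concrete, here is a minimal sketch of what an engagement-based ranker can look like. The field names and weights are assumptions made purely for illustration, not Facebook's actual code; the point is structural: a feed ranked only on predicted reactions, comments, and reshares will surface whatever provokes the strongest response, because nothing in the score measures safety or truth.

```python
# A minimal, hypothetical sketch of engagement-based ranking.
# The field names and weights below are illustrative assumptions,
# not Facebook's actual system.

def engagement_score(post):
    # Rank purely on predicted reactions, comments, and reshares;
    # nothing in this score asks whether the content is safe or true.
    return (1.0 * post["predicted_reactions"]
            + 2.0 * post["predicted_comments"]
            + 3.0 * post["predicted_reshares"])

posts = [
    {"id": "neutral_update", "predicted_reactions": 40,
     "predicted_comments": 5, "predicted_reshares": 2},
    {"id": "inflammatory_rumour", "predicted_reactions": 90,
     "predicted_comments": 60, "predicted_reshares": 45},
]

# The feed surfaces whichever post scores highest; here the
# inflammatory post wins because outrage drives engagement.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # ['inflammatory_rumour', 'neutral_update']
```

Under these made-up numbers, the inflammatory post outranks the neutral one by a wide margin - exactly the dynamic Haugen describes.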

This phenomenon has grown especially deep-rooted in India, the world’s largest democracy as well as Facebook’s largest market. Over 340 million people use Facebook daily in India, while WhatsApp has a staggering 487 million Indian users. 

Research shows that Facebook in India is full of viral hate content. Hateful posts, videos of mob lynchings, and images of dead bodies are circulated regularly and enthusiastically. Browse through the comments on an activist’s Facebook or Instagram page and the hateful messages are quick to jump out. 

Then there are the posts calling for the transformation of India from a secular democracy to a Hindu state. 

A popular video, reposted multiple times by different users, shows a snippet of a Hindu supremacist leader declaring, “My only goal in life is to exterminate Islam and kill Muslims.” He receives applause not only from his recorded audience, but also the hundreds of thousands of people online who support his message. The reactions to these posts are a colourful wave of likes, hearts, and laughing emojis. 

It goes without saying that calling for the mass death of a minority group goes against community guidelines. Despite this, Facebook rarely takes down such content, and engagement with hatred remains high. As a result, the offline consequences have worsened by the day: during a Hindu nationalist event in December, influential Hindu extremist leaders repeatedly called for a Muslim genocide.

“If you want to eliminate their population, then kill them… Even if 100 of us are ready to kill 20 lakhs (2 million) of them (Muslims), then we will be victorious,” said Sadhvi Annapurna Maa, general secretary of the Hindu Mahasabha political party.

“[Facebook] has realised that if they change the algorithm to be safer, people will spend less time on the site, they'll click on less ads, they'll make less money,” said Haugen. “Its own research is showing that content that is hateful, that is divisive, that is polarising - it’s easier to inspire people to anger than it is to other emotions.”

Mark Zuckerberg has responded to these allegations with indignation. 

“If we didn't care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space - even ones larger than us?” he wrote in a statement posted on Facebook itself. Addressing his employees, he added, “I know it's frustrating to see the good work we do get mis-characterised.”

There was no mention in his statement of the Rohingya, Indian Muslims, or Tigray people, who know far better than Mark Zuckerberg what it means to be mis-characterised. The group of employees Zuckerberg has tasked with fighting hate might be big, but it’s not big enough, and it’s certainly not effective enough. 

The world has promised “never again” far too many times, but it seems that to Zuckerberg and the Facebook/Meta machine, those words are just that: words.
