Of the 1,000 hate speech posts surveyed by Equality Labs, the largest share of posts - 37% - was Islamophobic, while 13% were casteist.

“Those people whose mothers took their pants (salwars) off after seeing swords in the hands of Mughals today proudly claim to be Muslim,” reads a Facebook post, allegedly quoting a BJP leader. The user then asks if other readers agree with the statement.

Many would agree that this is hate speech at multiple levels – it is misogynistic and Islamophobic, and risks inciting violence against minorities. However, Facebook – the platform where it was made – is not as sure. While the post was reported several times, the platform repeatedly restored it.

This is what Equality Labs, a South Asian community technology organisation that works to end caste apartheid, gender-based violence, Islamophobia and religious intolerance, found. It studied 1,000 hate speech posts on Facebook, collated over a period of four months in six languages: Marathi, Telugu, Kannada, Tamil, Hindi and English.

The report titled “Towards the tipping point of violence: caste and religious hate speech” reveals that most of the hate speech posts analysed were Islamophobic, accounting for 37% of the posts. Of these Islamophobic posts, 6% were anti-Rohingya. Next in the posts surveyed were fake news (16%), casteism and gender/sexuality hate speech (13% each), violence (11%), and anti-religious minorities (9%).

The report is written by Thenmozhi Soundararajan, Abishek Kumar, Priya Nair and Josh Greely of Equality Labs, who supervised an inter-faith, inter-caste and multi-racial research team composed of researchers from over 15 countries and included persons from Indian minorities.

Inaction against hate speech

Facebook was found to be most likely to remove a post if it was violent, and least if it was casteist.

With the use of several examples, Equality Labs has demonstrated how Facebook does not take adequate action against hate speech. Further, while the company claims to review the majority of reports in less than 24 hours, the report found that the median time for Facebook India to respond to a report was twice that, at 48 hours.

What’s more, even if one were to report these posts, there was a good chance that they would be restored. 43% of posts reported by Equality Labs were restored an average of 90 days after they were reported. Alarmingly, 100% of these restored posts were found to be Islamophobic in nature.

Further, 93% of all hate speech posts reported on the platform were found to remain there. 11% of the reported posts received no response from Facebook.

Ironically, while hate speech remained on the platform and was tough to report, the company had disabled the personal accounts of more than a dozen leading journalists and prominent Dalit Bahujan pages like National Dastak and Ambedkar’s Caravan due to false reporting. “Also significant is that numerous pages related to Kashmir by Kashmiri activists, journalists and press outlets with the outlet Free Press Kashmir were taken down from Facebook for over 48 hours. Even journalists who mention Kashmir face censorship,” the report says.

Islamophobic posts

One of the examples given is a post that featured a common white supremacist character, Pepe the Frog, depicted as a Hindu nationalist standing in front of the infamous image of the 1992 demolition of the Babri Masjid in Ayodhya by Hindu nationalist mobs. The image is sardonically captioned “Good evening Jai Shri Ram.”

“To glorify a pogrom that led to the desecration of a place of worship and the murder and rape of innocent Muslims is a vile act of hate and a clear violation of Facebook standards. The fact that this image was returned to the platform is just shocking,” the report argues. Further, 10% of all Islamophobic posts were about the Babri Masjid demolition.

Researchers also pointed out a worrying trend when it came to anti-Rohingya posts: “These posts follow patterns of violation that are extremely similar to those seen in Myanmar just before and during the Rohingya genocide.” One such post referred to the Rohingya as cockroaches that should be “trampled”, and a fake news post said that the Rohingya were slaughtering and cannibalising Hindus.

Posts warning against ‘Love Jihad’ were also found to be unmoderated on the platform, as were posts that used offensive and sexist slurs against Muslims.

Apart from anti-Muslim posts, there were also anti-Christian and anti-Buddhist posts that were found unchecked on Facebook.

Casteism

When it came to caste, not only posts but entire pages, such as one called ‘Anti-chamaar group’, were found to exist on Facebook despite numerous attempts to report them. 40% of all casteist posts from the 1,000 surveyed were anti-reservation.

There were also several posts which showed images of Dr BR Ambedkar photoshopped into offensive casteist memes and posts.

Researchers pointed out, “To understand why this post is so deeply offensive, one must understand the work conditions of Dalits. These castes have historically been forced into forms of slavery requiring them to do the filthiest jobs, like handling dead bodies and cleaning toilets,” adding that thousands of Dalits die every year in manual scavenging jobs even though it’s illegal in India.

Due to the lack of cultural context, Facebook also does not see words like ‘bhangi’ and ‘chamaar’ as offensive. Further, hatemongers have found ways of avoiding tracking. One person, for example, used “Da – lit” instead of “Dalit” in an offensive meme.

Hate speech based on gender, sexuality

When it came to hate speech based on gender and sexuality, 25% of the posts were transphobic. “12% of the posts in this category made direct reference to violent rape, either calling for rape or glorifying or trivialising rape.”

One such post read: “Trans people are not those who wear a saree and roam around in markets. Trans people are those who despite being Hindus, oppose Hindutva.”


Fake news

There is no dearth of fake news on Facebook, and the platform was least responsive when these posts were reported. However, “the sophistication of these posts can make it difficult for the user to identify the misinformation […] Further, these posts are amplified in the Facebook algorithm by a sophisticated system of inter-connected pages,” the study says.

50% of the fake news posts misreported details about a current event, while over 20% spread falsified South Asian history. For instance, one post depicted seals from the Indus Valley civilisation showing people crossing sticks or swords, falsely claiming that this was dandiya, a Gujarati Hindu dance celebration, to argue that Hinduism has existed for 5,000 years.

Fake news posts can prove to be very dangerous as well, and directly incite violence.

Lack of historical, social and cultural context

Facebook’s failure to take down such hate speech is rooted in a lack of diversity and in community guidelines that are not in line with the evolving Indian context. The guidelines are also unavailable in many local languages - in some cases, only the headings were translated while the body of the text remained in English, Equality Labs found. This, even though India has 294 million active accounts on the platform. Further, the researchers say that these community standards were formulated with little to no collaboration with civil society experts or Internet Freedom advocates from Indian minorities.

“Civil society and Internet Freedom advocates could also assist in creating culturally competent training that enables all Facebook users to understand how its standards, coupled with rigorous content moderation, can work together for a safer Facebook India,” the report states. “The Indian religious and socio-political contexts are complex enough to require their own review and co-design process to adequately address safety,” it adds.

Equality Labs also found that Facebook staff lacked cultural competency and literacy in the needs of caste, religious and gender/queer minorities. One of the major factors behind this, researchers say, is the lack of diversity among the staff and contractors of Facebook India. “Hiring of Indian staff alone does not ensure cultural competence across India’s multitude of marginalised communities,” the report added.

Difficulties in reporting

Apart from these issues, Facebook’s reporting mechanisms don’t always allow for accurate reporting, and are inconsistent: if you get one set of reporting options today, there is no guarantee that you will get the same screen and options the next time you report something on Facebook.

For instance, if you want to report a post for casteist hate speech, you may not always get the option of choosing “caste” in the reasons for reporting. In fact, caste as a protected category was not even present in most reporting mechanisms in the Indian market.

“Because of our continuous advocacy and monitoring, in late March 2019, we finally uncovered one sample workflow that offers a way to report casteist hate speech. […] In fact, many members of our team of researchers have not been able to find this reporting screen again,” the report says. “We would recommend that Facebook cease in experimenting with workflows on vulnerable Indian minority communities. While we appreciate that software development practices require testing between user groups, hate speech is too grave a matter to allow for experimentation in real time,” it adds.

Facebook’s response

Responding to Equality Labs’ report, a Facebook spokesperson told TNM, “We recognise, respect and seek to protect the rights of minorities and communities that are often marginalised, both in India and around the world.” The spokesperson added that Facebook takes hate speech seriously, and removes such content as soon as they become aware of it. “To do this, we have invested in staff in India, including content reviewers, with local language capabilities and an understanding of the country’s longstanding historical and social tensions. We’ve also made significant progress in proactively detecting hate speech on our services before anyone reports it to us, to help us get to potentially harmful content faster.”

Facebook has 30,000 people around the world who moderate safety and security issues on the platform, including 15,000 dedicated content reviewers, some of them in India. The team reportedly supports most of the regional languages in India, including Hindi, Tamil, Telugu, Kannada, Punjabi, Urdu, Bengali and Marathi. The company also says that 95% of problematic content on Facebook was detected by AI without external reporting; however, it admits that a lot more remains to be done regarding hate speech.