Internal Facebook documents showed that rumours and calls to violence spread on WhatsApp during the Delhi riots in late February 2020, the Wall Street Journal reported. In India, Facebook’s largest market, the documents showed that inflammatory content rose 300% above previous levels in the months after December 2019, the period of the protests against the Citizenship Amendment Act, the report said. According to the internal report, titled ‘Communal Conflict in India’, researchers found that the inflammatory content was primarily targeted at Muslims.
The researchers reportedly found that Hindu and Muslim users in India told the company they were subject to a “large amount of content that encourages conflict, hatred and violence on Facebook and WhatsApp”. This included posts blaming Muslims for spreading the pandemic and claiming that Hindu women were being targeted for marriage. They also found content that could incite violence, including posts comparing Muslims to pigs and dogs, and claims that the “Quran calls for men to rape their female family members”. This content was posted by two groups with ties to India’s ruling political party, the BJP.
The Wall Street Journal also referred to another report, ‘Adversarial Harmful Networks: India Case Study’, which reportedly found that content posted by individuals, groups and pages linked to the RSS was “never flagged” because Facebook lacked the systems to detect content in Hindi and Bengali. The report added that these users were also posting content about ‘Love Jihad’. Facebook researchers wrote that there were groups and pages “replete with inflammatory and misleading anti-Muslim content” on Facebook.
Citing the Facebook report, the New York Times said that of India's 22 officially recognised languages, Facebook had trained its AI systems on five. In Hindi and Bengali, it still did not have enough data to adequately police the content, and much of the material targeting Muslims "is never flagged or actioned."
In another report, The New York Times said internal Facebook documents showed "a struggle with misinformation, hate speech and celebrations of violence" in India, with researchers pointing out that there were groups and pages "replete with inflammatory and misleading anti-Muslim content" on its platform.
In a report published on Saturday, the NYT said that in February 2019, a Facebook researcher created a new user account to see what the social media website would look like for a person living in Kerala.
"For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook's algorithms to join groups, watch videos and explore new pages on the site. The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month," the NYT report said.
The report, which described India as the company's biggest market, was based on disclosures obtained by a consortium of news organisations, including the New York Times and the Associated Press.
The documents are part of a larger cache of material collected by whistleblower Frances Haugen, a former Facebook employee who recently testified before the US Senate about the company and its social media platforms.
The report said the internal documents include reports on how bots and fake accounts tied to the country's ruling party and opposition figures were wreaking havoc on national elections.
The NYT said that in a separate report produced after the 2019 national elections, Facebook found that over 40% of top views, or impressions, in the Indian state of West Bengal were fake or inauthentic. One inauthentic account alone had amassed more than 30 million impressions.
The internal documents also detailed how a plan "championed" by Facebook founder Mark Zuckerberg to focus on "meaningful social interactions" was leading to more misinformation in India, particularly during the pandemic.
The NYT report added that another Facebook report detailed efforts by Bajrang Dal to publish posts containing anti-Muslim narratives on the platform.
Facebook is considering designating the group as a dangerous organisation because it incites religious violence on the platform, the document showed, but it has not yet done so, the NYT report said. The documents show that Facebook did not have enough resources in India and was unable to grapple with the problems it had introduced there, including anti-Muslim posts.
Responding to US media, Facebook spokesman Andy Stone said the social media platform has reduced the amount of hate speech that people see globally by half this year. "Hate speech against marginalised groups, including Muslims, is on the rise in India and globally," Stone said in the NYT report, adding, "So we are improving enforcement and are committed to updating our policies as hate speech evolves online."
In India, "there is definitely a question about resourcing" for Facebook, but the answer is not "just throwing more money at the problem," said Katie Harbath, who spent 10 years at Facebook as a director of public policy, and worked directly on securing India's national elections.
With PTI inputs