Facebook in India failed to flag hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, despite being made aware of the problem through internal research, according to leaked documents obtained by the Associated Press.
The research, dating back to 2019, highlighted the social media giant's failure to flag abusive content, especially in cases involving members of Prime Minister Narendra Modi's ruling Bharatiya Janata Party.
The leaked documents include an internal company report titled "An Indian Test User's Descent into a Sea of Polarizing, Nationalistic Messages," written by a Facebook employee who created a test user account in 2019.
Active for three weeks, the test ran parallel to the February 14 Pulwama attack, which killed over 40 Indian soldiers in Indian-occupied Kashmir. What followed was a "near constant barrage of polarizing nationalist content, misinformation, and violence and gore," the employee said in a company memo.
Facebook-recommended groups were flooded with fake news, anti-Pakistan rhetoric and Islamophobic content, much of it graphic. The employee further noted "blind spots," particularly in "local language content."
According to the documents, Facebook identified India as one of the most "at risk countries" in the world and flagged both Hindi and Bengali as priorities for "automation on violating hostile speech." Yet it did not have enough local-language moderators or content-flagging systems in place to curb the spread.
In a statement to the AP, Facebook said it has "invested significantly in technology to find hate speech in various languages, including Hindi and Bengali," which it said has "reduced the amount of hate speech that people see by half" in 2021.
“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a company spokesperson said.
Facebook's claims, however, are at odds with reports of violence against minority communities in a country where Facebook has a user base of over 300 million, while its messaging offshoot, WhatsApp, has over 400 million Indian users. In February 2020, BJP leader Kapil Mishra uploaded a video in which he called on his supporters to remove mostly Muslim protesters from a road in New Delhi if the police didn't, referring to the then-ongoing anti-CAA (Citizenship Amendment Act) Shaheen Bagh protests in Delhi.
Consequently, violent clashes ensued between pro-CAA and anti-CAA groups in the northeast areas of the national capital, resulting in the deaths of 53 people, the majority of them Muslims. Facebook removed the video only after it had garnered thousands of views and shares.
Other such instances of organized violence targeting Muslims include the online campaigns around "Love Jihad," the Hindu extremist conspiracy theory that Muslim men use interfaith marriages to convert Hindu women, and its recent pandemic-era equivalent, "Coronajihad," which blamed the community for a surge in COVID-19 cases.
In a document titled "Lotus Mahal," the company noted that several individuals linked to the BJP had multiple Facebook accounts aimed at promoting anti-Muslim propaganda.
The research found that much of this content was "never flagged or actioned" because Facebook lacked "classifiers" and "moderators" in Hindi and Bengali. Facebook said it added hate speech classifiers in Hindi starting in 2018 and introduced them in Bengali in 2020.
The issue of absent local-language moderators and fact-checkers was first raised in January 2019, when an assessment conducted prior to the test user experiment concluded that Facebook's misinformation tags weren't clear enough for users, leaving them unable to distinguish misinformation from hate speech and fake news. The lack of fact-checkers meant that the majority of the content went unverified.
Users told researchers that “clearly labeling information would make their lives easier.”