
Five Takeaways From Facebook’s Controversy on Fake News and Hate Speech in India

Facebook has been embroiled in controversy over its failure to curb hate speech and misinformation across the world. The social media giant has allowed anti-Muslim propaganda to spread in India, its largest market with over 340 million users, despite being made aware of its impact through internal research.


Leaked documents, acquired by a consortium of American publications, detail several instances of Facebook employees flagging harmful content on the platform while senior executives chose to remain silent.

Earlier this month, Facebook found itself in the eye of a storm after a former employee, Frances Haugen, shared leaked documents with the Wall Street Journal and filed a complaint with the Securities and Exchange Commission in September, alleging that Facebook prioritized monetary benefits over user safety across its family of apps, including Instagram and WhatsApp.

A second whistleblower, a former employee on Facebook’s Integrity team, recently came forward with additional evidence that supplemented and corroborated Haugen’s allegations. They said that the company turned a blind eye to misinformation and hate speech as it did not want to upset former United States President Donald Trump.

Silverscreen India brings you five takeaways from Facebook’s scandal in India:

Facebook aided the ruling party’s propaganda

Much of the material circulated in Facebook groups promoted the Rashtriya Swayamsevak Sangh (RSS), the parent organization of the Bharatiya Janata Party (BJP), and its anti-Muslim propaganda.

The leaked documents, largely internal research conducted by Facebook employees, detail incidents such as the riots that followed the anti-CAA (Citizenship Amendment Act) protests at Shaheen Bagh in Delhi. The violence was an outcome of a video uploaded and shared by BJP leader Kapil Mishra in February 2020, in which he called on his supporters to remove the mostly Muslim protesters demonstrating against the act from an area of New Delhi.

Consequently, violent clashes ensued in parts of North-East Delhi between pro-CAA and anti-CAA groups, resulting in the deaths of 53 people, a majority of whom were Muslims. Facebook removed the video only after it had gathered thousands of views and shares.

Other such instances of organized violence targeting Muslims include online campaigns against “Love Jihad”, the Hindu extremist conspiracy theory that Muslim men use interfaith marriages as a means to convert Hindu women. Calls to separate interfaith couples were taken down only after several hundred shares.


Misinformation peaked ahead of the 2019 general elections

According to the internal reports, several Facebook employees travelled to India and conducted studies in the run-up to the 2019 general elections. For a test titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalistic Messages”, an employee created an account that followed only pages and content recommended by Facebook.

Active for a period of three weeks, the test coincided with the Pulwama attack of February 14, 2019, which killed over 40 Indian security personnel in Kashmir. What followed was a “near constant barrage of polarizing nationalist content, misinformation, and violence and gore,” the employee wrote in a company memo.

In addition, Facebook witnessed a spike in fake accounts, or bots, linked to various political groups. This observation, along with other issues such as voter suppression, did not make it into the company’s internal report, the Indian Election Case Study.

In a separate report produced after the elections, Facebook found that over 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake or inauthentic”. One inauthentic account had amassed more than 30 million impressions, according to a New York Times report.

“White Lists” separated the elite from the masses

Both whistleblowers said that Facebook safeguarded the elite and political classes in both India and the US.

The Indian Election Case Study noted that Facebook had created a “political white list to limit P.R. risk”, essentially a list of politicians who received a special exemption from fact-checking. And although a report titled “Lotus Mahal” had made Facebook aware of the RSS’s tendency to spread communal hatred, the company refrained from declaring it a dangerous organization due to “political sensitivities” that could affect its functioning in the country.

The same treatment was meted out to the Bajrang Dal, another right-wing Hindu outfit with links to the BJP, which has been pushing anti-Muslim narratives on the platform. Facebook has considered designating the group a dangerous organization but has not yet done so.


“Join the group and help to run the group; increase the number of members of the group, friends,” said one post seeking recruits on Facebook to spread Bajrang Dal’s messages, as reported by the New York Times. “Fight for truth and justice until the unjust are destroyed.”

Facebook’s “meaningful social interactions” exacerbated misinformation during the pandemic

Facebook introduced a plan focused on “meaningful social interactions” on the platform, which ended up amplifying misinformation, especially during the pandemic in 2020.

Championed by Facebook co-founder and CEO Mark Zuckerberg, the Meaningful Social Interactions setting is part of the News Feed and “showed fewer viral videos and more content from friends and family,” Zuckerberg said while defending Facebook against Haugen’s allegations.

However, the plan backfired when the platform began brimming with fake news alleging that Muslims were at the heart of the surge in COVID-19 cases in India, after videos of a gathering of the Muslim missionary group Tablighi Jamaat went viral during the coronavirus outbreak.

Soon, social media users jumped on the bandwagon alongside Indian political leaders, calling out Muslim leaders, circulating pre-2020 videos of Muslim gatherings, and slandering the community online. Hashtags such as #CoronaJihad, #NizamuddinIdiots, and #BanJahilJamat circulated despite fact-checking organizations debunking the narratives.

Little investment in local-language content moderation

The root cause of the majority of issues that went unaddressed has been attributed to the lack of local-language content moderators in India.


While Facebook recognised 22 official Indian languages, with Hindi and Bengali being two of the most widely used, research found that much of the content in these languages was “never flagged or actioned” since Facebook lacked the “classifiers” and “moderators” to handle it.

The issue of the absence of local-language moderators and fact-checkers was first raised in January 2019, when an assessment conducted prior to the test-user experiment concluded that Facebook’s misinformation tags were not clear enough for users to distinguish hate speech and fake news. The shortage of fact-checkers also meant that the majority of content went unverified.

Part of the problem, as per the internal documents, lies in the fact that Facebook allotted 87 percent of its funds for tackling misinformation to users in the US and the rest to other countries, despite American users making up only about 10 percent of its user base.