Meta, Facebook’s recently renamed parent company, released data on the prevalence of bullying and harassment on both Facebook and Instagram. It also disclosed prevalence metrics for hate speech on Instagram and all metrics for violence and incitement.
Facebook’s Community Standards Enforcement Report for the third quarter of 2021 stated that Facebook users saw bullying or harassment content around 14 to 15 times out of every 10,000 views of content on the app between July and September. Instagram users viewed such content five to six times out of every 10,000 views in the same period.
The company added that it removed 9.2 million pieces of bullying and harassment content from Facebook, and 7.8 million pieces of bullying and harassment content on Instagram.
Meta describes bullying and harassment as personal in nature. It “shows up in different ways for different people, from making threats to make personally identifiable information public, to making repeated and unwanted contact”.
Vice President Guy Rosen, during a recorded audio conference, called it a unique policy area and said that identifying bullying and harassment requires context, just as hate speech does.
He added, “It’s very difficult to know what is a bullying post or comment and what is perhaps a light hearted joke without knowing the people involved or the nuance of the situation. That’s why in some cases we will require a user report from those who may experience this behavior in order to even remove something which means we may not take action proactively in those cases.”
The company has also deployed warning screens on both Facebook and Instagram “to educate and discourage people from posting or commenting in ways that could be bullying and harassment,” according to its official statement.
The new policies come against the backdrop of allegations that Facebook and its family of apps prioritised profits over user safety and allowed the spread of hate speech and fake news in countries such as India and Myanmar.
In October, two former Facebook employees came forward as whistleblowers. They shared leaked information indicating that Facebook profited from content that made people angry, which in turn increased user engagement across its family of apps, including Facebook, WhatsApp and Instagram. In addition, the company failed to flag hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, in India, despite being made aware of the problem through internal research, according to leaked documents acquired by a consortium of international media houses.
Facebook is also said to have changed its algorithm and dissolved its civic integrity team after the 2020 US presidential elections, according to the first whistleblower, Frances Haugen. The documents she submitted to the Securities and Exchange Commission in September further stated that Facebook’s public policy team defended a “white list” that exempted elite users from the platform’s ordinary rules.
Accordingly, Facebook announced in October that new technologies developed by the company would be built with the involvement of human rights and civil rights communities to ensure that they are inclusive.
Zuckerberg added that the company will be more transparent about what data is collected and how and when it is used, and will provide easy-to-use safety controls and parental guidance mechanisms.