Facebook Releases Their Community Standards Report

Facebook has just released its second Community Standards Enforcement Report. The Community Standards define what is and isn't allowed on Facebook. This report covers the period from April 2018 through September 2018.

It covers 8 categories of violations, including 2 new ones:

  • Adult Nudity and Sexual Activity
  • Hate Speech
  • Terrorist Propaganda (ISIS, al-Qaeda and Affiliates)
  • Fake Accounts 
  • Spam
  • Violence and Graphic Content
  • Bullying and Harassment - New
  • Child Nudity and Sexual Exploitation of Children - New

How Did Facebook Measure Community Violations?

For violating content, Facebook measured the estimated percentage of content views that were of violating content; for fake accounts, they estimated the percentage of monthly active Facebook accounts that were fake. These metrics are estimated using samples of content views and accounts from across Facebook.
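As a rough illustration of how a sampled prevalence estimate works (a simplified sketch, not Facebook's actual methodology), the snippet below labels a random sample of view records and returns the violating share; `view_log` and `label_view` are hypothetical stand-ins for the sampled data and the labelling step.

```python
import random

def estimate_prevalence(view_log, label_view, sample_size=10_000):
    """Estimate the share of content views that violate a policy by
    labelling a random sample of views. A simplified sketch only,
    not Facebook's actual methodology."""
    sample = random.sample(view_log, min(sample_size, len(view_log)))
    violating = sum(1 for view in sample if label_view(view))
    return violating / len(sample)

# Hypothetical usage: view_log is a list of view records and
# label_view returns True when a view was of violating content.
# prevalence = estimate_prevalence(view_log, label_view)
# print(f"Estimated prevalence: {prevalence:.2%}")
```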

Here are some of the results: 

Adult Nudity and Sexual Activity 

Facebook has seen a moderate increase in the prevalence of adult nudity and sexual activity. This means more nudity was posted on Facebook and Facebook's systems did not pick it up fast enough to prevent an increase in views. In Q3 2018 that meant for every 10,000 content views, an estimated 11 to 13 contained adult nudity and sexual activity that violated community standards.
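Put another way, 11 to 13 violating views per 10,000 corresponds to a prevalence of roughly 0.11% to 0.13%. A quick sketch of that conversion (the helper name is illustrative, not from the report):

```python
def views_per_10k(prevalence_pct: float) -> float:
    """Convert a prevalence percentage into violating views per 10,000 views."""
    return prevalence_pct * 100  # same as (prevalence_pct / 100) * 10_000

# A prevalence of roughly 0.11% to 0.13% corresponds to the
# reported 11 to 13 violating views per 10,000.
print(views_per_10k(0.11), views_per_10k(0.13))  # 11.0 13.0
```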

In Q3 2018, Facebook took action on a total of 30.8 million pieces of content, down from Q2 2018 when they took action on 34.8 million pieces of content. This decrease in content actioned was attributed to a change in Facebook's accounting methodology.

In Q3 2018, Facebook found and flagged around 95.9% of the content they subsequently took action on before users reported it. The other 4.1% was actioned because users reported it to Facebook.
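That 95.9% figure is the proactive rate: the share of actioned content that Facebook found before any user report. A minimal illustration of the calculation, using hypothetical counts chosen to mirror the reported split:

```python
def proactive_rate(flagged_proactively: int, reported_by_users: int) -> float:
    """Share of actioned content that was found before a user report."""
    total_actioned = flagged_proactively + reported_by_users
    return flagged_proactively / total_actioned

# Hypothetical counts that mirror the reported ~95.9% / ~4.1% split.
print(f"{proactive_rate(959, 41):.1%}")  # 95.9%
```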

Hate Speech

Hate speech is defined by Facebook as "a direct attack on people based on protected characteristics - race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender identity and serious disability". Facebook also provides some protections for immigration status.

Facebook defines an attack as "violent or dehumanising speech, statements of inferiority, or calls for exclusion or segregation".

In the Q3 report, Facebook has not yet been able to define a global prevalence metric but is in the process of developing one. Facebook did take action on 2.5 million pieces of content. In Q3 2018, Facebook found and flagged around 51.6% of the content they subsequently took action on before users reported it to them; they acted on the other 48.4% because users reported it.

Facebook's prevalence measurement is slowly expanding to cover more languages and regions, accounting for cultural context and the nuances of individual languages. As this global metric develops, Facebook will be able to be more proactive in this area.

Terrorist Propaganda

Whilst Facebook does not tolerate any content that praises, endorses or represents terrorist organisations, and enforces its community standard against such content, this report only measures actions taken on terrorist propaganda related to ISIS, al-Qaeda and their affiliate groups.

Facebook cannot provide a prevalence metric for this category because they cannot reliably estimate it, but the number of views of this content is very low.

In Q3 2018, Facebook removed 3.0 million pieces of content. Improvements in Facebook's detection technology identified more violations, and several global events increased the amount of violating content on Facebook.

Fake Accounts

In most cases, fake accounts on Facebook are created by bad actors attempting to register accounts in large volumes automatically, using scripts or bots, with the intent of spreading spam or conducting illicit activities such as scams.

Facebook estimates that fake accounts represented approximately 3% to 4% of monthly active users (MAU) on Facebook during Q2 2018 and Q3 2018. Most fake accounts are acted upon within minutes of registration.

In Q2 2018, Facebook disabled 800 million fake accounts, up from 583 million in Q1 2018. In Q3 2018, Facebook disabled 754 million fake accounts, a modest decrease from the previous quarter. 

In Q2 and Q3 2018, Facebook found and flagged 99.6% of the accounts they subsequently took action on before users reported them. 

Violence and Graphic Content 

Facebook defines this content as "content that glorifies violence or celebrates suffering or humiliation of others". Facebook understands that different people have different levels of sensitivity regarding graphic and violent content, so it covers this kind of content with a warning and refrains from showing it to underage viewers.

An estimated 0.23% to 0.27% of views were of content that violated Facebook's standards for graphic violence in Q3 2018. That means of every 10,000 content views, an estimated 23 to 27 contained graphic violence, compared with an estimated 21 to 24 in Q2 2018. The increase was likely due to a slightly higher volume of graphic violence content shared on Facebook.

In Q3 2018, Facebook took action on a total of 15.4 million pieces of content, an increase from 7.9 million pieces of content in Q2 2018. This increase was due to continued improvements in Facebook's enforcement technology.

Facebook's tools may automatically cover photos and videos that they detect as potentially disturbing with a warning. People on Facebook can choose to uncover these photos to view the content if desired.

Bullying and Harassment 

This was the first time that Facebook has reported on this category and their metrics are still in development. 

In Q3 2018, Facebook found and flagged around 14.9% of the content that they subsequently took action on, before users reported it to them. Facebook acted on the other 85.1% because users reported it first. In the cases where Facebook can proactively find and remove violating content without a user reporting it, they have done so. 

Bullying and harassment has a relatively low proactive rate due to the personal nature of the category. This is reflected in Facebook's policies, which often require someone to report content before they can consider it a violation and take action.

Child Nudity and Sexual Exploitation

Facebook can't currently provide this metric because they can't reliably estimate it. The number of views of content that contains child nudity and sexual exploitation on Facebook is very low. Facebook removes much of it before people see it; as a result, the sampling methodology they use to calculate prevalence can't reliably estimate how much these violations are viewed on Facebook.

 

Tags: Facebook, Community Standards Report
