Tech giants Facebook and YouTube both published their respective community standards, making Tuesday a banner day for online moderation transparency.
These standards are the rules moderators use to determine which content gets flagged and removed.
Facebook has kept its community standards close to the vest for years, but the social media platform with 2.2 billion users worldwide has been on the defensive about user information, data privacy and the spread of misinformation in recent weeks.
Facebook grouped the types of inappropriate posts and content into six categories: “Violence and Criminal Behavior,” “Safety,” “Objectionable Content,” “Integrity and Authenticity,” “Respecting Intellectual Property,” and “Content-Related Requests.”
Earlier this year, Facebook CEO Mark Zuckerberg said: “We won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools.”
“Publication of today’s internal enforcement guidelines — as well as the expansion of our appeals process — will create a clear path for us to improve over time. These are hard issues and we’re excited to do better going forward,” said Monika Bickert, Vice President of Global Policy Management, in a blog post.
Facebook said it blocks and works to keep the following off its site:
- Terrorist activity
- Organized hate
- Mass or serial murder
- Human trafficking
- Organized violence or criminal activity
In its safety section, Facebook said it will “remove content, disable accounts, and work with law enforcement when we believe there is a genuine risk of physical or direct threats to public safety.”
Other “objectionable content” that will be banned includes hate speech, graphic violence that “glorifies violence or celebrates the suffering or humiliation of others,” adult nudity and sexual activity, and content that “depicts real people and mocks their implied or actual serious physical injuries, disease, or disability, non-consensual sexual touching, or premature death.”
Facebook also addressed misinformation, or “false news,” in its section on integrity and authenticity. Satire is allowed, and falsehoods are demoted rather than deleted: “For these reasons, we don’t remove false news from Facebook but instead, significantly reduce its distribution by showing it lower in the News Feed.”
Overnight, Facebook said it removed or added content warnings to 1.9 million pieces of “ISIS and al-Qaeda” content from January to March, twice as much as in the previous quarter.
“In most cases, we found this material due to advances in our technology, but this also includes detection by our internal reviewers,” said Monika Bickert and Brian Fishman, Facebook’s global head of counterterrorism policy.
Facebook also revealed its counterterrorism team has grown from 150 people to 200, and “will continue to grow.”
YouTube
YouTube announced overnight that more than 8.2 million videos were removed after being flagged as “violative” between October and December 2017.
“At YouTube, we work hard to maintain a safe and vibrant community. We have Community Guidelines that set the rules of the road for what we don’t allow on YouTube,” the community guidelines post says.
“For example, we do not allow pornography, incitement to violence, harassment, or hate speech. We rely on a combination of people and technology to flag inappropriate content and enforce these guidelines. This report provides data on the flags YouTube receives and how we enforce our policies.”
YouTube reported that 75 percent of the videos were removed without receiving any views. The company also revealed that more than 10,000 people work company-wide to “address violative content.”
“We’ve also hired full-time specialists with expertise in violent extremism, counterterrorism, and human rights, and we’ve expanded regional expert teams,” YouTube said.