Two years after suspending then-President Donald Trump's account following the events at the Capitol on January 6th, Meta (which owns Facebook and Instagram) is weighing whether to allow the former president to return to its platforms. Regardless of its decision, restoring public trust and protecting free speech will require Meta to curtail the subjective, biased enforcement of its content moderation policies.
Meta is a private company entitled to set content moderation policies as it chooses. Still, many prominent civil rights organizations, including the ACLU, recognize that "Facebook (Meta) exercises quasi-monopoly power over a critical forum in our marketplace of ideas." If one of the world's most powerful social media companies can censor a sitting president for arbitrary reasons, imagine the implications for the free speech of less powerful or less visible individuals and groups.
The company justified deplatforming Trump by citing its “Dangerous Individuals and Organizations” policy, which prohibits content that the company determines is supportive of violence or hateful ideologies. On its face, this may appear reasonable. But the policy’s expansive and vague language opens the door to abuse. Specifically, its ban on “praise” or “support” for certain groups or events, though perhaps well-intentioned, has serious unintended consequences.
For one, the policy has had the effect of silencing speech on important issues. When worshipers posted about clashes with Israeli police at the Al Aqsa mosque in Jerusalem, Meta censored posts with the hashtag #AlAqsa, tagging them as promoting a terrorist organization. In another case, Facebook deactivated the accounts of journalists who were posting content critical of Israel during recent Palestinian-Israeli tensions.
The problem, according to the ACLU, is that social media giants don't have transparent definitions for terms like "support," "terrorism," or "violent extremism." That leads to subjective rules and biased enforcement. But the problems actually go deeper. It's not just that terms like "extremism" and "hate" lack transparent definitions; it's that they're impossible to define without siding with one viewpoint over another. No matter how you slice it, these terms invite arbitrary application based on the viewpoint of whoever wields them.
These terms are hopelessly vague and have been weaponized in recent years to justify censorship of controversial ideas across the political spectrum, from abortion and gun rights to sexuality. To escape this quagmire, Meta and other social media platforms should take steps to align their policies with First Amendment case law. That starts with eliminating vague terms like "extremism" and "hate" from content policies and restricting only the narrow categories of expression that courts have historically defined as unprotected speech.
One tool, the Viewpoint Diversity Business Index, which I helped design, flags terms that are likely to lead to viewpoint-based restrictions on users' speech and provides recommendations for how companies like Meta can rein in overbroad restrictions on content.
Meta would be wise to reform its policies to avoid further eroding its public credibility. Doing so might also stave off increasing calls for regulation.
But aside from the reputational concerns at play, platforms like Meta must recognize their importance to public discourse and the consequent moral responsibility they have to respect civil liberties.
Daniel Cochrane is an advocate for free speech and religious liberty, and a contributor at Young Voices.