Crackdown trips up human rights groups

Social media platforms, under pressure to block extremists, are sometimes also taking down content from human rights advocates and other legitimate sources, according to a new report.

“Caught in the Net: The Impact of ‘Extremist’ Speech Regulations on Human Rights Content” says social media companies have responded to pressure to block extremists “with overbroad and vague policies and practices that have led to mistakes at scale that are decimating human rights content.”

The report was co-authored by two human rights groups, Syrian Archive and Witness, along with the Electronic Frontier Foundation, a nonprofit group that advocates for privacy and free speech rights online.

The report cites examples of content blocked on Facebook despite having no connection to terrorism or extremist groups: satirical commentary targeting Hezbollah, posts by a group advocating independence for the Chechen Republic of Ichkeria, and posts calling for an independent Kurdistan. YouTube, the report found, censored documentation of conflicts in Syria, Yemen, and Ukraine.

The report comes at a time when social media companies are under heavy pressure to weed out terrorist, white nationalist, and other extremist content. In April, the European Parliament voted to require internet companies to remove terrorist content within one hour of receiving an order from authorities. The Parliament will negotiate the final form of the regulation with the Council of Ministers.

Also in April, during a House Judiciary Committee hearing on white nationalism, witnesses from the Lawyers’ Committee for Civil Rights Under Law and the Anti-Defamation League called on social media platforms to improve their efforts to flag white nationalist content.

Social media is a “driving force” for a recent rise in white nationalist activity and violence in the United States, said Eileen Hershenov, senior vice president for policy at the Anti-Defamation League. Fringe social media sites such as 8chan and Gab “act as echo chambers for the most virulent anti-Semitism and racism and act as active recruiting grounds for potential terrorists,” she added. “These platforms are like round-the-clock, digital white supremacist rallies.”

Gab took issue with Hershenov’s description of the site. “Gab is an around-the-clock, digital version of the United States of America,” said founder Andrew Torba. “Our policy is to protect all political speech that is protected by the First Amendment. We protect American conservatives, Chinese and Pakistani dissidents, and anyone else who has something to say that powerful interests or lobby groups want silenced.”

“If the ADL has an issue with something a very small subset of our nearly 1 million users are saying, they should create a Gab account and try winning the argument instead of closing the venue,” he added.

Efforts by social media companies to counter “vitriolic” racist content have fallen short, committee Chairman Jerry Nadler, D-N.Y., said at the April hearing.

Weeks later, in early June, YouTube announced it would ban white supremacist and other discriminatory content.

One day after the House hearing, Republican members of the Senate Judiciary Committee complained that big tech companies were silencing conservative commentators online. “Over and over again, I’ve heard from Americans concerned about a consistent pattern of political bias and censorship on the part of Big Tech,” said Sen. Ted Cruz, R-Texas.

Cruz said Congress should consider repealing Section 230 of the Communications Decency Act, which shields websites from lawsuits over content posted by their users. The federal government should also consider antitrust action against the largest tech companies, he said.

Social media companies should also be subject to fraud charges when they fail to present content in an unbiased manner, Cruz added.

But government pressure on social media companies can lead to unintended consequences, said Jillian York, co-author of the recent Electronic Frontier Foundation report. “We’re definitely concerned about the pushes from policymakers in Europe, the U.S., and elsewhere,” she said. “All too often, regulation aimed at extremist content is constructed in such a way that it deputizes platforms to make determinations about who is or isn’t an extremist, and as we’ve seen, companies aren’t equipped to do this consistently or fairly.”
