UK censorship firestorm spotlights Washington’s fight over social media

The British government’s proposal for a new internet regulator includes a laundry list of ways digital platforms have been misused, from spreading child pornography to recruiting for terrorist groups.

But that hasn’t been enough, even in the wake of the viral video of the March massacre at two New Zealand mosques, to forestall criticism that Prime Minister Theresa May’s plan would infringe on free expression and curb innovation in an economy already disrupted by Britain’s plans to leave the European Union.

“The threat of fines, or even prosecution, for CEOs if harmful material is posted on their platforms will radically alter the public’s ability to share content online,” said Mark Littlewood, director general of the Institute of Economic Affairs, a libertarian British think tank. “Such extreme regulations will lead to the adoption of risk-averse policies, resulting in a downturn for user experience, and more importantly, a crackdown on free speech.”

The outcry is indicative of what similar efforts might generate in the U.S., where the right to free speech enshrined in the Constitution’s First Amendment and Congress’s immunization of tech companies from liability for user content collide with efforts to ensure the internet isn’t used for sex trafficking or to promote violence.

While Britain’s traditional protection of free expression as a tool to promote democracy is more nuanced than the typical American perception of free speech as a value in and of itself, American lawmakers can still “make a very strong argument that this kind of content should not be allowed,” Ari Waldman, a professor at New York Law School, told the Washington Examiner.

The First Amendment was never intended to allow anyone to say anything, he explained, and the Supreme Court has allowed laws barring the use of fighting words or comments that might spark deadly panic, such as shouting “fire” in a crowded theater.

More problematic is how a law regulating internet content would work, he said. Lawmakers would have to address questions such as whether companies must use artificial intelligence to prevent certain items from being posted at all or if they will simply be required to take them down afterward.

Such a bill might prompt tech companies to “err on the side of taking too much down because they wouldn’t want to run afoul of the law,” Waldman said. “The question should be, ‘What is the right way to craft a law such that permissible content wouldn’t be taken down and that it would be specific enough to put people on notice of what’s OK and what’s not?’ It’s really hard to write a law like that.”

May’s proposal, outlined in a white paper published on Monday, seeks to impose a so-called duty of care on digital platforms to block harmful content, with efforts overseen by a regulatory agency that would be given broad enforcement powers ranging from levying fines to the possible criminal prosecution of individual senior managers.

“Our challenge as a society is to help shape an internet that is open and vibrant but which also protects its users from harm, and there is clear evidence that we are not succeeding,” Digital, Culture, Media and Sport Minister Jeremy Wright told the House of Commons on Monday. The 8,000 sexual offenses against children reported to British police in 2017 that involved an online element are just one example, he said.

“It can no longer be right to leave online companies to decide for themselves what action should be taken, as some of them are beginning to recognize,” he said. “The era of self-regulation of the internet must end.”

The specifics of the government’s plan will be considered during a 12-week consultation period, Wright’s office said in a statement. One alternative already being debated is tasking an existing agency with the new duties, a strategy that backers say might give the effort more credibility and make it easier to implement.

“The issues raised in today’s white paper are of real importance to us and the people that use our services,” said Claire Lilley, public policy manager for search engine giant Google, which also owns the video-sharing platform YouTube. “To help overcome these issues, we haven’t waited for regulation; we’ve created new technology, hired experts and specialists, and ensured our policies are fit for the evolving challenges we face online.”

In the aftermath of the New Zealand mosque shootings, which left 50 people dead and prompted a dramatic increase in scrutiny of how tech platforms manage content, YouTube said it removed tens of thousands of videos and terminated hundreds of accounts created to promote or praise the shooter. The struggle to keep up with postings of footage was intense, with new uploads at times arriving as fast as one per second.

The suspect, police said, tried to win digital fame for himself and his ideas by livestreaming video from one of the attacks as well as posting an 87-page manifesto.

In response, New Zealand’s prime minister promised to examine the role of social media in the attacks, and Australia, the suspect’s home country, called on the G-20 nations to consider new rules for the industry at its meeting in Japan this year. In the U.S., the House Homeland Security Committee summoned executives of Facebook, Google, Twitter, and Microsoft to Washington in late March to explain their handling of materials the suspect posted.

“We still need more information,” Chairman Bennie Thompson, D-Miss., said after the briefing. “In the coming months, I will continue this effort by engaging with other tech and social media companies and also nonprofit groups that have expertise in domestic terrorist movements and their proliferation online.”

Efforts to keep dangerous content off the internet are most successful when companies, governments, and communities work together, said Google’s Lilley. The firm is eager to scrutinize the details of May’s plan and work with the British government “to ensure a free, open and safer internet that works for everyone,” she said.

Facebook, which has invested heavily in the past year in systems to block harmful content and prevent its spread, said it took the mosque attacker’s video down within minutes of being contacted by New Zealand police. Including replays watched afterward, the video was seen about 4,000 times before its removal, said Chris Sonderby, the Menlo Park, Calif.-based company’s deputy general counsel. In the first 24 hours following the attack, Facebook removed 1.5 million more copies of the video, including 1.2 million that were blocked at upload.

Facebook’s policies ban anything that “glorifies violence or celebrates the suffering or humiliation of others,” including images that show visible internal organs and charred or burning people, founder Mark Zuckerberg has said. Last year, the company dedicated a team of people to identify and delete content that promoted violence against Muslims in Myanmar, though it was criticized for acting too slowly.

“We have responsibilities to keep people safe on our services, and we share the government’s commitment to tackling harmful content online,” Rebecca Stimson, Facebook’s head of U.K. public policy, said after the white paper was published.

“New regulations are needed so that we have a standardized approach across platforms and private companies aren’t making so many important decisions alone,” she said. Balancing those goals with the need to protect free speech and support innovation is challenging, Stimson conceded, and the company wants to work with British regulators to get it right.

The Open Rights Group, a London-based organization that advocates for privacy and free speech, is worried that May is already going too far.

“We are skeptical that state regulation is the right approach,” said Executive Director Jim Killock. “The government is using internet regulation as a blunt tool to try and fix complex societal problems. Its proposals lack an assessment of the risk to free expression and omit any explanation as to how it would be protected.”
