Not everyone may know about Section 230, but it plays a huge role in shaping the online world. It's a short law; the most important part is just 26 words long. And it was enacted in 1996 because Congress was worried that the internet would become a cesspool of harmful content if internet companies could be sued for anything their users posted.
In 1995, a New York state court had held that Prodigy, an early internet company, could be sued for defamation for something that one of Prodigy's users had posted. Because Prodigy moderated its users' posts, the court reasoned, the company had made itself liable for posts that its users wrote whenever Prodigy failed to remove them.
To eliminate this court-created disincentive to moderate content, Section 230 states that an "interactive computer service" (think Facebook, Google, or Twitter) can't be held liable for publishing other people's content. Similarly, a service can't be held liable for its decisions on what content to remove, leaving companies free to moderate as they see fit.
Thanks to Section 230, websites can afford to offer forums for freewheeling speech. Want to blow the whistle on misconduct? Criticize someone powerful? Complain about mistreatment? Engage in vigorous, even rude debate? Section 230 lets you do it, because the companies that publish what you have to say don't need to fear getting dragged into a lawsuit. It's no exaggeration to say that the open, creative, disruptive internet we know today wouldn't exist without Section 230.
In recent years, tech companies have developed increasingly sophisticated ways to connect people with user content. Today, companies big and small rely on algorithms to help users find speech relevant to their interests. But these algorithms, which are integral to how modern users experience the internet, are now under attack at the Supreme Court. A group of victims of terrorism has sued Google, the parent company of YouTube, alleging that YouTube's algorithms aided terrorist recruitment by helping would-be terrorists find radicalizing videos. The plaintiffs argue that YouTube's video "recommendations" are distinct from publishing and thus unprotected by Section 230. Both the district court and the Ninth Circuit rejected that argument, but the Supreme Court is now considering the claim.
Cato, joined by the R Street Institute and Americans for Tax Reform, has filed an amicus brief urging the Supreme Court to affirm that these algorithms are protected by Section 230. In the brief, we argue that the lower courts have developed an accurate test, grounded in the text of Section 230, for determining whether the actions of an interactive computer service are covered by the statute. We then show that YouTube's content-recommendation algorithms meet that test in full. Finally, we explain that the rival test urged by the plaintiffs is unsupported by the statute's text and is not even backed by the cases the plaintiffs cite.
The Court should reject a theory that would atextually limit Section 230 and make the internet less open, less free, and less dynamic.
This article originally appeared in the Cato at Liberty blog and is reprinted with kind permission from the Cato Institute.