Solving the social media standoff

President Trump’s love-hate relationship with Twitter runs through his presidency, and the feeling goes both ways. Trump loves that Twitter lets him talk directly to the people over the heads of the media but hates that the microblogging site’s owners lean left and openly criticize him. Likewise, Twitter loves the exposure Trump’s tweets have given the site since 2015 but hates that one of its most popular users stands for political sentiments it abhors.

Those tensions boiled over last month when Twitter took it upon itself to fact-check one of the president’s tweets, appending a link below his text that led users who clicked it to a page contradicting his claims. Trump reacted as expected, criticizing Twitter on its own platform and calling for the repeal of Section 230 of the Communications Decency Act. That would cause more harm than good, but it does get at an important point: After 24 years, Section 230 is in need of an update.

Section 230 was essential to the development of a free and prosperous internet. The heart of that section is contained in one sentence: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The sentence, §230(c)(1), creates broad immunity from defamation suits for internet services that provide content from third parties. Although it says nothing about neutrality, the effect was to create a safe harbor for service providers who wanted to be neutral. (Sites with a political bent are also protected, just not for the things over which they exercise editorial control.)

Trump was justifiably disturbed by Twitter’s actions, which were largely unprecedented. The law drew a line between providers and publishers, and Twitter’s move pushed the company closer to that line. As Section 230 is written, though, the social media giant is still on safe legal ground. It has dipped a toe into editing, but the act of a user posting a tweet remains one over which it exercises no control, at least until it decides otherwise. Twitter’s edits after the fact are closer to the policing of a message board or comment section than they are to running a newspaper.

Trying to place a social media company on either side of the Section 230 divide is difficult, which shows how much the internet landscape has changed since the law was written in 1996. Social media was in its infancy then, with most of the big names in the sector today still just twinkles in their founders’ eyes. GeoCities was the closest thing to a social media site. Even LiveJournal was three years away. Congress did not deeply consider the nature of social media sites in 1996 because they effectively did not exist.

Congress has not updated the law since, which explains why legal scholars have been forced to cram social media sites into one of the two categories, neither of which accurately describes them. Twitter will argue that it is not a publisher because it does not exercise editorial control over its users. When it does delete tweets or restrict users, those actions are consistent with §230(c)(2), which allows providers to act “in good faith” to restrict access to material that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” We can disagree with Twitter’s standards for such actions (and it does tend to restrict right-wing speech more than equally objectionable left-wing tweets), but it is within the law.

Even so, Twitter’s actions (and those of other social media giants) lean hard against the law’s purpose. Section 230 was meant to protect owners of servers and websites from liability for the words of others, not only because they had no control over those words, but also because those words were not the main point of the service provided.

If a website is hosted on a server, what control does the owner of that server have over the words on the site? Or any of the other pieces of the content delivery process? Almost none. There have been rare exceptions: The message board 8chan lost its access to Cloudflare’s content delivery system when the site was linked to two mass shootings in 2019. But such interventions are vanishingly rare. Millions of websites come and go. Asking the providers of servers or internet connections to approve each new site and each change to an existing site would cause the internet to grind to a halt. Had such a legal regime prevailed in 1996, it would have crushed the internet before it could ever grow into what it is today.

Likewise, the comment sections of websites are exempt from liability for defamatory content because the sites’ owners do not approve and edit each comment. Requiring that they do so, and that they be responsible for the content of each comment they approve, would simply lead to the end of comment sections. Maybe that would be a good thing — comment sections are often terrible — but Congress judged that whatever is said there should not bring liability upon the site’s owner, and that remains the law.

But a social media site is not a server or a content delivery system or a comment section. It is a form of entity that was not considered in 1996, one in which the users’ comments are the site’s material, and the users’ interaction with each other is the site’s appeal. No one comes to a news website just to read the comments, but with Twitter, that is exactly what we do. With social media, there is no underlying content that must be protected: the platforms create nothing; they only host the discussion. If we are forced to shoehorn Twitter into one of the two Section 230 categories, it is probably correct to call it a provider, not a publisher. But it is logically more correct to say that it is neither.

That alone may not suggest that Congress should revise the law, but there is another factor, too: The social media companies are monopolies. They are an unusual type of monopoly, because the thing they monopolize is something they created and that cannot exist apart from them. There is no place to tweet except Twitter; there is no way to create Facebook posts outside Facebook. If Facebook or Twitter deletes your posts or restricts your account, that network is closed to you, and each is a network that increasingly dominates the exchange of ideas. Even beyond the market for news and commentary, access to social media, especially Facebook, can be a make-or-break proposition for businesses.

The monopoly the social media giants control is not on information itself. Many pieces of information or opinion are communicated on social media, but most of the old ways of communicating still exist. No, the monopolies are on the connections between users, the links in their networks that make them a natural medium of mass communication. Anyone can put up a blog, print a pamphlet, or even yell out the window. But saying the same thing on Facebook or Twitter puts that idea into a network that amplifies the message in a way an individual alone cannot. The traditional remedies for monopolies fail here: breaking Twitter up into half a dozen “baby Twitters” would destroy the usefulness of the network, because Twitter only works if everyone who wants to read tweets is on it. It is a natural monopoly, and shattering it would render the whole thing useless.

If your words are deleted from a comment section, it is a minor problem. You can type a new comment or, if you are banned completely, you may comment elsewhere. Likewise, if some service provider declines to host your website, it is not difficult to find another. There are no monopolies in these areas, and the individual providers control only tiny segments of their markets. But Twitter has a monopoly on tweets. There are other microblogging services — Mastodon and Parler are two examples — but no one uses them because everyone who wants to tweet is already on Twitter. And they are there because everyone else is there. No mass exodus has occurred, no matter how many alternatives are announced. Twitter controls its network so completely that most people have never heard of any alternative.

Even the courts have recognized the centrality of Twitter to the national conversation. When Trump blocked certain obstreperous users from following his account, a federal district court judge ruled that he was not allowed to do so, even though they could see his tweets in their entirety simply by logging out of their accounts. The ability to interact with the president’s tweets, full access to the network, was at the heart of the case, and the users’ exclusion from it was ruled unconstitutional. The 2nd Circuit Court of Appeals upheld the ruling. In that case, Knight First Amendment Institute v. Trump, the court characterized Twitter as a public forum, at least as far as politicians’ tweets are concerned. The reasoning behind that ruling is dubious, but it is now the law.

So we have a service owned and administered by a private monopoly that cannot practically be broken up. That is unusual in the history of the internet, but it has precedent in other fields. We call such companies public utilities, and we regulate them differently than other businesses. Cellphone providers are not utilities or even monopolies; you can switch among four major carriers easily enough if you are dissatisfied with one of them. On the other hand, your local water, sewer, or gas companies (if they are not publicly owned in your area) are utilities. Because of the power they could exercise through their monopoly control, they are among the most tightly regulated corporations in the United States. Which of these does Twitter more closely resemble?

Utility-style regulation of social media would look different from the regulation of traditional utilities. For one thing, the price of the product is not an issue: Social media companies give access to their platforms for free, making their money through advertisements. They exert their monopoly power by controlling access, not price, and the decisions they make in doing so, whether by algorithm or human action, are famously opaque.

Clarifying when tweets can be deleted or users banned, perhaps based on the general formula given in §230(c)(2), would go a long way toward equalizing access. Public oversight of that process would ensure that it is applied evenly across the political spectrum. If that sounds like government regulation of a marketplace, it is. But historically, the existence of monopolies has made some external control of commerce inevitable. The choice becomes one between market regulation by the government and market regulation by the monopoly. Under this proposal, regulation by government would lean toward more and freer speech, broadening access to the increasingly crucial platforms where the monopolies have sought to limit it.

The algorithms Facebook and Twitter use to select which items appear in your feed are also shrouded in secrecy. Regulation would not require that their favorites be exchanged for the government’s favorites; rather, it would ensure that no one’s thumb is on the scale. Users could be given the choice of an algorithm based purely on their demonstrated preferences, or of going back to the old system in which everyone they follow appears in their feed chronologically, with no promotion or demotion by the corporations. Control over what information users receive through the network would be back in their own hands.

With the added dimension of monopolistic behavior, the need for a third class of internet entity becomes clear. Section 230 was a good law, one that paved the way for the free and open internet we have today. But as monopoly actors exert never-before-seen control over that internet, it falls to Congress to update the law.

Kyle Sammin is a lawyer and writer from Pennsylvania and the co-host of the Conservative Minds podcast. Follow him on Twitter at @KyleSammin.