Why Facebook can’t be ‘fixed’


Facebook expanded rapidly. New markets, new users, and new countries came online, and the company, focused on growth, was all too happy to watch its platform swell. That growth brought new problems, and Facebook, under increasing pressure, faced two starkly different options: moderate content or leave the platform an unmoderated free-for-all.

In the end, Facebook decided to moderate content. Bad press over abuses of the platform by violent and malicious actors had taken its toll. But actually following through, by cutting posts, groups, and users that Facebook thought might be linked to violence, proved far more difficult than the company seems to have anticipated.

As documented in a new investigation from the New York Times, based on more than 1,400 pages of leaked company documents outlining how Facebook decides what stays up and what comes down, the social media platform has failed to live up to that goal.

Given the rapidly evolving geopolitical environments where the company operates and the wide range of languages used on the platform, that’s not entirely surprising. As the Times investigation found, the supposed moderation often failed:

Moderators were once told, for example, to remove fund-raising appeals for volcano victims in Indonesia because a co-sponsor of the drive was on Facebook’s internal list of banned groups. In Myanmar, a paperwork error allowed a prominent extremist group, accused of fomenting genocide, to stay on the platform for months. In India, moderators were mistakenly told to take down comments critical of religion.


But those real challenges aside, once Facebook decided to moderate content, it also wanted to do so cheaply. The expertise needed for high-quality guidelines, translation, and revisions is hard to find and expensive. The result was outsourcing moderation to third-party firms ill-prepared to evaluate high volumes of content and lacking the necessary historical, political, and linguistic understanding.

As the report notes, for example, at least one employee described an office-wide policy of simply approving content in a language that no one on staff understood. That meant even content violating Facebook's own carefully crafted policies stayed up, likely contributing to ongoing reports of violence-promoting content being shared on the platform.

Fixing those problems and paying more for moderators proficient in diverse languages and well-versed in current politics would likely be so costly as to overburden a company that never planned on playing global speech police. Worse, even with such moderation, deciding what should be allowed to stay, while complying with Facebook's own community guidelines and with existing national and international laws, would still be an all but impossible task.

So why did Facebook pick this more complex and deeply flawed path of moderation in the first place?

The decision most likely came down to money. Being associated with political disinformation or ethno-religious violence, it turns out, does not sit well with investors or users.

That reality leaves Facebook broken with no easy fix. Leaving bad content up hurts the company's bottom line, just as paying more for high-quality moderation would. The middle path of cost-effective moderation, however, has proved perhaps the worst of all the alternatives: dangerous content stays up, and a hodge-podge of policies and enforcement raises real questions about Facebook's power and influence.
