Mass murders inspired copycats long before social media existed, but governments around the world still say the industry must do everything in its power to keep criminals from misusing platforms that let anyone with a smartphone instantly connect with an audience of millions.
It was exactly that power that the 28-year-old man accused of shooting 50 people to death at two New Zealand mosques on March 15 sought to leverage, livestreaming video from one of the attacks and posting an 87-page manifesto. His actions prompted renewed scrutiny of how such platforms can become tools for terrorism, what safeguards are available, and how readily those safeguards can be deployed.
Not only has New Zealand’s prime minister promised to examine the role of social media in the attacks, but Australia — where the suspect lived — is also calling on the G-20 nations to consider new rules for the industry at its meeting in Japan this year. In the U.S., the House Homeland Security Committee has summoned executives of Facebook, Google, Twitter, and Microsoft to Washington to explain their handling of materials the suspect posted.
“There is no question that the ideas and language of hate have existed for decades, but the forms of distribution, the tools of organization, they are new,” New Zealand Prime Minister Jacinda Ardern told her country’s Parliament a day after the shootings.
“We cannot simply sit back and accept that these platforms just exist and that what is said on them is not the responsibility of the place where they are published,” she said. “They are the publisher, not just the postman.”
Footage from the attack, originally livestreamed on Facebook, was later uploaded to Twitter, Google’s YouTube video-sharing site, and other platforms, Rep. Bennie Thompson, D-Miss., noted in a letter to industry executives requesting a March 27 briefing before the Homeland Security Committee he chairs.
Reports that Facebook was alerted to the video by New Zealand police rather than by its own protective algorithms, and that YouTube was unable to contain a flood of reposts for about 24 hours, reveal systemic flaws in the industry’s safeguards, he said.
“You must do better,” Thompson added. “Your companies must prioritize responding to these toxic and violent ideologies with resources and attention. If you are unwilling to do so, Congress must consider policies to ensure that terrorist content is not distributed on your platforms.”
Extremists have a documented history of using social media to connect with each other, publicize their crimes, and recruit new followers, said Josh Lipowsky, a senior research analyst for the international advocacy group Counter Extremism Project, who called tech platforms’ response so far “pitifully inadequate.”
Enhanced government oversight would likely spur companies to step up their prevention efforts, which would ideally include both computer algorithms and human reviewers, he told the Washington Examiner.
“It is the social responsibility of these companies to put the public good first,” he said. “What we need to see is either for them to self-regulate or for other regulations to be imposed.”
The Homeland Security Committee briefing is a positive step, Lipowsky added, as is the pressure from New Zealand and Australia. “I hope that we will continue to see that pressure mount,” he said.
Facebook, which has invested heavily in the past year in systems to block harmful content and prevent its spread, said it took the mosque attacker’s video down within minutes of being contacted by New Zealand police. The live broadcast was viewed fewer than 200 times, said Chris Sonderby, the Menlo Park, Calif.-based company’s deputy general counsel.
Including replays watched afterward, the video was seen about 4,000 times before its removal, he added. In the first 24 hours after the attack, Facebook removed 1.5 million more recordings of the scene, including 1.2 million that were blocked at upload, Sonderby said.
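Blocking copies “at upload” implies fingerprint matching: once a confirmed copy of the video has been fingerprinted, each new upload can be compared against that fingerprint before it ever goes live. The sketch below, in Python, is a minimal illustration of the idea, not Facebook’s actual system; the function names are hypothetical, and the use of SHA-256 is an assumption for brevity, since production systems rely on perceptual hashes that survive re-encoding and cropping, which a plain cryptographic hash does not.

```python
import hashlib

# Hypothetical in-memory blocklist of fingerprints for known violating videos.
# Real systems use perceptual hashes robust to re-encoding; SHA-256 only
# catches byte-identical copies and serves here as a stand-in.
BLOCKED_FINGERPRINTS = set()

def fingerprint(video_bytes: bytes) -> str:
    """Stand-in fingerprint: SHA-256 of the raw video bytes."""
    return hashlib.sha256(video_bytes).hexdigest()

def register_violation(video_bytes: bytes) -> None:
    """Record a confirmed violating video so future copies can be rejected."""
    BLOCKED_FINGERPRINTS.add(fingerprint(video_bytes))

def screen_upload(video_bytes: bytes) -> bool:
    """Return True if this upload matches a known violation and should be blocked."""
    return fingerprint(video_bytes) in BLOCKED_FINGERPRINTS

# Once the original is registered, an identical re-upload is blocked at upload time.
register_violation(b"...video bytes...")
assert screen_upload(b"...video bytes...")
```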
Facebook’s policies ban anything that “glorifies violence or celebrates the suffering or humiliation of others,” including images that show visible internal organs and charred or burning people, founder Mark Zuckerberg has said. Last year, the company dedicated a team of people to identify and delete content that promoted violence against Muslims in Myanmar, though it was criticized for acting too slowly.
“Over the course of many years, as we were building the business, we didn’t put enough resources and enough investment into preventing harm,” Chief Operating Officer Sheryl Sandberg said at a Morgan Stanley conference in late February. “We didn’t foresee some of the ways the platform could be abused, and that’s on us.”
Going forward, she said, Facebook will keep making “big investments to try to prevent harm on the platform and see better around corners to prevent future harm.”
Violent content without any news context is prohibited on YouTube, according to a spokesperson, who said the platform removed tens of thousands of videos after the New Zealand attack and terminated hundreds of accounts created to promote or praise the shooter.
“The volume of related videos uploaded to YouTube in the 24 hours after the attack was unprecedented both in scale and speed, at times as fast as a new upload every second,” the spokesperson said. “In response, we took a number of steps, including automatically rejecting any footage of the violence, temporarily suspending the ability to sort or filter searches by upload date, and making sure searches on this event pulled up results from authoritative news sources.”
The Mountain View, Calif.-based company continues to use a digital flagging system to direct questionable content to human teams for further review. About 70 percent of the more than 8 million videos the service removed in the last three months of 2018 were originally spotted by algorithms.
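That division of labor, in which an automated scorer flags uploads and human teams make the final call, can be sketched in a few lines. The thresholds, names, and routing rules below are illustrative assumptions under a generic triage design, not a description of YouTube’s actual system.

```python
from queue import Queue

REVIEW_THRESHOLD = 0.7   # assumed: scores above this are routed to human reviewers
REMOVE_THRESHOLD = 0.99  # assumed: near-certain violations are removed automatically

review_queue: Queue = Queue()  # work queue consumed by human review teams

def triage(video_id: str, violation_score: float) -> str:
    """Route one upload given a model's estimated probability of a policy violation."""
    if violation_score >= REMOVE_THRESHOLD:
        return "removed"                    # automatic takedown, no human in the loop
    if violation_score >= REVIEW_THRESHOLD:
        review_queue.put(video_id)          # borderline content flagged for humans
        return "queued_for_review"
    return "published"

# Example: a borderline upload is flagged for review rather than removed outright.
print(triage("abc123", 0.85))  # -> queued_for_review
```

Under a design like this, the “70 percent spotted by algorithms” figure would simply be the share of removals that entered the pipeline through the automated scorer rather than through user reports.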
Twitter, which declined to comment on Thompson’s request, said earlier that it had suspended an account associated with the attack and has detailed processes that use both staff and technology to handle such cases. Microsoft received Thompson’s request and will work with him and other committee members to address their concerns, a spokesperson said.
“It is unacceptable to treat the Internet as an ungoverned space,” Australian Prime Minister Scott Morrison wrote in a March 18 letter to his Japanese counterpart, requesting that Shinzo Abe include time for international leaders to discuss terrorists’ use of the web at the G-20’s June summit in Osaka.
“We know that violent extremists use the Internet for recruitment, radicalization and to carry out their evil acts,” Morrison wrote. “That they will continue to try to use any means at their disposal does not mean governments and technology firms should abrogate their responsibility to keep our communities safe. We need to take an holistic view of these channels and the impact they can have on our communities, particularly our young people.”