While Facebook, Twitter, and Google’s YouTube all took down videos of deadly shooting sprees at two New Zealand mosques, they weren’t fast enough to keep the graphic footage from going viral.
The spread of footage from attacks that left 49 people dead is the latest illustration of how digital platforms, which let anyone with a smartphone instantly reach an audience of millions, can be wielded by terrorist and hate groups. The episode is prompting immediate scrutiny of available safeguards and how readily they can be deployed.
“Reports say Facebook needed 17 minutes to remove the livestream. It is now emerging that the video is still circulating online,” said David Ibsen, director of the international advocacy group Counter Extremism Project. “The technology to prevent this happening is available. Social media firms have made a decision not to invest in adopting it.”
The livestream footage from the Al Noor mosque in Christchurch appeared to originate from one of the gunmen, who left behind a 74-page document on social media in which he said he hoped to survive the attack to better spread his ideas, according to the Associated Press. Three men and a woman have since been arrested, according to New Zealand Police Commissioner Mike Bush.
Explosives found on one vehicle were disarmed, Bush added, and initial checks found none of the suspects’ names were included on terrorism watch lists in New Zealand or Australia.
Twitter said it suspended an account associated with the attack and is working to remove the video content from its platform. The San Francisco-based company has detailed processes that use both staff and technology to handle such emergencies.
“Our hearts go out to the victims of this terrible tragedy,” said a YouTube spokesperson. “Shocking, violent and graphic content has no place on our platforms, and is removed as soon as we become aware of it. As with any major tragedy, we will work cooperatively with the authorities.”
Violent content without any news context is prohibited on YouTube, and the company said it’s working quickly to review and remove any new uploads of the New Zealand video.
Facebook, which didn’t immediately respond to a message seeking comment, has invested heavily in the past year in systems to block harmful content and prevent its spread.
The Menlo Park, Calif.-based company’s policies ban anything that “glorifies violence or celebrates the suffering or humiliation of others,” including images that show visible internal organs and charred or burning people, founder Mark Zuckerberg has said.
Last year, the company dedicated a team to identifying and deleting content that promoted violence against Muslims in Myanmar, though it was criticized for acting too slowly.
“Over the course of many years, as we were building the business, we didn’t put enough resources and enough investment into preventing harm,” Chief Operating Officer Sheryl Sandberg said at a Morgan Stanley conference in late February. “We didn’t foresee some of the ways the platform could be abused, and that’s on us.”
Going forward, she said, Facebook will keep making “big investments to try to prevent harm on the platform and see better around corners to prevent future harm.”
Still, white nationalists and other extremists rely on social media, the Counter Extremism Project’s Ibsen said. Inaction “in addressing this problem serves to perpetuate it,” he added. “The online spread of extremist ideologies can no longer be ignored and should be tackled at the root.”
