In the hours before a gunman opened fire on worshipers at a Southern California synagogue, killing one and wounding three, an anti-Semitic manifesto was uploaded to an online message board.
The screed, posted under the name of the 19-year-old shooting suspect, John Earnest, praised the perpetrators of the recent attacks at a mosque in New Zealand and a synagogue in Pittsburgh while advocating for white supremacy.
Shared on 8chan, the post drew the attention of other users of the far-right message board, who reported it to the FBI just minutes before the shooting at Chabad of Poway. In the aftermath, it heightened scrutiny of whether the site and other social media platforms, largely untouched by government regulation, are doing enough to combat the spread of abusive content and extremism.
New Zealand’s prime minister promised earlier this year to examine the role of tech platforms in extremist behavior after livestreamed video from mass shootings at two Christchurch mosques went viral, and Sri Lankan authorities temporarily shut down social media sites in that country after Easter Sunday bombings killed more than 300. The British prime minister, meanwhile, has proposed creating a new internet regulator, and members of Congress have grilled Silicon Valley executives about harmful content.
“There’s no question that the string, if you want to follow the trail, leads back to social media,” said Rabbi Abraham Cooper, director of global social action at the Simon Wiesenthal Center. “It’s definitely the front lines in many ways. We have now the emergence of a new phenomenon of domestic lone wolves, and that means it could be someone who is an Islamist, someone who is a neo-Nazi, a white supremacist.”
The 17-minute video posted on Facebook by the Christchurch shooting suspect was flagged by a user but remained on the platform for roughly 30 minutes before it was taken down. The Menlo Park, Calif.-based company subsequently removed 1.5 million videos of the massacre in the first 24 hours, 1.2 million of which were blocked at upload.
The man charged with opening fire on congregants at the Tree of Life synagogue in Pittsburgh in October, killing 11, made anti-Semitic comments on the platform Gab beforehand, authorities said. “I can’t sit by and watch my people get slaughtered. Screw your optics, I’m going in,” Robert Bowers wrote, according to law enforcement.
“Here’s what social media and the internet provide,” Cooper said. “It provides validation, it provides encouragement, and even though we use the term ‘lone wolf,’ it also provides a sense of community. There’s no question of the copycat nature of what’s been taking place. You don’t need mass movements anymore.”
Tech companies have “an individual and collective responsibility to do what we didn’t do with the [Islamic State] online,” he added. “ISIS lost on the battlefield and won on the internet because no one went to war against them.”
Some of the most prominent social media companies are already taking steps to address the issue.
Twitter has purged hundreds of thousands of accounts for spreading terrorism-related content and rolled out new policies in 2017 to crack down on hate speech and violence. Facebook, meanwhile, relies on artificial intelligence to block harmful content and employs 15,000 people worldwide to review it.
Silicon Valley acknowledges more needs to be done, and Facebook CEO Mark Zuckerberg has called for the government to enact new regulations. But while brand-name platforms often bear the brunt of congressional attention, lesser-known sites like Gab and 8chan can be hotbeds of extremism.
“For those companies that don’t try to pretend that they want to address issues of hate on their platform, like Gab or 8chan, this is where we have to ask ourselves, ‘If they are not going to take steps to disrupt extremists and hate on their platforms, is it time for policymakers to step in and regulate those platforms?’” said Oren Segal, director of the Center on Extremism at the Anti-Defamation League.
Often, users on 8chan and Gab will celebrate extremist violence perpetrated by a fellow user, which creates a “disgusting cycle,” Segal said.
“Those platforms are essentially acting as digital hate groups with 24-7 extremist rallies around the globe every single day,” he said. “It’s not unreasonable for users and even nonusers to expect that companies will do everything they possibly can to prevent people from reaching, recruiting, and radicalizing extremists.”
Indeed, both Gab and 8chan were implicated in the rise of white nationalism during a House Judiciary Committee hearing in April, where representatives of Facebook and Google also testified.
Rep. Max Rose, D-N.Y., acknowledged in an interview with MSNBC on Monday that harmful speech on social media will migrate elsewhere if the big players succeed in rooting out such content. Still, they “have a responsibility that they are not perfectly fulfilling right now,” he said.
“We’ve got to stay vigilant,” Rose added. “This is a fight that we’ll be conducting for the rest of our lives. But it’s not just a threat from overseas. There’s a threat now of domestic terrorism that we cannot afford to ignore any longer.”
Combating the threat successfully requires a collective effort, said Cooper, of the Simon Wiesenthal Center.
“It’s not just the best-known names,” he said. “It’s about a culture, and Silicon Valley culture, which takes billions, they now have to put their heads together like only they can and help us put these extremists back in the margins.”