Tech giants Google, Facebook, Twitter, and Microsoft admit that removing terrorist or extremist content within hours of its posting presents significant technological and scientific hurdles.
Google’s general counsel, Kent Walker, is slated to address world leaders on behalf of the companies on the sidelines of the United Nations General Assembly Wednesday. Many of those world leaders are calling on the companies to develop a system to remove extremist content within one or two hours.
European leaders from France, Britain, and Italy are pushing social media companies to remove “terrorist content” from the Internet within one to two hours of it appearing, arguing that this is the window in which such material spreads most rapidly.
“We are making significant progress, but removing all of this content within a few hours — or indeed stopping it from appearing on the internet in the first place — poses an enormous technological and scientific challenge,” Walker will say in a speech on behalf of the Global Internet Forum to Counter Terrorism, according to Reuters. The group was established by the four tech giants to work together in an effort to remove extremist content.
Social media providers have faced intense pressure globally, including from the U.S., to do more to ensure terror groups like ISIS are not able to recruit and spread propaganda through their platforms.
While many of the companies have hired more staff and stepped up their monitoring of offensive content, they argue that, despite their best efforts, there is no sure-fire way to identify and remove extremist content entirely.
“There is no silver bullet when it comes to finding and removing this content, but we’re getting much better,” Walker will say. “Of course, finding problematic material in the first place often requires not just thousands of human hours but, more fundamentally, continuing advances in engineering and computer science research. The haystacks are unimaginably large and the needles are both very small and constantly changing.”
Walker will argue that more human reviewers will be needed to combat the evolving ways terror groups are utilizing social media while distinguishing extremist propaganda from legitimate material such as news reports.
A joint database was set up last year for the companies to share unique digital fingerprints they assign to extremist content, called “hashes,” which enable the companies to detect and quickly remove similar content.
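The consortium has not published the details of how that shared database works; the sketch below is only a minimal illustration of the general idea, assuming a plain cryptographic hash as the “fingerprint.” In practice, matching systems reportedly need to tolerate re-encoded or slightly altered copies, which a cryptographic hash alone cannot do, so treat this as a simplified model rather than the companies’ actual method.

```python
import hashlib

# Hypothetical shared database of fingerprints contributed by member companies.
shared_hash_db = set()


def fingerprint(content: bytes) -> str:
    """Return a hex digest serving as the content's 'hash' fingerprint.

    SHA-256 is used here purely for illustration; it only matches
    byte-identical copies of previously flagged content.
    """
    return hashlib.sha256(content).hexdigest()


def report_extremist_content(content: bytes) -> None:
    """One member company flags content and shares its fingerprint."""
    shared_hash_db.add(fingerprint(content))


def is_known_extremist_content(content: bytes) -> bool:
    """Another member company checks an upload against the shared database."""
    return fingerprint(content) in shared_hash_db


# Example: one company reports a video, another detects an identical re-upload.
flagged_video = b"...raw bytes of a flagged video..."
report_extremist_content(flagged_video)
assert is_known_extremist_content(flagged_video)
```

In this model, the value of the shared database is that a company can block a re-upload it has never reviewed itself, so long as another member has already fingerprinted the same content.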
Twitter announced Tuesday that it had suspended nearly 300,000 accounts linked to terrorism, contributing to an 80 percent drop in the number of such accounts reported to Twitter by governments in the first six months of the year compared with the last six months of 2016.