Facebook unveils new rules for livestreaming after Christchurch attack

Facebook is rolling out new restrictions for its livestreaming service in the wake of the deadly attack at a mosque in New Zealand, which was broadcast on the platform and prompted calls for the company to do more to swiftly identify and remove violent content.

The company announced that, starting Wednesday, it would implement a “one strike” policy under which users who violate its “most serious policies” will be blocked from using Facebook Live for a set period, such as 30 days.

The Menlo Park, Calif.-based company did not specify what it considers its “most serious” policies, but pointed to its “Dangerous Organizations and Individuals” policy as one example. Under that section of its Community Standards, Facebook does not allow “any organization or individuals that proclaim a violent mission or are engaged in violence,” including those involved in terrorist activity, organized hate, and human trafficking.

Under its previous rules, if a user posted content that ran afoul of Facebook’s Community Standards, the post was removed. If the user continued to post harmful content, he or she was prohibited from using Facebook for a certain time frame. In some instances, the user was banned altogether.

“We recognize the tension between people who would prefer unfettered access to our services and the restrictions needed to keep people safe on Facebook,” Guy Rosen, Facebook’s vice president of integrity, said in a blog post. “Our goal is to minimize the risk of abuse on Live while enabling people to use Live in a positive way every day.”


Facebook also announced it will be investing $7.5 million in new partnerships with three U.S. universities to research new techniques to detect manipulated images, video, and audio posted on the platform.

“This work will be critical for our broader efforts against manipulated media, including deepfakes (videos intentionally manipulated to depict events that never occurred),” Rosen said. “We hope it will also help us to more effectively fight organized bad actors who try to outwit our systems as we saw happen after the Christchurch attack.”

Facebook found itself at the center of controversy again after the shooting in Christchurch, New Zealand, which left 50 dead. The gunman livestreamed the attack on Facebook, and while the 17-minute video was flagged by a user, it remained up for roughly 30 minutes before it was finally removed.

Facebook then said it took down 1.5 million copies of the video in the first 24 hours, of which 1.2 million were blocked at upload.

Since then, the tech giant has been facing urgent calls to strengthen its mechanisms for identifying and removing harmful and violent content.

The new policies were announced the same day as the unveiling of the Christchurch Call, which commits governments and tech companies to combating terrorist and extremist content online.

Thus far, 17 countries, the European Commission, and eight tech companies, including Amazon, Facebook, Google, and Twitter, have signed on to support the Christchurch Call, which was organized by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron.

The White House, however, said in a statement that while the U.S. “stands with the international community in condemning terrorist and violent extremist content online in the strongest terms” and agrees with the “overarching message” of the Christchurch Call, it is “not currently in a position to join the endorsement.”
