A group of the world’s biggest tech companies, including Facebook, Google, and Twitter, joined forces Thursday to establish an industry framework for handling harmful content and conduct on their platforms.
The new coalition, known as the Digital Trust & Safety Partnership, will commit its member companies to making their rules for user content clearer and more consistent and to ensuring their policies are effective at limiting harmful content.
The move comes as social media giants face increased scrutiny from lawmakers across the political spectrum, who are considering regulations on content moderation and tech company liability.
Democrats and Republicans in Congress are weighing bills aimed at reforming Section 230 of the Communications Decency Act, a provision that protects social media companies from liability for content posted by their users.
Both parties say they are interested in working together to reform Section 230, arguing that social media giants wield too much power over content moderation and that the law gives them undue protections.
The member companies will now be expected to conduct internal and external reviews of their online safety practices and to jointly publish a "state of the industry" report later this year outlining their efforts to reduce harmful content, according to a press release.
Other top tech companies involved in the partnership include Microsoft, Reddit, Discord, Pinterest, Shopify, and Vimeo.