Meta will automatically bar teenagers’ Instagram and Facebook accounts from viewing content that promotes self-harm, along with other explicit material, a change made preemptively amid a push by states and federal lawmakers to impose new regulations on the tech giant.
The social media company announced Tuesday that it plans to move the accounts of users under 18 to its most restrictive content settings in the next few weeks, a policy meant to shield them from content that many believe is damaging to teenagers.
The changes come as the company faces litigation from states over the harm its platforms have allegedly done to teenagers, as well as federal legislation that would require new protections for minors.
“Meta is evolving its policies around content that could be more sensitive for teens, which is an important step in making social media platforms spaces where teens can connect and be creative in age-appropriate ways,” Rachel Rodger, an associate professor in the Department of Applied Psychology at Northeastern University, said in a blog post published by Meta announcing the changes.
Users aged 16 or under will be restricted by the Sensitive Content Control setting on Instagram and the Reduce setting on Facebook. The settings will bar those users from seeing or searching for content that the platforms deem harmful to them. If a friend posted something about dieting, for example, the post would be hidden from all adolescent users. The user would still be able to see posts about recovering from an eating disorder, however. It’s unclear how Meta’s content moderation software will draw such distinctions.
The new policy will begin going into effect this week and will apply to all under-18 accounts on both platforms.
Over 40 states sued Meta in the U.S. District Court for the Northern District of California in October, alleging that the company hid the extent of the damage its apps had caused to teenagers by promoting addictive behavior and harmful content. New Mexico filed a separate suit against Meta in December, accusing it of hosting a “marketplace of predators” and failing to do enough to crack down on the sale of child sexual abuse material.
At least four states have attempted to restrict teenagers’ access to social media by requiring platforms to verify users’ ages through copies of IDs or other means. The tech advocacy group NetChoice filed suits against age verification laws in California, Arkansas, Ohio, and Utah and has obtained preliminary injunctions in the first two states.
The policy update was announced weeks before several Big Tech CEOs, including Meta CEO Mark Zuckerberg, are scheduled to appear before the Senate Judiciary Committee to testify about how their platforms handle teenage users.
Sen. Marsha Blackburn (R-TN), a top congressional critic of Meta, dismissed the changes. “Make no mistake: last minute announcements of new kids online safety features by tech companies are nothing but a distraction from historic efforts to hold them accountable for the young lives they have taken,” Blackburn posted on Tuesday.
Meta is “failing to address the online safety issues,” and the policy changes are just to provide cover at the Jan. 31 hearing, Sen. Richard Blumenthal (D-CT) told the Washington Examiner.
Blackburn and Blumenthal are the authors of the Kids Online Safety Act, a bill that would require platforms to take steps to prevent the promotion of suicide, substance abuse, and sexual exploitation to minors. It would also require social media companies to implement controls for users, including options for limiting screen time, restricting addictive features, and limiting access to user profiles. The bill advanced out of committee but has yet to receive a vote on the floor.