Members of Congress make renewed push for proposals aimed at protecting children online

Against the backdrop of pending federal legislation and a flurry of regulatory activity at the state level, social media companies are preemptively introducing changes they hope will satisfy lawmakers and keep young users safer online.

A recent Pew Research Center study found that 42% of surveyed parents of children ages 12 and younger felt they could be doing a better job of managing their child’s screen time. The same study found that among parents who did not allow their child to use a smartphone, 88% cited concerns about content harmful to their child as a reason.

Lawmakers and social media companies are taking note.

Earlier this month, Meta Platforms, owner of Facebook and Instagram, announced that its photo- and video-sharing service will begin defaulting users under 18 into a “PG-13”-style viewing mode. Parents can opt their child out of the setting. The filtered environment hides and does not recommend posts with coarse language, certain risky stunts, and other content that could encourage harmful behavior, including some drug-related subject matter.

“This means that teens will see content on Instagram that’s similar to what they’d see in a PG-13 movie,” the company explained while announcing the most significant update to the Teen Accounts program the company launched last year, adding, “Because we know that all families are different, we’re also introducing a new, stricter setting for parents who prefer a more restrictive experience for their teen.”

Parents with national advocacy group ParentsTogether Action deliver a 100,000-signature petition to Congress in Washington, D.C., on September 12, 2024, urging lawmakers to pass the Kids Online Safety Act. (Eric Kayne/AP Images for ParentsTogether Action)

Likewise, YouTube has moved to better identify which users are minors and adjust content delivery accordingly. This summer, the Alphabet-owned company announced a twofold system for identifying users under 18, relying on both self-declared age and a machine learning age-estimation model that considers the types of videos a user searches for, the categories of content watched, and how long the account has existed.

A user incorrectly flagged as a minor can prove otherwise with a government-issued ID or a credit card and avoid the child safety settings. If the user is indeed a minor, YouTube automatically applies protections, including disabling personalized ads, turning on digital well-being tools, and adding safeguards to recommendations, such as limiting repetitive viewing of some content.

TikTok, a social media platform that is widely used and especially popular among teenagers, also recently updated its parental controls, called Family Pairing. New features allow parents to block their teenager from using TikTok during certain periods, such as at school or during family activities, and to see whom their teenager follows on the platform, who follows them, and whom their teenager has blocked.

All these efforts from industry come as lawmakers continue to press for stricter regulation of how social media platforms treat underage users.

The Kids Online Safety Act was reintroduced in the Senate this year after passing that chamber in the last Congress but stalling in the House of Representatives over First Amendment concerns. KOSA would establish a “duty of care” toward users under 18, including requirements to curtail certain content and to block personalized algorithmic recommendation systems for minors. Because the obligations are age-based, critics worry the measure would effectively necessitate age verification.

Also pending in the Senate is the Children and Teens’ Online Privacy Protection Act, known as COPPA 2.0 in reference to the decades-old Children’s Online Privacy Protection Act, which regulates data collected from children under 13. COPPA 2.0 seeks to raise the protected age to 16, ban targeted advertising to minors, and mandate options for data deletion.

Both federal efforts face an uncertain future in Congress.

There are also proposals for age verification in app stores at the state and federal levels, among other legislative efforts to restrict and regulate what children can access online. Many states have already passed measures, but several are being challenged in court.  


It remains to be seen if large social media platforms can self-regulate fast enough to stave off federal regulation.

“Social media services like TikTok, Instagram, and YouTube have competed aggressively to give parents choices on how to control their kids’ online experience,” Zach Lilly, director of government affairs for tech industry group NetChoice, told the Washington Examiner. “Government standardization threatens to end the private competition that drives safety innovation and that would be a disaster for families and kids.”

Jessica Melugin is the director of the Center for Technology and Innovation at the Competitive Enterprise Institute and a 2025 Innovators Network Foundation antitrust and competition policy fellow.
