There is a well-meaning instinct in public policy that the best way to keep children safe from new technology is to build a wall. As a father of three, I understand that instinct. I practice caution at my own kitchen table, screening my children’s apps and using AI chatbots only alongside them.
But what works for a single household often makes for disastrous national policy. This is the fundamental flaw in the GUARD Act, Sen. Josh Hawley’s (R-MO) proposal to restrict minors’ access to AI companions and mandate age verification for chatbots.
The bill, which the Senate Judiciary Committee is considering on Thursday, responds to heartbreaking concerns about digital safety that deserve serious attention, but Congress must avoid the temptation to legislate for tomorrow’s technology based on yesterday’s failures while usurping parental rights in the process.
THE GUARD ACT TREATS EVERYONE LIKE A KID
The GUARD Act attempts to differentiate AI “chatbots” from “companions,” but its definition of the latter swallows the former. For general chatbots, the bill requires all users to provide government-issued identification or other “verifiable” age data before they can even log in. For “AI companions,” the bill goes further, flatly prohibiting use by anyone under 18. The problem lies in the bill’s broad definition of a companion: any tool designed to provide “adaptive, human-like responses” or facilitate “interpersonal or emotional interaction.”
Because modern AI is inherently conversational and adaptive, this definition is wide enough to catch almost any useful tool a student might use today. A math tutor AI that offers encouraging feedback, a language bot that practices conversational Spanish, or a creative writing coach could all be legally classified as a “companion.” With fines of up to $100,000 per violation, companies will not split hairs — they will simply not offer those beneficial services at all.
Even if enforcement were confined to what Hawley and his cosponsors intend, a de facto ban would still cut teens off from the safest systems. Only responsible, U.S.-based companies will comply with these mandates, walling off teens entirely to avoid liability. Meanwhile, more dangerous foreign services beyond the reach of our laws, unmoderated open-source models, and fly-by-night platforms will become more attractive simply because they are accessible. By banning children from the safer AI tools, Congress isn’t protecting them; it is effectively inviting them to use unregulated AI.
Beyond the safety paradox, the bill creates a massive privacy and free speech problem for the rest of the country. To “protect” children, the GUARD Act would require virtually every adult American to hand over sensitive identification data to third-party vendors just to use these tools. This normalizes a “papers, please” digital culture and creates a massive honeypot of identity data for hackers to target; age verification systems have already been breached repeatedly. Age verification also imposes a burden on speech.
As FIRE pointed out in its letter to the Senate Judiciary Committee, the GUARD Act would restrict “AI design and user speech” as well as “access to, and anonymous use of, AI.” These are only brief summaries of the privacy and speech problems the bill would impose.
Another point often missed in the rhetoric is specificity about which model, and how dated a model, caused a given harm. Policy discussions often occur months to years after an incident, making it especially difficult to regulate child safety amid rapidly advancing AI.
For example, AI models are improving in their ability to recognize and respond to conversations that could be inappropriate for minors. KORA, a new benchmark for child safety, notes a score improvement from 37% for ChatGPT 4o, released in May 2024, to 71% for ChatGPT 5.4, released in March 2026. (ChatGPT 5.5 was released April 23 and has not yet been scored.)
If these trends continue, it is possible to foresee a world in which AI systems provide a safer environment for minors than any regulation could. Foreclosing that future, even as all major AI chatbots regularly ship improvements and parental controls, could make the future more dangerous for our kids, not less.
Ultimately, this digital gatekeeping targets the sovereignty of the family. The GUARD Act removes the parent — the person who actually knows the child — from the equation and replaces parental discretion with a federal mandate. This is particularly damaging because AI literacy is no longer a luxury, but a foundational skill for the 2026 economy.
CONGRESS WANTS TO PROTECT KIDS ONLINE. ITS SOLUTION MAKES THEM MORE VULNERABLE
Denying children access to responsible, supervised AI tools doesn’t protect them. Rather, it handicaps them in a future that will demand these skills and removes the possibilities for all the ways AI can help children and teens. The goal should not be less AI for kids — it should be safer AI.
If Congress wants to protect children, it should focus on helping parents and companies use and build better, safer tools, not pass a law premised on outdated technology that violates constitutional rights and would ultimately create a more dangerous future for our children.
Taylor Barkley is the Director of Federal Government Affairs with the Abundance Institute.
