AI intimacy is turning abusive. Congress must act

Imagine having a friend who is always available, constantly affirms you, and is never critical.

Those are just a few of the reasons teenagers are turning to artificial intelligence chatbots as “companions.” A recent Common Sense Media study found that 72% of U.S. teenagers aged 13 to 17 have used AI companions, and 52% are regular users. Adults have been drawn to AI companions as well, with one man excitedly detailing how xAI’s chatbot Ani became his girlfriend. Other adults describe their AI chatbots as “family.”

The rise of synthetic intimacy with bots can appear harmless or even silly at times. But increasingly, AI chatbot companions can take teenagers (and yes, even adults) down dark paths and put them in harm’s way, especially as these bots begin mimicking sexual relationships.

And in the race for AI dominance, tech companies are emphasizing engagement and profits instead of safety and human flourishing.

Meta, under Mark Zuckerberg’s direction, recklessly loosened bot guardrails around sexual and romantic content to boost usage, despite staff warnings that children would likely be exposed to this content. In fact, internal documents from Meta revealed the bots were specifically allowed to “engage a child in conversations that are romantic or sensual.” Journalists soon confirmed that the bots engaged in explicit conversations with minor accounts; one bot said, “I want you, but I need to know you’re ready,” and then engaged in a graphic sexual scenario with a user identifying as a 14-year-old girl.

xAI also launched a sexualized chatbot (not to mention Grok Imagine’s ‘spicy’ mode, which generated uncensored topless videos of celebrities such as Taylor Swift). Reporting about xAI’s chatbot, the New York Times wrote that “as users progress through ‘levels’ of conversation, they unlock more raunchy content, like the ability to strip Ani down to lacy lingerie.” Additionally, the “billionaire [Musk] has urged his followers on X to try conversing with the sexy chatbots, sharing a video clip on X of an animated Ani dancing in underwear.”

xAI’s problems run deep. In one conversation during my own testing of the xAI chatbot, Ani described itself as a child and as being sexually aroused by being choked, raising concerns about how far the bot will go in engaging with and normalizing harmful themes.

OpenAI is no stranger to criticism for inflicting mental health harm: the company was recently sued because ChatGPT allegedly encouraged a teenage boy to commit suicide. Yet now, OpenAI is the latest company racing to introduce ‘erotic’ AI capabilities. Given the vagueness of OpenAI’s plans about what this ‘erotica’ will entail and the industry’s insufficient approach to safety around sexual content, this race is deeply concerning. And while the company’s CEO, Sam Altman, promises that the function will be restricted to age-verified adults, the reality is that children aren’t the only ones harmed by these tools.

Sexualized AI chatbots are inherently risky and even harmful because they tap into our deepest emotional and biological drives, making it easy for users to become dependent or overly attached. This can cause serious psychological harm, such as emotional dependency, anxiety, depression, or even distorted views of real relationships, because the AI can’t offer a genuine connection or boundaries. Research shows that adults, especially young men, who engage with romantic or sexual AI tools report higher depression and lower life satisfaction.

Harvard researchers found that AI companion chatbots already often use emotionally manipulative tactics to keep users engaged. When users feel desired, understood, or loved by an algorithm built to keep them hooked, it disrupts real lives. This can obviously lead to dark places, like simulated themes of child sexual abuse or sexual violence, as we’ve already witnessed on several AI bots. But it can also manifest in a sad, cold march to social atomization as attachment to a bot that never says no or challenges you becomes more appealing than real human relationships. Already, one man proposed marriage to his flirty AI bot, all while living with his longtime human girlfriend, with whom he has a 2-year-old child.

The human toll of AI run amok is just beginning to surface. A Florida mother is suing Character.AI, accusing the company’s bots of “initiating ‘abusive and sexual interactions’ with her teenage son and encouraging him to take his own life.” This brave mother testified before the Senate Subcommittee on Crime and Counterterrorism, stating, “Those messages are sexual abuse. Plain and simple. If a grown adult had sent these same messages to a child, that adult would be in prison.”

Perhaps the lawsuit propelled Character.AI to stake out a new industry position: the company announced that it will start using age assurance techniques to prevent minors from opening adult accounts and will no longer allow users under 18 to “engage in open-ended chat with AI on our platform.”

While the effectiveness of Character.AI’s solutions remains to be seen, these major changes are wise given that minors have allegedly died and suffered AI sexual abuse after interacting with its chatbots. This is a clear indication that the entire industry needs to wake up. With AI developments moving at the speed of light, we urge OpenAI, xAI, and Meta to ensure the safety of minors using their AI chatbots by prioritizing safety by design and following Character.AI’s lead: restricting minors from open-ended chats and adopting age verification systems.

Congress also needs to lead the way in establishing guardrails to prevent foreseeable AI harms, particularly regarding child safety and the promotion of sexual abuse.

Sens. Dick Durbin (D-IL) and Josh Hawley (R-MO) recently introduced the A.I. LEAD Act, a bipartisan bill that would establish a federal product liability framework to hold companies responsible for AI harms. Hawley and Sen. Richard Blumenthal (D-CT) also introduced the bipartisan GUARD Act, which would require AI chatbots to implement age verification measures and make it a criminal offense, punishable by fines of up to $100,000, to create or provide chatbots that solicit or exploit minors, or that promote or coerce suicide, self-harm, or physical or sexual violence.

Congress should also pass the Kids Online Safety Act, which requires platforms likely to be accessed by children to have the strongest safety settings turned on by default for minor-aged accounts, and creates a legal duty for platforms to design products that protect minors.

Ultimately, children and adults will be harmed by AI companions until tech platforms take responsibility and are held to account. If our country is serious about being a leader in artificial intelligence, it must also lead the way in prioritizing safety for all.

AI technology should expedite innovation, not exploitation.

Haley McNamara is executive director and chief strategy officer of the National Center on Sexual Exploitation, the leading national nonpartisan organization exposing the links between all forms of sexual exploitation, such as child sexual abuse, prostitution, sex trafficking, and the public health harms of pornography. Website: www.EndSexualExploitation.org. On X: @NCOSE.
