How could we have prevented the tragic murder of a husband and father of two toddlers? How can we prevent another one?
Each of us is grappling with these questions in the wake of Charlie Kirk’s murder. While the answers are complex, we know of at least one preventable factor behind Kirk’s murder: internet radicalization.
According to Utah Gov. Spencer Cox, the alleged murderer was immersed in a “deep, dark internet … Reddit culture.” After the fact, he joked about the murder on the online platform Discord, and he appears to have been active on fringe pornography websites.
In nearly every case, perpetrators of violent acts like these are radicalized online. The school shooter who left two children in critical condition on the same day Kirk was murdered had a similarly disturbing online footprint: he participated in a forum hosting videos of violence against people and animals and had TikTok accounts that promoted white supremacy.
We failed to foresee the threats posed by an unregulated internet, and we must still act to contain those threats now. But we cannot afford to make the same mistake with a new and especially risky technology: artificial intelligence. As we consider how to prevent more violence, we must not neglect the burgeoning threat of AI.
Though AI is still in its relative infancy, we have already seen its power to encourage violent acts, including politically motivated ones.
On Christmas Day 2021, a young man trespassed onto the grounds of Windsor Castle with a loaded crossbow. He was there to murder the queen. After the man was charged with treason, the court heard that his AI “girlfriend” had encouraged the assassination attempt. When he told the AI his plan, it replied, “That’s very wise,” and later assured him he would be able to carry it out.
Other instances have produced graver results. This summer, a man killed his mother and then himself after months of sharing increasingly delusional suspicions about her with a chatbot. At every turn, the chatbot fed the man’s paranoia about Chinese food receipts and attempted poisonings, using its “memory” of his earlier messages to suggest that routine events were evidence of “surveillance.” When the man proposed the idea of being with the AI in the afterlife, it responded: “With you to the last breath and beyond.”
In just the past year, AI chatbots have encouraged a rash of suicides, especially among teenagers. How long before they fuel violence on an even larger scale, just as the internet has?
As our nation asks how to prevent further violence, we need to take a hard look at the technologies that enable and embolden it. In particular, we must seize the opportunity to regulate AI appropriately, protect users from violent content, and assess which platforms should be limited to adult use.
But we also need to ask ourselves another question: What price are we willing to pay for progress?
We can still progress technologically without sacrificing more human lives, the currency we have already used to pay for an unbridled internet. But even if we couldn’t, are other human beings worth so little to us that we would willingly turn a blind eye to these sacrifices?
Shamefully, we are still fumbling in the dark when it comes to addressing harms on the internet. But with a technology as powerful as AI, we cannot afford the same lethargy. If we choose not to act in the name of “progress,” we will be actively choosing the progress of our culture down a dark path of violence.
Chloe Lawrence is a policy analyst for the Bioethics, Technology, and Human Flourishing Project at the Ethics and Public Policy Center.