Artificial Intelligence is increasingly embedded in online gambling platforms, promising earlier detection of risky behavior while raising concerns that the same technology could intensify addiction if left unchecked.
Multimodal AI models such as OpenAI’s ChatGPT, systems that analyze multiple types of data at once, are emerging as a promising safeguard, but experts caution that the same tools can also intensify harm if misused.
Dr. Kasra Ghaharian, the director of research at the International Gaming Institute at the University of Nevada, Las Vegas, told the Washington Examiner that with greater access comes greater risk for players to engage in problem gambling behavior.
“You no longer have to get in your car to drive to a casino. You just sit on your couch and download an app,” Ghaharian said. “I don’t think there’s anything (I can’t say about) the potentially more engaging or addictive nature of the product.”
Research has shown that the rise of online access has increased gambling exposure and intensity, particularly among younger users — a shift that has made AI models simultaneously more capable of detecting risky behavior and of amplifying it through personalized engagement.
As gambling shifts deeper into digital, app-based ecosystems, AI is no longer just a separate tool used as a personal assistant, such as asking ChatGPT about wagers; it is being embedded into gambling platforms to flag behavior. At the same time, that behavior-flagging AI could be used to drag gamblers deeper into addiction.
AI models to identify risky gambling behavior
Online gambling platforms generate continuous streams of behavioral data, from bets placed and deposits made to withdrawals canceled and messages sent to customer support. Multimodal AI models analyze those signals together, allowing platforms to interpret not just what a user does, but how their behavior changes over time.
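The platforms’ actual pipelines are not public, but the change-over-time analysis described above can be sketched in miniature. Everything below — the field names, the weighting of canceled withdrawals, the baseline comparison — is an invented stand-in, not any operator’s real model:

```python
from dataclasses import dataclass

@dataclass
class SessionSnapshot:
    # One period of a player's activity; all fields are hypothetical stand-ins.
    bets_placed: int
    deposits_made: int
    withdrawals_canceled: int
    support_messages: int

def behavior_shift(history: list[SessionSnapshot]) -> float:
    """Compare a player's recent activity to their own earlier baseline.

    Returns a ratio greater than 1.0 when recent intensity exceeds the
    baseline -- the kind of change-over-time signal the article describes,
    as opposed to a single static threshold.
    """
    if len(history) < 4:
        return 1.0  # not enough history to judge a trend
    half = len(history) // 2

    def intensity(snaps):
        # Canceled withdrawals are double-weighted here purely for illustration.
        return sum(s.bets_placed + s.deposits_made + 2 * s.withdrawals_canceled
                   for s in snaps) / len(snaps)

    baseline = intensity(history[:half])
    recent = intensity(history[half:])
    return recent / baseline if baseline else 1.0

# A player whose betting and canceled withdrawals climb sharply over the period:
week = [SessionSnapshot(10, 1, 0, 0)] * 3 + [SessionSnapshot(25, 3, 2, 1)] * 3
print(behavior_shift(week) > 1.0)  # True: flags an upward shift
```

The point of the sketch is the comparison against the player’s own history rather than a fixed limit, which is what lets this style of analysis surface a change in behavior rather than a single large bet.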
Dr. Zahra Shakeri, an assistant professor of public health at the University of Toronto, told the Washington Examiner that the shift has altered how risky gambling emerges and how it is detected.
“To compare this technique with traditional techniques, I would say traditional techniques see the bets, but multimodal techniques would see this story around the bets and that’s where harm shows up early,” Shakeri said.
Shakeri said traditional safeguards rely on static thresholds such as spending limits or session length. She said older tools struggle to keep pace with AI-mediated gambling environments where betting options, promotions, and prompts are constantly adapting to user behavior.
Multimodal AI models allow interventions against risky gambling, such as freezing a player’s account, to happen before casual wagering ventures into problem behavior.
“If we only look at transactions using traditional techniques, we catch the harm too late,” Shakeri said. “It’s like diagnosing lung disease only by checking how often someone buys cough drops. For example, with multimodal, you’re looking at data coming from … different resources, and it would give you a better insight about the context and the problem.”
The risk of AI models in gambling
Researchers warn that the same AI systems capable of identifying vulnerability are built on engagement-optimized platforms. This means risk detection and risk creation can stem from the same multimodal systems.
“We know that the application of AI in this field also is a double-edged sword,” Shakeri said. “There should definitely be policy regulations … to target both how AI is being used and how AI can be used to address the problem.”
Shakeri, who has worked on research teams building multimodal systems, said models designed to detect risk behavior could also be repurposed to identify which users are most responsive to incentives or promotions, potentially nudging vulnerable players toward higher engagement.
The growing use of large language AI models raises additional concerns, particularly as players increasingly turn to AI tools to guide betting decisions.
As AI continues to grow, Ghaharian predicted that players will see the potential to use large language models to aid their gambling and fine-tune so-called strategies, even if those tools are implemented as a safeguard for players.
“We’re really going to start to see lots of betting assistance and LLM-based products,” Ghaharian said. “I do think it’s going to be important for … (gambling regulators) to start thinking about how to communicate how this technology works, so that the people using these tools know the risks, know the boundaries, and can use it in a safe and responsible manner.”
Essentially, if a gambling app has an AI chatbot that intervenes when players engage in risky behavior, players could also ask the bot about wagers, strategy, or betting odds.
Online gambling companies are using AI now
Given the dual-use nature of multimodal AI systems, gambling companies are beginning to use AI not only to analyze behavior, but to change how and when interventions occur.
Fanatics Sportsbook has integrated AI-driven tools into its responsible-gaming framework alongside traditional safeguards.
Anthony D’Angelo, the head of responsible gaming at Fanatics, told the Washington Examiner the company follows New Jersey’s responsible-gaming best practices nationwide, even in states where such monitoring is not mandated.
Fanatics has onboarded an AI-based risk-scoring system, Neccton, developed by gambling researcher Dr. Michael Auer, D’Angelo said. The system evaluates dozens of behavioral indicators and assigns players a risk score, allowing responsible gaming teams to prioritize reviews and intervene earlier when necessary.
“Neccton has a proprietary algorithm, which we don’t touch … we have (Auer’s) model to identify what he believes are markers of harm,” D’Angelo said. “Let’s say you’re somebody who cancels withdrawals often or has a lot of insufficient funds … that would drive your score up, which means that maybe somebody should get involved and do a review of your account.”
Rather than reacting to isolated triggers, such as a large deposit or a spree of wagers, the system surfaces accounts showing sustained or accelerating behavioral change, allowing responsible gaming teams to intervene earlier in a player’s betting cycle.
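Neccton’s actual algorithm is proprietary and untouched by Fanatics, so the following is only a hedged sketch of the general idea D’Angelo describes: weighted counts of behavioral markers feed a single score, and a score above a cutoff prioritizes the account for human review. The markers, weights, and threshold here are all invented:

```python
# Illustrative only: Neccton's real model is proprietary. These markers,
# weights, and the review threshold are invented for this sketch.
HARM_MARKER_WEIGHTS = {
    "canceled_withdrawals": 3.0,    # reversing a cash-out to keep betting
    "insufficient_funds": 2.0,      # deposits failing for lack of money
    "late_night_sessions": 1.0,
    "deposit_frequency_spike": 2.5,
}
REVIEW_THRESHOLD = 10.0  # hypothetical cutoff for prioritizing a human review

def risk_score(markers: dict[str, int]) -> float:
    """Sum weighted counts of behavioral markers into a single score."""
    return sum(HARM_MARKER_WEIGHTS.get(name, 0.0) * count
               for name, count in markers.items())

def needs_review(markers: dict[str, int]) -> bool:
    """True when the score is high enough to queue a human review."""
    return risk_score(markers) >= REVIEW_THRESHOLD

# A player with repeated canceled withdrawals and failed deposits:
player = {"canceled_withdrawals": 2, "insufficient_funds": 3}
print(risk_score(player), needs_review(player))  # 12.0 True
```

Note the score only prioritizes accounts for review; consistent with D’Angelo’s description, the decision to intervene stays with the human responsible gaming team.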
“We are to do some type of an intervention, whether it’s a communication, whether it’s a deep review by the RG operations team to determine if we need … go through with a suspension,” D’Angelo said. “We could shut a player down for … a set amount of time or permanent right, depending if there’s some clear indicator.”
D’Angelo said the AI does not diagnose addiction or act autonomously. Instead, it standardizes reviews and reduces subjectivity, helping human teams identify concerning trends more consistently.
Large language models are also used internally to analyze customer communications for sentiment that may indicate distress, triggering escalation to responsible gambling specialists.
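The article does not detail how that internal analysis works. As a toy stand-in for the escalation flow, a distress screen over support messages could look like the following, with a simple phrase match where the real system uses a large language model; the phrase list and function are invented for illustration:

```python
# Toy stand-in for the LLM-based sentiment analysis described above.
# The real system is not public; this phrase list is invented.
DISTRESS_PHRASES = (
    "can't stop", "chasing losses", "lost everything",
    "need my money back", "out of control",
)

def flag_for_escalation(message: str) -> bool:
    """Flag a support message for escalation to responsible-gambling
    specialists when it contains language suggesting distress."""
    text = message.lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)

print(flag_for_escalation("I keep chasing losses and can't stop"))  # True
print(flag_for_escalation("How do I update my email address?"))     # False
```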
Policy decisions around AI’s impact
As AI becomes more deeply embedded in gambling platforms, researchers said its impact on gambling behavior will ultimately depend on regulation.
Ghaharian said the next phase of digital gambling will likely include more AI-driven betting assistance tools, further blurring the line between guidance and influence.
“I don’t think gambling operators might even be aware that these technologies are being used and what the risks are,” Ghaharian said. “I think there might be an opportunity for gambling regulators to specifically step in and be like, ‘Oh, you need to start doing specific safety evaluations on (large language models) if they use customer-facing scenarios.’ That’s something I would like to see.”
Shakeri said the question facing policymakers is not whether AI will shape gambling behavior, but rather how it will.
“The question is not whether AI will be used,” she said. “It’s what objective we allow it to optimize.”
