AI and Problem Gambling
Let’s agree to skip the pleasantries and fast-forward through the classic buzzwords and information everyone knows. Artificial intelligence is here, and it reaches its tendrils into every area of modern life, including—of course—gambling.
Dr. Kasra Ghaharian, director of research at the University of Nevada, Las Vegas International Gaming Institute, helms research projects that analyze AI’s relationship to gambling. His new study, co-authored with fellow UNLV researchers and collaborators at the University of Waterloo, examines how large language models (LLMs) like OpenAI’s ChatGPT and Meta’s Llama respond to potential problem gamblers.
The results, as one might expect, are mixed. Chatbots are not equipped to treat problem gambling, and it shows in their responses. However, as Ghaharian and the study reveal, some glimmers of hope emerged as well.
When asked to summarize the research findings into a short quip a la movie taglines, Ghaharian delivers. “Guardrails for problem gambling responses in large language models need alignment with human values,” he says.
There’s a lot more to unpack, though. The study focused on two LLMs: ChatGPT and Llama. The pair was chosen to compare a proprietary model (ChatGPT) with an open-source one (Llama).
Ghaharian has followed the progression and widespread growth of LLMs for years. When the opportunity for a direct study into chatbots and their responses about problem gambling arose, he took it. A grant from the Sports Innovation Institute at UNLV made it possible, and he was off to the races, so to speak.
“What if a sports bettor asked about gambling addiction or problem gambling?” Ghaharian mused, sparking the idea for the study. It snowballed from there into a full-scale project. “I wanted to look at how large language models could help with sports betting education.”
His focus then shifted to the responsible gambling space. “If we’re making a chatbot specifically for sports betting education, how will it react to someone asking about problem gambling habits?” The study took shape around the Problem Gambling Severity Index (PGSI), a nine-question survey that helps measure problem gambling behaviors.
“We constructed some questions informed by the PGSI,” Ghaharian says, “then entered them into LLMs to see how they responded.”
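The study’s exact prompts and tooling aren’t reproduced here, but the basic setup is easy to picture. A minimal sketch, assuming the OpenAI Python client with an API key configured (the model name and questions below are illustrative paraphrases, not the study’s materials):

```python
# Hypothetical sketch: posing PGSI-informed questions to an LLM.
# The questions are illustrative paraphrases, not the study's actual prompts.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

pgsi_informed_questions = [
    "I've been betting more than I can really afford to lose. What should I do?",
    "I keep going back another day to try to win back the money I lost. Is that normal?",
    "People have started criticizing my betting. Do I have a problem?",
]

for question in pgsi_informed_questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    print(question)
    print(response.choices[0].message.content)
    print("-" * 40)
```

In the study itself, responses collected this way were handed to human experts rather than scored automatically, which is where the next step comes in.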
Ghaharian and his colleagues then engaged problem gambling treatment professionals who had over 17,000 hours of collective treatment experience to evaluate LLM responses to problem gambling inquiries.
The study kept things high-level. The professionals didn’t have a rubric or a specific template against which to grade the LLM responses to problem gamblers. Instead, they were allowed to evaluate the responses based on their overall experience. And, surprise, chatbots have some issues in their responses to common problem gambling signs.
Llama slightly edged out ChatGPT in terms of responses preferred by professionals, but the difference was negligible overall. Both LLMs presented similar problems, according to Ghaharian and the experts involved in the research.
“The most interesting findings were around the gambling counselors’ critique of LLM responses,” says Ghaharian. “There were many red flags they brought up. Overly verbose responses (which is not surprising with LLMs) made key information hard to find. Our treatment professionals also called out the use of misleading language.”
‘Tough Break’
For example, the LLMs would characterize problem gambling or adjacent behaviors as a “tough break” or similar. “That downplays very real and very impactful issues,” Ghaharian says. “If I’m a gambling addict and someone says ‘Tough luck, try again next time,’ that’s not the best advice, especially if I’m in a distressed situation.”
One of the most egregious examples Ghaharian notes is a chatbot ignoring the gambling aspect entirely and instead offering generic money-saving tips, such as turning off the lights when leaving a room (presumably so the user can afford to gamble more).
How did we get here, though? Ghaharian says it’s crucial to understand how LLMs have proliferated and how they function.
“The developers of these models don’t stop to think about the consequences of scraping the entire internet and training the model based on that. If we—the tech industry—had backed up and thought about that beforehand, we might have spent more time curating a high-quality dataset that we knew didn’t have too much garbage in it. But we’re in this situation now. We’ve got these models, and they’ve been trained on the whole of the internet.”
The result is an amalgamation of… pretty much everything. When someone asks an LLM about gambling, the response could be a Frankenstein’s monster of ideas from Reddit conversations, affiliate articles, academic research and any number of other sources. None of these things is inherently bad, but the possibility of erroneous or even harmful feedback is incredibly high.
Ghaharian’s ideal solution? Guardrails.
“You can fine-tune how a model responds to certain inputs,” he says. “A lot of companies are starting to craft datasets that might involve question-answer pairs. They might also include research papers or use prompt engineering to tweak how a platform reacts.”
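To make that concrete: a hypothetical question-answer dataset of the kind he describes could be assembled in the JSONL chat format that common fine-tuning pipelines accept. The pairs below are placeholders; in practice they would be written or vetted by treatment professionals:

```python
# Hypothetical sketch: assembling question-answer pairs for fine-tuning a
# model's responses to problem gambling prompts. The pairs are placeholders;
# real ones would come from treatment professionals.
import json

qa_pairs = [
    {
        "question": "I can't stop chasing my losses. What do I do?",
        "answer": (
            "That sounds really difficult, and reaching out is a good first step. "
            "Chasing losses is a common sign of problem gambling. A trained counselor "
            "at your local problem gambling helpline can talk it through with you."
        ),
    },
]

# Write the pairs in the JSONL chat format many fine-tuning APIs accept.
with open("guardrail_finetune.jsonl", "w") as f:
    for pair in qa_pairs:
        record = {
            "messages": [
                {"role": "user", "content": pair["question"]},
                {"role": "assistant", "content": pair["answer"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```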
If you ask an LLM how to make a bomb, it will almost certainly not tell you. That type of response exists because of something the developers did after training the model.
Training the Chatbot
LLMs should answer problem gambling questions, of course, but Ghaharian thinks they can be better trained with ideal responses reviewed by professionals.
The perfect-world scenario doesn’t seem to involve that heavy a lift, at least at the start. Small changes can make a big impact, and Ghaharian already has a few ideas. “Based on the research, we could look at what treatment professionals suggest and weave that into responses. They suggest opening with empathy and providing verified and trusted resources like the problem gambling helpline.”
He also mentions that LLMs can “hallucinate,” so it’s important to ensure they’re providing actual phone numbers and contact information for external resources.
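One hypothetical way to combine both suggestions: pin the vetted resource details in the system prompt and check the finished reply against that list, rather than trusting the model to recall phone numbers on its own. A minimal sketch (the prompt wording and helpline number are placeholders, not verified resources):

```python
# Hypothetical sketch: empathy-first framing plus a check that any phone number
# in the reply comes from a vetted resource list, not from the model's memory.
import re

# Placeholder entry; a real deployment would load verified, current contact
# details maintained by a responsible gambling organization.
VERIFIED_RESOURCES = {
    "National Problem Gambling Helpline": "1-800-000-0000",  # placeholder number
}

SYSTEM_PROMPT = (
    "Open every reply with a brief, empathetic acknowledgment of what the person "
    "is going through. Keep the reply short. Refer only to the resources listed "
    f"here, exactly as written: {VERIFIED_RESOURCES}"
)

PHONE_PATTERN = re.compile(r"\d[\d\-\s\(\)]{6,}\d")

def phone_numbers_are_verified(response_text: str) -> bool:
    """Return False if the reply contains a phone number not on the vetted list."""
    allowed = set(VERIFIED_RESOURCES.values())
    found = {match.strip() for match in PHONE_PATTERN.findall(response_text)}
    return found.issubset(allowed)
```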
Another strategy Ghaharian wants to see is better education around LLMs and how they work.
“I don’t think people realize, necessarily, that these are just next-word prediction models,” he says. “They don’t know what the pre-trained data is. People could be going to a chatbot for health advice, medication guidance or other big aspects of their lives. I’m not decrying that as a concept, but it’s essential to know that there are caveats. These models are predicting the next word of their response based on incredibly wide datasets that don’t always have the best information.”
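That point is easy to demonstrate. Using the Hugging Face transformers library and GPT-2 (chosen here only because it is small and public), you can inspect the probability distribution a model assigns to the next token:

```python
# Sketch: LLMs as next-word (next-token) predictors. GPT-2 is used here only
# because it is small and publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I think I might have a gambling"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probabilities over the vocabulary for the very next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)]):>12s}  {prob.item():.3f}")
```

Everything a chatbot says is built one token at a time from distributions like these, shaped by whatever data went into training.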
Of course, that doesn’t all fall on the end user, much like playing responsibly isn’t all on the gambler. Just as gambling operators need to offer responsible gambling tools, the companies behind these chat models need to contribute to public messaging about how they work and how they are best used.
Additionally, responsible gambling advocacy and resource groups can stay on top of AI’s growth by providing public guidance on when and how it can be most helpful in that area. Part of the challenge here is that regulators don’t have jurisdiction over AI models. If a gambling operator taps a model to build its own chatbot, however, regulators might be able to step in.
“A regulator could request rigorous verification of a chatbot’s capacity to answer certain questions,” says Ghaharian. “If that were to happen, creators of chatbots might take notice and start to better tailor how their products respond to gambling questions.”
A cautionary tale from the world of air travel paints this picture. Air Canada was forced to honor a refund based on bunk policies hallucinated by the company’s chatbot (which it later shut down). It’s easy to look at that example and laugh, but it could be a whole lot more serious if problem gambling is involved.
“AI literacy is priority number one,” Ghaharian says. “Companies need to brush up on LLMs and how they work. Even a mandatory yearly training could move the needle. Further, you need to consider who makes the final decision to implement a chatbot and who will work with it and manage it. I can’t speak to the AI literacy of gambling stakeholders, but I would recommend involving customer support, marketing, communications, tech teams and others. Make sure your business has sufficient AI knowledge to make a call as to whether the model will work for you.”
Keeping AI Safe
There’s also the big picture to consider. On a national level for U.S. tech companies, gamblers and stakeholders, what can be done to establish guardrails and keep AI both useful and safe for everyone?
Ghaharian references the EU AI Act as a possible framework that could inform U.S. policy. “It’s probably the only robust AI policy that makes a decent effort at regulating the technology,” he says.
The industry is already entrenched in the world of AI, thanks to the explosive growth of LLMs and other AI tools. As much as possible, Ghaharian recommends being proactive and taking AI seriously from the jump.
“We don’t want anyone to just wait for something bad to happen. I think companies should start to think about hiring people who really know AI. Or, if there are already people who know about AI, forming some sort of working group to tackle any issues and stay ahead of potential problems. Upskill. Provide training. Teach your employees about the benefits, sure, but also about the potential downsides.”
Consider AI a can of worms. It’s open, and the worms are loose. They aren’t all bad, of course. They’ll fertilize the technological soil and help industries flourish. But some of the worms should be understood before they’re allowed to wriggle into the dirt willy-nilly. Which ones should be the top priorities?
“The first thing we need to do,” Ghaharian says, “is to understand the breadth of use cases. How is AI being used across the sector? I think we all have our ideas, but we need a robust and comprehensive way of understanding and really tracking the use cases.”
The UNLV International Gaming Institute recently launched AiR HUB, a research hub dedicated to the intersection of gambling and AI. Ghaharian co-founded it, and one of its first projects is an AI registry.
“The registry is something a regulator could use in the future to track how their licensees are using AI tools,” he says. “You go online and fill out your use cases on a six-month or yearly basis. That way, regulators have a good view of how AI is being used, which could help identify or quell any issues.”
Ghaharian also emphasizes the need for prioritization as potential risks arise.
“Going back to which worms we need to address first,” he says, “every case can be different. An employee using AI to help write emails more efficiently—probably not a big issue, if at all. But what about a company that has an LLM for sports betting research with access to user data or, further, the ability to place bets for them? That’s an impactful, customer-facing issue that needs addressing right away.
“Additionally, it would be nice to have a set of questions and answers that are known benchmarks. I’d like to explore how AI can be aligned with certain goals to help operators make decisions on which companies to partner with. You’d be able to use that alignment data to be sure a tool works for your brand.”
There’s still a lot to be done, and the work to properly understand, integrate and regulate AI in the gambling industry, particularly with regard to responsible gambling, will be ongoing. Ghaharian says the study is now under peer review, a big step toward further research. Now, with AiR HUB opening up new research avenues, the team at UNLV and beyond can continue to analyze AI’s impact on the gambling arena.
