The Dark Side of Online Gambling

In March, onetime Las Vegas bookie and sportsbook consultant Robert Walker published The House Always Wins: How Three Men Built America’s Gambling Machine and Called It Entertainment. It’s an insider’s look at DraftKings’ rise from bright idea to fantasy sports startup to post-PASPA sports betting pioneer.

From its origins in a co-founder’s spare bedroom, DraftKings has grown into the United States’ top sportsbook by revenue, and it continues to duke it out with FanDuel for market share. It now operates in 28 U.S. states and the District of Columbia, and it carries a market value of nearly $13 billion.

But lawsuits allege predatory marketing practices that skimp on Know Your Customer protocols, reduce individuals to data points, and contribute to addiction. In this excerpt, Walker tells the story of one such player.

Prologue: The Notification

The following account is a composite narrative. The individual designated here as “Sarah Martinez” does not exist as a single person. She is drawn from documented patterns preserved in class-action filings, state regulatory complaints, customer service records produced through legal discovery, and the testimony of responsible gaming researchers who have studied the behavioral targeting practices of online sports betting platforms. Every algorithmic capability described in this prologue is grounded in patent filings, public statements by DraftKings executives, expert analysis of observable platform behaviors, and documentation produced through litigation. The composite method is used to protect the identities of real individuals while rendering, with fidelity, the systemic patterns their experiences reveal.

The system assigned her a number before it assigned her an offer.

User Classification 47291-A. The designation lived in a database partition reserved for accounts whose behavioral signatures matched a specific cluster profile: deposit frequency declining over a twenty-one-day window, session duration increasing during late-night hours, bet sizing oscillating between conservative and aggressive in patterns that the model’s training data associated with financial stress. The classification carried no name. Names were stored in a different table, linked by account ID but irrelevant to the process that had flagged her for intervention. The model did not need to know that she was a thirty-four-year-old pediatric nurse who worked the overnight shift at a hospital in the suburbs of a mid-Atlantic city. It needed to know that her account exhibited a deposit-to-loss ratio of 1:0.87 over the trailing ninety days, that her average session began at 11:12 p.m. and ended at 1:47 a.m., and that her responsiveness to promotional offers increased by 34% during weeks when her deposit amounts dropped below her rolling thirty-day average.

The system had been monitoring her for seventy-three days.

Not monitoring in the way a person monitors. The system did not peer through a window or sit in a parked car. It ingested data points. Login timestamps. Bet types. Deposit methods. Withdrawal requests, of which there had been none in sixty-one days. Session lengths. Click paths through the app’s interface. The speed at which she scrolled past responsible gaming disclosures. The frequency with which she opened the promotional offers tab before making a deposit. Each data point fed a model that had been trained on the behavioral residue of millions of prior users, and the model’s output was a probability: the likelihood that a specific intervention, delivered at a specific time, in a specific format, at a specific dollar amount, would produce a specific response.

The probability, on this particular Tuesday, was 73.2%.

The intervention was a push notification. Three short sentences of text, the product of eight hundred and forty-seven separate A/B tests conducted over eighteen months. The tests had evaluated message length. Shorter outperformed longer by 12% in click-through rate. Tone. Empathetic outperformed transactional by 9%. Personalization. A first-name greeting outperformed generic by 7.4%. Dollar amount. The optimal range for users in her cluster profile was $25 to $75, with $50 producing the highest deposit conversion rate at the lowest promotional cost. The winning variant read: Sarah, we’ve noticed you’re having a tough week. Here’s $50 in bonus credits to turn things around. Sometimes one good bet changes everything.

The delivery time was 11:47 p.m.

That number was not arbitrary. The system had cross-referenced her historical login data with circadian rhythm research indicating that cortisol levels peak during late-night hours in individuals with disrupted sleep patterns, and that decision-making capacity degrades measurably after 11:00 p.m. in users whose session histories indicate chronic late-night engagement. Her historical response rate to promotional offers delivered between 11:30 p.m. and midnight was 41% higher than her response rate to offers delivered during daytime hours. The system routed the notification to the deployment queue at 11:43 p.m. and held it for four minutes to avoid clustering with other scheduled notifications that might dilute its impact.

At 11:47 p.m., the notification crossed the gap between server and screen. Sarah Martinez’s phone sat on the nightstand beside a glass of water, a bottle of ibuprofen, and a framed photograph of her daughter, Lily, at age four, standing in an inflatable pool with a garden hose raised like a scepter. The photograph was seven years old. Lily was eleven now, asleep in the bedroom down the hall, and the inflatable pool had been replaced by a travel soccer schedule that cost $3,200 per season in registration fees, equipment, and tournament travel. Sarah had memorized the figure because she had calculated it against her monthly take-home pay of $4,870 and found that, over the four-month season, it consumed 16.4% of what she brought home.

The phone lit up.

She had been lying in the dark for twenty minutes, not sleeping. The shift at Children’s had ended at 7:00 p.m. She had driven home through the kind of rain that turned windshield wipers into metronomes, picked up Lily from her sister’s apartment, microwaved two plates of leftover pasta, supervised homework, packed tomorrow’s lunch, and sat on the edge of Lily’s bed reading three pages of a book about a girl detective before Lily’s breathing slowed and her grip on Sarah’s wrist went slack. The routine was a scaffolding she had built around the parts of the day that resisted scaffolding. The nine-year-old in Bed 4. Burns covering 40% of his body. Parents who could not stop apologizing to the nurses for crying. Pain medication dosage recalculated three times because his weight had dropped since admission and the original prescription was now 15% above the adjusted threshold.

She carried the dosage numbers home the way she carried everything from the ward: in the body, not the mind. The mind had professional boundaries. The body did not know about professional boundaries. The body stored the numbers alongside the grocery list and the electric bill and the $47,000 in outstanding medical debt from her mother’s oncology treatments. The body kept a running ledger that no one asked to see and no one could shut off.

She picked up the phone.

The notification sat on the lock screen between a text from her sister (“Lily was an angel, don’t worry about the $20”) and a calendar reminder for Thursday’s parent-teacher conference. The $50 bonus offer glowed in the standard DraftKings green. She had installed the app eleven months earlier, during a weekend when a coworker had mentioned a promotion offering a $1,000 deposit match for new customers. The coworker had described it as “free money.” The deposit match required wagering the bonus amount a specified number of times before any withdrawal was permitted. The condition was disclosed in the terms of service that the coworker had not mentioned and that Sarah, in the parking lot after a twelve-hour shift, had not read.

Her first deposit had been $100. She remembered the amount because it matched the copay she had just paid for her mother’s oncology appointment, and the symmetry had struck her as either ironic or appropriate, depending on which version of herself was doing the accounting. Her mother’s insurance had denied the latest round of treatment as “not medically necessary.” The phrase was one Sarah could recite in her sleep. She had spent nine years watching insurance companies define medical necessity in terms that bore no relationship to the bodies in the beds. The outstanding bills were $47,000. The amount grew by approximately $2,800 each month. Sarah’s savings account held $1,211.

The $50 bonus credit sat on the screen.

The system that delivered the notification did not know about the boy in Bed 4. It did not know about the $47,000. It did not know that Sarah had applied for a second credit card the previous week and been denied, that the denial letter was folded inside her purse between a gas receipt and a coupon for Lily’s soccer cleats, that the denial had arrived on the same day as a phone call from the collections agency representing the oncology practice. The system tracked different variables. Deposit frequency. Session timing. Bet sizing patterns. Behavioral cluster assignment. Response rates to prior promotional offers. These variables, in aggregate, matched a profile whose members responded to promotional interventions at statistically elevated rates during periods the model classified as “high engagement potential.”

The language was the system’s own. “High engagement potential” appeared in documentation produced through legal discovery in a related proceeding. The phrase occupied a cell in a spreadsheet that also contained columns labeled “Deposit Probability,” “Expected LTV Impact,” and “Promotional ROI.” The spreadsheet did not contain a column labeled “Financial Distress.” It did not contain a column labeled “Psychological Vulnerability.” These were not categories the model had been designed to track. They were, according to the testimony of data science professionals who reviewed the system’s architecture in regulatory proceedings, categories the model had learned to predict as a byproduct of optimizing for the categories it was designed to track.

The distinction mattered in court. It did not matter at 11:47 p.m.

Sarah opened the app.

The interface loaded in 1.3 seconds. The rendering order prioritized promotional content above standard navigation elements, ensuring that the bonus offer appeared before the user’s account balance. The $50 credit was visible in the promotional balance field. To use it, she needed only to deposit matching funds. The minimum qualifying deposit was $10. The system had calculated that for users in her cluster profile, a $10 minimum produced a higher conversion rate than a $25 minimum, because the lower threshold reduced the psychological friction associated with the deposit decision while generating functionally equivalent downstream behavior: users who deposited $10 to unlock a $50 bonus deposited, on average, an additional $67 within the same session.

She deposited $50.

The amount matched the bonus because the number felt symmetrical, and because the $50 withdrawal from her checking account would leave a balance of $312. Enough for the electric bill at $127. Lily’s school lunch account refill at $45. Gas for the week at $60. That left $80. She did not run this calculation consciously. The numbers occupied a partition of her awareness the way the medication dosages occupied a partition of her professional attention. Always running. Always updated. Always wrong by the time the next bill arrived.

The $100 combined balance appeared on the screen.

She placed the first bet at 11:52 p.m. A same-game parlay: three legs, odds of +487. The parlay builder had been designed by a product team whose internal performance metrics measured, among other indicators, the percentage of users who added a third leg to a two-leg parlay after being shown the potential payout. The team had optimized this metric from 23% to 38% through interface modifications that displayed the potential winnings in a progressively larger font as legs were added. The design made the number grow. The growing number made the bet feel more valuable. The mathematical reality, that each additional leg reduced the probability of winning by a factor roughly equivalent to the increased payout, was available to any user who calculated it. The interface did not calculate it. It showed the growing number.

The bet lost. The third leg, a player receiving yards prop, missed by eleven yards.

She placed a second bet at 12:14 a.m. A live in-game wager, odds shifting in real time, the interface updating every three seconds with new lines that the live betting engine generated from a proprietary model processing over 200 data inputs per second from the game’s statistical feed. The speed of the updates created a perceived urgency that the responsible gaming research literature describes as “continuous play reinforcement”: the sensation that a favorable line might disappear if the user does not act in the current refresh cycle. The interface did not pause between bets. It did not ask whether she wanted to continue. The session remained open, the wallet remained accessible, and the live odds continued to refresh.

The second bet won. The payout was $34.

She placed a third bet. And a fourth. The parlay builder appeared again. The potential payout numbers grew in their progressively larger font. She added a third leg. She added a fourth. The implied probability of a four-leg parlay hitting was approximately 6.25%: four roughly even-money legs, each close to a coin flip, compounding to one chance in sixteen. The potential payout was $312. The number on the screen was larger than her checking account balance.

The bet lost.

At 12:31 a.m., her combined balance was $84. She had deposited $50 of her own money, received $50 in promotional credits, placed four bets, won $34 on one of them, and lost the rest. She was $16 poorer than she had been forty-four minutes earlier. The session was not over.

She deposited another $25. The checking account balance dropped to $287. The electric bill was still $127. The school lunch account was still $45. The gas was still $60. The math no longer worked, but the math was in the partition that kept running without permission, and the screen was showing a new line, a new opportunity, a live odds adjustment that the interface displayed as a flashing green arrow indicating favorable movement, and the arrow was pointing up.

At 1:17 a.m., her combined balance was $12.

At 1:43 a.m., her balance was zero.

She closed the app. She placed the phone face-down on the nightstand, beside the glass of water and the ibuprofen and the photograph of Lily in the inflatable pool. The room was dark. The house was quiet. Down the hall, Lily slept with one arm wrapped around a stuffed elephant whose name was Gerald and whose left ear had been chewed to a nub during a period of nighttime anxiety that the pediatrician had attributed to “adjustment difficulties” following the separation.

Sarah lay on her back in the dark. The ceiling was the same ceiling she had stared at during the nights after the separation, the nights after her mother’s diagnosis, the nights after the insurance denial. The ceiling offered nothing. The phone, face-down on the nightstand, offered the promise of something. The something was a number. The number was a line. The line was an opportunity. The opportunity was a chance to recover what had been lost. The chance was available at any hour, in any amount, from an interface that never closed and never asked why she was still awake at 1:43 in the morning on a Tuesday in a dark house in the suburbs of a mid-Atlantic city with $287 in her checking account and a daughter down the hall whose soccer season cost $3,200 and whose stuffed elephant had a chewed ear.

The phone stayed face-down. This time.