Gambler’s Ruin
A gambler starts with $i and bets $1 each round, winning with probability p. The game ends at $0 (ruin) or $N (target). Even a tiny house edge dramatically shifts the odds toward ruin.
How it works
The Gambler’s Ruin is one of the oldest problems in probability theory, studied by Pascal, Huygens, and Bernoulli. A gambler starts with $i and repeatedly bets $1 on a coin flip, winning with probability p and losing with probability q = 1 − p. The game ends when the gambler either goes broke ($0) or reaches a target ($N).
When the game is fair (p = 0.5), the probability of ruin is simply (N − i)/N. With starting money $10 and target $25, there is a (25 − 10)/25 = 60% chance of ruin. But when the game is even slightly unfair — say p = 0.49 — the ruin probability jumps dramatically. For p ≠ 0.5 the ruin probability is ((q/p)^N − (q/p)^i) / ((q/p)^N − 1); when q > p the ratio q/p exceeds 1, and the exponential terms make ruin almost certain for large N.
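Both formulas can be sketched in a few lines of Python (the function name `ruin_probability` is my own, chosen for illustration):

```python
def ruin_probability(i: int, N: int, p: float) -> float:
    """Probability that a gambler starting with $i goes broke
    before reaching $N, winning each $1 bet with probability p."""
    if p == 0.5:
        return (N - i) / N           # fair game: simple linear formula
    r = (1 - p) / p                  # the ratio q/p
    return (r**N - r**i) / (r**N - 1)

print(ruin_probability(10, 25, 0.50))  # fair game: 0.6
print(ruin_probability(10, 25, 0.49))  # slightly unfair: ~0.714
```

Note how the unfair case routes through the ratio q/p: for p = 0.49 it is 51/49 ≈ 1.04, and raising it to the Nth power is what drives ruin toward certainty as N grows.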
This has profound implications for real-world gambling. Shift the win probability by just one percentage point (p = 0.49, an expected loss of 2 cents per $1 bet) and the longer you play, the more certain your ruin becomes. A player who starts with $10 and wants to reach $25 at a game with p = 0.49 will go broke about 71% of the time. Increase the target to $100 and the ruin probability exceeds 99%. The house always wins not because of any single bet, but because the mathematics of random walks with drift guarantees it over time.
The batch simulation lets you verify these theoretical predictions empirically. Run 1000 simulations and compare the observed ruin rate with the formula. The agreement is typically excellent, illustrating one of the most elegant connections between theory and experiment in probability.
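A minimal Monte Carlo check along these lines (the seed and trial count are illustrative choices, not part of the original description):

```python
import random

def simulate_once(i: int, N: int, p: float, rng: random.Random) -> bool:
    """Play $1 bets until the bankroll hits 0 or N; return True on ruin."""
    money = i
    while 0 < money < N:
        money += 1 if rng.random() < p else -1
    return money == 0

def ruin_probability(i: int, N: int, p: float) -> float:
    """Exact ruin probability from the gambler's-ruin formula."""
    if p == 0.5:
        return (N - i) / N
    r = (1 - p) / p
    return (r**N - r**i) / (r**N - 1)

rng = random.Random(42)              # fixed seed for reproducibility
trials = 10_000
ruined = sum(simulate_once(10, 25, 0.49, rng) for _ in range(trials))
print(f"observed: {ruined / trials:.3f}  theory: {ruin_probability(10, 25, 0.49):.3f}")
```

With 10,000 trials the sampling error is under a percentage point, so the observed ruin rate should land close to the theoretical 0.714.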