1999icml Jonathan Schaeffer, Darse Billings, Lourdes Peña, Duane Szafron, Learning to Play Strong Poker, Workshop on Machine Learning in Game Playing at the Sixteenth International Conference on Machine Learning (ICML-99), Bled, Slovenia, June 30, 1999.

Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect knowledge, where multiple competing agents must deal with risk management, opponent modeling, unreliable information, and deception, much like decision-making applications in the real world. Opponent modeling is one of the most difficult problems in decision-making applications, and in poker it is essential to achieving high performance. This paper describes and evaluates the implicit and explicit learning in the poker program Loki. Loki implicitly "learns" sophisticated strategies by selectively sampling likely cards for the opponents and then simulating the remainder of the game. The program uses explicit learning to observe its opponents, construct opponent models, and dynamically adapt its play to exploit patterns in the opponents' play. The result is a program capable of playing reasonably strong poker, but considerable research remains before it can play at a world-class level.
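The selective-sampling idea mentioned in the abstract can be illustrated with a small sketch. This is a toy abstraction, not Loki's actual code: hands are reduced to strength numbers in [0, 1], and the opponent model is assumed to be a weight per candidate hand, with higher weight meaning the opponent's observed actions make that hand more likely. The function names (`selective_sample`, `estimate_win_prob`) and the weight-table representation are illustrative assumptions.

```python
import random

def selective_sample(weights, rng):
    """Draw one candidate opponent hand, biased by the opponent model's weights.

    `weights` maps a hand (here just a strength value) to a non-negative
    weight; sampling is proportional to weight, not uniform over all hands.
    """
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for hand, w in weights.items():
        acc += w
        if r < acc:
            return hand
    return hand  # numeric edge case: return the last hand

def estimate_win_prob(our_strength, opponent_weights, trials=10_000, seed=0):
    """Monte Carlo estimate of win probability against a modeled opponent.

    Each trial samples a likely opponent hand from the model and 'simulates'
    the showdown (here trivially: higher strength wins). A real simulator
    would also deal future board cards and play out betting rounds.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        opp = selective_sample(opponent_weights, rng)
        if our_strength > opp:
            wins += 1
    return wins / trials
```

A hypothetical usage: against an opponent modeled as equally likely to hold weak, medium, or strong hands, `estimate_win_prob(0.6, {0.2: 1.0, 0.5: 1.0, 0.8: 1.0})` estimates roughly 2/3; shifting the weight toward strong hands (a "tight" opponent) drives the estimate down, which is how the opponent model changes the program's decisions.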