No-Regret Learning in Extensive-Form Games with Imperfect Recall

Marc Lanctot, Richard Gibson, Neil Burch, and Michael Bowling. No-Regret Learning in Extensive-Form Games with Imperfect Recall. In Proceedings of the Twenty-Ninth International Conference on Machine Learning (ICML), pp. 65–72, 2012. A longer version is available as a University of Alberta Technical Report, TR12-04.

Download

[PDF] 

Abstract

Counterfactual Regret Minimization (CFR) is an efficient no-regret learning algorithm for decision problems modeled as extensive games. CFR's regret bounds depend on the requirement of perfect recall: players always remember information that was revealed to them and the order in which it was revealed. In games without perfect recall, however, CFR's guarantees do not apply. In this paper, we present the first regret bound for CFR when applied to a general class of games with imperfect recall. In addition, we show that CFR applied to any abstraction belonging to our general class results in a regret bound not just for the abstract game, but for the full game as well. We verify our theory and show how imperfect recall can be used to trade a small increase in regret for a significant reduction in memory in three domains: die-roll poker, phantom tic-tac-toe, and Bluff.
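
To illustrate the no-regret learning idea the abstract refers to, below is a minimal, self-contained sketch of regret matching, the per-decision update that CFR-style algorithms are built on, run in self-play on a small matrix game. This is not code from the paper; the example game, function names, and parameters are illustrative assumptions only.

import random

def regret_matching_strategy(cum_regret):
    """Map cumulative regrets to a strategy: positive regrets, normalized."""
    positives = [max(r, 0.0) for r in cum_regret]
    total = sum(positives)
    if total > 0:
        return [p / total for p in positives]
    # No positive regret yet: fall back to the uniform strategy.
    n = len(cum_regret)
    return [1.0 / n] * n

def run_regret_matching(payoffs, iterations=10000):
    """Self-play regret matching for a two-player zero-sum matrix game.

    payoffs[i][j] is the row player's payoff for actions (i, j);
    the column player receives the negation. Returns the row player's
    average strategy, which converges toward an equilibrium strategy."""
    n = len(payoffs)
    regret_row = [0.0] * n
    regret_col = [0.0] * n
    strat_sum_row = [0.0] * n

    for _ in range(iterations):
        srow = regret_matching_strategy(regret_row)
        scol = regret_matching_strategy(regret_col)
        i = random.choices(range(n), weights=srow)[0]
        j = random.choices(range(n), weights=scol)[0]

        # Regret of each alternative action against the opponent's sampled action.
        for a in range(n):
            regret_row[a] += payoffs[a][j] - payoffs[i][j]
            regret_col[a] += -payoffs[i][a] - (-payoffs[i][j])

        for a in range(n):
            strat_sum_row[a] += srow[a]

    total = sum(strat_sum_row)
    return [s / total for s in strat_sum_row]

if __name__ == "__main__":
    # Rock-paper-scissors: the average strategy approaches uniform play.
    rps = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
    print(run_regret_matching(rps))

CFR applies this style of update at every information set of an extensive-form game using counterfactual values; the paper's contribution concerns how the resulting regret guarantees extend to a class of games and abstractions with imperfect recall.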

BibTeX

@InProceedings(12icml-ir-w-tr,
  Title = "No-Regret Learning in Extensive-Form Games with Imperfect Recall",
  Author = "Marc Lanctot and Richard Gibson and Neil Burch and Michael Bowling",
  Booktitle = "Proceedings of the Twenty-Ninth International Conference on Machine Learning (ICML)",
  Pages = "65--72",
  Note = "A longer version is available as a University of Alberta Technical Report, TR12-04.",
  Year = "2012",
  AcceptRate = "27\%",
  AcceptNumbers = "243 of 890"
)