Incremental Least-Squares Temporal Difference Learning

Alborz Geramifard, Michael Bowling, and Richard S. Sutton. Incremental Least-Squares Temporal Difference Learning. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI), pp. 356–361, 2006.

Download

[PDF] 

Abstract

Approximate policy evaluation with linear function approximation is a commonly arising problem in reinforcement learning, usually solved using temporal difference (TD) algorithms. In this paper we introduce a new variant of linear TD learning, called incremental least-squares TD learning, or iLSTD. This method is more data efficient than conventional TD algorithms such as TD(0) and is more computationally efficient than non-incremental least-squares TD methods such as LSTD (Bradtke & Barto 1996; Boyan 1999). In particular, we show that the per-time-step complexities of iLSTD and TD(0) are O(n), where n is the number of features, whereas that of LSTD is O(n²). This difference can be decisive in modern applications of reinforcement learning where the use of a large number of features has proven to be an effective solution strategy. We present empirical comparisons, using the test problem introduced by Boyan (1999), in which iLSTD converges faster than TD(0) and almost as fast as LSTD.
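
To make the complexity claim concrete, below is a minimal sketch of the incremental idea: accumulate the LSTD statistics A and b, maintain the residual vector mu = b - A*theta incrementally, and update only a few dimensions of theta per time step so the per-step cost stays O(n) rather than the O(n²) of a full LSTD solve. This is an illustrative reconstruction under the paper's general description, not the authors' code; the function name, step size, and greedy dimension selection here are assumptions for the sketch.

    import numpy as np

    def ilstd_sketch(transitions, n, alpha=0.01, gamma=1.0, m=1):
        """Illustrative iLSTD-style policy evaluation (not the paper's code).

        transitions: iterable of (phi, r, phi_next) with length-n feature vectors.
        m: number of theta dimensions updated per time step (small constant).
        """
        A = np.zeros((n, n))   # accumulates sum of phi (phi - gamma phi')^T
        b = np.zeros(n)        # accumulates sum of r * phi
        theta = np.zeros(n)    # linear value-function weights
        mu = np.zeros(n)       # mu = b - A @ theta, kept up to date incrementally

        for phi, r, phi_next in transitions:
            d = phi - gamma * phi_next
            A += np.outer(phi, d)          # cheap when phi is sparse
            b += r * phi
            mu += (r - d @ theta) * phi    # TD-error-weighted correction to mu

            for _ in range(m):             # descend along the largest residuals
                j = int(np.argmax(np.abs(mu)))
                step = alpha * mu[j]
                theta[j] += step
                mu -= step * A[:, j]       # O(n) fix-up for the single-dimension change
        return theta

Note that each inner update touches one column of A and the full mu vector, which is the O(n) per-step work the abstract refers to; solving A*theta = b outright, as non-incremental LSTD does, would cost O(n²) or more per step.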

BibTeX

@InProceedings(06aaai-ilstd,
  title = "Incremental Least-Squares Temporal Difference Learning",
  author = "Alborz Geramifard and Michael Bowling and Richard S. Sutton",
  booktitle = "Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI)",
  year = "2006",
  pages = "356--361",
  AcceptRate = "30\%",
  AcceptNumbers = "236 of 774"
)
