iLSTD: Eligibility Traces and Convergence Analysis

Alborz Geramifard, Michael Bowling, Martin Zinkevich, and Richard S. Sutton. iLSTD: Eligibility Traces and Convergence Analysis. In Advances in Neural Information Processing Systems 19 (NIPS), pp. 441–448, 2007.

Download

[PDF] 

Abstract

We present new theoretical and empirical results with the iLSTD algorithm for policy evaluation in reinforcement learning with linear function approximation. iLSTD is an incremental method for achieving results similar to LSTD, the data-efficient, least-squares version of temporal difference learning, without incurring the full cost of the LSTD computation. LSTD is O(n^2), where n is the number of parameters in the linear function approximator, while iLSTD is O(n). In this paper, we generalize the previous iLSTD algorithm and present three new results: (1) the first convergence proof for an iLSTD algorithm; (2) an extension to incorporate eligibility traces without changing the asymptotic computational complexity; and (3) the first empirical results with an iLSTD algorithm for a problem (mountain car) with feature vectors large enough (n=10,000) to show substantial computational advantages over LSTD.
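To make the O(n)-versus-O(n^2) distinction concrete, here is a minimal Python sketch of an iLSTD-style update, assuming the common formulation in which the algorithm keeps the residual vector mu = b - A*theta up to date incrementally and descends along one coordinate per step. This is not the authors' code; the class name, the greedy single-dimension schedule, and the fixed step size are illustrative assumptions, and the stated per-step O(n) cost additionally assumes sparse feature vectors (this dense NumPy version is only meant to make the bookkeeping explicit).

```python
import numpy as np

class ILSTDSketch:
    """Illustrative iLSTD-style policy-evaluation update (assumed formulation)."""

    def __init__(self, n, alpha=0.01, gamma=0.99):
        self.A = np.zeros((n, n))   # running sum of phi (phi - gamma * phi_next)^T
        self.mu = np.zeros(n)       # mu = b - A @ theta, maintained incrementally
        self.theta = np.zeros(n)    # parameters of the linear value function
        self.alpha = alpha
        self.gamma = gamma

    def step(self, phi, reward, phi_next):
        # Rank-one update of A and incremental update of mu.
        # Since dA = outer(phi, d), we have dA @ theta = phi * (d @ theta),
        # so mu can be updated without forming a matrix-vector product.
        d = phi - self.gamma * phi_next
        self.A += np.outer(phi, d)
        self.mu += reward * phi - phi * (d @ self.theta)

        # Greedy single-coordinate descent: move theta along the coordinate
        # of mu with the largest magnitude, then restore mu = b - A @ theta.
        j = int(np.argmax(np.abs(self.mu)))
        delta = self.alpha * self.mu[j]
        self.theta[j] += delta
        self.mu -= delta * self.A[:, j]   # O(n) column update
```

The dominant dense operation per transition is the column update mu -= delta * A[:, j], which is O(n); a full LSTD step, by contrast, solves (or incrementally inverts) the full n-by-n system, which is what the O(n^2) figure in the abstract refers to.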

Additional Information

Proofs of the paper's theorems are given in University of Alberta Technical Report TR06-25 by Martin Zinkevich.

BibTeX

@InProceedings(06nips-ilstd,
  Title = "{iLSTD}: Eligibility Traces and Convergence Analysis",
  Author = "Alborz Geramifard and Michael Bowling and Martin Zinkevich and Richard S. Sutton",
  Booktitle = "Advances in Neural Information Processing Systems 19 (NIPS)",
  Year = "2007",
  Pages = "441--448",
  AcceptRate = "24\%",
  AcceptNumbers = "204 of 833"
)
