Bayesian Sparse Sampling for On-line Reward Optimization

Tao Wang, Daniel Lizotte, Michael Bowling, and Dale Schuurmans. Bayesian Sparse Sampling for On-line Reward Optimization. In Proceedings of the Twenty-Second International Conference on Machine Learning (ICML), pp. 961–968, 2005.

Download

[PDF] 

Abstract

We present an efficient “sparse sampling” technique for approximating Bayes-optimal decision making in reinforcement learning, addressing the well-known exploration versus exploitation tradeoff. Our approach combines sparse sampling with Bayesian exploration to achieve improved decision making while controlling computational cost. The idea is to grow a sparse lookahead tree intelligently, by exploiting information in a Bayesian posterior, rather than enumerating action branches (standard sparse sampling) or compensating myopically (value of perfect information). The outcome is a flexible, practical technique for improving action selection in simple reinforcement learning scenarios.
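To illustrate the idea sketched in the abstract, here is a minimal, hypothetical Python sketch of posterior-guided sparse lookahead for a Bernoulli bandit with Beta posteriors. It is not the paper's algorithm: the class names, the Thompson-style rule for choosing which branches to expand, and the expansion budget `n_expand` are all illustrative assumptions. The point is only the contrast the abstract draws: interior nodes expand a few posterior-favored actions instead of enumerating every action branch.

```python
import random

class BetaArm:
    """Beta posterior over a Bernoulli arm's success probability."""
    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b
    def mean(self):
        return self.a / (self.a + self.b)
    def sample(self):
        return random.betavariate(self.a, self.b)
    def update(self, reward):
        # conjugate update: a success bumps a, a failure bumps b
        return BetaArm(self.a + reward, self.b + 1.0 - reward)

def lookahead(arms, depth, gamma=0.95, n_expand=2):
    """Sparse lookahead value: expand only posterior-favored branches
    (a Thompson-style heuristic, not the paper's exact selection rule)."""
    if depth == 0:
        return max(arm.mean() for arm in arms)
    # Sample each arm's posterior a few times and expand only the winners,
    # rather than enumerating every action at this node.
    chosen = {max(range(len(arms)), key=lambda i: arms[i].sample())
              for _ in range(n_expand)}
    best = 0.0
    for i in chosen:
        p = arms[i].mean()
        succ = arms[:i] + [arms[i].update(1.0)] + arms[i + 1:]
        fail = arms[:i] + [arms[i].update(0.0)] + arms[i + 1:]
        q = p * (1.0 + gamma * lookahead(succ, depth - 1, gamma, n_expand)) \
            + (1.0 - p) * gamma * lookahead(fail, depth - 1, gamma, n_expand)
        best = max(best, q)
    return best

def select_action(arms, depth=2, gamma=0.95):
    """At the root, score every action by its sparse-lookahead Q-value."""
    def q(i):
        p = arms[i].mean()
        succ = arms[:i] + [arms[i].update(1.0)] + arms[i + 1:]
        fail = arms[:i] + [arms[i].update(0.0)] + arms[i + 1:]
        return p * (1.0 + gamma * lookahead(succ, depth, gamma)) \
            + (1.0 - p) * gamma * lookahead(fail, depth, gamma)
    return max(range(len(arms)), key=q)
```

For example, `select_action([BetaArm(5, 1), BetaArm(1, 5)])` favors the first arm, whose posterior mean is much higher; only the branch-expansion step, not the root choice, is randomized here.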

BibTeX

@InProceedings(05icml-bayes,
  Title = "Bayesian Sparse Sampling for On-line Reward Optimization",
  Author = "Tao Wang and Daniel Lizotte and Michael Bowling and Dale Schuurmans",
  Booktitle = "Proceedings of the Twenty-Second International Conference on Machine Learning (ICML)",
  Pages = "961--968",
  Year = "2005",
  AcceptRate = "27\%",
  AcceptNumbers = "134 of 492"
)
