Tractable Objectives for Robust Policy Optimization

Katherine Chen and Michael Bowling. Tractable Objectives for Robust Policy Optimization. In Advances in Neural Information Processing Systems 25 (NIPS), pp. 2078–2086, 2012.


[PDF] [Supplemental Material] 


Robust policy optimization acknowledges that risk aversion plays a vital role in real-world decision-making. When faced with uncertainty about the effects of actions, the policy that maximizes expected utility over the unknown parameters of the system may also carry a risk of intolerably poor performance. One might prefer to accept lower utility in expectation in order to avoid, or reduce the likelihood of, unacceptable levels of utility under harmful parameter realizations. In this paper, we take a Bayesian approach to parameter uncertainty but, unlike other methods, avoid making any distributional assumptions about the form of this uncertainty. Instead, we focus on identifying optimization objectives for which solutions can be efficiently approximated. We introduce percentile measures: a very general class of objectives for robust policy optimization which encompasses most existing approaches, including ones known to be intractable. We then introduce a broad subclass of this family for which robust policies can be approximated efficiently. Finally, we frame these objectives in the context of a two-player, zero-sum, extensive-form game and employ a no-regret algorithm to approximate an optimal policy, with computation only polynomial in the number of states and actions of the MDP.
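To give a rough sense of the percentile idea described above (this is an illustrative sketch, not the paper's algorithm; the function names and the toy Gaussian utility model are invented for illustration): instead of scoring a policy by its mean utility over sampled parameter realizations, one scores it by the utility at a chosen percentile, so that rare but harmful realizations dominate the objective.

```python
import random

def percentile_value(policy_value, param_samples, q):
    """Utility of a policy at the q-th percentile over sampled parameter
    realizations -- a simple percentile-style robust objective (sketch)."""
    values = sorted(policy_value(p) for p in param_samples)
    # Clamp the index so q = 0.0 and q = 1.0 stay in range.
    k = max(0, min(len(values) - 1, int(q * len(values))))
    return values[k]

# Toy example: a fixed policy whose utility under parameter realization p
# is just p, with p drawn from an assumed Gaussian posterior.
random.seed(0)
samples = [random.gauss(1.0, 0.5) for _ in range(1000)]
mean_utility = sum(samples) / len(samples)
robust_utility = percentile_value(lambda p: p, samples, 0.05)
```

Here `robust_utility` (the 5th-percentile utility) is lower than `mean_utility`, reflecting the risk-averse trade-off the abstract describes: a policy chosen under a percentile objective may sacrifice expected utility to limit exposure to harmful parameter realizations.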


@inproceedings{ChenBowling2012Tractable,
  Title = "Tractable Objectives for Robust Policy Optimization",
  Author = "Katherine Chen and Michael Bowling",
  Booktitle = "Advances in Neural Information Processing Systems 25 (NIPS)",
  Pages = "2078--2086",
  Year = "2012",
  AcceptRate = "25\%",
  AcceptNumbers = "370 of 1467"
}
