Curriculum Vitae
Last updated March 20, 2017:
[PDF]
Highlights
Last updated March 25, 2013
Publications
- 85 fully refereed journal articles and conference papers.
- 2x JAIR, 1x AIJ, 1x JMLR, 13x AAAI, 11x NIPS, 11x ICML, 8x IJCAI, 9x
AAMAS, 2x UAI, 2x ICRA
- 2 paper prizes, 1 dissertation award
AI Competitions
- Led the team that, for the first time ever, defeated top professional
poker players in a man versus machine poker competition (2008).
This and the previous match (2007) were featured on BBC Radio, NPR,
The New York Times, Christian Science Monitor, The Guardian, Times
Online, EE Times, The Globe and Mail, Science News, and a variety of
local and Canadian television programs.
- Led the team that won 20 of 30 events in the first seven years
(2006-2012) of the AAAI Computer Poker Competitions.
- Led the team (CMU) that won the world championship in the RoboCup
Small-Size League (1998).
Awards
- Head instructor in the teaching team that won the University of Alberta's Teaching Unit Award (2009) and an Honourable Mention in the Canada-wide Alan Blizzard Award for collaborative teaching (2011).
- Won Department Research Award (2010).
- Video submission nominated for the Best Education Video in the AI
Video Awards Competition (2008).
- PI in AICML (funding of $1.8 million per year on average), which won
the ASTech (Alberta Science and Technology) award in the
"Outstanding Leadership in Alberta Technology" category in 2006.
- Finalist for ASTech (Alberta Science and Technology) award in
"Leaders of Technology" category (2005).
- Co-winner of CMU School of Computer Science's outstanding
dissertation award (2003).
Significant Research Contributions
- Computational Game Theory and Poker
- Algorithms for computing game theoretic solutions to extremely
large extensive games.
Over two years, increased the state of the art by four
orders of magnitude (from 10^8 to 10^12 game states).
- Range of Skill: AAAI, 2007. Solves games with 10^10 states.
- Counterfactual Regret: NIPS, 2008. Solves games with 10^12
states and only requires memory and time linear in the number of
information sets (typically the square root of the number of
game states). This was the basis for our success in the man
versus machine competition in 2008.
- Monte Carlo CFR: NIPS, 2009. Extends our poker-specific
optimizations to form a general family of extensive game solvers
that can efficiently solve a wide range of zero-sum imperfect
information games.
- Algorithms for building and exploiting opponent models.
- Unbiased estimates of agent performance from very small sample
sizes.
This work has been key for our progress toward
defeating top human poker players, and both breakthroughs below
were critical parts of our man versus machine victory in 2008.
- Unbiased, low variance estimates of skill (DIVAT): AAAI, 2006.
- Unbiased, low variance off-policy estimates of skill: ICML, 2008.
- Learning custom variance reducing estimators from data
(MIVAT): IJCAI, 2009.
- Subjective Mapping
- Fundamental Reinforcement Learning
- Incremental techniques for making data-efficient least-squares
techniques computationally tractable:
AAAI, 2006; NIPS, 2007.
- Techniques for combining models with linear function
approximation: AAMAS, 2008; UAI, 2008.
- Dual analysis of RL with function approximation: NIPS, 2008;
ADPRL, 2007.
- Approximate planning in POMDPs using quadratic programming: AAAI,
2006.
- Older Research (Before 2004)
- Cooperating and Competing Teams of Robots
Between 1998 and 2003, I was a member (and often leader) of the
CMUnited and CMDragons robot soccer teams. The teams were world
champions in the RoboCup Small-Size League in 1998, won the
American Open in 2003, and were consistently among the top
teams throughout these years.
These teams have made advancements in motion control and
navigation (CIRA, 1999), object tracking and prediction (ICRA,
2002), adapting team strategy (IJCAI, 2003), coordination in
impromptu teams (AAAI, 2005), as well as integration of these
components (Advanced Robotics; ICRA, 2003; J of Sys & Control
Eng., 2005). Many of these techniques have become the league
standard and remain a central part of the current CMDragons team
(world champions in 2006 and 2007) long after I left CMU.
- Multiagent Learning and Planning
I developed WoLF ("Win or Learn Fast"), a technique for
reinforcement learning in multiagent (possibly adversarial)
environments: AIJ, 2002. Developments include both theoretical
work (ICML, 2001; NIPS, 2004) and practical work (IJCAI, 2001;
IJCAI, 2003), including a demonstration of adversarial learning on
real robots.
This combined body of work has been cited over 600 times
according to Google Scholar.
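The counterfactual regret minimization work above builds on regret
matching as the per-decision update rule. As an illustration only (a
minimal self-contained sketch, not the published algorithm or any of
the authors' code), here is regret matching in self-play on
rock-paper-scissors; the players' average strategies approach the
uniform Nash equilibrium:

```python
# Sketch of regret matching in self-play on rock-paper-scissors.
# This is a toy illustration of the update rule underlying CFR,
# not the extensive-game algorithm itself.
import random

ACTIONS = 3  # rock, paper, scissors
# PAYOFF[a][b]: payoff to player 1 when player 1 plays a, player 2 plays b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    """Regret matching: mix over actions in proportion to positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no positive regret: play uniformly

def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strategy_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strats = [strategy_from_regrets(r) for r in regrets]
        acts = [rng.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p in range(2):
            opp = acts[1 - p]
            sign = 1 if p == 0 else -1  # zero-sum: player 2 gets -PAYOFF
            realized = sign * PAYOFF[acts[0]][acts[1]]
            for a in range(ACTIONS):
                # regret of not having played a against the sampled opponent action
                alt = sign * (PAYOFF[a][opp] if p == 0 else PAYOFF[opp][a])
                regrets[p][a] += alt - realized
                strategy_sum[p][a] += strats[p][a]
    total = sum(strategy_sum[0])
    return [s / total for s in strategy_sum[0]]  # player 1's average strategy
```

As with CFR, it is the average strategy over iterations, not the
current one, that converges toward equilibrium.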
Academic Service
- Conference organization (chair or co-chair).
- AI video awards (IJCAI, 2009)
- Intelligent systems demonstrations (AAAI, 2008)
- Educational track of the AI video awards (AAAI, 2008)
- Volunteer (ICML, 2006)
- Exhibitions (IROS, 2005).
- Associate Editor for JAIR.
- Member of the Machine Learning journal's editorial board.
- Senior program committee member: AAAI, NIPS, ICML, AAMAS.
- Program committee member: AAAI, IJCAI, NIPS, ICML, AAMAS, RSS, ISAIM.
- Extensively participated in RoboCup organization from 2000 to 2004,
including serving as the chair of the Small-Size League technical
committee, chairing several competitions, and helping to organize a
special interest group on multiagent learning.