In contrast to Koller and Pfeffer's aim, Loki is not an optimal player. Our goal is to create a maximal player, one that uses opponent modeling to exploit patterns in its opponents' play, with the intention of winning the most money it can in every situation. Furthermore, since it does not seem feasible to compute an optimal strategy for real multi-player poker, any program that plays real-world poker in the near future is unlikely to be a game-theoretic optimal player.
Nevertheless, Koller and Pfeffer have suggested that an alternative approach to dealing with less-than-perfect players is to learn the types of mistakes a player is prone to make. This approach can be used when there is long-term interaction with the same player. The authors point out that the ability of the Gala language to capture regularities in the game may be particularly useful in this context, since the high-level description of a game state can provide features for the learning algorithm. One can view such a learning algorithm as a potential opponent modeling component for a program based on the Gala system.
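The idea of learning an opponent's tendencies from high-level game-state features can be illustrated with a minimal sketch. This is not the Gala system's actual machinery; the `OpponentModel` class, the feature tuples, and the action names below are all hypothetical, chosen only to show how observed frequencies conditioned on abstract features could feed an opponent modeling component:

```python
from collections import defaultdict

class OpponentModel:
    """Hypothetical sketch: track one opponent's action frequencies
    conditioned on coarse, high-level game-state features, in the
    spirit of learning the mistakes a player is prone to make over
    long-term interaction."""

    def __init__(self):
        # counts[feature][action] -> number of times the action was
        # observed in situations described by that feature tuple
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, feature, action):
        """Record one observed action in a given abstract situation."""
        self.counts[feature][action] += 1

    def action_probability(self, feature, action):
        """Empirical probability of the action in that situation,
        or None if the situation has never been observed."""
        total = sum(self.counts[feature].values())
        if total == 0:
            return None
        return self.counts[feature][action] / total

# Example: the opponent raises twice and calls once when facing a bet
# on a weak board (feature names are illustrative, not from Gala).
model = OpponentModel()
model.observe(("facing_bet", "weak_board"), "raise")
model.observe(("facing_bet", "weak_board"), "raise")
model.observe(("facing_bet", "weak_board"), "call")
print(model.action_probability(("facing_bet", "weak_board"), "raise"))  # 2/3
```

A maximal player could consult such estimates to bias its own decisions toward the opponent's revealed weaknesses, which is exactly the kind of exploitation an optimal (equilibrium) strategy forgoes.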