The First Man-Machine Poker Championship

Computers master the game board

They reign supreme in checkers and chess. Poker may be next. What other areas will artificial intelligence soon dominate?

Chris Gaylord, Christian Science Monitor Staff Writer

Polaris, a rising star in the poker world, has professional card players fretting. The 16-year-old has a perfect poker face, can shift strategies in an instant, and never gets tired. Well, you do have to recharge the laptop batteries from time to time.

This crafty computer program is one in a long line of software designed to compete with humans. And for many games, machines now surpass even the best human opponents. They've dominated chess. They've cracked checkers. And they're homing in on poker.

On July 24, Polaris lost a close match against two top poker players. After two days and 4,000 hands of limit Texas hold 'em, the computer was behind by only about 30 bets.

"It was a tough opponent," says Ali Eslami, one of the two poker pros who beat Polaris. "To tell you the truth, if I had the chance to face it again right now for money, I wouldn't. There are easier humans out there. I'll stick with them."

The encroachment of lifeless data crunchers into our favorite pastimes marks more than just a countdown until computers are better at virtually everything. Each tabletop defeat is also a milestone in the advancement of artificial intelligence.

Sure, checkers and chess players have little hope of even achieving a draw against the best computers. But the more crushing the loss for humans, the more useful the technology is in the real world.

"Games are one of the best ways to test computers and our ability to program them," says Murray Campbell, who worked on the IBM team that created Deep Blue, the first computer to defeat a world chess champion in a six-round match. "As we get better at this kind of artificial intelligence, we'll find more and more applications in other fields -- more serious fields: the military, medicine, business."

This knack for repurposing game technology is one Mr. Campbell knows well. His team programmed Deep Blue to master chess through brute-force processing. The now-retired supercomputer could consider 200 million chess positions a second, mapping out many moves into the future to find the best path to checkmate. (Most human chess players can only handle two or three moves a second, Campbell says. We rely on intuition and other nonquantitative skills.)
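Deep Blue's actual search was far more elaborate, but the core brute-force idea -- look several moves ahead, score the outcomes, and pick the move that survives the opponent's best replies -- can be sketched in a few lines of Python. The toy take-away game below (remove one to three stones; whoever takes the last stone wins) is invented purely for illustration and is not IBM's code.

```python
# A toy illustration of brute-force game-tree search, the look-ahead idea
# behind chess engines, scaled down to a trivial take-away game.
# Not Deep Blue's code; the game and numbers are invented for illustration.

def legal_moves(stones):
    return [n for n in (1, 2, 3) if n <= stones]

def minimax(stones, maximizing):
    """Score a position: +1 if the maximizing player can force a win, -1 if not."""
    if stones == 0:
        # No stones left: the player who just moved took the last one and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - m, not maximizing) for m in legal_moves(stones)]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    """Pick the move with the best guaranteed outcome against perfect play."""
    return max(legal_moves(stones), key=lambda m: minimax(stones - m, False))

print(best_move(10))  # take 2, leaving 8 -- a losing position for the opponent
```

What made the same look-ahead work at chess scale was raw speed -- those 200 million positions a second -- plus a good way to score positions too deep to search all the way to checkmate.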

Shortly after Deep Blue defeated world champ Garry Kasparov in 1997, IBM integrated the research into corporate and government hardware, Campbell says. IBM borrowed from both the name and architecture of Deep Blue to create the world's most powerful publicly known computer: the Blue Gene/L. Located at the Department of Energy's national laboratory in Livermore, Calif., this powerhouse machine twirls through 280 trillion calculations a second to simulate complex biomolecular processes such as protein folding. In June, IBM beat its own record with a prototype Blue Gene machine capable of 3 quadrillion calculations per second.

Chalk that up as a win for science achieved through a loss at the chessboard.

Superhuman and perfect AIs

The Deep Blue style of simply throwing processing power at a problem has led to breakthroughs in most of America's popular board games. Often programmers don't even need supercomputers to claim victories. Normal laptops will do just fine.

  • The same year Deep Blue battled Mr. Kasparov, a program running on a regular PC took down the world Othello champion in a six-game sweep.
  • A Scrabble world champ fell in January to a nasty program written by Eyal Amir and Mark Richards at the University of Illinois at Urbana-Champaign. The code anticipates what letter tiles its opponent holds and finds ways to block possible high-scoring moves.
  • Researchers at the University of Alberta announced last month that they had broken down the game of checkers into every possible position -- all 500 quintillion of them (that's 5 followed by 20 zeros). The team had already written a checkers program that was superhuman; now their program is perfect. The computer cannot be defeated. The best an opponent can do is tie.
  • Simpler games such as Connect Four and Tic-Tac-Toe were "solved" years ago. (A toy solver in that same exhaustive spirit appears after this list.)
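At checkers scale, "solving" meant working through 500 quintillion positions; at Tic-Tac-Toe scale, the same exhaustive idea fits in a screenful of Python. The sketch below is a toy illustration of that approach, not the Alberta team's software.

```python
# A toy "solver": enumerate every reachable position and compute its value
# under perfect play. An illustration of the idea behind solving checkers,
# not the University of Alberta's code.
from functools import lru_cache

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
        (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Value of `board` with `player` to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0                           # full board, no winner: a draw
    values = []
    for i, cell in enumerate(board):
        if cell == ".":
            nxt = board[:i] + player + board[i + 1:]
            values.append(solve(nxt, "O" if player == "X" else "X"))
    return max(values) if player == "X" else min(values)

print(solve("." * 9, "X"))  # 0: with perfect play, Tic-Tac-Toe is a tie
```

Running it confirms what every schoolchild eventually figures out: with perfect play on both sides, Tic-Tac-Toe is a draw.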

Except for Scrabble, these examples are what artificial intelligence researchers call games of "perfect information."

"That means games where everything is right there in front of you," says Jonathan Schaeffer, chairman of the computer science department at the University of Alberta. "There are no hidden moves, no secret information."

That's why Mr. Schaeffer is happy to put checkers behind him (he led the 18-year effort to crack the game) and focus on poker. His new project, Polaris, poses a much more interesting challenge, he says.

After all, how do you teach a computer to predict an opponent's hand?
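Polaris's real answer draws on years of game-theory research; the sketch below only illustrates the simplest version of the idea -- keep a probability over the hands an opponent might hold and sharpen it each time they act. The hand buckets, the prior, and the raise-probability model are all invented for illustration.

```python
# A toy illustration of opponent modeling under imperfect information:
# track a belief over the opponent's likely hand strength and update it
# with Bayes' rule after each action. Not Polaris's algorithm; all the
# categories and numbers here are invented.

# Prior belief over (hypothetical) hand-strength buckets.
belief = {"weak": 0.50, "medium": 0.35, "strong": 0.15}

# Invented model: how likely each bucket is to raise rather than call.
raise_prob = {"weak": 0.10, "medium": 0.30, "strong": 0.80}

def update(belief, action):
    """Bayes update of the belief after seeing the opponent call or raise."""
    likelihood = {
        hand: (raise_prob[hand] if action == "raise" else 1 - raise_prob[hand])
        for hand in belief
    }
    unnormalized = {hand: belief[hand] * likelihood[hand] for hand in belief}
    total = sum(unnormalized.values())
    return {hand: p / total for hand, p in unnormalized.items()}

belief = update(belief, "raise")
print(belief)  # after a raise, the "strong" bucket's probability jumps
```

A real poker program tracks vastly more hand possibilities and learns its model of the opponent as it plays, but the flavor of the update is similar.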

Man versus machine

For the July tournament against Mr. Eslami and poker celebrity Phil Laak, the University of Alberta team developed three different computer personalities for Polaris. The first script played it safe, calculating and betting in the hopes of coming out even. The next was aggressive, putting pressure on its opponents. The team programmed the last personality to learn as it went, reading opponents and acting accordingly.

Polaris faced off against the pros at the same time but in separate games. To minimize the luck of the draw, Eslami and Mr. Laak received mirrored hands -- so if one player received a great hand it meant that in the other match Polaris was dealt that great hand. At the end of each of the four rounds, the humans' scores were combined.
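The arithmetic behind the mirrored deals is simple: whatever a lucky run of cards is worth in one room, the other side holds those same cards in the other room, so summing the two human results leaves mostly skill. A toy tally, with invented numbers:

```python
def duplicate_score(human_results, mirror_results):
    """Combined human score over mirrored deals (results in small bets).

    human_results[i] is what one human won on deal i in one room;
    mirror_results[i] is what the other human won on the same deal,
    played with the hole cards swapped, in the other room.
    """
    return sum(a + b for a, b in zip(human_results, mirror_results))

# Invented numbers: a lucky deal is worth +40 to whoever holds it, but
# Polaris squeezes +48 out of the same cards in the mirror room.
print(duplicate_score([+40, -3, +5], [-48, +2, -4]))  # -8 overall
```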

The pair came out a little behind against the careful, rational program, but the score was close enough that both sides agreed to call it a tie. The aggressive code crushed the humans in the second match. Polaris's learning program failed, handing the humans a solid third-round win. ("It was our fault it didn't work," admits Schaeffer.) For the final round, Eslami and Laak wanted a rematch against the play-it-safe bot. Better prepared, the professionals defeated Polaris.

"It was a really strong, savvy opponent and that has me very excited," says Laak in a phone interview after the tournament. "Life is a myriad of puzzles and this is the first step in some thousands ahead where computers will get better and better."

In six months, Polaris will be much stronger, Schaeffer says. He hopes to fix the learning code and possibly throw in a coaching mechanism, where Polaris can switch between rational and aggressive strategies.

Strengths and weaknesses

Some games are still too complicated for computers to master. The Japanese game of Go stands as the usual example. With a 19-by-19 grid, Go has an astronomical number of possible positions -- more than 1 followed by 100 zeros. Such a massive scale means computers don't know where to focus.
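A back-of-envelope bound makes the scale concrete: each of the 361 intersections on a 19-by-19 board can be empty, black, or white, so board configurations are capped at 3 to the 361st power (legal positions are fewer, but still astronomically many).

```python
# Rough upper bound on Go board configurations: 3 states per point, 361 points.
upper_bound = 3 ** 361
print(len(str(upper_bound)))  # 173 digits -- roughly a 1 followed by 172 zeros
```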

"They've done eye tracking on Go experts," says Susan Epstein, a computer science professor at Hunter College in New York City. "The studies found that while there are hundred of good moves in front of them, the best [human] players only see three or four."

So how do you teach computers to "see" what humans see? For one, stop relying on programs that simply map out a single game, suggests Michael Genesereth, director of the Logic Group at Stanford University in Stanford, Calif.

Yes, Deep Blue dominates chess. But the supercomputer is a one-trick pony. Without plenty of prep time, it'd be helpless in a game of Othello, Mr. Genesereth says.

Instead, he researches general gaming, where machines learn patterns and principles that work in a variety of puzzles. At the same conference where Polaris battled human opponents, Genesereth held his annual machine-on-machine championship. The competition pitted general-gaming programs against one another in a series of board game mash-ups. May the best code win.

The Air Force Research Laboratory in Rome, N.Y., is even researching time-critical reasoning through asynchronous chess, where two competing computers don't have to wait their turns. They can move any piece at any time they want.

These code-versus-code styles of play are harder to program, but also much easier to translate into real-life situations, Genesereth says.

The Logic Group works with firms such as SAP, the world's largest business software designer, to create versatile programs that are ready to shift gears with any new change in interstate law or corporate policy.

"It's impractical to go back to your programmers and say, 'OK, well, here is another change and another one. Start rewriting all the programs,' " Genesereth says. "It's better to change the rules, and let the program figure out how to maximize efficiency under the new conditions."