The Global Poker Index announced some small changes to its formula recently. A co-invention of Annie Duke, the GPI has a decent chance to go down in history as the only thing she ever did that didn’t generate some kind of controversy. Created along with the Epic Poker League to give it a basis for awarding membership cards, the GPI factored in more variables than any previous formula for determining the world’s best tournament poker players. It has outlived the ill-fated EPL by a couple of years already, providing a more detailed picture of poker excellence than ever before in a world where an already tough call – comparing players’ short-term results in a game that rewards long-term patience – has only become tougher as the years have gone by. And yet it’s still not enough. That’s not the GPI’s fault, though. The fact is that there has never really been a reliable way to rank poker players.
Poker is such a nuanced game, particularly at high levels, that it made sense for the acclaim of one’s peers to be the original standard for declaring who was the game’s true king when the World Series of Poker first began. Johnny Moss was voted the first WSOP champion in 1970, the only year the winner has ever been determined by anything other than a freezeout tournament. (The story goes that he won only on the second ballot, after every player had voted for himself on the first and then been explicitly instructed to choose somebody else.) In the long run it was better to have a winner by competition than to have Moss win every year because of the respect the other players had for the grand old man. But by setting aside reputation – poker’s only commodity outside of the cash on the table, however ephemeral it might be – this also created a situation where the only thing anyone really knew at the end of the game was who had run good at the right time of year.
Such ambiguity was great for the winner, who could argue that the results showed who was best, and great for the game’s PR, especially when a rank amateur like Hal Fowler could manage a win. What it wasn’t so great for was determining, in anything approaching an objective manner, whether the best player really won. In the days when tournament poker was still limited mostly to the WSOP and a handful of other events with top-heavy payouts spread throughout the year, a player’s total number of tournament wins and position on the all-time money list were enough to distinguish the best from the rest. The explosion in both attendance at the WSOP and the number of tournaments on the annual calendar from the late 1990s into the 2000s spawned almost as many ranking systems as there were poker magazines by the boom’s peak – and with their idiosyncratic formulas they never seemed to agree on who was the best player of the year. No doubt it was better business for old-school Card Player and upstart Bluff to have different players on their year-end covers, but it did nothing to settle the question.
Nowadays we have the GPI and a full array of different lists, giving our quest to measure poker accomplishment objectively a better semblance of actual utility than ever before. Whether we’re adjusting for inflation, excluding high-roller buy-ins, counting up cashes in tournaments of all sizes, or weighting results by how recent they are, we’ve certainly reached a new level of sophistication. But even with all these new analytics, we still face two major problems when trying to compare poker players.
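To make those adjustments concrete, here’s a minimal sketch in Python of what a recency-weighted, inflation-adjusted ranking score might look like. Everything in it – the high-roller cap, the one-year half-life, the placeholder deflator – is an invented illustration, not the GPI’s actual formula.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Result:
    prize: float      # nominal payout, in dollars
    buy_in: float     # tournament buy-in
    finished: date    # date of the result

# Invented knobs for illustration – not the GPI's parameters.
HIGH_ROLLER_CAP = 25_000   # ignore buy-ins above this
HALF_LIFE_DAYS = 365       # a result's weight halves each year

def deflator(d: date) -> float:
    """Placeholder inflation adjustment; a real version would
    look up the CPI for the year of the result."""
    return 1.0

def score(results: list[Result], today: date) -> float:
    """Sum of inflation-adjusted prizes, discounted by age."""
    total = 0.0
    for r in results:
        if r.buy_in > HIGH_ROLLER_CAP:
            continue  # exclude high-roller events entirely
        age = (today - r.finished).days
        recency = 0.5 ** (age / HALF_LIFE_DAYS)
        total += (r.prize / deflator(r.finished)) * recency
    return total
```

The half-life discount is one way of privileging current form over a decade-old bracelet; a points table keyed to finishing position, as most of the magazine formulas used, would slot into the same structure just as easily.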
The first is that we don’t yet have a way to account for how much of each player’s action actually belongs to the player himself. There should be no easy comparison between somebody who wins a tournament on his own buy-in, someone who wins after earning a buy-in through a satellite, someone who’s one of many horses in a single backer’s stable, and a sponsored pro. And yet we lump them all together out of necessity, because there’s no way to pry this information out of players short of 24/7 surveillance. Unless players organize and create a professional association of some sort that makes disclosure of backing agreements a prerequisite to membership, this will remain out of reach. (In other words, don’t hold your breath on this one.)
More importantly, all tournament results are the product of the short run. Once you realize how small the differences in skill are among the players in most major tournaments, it’s clear just how little running good for a few days has to do with being better than your opponents. Even a few hundred tournaments – the buy-ins for which could cost millions before factoring in travel and other expenses – isn’t a sample large enough to produce results worth trusting. Knowing who won is very rarely the same thing as knowing who played best. Perhaps someday we’ll have the ability to track enough statistics to make objective statements about who plays well even when they don’t win. Until such a system exists, though, the short run is all we have to go on when we talk about who’s really the best player in the game.
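A toy simulation makes the point starkly. Suppose – and these numbers are pure invention for illustration – that the most skilled player in a 500-entrant field is twice as likely as an average player to win any given event. Even over a 300-tournament stretch, that player rarely finishes with the most wins:

```python
import random

FIELD = 500        # entrants per event (assumed)
EVENTS = 300       # "a few hundred tournaments"
EDGE = 2.0         # best player wins twice as often as average
TRIALS = 2_000     # simulated careers

# Per-event win probability for the best player (player 0).
P_BEST = EDGE / (EDGE + FIELD - 1)

def best_player_leads(rng: random.Random) -> bool:
    """Simulate one career window and report whether the most
    skilled player finishes with the outright most wins."""
    wins = [0] * FIELD
    for _ in range(EVENTS):
        if rng.random() < P_BEST:
            wins[0] += 1
        else:
            wins[rng.randrange(1, FIELD)] += 1  # some other player
    top = max(wins)
    return wins[0] == top and wins.count(top) == 1

rng = random.Random(1)
hits = sum(best_player_leads(rng) for _ in range(TRIALS))
print(f"best player led outright in {hits / TRIALS:.1%} of careers")
```

Tweak the assumptions however you like; the edge has to grow implausibly large before short-run results become reliable evidence of skill.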