The Beauty of Data

Carrie Fowle, Data Scientist

Optimal Ranking of Multi-Competitor Games

How can we rank when we can't measure the competition?

There is a lot of research on how to rank players who participate in head-to-head, zero-sum games (think of chess), but far less on multi-competitor games. In these games, many competitors compete against one another under the same conditions toward the same goal.

Sports with this sort of competition (sailing, golf, horse racing, etc.) obviously care a great deal about how to rank players. But with a simple reframing of the problem, ranking in multi-competitor games has applications in business settings as well.

For example, consider a firm with several retail outlets in a geographic area. Each month, it looks at which stores performed best on various metrics, but over time, it can become difficult to form a clear idea of which stores are the best. On a small scale, we can make a judgment call about how stores A, B, and C rank relative to one another. But in the context of a larger organization, an algorithmic approach is necessary.

This comes from the fact that as the series of competitions grows more complex, it becomes challenging to capture the difficulty of each competition. To get around this issue, we create an optimization-based approach that measures the skill of each player from their performance and the difficulty of each competition from its attendees.

An Optimization Approach

To find the ability of each player, we predict a player's performance to be the number of competitors in a competition whose skill is lower than theirs, and we minimize the error of these predictions. This formulation finds the skill level of each competitor that best predicts performance, which we can then order to obtain a competitor ranking.
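A minimal sketch of this pairwise formulation, using toy data and a smooth sigmoid stand-in for the hard "number of opponents beaten" count (an implementation choice not specified in the text, made here so a standard gradient-based optimizer can be applied):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: for each competition, the players entered and how many
# competitors each player finished ahead of ("beaten").
competitions = [
    {"players": ["A", "B", "C"], "beaten": [2, 1, 0]},  # A won, C came last
    {"players": ["B", "C", "D"], "beaten": [2, 1, 0]},  # B won, D came last
]
players = sorted({p for c in competitions for p in c["players"]})
idx = {p: i for i, p in enumerate(players)}

def loss(skills):
    """Squared error between predicted and observed opponents beaten."""
    err = 0.0
    for comp in competitions:
        s = skills[[idx[p] for p in comp["players"]]]
        for j in range(len(s)):
            # Smooth count of opponents with lower skill: each sigmoid term
            # is near 1 when skill j clearly exceeds skill k, near 0 otherwise.
            pred = sum(1.0 / (1.0 + np.exp(-(s[j] - s[k])))
                       for k in range(len(s)) if k != j)
            err += (pred - comp["beaten"][j]) ** 2
    return err

res = minimize(loss, np.zeros(len(players)))
ranking = sorted(players, key=lambda p: -res.x[idx[p]])
```

Ordering the fitted skills then yields the ranking; with the toy data above, A (who beat everyone it faced) should come out on top and D (who beat no one) at the bottom.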

The challenge with this approach, however, is that it compares every player in a competition to every other player. This number of comparisons makes the problem too slow to be useful in the current implementation.

To improve the runtime, we simplify the method so it predicts a player's performance using only the player's skill and the size of the competition. In the new method, we predict a competitor's performance to be their skill times the average place in the competition (one half the competition size). The model, like the original, finds the skill levels which best fit the data, which we can then use to rank competitors.
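Because each prediction is now linear in a single player's skill, the least-squares fit decouples per player and has a closed form. A sketch, again on hypothetical data (competition size and competitors beaten per entry):

```python
import numpy as np

# Hypothetical results: for each player, (competition size, competitors beaten).
results = {
    "A": [(3, 2)],
    "B": [(3, 1), (3, 2)],
    "C": [(3, 0), (3, 1)],
    "D": [(3, 0)],
}

def fit_skill(entries):
    """Least squares for: performance ~ skill * (competition size / 2)."""
    x = np.array([size / 2.0 for size, _ in entries])
    y = np.array([beaten for _, beaten in entries])
    return float(x @ y / (x @ x))  # closed-form 1-D least-squares solution

skills = {p: fit_skill(e) for p, e in results.items()}
ranking = sorted(skills, key=skills.get, reverse=True)
```

Each player's skill is fitted independently in one pass over their results, which is what removes the pairwise-comparison bottleneck of the original formulation.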

Evolving Performance Over Time

Our first two formulations consider skill a single, fixed value over time; in reality, however, this value would evolve. We address this aspect of skill by breaking the competitions into periods, finding the skill of competitors in the first period, and then optimizing the amount it changes from the original value in subsequent periods.
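One way to sketch this is to give a player one skill value per period and add a penalty on period-to-period changes, so skill is anchored by the first period and drifts only as much as the data demands. The penalty weight below is an assumed illustration parameter, and the data is hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical observations for one player, grouped by period:
# (competition size, competitors beaten), using the simplified model
# performance ~ skill * (size / 2).
periods = [
    [(4, 3), (4, 3)],  # period 1: strong results
    [(4, 2), (4, 1)],  # period 2: performance has declined
]
lam = 0.5  # assumed penalty weight on skill changes between periods

def objective(skills):
    # Prediction error within each period...
    err = sum((skills[t] * size / 2.0 - beaten) ** 2
              for t, obs in enumerate(periods)
              for size, beaten in obs)
    # ...plus a penalty so skill changes smoothly from its earlier value.
    err += lam * sum((skills[t] - skills[t - 1]) ** 2
                     for t in range(1, len(periods)))
    return err

res = minimize(objective, np.zeros(len(periods)))
```

With this data the fitted skill should be higher in period 1 than in period 2, reflecting the decline while the penalty keeps the two values from diverging arbitrarily.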

In Summary

  • We rank players in multi-competitor games by using a measure of skill to predict competition results
  • We optimize those predictions over the measure of skill
  • We order competitors by this measure to obtain a ranking