Introduction

As demonstrated in the introductory vignette, Elo and other methods that rely on a single parameter to describe an agent's ability cannot accommodate scenarios with cyclical or non-transitive interactions amongst agents or players. In situations where these behaviours are expected, a mELO model is likely to achieve far better results than an Elo model, provided the outcomes aren't too noisy and the agents' abilities are locally stationary.

Note: Stationarity is important. If agents' true abilities change too quickly, statistical methods are unlikely to yield useful estimates or predictions.

In this document we will use the mELO package to see if a mELO model can outperform a regular Elo model at predicting the outcome of AFL matches. We do not expect the performance of the mELO model to be amazing. Other methods that are more highly parameterised (such as Glicko or Stephenson, as implemented in the PlayerRatings package) or that can condition predictions on additional information (other statistical or machine learning methods) would likely yield better results.

Method

We will apply the following procedure to build models and assess their performance:

  1. Split the data into the following segments:
    • Initialisation data. Seasons 2012–2014 inclusive.
    • Training data. Seasons 2015–2018 inclusive.
    • Test data. Seasons 2019–2020.
  2. Model fitting
    • Select hyperparameters: learning rates & home ground advantage.
    • Fit the model to the initialisation data.
    • Fit the model to the training data, initialised with the estimates obtained in the previous step.
  3. Model optimisation
    • Repeat step 2 to find the hyperparameters that minimise the chosen loss function on the training data.
  4. Get final estimate of model error
    • Using the optimal model from step 3, calculate error on the test data.

The initialisation step is necessary because the initial ratings for all agents will be set to the same value and the \(\textbf{C}\) matrix will have been initialised randomly. The first segment of the AFL match data is used to initialise the model so that reasonable estimates of \(\textbf{r}\) and \(\textbf{C}\) can be obtained and plugged into the training model before we start calculating estimated model errors.

The outcomes we are modelling will be \(y \in \{0, 0.5, 1\}\), representing a loss, draw and win for the home team respectively.

We will use the logarithmic loss function (equivalent to cross-entropy in this setting) to evaluate model fits, calculated as \[ \mathcal{L}(\Theta) = - \frac{1}{n} \sum_{i=1}^n \left[ y_{i} \log p_{i} + (1 - y_{i}) \log (1 - p_{i}) \right], \] where \(i\) is the match index, \(n\) is the total number of matches we are calculating the loss over and \(\Theta\) is the set of model hyperparameters.
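In R this is a small helper we can reuse for both models (logloss() here is our own definition, not a package function):

logloss <- function(y, p) {
  # Mean logarithmic loss for outcomes y in {0, 0.5, 1} and
  # predicted home-win probabilities p
  -mean(y * log(p) + (1 - y) * log(1 - p))
}

logloss(c(1, 0, 0.5), c(0.7, 0.4, 0.5))  # ~0.52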

Note: Alternatively, because these are iterative procedures, in step 2 we can fit a model with the specified hyperparameters to the initialisation data plus the training data, and then in step 3 count only the errors from the training data portion.

The data

A large sample of AFL match data has been sourced using the excellent fitzRoy package and made available in the afl_df object.
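For reference, match results of this kind can be retrieved with fitzRoy's fetch_results_afltables(); the exact call and the post-processing used to build afl_df may differ from the sketch below.

library(fitzRoy)

# Fetch historical match results from AFL Tables (the season argument
# and any reshaping into afl_df's columns are assumptions here; check
# the fitzRoy documentation for the current interface)
raw_results <- fetch_results_afltables(season = 2000:2020)

The first rows of afl_df look like this: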

match_index  home_team        away_team      outcome  date        home_score  away_score
          1  Richmond         Melbourne            1  2000-03-08          94          92
          2  Essendon         Port Adelaide        1  2000-03-09         156          62
          3  North Melbourne  West Coast           0  2000-03-10         111         154
          4  Adelaide         Footscray            0  2000-03-11         108         131
          5  Fremantle        Geelong              0  2000-03-11         107         129
          6  St Kilda         Sydney               0  2000-03-12         100         134

Split the data into the required segments
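A minimal sketch of this step, assuming the season can be derived from afl_df's date column; it also defines the n_init and n_train counts used when plotting later.

library(dplyr)
library(lubridate)

# AFL seasons fall within a calendar year, so year(date) identifies them
afl_df <- afl_df %>% mutate(season = year(date))

init_df  <- afl_df %>% filter(between(season, 2012, 2014))
train_df <- afl_df %>% filter(between(season, 2015, 2018))
test_df  <- afl_df %>% filter(between(season, 2019, 2020))

# Match counts, used later to mark the segment boundaries on plots
n_init  <- nrow(init_df)
n_train <- nrow(train_df)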

The models

For the sake of simplicity we will apply the home ground advantage to all teams playing at home. A more rigorous approach would be to identify the matches in which the away team had a travel disadvantage, and only set a home team advantage in those matches.

ELO

We begin by writing a function to help us perform step 2. This function returns the logloss over the training data for an Elo model fit with the specified hyperparameters, \(\Theta = (\eta, \gamma)\), the learning rate and home ground advantage respectively.

Note: The ELO() and mELO() functions return the true outcomes and the predictions for time \(t+1\) made at time \(t\), which makes the loss calculations straightforward and removes the need to write a function to loop through matches.
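A rough sketch of such a helper is below. The argument names and the components of the fitted object are assumptions for illustration rather than the package's documented interface; the point is that, per the note above, the fitted object already pairs outcomes with one-step-ahead predictions, so the loss reduces to the logloss() helper defined earlier.

elo_train_loss <- function(eta, gamma) {
  # Fit to the initialisation data, then continue on the training data
  # (argument and component names are assumed, not documented API)
  init_fit  <- ELO(init_df, eta = eta, home_advantage = gamma)
  train_fit <- ELO(train_df, eta = eta, home_advantage = gamma,
                   init_ratings = init_fit$ratings)
  logloss(train_fit$outcomes, train_fit$preds)
}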

Now we can use an optimisation procedure to find the hyperparameters \(\Theta\) such that the logloss error is minimised.
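For example, base R's optim() can drive the search, using the hypothetical elo_train_loss() helper sketched above:

# Nelder-Mead search over (eta, gamma); starting values are arbitrary
opt <- optim(
  par    = c(eta = 30, gamma = 50),
  fn     = function(par) elo_train_loss(par["eta"], par["gamma"]),
  method = "Nelder-Mead"
)
opt$par    # hyperparameters minimising the training logloss
opt$value  # the corresponding logloss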

The best model is obtained when \(\eta=\) 45.88 and \(\gamma=\) 78.29 (where the logarithmic loss \(\mathcal{L}=\) 0.61). Now we can fit a model with these parameters and calculate the error on the test set.

Results:

  • Test logloss \(\mathcal{L}=\) 0.6176086.
  • Test classification rate = 67.6%.

The figure below shows the evolution of the Elo ratings. The vertical lines indicate the cut-offs between the initialisation, training and test sets.

plot(optim_ELO_model)
# dashed lines mark the initialisation/training and training/test cut-offs
abline(v = c(n_init, n_init + n_train), lty = 2)

mELO

As with the Elo model, we build a function to help us optimise the loss function, this time with hyperparameters \(\Theta = (\eta_1, \eta_2, \gamma)\): the two learning rates and the home ground advantage.

Now we can use an optimisation procedure to find the hyperparameters \(\Theta\) such that the logloss error is minimised.

The best model is obtained when \(\eta_1=\) 46.5, \(\eta_2=\) 0.1 and \(\gamma=\) 75.13 (where the logarithmic loss \(\mathcal{L}=\) 0.61). Now we can fit a model with these parameters and calculate the error on the test set.

Results:

  • Test logloss \(\mathcal{L}=\) 0.6253702.
  • Test classification rate = 65.5%.

The figure below shows the evolution of the mELO ratings; they are very similar to the Elo paths. The vertical lines indicate the cut-offs between the initialisation, training and test sets.

plot(optim_mELO_model)
# dashed lines mark the initialisation/training and training/test cut-offs
abline(v = c(n_init, n_init + n_train), lty = 2)

We can also inspect the advantage matrix and plot the \(\textbf{c}\) vectors for each team.
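As a sketch (the accessor name is an assumption; the package may expose these components differently), for \(k=1\) the advantage matrix can be reconstructed from \(\textbf{C}\) and the fixed antisymmetric pairing matrix \(\Omega\):

# Assumed accessor: an n x 2 matrix whose rows are the teams'
# c vectors (k = 1 gives each team a 2-dimensional c vector)
C_mat <- optim_mELO_model$c_mat

# Fixed antisymmetric pairing matrix used by mELO when k = 1
Omega <- matrix(c( 0, 1,
                  -1, 0), nrow = 2, byrow = TRUE)

# A[i, j] is the non-transitive adjustment added to r_i - r_j when
# team i meets team j; A is antisymmetric by construction
A <- C_mat %*% Omega %*% t(C_mat)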

Conclusion

Let’s gather our results:

results_df <- data.frame(
  model = c("Elo", "mELO"),
  `Test logloss` = c(test_logloss_ELO, test_logloss_mELO) %>% round(3),
  `Test Accuracy` = paste0(
    (100*c(test_class_rate_ELO, test_class_rate_mELO)) %>% round(1), "%"
  )
)

results_df %>% knitr::kable()
model  Test.logloss  Test.Accuracy
Elo           0.618          67.6%
mELO          0.625          65.5%

Based on the results in the table above, the mELO model (with \(k=1\)) is inferior to the Elo model for predicting the outcome of AFL matches in the test set spanning seasons 2019–2020, as measured by both logloss and accuracy (classification rate).

If one was interested, there are a number of things to try to improve performance, including but not limited to:

  • Improving the home ground advantage indicator.
  • Replacing the \(\{0, 0.5, 1\}\) outcome with a value that is some function of the match margin (see the sketch after this list).
  • Finding an improved initialisation of the \(\textbf{C}\) matrix.
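On the second point, one illustrative mapping (margin_outcome() and its scale constant are our own invention) squashes the home-team margin through a logistic, so comfortable wins approach 1 and heavy losses approach 0:

# Hypothetical margin-based outcome: scale controls how quickly a
# blowout saturates towards 0 or 1
margin_outcome <- function(home_score, away_score, scale = 20) {
  1 / (1 + exp(-(home_score - away_score) / scale))
}

margin_outcome(94, 92)   # narrow 2-point win -> ~0.52
margin_outcome(156, 62)  # 94-point blowout   -> ~0.99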

Interestingly, the mELO model performs worse even on the training data. Adjusting the base team ratings with the \(\textbf{A}\) matrix when making predictions hurts performance if the \(\textbf{c}\) vectors cannot be learned properly.

If we look at the \(\textbf{c}\) vector plots for the rock-paper-scissors-fire-water example in the introductory vignette, we observe that even in that much simpler game, with fewer teams and no noise in the outcomes, it takes roughly 75 matches before the \(\textbf{c}\) vectors stabilise, the predictions become accurate and we can be confident that the dynamics of the game have been captured. In contrast, the AFL match data is far noisier[1], with far fewer repetitions of each match-up, and we expect the true abilities of teams to change over time due to changes in rosters, coaching and other factors.

For the AFL data, we suspect that the \(\textbf{c}\) vectors cannot be learned adequately: they cannot keep up with the noisy dynamics of the environment, so the resulting adjustments to the predictions only hurt performance. If the learning rate (\(\eta_2\)) is too low, the model won't learn much about the current regime before it changes; if it is too high, it will overfit to the most recent matches.


  1. In the sense that two teams could play twice and the results of the two matches could be very different. This doesn't happen in rock-paper-scissors and its variants. The impact of noise in outcomes for mELO models is explored in this vignette.