
Liam Webb

chaRlie 2019

It's September! That can only mean one thing: the Brownlow. Also finals footy and my friend Zombo's birthday, but mostly the Brownlow. For those who haven't been here before, every year we attempt to predict the Brownlow Medal using statistics and a machine learning model called chaRlie. Last year we built an app so people could go a bit more in-depth with the stats and generally waste time at work. This year we have polished that app a little, but to be honest left it mostly the same. The model itself has been slightly updated, adding in a few more statistics and some learnings from 2018. In general, though, it wasn't really a full overhaul, more of a tune-up.


For those who just want to see the 2019 results, click here.


2018


One of the main drivers for putting a little bit of time into chaRlie this year was that we weren't entirely happy with how it went last year. Umpires are very difficult to predict at the best of times, and the 2018 Brownlow was no different. From what we could see, no other public model performed as well as its authors would have hoped either, but ours seemed to have some particular biases which generated some pretty bad errors (Clayton Oliver and Jack Macrae in particular hurt me).


The current model has Tom Mitchell potentially getting votes in 17-19 of the 23 games. Note that the games where he received 0 votes appear correlated with high uncertainty in the predictions.
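To make that "17-19 of the 23 games" idea concrete: if a model outputs a probability of polling 0, 1, 2 or 3 votes in each game, you can simulate season totals from those per-game probabilities and read off a range. The sketch below does that in R with made-up probabilities - it is an illustration only, not chaRlie's actual outputs or code.

```r
set.seed(2019)

# Hypothetical per-game probabilities of polling 0, 1, 2 or 3 votes in each of
# the 23 games (one row per game). Random placeholders, not chaRlie's outputs.
n_games <- 23
game_probs <- t(replicate(n_games, {
  p <- runif(4)
  p / sum(p)
}))

# Simulate one season total by drawing a vote count for every game.
simulate_season <- function(probs) {
  sum(apply(probs, 1, function(p) sample(0:3, size = 1, prob = p)))
}
totals <- replicate(10000, simulate_season(game_probs))

mean(totals)                    # expected season total
quantile(totals, c(0.1, 0.9))   # an 80% range - wide when many games are uncertain
```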

Tom Mitchell was predicted to get 41 votes by last year's model and 38 by this year's revised model when run over 2018. Deep down, we knew that was very unlikely - he would need to break the record, and his year wasn't that good. Unfortunately, our machine learning model had not seen a year like his before and did not handle it well. Mitchell racked up possessions for fun, and in previous years players who have done that have generally got votes. Bottom line: machine-learning-based models are always going to struggle with player types they have not seen before.
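To illustrate the point (this is not chaRlie's actual code, and not necessarily its model family), the sketch below fits a simple regression tree to simulated disposal/vote data in which nobody in "history" tops about 32 disposals, then asks it about a Mitchell-like 38- or 45-disposal game.

```r
library(rpart)  # regression tree; ships with R as a recommended package

set.seed(1)
# Simulated "history": votes loosely track disposals, but nobody exceeds 32 disposals.
history <- data.frame(disposals = runif(500, 15, 32))
history$votes <- pmax(0, pmin(3, round(0.15 * (history$disposals - 15) + rnorm(500, sd = 0.5))))

fit <- rpart(votes ~ disposals, data = history)

# A Mitchell-like season sits well outside anything in the training data.
predict(fit, data.frame(disposals = c(30, 38, 45)))
# The tree can only return averages of leaves it has already seen, so the 38-
# and 45-disposal predictions are capped at roughly the 30-disposal level -
# it cannot extrapolate to a player type it has never encountered.
```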


Prediction error by player. Grey: error from the model released last year; red: error from the updated model.

The above plot shows the 25 worst-predicted players from last year. If anyone gambled using last year's numbers, it's possible these predictions were not popular. The plot shows that for most players the error reduced with the new revised model (9 of the 25 got worse, but mostly minimally). Overall, last year's model achieved a mean error of 0.41 for the top 100 predicted players, a mean absolute error of 1.97 votes, and had an error range of -11 (Max Gawn) to +13.5 (Tom Mitchell). Practically, this means that every player was on average 4 votes out, and in general the error was positive, which means we were allocating votes to the "better" players more than we should have.
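For anyone wondering what those measures mean, the snippet below computes them on a handful of made-up predicted and actual vote totals (not the real 2018 numbers): the mean error shows the direction of the bias, the mean absolute error the typical size of a miss, and the range the worst misses in each direction.

```r
# Made-up predicted and actual season vote totals for five players.
predicted <- c(41, 24, 18, 12, 30)
actual    <- c(28, 26, 22, 15, 29)

errors <- predicted - actual      # positive = over-prediction

mean(errors)       # mean error: above zero means votes skew towards fancied players
mean(abs(errors))  # mean absolute error: the typical size of a miss
range(errors)      # worst under- and over-prediction
```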


For most players the new model is actually quite consistent with last year's, though with a few noticeable improvements. The mean error is -0.13, which means we have switched from an over-prediction to a slight under-prediction. The error range of the new model has reduced to a minimum of -10.7 (Jack Steven) and a maximum over-prediction of +11.1 (Jack Macrae - keep that in mind when you see his numbers for 2019). The model has improved considerably in the ruck category, with a mean error of -0.15 votes and a maximum error of -5.61 (the model, like most people, underestimated Ben McEvoy). Max Gawn was actually only underestimated by 3 votes, an improvement of 8 votes on the numbers we released last year. The changes to the ruck predictions were based on the availability of better statistics, as well as a revamped approach to treating positional data.
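We won't go into the full detail of the positional changes here, but as a rough illustration of the general idea (not chaRlie's actual feature set or code), including a position label alongside ruck-specific statistics like hitouts stops a model from judging ruck games purely on midfielder-style numbers:

```r
# Hypothetical per-game rows with a position label and a ruck-specific statistic.
games <- data.frame(
  votes     = c(3, 0, 2, 1, 0, 3, 2, 0),
  disposals = c(35, 18, 28, 22, 14, 31, 20, 16),
  hitouts   = c(0, 42, 1, 0, 55, 2, 38, 0),
  position  = factor(c("MID", "RUCK", "MID", "MID", "RUCK", "MID", "RUCK", "MID"))
)

# With position in the model, a ruckman's low disposal count no longer sinks
# him, and hitouts can carry weight of their own.
fit <- lm(votes ~ disposals + hitouts + position, data = games)
coef(fit)
```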


In general, the mean absolute error for the top 100 predicted players has improved from 4.1 to 3.5 votes - a 15% reduction. That being said, the model still performed badly on some players over the last few years, and it's worth checking how badly it went when you are looking at a specific player.


The model performance tab allows us to see how well the most recent model has performed in past years.

APP


As mentioned, the app has been given a mild overhaul, with the aim of simplifying it and reducing the alarmingly high crash rate from last year. One of the key changes is the addition of error bars - these are particularly useful on the Player tab for assessing the model's certainty in any given round. The Overall and Team rankings can now be filtered by round, and the plots have tooltips. Please let us know if you can crash it!
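For the curious, the Player tab plot boils down to something like the ggplot2 sketch below: per-round predictions with an uncertainty interval, filtered to whichever rounds you pick. This is a simplified stand-in, not the app's actual code, and the numbers are randomly generated.

```r
library(ggplot2)

set.seed(42)
# Hypothetical per-round predictions for one player, with a crude uncertainty band.
player <- data.frame(round = 1:23, predicted = runif(23, 0, 3))
player$lower <- pmax(0, player$predicted - 0.8)
player$upper <- pmin(3, player$predicted + 0.8)

round_filter <- 1:12   # stand-in for the app's round filter

ggplot(subset(player, round %in% round_filter),
       aes(x = round, y = predicted)) +
  geom_pointrange(aes(ymin = lower, ymax = upper)) +
  labs(x = "Round", y = "Predicted votes",
       title = "Per-round prediction with error bars")
```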


As usual, we just do this for fun. If you choose to use the numbers for punting, don't hold us responsible if you lose cash. Similar to last year, there are tools for de-risking that we advise you use! Happy Brownlowing!

