In this post we use a simple formula described by [Michael Mauboussin](https://twitter.com/mjmauboussin) in his book [The Success Equation](http://success-equation.com/) to estimate how much of the variation in [AFL](https://en.wikipedia.org/wiki/Australian_Football_League) match outcomes is due to luck, and make some comparisons to other sports.
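The formula rests on true-score theory: the observed variance in results is the sum of skill variance and luck variance, so luck's contribution is the ratio of luck variance to observed variance. Here is a minimal R sketch of that decomposition, assuming binary win/loss outcomes (draws ignored) and made-up win percentages rather than real AFL data:

```r
# Sketch of the luck-vs-skill decomposition from true-score theory.
# For win/loss outcomes over n games per team, an all-luck league's win
# percentage has binomial variance p * (1 - p) / n with p = 0.5.
luck_contribution <- function(win_pct, n_games) {
  var_observed <- var(win_pct)    # spread of actual team win percentages
  var_luck     <- 0.25 / n_games  # spread expected from coin flips alone
  var_luck / var_observed         # share of observed variance due to luck
}

# Hypothetical example: an 18-team league, 22 games per team
set.seed(1)
win_pct <- rbeta(18, 5, 5)  # made-up win percentages, not real AFL data
luck_contribution(win_pct, 22)
```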
If you work in data/statistics/DS/ML/AI and are active on LinkedIn, you've probably seen this image. It typically receives a large number of positive comments. This makes sense: data by itself isn't informative without a layer of interpretation. However, without a valid model of the world, 'telling a story from data' typically results in an erroneous post-hoc rationalisation of events, built from noisy, incomplete data that is not representative of the wider population.
[Zillow](https://www.zillow.com/) announced it would use 'algorithms' to identify cheap properties that it could later flip for a handsome profit. Sounds obvious enough, right? Surely, with a big-data advantage, they would do better in the housing market than the average punter? It turns out they couldn't.
Typical bandit algorithms only adapt to the new data they observe; they are not 'really' adapting to changes in the environment. Bandit methods typically assume stationarity: that the environment, or the underlying phenomenon we are trying to understand, does not change over time!
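To make the stationarity assumption concrete, here is a toy R sketch (names and parameters are illustrative, not from any bandit library) of a sample-average epsilon-greedy agent in a world where the best arm switches halfway through:

```r
# Sketch: a sample-average epsilon-greedy bandit in a non-stationary world.
set.seed(42)
n_steps <- 2000
eps     <- 0.1
q_hat   <- c(0, 0)  # estimated action values
n_pulls <- c(0, 0)

true_mean <- function(t) {
  # Arm 1 is best for the first half, then the arms swap.
  if (t <= n_steps / 2) c(1.0, 0.0) else c(0.0, 1.0)
}

rewards <- numeric(n_steps)
for (t in seq_len(n_steps)) {
  a <- if (runif(1) < eps) sample(1:2, 1) else which.max(q_hat)
  r <- rnorm(1, mean = true_mean(t)[a])
  n_pulls[a] <- n_pulls[a] + 1
  # Sample-average update: implicitly assumes stationary rewards, so every
  # old observation keeps its full weight forever.
  q_hat[a] <- q_hat[a] + (r - q_hat[a]) / n_pulls[a]
  rewards[t] <- r
}
mean(rewards[(n_steps / 2 + 1):n_steps])  # poor: the agent is slow to switch
```

A constant step size, e.g. `q_hat[a] + 0.1 * (r - q_hat[a])`, weights recent rewards more heavily and recovers from the switch much faster.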
In this post we will explore how playing positive expected value games is not always a good idea, even when you can play them repeatedly. This runs counter to the common intuition that a positive expected value makes a bet worth taking.
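The canonical illustration (standard textbook numbers, not necessarily the figures used in the post) is a multiplicative coin flip: heads multiplies your wealth by 1.5, tails by 0.6. The expected value per flip is positive, yet the typical player is ruined:

```r
# Multiplicative coin-flip game: +50% on heads, -40% on tails.
0.5 * 1.5 + 0.5 * 0.6  # expected wealth multiplier per flip: 1.05, positive
sqrt(1.5 * 0.6)        # median growth per flip: ~0.949, typical wealth decays

set.seed(7)
n_players <- 10000
n_flips   <- 100
wealth <- replicate(n_players,
                    prod(sample(c(1.5, 0.6), n_flips, replace = TRUE)))
mean(wealth)    # ensemble average, propped up by a handful of lucky streaks
median(wealth)  # the typical player ends with almost nothing
```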
I've written an R package implementing DeepMind’s multidimensional Elo rating approach for evaluating agents. The mELO rating system has the desirable property of being able to handle cyclic, non-transitive interactions (think rock-paper-scissors dynamics). It is also better behaved in the presence of redundant copies of agents or tasks.
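For a flavour of the mechanics, here is a minimal sketch of a single mELO_2 update (one scalar rating plus a two-dimensional cyclic component per agent), following my reading of the DeepMind paper (Balduzzi et al., 2018, "Re-evaluating Evaluation"); the function name and learning rates are illustrative, not the package's API:

```r
# One mELO_2 update step. Each agent has a rating r and a 2-d vector c;
# omega pairs the two cyclic dimensions, so c_i %*% omega %*% c_j captures
# rock-paper-scissors style intransitivity that plain Elo cannot express.
sigmoid <- function(x) 1 / (1 + exp(-x))
omega   <- matrix(c(0, -1, 1, 0), nrow = 2)  # antisymmetric pairing matrix

melo_update <- function(r_i, r_j, c_i, c_j, outcome,
                        eta_r = 0.1, eta_c = 0.01) {
  # Predicted probability that i beats j: Elo term plus a cyclic term.
  p_hat <- as.numeric(sigmoid(r_i - r_j + c_i %*% omega %*% c_j))
  delta <- outcome - p_hat  # prediction error drives all updates
  list(
    r_i = r_i + eta_r * delta,
    r_j = r_j - eta_r * delta,
    c_i = c_i + eta_c * delta * as.numeric(omega %*% c_j),
    c_j = c_j - eta_c * delta * as.numeric(omega %*% c_i)
  )
}

# Example: agent i beats agent j (outcome = 1)
melo_update(r_i = 0, r_j = 0, c_i = c(0.1, 0.2), c_j = c(0.3, -0.1),
            outcome = 1)
```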