Replicable Evaluation of Recommender Systems
by Alan Said (Recorded Future, Sweden) and Alejandro Bellogín (Universidad Autónoma de Madrid, Spain)
Recommender systems research is by and large based on comparing recommendation algorithms' predictive accuracy: the better the evaluation metrics (higher accuracy scores or lower prediction errors), the better the recommender algorithm. Comparing the evaluation results of two recommendation approaches is, however, a difficult process, as many factors must be considered in the implementation of an algorithm, in its evaluation, and in how datasets are processed and prepared.
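As a minimal illustration of the kind of comparison described above (not part of the tutorial itself), the sketch below evaluates two toy rating predictors with RMSE on hypothetical data; fixing the random seed is one small step toward making such a run replicable. All names and data here are assumptions for illustration only.

```python
import math
import random

def rmse(predictions, truths):
    """Root-mean-square error between predicted and true ratings."""
    return math.sqrt(
        sum((p - t) ** 2 for p, t in zip(predictions, truths)) / len(truths)
    )

# Hypothetical ratings on a 1-5 scale and two toy "recommenders":
# a global-mean predictor and a noisy oracle. Seeding the RNG makes
# the experiment replicable across runs.
random.seed(42)
truths = [random.choice([1, 2, 3, 4, 5]) for _ in range(100)]
global_mean = sum(truths) / len(truths)

pred_mean = [global_mean] * len(truths)
pred_noisy = [min(5.0, max(1.0, t + random.gauss(0, 1))) for t in truths]

print(f"RMSE (global mean): {rmse(pred_mean, truths):.3f}")
print(f"RMSE (noisy oracle): {rmse(pred_noisy, truths):.3f}")
```

Even in this toy setting, the outcome of the comparison depends on choices such as the rating scale, the clipping of predictions, and the seed, which is exactly why reporting those details matters.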
This tutorial will show how to present evaluation results in a clear and concise manner, while ensuring that they are comparable, replicable, and unbiased. These insights are not limited to recommender systems research, but also apply to experiments with other types of personalized interactions and contextual information.
Wednesday, Sept 16, 2015, 09:00-10:30