Workshop on Reproducibility and Replication in Recommender Systems Evaluation

Even when publicly available resources (datasets and algorithm implementations) exist in the community, research studies very often fail to report comparable results for the same methods under the same conditions. This is due to the large number of experimental design parameters in recommender system evaluation, and the substantial impact the experimental design has on the outcomes.
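
As a minimal illustration of this point, the following hypothetical Python sketch (not taken from the workshop materials; data and function names are our own) shows how a single design choice, namely which items are used as ranking candidates, can yield very different Precision@k values for the same recommendation list and the same test data.

```python
# Hypothetical sketch: one evaluation design choice (candidate item selection)
# changes the reported metric for the *same* recommendations and test data.

def precision_at_k(ranked_items, relevant_items, k=5):
    """Fraction of the top-k recommended items that are relevant."""
    hits = sum(1 for item in ranked_items[:k] if item in relevant_items)
    return hits / k

# One user's recommendation list and held-out relevant items (toy data).
recommendations = ["i1", "i7", "i3", "i9", "i4", "i2", "i8"]
relevant = {"i3", "i2", "i8"}

# Protocol A: rank among all items (the list as produced).
p_all = precision_at_k(recommendations, relevant, k=5)

# Protocol B: rank only items that appear in the user's test set
# (a "rated-items-only" candidate selection).
rated_in_test = {"i3", "i2", "i8", "i9"}
condensed = [i for i in recommendations if i in rated_in_test]
p_condensed = precision_at_k(condensed, relevant, k=5)

print(f"P@5, all-items protocol:   {p_all:.2f}")        # 0.20
print(f"P@5, rated-items protocol: {p_condensed:.2f}")  # 0.60
```

The same system, data, and metric thus produce values that differ by a factor of three purely because of the candidate selection protocol, which is why unreported design details make published results hard to compare.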

Several strategies can be considered in order to achieve reproducibility and replication, such as sharing source code, standardizing agreed-upon evaluation metrics and protocols, or releasing public experimental design software, all of which have difficulties of their own. Similarly, for online evaluation, an extensive analysis of the population of test users should be provided. While the problem of reproducibility and replication has been recognized in the community, the need for a solution remains largely unmet. This, together with the need for further discussion and methodological standardization in both reproducibility and replication, motivates the workshop.

Organizers
Workshop Date

Oct 12, 2013 (08:30 – 16:15)

Room

LT-16

Web site

http://repsys.project.cwi.nl

Gold Supporters
Silver Supporters
Bronze Supporter
Technical Supporters