Workshop on Reinforcement and Robust Estimators for Recommendation

State-of-the-art recommender systems are notoriously hard to design because of their interactive and dynamic nature: they involve a multi-step decision-making process in which a stream of interactions occurs between the user and the system. To make the problem tractable, these interactions are often treated as independent, but to plan for long-term user satisfaction, models need to account for the delayed effects of each recommendation. The same interactivity also makes recommender systems hard to evaluate. Unlike online methods such as A/B testing, offline evaluation provides a scalable way of comparing recommender systems. Recent research links recommendation with counterfactual inference, which enables offline A/B testing that reuses logged interaction data, as well as with the use of simulators.
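As a concrete illustration of the counterfactual approach mentioned above, a new policy can be evaluated offline from logged interaction data with an inverse-propensity-scoring (IPS) estimator. The sketch below is a minimal, illustrative example; the function and variable names are invented for this illustration and do not come from any particular library:

```python
import numpy as np

def ips_estimate(rewards, logging_propensities, target_propensities):
    """Estimate the value of a target policy from data logged by another policy.

    rewards:              observed rewards for the logged recommendations
    logging_propensities: probability the logging policy assigned to each
                          logged recommendation
    target_propensities:  probability the target policy assigns to the
                          same logged recommendation

    Each logged reward is reweighted by the ratio of target to logging
    propensities, correcting for the mismatch between the two policies.
    """
    weights = np.asarray(target_propensities) / np.asarray(logging_propensities)
    return float(np.mean(weights * np.asarray(rewards)))

# Hypothetical log: three interactions, logging policy chose each action
# with probability 0.5; the target policy would choose the first and third
# logged actions with certainty and never the second.
estimate = ips_estimate(
    rewards=[1.0, 0.0, 1.0],
    logging_propensities=[0.5, 0.5, 0.5],
    target_propensities=[1.0, 0.0, 1.0],
)
```

The estimator is unbiased when the logging propensities are known and non-zero wherever the target policy puts probability mass, but its variance grows as the two policies diverge, which motivates the "robust estimators" topic of the workshop.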

REVEAL ’19 revisits the problem of designing and evaluating recommender systems. We invite contributions on the following topics:

  • Reinforcement learning and bandits for recommendation
  • Robust estimators and counterfactual evaluation
  • Using simulation for recommender systems evaluation
  • Open datasets and new offline metrics

The purpose of the workshop is to ensure that the community, spanning academic and industrial backgrounds, is working on the right problem: finding, for each user, the most impactful recommendation.

Organizers
  • Thorsten Joachims, Cornell University, USA
  • Adith Swaminathan, Deep Learning Technology Center, Microsoft Research, USA
  • Maria Dimakopoulou, Netflix R&D, USA
  • Yves Raimond, Netflix R&D, USA
  • Olivier Koch, Criteo R&D, France
  • Flavian Vasile, Criteo R&D, France


Friday, Sept 20, 2019, 09:00-17:30


Room 204+205
