Replicable Evaluation of Recommender Systems
by Alan Said (Recorded Future, Sweden) and Alejandro Bellogín (Universidad Autónoma de Madrid, Spain)
Recommender systems research is by and large based on comparisons of the predictive accuracy of recommendation algorithms: the better the evaluation metrics (higher accuracy scores or lower predictive errors), the better the recommender algorithm. Comparing the evaluation results of two recommendation approaches is, however, a difficult process, as many factors must be considered in the implementation of an algorithm, in its evaluation, and in how datasets are processed and prepared.
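As a minimal illustration (ours, not taken from the tutorial), the sketch below computes two common metrics on toy data; all function names and numbers are assumed, and the point is simply that the choice of metric alone can change which algorithm looks better:

```python
import math

def rmse(predicted, actual):
    """Root mean squared error over (user, item) pairs present in both dicts."""
    errors = [(predicted[k] - actual[k]) ** 2 for k in actual if k in predicted]
    return math.sqrt(sum(errors) / len(errors))

def precision_at_k(ranked_items, relevant_items, k=10):
    """Fraction of the top-k recommendations that are relevant."""
    hits = sum(1 for item in ranked_items[:k] if item in relevant_items)
    return hits / k

# Toy data: two algorithms with identical error can still rank items
# very differently, one reason reported results are hard to compare.
actual = {("u1", "i1"): 4.0, ("u1", "i2"): 2.0}
pred_a = {("u1", "i1"): 3.5, ("u1", "i2"): 2.5}
pred_b = {("u1", "i1"): 4.5, ("u1", "i2"): 1.5}
print(rmse(pred_a, actual), rmse(pred_b, actual))  # both 0.5

ranked_a, ranked_b, relevant = ["i1", "i3", "i2"], ["i3", "i2", "i1"], {"i1"}
print(precision_at_k(ranked_a, relevant, k=1))  # 1.0
print(precision_at_k(ranked_b, relevant, k=1))  # 0.0
```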
This tutorial will show how to present evaluation results in a clear and concise manner, while ensuring that the results are comparable, replicable, and unbiased. These insights are not limited to recommender systems research; they are also valid for experiments with other types of personalized interactions and contextual information.
Slides
http://www.slideshare.net/abellogin/replicable-evaluation-of-recommender-systems
Date
Wednesday, Sept 16, 2015, 09:00-10:30
Location
HS 5
Real-time Recommendation of Streamed Data
by Frank Hopfgartner (University of Glasgow, UK), Benjamin Kille (TU Berlin, Germany), Tobias Heintz (plista GmbH, Germany) and Roberto Turrin (ContentWise, Italy)
This tutorial addresses two trending topics in the field of recommender systems research: online evaluation in the form of A/B testing, and offline evaluation of stream-based recommendation techniques.
A/B testing aims to benchmark variants of a recommender system on a large group of users. It is increasingly adopted for the evaluation of commercial systems with a large user base, as it provides the advantage of observing the performance of recommendation algorithms under real conditions. However, while online evaluation is the de facto standard evaluation methodology in industry, university-based researchers often have access to neither the infrastructure nor the user base needed to perform online evaluation at scale. Addressing this deficit, participants will learn in this tutorial how they can join a living lab on news recommendation that allows them to perform A/B testing.
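As a rough sketch of the mechanics behind A/B testing (our own illustration, not the living lab's actual API), one can deterministically bucket users into variants and compare per-variant click-through rates:

```python
import hashlib

def assign_variant(user_id, variants=("A", "B")):
    """Deterministically bucket a user by hashing the id, so the same
    user always sees recommendations from the same variant."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Illustrative logged impressions: (user_id, clicked) pairs.
impressions = [("u1", True), ("u2", False), ("u3", True), ("u4", False)]
clicks, shows = {}, {}
for user, clicked in impressions:
    v = assign_variant(user)
    shows[v] = shows.get(v, 0) + 1
    clicks[v] = clicks.get(v, 0) + int(clicked)
for v in sorted(shows):
    print(v, clicks.get(v, 0) / shows[v])  # per-variant click-through rate
```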
Offline evaluation allows for testing research hypotheses that center on modeling recommendation as user-specific selection from static collections of items. While this may be suitable in domains where the content does not change often, it fails in more dynamic domains where new items continuously emerge and extend collections, and where existing items become less and less relevant. Examples include news, microblog, or advertisement recommendation, where content arrives as a constant stream of data. Streamed data poses specific challenges for recommender systems; for example, it challenges collaborative filtering, as the sets of users and items fluctuate rapidly. This tutorial focuses on stream-based recommenders that reflect these dynamics.
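One simple way to reflect such dynamics, shown below purely as an assumed illustration rather than a technique from the tutorial, is to decay item scores over time so that a stream of new interactions displaces stale items:

```python
import heapq
import math
import time

class DecayedPopularityRecommender:
    """Toy stream recommender: item scores decay exponentially, so newly
    popular items overtake items that stopped receiving interactions."""

    def __init__(self, half_life_seconds=3600.0):
        self.decay = math.log(2) / half_life_seconds
        self.scores = {}  # item -> (score, timestamp of last update)

    def observe(self, item, now=None):
        """Process one interaction from the stream."""
        now = time.time() if now is None else now
        score, last = self.scores.get(item, (0.0, now))
        self.scores[item] = (score * math.exp(-self.decay * (now - last)) + 1.0, now)

    def recommend(self, n=5, now=None):
        """Return the n items with the highest time-decayed score."""
        now = time.time() if now is None else now
        current = {i: s * math.exp(-self.decay * (now - t))
                   for i, (s, t) in self.scores.items()}
        return heapq.nlargest(n, current, key=current.get)
```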
Slides
http://www.slideshare.net/fraho/recsys15-tutorial-on-realtime-recommendation-of-streamed-data
Date
Wednesday, Sept 16, 2015, 09:00-10:30
Location
HS 6
Scalable Recommender Systems: Where Machine Learning Meets Search!
by Joaquin A. Delgado (Verizon, US) and Diana Hu (Verizon, US)
This tutorial gives an overview of how search engines and machine learning techniques can be tightly coupled to address the need for building scalable recommender systems and other prediction-based systems. Typically, such systems architect retrieval and prediction in two phases. In Phase I, a search engine returns the top-k results based on constraints expressed as a query. In Phase II, the top-k results are re-ranked in another system according to an optimization function that uses a supervised trained model. However, this approach presents several issues, such as the possibility of returning sub-optimal results due to the top-k limit imposed at query time, as well as inefficiencies caused by the decoupling of retrieval and ranking.
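The two-phase pattern can be summarized in a few lines; the `search_engine` and `model` interfaces here are hypothetical placeholders, not part of any particular system:

```python
def two_phase_recommend(query, search_engine, model, k=100, n=10):
    """Phase I: the search engine returns the top-k candidates for the query.
    Phase II: a trained model re-ranks them. Note that any item outside the
    top k can never be recommended, which is the sub-optimality noted above."""
    candidates = search_engine.search(query, size=k)            # Phase I
    scored = [(model.predict(doc), doc) for doc in candidates]  # Phase II
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:n]]
```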
To address these issues, the authors created ML-Scoring, an open-source framework that tightly integrates machine learning models into Elasticsearch, a popular search engine. ML-Scoring replaces the default information retrieval ranking function with a custom supervised model, trained in Spark, Weka, or R, that is loaded into Elasticsearch as a plugin. This tutorial will not only review basic methods in information retrieval and machine learning, but will also walk through practical examples, from loading a dataset into Elasticsearch, to training a model in Spark, Weka, or R, to creating the ML-Scoring plugin for Elasticsearch. No prior experience with any of the systems listed (Elasticsearch, Spark, Weka, R) is required, though some programming experience is recommended.
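As a hedged sketch of what model-based scoring inside Elasticsearch can look like, the query below uses the standard function_score/script_score query DSL via the official Python client; the script name ml_score is a hypothetical placeholder and not the actual ML-Scoring interface:

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch()

# function_score lets a script replace the default IR relevance score;
# "ml_score" is an assumed name for a script exposed by a custom plugin.
query = {
    "query": {
        "function_score": {
            "query": {"match": {"description": "action movie"}},
            "script_score": {"script": "ml_score"},  # hypothetical script
            "boost_mode": "replace",  # model score replaces the IR score
        }
    }
}
results = es.search(index="items", body=query)
for hit in results["hits"]["hits"]:
    print(hit["_id"], hit["_score"])
```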
Slides
https://speakerdeck.com/sdianahu/recsys-2015-tutorial-scalable-recommender-systems-where-machine-learning-meets-search
Date
Wednesday, Sept 16, 2015, 11:00-12:30
Location
HS 5
Interactive Recommender Systems
by Harald Steck (Netflix Inc., US), Roelof van Zwol (Netflix Inc., US) and Chris Johnson (Spotify Inc., US)
Interactive recommender systems enable the user to steer the received recommendations in the desired direction through explicit interaction with the system. In the larger ecosystem of recommender systems used on a website, they are positioned between a lean-back recommendation experience and an active search for a specific piece of content. We will discuss several aspects that are especially important for interactive recommender systems: the design of the user interface and its tight integration with the algorithm in the back end; the computational efficiency of the recommender algorithm; and striking the right balance between exploiting the user's feedback to provide relevant recommendations and enabling the user to explore the catalog and steer the recommendations in the desired direction.
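As one deliberately simple way to illustrate the exploration/exploitation trade-off, an epsilon-greedy slate occasionally swaps a model-ranked item for a random catalog item; this sketch is our own and not a method attributed to Netflix or Spotify:

```python
import random

def epsilon_greedy_slate(ranked_items, catalog, slate_size=10, epsilon=0.1):
    """Mostly exploit the model's ranking, but with probability epsilon
    fill a slot with a random unseen catalog item, so user feedback on
    unexplored content can steer future recommendations."""
    pool = [i for i in catalog if i not in ranked_items[:slate_size]]
    slate = []
    for item in ranked_items[:slate_size]:
        if pool and random.random() < epsilon:
            slate.append(pool.pop(random.randrange(len(pool))))  # explore
        else:
            slate.append(item)                                   # exploit
    return slate
```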
In particular, we will explore the field of interactive video and music recommendations and their application at Netflix and Spotify. We outline some of the user experiences built, and discuss the approaches followed to tackle the various aspects of interactive recommendations. We present our insights from user studies and A/B tests.
The tutorial targets researchers and practitioners in the field of recommender systems, and will give the participants a unique opportunity to learn about the various aspects of interactive recommender systems in the video and music domain. The tutorial assumes familiarity with the common methods of recommender systems.
Slides
http://www.slideshare.net/MrChrisJohnson/interactive-recommender-systems-with-netflix-and-spotify
Date
Wednesday, Sept 16, 2015, 11:00-12:30
Location
HS 6