Tutorials

  • Neural Re-ranking for Multi-stage Recommender Systems
    by Weiwen Liu (Huawei Noah’s Ark Lab, China), Jiarui Qin (Shanghai Jiao Tong University, China), Ruiming Tang (Huawei, China), and Bo Chen (Huawei, China)

    Re-ranking is one of the most critical stages of multi-stage recommender systems (MRS): it re-orders the input ranking list by modeling cross-item interactions. Driven by significant advances in deep learning, recent re-ranking methods have evolved into deep neural architectures. Neural re-ranking has therefore become a trending topic, and many of these algorithms have been deployed in industrial applications with great commercial success. The purpose of this tutorial is to survey recent work on neural re-ranking, integrate it into a broader picture, and pave the way for more comprehensive solutions in future research. In particular, we provide a taxonomy of current methods according to their objectives and training signals, examine and compare these methods qualitatively and quantitatively, and identify open challenges and future prospects. Detailed information about the tutorial can be found at https://librerank-community.github.io/.
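
    To make the cross-item interaction idea concrete, here is a minimal, illustrative PyTorch sketch (ours, not code from the tutorial): self-attention over the candidate list lets every item’s new score depend on all the other items before the list is re-ordered.

      import torch
      import torch.nn as nn

      class AttentionReRanker(nn.Module):
          """Listwise re-ranker: scores each item in the context of the full list."""
          def __init__(self, item_dim=32, heads=4):
              super().__init__()
              layer = nn.TransformerEncoderLayer(item_dim, heads, batch_first=True)
              self.encoder = nn.TransformerEncoder(layer, num_layers=2)
              self.score = nn.Linear(item_dim, 1)

          def forward(self, items):               # items: (batch, list_len, item_dim)
              ctx = self.encoder(items)           # every item attends to the whole list
              return self.score(ctx).squeeze(-1)  # (batch, list_len) re-ranking scores

      scores = AttentionReRanker()(torch.randn(2, 10, 32))
      reranked = scores.argsort(dim=-1, descending=True)  # new item order per list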

    This tutorial is intended for researchers and practitioners who

    • are new to re-ranking for recommendation and looking for a tutorial to step quickly into the field;
    • have been working on neural re-ranking and are willing to explore new challenges and open issues in the field;
    • are building re-ranking models for large-scale industrial recommender systems.

    The only prerequisite for this tutorial is elementary knowledge of recommender systems and deep learning.

  • Hands-on Reinforcement Learning for Recommender Systems – From Bandits to SlateQ to Offline RL with Ray RLlib
    by Kourosh Hakhamaneshi (Anyscale, USA) and Christy Bergman (Anyscale, USA)

    Traditional supervised ML techniques are efficient at capturing detailed patterns for recommender systems (RecSys), but the resulting models are static and do not easily adapt to users with changing preferences and behaviors. It is natural to model recommender systems as repeated decision-making processes. Each user action (for example, clicking on a search or recommendation result) has an impact on the immediately following actions and on the user’s long-term satisfaction or lifetime value (LTV). Each action in the sequence may yield an immediate (short-term) engagement, but the more interesting (longer-term) reward is not known until the user completes their interaction cycle.

    Reinforcement learning (RL) is gaining traction as a complementary approach to supervised learning for RecSys due to RL’s sequential decision-making process and its ability to learn from delayed rewards. Recent advances in offline reinforcement learning, off-policy evaluation, and more scalable, performant system design with the ability to run code in parallel have made RL more tractable for real-time RecSys use cases.

    In this hands-on tutorial, you will learn about RLlib, the most comprehensive open-source reinforcement learning framework, built for production workloads. RLlib is built on top of Ray, an easy-to-use, open-source, distributed computing framework for Python that can handle complex, heterogeneous applications. Ray and RLlib run on compute clusters on any cloud without vendor lock-in. Since Ray is open source, it is possible to bring new innovations to your users faster.

    Using Colab notebooks, we will combine theoretical concepts of RL with practical exercises. You will leave with a complete, working example of parallelized Python RL code using Ray RLlib for RecSys, available in a GitHub repo.

    LEARNING OBJECTIVES. IN THIS TUTORIAL YOU WILL:

    • Customize a RecSys RL environment (RecSim with Long-Term Satisfaction) using OpenAI Gym APIs.
    • Train and hyperparameter-tune an RLlib algorithm (SlateQ, MARWIL) using Ray Tune (see the sketch after this list).
    • Checkpoint, save, and load the RL model using RLlib.
    • Use offline RL techniques to initialize an RL policy and keep training and evaluating it.
    • Use Python decorators to deploy and serve the trained recommender model using Ray Serve.
    • Visualize results.
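
    As a taste of the train-and-tune objective, here is a minimal, illustrative sketch (ours, not the tutorial’s code). It assumes a Ray 2.x-era API, which changes between releases, and substitutes the toy CartPole-v1 environment and the PPO algorithm for the RecSim environment and the SlateQ/MARWIL algorithms used in the tutorial:

      from ray import air, tune

      # Hyperparameter-tune an RLlib algorithm registered by name ("PPO" here as a
      # stand-in; the tutorial works with SlateQ and MARWIL, whose availability
      # varies across RLlib releases).
      tuner = tune.Tuner(
          "PPO",
          param_space={
              "env": "CartPole-v1",  # stand-in for the RecSim environment
              "lr": tune.grid_search([1e-4, 1e-3]),
          },
          tune_config=tune.TuneConfig(metric="episode_reward_mean", mode="max"),
          run_config=air.RunConfig(stop={"training_iteration": 5}),
      )
      results = tuner.fit()
      best_checkpoint = results.get_best_result().checkpoint  # restorable via RLlib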

    PREPARE FOR YOUR HANDS-ON TUTORIAL TRAINING

    This tutorial is aimed at an audience with introductory to intermediate experience in Python, deep learning, and reinforcement learning who are interested in using RL methods in their recommender systems.

    • Bring your own laptop. All software will be pre-installed for you in Colab notebooks.
    • Have a Google account, such as the one you use for Gmail.

    To get the most from your hands-on learning experience, here is a recommended reading list in case you need a quick refresher before the tutorial:

    • Basic knowledge of Python (Intro to Python)
    • Deep learning using either PyTorch or TensorFlow
    • Reinforcement learning (Intro to RL)

  • Offline Evaluation for Group Recommender Systems
    by Francesco Barile (Maastricht University, The Netherlands), Amra Delić (University of Sarajevo, Bosnia and Herzegovina), and Ladislav Peška (Charles University, Czech Republic)

    Group Recommender Systems (GRSs), unlike recommenders for individuals, provide suggestions for groups of people. Many activities are experienced by a group rather than an individual (visiting a restaurant, traveling, watching a movie, etc.), hence the need for such systems. The topic is gradually receiving more attention, with an increasing number of papers published at significant venues, enabled by the predominance of online social platforms that allow their users to interact in groups and to plan group activities. However, the research area lacks certain ground rules, such as basic evaluation agreements. We believe this is one of the main obstacles to making advances in the area and to enabling researchers to compare and build on each other’s work. In other words, setting basic evaluation agreements is a stepping-stone towards reproducible group recommender research. The goal of this tutorial is to tackle this problem by providing the basic principles of GRS offline evaluation approaches.

    The tutorial is planned for 150 minutes. After introducing the theoretical background, a major part of the allocated time will be dedicated to interactive participation (group discussions, hands-on exercises). In particular, the tutorial covers evaluation with a synthetic data set (one that does not contain real information about groups; instead, the groups are created artificially), namely the MovieLens data set, as well as evaluation with a data set containing both individual and group preferences in the tourism domain. A minimal sketch of the synthetic-group setup follows below.
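
    The sketch below is ours, not the tutorial’s notebook code; it shows one common way to form artificial groups from individual (e.g., MovieLens) users and to merge per-member predicted scores with standard aggregation strategies such as average and least misery.

      import random
      import numpy as np

      def make_synthetic_groups(user_ids, group_size=4, n_groups=100, seed=42):
          # Sample artificial groups from individual (e.g., MovieLens) users.
          rng = random.Random(seed)
          users = list(user_ids)
          return [rng.sample(users, group_size) for _ in range(n_groups)]

      def aggregate_scores(member_scores, strategy="average"):
          # member_scores: (group_size, n_items) array of per-member predictions.
          scores = np.asarray(member_scores)
          if strategy == "average":
              return scores.mean(axis=0)   # maximize average satisfaction
          if strategy == "least_misery":
              return scores.min(axis=0)    # protect the least happy member
          raise ValueError(f"unknown strategy: {strategy}")

      # A group recommendation is the descending sort of the aggregated scores.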

    The primary target audience of the tutorial will be academic researchers whose research interest involves group recommendation systems and group decision support systems. Furthermore, the tutorial also aims to attract industry researchers whose research initiatives could potentially be extended to group recommendations.

    Participants are assumed to have a basic knowledge of the Python language and machine learning, and familiarity with recommendation systems and group recommendation systems. In addition, it is recommended to bring a personal laptop for the practical session.

  • Training and Deploying Multi-Stage Recommender Systems
    by Ronay Ak (NVIDIA, USA), Benedikt Schifferer (NVIDIA, Germany), Sara Rabhi (NVIDIA, Canada), and Gabriel de Souza Pereira Moreira (NVIDIA, Brazil)

    Industrial recommender systems are made up of complex pipelines requiring multiple steps, including feature engineering and preprocessing, a retrieval model for candidate generation, filtering, a feature store query, a ranking model for scoring, and an ordering stage. These pipelines need to be carefully deployed as a set, requiring coordination during their development and deployment. Data scientists, ML engineers, and researchers may focus on different stages of recommender systems, but they share a common desire to reduce the time and effort spent searching for and combining boilerplate code from different sources, or writing custom code from scratch, to create their own RecSys pipelines.

    This tutorial introduces the Merlin framework, which aims to make the development and deployment of recommender systems easier by providing methods for evaluating existing approaches, developing new ideas, and deploying them to production. Many techniques, such as different model architectures (e.g., MF, DLRM, DCN), negative sampling strategies, loss functions, and prediction tasks (binary, multi-class, multi-task), are commonly used in these pipelines. Merlin provides building blocks that allow RecSys practitioners to focus on the “what” question in designing their model pipeline instead of the “how”. Supporting research into new ideas within the RecSys space is equally important, and Merlin supports the addition of custom components and the extension of existing ones to address gaps.

    In this tutorial, participants will learn: (i) how to easily implement common recommender system techniques for comparison, (ii) how to modify components to evaluate new ideas, and (iii) how to deploy recommender systems and bring new ideas to production, using the open-source Merlin framework and its libraries. A short sketch of the building-block API follows below.
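
    This sketch loosely follows the public Merlin Models examples; exact names and signatures vary across Merlin releases, and the parquet path is hypothetical, so treat it as approximate rather than the tutorial’s own code.

      import merlin.models.tf as mm
      from merlin.io import Dataset

      # Hypothetical NVTabular-preprocessed training data with an attached schema.
      train = Dataset("train/*.parquet")
      schema = train.schema

      # Building blocks: swap the architecture (DLRM here; DCN, MF, ... elsewhere)
      # without rewriting the rest of the pipeline.
      model = mm.DLRMModel(
          schema,
          embedding_dim=64,
          bottom_block=mm.MLPBlock([128, 64]),
          top_block=mm.MLPBlock([128, 64, 32]),
          prediction_tasks=mm.BinaryClassificationTask("click"),
      )
      model.compile(optimizer="adam")
      model.fit(train, batch_size=1024)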

    This tutorial will be a combination of lectures (25 min) and hands-on coding live with Jupyter notebooks (125 min). The audience will be able to follow all hands-on labs in a dedicated environment via Jupyter notebooks and participate by running the code themselves and solving the exercises. The NVIDIA Deep Learning Institute will provide infrastructure for the tutorial. Each participant will get access to a dedicated compute instance with an attached GPU, which hosts the dataset and Jupyter notebooks. Participants are required to bring their own laptop and have an internet connection. They can start the instance and access the examples via their web browser.

    PREPARE FOR YOUR HANDS-ON TUTORIAL TRAINING

    To get the most from your hands-on learning experience, please complete these steps prior to getting started:

    1. Create or log into your NVIDIA Developer Program account. This account will provide you with access to all of the training materials during the tutorial.
    2. Visit websocketstest.courses.nvidia.com and make sure all three test steps are checked “Yes.” This will test the ability for your system to access and deliver the training contents. If you encounter issues, try updating your browser. Note: Only Chrome and Firefox are supported.
    3. Check your bandwidth. 1 Mbps downstream is required and 5 Mbps is recommended. This will ensure consistent streaming of audio/video during the tutorial to avoid glitches and delays.

  • Improving Recommender Systems with Human-in-the-Loop
    by Dmitry Ustalov (Toloka, Switzerland), Natalia Fedorova (Toloka, Switzerland), and Nikita Pavlichenko (Toloka, Switzerland)

    Today, most recommender systems employ machine learning to recommend posts, products, and other items, usually produced by the users. Despite the impressive progress in deep learning and reinforcement learning, we observe that recommendations made by such systems still do not correlate well with actual human preferences. In our tutorial, we will share more than six years of our crowdsourcing experience and bridge the gap between the crowdsourcing and recommender systems communities by showing how one can incorporate a human-in-the-loop component into a recommender system to gather real human feedback on ranked recommendations. We will discuss the ranking data lifecycle and run through it step by step. A significant portion of the tutorial time is devoted to hands-on practice, in which the attendees will, under our guidance, sample and annotate recommendations with a real crowd, build the ground-truth dataset, and compute the evaluation scores.

    Outline:

    1. Introduction: Recommender Systems, Crowdsourcing, Online and Offline Evaluation
    2. Ranking and Its Quality: Problem of Learning-to-Rank, Pointwise/Pairwise/Listwise Approaches, Evaluation Criteria
    3. Human-in-the-Loop Essentials: Core Concepts in Crowdsourcing and Quality Control
    4. Hands-On Practice Session
    5. From Human Labels to Ground Truth: Problem of Answer Aggregation, Pairwise Comparisons, Crowd-Kit Library (a minimal usage sketch follows this outline)
    6. Conclusion: Discussion of Results, References
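
    To illustrate the aggregation step, here is a minimal sketch using the open-source Crowd-Kit library; the tiny answers table below is invented for illustration and is not data from the tutorial.

      import pandas as pd
      from crowdkit.aggregation import DawidSkene

      # Toy crowd annotations (invented): which of two ranked lists, "A" or "B",
      # a worker preferred for each task.
      answers = pd.DataFrame({
          "task":   ["q1", "q1", "q1", "q2", "q2", "q2"],
          "worker": ["w1", "w2", "w3", "w1", "w2", "w3"],
          "label":  ["A",  "A",  "B",  "B",  "B",  "B"],
      })

      # Dawid-Skene models per-worker confusion matrices to recover the most
      # likely ground-truth label for every task.
      ground_truth = DawidSkene(n_iter=100).fit_predict(answers)
      print(ground_truth)  # task -> aggregated label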

    All of the demonstrated methodology is platform-agnostic and can be freely adapted to a variety of applications. One can gather the judgments on any data labeling platform, from in-house setups to MTurk and Toloka. A related tutorial was previously presented at NAACL-HLT ’21, WWW ’21, CVPR ’20, SIGMOD ’20, WSDM ’20, and KDD ’19.

    We expect the attendees to understand the core concepts of recommender systems and to be able to write short scripts in Python; we do not require any knowledge of crowdsourcing. We will provide all the necessary definitions and icebreakers to accommodate a wider audience. We recommend that attendees bring their laptops for the hands-on practice session.

  • Hands-on Explainable Recommender Systems with Knowledge Graphs
    by Giacomo Balloccu (University of Cagliari, Italy), Ludovico Boratto (University of Cagliari, Italy),
    Gianni Fenu (University of Cagliari, Italy), and Mirko Marras (University of Cagliari, Italy)

    Regulations such as the European General Data Protection Regulation (GDPR) call for a right to explanation, meaning that, under certain conditions, it is mandatory by law to make users aware of how a model behaves. Explanations have also been shown to have benefits from a business perspective: they increase trust in the system, help users make decisions faster, and persuade users to try and buy. A notable class of decision-support systems that urgently needs explanation support is recommender systems, which often act as black boxes. Concerted efforts have been devoted to opening these black boxes and making recommendation a transparent social process, by augmenting traditional models of user-product interactions with external knowledge about the products and the users, often modeled as knowledge graphs.

    The goal of this tutorial is to present the RecSys community with recent advances on explainable recommender systems with knowledge graphs. We will first introduce conceptual foundations, by surveying the state of the art and describing real-world examples of how knowledge graphs are being integrated into the recommendation pipeline. This tutorial will continue with a systematic presentation of algorithmic solutions to model, integrate, train, and assess a recommender system with knowledge graphs, with particular attention to the explainability perspective. A practical part will then provide a series of concrete implementations, leveraging open-source tools and public datasets; in this part, tutorial participants will be engaged in the design of explanations accompanying the recommendations and in articulating their impact. We conclude this tutorial by analyzing emerging open issues and future directions in this vibrant research area.
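
    As a tiny, self-contained illustration of the path-based explanation idea (ours; the tutorial’s hands-on part uses richer open-source tools and public datasets), consider a toy knowledge graph in which the path connecting a user to a recommended item doubles as the explanation:

      import networkx as nx

      # Toy knowledge graph: users, items, and shared entities as nodes.
      kg = nx.Graph()
      kg.add_edge("user:alice", "movie:Heat", relation="watched")
      kg.add_edge("movie:Heat", "actor:Al Pacino", relation="starring")
      kg.add_edge("actor:Al Pacino", "movie:Scarface", relation="starring")

      # The path connecting the user to a recommended item becomes the explanation.
      path = nx.shortest_path(kg, "user:alice", "movie:Scarface")
      hops = [f'{u} -[{kg[u][v]["relation"]}]-> {v}' for u, v in zip(path, path[1:])]
      print("Recommended because:", "; ".join(hops))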

    Website: https://explainablerecsys.github.io/recsys2022/

  • Psychology-informed Recommender Systems
    by Elisabeth Lex (Graz University of Technology, Austria) and Markus Schedl (Johannes Kepler University Linz, Austria)

    Recommender systems are essential tools to support human decision-making. Many state-of-the-art recommender systems adopt advanced machine learning techniques to model and predict user preferences from behavioral data. While such systems can provide useful recommendations, their algorithm design commonly neglects the underlying psychological mechanisms that shape user preferences and behavior.

    In this tutorial, we will offer a comprehensive review of the state-of-the-art and progress in psychology-informed recommender systems, i.e., recommender systems that incorporate human cognitive processes, personality, and affective cues into recommendation models, along with definitions, strengths and weaknesses. We will show how such systems can improve the recommendation process in a user-centric fashion.

    The tutorial will cover the following subject matters:

    • Overview of traditional recommender systems: Content-based, collaborative filtering, context-aware, and hybrid recommender systems.
    • Taxonomy of psychology-informed recommender systems: Categorization of recommendation approaches that utilize psychological models to improve the recommendation process into cognition-inspired, personality-aware, and attention-aware recommender systems.
    • Cognition-inspired recommender systems: Cognitive models and cognitive architectures and how to exploit cognitive models of memory, stereotypes, and attention for recommender systems.
    • Personality-aware recommender systems: Modeling personality, acquiring personality traits, personality and item preferences, approaches that leverage personality traits for recommendation.
    • Affect-aware recommender systems: Definition of major affective cues (mood and emotion), modeling mood and emotion, acquiring affective cues, approaches that leverage affective cues for recommendation.
    • Grand challenges in the area of psychology-informed recommender systems.

    The tutorial is based on a recent survey article published by the presenters, available from: http://www.cp.jku.at/people/schedl/research/Publications/pdf/lex_fntir_2021.pdf
    Additional tutorial materials, including the slides, are made available at: https://socialcomplab.github.io/pirs-psychology-informed-recsys/

    Variants of this tutorial were previously presented at The Web Conference 2022 and the ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR) 2022.

  • Conversational Recommender System Using Deep Reinforcement Learning
    by Omprakash Sonie (DeepThinking.AI, India)

    Deep Reinforcement Learning (DRL) combines the best of Reinforcement Learning and Deep Learning to solve problems that neither can address individually. DRL has been used widely for games, robotics, etc., but limited work has been done on applying it to Conversational Recommender Systems (CRSs). Hence, this tutorial covers the application of DRL to CRSs.
    We give a conceptual introduction to Reinforcement Learning and Deep Reinforcement Learning and cover Deep Q-Networks (DQN), Dyna, REINFORCE, and Actor-Critic methods; a generic REINFORCE sketch follows below.
    We then cover various real-life case studies of increasing complexity, starting from basic CRSs and moving to deep CRSs, adaptivity, topic-guided CRSs, and deep, large-scale CRSs.
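
    As a flavor of the policy-gradient side, here is a generic REINFORCE sketch in PyTorch (illustrative only, not the tutorial’s code): a policy network scores a small set of hypothetical candidate items from dialogue-state features and is updated with Monte Carlo returns.

      import torch
      import torch.nn as nn

      # Policy over 4 hypothetical candidate items from an 8-dim dialogue state.
      policy = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
      opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

      def reinforce_update(states, actions, returns):
          # states: (T, 8) float dialogue-state features; actions: (T,) long item ids;
          # returns: (T,) float discounted rewards-to-go from the conversation.
          logp = torch.log_softmax(policy(states), dim=-1)
          chosen = logp[torch.arange(len(actions)), actions]
          loss = -(chosen * returns).mean()  # REINFORCE policy-gradient objective
          opt.zero_grad()
          loss.backward()
          opt.step()

      # Example call with random data:
      reinforce_update(torch.randn(16, 8), torch.randint(0, 4, (16,)), torch.randn(16))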
