Paper Session P8: Novel Machine Learning Approaches II

Session A: 18:30–20:00, chaired by Linas Baltrunas and Michael Ekstrand. Attend in Whova
Session B: 5:30–7:00, chaired by Zhenhua Dong and Linas Baltrunas. Attend in Whova

  • [LP] ImRec: Learning Reciprocal Preferences Using Images
    by James Neve (University of Bristol), Ryan McConville (University of Bristol)

    Reciprocal Recommender Systems are recommender systems for social platforms that connect people to people. They are commonly used in online dating, social networks and recruitment services. The main difference between these and conventional user-item recommenders, such as those found on a shopping service, is that they must consider the interests of both parties. In this study, we present a novel method of making reciprocal recommendations based on image data. Given a user’s history of positive and negative preference expressions on other users’ images, we train a Siamese network to identify images that fit a user’s personal preferences. We provide an algorithm that combines those individual preference indicators into a single reciprocal preference relation. Our evaluation was performed on a large real-world dataset provided by a popular online dating service. Based on this, our method significantly improves on previous state-of-the-art content-based solutions, and also outperforms collaborative filtering solutions in cold-start situations. The success of this model provides empirical evidence for the high importance of images in online dating.

    Full text in ACM Digital Library
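The two-stage approach described above can be sketched in a few lines: a shared embedding scores one user's images against another's preferences, and the two directional scores are folded into a single reciprocal score. This is a minimal numpy sketch, not the paper's architecture; the random projection stands in for a trained Siamese branch, and the harmonic-mean aggregation is one common reciprocal heuristic, assumed here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared projection: both images pass through the SAME weights,
# which is the defining property of a Siamese network.
W = rng.normal(size=(128, 16))

def embed(image_vec):
    """Project a flattened 128-d image vector into a 16-d preference space."""
    return np.tanh(image_vec @ W)

def preference_score(liked_img, candidate_img):
    """Cosine similarity between embeddings; higher = closer to liked images."""
    a, b = embed(liked_img), embed(candidate_img)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def reciprocal_score(score_a_to_b, score_b_to_a):
    """Harmonic mean of the two directional scores: both parties must be
    interested for the pair to rank highly."""
    return 2 * score_a_to_b * score_b_to_a / (score_a_to_b + score_b_to_a + 1e-9)
```

The harmonic mean (rather than, say, an arithmetic mean) penalizes one-sided matches, which is the key requirement of a reciprocal recommender.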

  • [LP] Cascading Hybrid Bandits: Online Learning to Rank for Relevance and Diversity
    by Chang Li (University of Amsterdam), Haoyun Feng (Bloomberg), Maarten de Rijke (University of Amsterdam)

    Relevance ranking and result diversification are two core areas in modern recommender systems. Relevance ranking aims at building a ranked list sorted in decreasing order of item relevance, while result diversification focuses on generating a ranked list of items that covers a broad range of topics. In this paper, we study an online learning setting that aims to recommend a ranked list with K items that maximizes the ranking utility, i.e., a list whose items are relevant and whose topics are diverse. We formulate it as the cascade hybrid bandits (CHB) problem. CHB assumes the cascading user behavior, where a user browses the displayed list from top to bottom, clicks the first attractive item, and stops browsing the rest. We propose a hybrid contextual bandit approach, called CascadeHybrid, for solving this problem. CascadeHybrid models item relevance and topical diversity using two independent functions and simultaneously learns those functions from user click feedback. We conduct experiments to evaluate CascadeHybrid on two real-world recommendation datasets, MovieLens and Yahoo! Music. Our experimental results show that CascadeHybrid outperforms the baselines. In addition, we prove theoretical guarantees on the n-step performance, demonstrating the soundness of CascadeHybrid.

    Full text in ACM Digital Library
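The cascade click model the abstract assumes is simple to simulate, which is also how bandit algorithms of this kind are usually evaluated offline. A minimal sketch, assuming per-item attraction probabilities (the linear blend of relevance and topical diversity mirrors the hybrid objective but is not the paper's exact scoring function):

```python
import random

def cascade_click(ranked_list, attraction_prob, rng=None):
    """Simulate cascading behavior: the user browses top to bottom, clicks
    the first attractive item, and stops. Returns the clicked position,
    or None if the user abandons the list without clicking."""
    rng = rng or random.Random(0)
    for pos, item in enumerate(ranked_list):
        if rng.random() < attraction_prob[item]:
            return pos
    return None

def hybrid_score(relevance, topic_gain, alpha=0.5):
    """Blend item relevance with its marginal topical-diversity gain;
    alpha trades off the two objectives when composing the list."""
    return alpha * relevance + (1 - alpha) * topic_gain
```

A consequence of the cascade assumption is that items below the clicked position yield no feedback, which is what makes credit assignment in CHB-style problems non-trivial.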

  • [LP] Contextual and Sequential User Embeddings for Large-Scale Music Recommendation
    by Casper Hansen (University of Copenhagen), Christian Hansen (University of Copenhagen), Lucas Maystre (Spotify), Rishabh Mehrotra (Spotify), Brian Brost (Spotify), Federico Tomasi (Spotify), Mounia Lalmas (Spotify)

    Recommender systems play an important role in providing an engaging experience on online music streaming services. However, the musical domain presents distinctive challenges to recommender systems: tracks are short, listened to multiple times, typically consumed in sessions with other tracks, and relevance is highly context-dependent. In this paper, we argue that modeling users’ preferences at the beginning of a session is a practical and effective way to address these challenges. Using a dataset from Spotify, a popular music streaming service, we observe that a) consumption from the recent past and b) session-level contextual variables (such as the time of the day or the type of device used) are indeed predictive of the tracks a user will stream—much more so than static, average preferences. Driven by these findings, we propose CoSeRNN, a neural network architecture that models users’ preferences as a sequence of embeddings, one for each session. CoSeRNN predicts, at the beginning of a session, a preference vector, based on past consumption history and current context. This preference vector can then be used in downstream tasks to generate contextually relevant just-in-time recommendations efficiently, by using approximate nearest-neighbour search algorithms. We evaluate CoSeRNN on session and track ranking tasks, and find that it outperforms the current state of the art by upwards of 10% on different ranking metrics. Dissecting the performance of our approach, we find that sequential and contextual information are both crucial.

    Full text in ACM Digital Library
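The retrieval step the abstract describes, scoring candidate tracks against a predicted session-level preference vector, can be sketched as follows. This is a brute-force numpy stand-in for the approximate nearest-neighbour index the paper mentions; the function name and shapes are illustrative assumptions, not CoSeRNN's API.

```python
import numpy as np

def recommend(session_pref_vec, track_embs, k=5):
    """Rank tracks by cosine similarity to the preference vector predicted
    at the start of a session. Exact search here; at Spotify scale this
    would be an approximate nearest-neighbour index instead."""
    track_norm = track_embs / np.linalg.norm(track_embs, axis=1, keepdims=True)
    u = session_pref_vec / np.linalg.norm(session_pref_vec)
    scores = track_norm @ u  # cosine similarity per track
    return np.argsort(-scores)[:k]  # indices of the top-k tracks
```

Because the preference vector is computed once per session rather than per candidate, the expensive sequence model runs only at session start and retrieval stays cheap.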

  • [LP] Exploiting Performance Estimates for Augmenting Recommendation Ensembles
    by Gustavo Penha (Delft University of Technology), Rodrygo L. T. Santos (CS Dept., UFMG)

    Ensembling multiple recommender systems via stacking has been shown to be effective at improving collaborative recommendation. Recent work extends stacking to use additional user performance predictors (e.g., the total number of ratings made by the user) to help determine how much each base recommender should contribute to the ensemble. Nonetheless, despite the cost of handcrafting discriminative predictors, which typically requires deep knowledge of the strengths and weaknesses of each recommender in the ensemble, only minor improvements have been observed. To overcome this limitation, instead of engineering complex features to predict the performance of different recommenders for a given user, we propose to directly estimate these performances by leveraging the user’s own historical ratings. Experiments on real-world datasets from multiple domains demonstrate that using performance estimates as additional features can significantly improve the accuracy of state-of-the-art ensemblers, achieving nDCG@20 improvements of 23% on average over not using them.

    Full text in ACM Digital Library
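The idea of feeding per-user performance estimates into the stacker can be sketched with a linear meta-model: each base recommender contributes its raw score plus a performance-gated copy of that score, so recommenders estimated to work well for this user get more weight. A minimal sketch under assumed names; the actual ensembler and feature set in the paper differ.

```python
def stack_predict(base_scores, perf_estimates, meta_weights):
    """Blend base recommender scores for one user-item pair.

    base_scores    -- one score per base recommender
    perf_estimates -- this user's estimated performance for each recommender
                      (e.g., derived from held-out historical ratings)
    meta_weights   -- learned weights of a linear meta-model, two per base
    """
    features = []
    for score, perf in zip(base_scores, perf_estimates):
        features.extend([score, score * perf])  # raw and performance-gated
    return sum(w * f for w, f in zip(meta_weights, features))
```

The gating term `score * perf` is what lets the meta-model modulate each recommender's contribution per user without any handcrafted discriminative predictors.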
