Accepted Contributions

List of all long papers accepted for RecSys 2019 (in alphabetical order).
Proceedings will be available in the ACM Digital Library.

  • A Comparison of Calibrated and Intent-Aware Recommendations
    by Mesut Kaya, Derek Bridge

    Calibrated and intent-aware recommendation are recent approaches to recommendation that have apparent similarities. Both try, to a certain extent, to cover the user’s interests, as revealed by her user profile. In this paper, we compare them in detail. On two datasets, we show the extent to which intent-aware recommendations are calibrated and the extent to which calibrated recommendations are diverse. We consider two ways of defining a user’s interests, one based on item features, the other based on subprofiles of the user’s profile. We find that defining interests in terms of subprofiles results in highest precision and the best relevance/diversity trade-off. Along the way, we define a new version of calibrated recommendation and three new evaluation metrics.
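
    A minimal sketch of the general calibration idea this comparison builds on (not the paper's new metrics): measure how closely the genre distribution of a recommendation list matches the genre distribution of the user's profile, for example with a smoothed KL divergence. The helper names, toy data, and smoothing constant below are illustrative assumptions.

```python
import numpy as np

def genre_distribution(items, item_genres, genres):
    """Empirical distribution over genres for a list of items."""
    counts = np.zeros(len(genres))
    for item in items:
        for g in item_genres[item]:
            counts[genres.index(g)] += 1.0 / len(item_genres[item])
    return counts / counts.sum()

def calibration_kl(profile_items, recommended_items, item_genres, genres, alpha=0.01):
    """KL(p || q~): p comes from the user's profile, q from the recommendation
    list; q is smoothed towards p so the divergence stays finite."""
    p = genre_distribution(profile_items, item_genres, genres)
    q = genre_distribution(recommended_items, item_genres, genres)
    q_smoothed = (1 - alpha) * q + alpha * p
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q_smoothed[mask])))

# toy example: a drama-heavy profile versus a comedy-leaning recommendation list
item_genres = {"m1": ["drama"], "m2": ["comedy"], "m3": ["drama", "comedy"]}
genres = ["drama", "comedy"]
print(calibration_kl(["m1", "m1", "m2"], ["m3", "m2"], item_genres, genres))
```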

  • A Deep Learning System for Predicting Size and Fit in Fashion E-Commerce
    by Abdul Saboor Sheikh, Romain Guigourès, Evgenii Koriagin, Yuen King Ho, Reza Shirvany, Roland Vollgraf, Urs Bergmann

    Personalized size recommendations bear crucial significance for any e-commerce fashion platform. Predicting the right size (or fit) drives customer satisfaction, and benefits the business by reducing overhead and costs that are incurred due to size-related returns. Traditional collaborative filtering algorithms seek to model customer preferences based on their previous orders and purchases. A typical challenge for these methods stems from the extreme sparsity of the purchase matrix: customers often only have a handful of transactions in their purchase histories. To alleviate this problem, we propose a hybrid content-collaborative deep learning based methodology for personalized size recommendation. Our proposed method does not require any a priori knowledge about underlying size systems (e.g. EU or American size) and it can ingest arbitrary customer and article data. It is also equipped with the capacity to model multiple individuals or intents behind a single customer account. The model we employ optimizes a global set of parameters to learn a population-level abstraction of size and fit from observed customer-article interactions. By employing entity-specific latent variables, we further enable our model to represent implicit properties of customers and articles for predicting size and fit. We then derive personalized size recommendations by mapping both content-based information as well as entity-specific representations of customers and articles into a semantic-free latent space. We provide experimental results and demonstrate that our approach outperforms state-of-the-art methodologies on two recent public datasets (ModCloth and RentTheRunWay), and two large-scale in-house datasets.

  • A Pareto-Efficient Algorithm for Multiple Objective Optimization in E-Commerce Recommendation
    by Xiao Lin, Hongjie Chen, Changhua Pei, Fei Sun, Xuanji Xiao, Hanxiao Sun, Yongfeng Zhang, Wenwu Ou, Peng Jiang

    Recommendation with multiple objectives is an important but difficult problem, where the inherent difficulty lies in the possible conflicts between objectives. In this case, multi-objective optimization is expected to be Pareto efficient, where no single objective can be further improved without hurting the others. However, existing approaches to Pareto efficient multi-objective recommendation still lack good theoretical guarantees. In this paper, we propose a general framework for generating Pareto efficient recommendations. Assuming that there are formal differentiable formulations for the objectives, we coordinate these objectives with a weighted aggregation. Then we propose a condition ensuring Pareto efficiency theoretically and a two-step Pareto efficient optimization algorithm. Meanwhile, the algorithm can be easily adapted for Pareto Frontier generation and fair recommendation selection. We specifically apply the proposed framework on E-Commerce recommendation to optimize GMV and CTR simultaneously. Extensive online and offline experiments are conducted on the real-world E-Commerce recommender system and the results validate the Pareto efficiency of the framework. To the best of our knowledge, this work is among the first to provide a Pareto efficient framework for multi-objective recommendation with theoretical guarantees. Moreover, the framework can be applied to any other objectives with differentiable formulations and any model with gradients, which shows its strong scalability.
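
    The weighted-aggregation idea in the abstract can be illustrated with the standard two-objective min-norm construction, a generic Pareto-descent step rather than the paper's exact condition: pick the convex combination of the two loss gradients with smallest norm, then take a gradient step along it. The gradient functions below are hypothetical stand-ins for differentiable CTR and GMV losses.

```python
import numpy as np

def min_norm_weight(g1, g2):
    """Weight w in [0, 1] minimising ||w*g1 + (1-w)*g2||^2 (closed form for two objectives)."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        return 0.5
    w = float((g2 - g1) @ g2) / denom
    return min(max(w, 0.0), 1.0)

def pareto_step(theta, grad_ctr, grad_gmv, lr=0.01):
    """One gradient step on a weighted aggregation of two objectives,
    with the weights chosen by min_norm_weight."""
    g1, g2 = grad_ctr(theta), grad_gmv(theta)
    w = min_norm_weight(g1, g2)
    return theta - lr * (w * g1 + (1.0 - w) * g2)

# toy quadratic objectives with different optima (at 1 and -1)
grad_a = lambda x: 2.0 * (x - 1.0)
grad_b = lambda x: 2.0 * (x + 1.0)
theta = np.array([3.0])
for _ in range(200):
    theta = pareto_step(theta, grad_a, grad_b)
print(theta)  # approaches the Pareto set of the two objectives
```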

  • A Recommendation System for Heterogeneous and Time Sensitive Environment
    by Meng Wu, John Kolen, Bhargav Rajendra, Yunqi Zhao, Navid Aghaie, Kazi Zaman

    The digital game industry has recently adopted recommender systems to provide the most suitable content and next best activity suggestions to players. The recommender system needs to work in a highly heterogeneous and time sensitive environment, because of the diverse game designs and dynamic experience. In this paper, we describe a recommender system at a digital game company which aims to provide recommendations in as many areas as possible with minimal effort to integrate and operate. The system leverages a unified data platform, standardized context and tracking information, robust contextual multi-armed bandit algorithms, and an experimentation platform for extensibility as well as flexibility. Several games and applications have launched successfully with the recommender system, with significant improvements.

  • Addressing Delayed Feedback for Continuous Training with Neural Networks in CTR prediction
    by Sofia Ira Ktena, Alykhan Tejani, Lucas Theis, Pranay Kumar Myana, Deepak Dilipkumar, Ferenc Huszar, Steven Yoo, Wenzhe Shi

    One of the challenges in display advertising is that the distribution of features and click through rate (CTR) can exhibit large shifts over time due to seasonality, changes to ad campaigns and several other factors. The predominant strategy to keep up with these shifts is to train predictive models continuously, on fresh data, in order to prevent them from becoming stale. However, in many ad systems positive labels are only observed after a possibly long and random delay. These delayed labels pose a challenge to data freshness in continuous training: fresh data may not have complete label information at the time they are ingested by the training algorithm. Naive strategies which consider any data point a negative example until a positive label becomes available tend to underestimate CTR, resulting in inferior user experience and suboptimal performance for advertisers. The focus of this paper is to identify the best combination of loss functions and models that enable large-scale learning from a continuous stream of data in the presence of delayed labels. In this work, we compare five different loss functions, three of them applied to the delayed feedback problem for the first time. We benchmark their performance in offline settings on both public and proprietary datasets in conjunction with shallow and deep model architectures. We also discuss the engineering cost associated with implementing each loss function in a production environment. Finally, we carried out online experiments with the top performing methods, in order to validate their performance in a continuous training scheme. While training on 668 million in-house data points with neural networks offline, our proposed methods outperform the previous state-of-the-art by 3% RCE. During online experiments, we observed a 55% RPMq gain against the naive log loss.
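
    To make the bias the abstract warns about concrete, here is a minimal sketch of the naive ingestion strategy (not any of the paper's proposed losses): every impression enters the continuous training stream as a negative, and a correcting positive is injected only when the possibly very late click arrives. The event tuples and names are illustrative.

```python
def continuous_training_stream(events):
    """events: time-ordered (user, ad, kind) tuples, kind in {'impression', 'click'}.
    Yields (user, ad, label) the way a naive continuous trainer would ingest them."""
    for user, ad, kind in events:
        if kind == "impression":
            yield user, ad, 0      # assumed negative the moment the impression is seen
        else:
            yield user, ad, 1      # correcting positive arrives only after the click delay

events = [("u1", "adA", "impression"),
          ("u2", "adA", "impression"),
          ("u1", "adA", "click")]                  # delayed click for the first impression
print(list(continuous_training_stream(events)))
# The positive rate in the ingested stream is 1/3, while the true CTR here is 1/2:
# the duplicated fake negative biases a naively trained model downwards.
```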

  • Adversarial Attacks on an Oblivious Recommender
    by Konstantina Christakopoulou, Arindam Banerjee

    Can machine learning models for recommendation be easily fooled? Despite the recent surge of interest in learned adversarial attacks in other domains (e.g. classification, graphs), in the context of recommendation systems this question has mainly been answered using hand-engineered fake user profiles. This paper attempts to reduce this gap. We provide a formulation for learning to attack a recommender as a repeated general-sum game between two players, i.e., an adversary and a recommender oblivious to the adversary’s existence. We consider the challenging case of poisoning attacks, which focus on the training phase of the machine learning (recommender) model. We generate adversarial user profiles targeting subsets of users and/or items, or generally the top-K recommendation quality. Moreover, we ensure that the adversarial user profiles remain unnoticeable by preserving proximity of the real user rating distribution with the adversarial fake user distribution. To cope with the challenge of the adversary not having access to the gradient of the recommender’s objective with respect to the fake user profiles, we provide a non-trivial algorithm building upon zero-order optimization techniques. We offer a wide range of experiments, instantiating the proposed method for the case of the classic popular approach of a low-rank recommender, and illustrating the extent of the recommender’s vulnerability to a variety of adversarial intents. Importantly, we posit that these results can serve as a motivating point for more research into recommender defense strategies against machine learned attacks.
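
    A minimal sketch of the generic zero-order (two-point) gradient estimator that this style of attack relies on when the recommender's gradients are unavailable; it is not the paper's full algorithm, and the `attack_loss` objective below is a hypothetical black box over the fake user profiles.

```python
import numpy as np

def zeroth_order_grad(attack_loss, fake_profiles, mu=0.01, n_samples=20, rng=None):
    """Estimate the gradient of a black-box objective w.r.t. the fake user
    profiles using random two-point finite differences."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(fake_profiles)
    for _ in range(n_samples):
        u = rng.standard_normal(fake_profiles.shape)
        delta = attack_loss(fake_profiles + mu * u) - attack_loss(fake_profiles - mu * u)
        grad += (delta / (2.0 * mu)) * u
    return grad / n_samples

# toy black-box objective: push the fake ratings towards a target pattern
target = np.ones((2, 5))
loss = lambda x: float(np.sum((x - target) ** 2))
profiles = np.zeros((2, 5))
for _ in range(100):
    profiles -= 0.05 * zeroth_order_grad(loss, profiles)
print(np.round(profiles, 2))  # approaches the target pattern without using true gradients
```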

  • Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches
    by Maurizio Ferrari Dacrema, Paolo Cremonesi, Dietmar Jannach

    Deep learning techniques have become the method of choice for researchers working on algorithmic aspects of recommender systems. With the strongly increased interest in machine learning in general, it has, as a result, become difficult to keep track of what represents the state-of-the-art at the moment, e.g., for top-n recommendation tasks. At the same time, several recent publications point out problems in today’s research practice in applied machine learning, e.g., in terms of the reproducibility of the results or the choice of the baselines when proposing new models. In this work, we report the results of a systematic analysis of algorithmic proposals for top-n recommendation tasks. Specifically, we considered 18 algorithms that were presented at top-level research conferences in recent years. Only 7 of them could be reproduced based on the provided code. For these methods, however, it turned out that 6 of them can often be outperformed with comparably simple heuristic methods based on nearest-neighbor techniques. The remaining one clearly outperformed the baselines but did not consistently outperform a well-tuned non-neural linear ranking method. Overall, our work sheds light on a number of potential problems in today’s machine learning scholarship and calls for improved scientific practices in this area.

  • Attribute-Aware Non-Linear Co-Embeddings of Graph Features
    by Ahmed Rashed, Josif Grabocka, Lars Schmidt-Thieme

    In very sparse recommender data sets, attributes of users such as age, gender and home location and attributes of items such as, in the case of movies, genre, release year, and director can improve the recommendation accuracy, especially for users and items that have few ratings. While most recommendation models can be extended to take attributes of users and items into account, their architectures usually become more complicated. While attributes for items are often easy to provide, attributes for users are often scarce for reasons of privacy or simply because they are not relevant to the operational process at hand. In this paper, we address these two problems for attribute-aware recommender systems by proposing a simple model that co-embeds users and items into a joint latent space in a similar way as a vanilla matrix factorization, but with a non-linear latent feature construction that can seamlessly ingest user or item attributes or both (GraphRec). To address the second problem, scarce attributes, the proposed model treats the user-item relation as a bipartite graph and constructs generic user and item attributes via the Laplacian of the user-item co-occurrence graph that requires no further external side information but the mere rating matrix. In experiments on three recommender datasets, we show that GraphRec significantly outperforms existing state-of-the-art attribute-aware and content-aware recommender systems even without using any side information.
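
    A minimal sketch of how generic attributes can be derived from the rating matrix alone, as described above: build the bipartite user-item co-occurrence graph and use rows of its normalised Laplacian as side-information vectors. This is a plain illustration of the idea, not GraphRec's exact construction.

```python
import numpy as np

def laplacian_attributes(ratings):
    """ratings: (n_users, n_items); nonzero entries mark interactions.
    Returns per-user and per-item attribute vectors taken from the rows of the
    normalised Laplacian of the bipartite user-item co-occurrence graph."""
    n_users, n_items = ratings.shape
    interactions = (ratings != 0).astype(float)
    n = n_users + n_items
    adj = np.zeros((n, n))                       # bipartite adjacency over users + items
    adj[:n_users, n_users:] = interactions
    adj[n_users:, :n_users] = interactions.T
    degree = adj.sum(axis=1)
    d_inv_sqrt = np.zeros(n)
    d_inv_sqrt[degree > 0] = degree[degree > 0] ** -0.5
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return lap[:n_users], lap[n_users:]

ratings = np.array([[5, 0, 3], [0, 4, 0], [1, 0, 0]], dtype=float)
user_attrs, item_attrs = laplacian_attributes(ratings)
print(user_attrs.shape, item_attrs.shape)        # (3, 6) (3, 6)
```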

  • CB2CF: A Neural Multiview Content-to-Collaborative Filtering Model for Cold Item Recommendations
    by Oren Barkan, Noam Koenigstein, Eylon Yogev, Ori Katz

    In Recommender Systems research, algorithms are often characterized as either Collaborative Filtering (CF) or Content Based (CB). CF algorithms are trained using a dataset of user preferences while CB algorithms are typically based on item profiles. These approaches harness different data sources and therefore the resulting recommended items are generally very different. This paper presents the CB2CF, a deep neural multiview model that serves as a bridge from items’ content into their CF representations. CB2CF is a “real-world” algorithm designed for Microsoft Store services that handle around a billion users world-wide. CB2CF is demonstrated on movies and apps recommendations, where it is shown to outperform other existing models on cold items for which usage data is not available.

  • Collective Embedding for Neural Context-Aware Recommender Systems
    by Felipe Soares da Costa, Peter Dolog

    Context-aware recommender systems consider contextual features as additional information to predict user’s preferences. For example, the recommendations could be based on time, location, or the company of other people. Among the contextual information, time became an important feature because user preferences tend to change over time or be similar in the near future. Researchers have proposed different models to incorporate time into their recommender system, however, the current models are not able to capture specific temporal patterns. To address the limitation observed in previous works, we propose Collective embedding for Neural Context-Aware Recommender Systems (CoNCARS). The proposed solution jointly models the item, user and time embeddings to capture temporal patterns. Then, CoNCARS uses the outer product to model the user-item-time correlations between dimensions of the embedding space. The hidden features feed our Convolutional Neural Networks (CNNs) to learn the non-linearities between the different features. Finally, we combine the output from our CNNs in the fusion layer and then predict the user’s preference score. We conduct extensive experiments on real-world datasets, demonstrating that CoNCARS improves the top-N item recommendation task and outperforms the state-of-the-art recommendation methods.
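
    A minimal sketch, assuming nothing about CoNCARS's exact architecture, of the outer-product step the abstract describes: combining user, item and time embeddings into an interaction map (or cube) that convolutional layers can then consume. The embedding values are random placeholders.

```python
import numpy as np

d = 8
rng = np.random.default_rng(0)
user_emb = rng.standard_normal(d)
item_emb = rng.standard_normal(d)
time_emb = rng.standard_normal(d)

# pairwise outer product: a (d, d) interaction map between user and item
ui_map = np.outer(user_emb, item_emb)

# three-way outer product: a (d, d, d) user-item-time interaction cube,
# which can be fed to (3D) convolutional layers to learn non-linearities
uit_cube = np.einsum("i,j,k->ijk", user_emb, item_emb, time_emb)

print(ui_map.shape, uit_cube.shape)  # (8, 8) (8, 8, 8)
```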

  • Deep Generative Ranking for Personalized Recommendation
    by Huafeng Liu, Jingxuan Wen, Liping Jing, Jian Yu

    Recommender systems offer critical services in the age of mass information. Personalized ranking has been attractive both for content providers and customers due to its ability of creating a user-specific ranking on the item set. Although powerful factor-analysis methods including latent factor models and deep neural network models have achieved promising results, they still suffer from challenging issues, such as sparsity of recommendation data and uncertainty of optimization. To enhance the accuracy and generalization of recommender systems, in this paper, we propose a deep generative ranking (DGR) model under the Wasserstein auto-encoder framework. Specifically, DGR simultaneously generates the pointwise implicit feedback data (via a Beta-Bernoulli distribution) and creates the pairwise ranking list by sufficiently exploiting both interacted and non-interacted items for each user. DGR can be efficiently inferred by minimizing its penalized evidence lower bound. Meanwhile, we theoretically analyze the generalization error bounds of the DGR model to guarantee its performance in extremely sparse feedback data. A series of experiments on four large-scale datasets (Movielens (20M), Netflix, Epinions and Yelp in movie, product and business domains) have been conducted. By comparing with the state-of-the-art methods, the experimental results demonstrate that DGR consistently benefits the recommendation system in the ranking estimation task, especially for near-cold-start users (with fewer than five interacted items).

  • Deep Language-based Critiquing for Recommender Systems
    by Ga Wu, Kai Luo, Scott Sanner, Harold Soh

    Critiquing is a method for conversational recommendation that adapts recommendations in response to user preference feedback regarding item attributes. Historical critiquing methods were largely based on items with a fixed set of known attributes and constraint- and utility-based methods for modifying recommendations w.r.t. these critiqued attributes. In this paper, we revisit the critiquing approach from the lens of deep learning based recommendation methods and language-based interaction. Concretely, we propose an end-to-end deep learning framework with two variants — one deterministic and one probabilistic — that extend the Neural Collaborative Filtering architecture with explanation and critiquing components; these architectures not only predict personalized keyphrases for a user and item but also embed language-based feedback in the latent space that in turn modulates subsequent critiqued recommendations. We evaluate the proposed framework on two recommendation datasets containing user reviews. The empirical results show that our modified NCF approach not only provides a strong baseline recommender and high-quality personalized item keyphrase suggestions, but that it also properly suppresses items predicted to have a critiqued keyphrase. We further note that the variational probabilistic approach we propose yields the most compatible co-embeddings of user and item preferences with language-based critiques as evidenced by our results. In summary, this paper provides a first step to unify deep recommendation and language-based feedback in what we hope to be a rich space for future research in deep critiquing for conversational recommendation.

  • Deep Social Collaborative Filtering
    by Wenqi Fan, Yao Ma, Dawei Yin, Jianping Wang, Jiliang Tang, Qing Li

    Recommender systems are crucial to alleviate the information overload problem in online worlds. Most of the modern recommender systems capture users’ preference towards items via their interactions based on collaborative filtering techniques. In addition to the user-item interactions, social networks can also provide useful information to understand users’ preference as suggested by the social theories such as homophily and influence. Recently, deep neural networks have been utilized for social recommendations, which facilitate both the user-item interactions and the social network information. However, most of these models cannot take full advantage of the social network information. They only use information from direct neighbors, but distant neighbors can also provide helpful information. Meanwhile, most of these models treat neighbors’ information equally without considering the specific recommendations. However, for a specific recommendation case, the information relevant to the specific item would be helpful. Besides, most of these models do not explicitly capture the neighbors’ opinions on items for social recommendations, while different opinions could affect the user differently. In this paper, to address the aforementioned challenges, we propose DSCF, a Deep Social Collaborative Filtering framework, which can exploit the social relations with various aspects for recommender systems. Comprehensive experiments on two real-world datasets show the effectiveness of the proposed framework.

  • Domain Adaptation in Display Advertising: An Application for Partner Cold-Start
    by Karan Aggarwal, Pranjul Yadav, Sathiya Keerthi

    The digital advertisement industry connects partners to potentially interested online users through advertisements. Within the digital advertisement domain, there are multiple platforms, e.g., user retargeting and prospecting. Re-targeting refers to a scenario when advertisements of the partner are displayed to users who have been recently shown that partner’s advertisement. Prospecting, on the other hand, refers to advertisement displays to users who have never been exposed to the partner’s advertisements. Partners usually start with re-targeting campaigns and later employ prospecting campaigns to reach out to an untapped customer base. For any prospecting platform, recommending several thousands of users from a pool of billions of users to a partner-specific campaign is a challenging problem. There are two major challenges involved. The first challenge is successful on-boarding of a new partner on the prospecting platform, referred to as the partner cold-start problem. The second challenge revolves around the ability to leverage large amounts of re-targeting data for the partner cold-start problem. This paper is the first work that studies domain adaptation for the partner cold-start problem. To this end, we propose two domain adaptation techniques, SDA-DANN and SDA-Ranking. SDA-DANN and SDA-Ranking extend domain adaptation techniques for partner cold-start by incorporating sub-domain similarities (product category level information). Through rigorous experiments, we demonstrate that our method SDA-DANN outperforms baseline domain adaptation techniques on a real-world dataset obtained from a major online advertiser. Furthermore, we show that our proposed technique SDA-Ranking outperforms baseline methods for low CTR partners.

  • Efficient Privacy-preserving Recommendations based on Social Graphs
    by Aidmar Wainakh, Tim Grube, Jörg Daubert, Max Mühlhäuser

    Recommender systems use association rules mining, a technique that captures relations between user interests and recommends new potential ones accordingly. Applying association rule mining causes privacy concerns as user interests may contain sensitive personal information (e.g., political views). This potentially even inhibits the user from providing information in the first place. Current distributed privacy-preserving association rules mining (PPARM) approaches use cryptographic primitives that come with high computational and communication costs, rendering PPARM unsuitable for large-scale applications such as social networks. We propose improvements on the efficiency and the privacy of PPARM approaches by minimizing the required data. We propose and compare sampling strategies to sample the data based on social graphs in a privacy-preserving manner. The results on real-world datasets show that our sampling-based approach can achieve a high average precision score with as low as 50% sampling rate and, therefore, with a 50% reduction of communication cost.

  • Explaining and exploring job recommendations: a user-driven approach for interacting with knowledge-based job recommender systems
    by Francisco Gutiérrez, Robin De Croon, Nyi Nyi Htun, Katrien Verbert

    The economic context in which we find ourselves is characterized by rapid changes in the field of business as such, but also regarding the technologies used and the organization of work. The dynamics of the labor market and the tasks with which jobs are being composed are continuously evolving. Job seekers are expected to be able to deal with these changes in an efficient manner, and to move easily within this transitional labor market. Such mobility is not evident, and providing effective recommendations in this context has also been found to be particularly challenging. In this paper, we present Labor Market Explorer, an interactive dashboard that enables job seekers to explore the labor market in a personalized way. Through a user-centered design process involving job seekers and job mediators, we developed this dashboard to enable job seekers to explore job recommendations and their required competencies, as well as how these competencies map to their profile. Job seekers can engage with various overview visualization components as well as diverse kinds of filters to explore recommendations and to gain actionable insights. Evaluation results indicate the dashboard empowers job seekers to explore, understand, and find relevant vacancies, mostly independent of their background and age.

  • Efficient Similarity Computation for Collaborative Filtering in Dynamic Environments
    by Olivier Jeunen, Koen Verstrepen, Bart Goethals

    The problem of computing all pairwise similarities in a large collection of vectors is a well-known and common data mining task. As the number and dimensionality of these vectors keeps increasing, however, currently existing approaches are often unable to meet the strict efficiency requirements imposed by the environments they need to perform in. Real-time neighbourhood-based collaborative filtering (CF) is one example of such an environment in which performance is critical. In this work, we present a novel algorithm for efficient and exact similarity computation between sparse, high-dimensional vectors. Our approach exploits the sparsity that is inherent to implicit feedback data-streams, entailing significant gains compared to other methods. Furthermore, as our model learns incrementally, it is naturally suited for dynamic real-time CF environments. We propose a MapReduce-inspired parallelisation procedure along with our method, and show how even more speed-up can be achieved. Additionally, in many real-world systems, many items are actually not recommendable at any given time, due to recency, stock, seasonality, or enforced business rules. We exploit this fact to further improve the computational efficiency of our approach. Experimental evaluation on both real-world and publicly available datasets shows that our approach scales up to millions of processed user-item interactions per second, and well advances the state-of-the-art.
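
    A minimal sketch of the kind of incremental, exact similarity maintenance the abstract describes (not the paper's algorithm or its parallelisation): for binary implicit feedback, keep per-item counts and pairwise co-occurrence counts, update them as each (user, item) interaction streams in, and read off exact cosine similarities on demand.

```python
from collections import defaultdict
from math import sqrt

class IncrementalCosine:
    """Exact cosine similarity between items under binary implicit feedback,
    updated one (user, item) interaction at a time."""

    def __init__(self):
        self.user_items = defaultdict(set)       # user -> items interacted with
        self.item_count = defaultdict(int)       # item -> number of users
        self.cooc = defaultdict(int)             # (item_a, item_b) -> co-occurrences

    def update(self, user, item):
        if item in self.user_items[user]:
            return
        for other in self.user_items[user]:
            key = (min(item, other), max(item, other))
            self.cooc[key] += 1
        self.user_items[user].add(item)
        self.item_count[item] += 1

    def similarity(self, a, b):
        key = (min(a, b), max(a, b))
        denom = sqrt(self.item_count[a] * self.item_count[b])
        return self.cooc[key] / denom if denom else 0.0

sim = IncrementalCosine()
for user, item in [("u1", "i1"), ("u1", "i2"), ("u2", "i1"), ("u2", "i2"), ("u3", "i1")]:
    sim.update(user, item)
print(round(sim.similarity("i1", "i2"), 3))  # 0.816
```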

  • FiBiNET: Combining Feature Importance and Bilinear feature Interaction for Click-Through Rate Prediction
    by Tongwen Huang, Zhiqi Zhang, Junlin Zhang

    Advertising and feed ranking are essential to many Internet companies such as Facebook and Sina Weibo. Among many real-world advertising and feed ranking systems, click through rate (CTR) prediction plays a central role. There are many proposed models in this field such as logistic regression, tree based models, factorization machine based models and deep learning based CTR models. However, many current works calculate the feature interactions in a simple way such as Hadamard product and inner product and they care less about the importance of features. In this paper, a new model named FiBiNET as an abbreviation for Feature Importance and Bilinear feature Interaction NETwork is proposed to dynamically learn the feature importance and fine-grained feature interactions. On the one hand, the FiBiNET can dynamically learn the importance of features via the Squeeze-Excitation network (SENET) mechanism. On the other hand, it is able to effectively learn the feature interactions via bilinear function. We conduct extensive experiments on two real-world datasets and show that our shallow model outperforms other shallow models such as factorization machine (FM) and field-aware factorization machine (FFM). In order to improve performance further, we combine a classical deep neural network (DNN) component with the shallow model to be a deep model. The deep FiBiNET consistently outperforms the other state-of-the-art deep models such as DeepFM and extreme deep factorization machine (XdeepFM).
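
    A minimal numpy sketch of the two ingredients the abstract names, under assumed shapes and random weights: a SENET-style squeeze-and-excitation step that re-weights per-field embeddings, followed by a bilinear interaction between one pair of fields. It is illustrative only, not FiBiNET's exact parameterisation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fields, d, r = 4, 8, 2                         # feature fields, embedding size, reduction ratio
E = rng.standard_normal((n_fields, d))           # one embedding per feature field

# --- SENET-style feature importance ---
z = E.mean(axis=1)                               # squeeze: one summary value per field
W1 = rng.standard_normal((n_fields, n_fields // r))
W2 = rng.standard_normal((n_fields // r, n_fields))
a = np.maximum(z @ W1, 0.0) @ W2                 # excitation: two small dense layers
weights = 1.0 / (1.0 + np.exp(-a))               # per-field importance in (0, 1)
E_reweighted = weights[:, None] * E              # rescale each field embedding

# --- bilinear interaction between fields 0 and 1 ---
W_bilinear = rng.standard_normal((d, d))
p_01 = (E_reweighted[0] @ W_bilinear) * E_reweighted[1]   # element-wise after projection

print(weights.round(2), p_01.shape)              # field importances, (8,)
```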

  • HybridSVD: When Collaborative Information is Not Enough
    by Evgeny Frolov, Ivan Oseledets

    We propose a new hybrid algorithm that allows incorporating both user and item side information within the standard collaborative filtering approach. One of its key features is that it naturally extends a simple PureSVD approach and inherits its unique advantages such as a highly efficient Lanczos-based optimization procedure, simplified hyper-parameter tuning and a quick folding-in computation for generating recommendations instantly even in highly dynamic online environments. The algorithm exploits a generalized formulation of the singular value decomposition, which adds flexibility to the solution and allows imposing the desired structure on its latent space. The resulting model also admits an efficient and straightforward solution for the cold start scenario. We evaluate our approach on a diverse set of datasets and show its superiority over similar classes of hybrid models.
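
    A minimal sketch of the PureSVD folding-in step that the abstract says HybridSVD inherits, shown here for plain PureSVD without the side-information-aware generalisation: factor the rating matrix once, then score a new or updated user instantly by projecting their rating vector onto the item factors.

```python
import numpy as np

def pure_svd_item_factors(R, rank):
    """Truncated SVD of the rating matrix; returns item factors V (n_items, rank)."""
    _, _, vt = np.linalg.svd(R, full_matrices=False)
    return vt[:rank].T

def fold_in_scores(user_ratings, V):
    """Scores for all items for a (possibly new) user, computed instantly
    from their rating vector: r_u V V^T."""
    return (user_ratings @ V) @ V.T

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)
V = pure_svd_item_factors(R, rank=2)
new_user = np.array([0, 0, 5, 4], dtype=float)   # ratings of a user unseen at training time
print(np.round(fold_in_scores(new_user, V), 2))
```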

  • Latent Factor Models and Aggregation Operators for Collaborative Filtering in Reciprocal Recommender Systems
    by James Neve, Ivan Palomares Carrascosa

    Online dating platforms help to connect people who might potentially be a good match for each other. They have exerted a significant societal impact over the last decade, such that about one third of new relationships in the US are now started online, for instance. Recommender Systems are widely utilized in online platforms that connect people to people in e.g. online dating and recruitment sites. These recommender approaches are fundamentally different from traditional user-item approaches (such as those operating on movie and shopping sites), in that they must consider the interests of both parties jointly. Latent factor models have been notably successful in the area of user-item recommendation, however they have not been investigated within user-to-user domains as of yet. In this study, we present a novel method for reciprocal recommendation using latent factor models. We also provide a first analysis of the use of different preference aggregation strategies, thereby demonstrating that the aggregation function used to combine user preference scores has a significant impact on the outcome of the recommender system. Our evaluation results report significant improvements over previous nearest-neighbour and content-based methods for reciprocal recommendation, and show that the latent factor model can be used effectively on much larger datasets than previous state-of-the-art reciprocal recommender systems.
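
    A minimal sketch of the preference-aggregation step discussed above: a reciprocal score combines the predicted preference of x for y with that of y for x, and the choice of aggregation function changes which pairs rank highly. The `predict` function is a hypothetical stand-in for any latent factor model.

```python
def arithmetic_mean(a, b):
    return (a + b) / 2.0

def harmonic_mean(a, b):
    return 2.0 * a * b / (a + b) if (a + b) > 0 else 0.0

def reciprocal_score(x, y, predict, aggregate):
    """Aggregate the two directional preference predictions into one score."""
    return aggregate(predict(x, y), predict(y, x))

# toy directional preferences: x likes y a lot, y barely likes x
predict = lambda a, b: {("x", "y"): 0.9, ("y", "x"): 0.1}[(a, b)]
print(reciprocal_score("x", "y", predict, arithmetic_mean))             # 0.5
print(round(reciprocal_score("x", "y", predict, harmonic_mean), 2))     # 0.18, penalises one-sided matches
```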

  • Leveraging Post-click Feedback for Content Recommendations
    by Hongyi Wen, Longqi Yang, Deborah Estrin

    Implicit feedback (e.g., clicks) is used widely in content recommendations. However, click signals only reflect users’ preferences according to their first impression. They do not capture the extent to which users actually engage with the content. In this paper, we leverage post-click feedback, e.g., skips and completions, to improve the training and evaluation of content recommenders. Specifically, we first experiment with existing collaborative filtering algorithms and find that they perform poorly against post-click-aware ranking metrics. Based on the insights from the experiments, we develop a generic probabilistic framework to fuse click and post-click signals. We show that our framework can be applied to improve pointwise and pairwise recommendation models. Through extensive evaluations on a short-video and music dataset, our approach is shown to outperform existing methods by 18.3% and 2.5% in terms of Area Under the Curve (AUC). We discuss the effectiveness of our approach across content domains and trade-offs in weighting various user feedback signals.

  • LORE: A Large-Scale Offer Recommendation Engine with Eligibility and Capacity Constraints
    by Rahul Makhijani, Shreya Chakrabarti, Yi Liu, Dale Struble

    Businesses, such as Amazon, department store chains, home furnishing store chains, Uber, and Lyft, frequently offer deals, product discounts and incentives to drive sales, increase new product acceptance and engage with users. In order to appeal to diverse user groups, these businesses typically design more than one promotion offer but market different ones to different users. For instance, Uber offers a percentage discount in the rides to some users and a low fixed price to others. In this paper, we propose solutions to optimally recommend promotions and items to maximize user conversion constrained by user eligibility and item or offer capacity (limited quantity of items or offers) simultaneously. We achieve this through an offer recommendation model based on Min-Cost Flow network optimization, which enables us to satisfy the constraints within the optimization itself and solve it in polynomial time. We present two approaches that can be used in various settings: single period solution and sequential time period offering. We evaluate these approaches against competing methods using counterfactual evaluation in offline mode. We also discuss three practical aspects that may affect online performance of constrained optimization: capacity determination, traffic arrival pattern and clustering for large scale setting.
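
    A minimal sketch of the min-cost-flow formulation described above, assuming the `networkx` library is available: users are supply nodes, offers have capacities, eligibility defines the edges, and edge costs are negated integer-scaled conversion probabilities, so the minimum-cost flow maximises expected conversions under the constraints. A zero-cost "no offer" node keeps the toy problem feasible; all names are illustrative.

```python
import networkx as nx

def recommend_offers(conversion_prob, eligibility, capacity):
    """conversion_prob[u][o]: estimated conversion probability of user u for offer o.
    eligibility[u]: offers user u may receive. capacity[o]: max assignments of o."""
    users, offers = list(conversion_prob), list(capacity)
    G = nx.DiGraph()
    G.add_node("sink", demand=len(users))
    for u in users:
        G.add_node(u, demand=-1)                         # each user supplies one unit
        G.add_edge(u, "no_offer", capacity=1, weight=0)  # fallback keeps the flow feasible
        for o in eligibility[u]:
            # negate and integer-scale the probability: min cost == max expected conversions
            G.add_edge(u, o, capacity=1, weight=-int(round(1000 * conversion_prob[u][o])))
    for o in offers:
        G.add_edge(o, "sink", capacity=capacity[o], weight=0)
    G.add_edge("no_offer", "sink", capacity=len(users), weight=0)
    flow = nx.min_cost_flow(G)
    return {u: o for u in users for o, f in flow[u].items() if f > 0 and o != "no_offer"}

conversion_prob = {"u1": {"o1": 0.30, "o2": 0.10}, "u2": {"o1": 0.25}}
eligibility = {"u1": ["o1", "o2"], "u2": ["o1"]}
capacity = {"o1": 1, "o2": 1}
print(recommend_offers(conversion_prob, eligibility, capacity))  # e.g. {'u1': 'o2', 'u2': 'o1'}
```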

  • Online Learning to Rank for Sequential Music Recommendation
    by Bruno Pereira, Alberto Ueda, Gustavo Penha, Rodrygo Santos, Nivio Ziviani

    The prominent success of music streaming services has brought increasingly complex challenges for music recommendation. In particular, in a streaming setting, songs are consumed sequentially within a listening session, which should cater not only for the user’s historical preferences, but also for eventual preference drifts, triggered by a sudden change in the user’s context. In this paper, we propose a novel online learning to rank approach for music recommendation aimed to continuously learn from the user’s listening feedback. In contrast to existing online learning approaches for music recommendation, we leverage implicit feedback as the only signal of the user’s preference. Moreover, to adapt rapidly to preference drifts over millions of songs, we represent each song in a lower dimensional feature space and explore multiple directions in this space as duels of candidate recommendation models. Our thorough evaluation using listening sessions from Last.fm demonstrates the effectiveness of our approach at learning faster and better compared to state-of-the-art online learning approaches.

  • Online Ranking Combination
    by Erzsébet Frigó, Levente Kocsis

    As a task of high importance for recommender systems, we consider the problem of learning the convex combination of ranking algorithms by online machine learning. In the case of two base recommenders, we show that the exponentially weighted combination achieves near optimal performance. However, the number of required points to be evaluated may be prohibitive with more base models in a real application. We propose a gradient based stochastic optimization algorithm that uses finite differences. Our new algorithm achieves similar empirical performance for two base rankers, while scaling well with an increased number of models. In our experiments with five real-world recommendation data sets, we show that the combination offers significant improvement over previously known stochastic optimization techniques. Our algorithm is the first effective stochastic optimization method for combining ranked recommendation lists by online machine learning.
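
    A minimal sketch of a finite-difference stochastic update for combination weights, in the spirit of (but not identical to) the paper's algorithm: perturb the convex-combination weights simultaneously in a random direction, evaluate the possibly noisy ranking metric twice, and move the weights accordingly. The `ranking_metric` below is a hypothetical black-box evaluation of the combined ranking.

```python
import numpy as np

def project_to_convex_combination(w):
    """Keep weights positive and normalised so they stay a convex combination."""
    w = np.clip(w, 1e-6, None)
    return w / w.sum()

def spsa_step(w, ranking_metric, c=0.05, lr=0.1, rng=None):
    """One simultaneous-perturbation finite-difference ascent step on the metric."""
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=w.shape)      # Rademacher perturbation
    g_hat = (ranking_metric(project_to_convex_combination(w + c * delta)) -
             ranking_metric(project_to_convex_combination(w - c * delta))) / (2 * c) * delta
    return project_to_convex_combination(w + lr * g_hat)

# toy noisy metric whose optimum is the mixture (0.7, 0.2, 0.1) over three base rankers
target = np.array([0.7, 0.2, 0.1])
metric = lambda w: -float(np.sum((w - target) ** 2)) + 0.01 * np.random.randn()
w = np.ones(3) / 3
for _ in range(500):
    w = spsa_step(w, metric)
print(np.round(w, 2))   # should drift towards the target mixture
```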

  • Personalized Diffusions for Top-N Recommendation
    by Athanasios N. Nikolakopoulos, Dimitris Berberidis, George Karypis, Georgios Giannakis

    The present work introduces PERDIF, a novel framework for learning personalized diffusions over item-to-item graphs for top-n recommendation. PERDIF learns the teleportation probabilities of a time-inhomogeneous random walk with restarts, capturing a user-specific underlying item exploration process. Such an approach can lead to significant improvements in recommendation accuracy, while also providing useful information about the users in the system. Per-user fitting can be performed in parallel and very efficiently even in large-scale settings. A comprehensive set of experiments on real-world datasets demonstrates the scalability as well as the qualitative merits of the proposed framework. PERDIF achieves high recommendation accuracy, outperforming state-of-the-art competing approaches—including several recently proposed methods relying on deep neural networks.
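
    A minimal sketch of the building block named above: a random walk with restart over an item-to-item graph, where the restart (teleportation) distribution is user-specific. PERDIF goes further by learning per-user, per-step teleportation probabilities, which this plain power iteration does not do.

```python
import numpy as np

def random_walk_with_restart(item_graph, restart_dist, alpha=0.15, n_steps=50):
    """item_graph: (n, n) nonnegative item-to-item weights.
    restart_dist: user-specific restart distribution over items (sums to 1).
    Returns a stationary-like score vector used to rank items for that user."""
    P = item_graph / item_graph.sum(axis=1, keepdims=True)   # row-stochastic transitions
    scores = restart_dist.copy()
    for _ in range(n_steps):
        scores = alpha * restart_dist + (1.0 - alpha) * (scores @ P)
    return scores

item_graph = np.array([[0, 2, 1],
                       [2, 0, 1],
                       [1, 1, 0]], dtype=float)
user_profile = np.array([1.0, 0.0, 0.0])    # this user has only interacted with item 0
print(np.round(random_walk_with_restart(item_graph, user_profile), 3))
```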

  • Personalized Re-ranking for E-commerce Recommender Systems
    by Changhua Pei, Yi Zhang, Yongfeng Zhang, Fei Sun, Xiao Lin, Hanxiao Sun, Jian Wu, Peng Jiang, Junfeng Ge, Wenwu Ou

    Ranking is a core task in E-commerce recommender systems, which aims at providing an ordered list of items to users. Typically, a ranking function is learned from the labeled dataset to optimize the global performance, which produces a ranking score for each individual item. However, it may be sub-optimal because the scoring function applies to each item individually and does not explicitly consider the mutual influence between items, as well as the differences of users’ preferences or intents. Therefore, we propose a personalized re-ranking model for E-commerce recommender systems. The proposed re-ranking model can be easily deployed as a follow-up module after any ranking algorithm, by directly using the existing ranking feature vectors. It directly optimizes the whole recommendation list by employing a transformer structure to efficiently encode the information of all items in the list. Specifically, the Transformer applies a self-attention mechanism that directly models the global relationships between any pair of items in the whole list. Besides, we confirm that the performance can be further improved by introducing pre-trained embedding to learn personalized encoding functions for different users. Experimental results on both offline benchmarks and real-world online E-commerce systems demonstrate the significant improvements of the proposed re-ranking model.
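
    A minimal numpy sketch of the scaled dot-product self-attention that, per the abstract, lets the re-ranker model interactions between every pair of items in the list; the projection weights are random placeholders and the personalised pre-trained embeddings are omitted.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (list_len, d) feature vectors of the items in the ranked list.
    Every output row mixes information from all items, weighted by pairwise affinity."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    logits = Q @ K.T / np.sqrt(K.shape[1])
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn = attn / attn.sum(axis=1, keepdims=True)            # softmax over the list
    return attn @ V

rng = np.random.default_rng(0)
list_len, d = 5, 16
X = rng.standard_normal((list_len, d))                       # existing ranking feature vectors
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
refined = self_attention(X, Wq, Wk, Wv)
print(refined.shape)   # (5, 16): one globally-informed representation per list item
```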

  • Power to the people! A qualitative evaluation of user control in recommendation systems
    by Jaron Harambam, Dimitrios Bountouridis, Mykola Makhortykh, Joris van Hoboken

    Recommender systems (RS) are on the rise in many domains, but while they offer great promises, they also raise concerns: lack of transparency, reduction of diversity, little to no user control. In this paper, we align with the normative turn in computer science which scrutinizes the ethical and societal implications of RS. We focus and elaborate on the concept of user control because that catches many birds with just one stone. Taking the news industry as our domain, we conducted four focus groups, or moderated think-aloud sessions, with Dutch news readers (N=21) to systematically study how people evaluate different control mechanisms (at the input, process, and output phase) in a News Recommender Prototype (NRP). While these mechanisms are sometimes met with distrust about the actual control they offer, we found that an intelligible user profile (including reading history and flexible preferences settings), coupled with possibilities to influence the recommendation algorithms is extremely valued, especially when these control mechanisms can be operated in relation to achieving personal goals. This paper contributes to a richer understanding of why and how to design for user control in recommender systems.

  • PrivateJobMatch: A Privacy-Oriented Deferred Multi-Match Recommender System for Stable Employment
    by Amar Saini, Florin Rusu, Andrew Johnston

    Coordination failure reduces match quality among employers and candidates in the job market, resulting in a large number of unfilled positions and/or unstable, short-term employment. Centralized job search engines provide a platform that directly connects employers with job-seekers. However, they require users to disclose a significant amount of personal data, i.e., build a user profile, in order to provide meaningful recommendations. In this paper, we present PrivateJobMatch — a privacy-oriented deferred multi-match recommender system — which generates stable pairings while requiring users to provide only a partial ranking of their preferences. PrivateJobMatch explores a series of adaptations of the game-theoretic Gale-Shapley deferred-acceptance algorithm which combine the flexibility of decentralized markets with the intelligence of centralized matching. We identify the shortcomings of the original algorithm when applied to a job market and propose novel solutions that rely on machine learning techniques. Experimental results on real and synthetic data confirm the benefits of the proposed algorithms across several quality measures. Over the past year, we have implemented a PrivateJobMatch prototype and deployed it in an active job market economy. Using the gathered real-user preference data, we find that the match recommendations are superior to a typical decentralized job market—while requiring only a partial ranking of the user preferences.
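
    A minimal sketch of the classic Gale-Shapley deferred-acceptance algorithm that PrivateJobMatch adapts (the adaptations for partial rankings and machine-learned preferences are not shown): candidates propose to employers in order of preference, and each employer tentatively holds its best proposal so far.

```python
def gale_shapley(candidate_prefs, employer_prefs):
    """candidate_prefs[c]: employers ranked best-first; employer_prefs[e]: candidates
    ranked best-first. Returns a stable matching {candidate: employer} (one seat each)."""
    rank = {e: {c: i for i, c in enumerate(prefs)} for e, prefs in employer_prefs.items()}
    next_choice = {c: 0 for c in candidate_prefs}
    engaged_to = {}                       # employer -> candidate currently held
    free = list(candidate_prefs)
    while free:
        c = free.pop()
        e = candidate_prefs[c][next_choice[c]]
        next_choice[c] += 1
        current = engaged_to.get(e)
        if current is None:
            engaged_to[e] = c
        elif rank[e][c] < rank[e][current]:
            engaged_to[e] = c             # employer trades up, old candidate is free again
            free.append(current)
        else:
            free.append(c)                # proposal rejected, try the next employer
    return {c: e for e, c in engaged_to.items()}

candidate_prefs = {"ann": ["acme", "bolt"], "bob": ["acme", "bolt"]}
employer_prefs = {"acme": ["bob", "ann"], "bolt": ["ann", "bob"]}
print(gale_shapley(candidate_prefs, employer_prefs))  # {'ann': 'bolt', 'bob': 'acme'}
```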

  • Relaxed Softmax for PU Learning
    by Ugo Tanielian, Flavian Vasile

    In recent years, the softmax model and its fast approximations have become the de-facto loss functions for deep neural networks when dealing with multi-class prediction. This loss has been extended to language modeling and recommendation, two fields that fall into the framework of learning from Positive and Unlabeled data. In this paper, we stress the different drawbacks of the current family of softmax losses and sampling schemes when applied in a Positive and Unlabeled learning setup. We propose both a Relaxed Softmax loss (RS) and a new negative sampling scheme based on a Boltzmann formulation. We show that the new training objective is better suited for the tasks of density estimation, item similarity and next-event prediction by driving uplifts in performance on textual and recommendation datasets against classical softmax.
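
    A minimal sketch of a Boltzmann (temperature-controlled softmax) negative sampler like the one proposed above, with the model score function left as a hypothetical placeholder: unlabelled items that the current model scores highly are sampled more often, yielding more informative negatives than uniform sampling.

```python
import numpy as np

def boltzmann_negative_sample(scores, n_samples, temperature=1.0, rng=None):
    """scores: current model scores for all candidate (unlabelled) items.
    Samples negatives with probability proportional to exp(score / temperature)."""
    rng = np.random.default_rng() if rng is None else rng
    logits = scores / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(scores), size=n_samples, replace=False, p=probs)

rng = np.random.default_rng(0)
candidate_scores = rng.standard_normal(1000)           # placeholder model scores
negatives = boltzmann_negative_sample(candidate_scores, n_samples=5, temperature=0.5, rng=rng)
print(negatives, candidate_scores[negatives].round(2)) # biased towards high-scoring items
```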

  • Sampling-Bias-Corrected Neural Modeling for Large Corpus Item Recommendations
    by Xinyang Yi, Ji Yang, Lichan Hong, Derek Zhiyuan Cheng, Lukasz Heldt, Aditee Kumthekar, Zhe Zhao, Wei Li, Ed Chi

    Many recommendation systems retrieve and score items from a very large corpus. A common recipe to handle data sparsity and power-law item distribution is to learn item representations from their content features. Apart from many content-aware systems based on matrix factorization, we consider a modeling framework using a two-tower neural net, with one of the towers (item tower) encoding a wide variety of item content features. A general recipe of training such two-tower models is to optimize loss functions calculated from in-batch negatives, which are items sampled from a random mini-batch. However, in-batch loss is subject to sampling biases, potentially hurting model performance, particularly in the case of a highly skewed distribution. In this paper, we present a novel algorithm for estimating item frequency from streaming data. Our main idea is to sketch and estimate item occurrences via gradient descent. Through theoretical analysis and simulation, we show that the proposed algorithm can work without requiring fixed item vocabulary, and is capable of producing unbiased estimation and being adaptive to item distribution change. We then apply the sampling-bias-corrected modeling approach to build a large scale neural retrieval system for YouTube recommendations. The system is deployed to retrieve personalized suggestions from a corpus with tens of millions of videos. We demonstrate the effectiveness of sampling-bias correction through offline experiments on two real-world datasets. We also conduct live A/B tests to show that the neural retrieval system leads to improved recommendation quality for YouTube.
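
    A minimal numpy sketch of the sampling-bias correction for in-batch negatives: the logit of each in-batch candidate item is shifted down by the log of its estimated sampling probability before the softmax cross-entropy, so frequently sampled (popular) items are not over-penalised as negatives. The streaming frequency estimator itself is not shown; `item_prob` stands in for its output, and the tower outputs are random placeholders.

```python
import numpy as np

def corrected_in_batch_loss(user_emb, item_emb, item_prob):
    """user_emb, item_emb: (batch, d) paired user/item tower outputs.
    item_prob: (batch,) estimated sampling probability of each in-batch item."""
    logits = user_emb @ item_emb.T                    # (batch, batch): each user vs all in-batch items
    logits = logits - np.log(item_prob)[None, :]      # logQ correction per candidate item
    logits = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # the true item for row i sits on the diagonal

rng = np.random.default_rng(0)
batch, d = 4, 8
users, items = rng.standard_normal((batch, d)), rng.standard_normal((batch, d))
item_prob = np.array([0.4, 0.3, 0.2, 0.1])            # placeholder frequency estimates
print(round(corrected_in_batch_loss(users, items, item_prob), 3))
```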

  • Style Conditioned Recommendations
    by Murium Iqbal, Kamelia Aryafar, Timothy Anderton

    We propose Style Conditioned Recommendations (SCR) and introduce style injection as a method to diversify recommendations. We use Conditional Variational Autoencoder (CVAE) architecture, where both the encoder and decoder are conditioned on a user profile learned from item content data. This allows us to apply style transfer methodologies to the task of recommendations, which we refer to as injection. To enable style injection, user profiles are learned to be interpretable such that they express users’ propensities for specific predefined styles. These are learned via label-propagation from a dataset of item content, with limited labeled points. To perform injection, the condition on the encoder is learned while the condition on the decoder is selected per explicit feedback. Explicit feedback can be taken either from a user’s response to a style or interest quiz, or from item ratings. In the absence of explicit feedback, the condition at the encoder is applied to the decoder. We show a 12% improvement on NDCG@20 over the traditional VAE based approach and an average 22% improvement on AUC across all classes for predicting user style profiles against our best performing baseline. After injecting styles we compare the user style profile to the style of the recommendations and show that injected styles have an average +133% increase in presence. Our results show that style injection is a powerful method to diversify recommendations while maintaining personal relevance. Our main contribution is an application of a semi-supervised approach that extends item labels to interpretable user profiles. This enables our novel style injection approach to recommendations which allows incorporation of explicit feedback data.

  • Uplift-based Evaluation and Optimization of Recommenders
    by Masahiro Sato, Janmajay Singh, Sho Takemori, Takashi Sonoda, Qian Zhang, Tomoko Ohkuma

    Recommender systems aim to increase user actions such as clicks and purchases. Typical evaluations of recommenders regard the purchase of a recommended item as a success. However, the item may have been purchased even without the recommendation. An uplift is defined as an increase in user actions caused by recommendations. Situations with and without a recommendation cannot both be observed for a specific user-item pair at a given time instance, making uplift-based evaluation and optimization challenging. This paper proposes new evaluation metrics and optimization methods for the uplift in a recommender system. We apply a causal inference framework to estimate the average uplift for the offline evaluation of recommenders. Our evaluation protocol leverages both purchase and recommendation logs under a currently deployed recommender system, to simulate the cases both with and without recommendations. This enables the offline evaluation of the uplift for newly generated recommendation lists. For optimization, we need to define positive and negative samples that are specific to an uplift-based approach. For this purpose, we deduce four classes of items by observing purchase and recommendation logs. We derive the relative priorities among these four classes in terms of the uplift and use them to construct both pointwise and pairwise sampling methods for uplift optimization. Through dedicated experiments with three public datasets, we demonstrate the effectiveness of our optimization methods in improving the uplift.
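
    A minimal sketch of the generic uplift estimate underlying this kind of evaluation, not the paper's exact protocol (which uses the deployed recommender's logs to simulate both conditions): compare purchase rates between user-item pairs that were recommended and comparable pairs that were not. The log format is illustrative.

```python
def average_uplift(logs):
    """logs: iterable of (recommended: bool, purchased: bool) observations for
    comparable user-item pairs. Returns the purchase-rate difference (treated - control)."""
    treated = [purchased for recommended, purchased in logs if recommended]
    control = [purchased for recommended, purchased in logs if not recommended]
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return rate(treated) - rate(control)

logs = [(True, True), (True, False), (True, True),      # shown the recommendation
        (False, False), (False, True), (False, False)]  # not shown
print(round(average_uplift(logs), 3))  # 0.333: recommendations added roughly 0.33 purchases per pair
```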

  • Users in the Loop: A Psychologically-Informed Approach to Similar Item Retrieval
    by Amy Winecoff, Florin Brasoveanu, Bryce Casavant, Pearce Washabaugh, Matthew Graham

    Recommender systems (RS) often leverage information about the similarity between items’ features to make recommendations. Yet, many commonly used similarity functions make mathematical assumptions such as symmetry (i.e., sim(a,b) = sim(b,a)) that are inconsistent with how humans make similarity judgments. Moreover, most algorithm validations either do not directly measure users’ behavior or fail to comply with methodological standards for psychological research. RS that are developed and evaluated without regard to users’ psychology may fail to meet users’ needs. To provide recommendations that do meet the needs of users, we must: 1) develop similarity functions that account for known properties of human cognition, and 2) rigorously evaluate the performance of these functions using methodologically sound user testing. Here, we develop a framework for evaluating users’ judgments of similarity that is informed by best practices in psychological research methods. Leveraging users’ fashion item similarity judgments collected using our framework, we demonstrate that a psychologically-informed similarity function (i.e., Tversky contrast model) outperforms a psychologically naive similarity function (i.e., Jaccard similarity) in predicting users’ similarity judgments.
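
    A minimal sketch contrasting the two similarity functions named above over item feature sets: Jaccard is symmetric by construction, while the Tversky contrast model weights common and distinctive features separately and becomes asymmetric whenever alpha and beta differ. The feature sets and weights are illustrative.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def tversky_contrast(a, b, theta=1.0, alpha=0.8, beta=0.2):
    """Tversky's contrast model: reward common features, penalise the distinctive
    features of each item with (possibly unequal) weights -> asymmetric similarity."""
    return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)

dress = {"red", "floral", "midi", "sleeveless"}
top = {"red", "floral"}
print(jaccard(dress, top), jaccard(top, dress))                    # 0.5 0.5 (symmetric)
print(tversky_contrast(dress, top), tversky_contrast(top, dress))  # 0.4 vs 1.6 (asymmetric)
```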

  • When Actions Speak Louder than Clicks: A Combined Model of Purchase Probability and Long-term Customer Satisfaction
    by Gal Lavee, Noam Koenigstein, Oren Barkan

    Maximizing sales and revenue is an important goal of online commercial retailers. Recommender systems are designed to maximize users’ click or purchase probability, but often disregard users’ eventual satisfaction with purchased items. As a result, such systems promote items with high appeal at the selling stage (e.g. an eye-catching presentation) over items that would yield more satisfaction to users in the long run. This work presents a novel unified model that considers both goals and can be tuned to balance between them according to the needs of the business scenario. We propose a multi-task probabilistic matrix factorization model with a dual task objective: predicting binary purchase/no purchase variables combined with predicting continuous satisfaction scores. Model parameters are optimized using Variational Bayes which allows learning a posterior distribution over model parameters. This model allows making predictions that balance the two goals of maximizing the probability for an immediate purchase and maximizing user satisfaction and engagement down the line. These goals lie at the heart of most commercial recommendation scenarios and enabling their balance has the potential to improve value for millions of users worldwide. Finally, we present experimental evaluation on different types of consumer retail datasets that demonstrate the benefits of the model over popular baselines on a number of well-known ranking metrics.

  • Variational Low Rank Multinomials for Collaborative Filtering with Side-Information
    by Ehtsham Elahi, Tony Jebara

    We are interested in Bayesian models for collaborative filtering that incorporate side-information or metadata about items in addition to user-item interaction data. We present a simple and flexible framework to build models for this task that exploit the low-rank structure in user-item interaction datasets. Although the resulting models are non-conjugate, we develop an efficient technique for approximating posteriors over model parameters using variational inference. We borrow the “re-parameterization trick” from the Bayesian deep learning literature to enable variational inference in our models. The resulting approximate Bayesian inference algorithm is scalable and can handle large scale datasets. We demonstrate our ideas on three real world datasets where we show competitive performance against widely used baselines.
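
    A minimal sketch of the re-parameterization trick mentioned above, independent of the specific model: instead of sampling the latent factors directly, sample a fixed standard normal and shift and scale it with the variational parameters, so gradients of a Monte Carlo objective can flow into mu and log_sigma.

```python
import numpy as np

def reparameterized_sample(mu, log_sigma, rng=None):
    """Draw z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I),
    which keeps z differentiable with respect to mu and log_sigma."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

mu = np.zeros(3)
log_sigma = np.log(np.full(3, 0.5))
samples = np.stack([reparameterized_sample(mu, log_sigma) for _ in range(10000)])
print(samples.mean(axis=0).round(2), samples.std(axis=0).round(2))  # roughly [0 0 0] and [0.5 0.5 0.5]
```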
