Paper Session P7: Understanding and Modeling Preferences

Session A: 16:00–17:30, chaired by Joe Konstan and Bamshad Mobasher. Attend in Whova
Session B: 3:00–4:30, chaired by Michael Ekstrand and Ludovico Boratto. Attend in Whova

  • [LP] Content-Collaborative Disentanglement Representation Learning for Enhanced Recommendation
    by Yin Zhang (Texas A&M University), Ziwei Zhu (Texas A&M University), Yun He (Texas A&M University), James Caverlee (Texas A&M University)

    Modern recommenders usually consider both collaborative features from user behavior data (e.g., clicks) and content information about the users and items (e.g., user ages or item images) for improved recommendations. While encouraging, the user preference representations uncovered from these collaborative and content-based perspectives can become entangled, intermixing each other’s influence and leading to sub-optimal performance and unstable recommendations. Hence, we propose to disentangle the representations learned from user behavior data and content information. Specifically, we propose a novel two-level disentanglement generative recommendation model (DICER) that supports both content-collaborative disentanglement and feature disentanglement: for content-collaborative disentanglement, DICER decomposes the features by their marginal distributions based on content and user-item interactions, to ensure that the learned features of each type are statistically independent. For feature disentanglement, by decomposing the Kullback-Leibler divergence, we theoretically show that the extracted features within each type are disentangled at a granular level. Furthermore, DICER utilizes a co-decoder that simultaneously decodes the content and user-item interactions to ensure the high quality of the learned features. Through extensive experiments on three real-world datasets, results show that DICER significantly outperforms other state-of-the-art methods by 13.5% in NDCG and 14.4% in hit ratio on average.

    Full text in ACM Digital Library
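
    As a companion to the abstract, here is a minimal sketch of the two disentanglement levels it names, with all names and sizes hypothetical (TwoBranchEncoder, n_items, n_content, and d are illustrative, not the authors’ code): one encoder per view keeps content and collaborative factors separate, and a diagonal-Gaussian KL decomposes per latent dimension so each feature can be regularized individually.

        import torch
        import torch.nn as nn

        class TwoBranchEncoder(nn.Module):
            """Illustrative two-branch encoder: one branch over user-item
            interactions (collaborative view), one over content features.
            Each branch outputs the mean and log-variance of a diagonal
            Gaussian posterior, so the KL to the prior factorizes per
            dimension, mirroring feature-level disentanglement."""
            def __init__(self, n_items, n_content, d=64):
                super().__init__()
                self.collab = nn.Sequential(
                    nn.Linear(n_items, 256), nn.Tanh(), nn.Linear(256, 2 * d))
                self.content = nn.Sequential(
                    nn.Linear(n_content, 256), nn.Tanh(), nn.Linear(256, 2 * d))

            def forward(self, interactions, content):
                mu_c, logvar_c = self.collab(interactions).chunk(2, dim=-1)
                mu_t, logvar_t = self.content(content).chunk(2, dim=-1)
                return (mu_c, logvar_c), (mu_t, logvar_t)

        def gaussian_kl(mu, logvar):
            # KL(q(z|x) || N(0, I)) for a diagonal Gaussian; the sum runs
            # over per-dimension terms, so each latent factor is penalized
            # separately, as in the KL decomposition the abstract mentions.
            return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1)

    The co-decoder that reconstructs both views from the latent samples is omitted here; the point is only that keeping the branches separate up to the KL term is what makes the two views independently regularizable.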

  • [LP] A Ranking Optimization Approach to Latent Linear Critiquing for Conversational Recommender Systems
    by Hanze Li (Mechanical and Industrial Engineering, University of Toronto), Scott Sanner (Mechanical and Industrial Engineering, University of Toronto), Kai Luo (Mechanical and Industrial Engineering, University of Toronto), Ga Wu (Borealis AI)

    Critiquing is a method for conversational recommendation that incrementally adapts recommendations in response to user preference feedback. Specifically, a user is iteratively provided with item recommendations and attribute descriptions for those items; the user may then either accept the recommendation or choose to critique an attribute to generate a new recommendation. A recent direction known as latent linear critiquing (LLC) takes a modern embedding-based approach that seeks to optimize the combination of user preference embeddings with embeddings of critiques based on subjective item descriptions (i.e., keyphrases from user reviews); LLC does so by exploiting the linear structure of the embeddings to efficiently optimize their weights in a linear programming (LP) formulation. In this paper, we revisit LLC and note that its score-based optimization approach inherently encourages extreme weightings in order to maximize predicted score gaps between preferred and non-preferred items. Noting that the overall end task in critiquing is to re-rank rather than re-score, we instead take a ranking optimization approach that seeks to optimize embedding weights based on observed rank violations from earlier critiquing iterations. We evaluate the proposed framework on two recommendation datasets containing user reviews. Empirical results demonstrate that ranking-based LLC generally outperforms scoring-based LLC and other baselines across a variety of datasets, critiquing styles, and both satisfaction and session-length performance metrics.

    Full text in ACM Digital Library
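
    The re-rank-not-re-score idea lends itself to a short sketch. Everything below is an assumption-laden illustration, not the paper’s formulation (the function name, NumPy setup, and hinge loss are mine): the user vector is a weighted blend of the base preference embedding and the session’s critique embeddings, and the weights are penalized for each observed rank violation between preferred and non-preferred items rather than for raw score gaps.

        import numpy as np

        def rank_violation_loss(weights, user_emb, critique_embs,
                                pos_items, neg_items, margin=1.0):
            """Hinge loss over observed rank violations. `weights` has one
            entry for the base user embedding plus one per critique;
            `pos_items` and `neg_items` are embedding matrices for items
            the session's critiques prefer and reject, respectively."""
            parts = np.vstack([user_emb] + list(critique_embs))  # (T + 1, d)
            blended = weights @ parts                            # blended user vector, (d,)
            pos_scores = pos_items @ blended
            neg_scores = neg_items @ blended
            # Every (preferred, non-preferred) pair scored in the wrong
            # order contributes a margin violation.
            gaps = margin - (pos_scores[:, None] - neg_scores[None, :])
            return np.maximum(gaps, 0.0).mean()

    Because a hinge term saturates once a pair is ordered correctly within the margin, nothing rewards arbitrarily large score gaps, which is exactly the extreme-weighting failure mode the abstract attributes to score-based optimization.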

  • [LP] Who doesn’t like dinosaurs? Finding and Eliciting Richer Preferences for Recommendation
    by Tobias Schnabel (Microsoft), Gonzalo Ramos (Microsoft), Saleema Amershi (Microsoft)

    Real-world recommender systems often allow users to adjust the presented content through a variety of preference elicitation techniques such as “liking” or interest profiles. These elicitation techniques trade off the time and effort required of users against the richness of the signal they provide to the learning component driving the recommendations. In this paper, we explore this trade-off, seeking new ways for people to express their preferences with the goal of improving communication channels between users and the recommender system. Through a need-finding study, we observe the patterns in how people express their preferences during a curation task, propose a taxonomy for organizing them, and point out research opportunities. We present a case study that illustrates how using this taxonomy to design an onboarding experience can lead to more accurate machine-learned recommendations while maintaining user satisfaction at low effort.

    Full text in ACM Digital Library

  • [LP] TAFA: Two-headed Attention Fused Autoencoder for Context-Aware Recommendations
    by Jin Peng Zhou (University of Toronto, Layer 6 AI), Zhaoyue Cheng (Layer 6 AI), Felipe Pérez (Layer 6 AI), Maksims Volkovs (Layer 6 AI)

    Collaborative filtering with implicit feedback is a ubiquitous class of recommendation problems where only positive interactions such as purchases or clicks are observed. Autoencoder-based recommendation models have shown strong performance on many implicit feedback benchmarks. However, these models tend to suffer from popularity bias, making recommendations less personalized. User-generated reviews contain a rich source of preference information, often with specific details that are important to each user, and can help mitigate the popularity bias. Since not all reviews are equally useful, existing work has explored various forms of attention to distill relevant information. In the majority of proposed approaches, representations from the implicit feedback and review branches are simply concatenated at the end to generate predictions. This can prevent the model from learning deeper correlations between the two modalities and affect prediction accuracy. To address these problems, we propose a novel Two-headed Attention Fused Autoencoder (TAFA) model that jointly learns representations from user reviews and implicit feedback to make recommendations. We apply early and late modality fusion, which allows the model to fully correlate and extract relevant information from both input sources. To further combat popularity bias, we leverage the Noise Contrastive Estimation (NCE) objective to “de-popularize” the fused user representation via a two-headed decoder architecture. Empirically, we show that TAFA outperforms leading baselines on multiple real-world benchmarks. Moreover, by tracing attention weights back to reviews, we can provide explanations for the generated recommendations and gain further insights into user preferences. Full code for this work is available here: https://github.com/layer6ai-labs/TAFA.

    Full text in ACM Digital Library
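
    A minimal sketch of the fusion and two-headed decoding described above; this is not the released TAFA code (see the repository linked in the abstract for that), and the paper’s attention mechanisms are replaced by plain linear layers for brevity. All names and sizes are hypothetical.

        import torch
        import torch.nn as nn

        class FusedAutoencoder(nn.Module):
            """Illustrative early + late fusion with a two-headed decoder.
            Head 1 reconstructs the interaction vector for ranking; head 2
            is where an NCE-style objective would be applied to push
            popularity information out of the fused user representation."""
            def __init__(self, n_items, d_review, d=128):
                super().__init__()
                self.enc_cf = nn.Linear(n_items, d)    # implicit-feedback branch
                self.enc_rev = nn.Linear(d_review, d)  # review branch
                self.early = nn.Linear(2 * d, d)       # early fusion of encodings
                self.late = nn.Linear(2 * d, d)        # late fusion before decoding
                self.dec_rank = nn.Linear(d, n_items)  # head 1: ranking head
                self.dec_nce = nn.Linear(d, n_items)   # head 2: NCE head

            def forward(self, interactions, review_feats):
                h_cf = torch.tanh(self.enc_cf(interactions))
                h_rev = torch.tanh(self.enc_rev(review_feats))
                h = torch.tanh(self.early(torch.cat([h_cf, h_rev], dim=-1)))
                z = torch.tanh(self.late(torch.cat([h, h_rev], dim=-1)))
                return self.dec_rank(z), self.dec_nce(z)

    Fusing the modalities both at the input side and again before decoding is what lets the model correlate the two sources end to end, rather than only at a final concatenation.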

  • [REP] Neural Collaborative Filtering vs. Matrix Factorization Revisited
    by Steffen Rendle (Google Research), Walid Krichene (Google Research), Li Zhang (Google Research), John Anderson (Google Research)

    Embedding-based models have been the state of the art in collaborative filtering for over a decade. Traditionally, the dot product or higher-order equivalents have been used to combine two or more embeddings, most notably in matrix factorization. In recent years, it has been suggested to replace the dot product with a learned similarity, e.g., using a multilayer perceptron (MLP). This approach is often referred to as neural collaborative filtering (NCF). In this work, we revisit the experiments of the NCF paper that popularized learned similarities using MLPs. First, we show that with proper hyperparameter selection, a simple dot product substantially outperforms the proposed learned similarities. Second, while an MLP can in theory approximate any function, we show that it is non-trivial to learn a dot product with an MLP. Finally, we discuss practical issues that arise when applying MLP-based similarities and show that MLPs are too costly to use for item recommendation in production environments, while dot products admit very efficient retrieval algorithms. We conclude that MLPs should be used with care as embedding combiners and that dot products might be a better default choice.

    Full text in ACM Digital Library
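
    The contrast the paper draws is easy to state in code. In this minimal sketch (names and sizes are illustrative), the dot product is linear in the item embedding, so top-k retrieval reduces to (approximate) maximum inner product search, whereas an MLP similarity requires a full forward pass per candidate item.

        import torch
        import torch.nn as nn

        def dot_similarity(u, v):
            # Matrix-factorization scoring: a plain per-row dot product.
            # Linearity in v is what enables efficient MIPS-based retrieval.
            return (u * v).sum(dim=-1)

        class MLPSimilarity(nn.Module):
            """NCF-style learned similarity: concatenate user and item
            embeddings and let an MLP produce the score. Scoring each
            candidate now costs a forward pass, which is the serving-cost
            issue the paper raises."""
            def __init__(self, d, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(2 * d, hidden), nn.ReLU(), nn.Linear(hidden, 1))

            def forward(self, u, v):
                return self.net(torch.cat([u, v], dim=-1)).squeeze(-1)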

  • [IN] Query as Context for Item-to-Item Recommendation
    by Moumita Bhattacharya (Etsy, Inc), Amey Barapatre (Etsy, Inc)

    Recommender systems are one of the main machine learning applications for e-commerce platforms such as Etsy, a two-sided marketplace. A frequent use of such systems is item-to-item recommendation, which shows similar items (also referred to as listings) based on the listing a user is currently viewing. Item-to-item recommendations typically take into account only information associated with the target listing and other listings in the inventory. However, other contextual information, such as user intent, queries, and seasonality, is often not taken into account. In this talk, we will present two approaches we developed to utilize additional contextual information, in the form of queries, when generating item-to-item recommendations. Moreover, we will present our journey in migrating Etsy’s rankers from linear to non-linear models. Additionally, we propose new metrics to evaluate candidate sets that assess diversity and price spread while not compromising relevance; the proposed metrics can also be used beyond the current application. Our proposed candidate set generation approach outperforms the model in production, yielding significant lifts in conversion rate and other engagement metrics, as indicated by several A/B tests.

    Full text in ACM Digital Library
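
    The talk does not spell out its metric definitions, so the following is only one plausible reading of “diversity and price spread” as candidate-set metrics, with the function name and both formulas assumed rather than taken from the talk: diversity as mean pairwise embedding distance within the candidate set, and price spread as the coefficient of variation of prices.

        import numpy as np

        def candidate_set_metrics(item_embs, prices):
            """Illustrative candidate-set metrics: higher diversity means
            the candidates are spread out in embedding space; higher price
            spread means the set covers a wider price range relative to
            its mean price."""
            item_embs = np.asarray(item_embs, dtype=float)
            prices = np.asarray(prices, dtype=float)
            # Mean pairwise distance over the upper triangle (i < j).
            diffs = item_embs[:, None, :] - item_embs[None, :, :]
            dists = np.linalg.norm(diffs, axis=-1)
            iu = np.triu_indices(len(item_embs), k=1)
            diversity = dists[iu].mean()
            price_spread = prices.std() / prices.mean()
            return {"diversity": diversity, "price_spread": price_spread}

    Relevance would still be measured separately (e.g., by conversion in an A/B test, as above); these two numbers only characterize the shape of a candidate set.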
