Paper Session P1: Real-World Applications I

Session A: 16:00–17:30, chaired by Zeno Gantner and Robin Burke
Session B: 3:00–4:30, chaired by Tao Ye and Weike Pan

  • [LP] Goal-driven Command Recommendations for Analysts
    by Samarth Aggarwal (Indian Institute of Technology Delhi), Rohin Garg (Indian Institute of Technology Kanpur), Abhilasha Sancheti (Adobe Research, University of Maryland), Bhanu Prakash Reddy Guda (Adobe Research), Iftikhar Ahamath Burhanuddin (Adobe Research)

    Data analytics software applications have become an integral part of the decision-making process of analysts. The users of these applications generate a vast amount of unstructured log data. These logs contain clues to the user’s goals, which traditional recommender systems may find difficult to model implicitly from the log data. With this in mind, we aim to assist the analytics process of a user through command recommendations. We categorize the commands into software and data categories based on their purpose in fulfilling the task at hand. On the premise that the sequence of commands leading up to a data command is a good predictor of the latter, we design, develop, and validate various sequence modeling techniques. In this paper, we propose a framework that provides goal-driven data command recommendations to the user by leveraging unstructured logs. We use the log data of a web-based analytics application to train our neural network models and quantify their performance against relevant and competitive baselines. We propose a custom loss function that tailors the recommended data commands to the goal information provided exogenously, along with an evaluation metric that captures the degree of goal orientation of the recommendations. We demonstrate the promise of our approach through offline evaluation, measuring the models with the proposed metric and showcasing their robustness on adversarial examples, where the user activity is misaligned with the selected goal.
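    The custom loss mentioned in the abstract is not specified there; a minimal sketch of one plausible form, a next-command cross-entropy plus a penalty on probability mass assigned to commands outside the stated goal, might look like the following. The `alpha` weight and the 0/1 `goal_mask` encoding are assumptions, not the authors' implementation.

```python
import math

def goal_weighted_loss(probs, target, goal_mask, alpha=0.5):
    """Cross-entropy on the true next data command, plus a penalty on
    probability mass predicted for commands outside the stated goal.
    `alpha` and the 0/1 `goal_mask` encoding are illustrative choices."""
    ce = -math.log(probs[target] + 1e-12)               # standard next-command loss
    off_goal = sum(p for p, m in zip(probs, goal_mask) if m == 0.0)
    return ce + alpha * off_goal                        # steer mass toward goal commands
```

    A model trained with such a loss is penalized for ranking off-goal commands highly even when it predicts the true next command correctly.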

  • [LP] SSE-PT: Sequential Recommendation Via Personalized Transformer
    by Liwei Wu (University of California, Davis), Shuqing Li (University of California, Davis), Cho-Jui Hsieh (University of California, Los Angeles), James Sharpnack (University of California, Davis)

    Temporal information is crucial for recommendation problems because user preferences are naturally dynamic in the real world. Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures beyond the RNNs and CNNs widely used in natural language processing, have allowed for better use of the temporal ordering of the items each user has engaged with. In particular, the SASRec model, inspired by the popular Transformer model in natural language processing, has achieved state-of-the-art results. However, SASRec, just like the original Transformer model, is inherently an unpersonalized model and does not include personalized user embeddings. To overcome this limitation, we propose a Personalized Transformer (SSE-PT) model, which outperforms SASRec by almost 5% in terms of NDCG@10 on 5 real-world datasets. Furthermore, after examining some random users’ engagement histories, we find our model is not only more interpretable but also able to focus on recent engagement patterns for each user. Moreover, our SSE-PT model with a slight modification, which we call SSE-PT++, can handle extremely long sequences and outperform SASRec in ranking results with comparable training speed, striking a balance between performance and speed requirements. Our novel application of the Stochastic Shared Embeddings (SSE) regularization is essential to the success of personalization. Code and data are open-sourced at https://github.com/wuliwei9278/SSE-PT.
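    The SSE regularization named in the abstract works by replacing embedding indices at random during training. A minimal sketch of the user-side idea follows; the function name and the `p_replace` rate are illustrative assumptions, not the authors' implementation.

```python
import random

def sse_user_index(user_id, num_users, p_replace=0.01, rng=random):
    """Stochastic Shared Embeddings (SSE), user side: with a small
    probability, look up a *different* user's embedding during training,
    so personalized embeddings are regularized rather than overfit to
    sparse per-user histories. `p_replace` is a tunable rate; 0.01 is
    an assumed default here."""
    if rng.random() < p_replace:
        return rng.randrange(num_users)  # swap in a randomly chosen user's index
    return user_id
```

    At inference time the swap is disabled (`p_replace=0`), so each user is always served from their own embedding.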

  • [IN] A Human Perspective on Algorithmic Similarity
    by Zachary A. Schendel (Product Innovation, Netflix), Faraz Farzin (Product Innovation, Netflix), Siddhi Sundar (Product Innovation, Netflix)

    In the Netflix user interface (UI), when a row or UI element is named “Because you Watched…”, “More Like This”, or “Because you added to your list”, the overarching goal is to recommend a movie or TV show that a member might like based on the fact that they took a meaningful action on a source item. We have employed similar recommendations in many UI elements: on the homepage as a row of recommendations, after a member clicks into a title, or as a piece of information about why a member should watch a title.
    From an algorithmic perspective, there are many ways to define a “successful” similar recommendation. We sought to broaden the definition of success. To this end, the Consumer Insights team recently completed a suite of research projects to explore the intricacies of member perceptions of similar recommendations. The Netflix Consumer Insights team employs qualitative (e.g., in-depth interviews) and quantitative (e.g., surveys) research methods, interfacing directly with Netflix members to uncover pain points that can inspire new product innovation. The research concluded that, while the typical member believes movies are broadly similar when they share a common genre or theme, similarity is more complex, nuanced, and personal than we might have imagined. The vernacular we use in the UI implies that there should be at least some kind of relationship between the source item and the recommendations that follow. Many of our similar recommendations felt “out of place”, mostly because the relationship between the source item and the recommendation was unclear or absent. When similar recommendations tell a completely misleading, incorrect, or confusing story, member trust can be broken.
    We will structure the presentation around three new insights that our research found to influence the perception of similarity in the context of Netflix, as well as the research methods used to uncover those insights. First, the reason a member loves a given movie will vary. For example, do you want to watch other baseball movies like Field of Dreams, or would you prefer other romances like Field of Dreams? Second, members are more or less flexible about how similar a recommendation actually needs to be depending on the properties of, and their interactions with, the canvas containing the recommendation. For example, a Because You Watched row on the homepage implies vaguer similarity, while a More Like This gallery behind a click into the source item implies stricter similarity. Finally, even when we held the UI element constant, we found that similar recommendations are only valuable in some contexts. After finishing a movie, a member might prefer a similar recommendation one day and a change of pace the next. Research methods discussed will include single-arrangement Inverse Multi-Dimensional Scaling [1], survey experimentation, and ways to apply qualitative research to improve algorithmic recommendations.

  • [IN] Behavior-based Popularity Ranking on Amazon Video
    by Lakshmi Ramachandran (Amazon Search)

    With the growth in the number of video streaming services, providers must work hard to make relevant content available and keep customers engaged. A good experience helps customers discover new and popular videos to stream with ease. Customer streaming behavior tends to be a strong indicator of whether they found a video engaging, and aggregate customer behavior serves as a useful predictor of popularity. We discuss the use of past streaming behavior to learn patterns and predict a video’s popularity using tree ensembles.
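    The first step in such a pipeline, aggregating raw streaming events into per-video features that a tree ensemble (e.g., gradient-boosted trees) can consume, can be sketched as follows. The event fields and feature names are illustrative, not Amazon's actual schema.

```python
def popularity_features(streams):
    """Aggregate per-video streaming events into candidate features for
    a tree ensemble. `streams` is a list of (video_id, minutes_watched,
    completed) events; the schema is hypothetical."""
    feats = {}
    for vid, minutes, completed in streams:
        f = feats.setdefault(vid, {"streams": 0, "minutes": 0.0, "completions": 0})
        f["streams"] += 1                 # how often the video was started
        f["minutes"] += minutes           # total watch time
        f["completions"] += int(completed)
    for f in feats.values():
        # completion rate distinguishes videos customers finish from ones they abandon
        f["completion_rate"] = f["completions"] / f["streams"]
    return feats
```

    The resulting feature rows would then be fed to the ensemble, with observed future popularity as the training label.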

  • [IN] Developing a Recommendation System to provide a Personalized Learning experience at Chegg
    by Sanghamitra Deb (Chegg Inc)

    Online learning has become the primary source of education in 2020, with the impact of COVID-19 forcing millions of students to stay at home. In order to personalize the learning experience, we have built a recommendation system that takes advantage of (1) the rich content developed at Chegg, (2) an excellent knowledge graph that organizes content in a hierarchical fashion, and (3) student interactions across multiple products, which enhance the user signal in individual products.
    Chegg is a centralized learning platform. Students visit Chegg to get help with homework using Chegg Study, learn from flashcards for their tests, practice examinations, learn relevant concepts, and work with a tutor online for a one-on-one learning experience. This represents a large amount of content available to students. In order to organize the content, we have developed a Knowledge Graph whose nodes represent both a hierarchical scheme of concepts taught at different educational institutions and the content at Chegg. In order to create edges between concept nodes and content, we build text classifiers that tag content with concept nodes.
    Often students will interact with one or two products, such as Chegg Study or Textbook rentals, and browse other products such as flashcards, exam practice, etc. In order to suggest relevant content to them, we deduce the concepts they have been studying in the products where they are more active and suggest content in the products where they are less active.
    In this presentation I will talk about the general framework for developing personalized recommendations at Chegg and do a deep dive into (1) the text classifiers required for content tagging and (2) building cross-product recommendations. For text classification, I will go into details about working with noisy training data and model improvements using multi-task learning [1]. For cross-product recommendations, I will talk about combining user signals from multiple products [2] to deduce general patterns of student interest and using that information to retrieve relevant content, for example flashcards or practice exams, for users.
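    The cross-product retrieval step described in the abstract, inferring concepts from a student's active products and surfacing content tagged with those concepts in less-active products, can be sketched as follows. The data structures and function name are hypothetical, not Chegg's actual data model.

```python
def cross_product_candidates(user_events, concept_of, content_by_concept, active_products):
    """Infer the knowledge-graph concepts a student studies in their
    active products, then retrieve content tagged with those concepts
    from the products they use less. All structures are illustrative:
    `user_events` is a list of (product, item) pairs, `concept_of` maps
    items to concept nodes, and `content_by_concept` maps concepts to
    (product, item) pairs."""
    concepts = {concept_of[item]
                for product, item in user_events
                if product in active_products}
    recs = []
    for concept in concepts:
        for product, item in content_by_concept.get(concept, []):
            if product not in active_products:   # only suggest in less-active products
                recs.append(item)
    return recs
```

    In practice the retrieved candidates would be ranked before display; this sketch covers only the concept-based retrieval.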
