Paper Session 5: Algorithms I

Date: Tuesday, Aug 29, 2017, 11:00-12:30
Location: Room 1
Chair: Marco de Gemmis

  • [LP] Sequential User-based Recurrent Neural Network Recommendations by Tim Donkers, Benedikt Loepp and Jürgen Ziegler

    Recurrent Neural Networks are powerful tools for modeling sequences. They are flexibly extensible and can incorporate various kinds of information, including temporal order. These properties make them well suited for generating sequential recommendations. In this paper, we extend Recurrent Neural Networks by considering unique characteristics of the Recommender Systems domain. One of these characteristics is the explicit notion of the user for whom recommendations are generated. We show how individual users can be represented, in addition to sequences of consumed items, in a new type of Gated Recurrent Unit to effectively produce personalized next-item recommendations. Offline experiments on two real-world datasets indicate that our extensions clearly improve objective performance when compared to state-of-the-art recommender algorithms and to a conventional Recurrent Neural Network.
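    The user-extended recurrence can be sketched as a GRU step whose input concatenates the current item embedding with a user embedding. This is a simplified illustration of the idea only; the paper's actual gating architecture, and all names and dimensions below, are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def user_gru_step(x_item, u_user, h, Wz, Wr, Wh):
    """One GRU step conditioned on a user embedding.

    Sketch only: the user vector is simply concatenated with the item
    input at every step; the paper's user integration may differ.
    """
    xu = np.concatenate([x_item, u_user, h])
    z = sigmoid(Wz @ xu)                  # update gate
    r = sigmoid(Wr @ xu)                  # reset gate
    xur = np.concatenate([x_item, u_user, r * h])
    h_cand = np.tanh(Wh @ xur)            # candidate state
    return (1.0 - z) * h + z * h_cand

# Toy dimensions: item dim 4, user dim 3, hidden dim 5.
rng = np.random.default_rng(0)
d_i, d_u, d_h = 4, 3, 5
Wz = rng.normal(size=(d_h, d_i + d_u + d_h))
Wr = rng.normal(size=(d_h, d_i + d_u + d_h))
Wh = rng.normal(size=(d_h, d_i + d_u + d_h))
u_user = rng.normal(size=d_u)
h = np.zeros(d_h)
for x_item in rng.normal(size=(3, d_i)):  # a short consumption session
    h = user_gru_step(x_item, u_user, h, Wz, Wr, Wh)
```

    The final hidden state h would then be scored against candidate item embeddings to produce personalized next-item recommendations.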

  • [LP] Translation-based Recommendation by Ruining He, Wang-Cheng Kang and Julian McAuley

    Modeling the complex interactions between users and items as well as amongst items themselves is at the core of designing successful recommender systems. One classical setting is predicting users’ personalized sequential behavior (or ‘next-item’ recommendation), where the challenges mainly lie in modeling ‘third-order’ interactions between a user, her previously visited item(s), and the next item to consume. Existing methods typically decompose these higher-order interactions into a combination of pairwise relationships, by way of which user preferences (user-item interactions) and sequential patterns (item-item interactions) are captured by separate components. In this paper, we propose a unified method, TransRec, to model such third-order relationships for large-scale sequential prediction. Methodologically, we embed items into a ‘transition space’ where users are modeled as translation vectors operating on item sequences. Empirically, this approach outperforms the state-of-the-art on a wide spectrum of real-world datasets.
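    The translation idea can be illustrated in a few lines: the next item's embedding should lie close to the previous item's embedding plus a user-specific translation vector. The names and the optional bias term below are illustrative assumptions, not the paper's exact notation.

```python
import numpy as np

def transrec_score(gamma_prev, t_user, gamma_next, beta_next=0.0):
    """Score the transition prev -> next for a given user.

    Sketch of the translation principle: gamma_prev + t_user should be
    close to gamma_next, so higher (less negative) scores are better.
    The item bias beta_next is an illustrative optional addition.
    """
    return beta_next - np.linalg.norm(gamma_prev + t_user - gamma_next)

# A candidate lying exactly at gamma_prev + t_user scores highest.
gamma_prev = np.array([1.0, 0.0])
t_user = np.array([0.0, 1.0])
candidates = {
    "a": np.array([1.0, 1.0]),   # = gamma_prev + t_user
    "b": np.array([3.0, -2.0]),
}
best = max(candidates,
           key=lambda k: transrec_score(gamma_prev, t_user, candidates[k]))
```

    Ranking all candidate items by this score yields the user's next-item recommendation list.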

  • [LP] MPR: Multi-Objective Pairwise Ranking by Rasaq Otunba, Raimi A. Rufai and Jessica Lin

    The recommendation challenge can be posed as the problem of predicting either item ratings or item rankings, and the latter approach has proven more effective. Pairwise learning-to-rank techniques have been relatively successful and are therefore popular for learning recommender model parameters, such as those of collaborative filtering (CF) models. The model parameters are learned by optimizing smooth approximations of non-smooth information retrieval (IR) metrics, such as the mean Area Under the ROC Curve (AUC).

    Targeted campaigns are an alternative to item recommendations for increasing conversion. The user ranking task is referred to as audience retrieval. It is used in targeted campaigns to rank push campaign recipients based on their potential to convert. In this work, we consider the task of efficiently learning a ranking model that provides item recommendations and user rankings simultaneously. We adopt pairwise learning for this task. We refer to our novel approach as multi-objective pairwise ranking (MPR).

    We describe our approach and evaluate its performance experimentally.
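    A multi-objective pairwise objective of this kind can be sketched as a weighted sum of two BPR-style logistic losses, one for item ranking and one for user ranking. The weighting scheme and all names below are illustrative assumptions; the paper's exact formulation may differ.

```python
import math

def bpr_term(score_pos, score_neg):
    """BPR-style pairwise logistic loss: penalizes small margins
    between the preferred and the non-preferred score."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_pos - score_neg))))

def mpr_loss(x_ui, x_uj, x_vi, x_wi, alpha=0.5):
    """Combine an item-ranking term (user u prefers item i over item j)
    with a user-ranking term (user v is a better target for item i than
    user w); alpha trades off the two objectives."""
    return alpha * bpr_term(x_ui, x_uj) + (1.0 - alpha) * bpr_term(x_vi, x_wi)

# Larger margins on both objectives yield a smaller loss.
confident = mpr_loss(2.0, 0.0, 2.0, 0.0)
uncertain = mpr_loss(0.0, 0.0, 0.0, 0.0)
```

    Minimizing such a combined loss over sampled (user, item, item) and (item, user, user) triples would train a single model that serves both item recommendation and audience retrieval.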

  • [SP] An Elementary View on Factorization Machines by Sebastian Prillo

    Factorization Machines (FMs) are a model class capable of learning pairwise (and in general higher order) feature interactions from high dimensional, sparse data. In this paper we adopt an elementary view on FMs. Specifically, we view FMs as a sum of simple surfaces – a hyperplane plus several squared hyperplanes – in the original feature space; this elementary view, although equivalent to that of low rank matrix factorization, is geometrically more intuitive and points to some interesting generalizations. Led by our intuition, we challenge our understanding of the inductive bias of FMs by showing a simple dataset where FMs counterintuitively fail to learn the weight of the interaction between two features. We discuss the reasons and mathematically formulate and prove a form of this limitation. Also inspired by our elementary view, we propose modeling intermediate orders of interaction, such as 1.5-way FMs. Beyond the specific proposals, the goal of this paper is to expose our thoughts and ideas to the research community, in an effort to take FMs to the next level.
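    The "hyperplane plus squared hyperplanes" view corresponds to the standard second-order FM equation, whose pairwise sum can be evaluated cheaply with the well-known O(kn) identity. This is a generic FM sketch, not code from the paper.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order Factorization Machine prediction:

        y(x) = w0 + <w, x> + sum_{i<j} <V_i, V_j> x_i x_j

    The pairwise sum uses the standard O(k*n) identity
    0.5 * sum_f [ (sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2 ],
    i.e. a sum of squared hyperplanes minus a per-feature correction.
    """
    Vx = V.T @ x                                      # (k,) hyperplane values
    pairwise = 0.5 * float(np.sum(Vx**2 - (V**2).T @ (x**2)))
    return w0 + float(w @ x) + pairwise

# Toy instance: n = 3 features, factorization rank k = 2.
x = np.array([1.0, 2.0, 3.0])
w0, w = 0.5, np.array([0.1, 0.2, 0.3])
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y = fm_predict(x, w0, w, V)
```

    Because each interaction weight is the inner product <V_i, V_j> rather than a free parameter, the model can generalize to feature pairs never observed together, which is also the source of the inductive-bias limitation the paper formalizes.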

