Paper Session 2: Ranking

Date: Monday, Aug 28, 2017, 10:30-12:30
Location: Room 1
Chair: Harald Steck

  • LP: Learning to Rank with Trust and Distrust in Recommender Systems by Dimitrios Rafailidis and Fabio Crestani

    The sparsity of users’ preferences can significantly degrade the quality of recommendations in the collaborative filtering strategy. To account for the fact that the selections of social friends and foes may improve recommendation accuracy, we propose a learning to rank model that exploits users’ trust and distrust relationships. Our learning to rank model focuses on performance at the top of the list, that is, on the recommended items that end-users will actually see. In our model, we push the relevant items of users and their friends to the top of the list, while ranking those of their foes low. Furthermore, we propose a weighting strategy to capture the correlations of users’ preferences with friends’ trust and foes’ distrust degrees in two intermediate trust- and distrust-preference user latent spaces, respectively. Our experiments on the Epinions dataset show that the proposed learning to rank model significantly outperforms other state-of-the-art methods in the presence of sparsity in users’ preferences and when a part of the trust and distrust relationships is not available. Furthermore, we demonstrate the crucial role of our weighting strategy in balancing the influences of friends and foes on users’ preferences.
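
    A minimal sketch of the pairwise idea described here, assuming latent factor matrices P and Q and illustrative per-relation weights w_trust and w_distrust (this is not the authors' exact model):

```python
import numpy as np

def trust_pairwise_step(P, Q, u, i_friend, i_foe, w_trust, w_distrust, lr=0.01, reg=0.01):
    """One SGD step on a weighted pairwise hinge loss: push an item liked by a
    trusted friend of user u above an item liked by a distrusted foe.
    P, Q: user and item latent-factor matrices; w_trust, w_distrust: hypothetical
    trust/distrust degrees used to weight the update."""
    pu, qi, qj = P[u].copy(), Q[i_friend].copy(), Q[i_foe].copy()
    if 1.0 - pu @ (qi - qj) > 0:        # hinge margin violated
        w = w_trust * w_distrust        # weight the step by both degrees
        P[u] -= lr * (-w * (qi - qj) + reg * pu)
        Q[i_friend] -= lr * (-w * pu + reg * qi)
        Q[i_foe] -= lr * (w * pu + reg * qj)
```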

  • LP: Metalearning for Context-aware Filtering: Selection of Tensor Factorization Algorithms by Tiago Cunha, Carlos Soares and André C.P.L.F. de Carvalho

    This work addresses the problem of selecting Tensor Factorization algorithms for the Context-aware Filtering recommendation task using a metalearning approach. The most important challenge in applying metalearning to new problems is the development of useful measures able to characterize the data, i.e., metafeatures. We propose an extensive and exhaustive set of metafeatures to characterize the Context-aware Filtering recommendation task. These metafeatures take advantage of the tensor’s hierarchical structure via slice operations. The algorithm selection task is addressed as a Label Ranking problem, which ranks the Tensor Factorization algorithms according to their expected performance, rather than simply selecting the algorithm that is expected to perform best. A comprehensive experimental study is conducted at both levels, the baselevel and the metalevel (Tensor Factorization and Label Ranking, respectively). The results show that the proposed metafeatures lead to metamodels that tend to rank Tensor Factorization algorithms accurately and that the selected algorithms achieve high recommendation performance.
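
    As a rough illustration of slice-based metafeatures, here is a sketch that computes simple density statistics over the slices of a (user x item x context) tensor; the statistics themselves are placeholders, not the paper's metafeature set:

```python
import numpy as np

def slice_metafeatures(tensor):
    """Illustrative metafeatures for a (user x item x context) rating tensor:
    simple statistics aggregated over the slices along each mode."""
    feats = {}
    for mode, name in enumerate(("user", "item", "context")):
        slices = np.moveaxis(tensor, mode, 0)          # one slice per index of this mode
        densities = [(s != 0).mean() for s in slices]  # fraction of observed entries per slice
        feats[f"{name}_density_mean"] = float(np.mean(densities))
        feats[f"{name}_density_std"] = float(np.std(densities))
    return feats
```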

  • LP: A Gradient-based Adaptive Learning Framework for Efficient Personal Recommendation by Yue Ning, Yue Shi, Liangjie Hong, Huzefa Rangwala and Naren Ramakrishnan

    Recommending personalized content to users is a long-standing challenge for many online services, including Facebook, Yahoo, LinkedIn and Twitter. Traditional recommendation models such as latent factor models and feature-based models are usually trained for all users and optimize an “average” experience for them, yielding sub-optimal solutions. Although multi-task learning provides an opportunity to learn personalized models per user, learning algorithms are usually tailored to specific models (e.g., generalized linear models or matrix factorization), creating obstacles to a unified engineering interface, which is important for large Internet companies. In this paper, we present an empirical framework that learns user-specific personal models for content recommendation by utilizing gradient information from a global model. The proposed method can potentially benefit any model that can be optimized through gradients, offering a lightweight yet generic alternative to conventional multi-task learning algorithms for user personalization. We demonstrate the effectiveness of the proposed framework by incorporating it into three popular machine learning algorithms: logistic regression, gradient boosting decision trees and matrix factorization. Our extensive empirical evaluation shows that the proposed framework can significantly improve the efficiency of personalized recommendation on real-world datasets.
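
    A minimal sketch of the general idea of adapting a global model per user via gradients, illustrated here with logistic regression (an assumption for illustration, not the paper's exact framework):

```python
import numpy as np

def personalize_logreg(global_w, user_X, user_y, lr=0.05, steps=10):
    """Illustrative per-user adaptation: start from globally trained weights and
    take a few gradient steps on one user's own interactions."""
    w = global_w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(user_X @ w)))       # predicted probabilities
        grad = user_X.T @ (p - user_y) / len(user_y)  # log-loss gradient on this user's data
        w -= lr * grad
    return w
```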

  • SP: entity2rec: Learning User-Item Relatedness from Knowledge Graphs for Top-N Item Recommendation by Enrico Palumbo, Giuseppe Rizzo and Raphaël Troncy

    Knowledge Graphs have proven to be extremely valuable to recommender systems, as they enable hybrid graph-based recommendation models encompassing both collaborative and content information. Leveraging this wealth of heterogeneous information for top-N item recommendation is a challenging task, as it requires the ability to effectively encode a diversity of semantic relations and connectivity patterns.

    In this work, we propose a novel approach to learning user-item relatedness from knowledge graphs for top-N item recommendation. We start from a knowledge graph modeling user-item and item-item relations, and we learn property-specific vector representations of users and items by applying neural language models to the network. These representations are used to create property-specific user-item relatedness features, which are in turn fed into learning to rank algorithms to learn a global relatedness model that optimizes top-N item recommendations. We evaluate the proposed approach in terms of ranking quality on the MovieLens 1M dataset, outperforming two state-of-the-art recommender systems, and we assess the importance of property-specific relatedness scores for the overall ranking quality.
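
    An illustrative sketch of turning property-specific embeddings into user-item relatedness features; the feature choice (cosine similarity) and the dictionary layout are assumptions, not the paper's implementation:

```python
import numpy as np

def relatedness_features(user_vecs, item_vecs):
    """Property-specific user-item relatedness: cosine similarity between the
    user's and the item's embeddings in each property-specific space; the
    resulting vector would be fed as features to a learning-to-rank model.
    user_vecs, item_vecs: dicts mapping property name -> embedding vector."""
    feats = []
    for prop in sorted(user_vecs):
        u, i = user_vecs[prop], item_vecs[prop]
        cos = u @ i / (np.linalg.norm(u) * np.linalg.norm(i) + 1e-12)
        feats.append(float(cos))
    return np.array(feats)
```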

  • SP: On Parallelizing SGD for Pairwise Learning to Rank in Collaborative Filtering Recommender Systems by Murat Yagci, Tevfik Aytekin and Fikret Gurgen

    Learning to rank with pairwise loss functions has been found useful in collaborative filtering recommender systems. At web scale, the optimization is often based on stochastic gradient descent (SGD), which has a sequential nature. We investigate two shared-memory, lock-free parallel SGD schemes, based on block partitioning and on no partitioning, for use with pairwise loss functions. To speed up convergence to a solution, we extrapolate simple practical algorithms from their application to pointwise learning to rank. Experimental results show that the proposed algorithms are quite useful in terms of ranking ability and speedup compared to their sequential counterpart.
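
    A minimal sketch of a lock-free (Hogwild-style) parallel SGD loop over pairwise samples, using a BPR-style gradient for illustration; the block-partitioning and no-partitioning schemes studied in the paper are not reproduced here:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def lockfree_pairwise_sgd(P, Q, samples, lr=0.01, workers=4):
    """Workers update the shared factor matrices P and Q without locking,
    accepting occasional overwrites; samples are (user, preferred_item, other_item)."""
    def run(chunk):
        for u, i, j in chunk:
            pu, qi, qj = P[u].copy(), Q[i].copy(), Q[j].copy()
            g = -1.0 / (1.0 + np.exp(pu @ (qi - qj)))  # gradient of -log sigmoid, BPR-style
            P[u] -= lr * g * (qi - qj)
            Q[i] -= lr * g * pu
            Q[j] += lr * g * pu
    chunks = np.array_split(np.asarray(samples, dtype=int), workers)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        list(ex.map(run, chunks))
```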

  • SP: Controlling Popularity Bias in Learning-to-Rank Recommendation by Himan Abdollahpouri, Robin Burke and Bamshad Mobasher

    Many recommendation algorithms suffer from popularity bias in their output: popular items are recommended frequently while less popular ones are recommended rarely, if at all. However, less popular, long-tail items are precisely those that are desirable for increased user satisfaction. In this paper, we introduce a flexible regularization-based framework to enhance the long-tail coverage of recommendation lists in a learning-to-rank algorithm. We show that regularization provides a tunable mechanism for controlling the trade-off between accuracy and coverage. Moreover, experimental results on two datasets show that it is possible to achieve higher coverage of long-tail items without substantially sacrificing ranking performance.
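
    A rough sketch of the general accuracy-versus-coverage idea, with a made-up penalty term standing in for the paper's regularizer:

```python
import numpy as np

def regularized_objective(P, Q, R, tail_mask, lam=0.1):
    """Illustrative accuracy-plus-coverage objective: squared reconstruction error
    plus a penalty that grows when predicted scores concentrate on short-head items.
    tail_mask: boolean vector flagging long-tail items; lam tunes the trade-off."""
    scores = P @ Q.T
    accuracy_loss = np.sum((R - scores) ** 2)
    head = scores[:, ~tail_mask].sum()
    tail = scores[:, tail_mask].sum()
    coverage_penalty = max(0.0, head - tail)   # penalize short-head dominance
    return accuracy_loss + lam * coverage_penalty
```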
