Session: Recommendation Algorithms

Date: Friday, October 24, 08:45-10:30

  • Boosting collaborative filtering based on statistical prediction errors

    by Shengchao Ding, Shiwan Zhao, Quan Yuan, Xiatian Zhang, Rongyao Fu, Lawrence Bergman

    User-based collaborative filtering methods typically predict a user's rating of an item as a weighted average of the ratings given by similar users, where each weight is proportional to the user similarity. The accuracy of these user similarities is therefore key to successful recommendation, both for selecting neighborhoods and for computing predictions. However, the computed similarities between users are somewhat inaccurate due to data sparsity.

    For a given user, the set of neighbors selected for predicting ratings on different items typically exhibit overlap. Thus, error terms contributing to rating predictions will tend to be shared, leading to correlation of the prediction errors.

    Through a set of case studies, we discovered that for a given user, the prediction errors on different items are correlated to the similarities of the corresponding items, and to the degree to which they share common neighbors.

    We propose a framework to improve prediction accuracy based on these statistical prediction errors, and introduce two different strategies for estimating the prediction error on a target item. Our experiments show that these approaches significantly improve the prediction accuracy of standard user-based methods and outperform other state-of-the-art methods.

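    The weighted-average prediction scheme this abstract builds on can be sketched as follows. The toy ratings, the choice of cosine similarity, and the function names are illustrative assumptions for the standard user-based approach, not the paper's implementation:

```python
import math

# Toy ratings: user -> {item: rating}. Illustrative data only.
ratings = {
    "u1": {"b": 3.0, "c": 4.0},
    "u2": {"a": 4.0, "b": 3.0, "c": 5.0},
    "u3": {"a": 1.0, "b": 5.0, "c": 2.0},
}

def cosine_sim(r1, r2):
    """Cosine similarity over the items both users have rated."""
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    dot = sum(r1[i] * r2[i] for i in common)
    n1 = math.sqrt(sum(r1[i] ** 2 for i in common))
    n2 = math.sqrt(sum(r2[i] ** 2 for i in common))
    return dot / (n1 * n2)

def predict(user, item):
    """Weighted average of other users' ratings; weight = user similarity."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine_sim(ratings[user], r)
        if s > 0:
            num += s * r[item]
            den += s
    return num / den if den else None
```

    Because the same neighbors (here u2 and u3) contribute to every prediction for u1, any error in their similarities is shared across items, which is the correlation of prediction errors the paper exploits.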

  • The long tail of recommender systems and how to leverage it

    by Yoon-Joo Park, Alexander Tuzhilin

    The paper studies the Long Tail problem of recommender systems: many items in the Long Tail have only a few ratings, making them hard to use in recommendations. The approach presented in the paper splits the whole itemset into head and tail parts and clusters only the tail items. Recommendations for the tail items are then based on the ratings in these clusters, while recommendations for the head items are based on the ratings of individual items. We show that, when such partitioning and clustering are done properly, this reduces the recommendation error rates for the tail items while maintaining reasonable computational performance.

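    The head/tail split and cluster-level prediction can be illustrated with a minimal sketch. The popularity threshold, the single-cluster grouping, and all names here are hypothetical stand-ins; the paper clusters the tail items properly rather than lumping them together:

```python
from collections import Counter

# Toy (user, item, rating) tuples; data and threshold are illustrative only.
ratings = [
    ("u1", "hit1", 5), ("u2", "hit1", 4), ("u3", "hit1", 4),
    ("u1", "hit2", 3), ("u2", "hit2", 4), ("u3", "hit2", 2),
    ("u1", "tail1", 5), ("u2", "tail2", 4), ("u3", "tail3", 5),
]

HEAD_THRESHOLD = 3  # items with >= 3 ratings form the head

counts = Counter(item for _, item, _ in ratings)
head = {i for i, c in counts.items() if c >= HEAD_THRESHOLD}
tail = set(counts) - head

# Hypothetical clustering: all tail items in one cluster for illustration.
clusters = {"tail-cluster": tail}

def predict(item):
    """Head items: item-level mean rating. Tail items: cluster-level mean."""
    if item in head:
        pool = [r for _, i, r in ratings if i == item]
    else:
        members = next(m for m in clusters.values() if item in m)
        pool = [r for _, i, r in ratings if i in members]
    return sum(pool) / len(pool)
```

    A tail item with a single rating thus borrows statistical strength from its whole cluster instead of relying on one data point.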

  • Tied boltzmann machines for cold start recommendations

    by Asela Gunawardana, Christopher Meek

    We describe a novel statistical model, the tied Boltzmann machine, for combining collaborative and content information for recommendations. In our model, pairwise interactions between items are captured through a Boltzmann machine, whose parameters are constrained according to the content associated with the items. This allows the model to use content information to recommend items that are not seen during training. We describe a tractable algorithm for training the model, and give experimental results evaluating the model in two cold start recommendation tasks on the MovieLens data set.

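    The tying idea — pairwise item weights derived from content through shared parameters, so that an item unseen during training can still be scored — can be sketched under heavily simplified assumptions. The hand-set parameters, feature vectors, and single sigmoid update below are illustrative; they are not the paper's model or training algorithm:

```python
import math

# Content features per item (e.g., genre indicators). Illustrative data.
content = {
    "action1": [1.0, 0.0],
    "action2": [1.0, 0.0],
    "drama1":  [0.0, 1.0],
}

# Shared ("tied") parameters: one weight per content-feature product,
# reused by every item pair. Values hand-set here, not trained.
theta = [2.0, 2.0]

def pair_weight(i, j):
    """Pairwise interaction tied to content: w_ij = sum_k theta_k * c_ik * c_jk."""
    return sum(t * a * b for t, a, b in zip(theta, content[i], content[j]))

def score_unseen(new_item, liked_items):
    """Probability the cold-start item is 'on' given the liked items:
    a sigmoid of the summed tied interactions (one Gibbs-style update)."""
    act = sum(pair_weight(new_item, j) for j in liked_items)
    return 1.0 / (1.0 + math.exp(-act))
```

    Because the weights come from content rather than from per-pair parameters, "action2" can be scored against a user who liked "action1" even if "action2" never appeared in training.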

  • MobHinter: epidemic collaborative filtering and self-organization in mobile ad-hoc networks

    by Rossano Schifanella, André Panisson, Cristina Gena, Giancarlo Ruffo

    We focus on collaborative filtering dealing with self-organizing communities, host mobility, wireless access, and ad-hoc communications. In such a domain, knowledge representation and user profiling can be hard; remote servers can often be unreachable due to client mobility; and feedback ratings collected during random connections to other users' ad-hoc devices can be useless because of natural differences between human beings. Our approach is based on so-called Affinity Networks and on a novel system, called MobHinter, that epidemically spreads recommendations through spontaneous similarities between users. The main results of our study are twofold: first, we show how to reach recommendation accuracies in the mobile domain comparable to those in a complete-knowledge scenario; second, we propose epidemic collaborative strategies that can rapidly and realistically reduce the cold-start problem.

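    The epidemic exchange over an affinity network can be sketched as a toy simulation. The Jaccard affinity measure, the threshold, and the profiles are illustrative assumptions rather than MobHinter's actual protocol:

```python
# Each device holds a profile of liked items. Affinity = Jaccard overlap.
profiles = {
    "A": {"x", "y"},
    "B": {"x", "y", "z"},
    "C": {"p", "q"},
}
AFFINITY_MIN = 0.3  # hypothetical threshold for an affinity link

def affinity(p, q):
    """Jaccard similarity between two item sets."""
    return len(p & q) / len(p | q) if p | q else 0.0

def encounter(a, b, known):
    """On an ad-hoc encounter, exchange items only with affine peers,
    epidemically spreading recommendations without a central server."""
    if affinity(profiles[a], profiles[b]) >= AFFINITY_MIN:
        known[a] |= profiles[b]
        known[b] |= profiles[a]

known = {u: set(items) for u, items in profiles.items()}
encounter("A", "B", known)  # affine peers: A learns about "z"
encounter("A", "C", known)  # no overlap: nothing is exchanged
```

    Repeated random encounters let recommendations diffuse through the affinity network, which is how such a scheme can chip away at the cold-start problem without any server.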

  • Mining recommendations from the web

    by Guy Shani, Max Chickering, Christopher Meek

    In this paper we study the challenges of using data collected from the web for recommendations and evaluate its effectiveness. We provide experimental results, including a user study, showing that our methods produce good recommendations in realistic applications. We also propose a new evaluation metric that takes into account the difficulty of prediction, and show that it aligns well with the results of the user study.

