Session: Privacy & Trust
Date: Friday, October 19, 11:00-12:45
- Private distributed collaborative filtering using estimated concordance measures
by Neal Lathia, Stephen Hailes, Licia Capra
Collaborative filtering has become an established method of measuring users' similarity and predicting their interests. However, prediction accuracy comes at the cost of users' privacy: to derive accurate similarity measures, users are required to share their rating histories with each other. In this work we propose a new similarity measure that achieves prediction accuracy comparable to the Pearson correlation coefficient and can be estimated without violating users' privacy. This novel method works by estimating the number of concordant, discordant and tied pairs of ratings between two users with respect to a shared random set of ratings. In doing so, neither the items rated nor the ratings themselves are disclosed, thus achieving strictly private collaborative filtering. The technique has been evaluated on the recently released Netflix Prize dataset.
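The abstract's private estimation protocol is not spelled out here, but the underlying similarity it approximates is a Kendall-tau-style score over concordant, discordant and tied rating pairs. A minimal sketch of that score, computed directly on two users' co-rated items (the paper estimates these counts without disclosing ratings; this illustration assumes full access):

```python
def concordance_similarity(a, b):
    """Kendall-tau-style similarity from concordant/discordant/tied pairs.

    a, b: dicts mapping item -> rating. Only co-rated items are compared.
    Returns a value in [-1, 1]: 1 for identical orderings, -1 for reversed.
    """
    items = sorted(set(a) & set(b))
    concordant = discordant = tied = 0
    # Compare every pair of co-rated items: do the two users order them alike?
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            da = a[items[i]] - a[items[j]]
            db = b[items[i]] - b[items[j]]
            if da * db > 0:
                concordant += 1
            elif da * db < 0:
                discordant += 1
            else:
                tied += 1
    total = concordant + discordant + tied
    if total == 0:
        return 0.0
    return (concordant - discordant) / total
```

Because the score depends only on pair counts, it can in principle be estimated from aggregate comparisons against a shared random rating set, which is the privacy angle the paper exploits.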
- Enhancing privacy and preserving accuracy of a distributed collaborative filtering
by Shlomo Berkovsky, Yaniv Eytani, Tsvi Kuflik, Francesco Ricci
Collaborative Filtering (CF) is a powerful technique for generating personalized predictions. CF systems are typically based on a central storage of user profiles used for generating the recommendations. However, such centralized storage introduces a severe privacy risk, since the profiles may be accessed for purposes, possibly malicious, unrelated to the recommendation process. Recent research has proposed protecting the privacy of CF by distributing the profiles across multiple repositories and exchanging only the subset of the profile data that is useful for the recommendation. This work investigates how decentralized, distributed storage of user profiles, combined with data modification techniques, may mitigate some of these privacy issues. Results of an experimental evaluation show that parts of the user profiles can be modified without hampering the accuracy of CF predictions. The experiments also indicate which parts of the user profiles are most useful for generating accurate CF predictions, while their exposure still preserves the essential privacy of the users.
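The "data modification" the abstract refers to can be illustrated by a simple obfuscation step: replace a fraction of a user's ratings with a neutral default before the profile leaves the user's device, then measure how prediction accuracy degrades. This is a hedged sketch of one such policy, not the specific scheme evaluated in the paper:

```python
import random

def obfuscate_profile(profile, fraction, default=3.0, rng=None):
    """Replace a random fraction of a user's ratings with a default value.

    profile: dict item -> rating. Returns a new, obfuscated profile;
    the original is left untouched. `default` stands in for a neutral
    rating on a 1-5 scale (an illustrative assumption).
    """
    rng = rng or random.Random()
    items = list(profile)
    k = int(round(fraction * len(items)))
    masked = set(rng.sample(items, k))
    return {i: (default if i in masked else r) for i, r in profile.items()}
```

Sweeping `fraction` from 0 to 1 and re-running the CF accuracy evaluation is the kind of experiment that reveals which profile parts can be hidden cheaply.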
- Trust-aware recommender systems
by Paolo Massa, Paolo Avesani
Recommender Systems based on Collaborative Filtering suggest to users items they might like. However, due to the sparsity of the input ratings matrix, the step of finding similar users often fails. We propose replacing this step with a trust metric: an algorithm able to propagate trust over the trust network and to estimate a trust weight that can be used in place of the similarity weight. An empirical evaluation on the Epinions.com dataset shows that Recommender Systems that make use of trust information are the most effective in terms of accuracy while preserving good coverage. This is especially evident for users who have provided few ratings.
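Trust propagation of the kind the abstract describes can be sketched as a breadth-first walk over the trust network, attenuating trust multiplicatively along the first (shortest) path found up to a horizon. This is a simplification in the spirit of metrics like MoleTrust, not the paper's exact algorithm:

```python
from collections import deque

def propagate_trust(edges, source, max_depth=3):
    """Estimate trust from `source` to reachable users.

    edges: dict user -> {neighbor: direct trust weight in [0, 1]}.
    A user's estimated trust is the product of edge weights along the
    first shortest path reached, within `max_depth` hops.
    """
    trust = {source: 1.0}
    frontier = deque([(source, 0)])
    while frontier:
        user, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for neighbor, weight in edges.get(user, {}).items():
            if neighbor not in trust:  # first path wins (shortest, via BFS)
                trust[neighbor] = trust[user] * weight
                frontier.append((neighbor, depth + 1))
    return trust
```

The resulting trust weights can then stand in for user-user similarity weights in a standard CF prediction formula, which is what makes sparse-profile ("cold start") users reachable.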
- The influence limiter: provably manipulation-resistant recommender systems
by Paul Resnick, Rahul Sami
An attacker can draw attention to items that don't deserve it by manipulating recommender systems. We describe an influence-limiting algorithm that can turn existing recommender systems into manipulation-resistant ones. Honest reporting is the optimal strategy for raters who wish to maximize their influence, and if an attacker can create only a bounded number of shills, the attacker can mislead the system only to a limited extent. The system nevertheless eventually makes full use of information from honest, informative raters. We characterize both the influence limits and the information loss they incur in terms of the information-theoretic concepts of loss functions and entropies.
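The core idea can be illustrated with two small rules: cap how far a single rater's report can move the current prediction by that rater's reputation, and update reputation with a proper scoring rule so that honest, accurate reports earn influence over time. This is an illustrative sketch, not Resnick and Sami's exact update:

```python
def limited_update(prior, report, reputation):
    """Move the prediction toward a rater's report by at most `reputation`.

    prior, report: probabilities in [0, 1] that an item is good.
    reputation: the rater's influence budget in [0, 1]; a fresh shill
    with reputation near 0 barely moves the prediction.
    """
    weight = min(1.0, max(0.0, reputation))
    return (1 - weight) * prior + weight * report

def update_reputation(reputation, report, outcome, rate=0.1):
    """Reward accurate reports, penalize inaccurate ones.

    Uses a simple quadratic (Brier-style) score: 1 for a perfect report,
    0 for the worst. Reputation never drops below zero.
    """
    score = 1.0 - (report - outcome) ** 2
    return max(0.0, reputation + rate * (score - 0.5))
```

Because a proper scoring rule drives the reputation update, misreporting is an expected loss of future influence, which is the sense in which honest reporting is optimal.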
RecSys 2007 (Minnesota)