Paper Session P4: Fairness, Filter Bubbles, and Ethical Concerns

Session A: 16:00–17:30, chaired by Robin Burke and Ludovico Boratto. Attend in Whova
Session B: 3:00–4:30, chaired by Bamshad Mobasher and Alex Tuzhilin. Attend in Whova

  • LP: Revisiting Adversarially Learned Injection Attacks Against Recommender Systems
    by Jiaxi Tang (Simon Fraser University), Hongyi Wen (Cornell Tech, Cornell University), Ke Wang (Simon Fraser University)

    Recommender systems play an important role in modern information and e-commerce applications. While a growing body of research is dedicated to improving the relevance and diversity of recommendations, the potential risks of state-of-the-art recommendation models remain under-explored: these models can be attacked by malicious third parties who inject fake user interactions to achieve their goals. This paper revisits the adversarially-learned injection attack problem, where the injected fake user ‘behaviors’ are learned locally by the attackers with their own model – one that is potentially different from the model under attack, but shares similar properties to allow attack transfer. We find that most existing works in the literature suffer from two major limitations: (1) they do not solve the optimization problem precisely, making the attack less harmful than it could be, and (2) they assume perfect knowledge of the attacked model, leaving realistic attack capabilities poorly understood. We demonstrate that solving the fake-user generation problem exactly, as an optimization problem, can lead to a much larger impact. Our experiments on a real-world dataset reveal important properties of the attack, including its transferability and its limitations. These findings can inspire useful defenses against this class of attacks. (A simplified sketch of the attack setting follows this entry.)

    Full text in ACM Digital Library
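
    The sketch below illustrates the attack setting in a deliberately simplified form: the attacker fits a local surrogate matrix-factorization model, uses it to craft fake profiles that pair a target item with similar “filler” items, and the victim model is then retrained on the poisoned log. All model choices, sizes, and parameter values are illustrative assumptions; the paper itself solves the fake-user generation step as an explicit optimization problem rather than with the greedy heuristic used here.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy implicit-feedback log: 60 real users x 25 items (sizes illustrative).
      n_users, n_items, k = 60, 25, 8
      X = (rng.random((n_users, n_items)) < 0.15).astype(float)

      def fit_mf(X, k, iters=300, lr=0.05, reg=0.02, seed=1):
          """Tiny matrix-factorization recommender trained by gradient descent."""
          r = np.random.default_rng(seed)
          U = 0.1 * r.standard_normal((X.shape[0], k))
          V = 0.1 * r.standard_normal((X.shape[1], k))
          for _ in range(iters):
              E = U @ V.T - X                     # reconstruction error
              U -= lr * (E @ V + reg * U)
              V -= lr * (E.T @ U + reg * V)
          return U, V

      def mean_rank(U, V, item):
          """Average rank of `item` in users' score lists (lower = more visible)."""
          scores = U @ V.T
          return (scores > scores[:, [item]]).sum(axis=1).mean()

      target = 3

      # Attacker: fit a local surrogate (possibly different from the victim) and
      # pick filler items whose latent factors are closest to the target's, so
      # the fake profiles resemble plausible fans of the target item.
      _, V_surr = fit_mf(X, k, seed=7)
      fillers = np.argsort(-(V_surr @ V_surr[target]))[:5]

      n_fake = 6
      fake = np.zeros((n_fake, n_items))
      fake[:, fillers] = 1.0
      fake[:, target] = 1.0

      # Victim: retrain on the poisoned log and compare the target item's rank.
      U0, V0 = fit_mf(X, k)
      U1, V1 = fit_mf(np.vstack([X, fake]), k)
      print("mean rank before attack:", mean_rank(U0, V0, target))
      print("mean rank after attack :", mean_rank(U1[:n_users], V1, target))

    The gap between the target item’s mean rank before and after injection gives a crude measure of attack impact; the paper additionally studies how well such attacks transfer when the surrogate differs from the attacked model.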

  • LP: Global and Local Differential Privacy for Collaborative Bandits
    by Huazheng Wang (University of Virginia), Qian Zhao (Bloomberg L.P.), Qingyun Wu (University of Virginia), Shubham Chopra (Bloomberg L.P.), Abhinav Khaitan (Bloomberg L.P.), Hongning Wang (University of Virginia)

    Collaborative bandit learning has become an emerging focus in personalized recommendation. It leverages dependence among users for joint model estimation and recommendation. Because such online learning solutions learn directly from user feedback, e.g., result clicks, they bring new challenges in privacy protection. Despite recent studies of privacy in contextual bandit algorithms, how to efficiently protect user privacy in a collaborative bandit learning environment remains unknown.
    In this paper, we develop a general solution framework that achieves differential privacy in collaborative bandit algorithms, under the notions of global differential privacy and local differential privacy. The key idea is to inject noise into a bandit model’s sufficient statistics (on the server side to achieve global differential privacy, or on the client side to achieve local differential privacy) and to calibrate the noise scale with respect to the structure of collaboration among users. We study two widely used collaborative bandit algorithms to illustrate the application of our framework. Theoretical analysis proves that our private algorithms reduce the added regret caused by the privacy-preserving mechanism compared to their linear bandit counterparts, i.e., collaboration actually helps achieve stronger privacy with the same amount of injected noise. We also empirically evaluate the algorithms on both synthetic and real-world datasets to demonstrate the trade-off between privacy and utility. (A minimal sketch of where the noise is injected follows this entry.)

    Full text in ACM Digital Library
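
    A minimal sketch of the noise-injection idea, assuming a LinUCB-style linear bandit whose sufficient statistics are A = Σ x xᵀ and b = Σ r·x. Under local differential privacy each client perturbs its own update before sharing it; under global differential privacy the server perturbs the aggregated statistics. The noise scale and all other values are toy assumptions, and the collaborative (multi-user) structure the paper exploits is omitted here.

      import numpy as np

      rng = np.random.default_rng(0)
      d = 5        # context dimension (toy value)
      sigma = 0.5  # noise scale; fixed toy value -- the paper calibrates it to
                   # the privacy budget and the collaboration structure

      class BanditStats:
          """Sufficient statistics of a linear bandit: A = sum x x^T, b = sum r x."""
          def __init__(self, d):
              self.A = np.eye(d)   # ridge prior keeps A invertible
              self.b = np.zeros(d)

          def theta(self):
              return np.linalg.solve(self.A, self.b)

      def sym_noise(d):
          """Symmetric Gaussian noise for perturbing the matrix A."""
          N = sigma * rng.standard_normal((d, d))
          return (N + N.T) / 2

      def exact_update(stats, x, r):
          stats.A += np.outer(x, x)
          stats.b += r * x

      def local_dp_update(stats, x, r):
          """Local DP: the client perturbs its own update before sharing it."""
          stats.A += np.outer(x, x) + sym_noise(d)
          stats.b += r * x + sigma * rng.standard_normal(d)

      def global_dp_estimate(stats):
          """Global DP: the server adds noise once to the aggregated statistics."""
          A = stats.A + sym_noise(d)
          b = stats.b + sigma * rng.standard_normal(d)
          return np.linalg.solve(A, b)

      # Simulate feedback from a user with hidden preference vector theta_star.
      theta_star = rng.standard_normal(d)
      server, client = BanditStats(d), BanditStats(d)
      for _ in range(500):
          x = rng.standard_normal(d)
          x /= np.linalg.norm(x)
          r = x @ theta_star + 0.1 * rng.standard_normal()
          exact_update(server, x, r)     # exact statistics held at the server
          local_dp_update(client, x, r)  # noisy statistics under local DP

      print("global-DP estimate error:",
            np.linalg.norm(global_dp_estimate(server) - theta_star))
      print("local-DP estimate error: ",
            np.linalg.norm(client.theta() - theta_star))

    Local DP typically costs more utility than global DP at the same noise level, since every single update is perturbed; the paper’s analysis shows how collaboration among users offsets part of this cost.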

  • LP: Debiasing Item-to-Item Recommendations With Small Annotated Datasets
    by Tobias Schnabel (Microsoft), Paul N. Bennett (Microsoft)

    Item-to-item recommendation (e.g., “People who like this also like…”) is a ubiquitous and important type of recommendation in real-world systems. Observational data from historical interaction logs abound in these settings. However, since virtually all observational data exhibit biases, such as time-in-inventory or interface biases, it is crucial that recommender algorithms account for them. In this paper, we develop a principled approach to item-to-item recommendation based on causal inference and present a practical and highly effective method for estimating the causal parameters from a small annotated dataset. Empirically, we find that our approach substantially improves upon existing methods while requiring only small amounts of annotated data. (A sketch of the exposure bias and an inverse-propensity correction follows this entry.)

    Full text in ACM Digital Library
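
    To make the exposure bias concrete, the sketch below simulates a click log in which items are shown with very different probabilities, then applies a generic inverse-propensity correction whose propensities are estimated from a small annotated subset in which exposure was recorded. This illustrates the causal-inference idea under assumed data; it is not the paper’s estimator, and all names and values are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      n_items = 30

      # Ground-truth co-preference P(like j | liked seed item) -- unknown in
      # practice; used here only to simulate logs and score the estimators.
      true_pref = rng.random(n_items)

      # Exposure bias: items are shown with very different probabilities
      # (shelf position, time in inventory, interface effects).
      expose_prob = rng.random(n_items) ** 2 + 0.02

      # Observational log: a click can only happen on an exposed item.
      n_logs = 20000
      item = rng.integers(0, n_items, n_logs)
      exposed = rng.random(n_logs) < expose_prob[item]
      clicked = exposed & (rng.random(n_logs) < true_pref[item])

      shown = np.bincount(item, minlength=n_items)

      # Naive estimate: raw click rate, confounded by exposure.
      naive = np.bincount(item, weights=clicked.astype(float),
                          minlength=n_items) / shown

      # Inverse-propensity correction: estimate per-item exposure propensities
      # from a small annotated subset, then reweight the observed clicks.
      ann = rng.choice(n_logs, size=500, replace=False)
      prop_hat = np.full(n_items, exposed[ann].mean())   # pooled fallback
      for i in range(n_items):
          mask = item[ann] == i
          if mask.sum() >= 5:
              prop_hat[i] = max(exposed[ann][mask].mean(), 0.02)

      ips = np.bincount(item, weights=clicked / prop_hat[item],
                        minlength=n_items) / shown

      print("naive estimator MAE:", np.abs(naive - true_pref).mean())
      print("IPS-style MAE:     ", np.abs(ips - true_pref).mean())

    Even a few hundred annotated examples suffice here to bring the reweighted estimate much closer to the true preferences than the raw click rate, mirroring the paper’s finding that small annotated datasets go a long way.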
