- Revisiting Adversarially Learned Injection Attacks Against Recommender Systems
by Jiaxi Tang (Simon Fraser University), Hongyi Wen (Cornell Tech, Cornell University), Ke Wang (Simon Fraser University)
Recommender systems play an important role in modern information and e-commerce applications. While a growing body of research is dedicated to improving the relevance and diversity of recommendations, the potential risks of state-of-the-art recommendation models remain under-explored: these models can be attacked by malicious third parties that inject fake user interactions to achieve their goals. This paper revisits the adversarially-learned injection attack problem, where the injected fake user ‘behaviors’ are learned locally by the attackers with their own model, one that is potentially different from the model under attack but shares enough properties to allow the attack to transfer. We find that most existing works in the literature suffer from two major limitations: (1) they do not solve the underlying optimization problem precisely, making the attack less harmful than it could be; and (2) they assume perfect knowledge of the system under attack, leaving realistic attack capabilities poorly understood. We demonstrate that solving the fake-user generation problem exactly, as an optimization problem, can lead to a much larger impact. Our experiments on a real-world dataset reveal important properties of the attack, including its transferability and its limitations. These findings can inspire useful defenses against this class of attack.
Full text in ACM Digital Library
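To make the optimization view concrete, below is a minimal sketch of such an attack: fake user profiles are learned by projected gradient ascent against a simple dot-product item-similarity surrogate, then discretized under an interaction budget. This is an illustration of the general technique, not the authors' method; the surrogate, the objective, and all parameter values (`n_fake`, `budget`, the learning rate) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy implicit-feedback matrix: 100 real users x 50 items (assumed data).
R = (rng.random((100, 50)) < 0.1).astype(float)
target = 7        # item the attacker wants promoted
n_fake = 5        # number of fake user profiles to inject
budget = 8        # interactions allowed per fake profile

# Surrogate recommender: item-item dot-product similarity S = R_aug.T @ R_aug.
# A user u's score for the target t is R[u] @ S[:, t], so the fake rows F
# contribute the term  r_bar^T (F^T F) e_t,  with r_bar aggregating real users.
r_bar = R.sum(axis=0)
e_t = np.zeros(R.shape[1])
e_t[target] = 1.0

# Projected gradient ascent on continuous fake profiles F in [0, 1].
F = rng.random((n_fake, R.shape[1])) * 0.01
lr = 0.05
for _ in range(200):
    # d/dF of r_bar^T F^T F e_t  =  F (r_bar e_t^T + e_t r_bar^T)
    grad = F @ (np.outer(r_bar, e_t) + np.outer(e_t, r_bar))
    F = np.clip(F + lr * grad, 0.0, 1.0)

# Discretize: each fake user keeps its `budget` highest-weight items.
fake_profiles = np.zeros_like(F)
for i in range(n_fake):
    fake_profiles[i, np.argsort(F[i])[-budget:]] = 1.0

print("fake users interact with items:",
      [np.flatnonzero(row).tolist() for row in fake_profiles])
```

The gradient pushes each fake profile toward the target item plus globally popular items, the classic shape of an injection attack; solving this step exactly rather than heuristically is what the paper argues makes the attack substantially more harmful.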
- Towards Safety and Sustainability: Designing Local Recommendations for Post-pandemic World
by Gourab K Patro (Indian Institute of Technology Kharagpur), Abhijnan Chakraborty (Max Planck Institute for Software Systems), Ashmi Banerjee (Technical University of Munich), Niloy Ganguly (Indian Institute of Technology Kharagpur)
The COVID-19 pandemic has made it paramount to maintain social distancing to limit viral transmission. At the same time, local businesses (e.g., restaurants, cafes, stores, malls) need to operate to ensure their economic sustainability. Given the wide use of local recommendation platforms like Google Local and Yelp by customers choosing local businesses, we propose to design local recommendation systems that help achieve both safety and sustainability goals. Our investigation of existing local recommendation systems shows that they can lead to overcrowding at some businesses, compromising customer safety, and very low footfall at others, threatening their economic sustainability. On the other hand, naive ways of ensuring safety and sustainability can cause a significant loss of recommendation utility for customers. Thus, we formally express the problem as a multi-objective optimization and solve it by mapping it to a bipartite matching problem with polynomial-time solutions. Extensive experiments on multiple real-world datasets demonstrate the efficacy of our approach, along with its three-way control over the sustainability, safety, and utility goals.
Full text in ACM Digital Library
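The mapping to matching can be sketched as follows, under assumed names and parameters (a hedged illustration, not the paper's algorithm): each business is expanded into capacity-limited slots encoding its safety cap, a few bonus-weighted slots per business encode a desired minimum footfall, and a standard assignment solver then trades utility off against both constraints.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

n_customers, n_businesses = 60, 8
relevance = rng.random((n_customers, n_businesses))  # utility of each pairing

cap = np.full(n_businesses, 10)    # safety: max customers per business
floor = np.full(n_businesses, 3)   # sustainability: min desired footfall
bonus = 10.0                       # weight steering customers to floor slots

# Expand each business into `cap` slots; the first `floor` slots carry a
# large bonus so they fill first, approximating the sustainability floor.
slot_owner, slot_value = [], []
for b in range(n_businesses):
    for s in range(cap[b]):
        slot_owner.append(b)
        slot_value.append(bonus if s < floor[b] else 0.0)
slot_owner = np.array(slot_owner)
slot_value = np.array(slot_value)

# Reward of assigning customer c to a slot owned by business b.
reward = relevance[:, slot_owner] + slot_value[None, :]

# The Hungarian algorithm maximizes total reward (minimizes its negation).
rows, cols = linear_sum_assignment(-reward)
assignment = slot_owner[cols]

counts = np.bincount(assignment, minlength=n_businesses)
print("footfall per business:", counts)  # stays within [floor, cap] here
print("mean relevance:", relevance[rows, assignment].mean().round(3))
```

Raising `bonus` tightens the sustainability guarantee at the cost of utility, the kind of three-way control the abstract describes.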
- Global and Local Differential Privacy for Collaborative Bandits
by Huazheng Wang (University of Virginia), Qian Zhao (Bloomberg L.P.), Qingyun Wu (University of Virginia), Shubham Chopra (Bloomberg L.P.), Abhinav Khaitan (Bloomberg L.P.), Hongning Wang (University of Virginia)
Collaborative bandit learning has become an emerging focus for personalized recommendation: it leverages dependence among users for joint model estimation and recommendation. Because such online learning solutions learn directly from user feedback, e.g., result clicks, they bring new challenges for privacy protection. Despite recent studies on privacy in contextual bandit algorithms, how to efficiently protect user privacy in a collaborative bandit learning environment remains unknown. In this paper, we develop a general solution framework that achieves differential privacy in collaborative bandit algorithms, under the notions of both global and local differential privacy. The key idea is to inject noise into a bandit model’s sufficient statistics (on the server side to achieve global differential privacy, or on the client side to achieve local differential privacy) and to calibrate the noise scale with respect to the structure of collaboration among users. We study two popular collaborative bandit algorithms to illustrate the application of our solution framework. Theoretical analysis proves that our derived private algorithms reduce the added regret caused by the privacy-preserving mechanism compared to their linear bandit counterparts, i.e., collaboration actually helps to achieve stronger privacy for the same amount of injected noise. We also empirically evaluate the algorithms on both synthetic and real-world datasets to demonstrate the trade-off between privacy and utility.
Full text in ACM Digital Library
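The key idea, injecting calibrated noise into a bandit model's sufficient statistics, can be illustrated with a LinUCB-style learner. The sketch below adds Gaussian noise to the statistics (A, b) at update time as a stand-in for a properly calibrated differentially private mechanism; the class, its parameters, and the noise scale `sigma` are all assumptions, not the paper's derived algorithms.

```python
import numpy as np

rng = np.random.default_rng(2)

class PrivateLinearBandit:
    """LinUCB-style learner whose sufficient statistics are perturbed with
    Gaussian noise, illustrating privatizing (A, b) rather than raw feedback."""

    def __init__(self, d, sigma=0.1, lam=1.0, alpha=1.0):
        self.d = d
        self.A = lam * np.eye(d)   # running sum of x x^T (ridge-regularized)
        self.b = np.zeros(d)       # running sum of r * x
        self.sigma, self.alpha = sigma, alpha

    def update(self, x, r):
        # Local-DP flavor: noise is added before the statistics leave the
        # client, so the server only ever sees perturbed updates.
        N = rng.normal(0, self.sigma, (self.d, self.d))
        self.A += np.outer(x, x) + (N + N.T) / 2  # keep the noise symmetric
        self.b += r * x + rng.normal(0, self.sigma, self.d)

    def choose(self, arms):
        theta = np.linalg.solve(self.A, self.b)
        A_inv = np.linalg.inv(self.A)
        ucb = [x @ theta + self.alpha * np.sqrt(max(x @ A_inv @ x, 0.0))
               for x in arms]
        return int(np.argmax(ucb))

# Simulated interactions: hidden preference vector, noisy linear rewards.
d = 5
theta_star = rng.normal(size=d)
bandit = PrivateLinearBandit(d)
for _ in range(500):
    arms = rng.normal(size=(10, d))
    a = bandit.choose(arms)
    bandit.update(arms[a], arms[a] @ theta_star + rng.normal(0, 0.1))

print("private estimate:", np.round(np.linalg.solve(bandit.A, bandit.b), 2))
print("ground truth:    ", np.round(theta_star, 2))
```

The paper's collaborative setting additionally shares statistics across dependent users and calibrates the noise to that collaboration structure, which is what supports its claim of stronger privacy for the same noise budget.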
- Debiasing Item-to-Item Recommendations With Small Annotated Datasets
by Tobias Schnabel (Microsoft), Paul N. Bennett (Microsoft)
Item-to-item recommendation (e.g., “People who like this also like…”) is a ubiquitous and important type of recommendation in real-world systems. Observational data from historical interaction logs abound in these settings. However, since virtually all observational data exhibit biases, such as time-in-inventory or interface biases, it is crucial that recommender algorithms account for these biases. In this paper, we develop a principled approach for item-to-item recommendation based on causal inference and present a practical and highly effective method for estimating the causal parameters from a small annotated dataset. Empirically, we find that our approach substantially improves upon existing methods while requiring only small amounts of annotated data.
Full text in ACM Digital Library
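The inverse-propensity idea behind such debiasing can be sketched in a few lines (the paper's actual estimator fits causal parameters from the annotations; the data, propensity model, and sample sizes below are synthetic assumptions): a small sample with known exposure is used to estimate per-item propensities, which then reweight the full biased log.

```python
import numpy as np

rng = np.random.default_rng(3)
n_items, n_log = 20, 200_000

# Ground truth: probability that a user who liked the seed also likes item j.
true_pref = rng.random(n_items)

# Logging policy: biased exposure, e.g., from time-in-inventory or interface
# position, so some items are shown far more often than others.
exposure = rng.random(n_items) * 0.9 + 0.05

# Biased log: a co-like is recorded only if the item was exposed AND liked.
shown = rng.random((n_log, n_items)) < exposure
liked = rng.random((n_log, n_items)) < true_pref
clicks = shown & liked

# Naive estimate conflates preference with exposure.
naive = clicks.mean(axis=0)

# Small annotated set: for a few thousand sessions exposure is also known,
# so per-item propensities can be estimated empirically from it.
p_hat = shown[:5_000].mean(axis=0).clip(0.01, 1.0)

# Inverse-propensity-scored estimate of the causal preference.
ips = (clicks / p_hat).mean(axis=0)

print("mean abs error, naive:", np.abs(naive - true_pref).mean().round(3))
print("mean abs error, IPS:  ", np.abs(ips - true_pref).mean().round(3))
```

With propensities estimated from only 5,000 annotated sessions, the IPS estimate recovers the underlying preferences far more closely than the naive frequency count, mirroring the abstract's point that small amounts of annotated data suffice.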