Session 9: Privacy, Fairness, Bias

Date: Wednesday 18:00 – 19:30 CET
Chair: Christine Bauer (Utrecht University)

  • Challenges Experienced in Public Service Media Recommendation Systems
    by Andreas Grün (ZDF, Germany) and Xenija Neufeld (Accso – Accelerated Solutions GmbH, Germany)

    After multiple years of successfully applying recommendation algorithms at ZDF, a German Public Service Media provider, we have faced certain challenges with regard to the optimization of our systems and the resulting recommendations. The design and the optimization of our systems are guided by various, partially competing objectives and are therefore influenced by various factors. Similarly to commercial video-on-demand services, ZDF is interested in binding its audience by providing personalized recommendations in its streaming media service. More importantly, however, as a Public Service Media provider, we are committed to offering diverse, universal, unbiased, and transparent recommendations while following established editorial guidelines and strict privacy regulations. Additionally, we are committed to providing environmentally friendly, or green, recommendations by optimizing our systems for run time and power consumption. With the intent to start a public discussion, we describe the challenges that arise when simultaneously optimizing Public Service Media recommendation systems towards machine learning metrics, business Key Performance Indicators, Public Service Media values, and run time, while aiming to keep the results transparent.

    Full text in ACM Digital Library
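
    To make the tension between "partially competing objectives" concrete, here is a minimal sketch of one common way to rank items under several objectives at once, via a weighted scalarization. The objective names and weights are our own illustrative assumptions, not ZDF's production setup:

        # Illustrative scalarization of competing objectives; the weights
        # and objective names are assumptions, not ZDF's actual system.
        def blended_score(item_metrics, weights):
            """Combine per-item objective scores (each normalized to
            [0, 1]) into one ranking score; raising one weight
            necessarily trades off the others."""
            return sum(weights[k] * item_metrics[k] for k in weights)

        candidate = {"accuracy": 0.81, "diversity": 0.40,
                     "editorial_fit": 0.90, "runtime_cost": 0.20}
        weights = {"accuracy": 0.5, "diversity": 0.2,
                   "editorial_fit": 0.2, "runtime_cost": -0.1}  # cost penalized
        print(blended_score(candidate, weights))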

  • Debiased Explainable Pairwise Ranking from Implicit Feedback
    by Khalil Damak (Department of Computer Science and Engineering, University of Louisville, United States), Sami Khenissi (University of Louisville, United States), and Olfa Nasraoui (Department of Computer Engineering and Computer Science, University of Louisville, United States)

    Recent work in recommender systems has emphasized the importance of fairness, with a particular interest in bias and transparency, in addition to predictive accuracy. In this paper, we focus on the state-of-the-art pairwise ranking model, Bayesian Personalized Ranking (BPR), which has previously been found to outperform pointwise models in predictive accuracy while also being able to handle implicit feedback. Specifically, we address two limitations of BPR: (1) BPR is a black-box model that does not explain its outputs, limiting both the user’s trust in the recommendations and the analyst’s ability to scrutinize the model’s outputs; and (2) BPR is vulnerable to exposure bias because the data are Missing Not At Random (MNAR). This exposure bias usually translates into unfairness against the least popular items, because they risk being under-exposed by the recommender system. In this work, we first propose a novel explainable loss function and a corresponding Matrix Factorization-based model, called Explainable Bayesian Personalized Ranking (EBPR), that generates recommendations along with item-based explanations. Then, we theoretically quantify the additional exposure bias resulting from the explainability and use it as a basis to propose an unbiased estimator for the ideal EBPR loss. The result is a ranking model that aptly captures both debiased and explainable user preferences. Finally, we perform an empirical study on three real-world datasets that demonstrates the advantages of our proposed models.

    Full text in ACM Digital Library
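
    For readers unfamiliar with the base model, the following is a minimal sketch of the classic BPR stochastic-gradient update that EBPR builds on. It is illustrative only, with our own parameter names (lr, reg); the paper's explainability weighting and debiased loss estimator are omitted:

        # Minimal sketch of one BPR SGD step (Rendle et al.); EBPR
        # additionally weights triples by item-based explainability
        # scores and corrects for exposure bias.
        import numpy as np

        def bpr_step(U, V, u, i, j, lr=0.05, reg=0.01):
            """One step on a triple: user u prefers observed item i over
            unobserved item j; maximizes ln sigmoid(x_ui - x_uj)."""
            x_uij = U[u] @ (V[i] - V[j])        # pairwise score difference
            g = 1.0 / (1.0 + np.exp(x_uij))     # gradient of ln sigmoid
            du = g * (V[i] - V[j]) - reg * U[u]
            di = g * U[u] - reg * V[i]
            dj = -g * U[u] - reg * V[j]
            U[u] += lr * du
            V[i] += lr * di
            V[j] += lr * dj

        rng = np.random.default_rng(0)
        U = rng.normal(scale=0.1, size=(100, 16))   # user factors
        V = rng.normal(scale=0.1, size=(500, 16))   # item factors
        bpr_step(U, V, u=0, i=3, j=7)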

  • Privacy-Preserving Collaborative Filtering by Distributed Mediation
    by Alon Ben Horin (Mathematics and Computer Science, The Open University, Israel) and Tamir Tassa (Mathematics and Computer Science, The Open University, Israel)

    Recommender systems have become very influential in our everyday decision making, e.g., helping us choose a movie from a content platform, or offering us suitable products on e-commerce websites. While most vendors who utilize recommender systems rely exclusively on training data consisting of past transactions that took place through them, the accuracy of recommendations can be improved if several vendors conjoin their datasets. Alas, such data sharing poses grave privacy concerns for both the vendors and the users. In this study we present secure multi-party protocols that enable several vendors to share their data, in a privacy-preserving manner, in order to allow more accurate Collaborative Filtering (CF). Shmueli and Tassa (RecSys 2017) introduced privacy-preserving CF protocols that rely on a mediator; namely, a third party that assists in performing the computations. They demonstrated the significant advantages of mediation in that context. Here, we take the mediation approach to the next level by using several independent mediators. Such distributed mediation maintains all of the advantages that were identified by Shmueli and Tassa, and offers additional ones in comparison with the single-mediator protocols: stronger security and dramatically shorter runtimes. In addition, while all prior art assumed limited and unrealistic settings, in which each user can purchase any given item through only one vendor, we consider here a general and more realistic setting, which encompasses all previously considered settings, where users can choose between different competing vendors. We demonstrate the appealing performance of our protocols through extensive experimentation.

    Full text in ACM Digital Library
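
    A basic primitive behind multi-mediator protocols of this kind is additive secret sharing: no single mediator learns a vendor's data, yet sums (the kind of aggregate CF needs) can be computed directly on the shares. The sketch below is a generic illustration of that primitive, not the paper's protocol; the modulus Q and the parameter names are our assumptions:

        # Illustrative additive secret sharing across several mediators.
        import secrets

        Q = 2**61 - 1  # arithmetic is done modulo a large prime

        def share(value, n_mediators):
            """Split `value` into n shares that sum to `value` mod Q;
            any n-1 shares are uniformly random and reveal nothing."""
            shares = [secrets.randbelow(Q) for _ in range(n_mediators - 1)]
            shares.append((value - sum(shares)) % Q)
            return shares

        def reconstruct(shares):
            return sum(shares) % Q

        # Two vendors each share a user-item count among 3 mediators;
        # each mediator adds its local shares, yielding shares of the
        # joint aggregate without seeing either vendor's input.
        a, b = 4, 9
        sa, sb = share(a, 3), share(b, 3)
        joint = [(x + y) % Q for x, y in zip(sa, sb)]
        assert reconstruct(joint) == a + b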

  • Stronger Privacy for Federated Collaborative Filtering With Implicit Feedback
    by Lorenzo Minto (Brave Software, United Kingdom), Moritz Haller (Brave Software, United Kingdom), Benjamin Livshits (Brave Software, United Kingdom), and Hamed Haddadi (Brave Software, United Kingdom)

    Recommender systems are commonly trained on centrally collected user interaction data like views or clicks. This practice, however, raises serious privacy concerns regarding the recommender’s collection and handling of potentially sensitive data. Several privacy-aware recommender systems have been proposed in recent literature, but comparatively little attention has been given to systems at the intersection of implicit feedback and privacy. To address this shortcoming, we propose a practical federated recommender system for implicit data under user-level local differential privacy (LDP). The privacy-utility trade-off is controlled by parameters ϵ and k, regulating the per-update privacy budget and the number of ϵ-LDP gradient updates sent by each user, respectively. To further protect the user’s privacy, we introduce a proxy network to reduce the fingerprinting surface by anonymizing and shuffling the reports before forwarding them to the recommender. We empirically demonstrate the effectiveness of our framework on the MovieLens dataset, achieving a Hit Ratio at 10 (HR@10) of up to 0.68 on 50,000 users with 5,000 items. Even on the full dataset, we show that it is possible to achieve reasonable utility, with HR@10 > 0.5, without compromising user privacy.

    Full text in ACM Digital Library
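
    To illustrate what an ϵ-LDP gradient report can look like, here is a sketch using randomized response on the sign of a single, randomly chosen gradient coordinate. This is a generic textbook mechanism, not the authors' exact construction; with k such reports per user, the budgets compose to a total of k·ϵ:

        # Illustrative epsilon-LDP gradient report via randomized
        # response; not the paper's exact mechanism.
        import math, random

        def ldp_sign_report(grad, epsilon):
            """Pick a random coordinate and report its sign, flipped
            with probability 1/(1+e^eps), which makes the report
            epsilon-LDP (the index is chosen independently of the data)."""
            idx = random.randrange(len(grad))
            sign = 1 if grad[idx] >= 0 else -1
            p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
            if random.random() > p_keep:
                sign = -sign                    # randomized-response flip
            return idx, sign                    # e.g. sent via a proxy

        # Server side: E[report] = (2*p_keep - 1) * true_sign, so the
        # aggregated signs are debiased by dividing by (2*p_keep - 1).
        print(ldp_sign_report([0.3, -1.2, 0.7], epsilon=1.0))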
