Session: Privacy & Security

Chair: Bamshad Mobasher
Date: Saturday, October 24, 13:50-15:30

  • Effective diverse and obfuscated attacks on model-based recommender systems

    by Zunping Cheng, Neil Hurley

Robustness analysis research has shown that conventional memory-based recommender systems are very susceptible to malicious profile-injection attacks. A number of attack models have been proposed and studied, and recent work has suggested that model-based collaborative filtering (CF) algorithms have greater robustness against these attacks. Moreover, to combat such attacks, several attack detection algorithms have been proposed. One that has shown high detection accuracy is based on using principal component analysis (PCA) to cluster attack profiles, on the basis that such profiles are highly correlated. In this paper, we argue that the robustness observed in model-based algorithms is due to the fact that the proposed attacks have not targeted the specific vulnerabilities of these algorithms. We discuss how an effective attack targeting model-based algorithms that employ profile clustering can be designed. It transpires that the attack profiles employed in this attack exhibit low rather than high pairwise similarities and can easily be obfuscated to avoid PCA-based detection, while remaining effective.

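The detection premise this paper attacks can be illustrated directly: profiles generated by a standard attack model tend to be highly inter-correlated, which is what PCA-based clustering exploits, while obfuscated profiles need not be. A minimal pure-Python sketch (the rating vectors are invented for illustration, not taken from the paper):

```python
from math import sqrt

def pearson(u, v):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sqrt(sum((a - mu) ** 2 for a in u))
    sv = sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

# Two "average attack"-style profiles: the target item (index 0) is
# pushed to 5 and the filler items sit near the item averages, so the
# two vectors end up nearly identical -- easy for PCA to cluster.
standard_a = [5, 3, 3, 4, 2, 3]
standard_b = [5, 3, 4, 4, 2, 3]

# Two obfuscated profiles: the same target rating, but filler ratings
# chosen to decorrelate the profiles from one another.
obfusc_a = [5, 1, 4, 2, 5, 3]
obfusc_b = [5, 4, 2, 5, 1, 3]

print(round(pearson(standard_a, standard_b), 2))  # high correlation
print(round(pearson(obfusc_a, obfusc_b), 2))      # low / negative
```

Profiles with low pairwise correlation do not cluster together under a correlation-based detector, which is the loophole the paper's attack exploits.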

  • Statistical attack detection

    by Neil Hurley, Zunping Cheng, Mi Zhang

It has been shown in recent years that effective profile injection or shilling attacks can be mounted on standard recommendation algorithms. These attacks consist of the insertion of bogus user profiles into the system database in order to manipulate the recommendation output, for example to promote or demote the predicted ratings for a particular product. A number of attack models have been proposed and some detection strategies to identify these attacks have been empirically evaluated. In this paper we show that the standard attack models can be readily detected using statistical detection techniques. We argue that past research has given insufficient consideration to the effectiveness of attacks under a constraint of statistical invariance. In fact, it is possible to create effective attacks that are undetectable using the detection strategies proposed to date, including the PCA-based clustering strategy which has shown excellent performance against standard attacks. Nevertheless, these more advanced attacks can also be detected with careful design of a statistical detector. The question posed for future research is whether attack models that produce effective attack profiles that are statistically identical to genuine profiles are really possible.

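A minimal sketch of the kind of statistical detection the abstract refers to, here reduced to a single z-test on per-profile mean ratings; the profiles, statistic, and threshold are illustrative assumptions, not the authors' detector:

```python
from statistics import mean, pstdev

def flag_outliers(profiles, z_thresh=2.0):
    """Flag profiles whose mean rating deviates from the population
    mean-of-means by more than z_thresh standard deviations."""
    means = [mean(p) for p in profiles]
    mu, sigma = mean(means), pstdev(means)
    return [i for i, m in enumerate(means)
            if sigma > 0 and abs(m - mu) / sigma > z_thresh]

genuine = [[3, 4, 2, 5, 3], [2, 3, 4, 3, 3], [4, 3, 3, 2, 4],
           [3, 2, 4, 4, 3], [2, 4, 3, 3, 2], [4, 4, 2, 3, 3]]
# A crude push attack: target item and all fillers rated 5, giving the
# profile an anomalously high mean that a z-test picks up.
attack = [[5, 5, 5, 5, 5]]

print(flag_outliers(genuine))           # no genuine profile is flagged
print(flag_outliers(genuine + attack))  # the injected profile stands out
```

A statistically-invariant attack, as discussed in the paper, would craft profiles whose summary statistics match the genuine population, defeating exactly this kind of test.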

  • Preserving privacy in collaborative filtering through distributed aggregation of offline profiles

    by Reza Shokri, Pedram Pedarsani, George Theodorakopoulos, Jean-Pierre Hubaux

In recommender systems, a central server usually needs access to users’ profiles in order to generate useful recommendations. This access, however, undermines the users’ privacy.

The more information about the user-item relations is revealed to the server, the lower the users’ privacy. Yet, hiding parts of the profiles to increase privacy comes at a cost in recommendation accuracy or implementation complexity. In this paper, we propose a distributed mechanism for users to augment their profiles in a way that obfuscates the user-item connections to an untrusted server, with minimal loss in the accuracy of the recommender system. We rely on the central server to generate the recommendations. However, each user stores his profile offline, modifies it by partly merging it with the profiles of similar users through direct contact with them, and only then periodically uploads his profile to the server. We propose a metric to measure privacy at the system level, using graph matching concepts. Applying our method to the Netflix prize dataset, we show the effectiveness of the algorithm in solving the tradeoff between privacy and accuracy in recommender systems in a practical way.

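The offline merge step might be sketched as follows. This is only an illustration of the idea of partly merging a similar peer's ratings before upload; the actual mechanism involves direct contact between similar users and periodic uploads, and `merge_frac` is an assumed parameter, not one from the paper:

```python
import random

def obfuscate(own, peer, merge_frac=0.3, rng=None):
    """Return a copy of `own` (a dict item -> rating) augmented with a
    random fraction of a peer's ratings for items the user has not
    rated, so the server cannot tell which user-item links are genuine."""
    rng = rng or random.Random()
    merged = dict(own)
    candidates = [i for i in peer if i not in own]
    for item in rng.sample(candidates, round(merge_frac * len(candidates))):
        merged[item] = peer[item]
    return merged

alice = {"item1": 4, "item2": 5}
bob = {"item2": 3, "item3": 2, "item4": 5, "item5": 4}
print(obfuscate(alice, bob, rng=random.Random(0)))
```

The user's own ratings are kept intact, so a neighborhood-based recommender still sees the genuine signal; only the ownership of the borrowed ratings is hidden from the server.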

  • Manipulation-resistant collaborative filtering systems

    by Benjamin Van Roy, Xiang Yan

    A collaborative filtering system recommends to users products that similar users like. Collaborative filtering systems influence purchase decisions, and hence have become targets of manipulation by unscrupulous vendors. We provide theoretical and empirical results demonstrating that while common nearest neighbor algorithms, which are widely used in commercial systems, can be highly susceptible to manipulation, a class of collaborative filtering algorithms which we refer to as linear is relatively robust. These results provide guidance for the design of future collaborative filtering systems.

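A toy comparison of the two regimes: a k-nearest-neighbor predictor can be hijacked by a handful of shilling profiles crafted to mimic the target user, whereas a predictor in which every rater contributes with equal, bounded weight shifts far less. Note that the paper's "linear" class is defined formally; `linear_predict` below is only a global-average stand-in used to illustrate the intuition, and all profiles are invented:

```python
def knn_predict(target_user, users, item, k=2):
    """Predict a rating as the mean rating of the k users closest to
    the target (distance = mean absolute difference on shared items)."""
    def dist(u, v):
        common = [i for i in u if i in v and i != item]
        return sum(abs(u[i] - v[i]) for i in common) / max(len(common), 1)
    rated = [u for u in users if item in u]
    rated.sort(key=lambda u: dist(target_user, u))
    return sum(u[item] for u in rated[:k]) / k

def linear_predict(users, item):
    """Stand-in for a manipulation-resistant predictor: every rater
    contributes equally, so no small group can dominate."""
    ratings = [u[item] for u in users if item in u]
    return sum(ratings) / len(ratings)

target = {"a": 4, "b": 3}
genuine = [{"a": 4, "b": 2, "t": 2}, {"a": 2, "b": 5, "t": 3},
           {"a": 3, "b": 2, "t": 2}, {"a": 5, "b": 4, "t": 2},
           {"a": 2, "b": 2, "t": 3}, {"a": 3, "b": 4, "t": 2},
           {"a": 5, "b": 3, "t": 3}, {"a": 1, "b": 4, "t": 2}]
# Shilling profiles that copy the target user's ratings exactly and
# push item "t": they become the target's nearest neighbors.
attack = [{"a": 4, "b": 3, "t": 5}, {"a": 4, "b": 3, "t": 5}]

print(knn_predict(target, genuine, "t"))            # before the attack
print(knn_predict(target, genuine + attack, "t"))   # hijacked by shills
print(linear_predict(genuine, "t"))                 # before the attack
print(linear_predict(genuine + attack, "t"))        # diluted influence
```

With only two injected profiles, the kNN prediction jumps to the pushed rating while the equal-weight average barely moves, which is the qualitative contrast the paper establishes theoretically and empirically.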
