Paper Session 5: Trust and Reliability

Date: Sunday, Sept 18, 2016, 09:40-10:40
Location: Kresge Auditorium
Chair: Dietmar Jannach

  • [LP] Mechanism Design for Personalized Recommender Systems
    by Qingpeng Cai, Aris Filos-Ratsikas, Chang Liu, Pingzhong Tang

    Strategic behaviour by sellers on e-commerce websites, such as faking transactions and manipulating recommendation scores through artificial reviews, has been among the most notorious obstacles preventing websites from maximizing the efficiency of their recommendations. Previous approaches have focused almost exclusively on machine-learning techniques to detect and penalize such behaviour. In this paper, we tackle the problem from a different perspective, using the approach of the field of mechanism design. We put forward a game model tailored to the setting at hand and aim to construct truthful mechanisms, i.e. mechanisms that provide no incentive for dishonest reputation-augmenting actions, while guaranteeing good recommendations in the worst case. For the setting with two agents, we propose a truthful mechanism that is optimal in terms of social efficiency. For the general case of m agents, we prove both lower and upper bounds on the efficiency of truthful mechanisms, and we propose truthful mechanisms that yield significantly better results than an existing mechanism from a leading e-commerce site, evaluated on real data.

    Full text in ACM Digital Library
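
    The truthfulness notion at the heart of this paper can be illustrated with the classic second-price rule (a toy sketch only, not the authors' mechanism): if a single recommendation slot goes to the highest reported value but is charged at the second-highest, no agent can gain by misreporting.

    ```python
    # Toy illustration of a truthful (strategyproof) rule: award one
    # recommendation slot to the highest report, charge the second-highest.
    # All names here are hypothetical; this is the textbook second-price
    # idea, not the mechanism proposed in the paper.

    def second_price_winner(bids):
        """Return (winner_index, price) under the second-price rule."""
        ranked = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
        winner = ranked[0]
        price = bids[ranked[1]] if len(bids) > 1 else 0
        return winner, price

    def utility(true_value, bids, agent):
        """Agent's payoff: true value minus price if it wins, else zero."""
        winner, price = second_price_winner(bids)
        return true_value - price if winner == agent else 0

    # Agent 0's true value is 10 and the rival reports 8. Overstating (15)
    # or understating (6) never beats reporting truthfully.
    truthful    = utility(10, [10, 8], 0)  # wins at price 8 -> utility 2
    overstated  = utility(10, [15, 8], 0)  # still wins at price 8 -> utility 2
    understated = utility(10, [6, 8], 0)   # loses the slot -> utility 0
    ```

    The price is independent of the winner's own report, which is exactly why misreporting cannot help; the paper's contribution is constructing rules with this property that also bound worst-case recommendation efficiency.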

  • [LP] Mood-Sensitive Truth Discovery For Reliable Recommendation Systems in Social Sensing
    by Jermaine Marshall, Dong Wang

    This work is motivated by the need to provide reliable information recommendation to users in social sensing. Social sensing has become an emerging application paradigm that uses humans as sensors to observe and report events in the physical world. These human-sensed observations are often viewed as binary claims (either true or false). A fundamental challenge in social sensing is how to ascertain the credibility of claims and the reliability of sources without knowing either of them a priori. We refer to this challenge as truth discovery. While prior work has made progress on this challenge, an important limitation remains: the mood sensitivity aspect of the problem has not been explored. As a result, the claims identified as correct by current solutions can be heavily biased by the mood of the human sources, leading to useless or even misleading recommendations. In this paper, we present a new analytical model that explicitly incorporates mood sensitivity into the solution of the truth discovery problem. The model solves a multi-dimensional estimation problem to jointly estimate the correctness and mood neutrality of claims as well as the reliability and mood sensitivity of sources. We compare our model with state-of-the-art truth discovery solutions on four real-world datasets collected from Twitter during recent disaster and emergency events in 2015 and 2016: the Brussels bombing, the Paris attacks, the Oregon shooting, and the Baltimore riots. The results show that our model significantly improves over the compared baselines by finding more correct and mood-neutral claims.

    Full text in ACM Digital Library
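
    The joint estimation style of truth discovery can be sketched with a simplified alternating iteration (illustrative only; the paper's model additionally estimates mood neutrality of claims and mood sensitivity of sources):

    ```python
    # Simplified truth-discovery iteration: alternately estimate how likely
    # each claim is true and how reliable each source is, each in terms of
    # the other. A hypothetical minimal sketch, not the paper's full model.

    def truth_discovery(reports, n_iters=20):
        """reports: dict mapping source -> {claim: 0/1 vote}.
        Returns (claim truth scores, source reliabilities)."""
        sources = list(reports)
        claims = {c for votes in reports.values() for c in votes}
        reliability = {s: 0.8 for s in sources}  # uninformed initial guess
        truth = {}
        for _ in range(n_iters):
            # A claim's truth score is the reliability-weighted vote in its favor.
            for c in claims:
                num = sum(reliability[s] for s in sources if reports[s].get(c) == 1)
                den = sum(reliability[s] for s in sources if c in reports[s])
                truth[c] = num / den if den else 0.5
            # A source's reliability is its average agreement with current truth.
            for s in sources:
                votes = reports[s]
                if votes:
                    reliability[s] = sum(
                        truth[c] if v == 1 else 1 - truth[c]
                        for c, v in votes.items()
                    ) / len(votes)
        return truth, reliability

    # Two sources agree, one contradicts them: the iteration pushes the
    # majority claims toward "true" and the dissenting source's reliability down.
    reports = {"s1": {"A": 1, "B": 1}, "s2": {"A": 1, "B": 1}, "s3": {"A": 0, "B": 0}}
    truth, reliability = truth_discovery(reports)
    ```

    The paper's multi-dimensional model extends this kind of coupled estimation with per-claim mood neutrality and per-source mood sensitivity, so a source's agreement with emotionally charged claims is discounted.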

  • [LP] Crowd-Based Personalized Natural Language Explanations for Recommendations
    by Shuo Chang, F. Maxwell Harper, Loren Gilbert Terveen

    Explanations are important for helping users decide whether to accept recommendations. However, algorithm-generated explanations can be overly simplistic and unconvincing. We believe that humans can overcome these limitations. Inspired by how people explain word-of-mouth recommendations, we designed a process, combining crowdsourcing and computation, that generates personalized natural language explanations. We modeled key topical aspects of movies, asked crowdworkers to write explanations based on quotes from online movie reviews, and personalized the explanations presented to users based on their rating history. We evaluated the explanations by surveying 220 MovieLens users, finding that compared to personalized tag-based explanations, natural language explanations: 1) contain a more appropriate amount of information, 2) earn more trust from users, and 3) make users more satisfied. This paper contributes to the research literature by describing a scalable process for generating high-quality, personalized natural language explanations, improving on state-of-the-art content-based explanations, and showing the feasibility and advantages of approaches that combine human wisdom with algorithmic processes.

    Full text in ACM Digital Library
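
    The personalization step described above can be sketched as matching each crowd-written explanation's topical aspects against the aspects a user has favored (aspect names and weights below are hypothetical; the paper derives them from movie models and rating histories):

    ```python
    # Minimal sketch of personalizing crowd-written explanations: score each
    # candidate by how well its topical aspects match the user's interests,
    # then show the best-matching one. Illustrative only.

    def pick_explanation(user_aspect_weights, candidates):
        """candidates: list of (explanation_text, {aspect: weight}).
        Returns the explanation with the highest dot-product match."""
        def score(aspects):
            return sum(user_aspect_weights.get(a, 0.0) * w
                       for a, w in aspects.items())
        return max(candidates, key=lambda c: score(c[1]))[0]

    # A user who rates acting-driven films highly gets the acting-focused quote.
    user = {"acting": 0.9, "visuals": 0.1}
    candidates = [
        ("Praised for its stunning cinematography.", {"visuals": 1.0}),
        ("Critics loved the lead performances.", {"acting": 1.0}),
    ]
    chosen = pick_explanation(user, candidates)  # -> the "lead performances" quote
    ```

    The crowdsourcing stage supplies a pool of such candidates per movie; personalization then reduces to selecting among them, which keeps the process scalable.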


Platinum Supporters
Netflix
Quora

Gold Supporters
Amazon