Monday Poster & Coffee Break Session

Date: Monday 15:30 – 16:00 CET
Chair: To be announced

  • [LBR] Eigenvalue Perturbation for Item-based Recommender Systems
    by Cesare Bernardis (Politecnico di Milano, Italy) and Paolo Cremonesi (Politecnico di Milano, Italy)

    Adding confidence estimates to predicted ratings has been shown to positively influence the quality of the recommendations provided by a recommender system. While confidence over single point predictions of ratings and preferences has been widely studied in the literature, limited effort has been put into exploring the benefits of user-level confidence indices. In this work we exploit a recently introduced user-level confidence index, called the eigenvalue confidence index, to provide maximum-confidence recommendations for item-based recommender systems. We first derive a closed-form solution for calculating the index, then propose a new recommendation methodology for item-based models, called eigenvalue perturbation, founded on the strong positive correlation between the index value and the accuracy of the recommendations. We report and discuss the accuracy results obtained with a comprehensive set of experiments over several datasets and different item-based models, empirically showing that the new technique outperforms the original recommendation models in most experimental configurations. (A hedged code sketch of the general idea appears after this session listing.)

    Full text in ACM Digital Library

  • [LBR] Quality Metrics in Recommender Systems: Do We Calculate Metrics Consistently?
    by Yan-Martin Tamm (Sber AI Lab, Russian Federation), Rinchin Damdinov (Sber AI Lab, Russian Federation), and Alexey Vasilev (Sber AI Lab, Russian Federation)

    Offline evaluation is a popular approach to determining the best algorithm in terms of a chosen quality metric. However, if the chosen metric calculates something unexpected, this miscommunication can lead to poor decisions and wrong conclusions. In this paper, we thoroughly investigate the quality metrics used for recommender system evaluation. We look at the practical aspect of implementations found in modern RecSys libraries and at the theoretical aspect of definitions in academic papers. We find that Precision is the only metric universally understood across papers and libraries, while other metrics may have different interpretations. Metrics implemented in different libraries sometimes share the same name but measure different things, which leads to different results given the same input. When defining metrics in an academic paper, authors sometimes omit explicit formulations or give references that do not contain explanations either. In 47% of cases, we cannot easily determine how a metric is defined because the definition is unclear or absent. These findings highlight yet another difficulty in recommender system evaluation and call for more detailed descriptions of evaluation protocols. (A small worked example of such a naming collision appears after this session listing.)

    Full text in ACM Digital Library

  • [DS] Argument-based generation and explanation of recommendations
    by Andrés Segura-Tinoco (Departamento de Ingeniería Informática, Universidad Autónoma de Madrid, Spain)

    In the recommender systems literature, it has been shown that, in addition to improving system effectiveness, explaining recommendations may increase user satisfaction, trust, persuasion and loyalty. In general, explanations focus on the filtering algorithms or on the users and items involved in generating the recommendations. However, in certain domains that are rich in user-generated textual content, it would be valuable to justify recommendations with arguments that are explicit in, underlying, or related to the data used by the systems, e.g., the reasons behind customers' opinions in reviews on e-commerce sites, or the requests and claims in citizens' proposals and debates on e-participation platforms. In this context, automatically extracting and exploiting the arguments given for and against evaluated items is both a need and a challenging task. We thus advocate focusing not only on user preferences and item features, but also on the associated arguments. In other words, we propose to consider not only what is said about items, but also why it is said. Hence, arguments would not only be part of the recommendation explanations, but could also be used by the recommendation algorithms themselves. To this end, in this thesis we propose to use argument mining techniques and tools that retrieve and relate argumentative information from textual content, and to investigate recommendation methods that exploit that information before, during and after their filtering processes. (A hedged toy sketch of this idea appears after this session listing.)

    Full text in ACM Digital Library
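
Illustration for "Eigenvalue Perturbation for Item-based Recommender Systems": the minimal Python sketch below is not the authors' closed-form index or their perturbation method; it only illustrates the shared ingredients, namely an item-item similarity matrix, its dominant eigenpair, and a per-user score derived from it. The confidence formula and the blending rule are illustrative stand-ins.

    # Hedged sketch: NOT the paper's closed-form eigenvalue confidence index.
    import numpy as np

    rng = np.random.default_rng(0)
    R = (rng.random((50, 20)) > 0.8).astype(float)   # toy user-item matrix

    # Item-item cosine similarity with zeroed diagonal, as in item-based KNN
    norms = np.linalg.norm(R, axis=0, keepdims=True) + 1e-12
    S = (R / norms).T @ (R / norms)
    np.fill_diagonal(S, 0.0)

    # Dominant eigenpair of the symmetric similarity matrix
    eigvals, eigvecs = np.linalg.eigh(S)
    lam, v = eigvals[-1], eigvecs[:, -1]

    # Illustrative user-level "confidence": alignment of each user profile
    # with the dominant eigenvector (a stand-in, not the paper's index)
    confidence = np.abs(R @ v) / (np.linalg.norm(R, axis=1) + 1e-12)

    # Illustrative "perturbation": blend plain item-based scores with the
    # rank-1 dominant component, weighted per user by the confidence value
    scores = R @ S
    perturbed = R @ (lam * np.outer(v, v))
    blended = (1 - confidence[:, None]) * scores + confidence[:, None] * perturbed
    print(blended.shape)   # (50, 20) matrix of recommendation scores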
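
Illustration for "Quality Metrics in Recommender Systems": the sketch below shows, in plain Python, how two implementations that both call themselves Recall@k can disagree on the same input. The two denominators are variants that occur in practice; they are not the specific library implementations surveyed in the paper.

    # Two "Recall@k" variants that differ only in the denominator
    def recall_at_k(recommended, relevant, k):
        """Textbook definition: hits divided by |relevant|."""
        hits = len(set(recommended[:k]) & relevant)
        return hits / len(relevant)

    def truncated_recall_at_k(recommended, relevant, k):
        """Variant seen in some implementations: hits / min(k, |relevant|)."""
        hits = len(set(recommended[:k]) & relevant)
        return hits / min(k, len(relevant))

    recommended = ["a", "b", "c"]          # top-3 list for one user
    relevant = {"a", "c", "d", "e", "f"}   # 5 ground-truth items
    print(recall_at_k(recommended, relevant, 3))            # 2/5 = 0.4
    print(truncated_recall_at_k(recommended, relevant, 3))  # 2/3 = 0.667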
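
Illustration for "Argument-based generation and explanation of recommendations": the toy Python sketch below shows how mined arguments could be used both during filtering (to adjust scores) and after it (to justify the result). extract_arguments is a hypothetical keyword-based stand-in for a real argument mining tool, and all scores and weights are made up.

    # Hypothetical argument-aware re-ranking and explanation
    PRO = {"durable", "fast", "comfortable"}     # toy pro-argument lexicon
    CON = {"fragile", "slow", "noisy"}           # toy con-argument lexicon

    def extract_arguments(review):
        """Hypothetical extractor: pro/con argument words found in a review."""
        words = set(review.lower().split())
        return words & PRO, words & CON

    reviews = {
        "item1": ["Really durable and fast", "A bit noisy at night"],
        "item2": ["Feels fragile", "Slow to start and noisy"],
    }
    base_scores = {"item1": 3.8, "item2": 3.9}   # toy filtering output

    ranked = []
    for item, texts in reviews.items():
        pros, cons = set(), set()
        for text in texts:
            p, c = extract_arguments(text)
            pros |= p
            cons |= c
        # Arguments adjust the score (used *during* filtering) ...
        score = base_scores[item] + 0.2 * len(pros) - 0.2 * len(cons)
        # ... and justify it (used *after* filtering, as an explanation)
        explanation = f"pros: {sorted(pros)}; cons: {sorted(cons)}"
        ranked.append((score, item, explanation))

    for score, item, expl in sorted(ranked, reverse=True):
        print(f"{item} ({score:.1f}) because {expl}")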
