Doctoral Symposium – Algorithms & Explanations III

Date: Monday October 14
Time: 16:45-18:15
Location: Room M

  • Evaluating the Pros and Cons of Recommender Systems Explanations
    by Kathrin Wardatzky (University of Zurich)

    Despite the growing interest in explainable AI in the RecSys community, the evaluation of explanations is still an open research topic. Typically, explanations are evaluated using offline metrics, with a case study, or through a user study. In my research, I take a closer look at the evaluation of the effects of explanations on users. I investigate two possible factors that can impact the effects reported in recent publications, namely the explanation design and content as well as the users themselves. I further address the problem of determining promising explanations for an application scenario from a seemingly endless pool of options. Lastly, I propose a user study to close some of the research gaps identified in the surveys and to investigate how recommender systems explanations impact the understanding of users with different backgrounds.

  • CEERS: Counterfactual Evaluations of Explanations in Recommender Systems
    by Mikhail Baklanov (Tel Aviv University)

    The increasing focus on explainability within ethical AI, mandated by frameworks such as GDPR, highlights the critical need for robust explanation mechanisms in Recommender Systems (RS). A fundamental aspect of advancing such methods involves developing reproducible and quantifiable evaluation metrics. Traditional evaluation approaches involving human subjects are inherently non-reproducible, costly, subjective, and context-dependent. Furthermore, the complexity of AI models often transcends human comprehension capabilities, rendering it challenging for evaluators to ascertain the accuracy of explanations. Consequently, there is an urgent need for objective and scalable metrics that can accurately assess explanation methods in RS. Drawing inspiration from established practices in computer vision, this research introduces a counterfactual methodology to evaluate the accuracy of explanations in RS.

    Although counterfactual methods are well recognized in other fields, they remain relatively unexplored within the domain of recommender systems. This study aims to establish quantifiable metrics that objectively evaluate the correctness of local explanations, adapting counterfactual methods to recommender systems and thereby enabling a simple, reproducible approach to assessing how faithful an explanation is.
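    The core idea of a counterfactual evaluation can be sketched briefly. Assuming an explanation points to the user-history items claimed to have caused a recommendation, one can delete those items and re-run the recommender: if the explained item disappears from the top-k list, the explanation was counterfactually valid. The function names and the toy recommender below are illustrative assumptions, not the abstract's actual implementation:

    ```python
    # Hypothetical sketch of a counterfactual fidelity metric for RS explanations.
    # Assumes a recommender exposed as recommend(history, k) -> ranked item ids,
    # and explanations given as the history items claimed to cause the top item.

    def counterfactual_fidelity(recommend, cases, k=10):
        """Fraction of cases where deleting the explanation's items from the
        user history removes the explained item from the new top-k list."""
        hits = 0
        for history, explained_item, explanation_items in cases:
            reduced = [i for i in history if i not in set(explanation_items)]
            new_topk = recommend(reduced, k)
            if explained_item not in new_topk:
                hits += 1  # removal changed the output: explanation was valid
        return hits / len(cases)

    # Toy recommender for illustration only: recommends the most recent items.
    def toy_recommend(history, k):
        return history[-k:]

    cases = [
        (["a", "b", "c"], "c", ["c"]),  # removing "c" drops it from the top-k
        (["a", "b", "c"], "c", ["a"]),  # removing "a" leaves "c" recommended
    ]
    print(counterfactual_fidelity(toy_recommend, cases, k=3))  # 0.5
    ```

    Because the metric only queries the recommender, it needs no human raters, which is what makes it reproducible and scalable in the sense described above.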

  • Learning Personalized Health Recommendations via Offline Reinforcement Learning
    by Larry Preuett (University of Washington)

    The healthcare industry is strained and would benefit from personalized treatment plans for treating various health conditions (e.g., HIV and diabetes). Reinforcement Learning is a promising approach to learning such sequential recommendation systems. However, applying reinforcement learning in the medical domain is challenging due to the lack of adequate evaluation metrics, partial observability, and the inability to explore due to safety concerns. In this line of work, we identify three research directions to improve the applicability of treatment plans learned using offline reinforcement learning.

  • Towards Symbiotic Recommendations: Leveraging LLMs for Conversational Recommendation Systems
    by Alessandro Petruzzelli (University of Bari Aldo Moro)

    Traditional recommender systems (RSs) generate suggestions by relying on user preferences and item characteristics. However, they do not properly involve the user in the decision-making process. This gap is particularly evident in Conversational Recommender Systems (CRSs), where existing methods struggle to facilitate meaningful dialogue and dynamic user interactions.

    To address this limitation, in my Ph.D. project I will build on the principles of Symbiotic AI (SAI) to propose a novel approach to CRSs. Rather than treating users as passive recipients, this approach aims to engage them in an adaptive dialogue based on their preferences, previous interactions, and personal characteristics, thus fostering collaborative decision-making. To achieve this objective, my research unfolds in three phases. First, I will adapt Large Language Models (LLMs) to effectively handle recommendation tasks in several different domains, also introducing knowledge injection techniques. Second, I will develop a CRS that not only provides accurate recommendations but also offers natural language explanations and responds to user queries, thereby promoting transparency and building user trust. Finally, I will consider users’ personal characteristics to personalize the CRS’s response strategy, ensuring adaptive and effective communication in line with SAI principles.
