Title
A Collectivist Vision of AI: Collaborative Learning, Statistical Incentives, and Social Welfare
Abstract
Artificial intelligence (AI) research has focused on a paradigm in which intelligence inheres in a single, autonomous agent, with social issues treated as entirely secondary. When AI systems are deployed in social contexts, however, the overall design of such systems is often naive: a centralized entity provides services to passive agents and reaps the rewards. Such a design need not be the dominant paradigm for information technology. In a broader framing, agents are active and cooperative, and they wish to obtain value from their participation in learning-based systems. Agents may supply data and other resources to the system, but only if it is in their interest to do so. Critically, intelligence inheres as much in the overall system as it does in the individual agents, be they humans or computers. This perspective is familiar in the social sciences, and a key theme in my work is bringing economics into contact with foundational issues in the computing and data sciences. I’ll emphasize some of the design challenges that arise at this tripartite interface.
About the speaker
Michael I. Jordan is a researcher at Inria Paris and Professor Emeritus at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive, biological, and social sciences. Prof. Jordan is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences, and a Foreign Member of the Royal Society. He was the inaugural winner of the World Laureates Association (WLA) Prize in 2022 and a Plenary Lecturer at the International Congress of Mathematicians in 2018. He has received the Ulf Grenander Prize from the American Mathematical Society, the IEEE John von Neumann Medal, the IJCAI Research Excellence Award, the David E. Rumelhart Prize, and the ACM/AAAI Allen Newell Award. In 2016, Prof. Jordan was named the “most influential computer scientist” worldwide in an article in Science, based on rankings from the Semantic Scholar search engine.
Title
The Power of AI in Recommender and Search Systems: An Industry Perspective Through the Lens of Spotify
Abstract
This talk explores the transformative impact of AI on recommender systems through the lens of Spotify. With a vast audio catalog, Spotify leverages advanced AI-driven search and recommendation systems to enhance the user experience. These systems not only guide users through millions of tracks, podcasts, and audiobooks but also foster content discovery. By precisely modeling user preferences and employing state-of-the-art AI techniques such as machine learning and generative AI, Spotify creates highly personalized recommendations. This approach fulfills users’ immediate needs while also introducing them to new and relevant content, keeping them engaged and enriching their listening experience. The presentation draws on collective research from Spotify’s ongoing innovations in AI-powered recommendations.
About the speaker
Mounia is a Senior Director of Research and Head of Tech Research in Personalization at Spotify. She also holds honorary positions at University College London and the University of Amsterdam. Previously, she served as Director of Research at Yahoo, where she focused on advertising quality and user engagement, and held academic roles at the University of Glasgow and Queen Mary, University of London. A prominent figure in the research community, Mounia regularly serves on senior program committees for major conferences such as WSDM, KDD, and SIGIR. She has co-chaired SIGIR 2015, WWW 2018, WSDM 2020, and CIKM 2023, and has authored over 250 papers. Mounia has also been nominated for the VentureBeat Women in AI Awards for Research in both 2022 and 2023.
Title
Toward Human-Centered Explainable AI
Abstract
Artificial intelligence systems are increasingly involved in high-stakes decision-making in domains such as healthcare, finance, and education. Many have called for explainable AI (XAI): AI systems that provide human-understandable explanations for their reasoning or responses. Through explanations, developers and researchers aim to create AI systems that allow human oversight and improved decision-making while fostering trust. While XAI has traditionally focused on developers in their pursuit of improved system performance and fairness, I consider the question of how XAI systems can support non-technical end-users. A core assumption of XAI is that explanations are actionable: that is, they change what users know, enabling them to act using the AI. What makes explanations actionable for end-users? What can happen if we ignore actionability? What human-factors dimensions might we be overlooking in our AI designs? In this talk, I will refocus explainable AI on these questions and present evidence that asking them can improve outcomes for users of AI systems.
About the speaker
Dr. Mark Riedl is a Professor in the Georgia Tech School of Interactive Computing and Associate Director of the Georgia Tech Machine Learning Center. His research focuses on human-centered artificial intelligence—the development of artificial intelligence and machine learning technologies that understand, enhance, and augment the human condition. Dr. Riedl’s recent work has focused on story understanding and generation, computational creativity, explainable AI, and teaching virtual agents to behave safely. His research is supported by the NSF, DARPA, ONR, the U.S. Army, U.S. Health and Human Services, Disney, Google, Meta, and Amazon. He is the recipient of a DARPA Young Faculty Award and an NSF CAREER Award.