Title
From Documents to Dialogues: How LLMs are Shaping the Future of Work
Abstract
The future of work is changing rapidly, with knowledge increasingly embedded in conversations rather than documents. In this keynote, I will explore how large language models (LLMs) can boost people’s productivity and creativity by generating natural language suggestions and feedback that align with their context and intent. To do this effectively, LLMs need to be able to leverage relevant content from various sources to ground their responses. People also need to learn new conversational patterns that elicit the full value of LLMs, since the patterns that work well among people may not be optimal for LLMs. I will discuss the importance of prompt engineering in productivity contexts, and highlight the value of being able to identify and recommend conversational templates. By leaning into these research topics, the Recommender Systems community has an opportunity to create a new – and better – future of work.
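To make the ideas of grounding and conversational templates concrete, here is a minimal sketch of how an assistant might fill a reusable template with retrieved, source-attributed workplace content before querying an LLM; the Snippet type, the build_prompt helper, and the template wording are illustrative assumptions rather than any particular product's implementation.

```python
# Minimal, illustrative sketch: filling a reusable conversational template
# with retrieved, source-attributed content so an LLM can ground its answer.
# All names here (Snippet, build_prompt, GROUNDED_TEMPLATE) are hypothetical.
from dataclasses import dataclass

@dataclass
class Snippet:
    source: str  # e.g. a document, email thread, or meeting transcript
    text: str

GROUNDED_TEMPLATE = """You are a workplace assistant.
Answer using ONLY the context below; if the answer is not there, say you do not know.

Context:
{context}

User intent: {intent}
User request: {request}
Answer:"""

def build_prompt(intent: str, request: str, snippets: list[Snippet]) -> str:
    """Fill the conversational template with retrieved content and the user's intent."""
    context = "\n".join(f"[{s.source}] {s.text}" for s in snippets)
    return GROUNDED_TEMPLATE.format(context=context, intent=intent, request=request)

if __name__ == "__main__":
    snippets = [
        Snippet("Q3 planning doc", "The launch date moved to November 12."),
        Snippet("Team chat", "Design review is scheduled for October 3."),
    ]
    print(build_prompt("status summary", "When is the launch?", snippets))
```

Choosing which template to apply, given a person's context and intent, is itself a recommendation problem.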
About the speaker
Jaime Teevan is Chief Scientist and Technical Fellow at Microsoft, where she is responsible for driving research-backed innovation in the company’s core products. Jaime is an advocate for finding smarter ways for people to make the most of their time. She leads Microsoft’s future of work initiative, which explores how everything from AI to hybrid work changes the way people get things done. Previously she was Technical Advisor to CEO Satya Nadella and led the Productivity team at Microsoft Research. Jaime is an ACM Fellow and a member of the ACM SIGIR and SIGCHI Academies, and has received the TR35, BECA, and Karen Spärck Jones awards. She holds a Ph.D. in AI from MIT and a B.S. from Yale, and is an Affiliate Professor at the University of Washington.
Title
Towards Generative Search and Recommendation
Abstract
The emergence of large language models (LLMs) that offer significant capabilities in content comprehension, content generation, and flexible dialogue has the potential to revolutionize the ways we seek and consume information. We can now freely converse with such systems to express our intent in a fine-grained and multimodal manner, and we expect the system to recommend existing items or generate new items as necessary, and to present them in a concise, summarized form. This has prompted a recent trend in both academia and industry to develop LLM-based systems with enhanced capabilities for various applications. However, before such systems can be widely used and accepted, we need to address several challenges. The first is trust in generated content: we expect LLMs to make mistakes because of the quality of the data used for their training, and they may also lack knowledge in certain vertical domains. We thus need to develop both external and self-evaluation techniques to assess the trustability of the generated content. The second is the integration of retrieved and generated content: in many vertical-domain applications, such as fintech, healthcare, and event detection, the latest information and signals must be integrated to supplement the existing and generated content. The third challenge is how to teach the system to be proactive in anticipating the needs of users and steering the conversation in a fruitful direction. In this talk, I will present a generative information-seeking paradigm and discuss our research towards a trustable generative system for search and recommendation. In particular, I will discuss how we address the challenges of trust, integration of retrieved and generated content, and proactivity in two vertical-domain LLM-based systems. Finally, I will present some promising research directions.
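As a rough illustration of the retrieve-then-generate-then-evaluate flow described above, the following sketch wires a placeholder retriever and generator together and uses a deliberately crude token-overlap check as a stand-in for self-evaluation of trust; the function names, the overlap heuristic, and the support threshold are assumptions made for illustration, not the system presented in the talk.

```python
# Illustrative retrieve -> generate -> self-evaluate loop. The `retrieve`
# and `generate` callables are placeholders for a real retriever and LLM;
# the token-overlap score is a deliberately simple stand-in for proper
# external or self-evaluation of the generated content.
from typing import Callable, List

def token_overlap(claim: str, passages: List[str]) -> float:
    """Fraction of the claim's tokens that appear in at least one retrieved passage."""
    tokens = set(claim.lower().split())
    supported = set()
    for passage in passages:
        supported |= tokens & set(passage.lower().split())
    return len(supported) / max(len(tokens), 1)

def answer_with_trust(query: str,
                      retrieve: Callable[[str], List[str]],
                      generate: Callable[[str, List[str]], str],
                      min_support: float = 0.6) -> dict:
    passages = retrieve(query)                # latest vertical-domain content
    draft = generate(query, passages)         # answer grounded in the passages
    support = token_overlap(draft, passages)  # crude trust signal
    return {
        "answer": draft,
        "support": support,
        "trusted": support >= min_support,    # flag low-support answers for review
        "sources": passages,
    }
```

In a real system the overlap check would be replaced by learned or LLM-based evaluators, and low-trust answers could trigger further retrieval or a clarifying, proactive turn in the dialogue.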
About the speaker
Dr. Chua is the KITHCT Chair Professor at the School of Computing, National University of Singapore (NUS). He is also the Distinguished Visiting Professor of Tsinghua University, the Visiting Pao Yue-Kong Chair Professor of Zhejiang University, and the Distinguished Visiting Professor of Sichuan University. Dr. Chua was the Founding Dean of the School of Computing from 1998 to 2000. His main research interests include unstructured data analytics, video analytics, conversational search and recommendation, and robust and trustable AI. He is the co-Director of NExT, a joint research center between NUS and Tsinghua University.
Dr. Chua is the recipient of the 2015 ACM SIGMM Achievements Award and the winner of the 2022 NUS Research Recognition Award. He is the Chair of the steering committee of the Multimedia Modeling (MMM) conference series, and of the ACM International Conference on Multimedia Retrieval (ICMR) (2015-2018). He is the General Co-Chair of ACM Multimedia 2005, ACM SIGIR 2008, ACM Web Science 2015, ACM MM-Asia 2020, WSDM 2023, and the upcoming TheWebConf (or WWW) 2024. He serves on the editorial boards of two international journals. Dr. Chua is the co-Founder of two technology startup companies in Singapore.
Title
Recommendation systems: Challenges and solutions
Abstract
In this talk, I will present machine learning solutions for three specific recommendation system challenges in the real world:
- Node recommendations in directed graphs: Given a directed graph, the problem is to recommend the top-k nodes with the highest likelihood of a link from a query node. We enhance GNNs with dual embeddings and propose adaptive neighborhood sampling techniques to handle asymmetric recommendations (see the first sketch after this list).
- Delayed feedback: The problem is to train an ML model in the presence of target labels that may change over time due to delayed feedback of user actions. We employ an importance sampling strategy to deal with delayed feedback: the strategy corrects the bias in both target labels and feature computation, and leverages pre-conversion signals such as clicks (see the second sketch after this list).
- Uncertainty in model predictions: For binary classification problems, we show that we can leverage uncertainty estimates for model predictions to improve accuracy. Specifically, we propose algorithms to select decision boundaries with multiple threshold values on model scores, one per uncertainty level, to increase recall without hurting precision (see the third sketch after this list).
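The first sketch below illustrates the dual-embedding idea for asymmetric, directed-link recommendation: each node keeps separate source and target embeddings, so the score of a link u→v need not equal that of v→u. The random embeddings stand in for GNN outputs, and the adaptive neighborhood sampling is omitted; everything here is a simplified illustration rather than the exact model from the talk.

```python
# Simplified sketch of dual embeddings for directed-link recommendation.
# Each node has separate "source" and "target" embeddings, so the score of
# u -> v can differ from v -> u. Random vectors stand in for GNN outputs.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, dim = 1000, 64
src_emb = rng.normal(size=(num_nodes, dim))  # used when a node is the query (link source)
dst_emb = rng.normal(size=(num_nodes, dim))  # used when a node is a candidate (link target)

def recommend(query: int, k: int = 10) -> np.ndarray:
    """Return the top-k nodes most likely to receive a link from `query`."""
    scores = dst_emb @ src_emb[query]   # asymmetric: source side of query, target side of candidates
    scores[query] = -np.inf             # exclude self-links
    return np.argpartition(-scores, k)[:k]

print(recommend(42))
```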
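The second sketch shows one simple way importance weights can enter the training loss when conversions arrive late: recent unconverted examples are down-weighted because their negative label may simply mean "not yet", and a click is treated as a pre-conversion hint. The specific weighting heuristic and the expected-delay parameter are assumptions for illustration, not the bias-correction strategy used in the talk.

```python
# Illustrative importance-weighting sketch for delayed feedback. The
# weighting heuristic below is a simplification: fresh negatives are
# down-weighted because the conversion may not have arrived yet, and
# clicked examples are treated as more likely to convert eventually.
import numpy as np

def weighted_logloss(scores, observed_labels, weights):
    """Binary cross-entropy in which each example carries an importance weight."""
    p = np.clip(1.0 / (1.0 + np.exp(-scores)), 1e-7, 1 - 1e-7)
    ll = observed_labels * np.log(p) + (1 - observed_labels) * np.log(1 - p)
    return -np.mean(weights * ll)

def importance_weights(observed_labels, elapsed_days, clicked, expected_delay=7.0):
    """Heuristic weights: trust fresh negative labels less, especially if clicked."""
    w = np.ones_like(elapsed_days, dtype=float)
    stale_neg = (observed_labels == 0) & (elapsed_days < expected_delay)
    w[stale_neg] = elapsed_days[stale_neg] / expected_delay  # less trust in very fresh negatives
    w[stale_neg & (clicked == 1)] *= 0.5                     # clicks hint at a pending conversion
    return w
```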
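Finally, the third sketch illustrates per-uncertainty-level decision thresholds: validation examples are bucketed by an uncertainty estimate, and each bucket gets the lowest score threshold that still meets a target precision, which can admit more positives (higher recall) in buckets where the model is reliable. The bucketing scheme and the brute-force threshold search are illustrative simplifications of the algorithms referred to above.

```python
# Sketch of per-uncertainty-level decision thresholds. For each uncertainty
# bucket, pick the lowest score threshold that still meets a target precision
# on validation data; well-behaved buckets can then use a lower threshold,
# increasing recall without giving up precision.
import numpy as np

def per_bucket_thresholds(scores, labels, uncertainty, n_buckets=3, target_precision=0.9):
    edges = np.quantile(uncertainty, np.linspace(0, 1, n_buckets + 1)[1:-1])
    buckets = np.digitize(uncertainty, edges)
    thresholds = {}
    for b in range(n_buckets):
        mask = buckets == b
        s, y = scores[mask], labels[mask]
        best = 1.0                              # conservative fallback if nothing qualifies
        for t in np.unique(s)[::-1]:            # scan candidate thresholds from high to low
            pred = s >= t
            if pred.sum() and y[pred].mean() >= target_precision:
                best = t                        # a lower threshold still meets the precision target
        thresholds[b] = best
    return thresholds, edges
```

At inference time, an example's uncertainty estimate selects its bucket (via the same edges), and the corresponding threshold is applied to its model score.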
About the speaker
Rajeev Rastogi is the Vice President of Machine Learning (ML) for Amazon’s International Stores business. He leads the development of ML solutions in the areas of Search, Advertising, Deals, Catalog Quality, Payments, Forecasting, Question Answering, Grocery Grading, etc. Previously, he was Vice President of Yahoo! Labs Bangalore and the founding Director of the Bell Labs Research Center in Bangalore, India. Rajeev is an ACM Fellow and a Bell Labs Fellow. He has published over 125 papers and holds over 100 patents. He currently serves on the editorial board of CACM, and has been an Associate Editor for IEEE Transactions on Knowledge and Data Engineering in the past. Rajeev received his B.Tech. degree from IIT Bombay and his Ph.D. in Computer Science from the University of Texas at Austin.