The KDD workshop on Online and Adaptive Recommender Systems (OARS) will serve as a platform for the publication and discussion of OARS research. The workshop will bring together practitioners and researchers from academia and industry to discuss the challenges of, and approaches to, implementing OARS algorithms and systems, and to improve user experiences by better modeling and responding to user intent.
Many recommender systems deployed in the real world rely on categorical user profiles and/or pre-calculated recommendation actions that stay static during a user session. Recent trends suggest that recommender systems should model user intent in real time and constantly adapt to meet user needs in the moment or to change user behavior in situ. In addition, various techniques have been proposed to help recommender systems adapt to new users, items, or behaviors. Some strategies for building “adaptive” recommenders include:
We invite submissions of papers and posters of two to ten pages (including references), representing original research, preliminary research results, proposals for new work, and position and opinion papers. Review will be single-blind: all submitted papers and posters will be peer reviewed by an international program committee of researchers of high repute. Accepted submissions will be presented at the workshop.
Topics of interest include, but are not limited to:
All papers will be peer reviewed (single-blind) by the program committee and judged on their relevance to the workshop, especially to the main themes identified above, and on their potential to generate discussion.
All submissions must be formatted according to the latest ACM SIG proceedings template (two-column format). The recommended document-class setting for LaTeX manuscripts is: \documentclass[sigconf, anonymous, review]{acmart}. Submissions must describe work that has not been previously published, is not accepted for publication elsewhere, and is not currently under review elsewhere. All submissions must be in English.
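For reference, a minimal manuscript skeleton consistent with this setting is sketched below; the title, author, affiliation, and body text are placeholders rather than required content.

```latex
% Minimal sketch of a submission skeleton using the recommended acmart setting.
% Title, author, affiliation, and body text below are placeholders only.
\documentclass[sigconf, anonymous, review]{acmart}

\begin{document}

\title{Your Paper Title}

\author{Author Name}
\affiliation{%
  \institution{Institution}
  \city{City}
  \country{Country}}

\begin{abstract}
Abstract text. The full submission should be two to ten pages, including references.
\end{abstract}

\maketitle

\section{Introduction}
Body text.

\end{document}
```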
Please note that at least one author of each accepted paper must register for the workshop and attend in person to present the paper.
Submissions to the KDD OARS workshop should be made via the workshop's EasyChair page.
| Milestone | Date |
|---|---|
| Submissions Due | |
| Notification | June 23 |
| Camera Ready Version Due | July 10 |
| Workshop Day | Aug 6 |
| Time | Talk |
|---|---|
| 1:00-1:10 PT | Opening |
| 1:10-1:50 PT | Invited Talk 1: Large Language Models for Generative Recommendation [slides]. Yongfeng Zhang, Rutgers University. The boom of Generative AI driven by Foundation Models such as large language models has brought a paradigm shift for recommender systems. Instead of traditional multi-stage filtering and matching-based recommendation, it now becomes possible to do straightforward single-stage recommendation by directly generating the recommended items based on the user's personalized inputs. This paradigm shift not only brings increased recommendation accuracy, but also improves efficiency through single-stage recommendation and enables better controllability for users based on natural language prompts. This talk will introduce generative recommendation from various perspectives, including large language models for recommendation, item representation, multi-modal recommendation, prompt generation, as well as the explainability and fairness of large language models in recommendation. |
| 1:50-2:10 PT | Contributed Talk 1: Active Learning with a Budget to Rank Candidates Rated by Disjoint Assessors. Tushar Phule, Pragalbh Vashishtha and Arun Rajkumar [slides] [BibTex] |
| 2:10-2:30 PT | Contributed Talk 2: Evaluating Federated Session-Based Recommender Systems. Marko Harasic, Dennis Lehmann, Adrian Paschke and Babak Mafakheri [slides] [BibTex] |
| 2:30-3:10 PT | Invited Talk 3: Incremental Training, Session-based Recommendation, and System Level Approaches to Online Recommender Systems [slides]. Even Oldridge, NVIDIA. Recommendation scenarios like social networks and news are extremely dynamic in nature, with user interests changing over time and new items being continuously added due to breaking news and trending events. In order to deal with this situation, the community has developed several techniques which address the problem of an ever-changing user and item catalogue. The NVIDIA Merlin team has developed a series of open source libraries which provide the capability for developing online recommender systems. This talk will introduce some of these tools and highlight the experiments that we've done using them. |
| 3:10-3:50 PT | Invited Talk 4: User Centric Recommender Systems [slides]. Tania Bedrax-Weiss, Google Research. User-centric recommender systems must understand and meet genuine user needs and preferences using natural, unobtrusive, and transparent interaction. In this talk, I will advocate how language technologies, and more recently large language models, can help enable these user-centric recommender systems. I will briefly outline the problems, solutions, and outstanding challenges in understanding the user, creating rich user interactions, understanding the domain, and understanding the ecosystem, and give examples of these in Google products. |
| 3:50-4:10 PT | Contributed Talk 3: FLASH4Rec: A Lightweight and Sparsely Activated Transformer for User-Aware Sequential Recommendation. Yachen Yan and Liubo Li [slides] [BibTex] |
| 4:10-4:30 PT | Contributed Talk 4: Empowering recommender systems using automatically generated Knowledge Graphs and Reinforcement Learning. Ghanshyam Verma, Simon Simanta, Huan Chen, Devishree Pillai, John P. McCrae, János A. Perge, Shovon Sengupta and Paul Buitelaar [slides] [BibTex] |
| 4:30-5:10 PT | Invited Talk 2: Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs [slides]. Jiliang Tang, Michigan State University. Learning on graphs has attracted immense attention due to its wide real-world applications such as recommender systems. The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs) and utilizes shallow text embeddings as initial node representations, which has limitations in general knowledge and profound semantic understanding. In recent years, Large Language Models (LLMs) have been proven to possess extensive common knowledge and powerful semantic comprehension abilities that have revolutionized existing workflows for handling text data. In this talk, I will discuss pipelines that explore the potential of LLMs in graph machine learning and introduce original observations and new insights that open new possibilities and suggest promising directions for leveraging LLMs for learning on graphs. |
| 5:10-5:30 PT | Contributed Talk 5: Decision Layer: Enhancing Multi-model, Multi Timescale Decisions on the Fly with Online Feedback. Meet Pradhuman Gandhi, Agniva Som, Suraj Satishkumar Sheth and Amrita Kumari [slides] [BibTex] |
Google Research, San Jose, CA
Michigan State University, East Lansing, MI
NVIDIA, Vancouver, British Columbia, Canada
Rutgers University, New Brunswick, NJ
The Home Depot, Atlanta, GA
Walmart Global Tech, Sunnyvale, CA
University of California, Berkeley
Google, New York
Georgia Institute of Technology, Atlanta, GA
University of California San Diego, San Diego, CA
Amazon, San Francisco, CA
Indeed, San Francisco, CA
Claypot AI, San Francisco, CA
Please send questions and enquiries to workshop.oars@gmail.com.