Session: Implicit Feedback and User Preference
Date: Tuesday, September 11, 11:00-12:30
- Optimal Radio Channel Recommendations with Implicit Feedback
by Omar Moling, Linas Baltrunas and Francesco Ricci
The vast majority of recommender systems run as server-side applications and are controlled by the content provider, i.e., the party that provides the recommended items. This paper focuses on a different scenario: the user can access content from multiple providers, which in our application offer radio channels, and a personal recommender installed on the client side decides which channel to select and recommend to the user. We exploit the implicit feedback derived from the user's listening behavior and model channel recommendation as a sequential decision-making problem. We have implemented a personal RS that uses reinforcement learning techniques to decide which channel to play every time the user asks for a new music track or the current track finishes playing. In a live user study we show that the proposed system can sequentially select the next channel to play such that users listen to a larger fraction of the streamed tracks, and for more time, compared to a baseline system that does not exploit implicit feedback.
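The sequential channel-selection idea above can be illustrated with a minimal epsilon-greedy bandit sketch. This is not the paper's actual reinforcement learning formulation; the class name, the reward definition (fraction of the track listened to), and all parameters are illustrative assumptions.

```python
import random

class ChannelSelector:
    """Toy epsilon-greedy selector over radio channels (illustration only,
    not the formulation used in the paper)."""

    def __init__(self, channels, epsilon=0.1, seed=0):
        self.rng = random.Random(seed)
        self.epsilon = epsilon
        self.value = {c: 0.0 for c in channels}  # estimated mean reward
        self.count = {c: 0 for c in channels}    # observations per channel

    def select(self):
        # Explore a random channel with probability epsilon,
        # otherwise exploit the current best estimate.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.value))
        return max(self.value, key=self.value.get)

    def update(self, channel, listened_fraction):
        # Reward: fraction of the streamed track the user listened to.
        self.count[channel] += 1
        n = self.count[channel]
        self.value[channel] += (listened_fraction - self.value[channel]) / n
```

Each time a track ends or is skipped, the observed listening fraction updates the estimate for the channel that was playing, so channels the user listens through are selected more often.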
- Alternating Least Squares for Personalized Ranking
by Gabor Takacs and Domonkos Tikk
Two flavors of the recommendation problem are the explicit and the implicit feedback settings. In the explicit feedback case, users rate items and the user-item preference relationship can be modelled on the basis of the ratings. In the harder but more common implicit feedback case, the system has to infer user preferences from indirect information: the presence or absence of events, such as a user viewing an item. One approach to handling implicit feedback is to minimize a ranking objective function instead of the conventional prediction mean squared error. Naive minimization of a ranking objective function is typically expensive. This difficulty is usually overcome by a trade-off: sacrificing some accuracy for computational efficiency by sampling the objective function. In this paper, we present a computationally efficient approach for the direct minimization of a ranking objective function, without sampling. Experiments on the Y!Music and Netflix data sets demonstrate that the proposed method outperforms other implicit feedback recommenders in many cases in terms of the ErrorRate, ARP and Recall evaluation metrics.
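To see why naive minimization of a pairwise ranking objective is expensive, consider evaluating a squared pairwise loss over all item pairs for one user. The sketch below is a hypothetical illustration of such an objective, not the paper's algorithm; ALS-style methods reorganize sums like this to avoid enumerating pairs explicitly.

```python
def pairwise_ranking_loss(scores, relevance):
    """Naively evaluate a squared pairwise ranking objective:
    sum over item pairs (i, j) of ((s_i - s_j) - (r_i - r_j))**2.
    O(I^2) in the number of items, which is what makes direct
    minimization expensive without algebraic reorganization."""
    loss = 0.0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            diff = (scores[i] - scores[j]) - (relevance[i] - relevance[j])
            loss += diff * diff
    return loss
```

The loss is zero when predicted score differences reproduce the relevance differences for every pair, and the quadratic number of terms is the cost that sampling-based methods approximate away.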
- Local Implicit Feedback Mining for Music Recommendation
by Diyi Yang, Tianqi Chen, Qiuxia Lu and Weinan Zhang
Digital music has undergone a fascinating transformation over the past decades. Thousands of people share or distribute their music collections on the Internet, resulting in an explosive increase of information and greater user dependence on automatic recommender systems. Although many techniques such as collaborative filtering exist, most approaches focus mainly on users' global behaviors, neglecting local actions and the specific properties of music. In this paper, we propose a simple and effective local implicit feedback model that mines users' local preferences to achieve better recommendation performance in both rating and ranking prediction. Moreover, we design an efficient training algorithm to speed up the updating procedure, and give a method for finding the most appropriate time granularity to improve performance. We conduct various experiments to evaluate this model and show that it significantly outperforms the baseline model. Integrating it with existing temporal models achieves a great improvement over the reported best single model for Yahoo! Music.
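The notion of "local" implicit feedback at a chosen time granularity can be sketched as binning a user's play events by time window. This toy function is an assumption-laden illustration of the time-granularity idea, not the paper's model.

```python
from collections import defaultdict

def local_feedback(events, bin_size):
    """Group a user's play events into time bins of `bin_size` units and
    count plays per (bin, item) pair. Coarser bins approach a global
    profile; finer bins capture local, short-term preferences."""
    counts = defaultdict(int)
    for timestamp, item in events:
        counts[(timestamp // bin_size, item)] += 1
    return dict(counts)
```

Choosing `bin_size` is exactly the time-granularity question the abstract mentions: too coarse and local behavior is averaged away, too fine and each bin holds too little evidence.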
- How Many Bits per Rating?
by Daniel Kluver, Tien T. Nguyen, Michael Ekstrand, Shilad Sen and John Riedl
Most recommender systems assume that user ratings accurately represent user preferences. However, prior research shows that user ratings are imperfect and noisy. Moreover, this noise limits the measurable predictive power of any recommender system. We propose an information-theoretic framework for quantifying the preference information contained in ratings and predictions. We computationally explore the properties of our model and apply our framework to estimate the efficiency of different rating scales on real-world datasets. We then estimate how the amount of information that predictions give to users is related to the scale on which ratings are collected. Our findings suggest a tradeoff in rating scale granularity: while previous research indicates that coarse scales (such as thumbs up / thumbs down) take less time, we find that ratings on these scales provide less predictive value to users. We introduce a new measure, preference bits per second, to quantitatively reconcile this tradeoff.
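The information-per-rating intuition can be made concrete with Shannon entropy: a rating drawn from a scale's empirical distribution carries at most the entropy of that distribution in bits. The sketch below uses made-up distributions and made-up entry times purely for illustration; it does not reproduce the paper's framework or its measured values.

```python
import math

def entropy_bits(distribution):
    """Shannon entropy (bits) of a rating distribution: an upper bound on
    the preference information a single rating on that scale can carry."""
    return -sum(p * math.log2(p) for p in distribution if p > 0)

# Hypothetical rating distributions, for illustration only.
thumbs = [0.6, 0.4]                      # thumbs up / thumbs down
stars = [0.05, 0.10, 0.25, 0.35, 0.25]  # 1-5 stars

bits_thumbs = entropy_bits(thumbs)  # at most 1 bit for a binary scale
bits_stars = entropy_bits(stars)    # at most log2(5) bits for 5 levels

# A "preference bits per second"-style quantity trades information per
# rating against entry time (the times here are invented examples).
rate_thumbs = bits_thumbs / 1.5  # assume 1.5 s to click a thumb
rate_stars = bits_stars / 4.0    # assume 4 s to pick a star level
```

Under such numbers a coarse scale can win per second even while losing per rating, which is the tradeoff the proposed measure is designed to expose.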
RecSys 2012 (Dublin)