Paper Session 9: Deep Learning

Date: Wednesday, Aug 30, 2017, 10:45-12:30
Location: Main Room
Chair: Domonkos Tikk

  • [LP] Getting Deep Recommenders Fit: Bloom Embeddings for Sparse Binary Input/Output Networks by Joan Serrà and Alexandros Karatzoglou

    Recommendation algorithms that incorporate techniques from deep learning are becoming increasingly popular. Due to the structure of the data coming from recommendation domains (i.e., one-hot-encoded vectors of item preferences), these algorithms tend to have large input and output dimensionalities that dominate their overall size. This makes them difficult to train, due to the limited memory of graphical processing units, and difficult to deploy on mobile devices with limited hardware. To address these difficulties, we propose Bloom embeddings, a compression technique that can be applied to the input and output of neural network models dealing with sparse high-dimensional binary-coded instances. Bloom embeddings are computationally efficient, and do not seriously compromise the accuracy of the model up to 1/5 compression ratios. In some cases, they even improve over the original accuracy, with relative increases up to 12%. We evaluate Bloom embeddings on 7 data sets and compare them against 4 alternative methods, obtaining favorable results. We also discuss a number of further advantages of Bloom embeddings, such as ‘on-the-fly’ constant-time operation, zero or marginal space requirements, training-time speedups, and the fact that they do not require any change to the core model architecture or training configuration.
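The input-side idea can be illustrated with a minimal sketch: as in a Bloom filter, each active item index sets k hash-derived positions in a much smaller binary vector, which then feeds the network in place of the full one-hot encoding. The function name, hash choice, and parameter values below are illustrative assumptions, not the authors' implementation.

```python
import hashlib

def bloom_embed(item_ids, m, k):
    """Compress a set of active item indices into an m-dimensional binary
    vector: each item sets k hash-derived positions, Bloom-filter style."""
    v = [0] * m
    for item in item_ids:
        for seed in range(k):
            h = hashlib.md5(f"{seed}:{item}".encode()).hexdigest()
            v[int(h, 16) % m] = 1  # collisions between items are tolerated
    return v

# Example: a preference set over a 50,000-item catalogue, compressed to a
# 10,000-dimensional input (a 1/5 compression ratio).
x = bloom_embed({7, 123, 49999}, m=10000, k=4)
```

Because the mapping is hash-based, it needs no stored dictionary and can be computed on the fly in constant time per item; the trade-off is that hash collisions make the code lossy, which the paper reports as tolerable up to roughly 1/5 compression.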

  • [LP] TransNets: Learning to Transform for Recommendation by Rose Catherine and William Cohen

    Recently, deep learning methods have been shown to improve the performance of recommender systems over traditional methods, especially when review text is available. For example, a recent model, DeepCoNN, uses neural nets to learn one latent representation for the text of all reviews written by a target user, and a second latent representation for the text of all reviews for a target item, and then combines these latent representations to obtain state-of-the-art performance on recommendation tasks. We show that (unsurprisingly) much of the predictive value of review text comes from reviews of the target user for the target item. We then introduce a way in which this information can be used in recommendation, even when the target user’s review for the target item is not available. Our model, called TransNets, extends the DeepCoNN model by introducing an additional latent layer representing the target user-target item pair. We then regularize this layer, at training time, to be similar to another latent representation of the target user’s review of the target item. We show that TransNets and extensions of it improve substantially over the previous state-of-the-art.
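The core training trick can be sketched numerically: a transform layer maps the (user, item) pair's latent codes to a new vector, and a regularizer pulls that vector toward the latent code of the user's actual review of the item, which exists at training time but not at prediction time. All dimensions, weights, and latent codes below are hypothetical stand-ins for the outputs of the model's review-text encoders.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
# Hypothetical latent codes: z_user and z_item come from encoding each side's
# review text; z_target encodes the user's actual review of this item and is
# only available during training.
z_user, z_item, z_target = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
W = rng.normal(size=(d, 2 * d)) * 0.1  # transform-layer weights
w = rng.normal(size=d) * 0.1           # rating regressor on the transformed code

z_pair = np.tanh(W @ np.concatenate([z_user, z_item]))  # transformed pair code
reg_loss = np.mean((z_pair - z_target) ** 2)   # pull z_pair toward the review code
rating_loss = (w @ z_pair - 4.0) ** 2          # squared error vs. an observed rating of 4.0
total = rating_loss + reg_loss
```

At test time the target review does not exist, so only the transform path is used: the network predicts the rating from `z_pair`, which training has shaped to resemble what the review's latent code would have been.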

  • [LP] Interpretable Convolutional Neural Networks with Dual Local and Global Attention for Review Rating Prediction by Sungyong Seo, Jing Huang, Hao Yang and Yan Liu

    Recently, many e-commerce websites have encouraged their users to rate shopping items and write review text. This review text information has been very useful for understanding user preferences and item properties, and it enhances these websites’ capability to make personalized recommendations. In this paper, we propose to model user preferences and item properties using convolutional neural networks (CNNs) with dual local and global attention, motivated by the strength of CNNs at extracting complex features. By using aggregated review text from a user and aggregated review text for an item, our model can learn the unique features (embedding) of each user and each item. These features are then used to predict ratings. We train these user and item networks jointly, which enables interaction between users and items in a manner similar to matrix factorization. The local attention gives us insight into a user’s preferences or an item’s properties. The global attention helps CNNs focus on the semantic meaning of the whole review text. Thus, the combined local and global attentions enable an interpretable and better-learned representation of users and items. We validate the proposed models on popular review datasets from Yelp and Amazon and compare the results with matrix factorization (MF), the hidden factors as topics (HFT) model, and the recently proposed convolutional matrix factorization (ConvMF+). The proposed CNN with dual attention outperforms HFT and ConvMF+ in terms of mean squared error (MSE). In addition, we compare the user/item embeddings learned from these models for classification and recommendation. These results also confirm the superior quality of user/item embeddings learned from our model.
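The two attention mechanisms can be contrasted in a minimal sketch: local attention scores each word from a small window around it and reweights word vectors before convolution, while global attention computes one softmax over the whole review to form a single semantic summary. The embeddings, scoring vectors, and window size below are hypothetical, not the paper's parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical word embeddings for one review: T words, d dimensions.
rng = np.random.default_rng(1)
T, d = 8, 5
E = rng.normal(size=(T, d))

# Local attention: score each word from a +/-1-word window, then reweight
# the word vectors that feed the word-level convolution.
u_local = rng.normal(size=d)
local_scores = np.array(
    [np.tanh(E[max(0, t - 1):t + 2].mean(0) @ u_local) for t in range(T)]
)
E_local = (1 + local_scores)[:, None] * E  # emphasize high-scoring words

# Global attention: one softmax over the whole review, yielding a single
# summary vector of its overall semantics.
u_global = rng.normal(size=d)
global_weights = softmax(E @ u_global)
review_vec = global_weights @ E
```

The local scores are also what makes the model interpretable: inspecting which words receive high weight reveals the preferences or properties the network attends to.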

  • [SP] When Recurrent Neural Networks meet the Neighborhood for Session-Based Recommendation by Dietmar Jannach and Malte Ludewig

    Deep learning methods have led to substantial progress in various application fields of AI, and in recent years a number of proposals were made to improve recommender systems with artificial neural networks. For the problem of making session-based recommendations, i.e., for recommending the next item in an anonymous session, Hidasi et al. recently investigated the application of recurrent neural networks with Gated Recurrent Units (GRU4REC). Assessing the true value of such novel approaches based only on what is reported in the literature is, however, difficult when no standard evaluation protocols are applied and when the strength of the baselines used in the performance comparison is not clear. In this work we show, based on a comprehensive empirical evaluation, that a heuristics-based nearest neighbor (kNN) scheme for sessions outperforms GRU4REC in the large majority of the tested configurations and datasets. Neighborhood sampling and efficient in-memory data structures ensure the scalability of the kNN method. The best results in the end were often achieved when we combined the kNN approach with GRU4REC, which shows that RNNs can leverage sequential signals in the data that cannot be detected by the co-occurrence-based kNN method.
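The flavor of a session-based kNN baseline can be sketched as follows: treat each session as a binary item set, find the past sessions most cosine-similar to the current one, and score candidate items by the summed similarity of the neighbor sessions that contain them. The function name, similarity choice, and parameters are illustrative assumptions rather than the paper's exact configuration (which also uses neighborhood sampling for scalability).

```python
from collections import Counter
from math import sqrt

def recommend(current, past_sessions, n_neighbors=2, top=3):
    """Score candidate next items by summing the cosine similarity of the
    current session to its nearest past sessions (binary item-set vectors)."""
    cur = set(current)
    sims = []
    for s in past_sessions:
        s = set(s)
        sim = len(cur & s) / (sqrt(len(cur)) * sqrt(len(s)))  # cosine on 0/1 vectors
        sims.append((sim, s))
    sims.sort(key=lambda t: t[0], reverse=True)
    scores = Counter()
    for sim, s in sims[:n_neighbors]:
        for item in s - cur:          # only items not yet in the session
            scores[item] += sim
    return [item for item, _ in scores.most_common(top)]
```

A method this simple captures co-occurrence signals but not sequential order within a session, which is exactly the gap the paper finds GRU4REC fills when the two are combined.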

  • [SP] Recommendation of High Quality Representative Reviews in e-commerce by Debanjan Paul, Sudeshna Sarkar, Muthusamy Chelliah, Chetan Kalyan and Prajit Prashant Sinai Nadkarni

    Users of e-commerce portals commonly use customer reviews for making purchase decisions. Many products have tens or hundreds of reviews, which makes it impossible for the customer to read all of them in order to get a good idea of the product. A review recommendation system that can recommend a subset of the reviews is thus useful for e-commerce websites. However, customer reviews are of varied quality, and different reviews cover different aspects or issues of the product. We follow previous work that maintains the statistical distribution of product aspects, along with their associated sentiments, over the entire review set for a particular product. However, we address the challenge that arises because similar aspects are mentioned in different reviews using different natural language expressions (e.g., camera, photo and picture refer to the same product aspect of camera). We use vector representations to identify mentions of similar aspects and group them together under a single product aspect. The review helpfulness score may act as a proxy for review quality, but this approach suffers from a cold-start problem, as new reviews do not have any helpfulness score. We instead measure the quality of a review based on its content, via supervised training of a convolutional neural network. Though our neural network model is trained on reviews from the Amazon dataset that have helpfulness scores, it can predict the quality score of new reviews that do not have any. The recommended subset of reviews has a high content score, and its coverage of product aspects, issues and sentiments is representative of the entire review set. The system is evaluated on datasets from Amazon and is found to be more useful than the competing methods.
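The aspect-grouping step can be illustrated with a small sketch: each aspect mention is represented by a vector, and mentions whose vectors are cosine-similar above a threshold are merged under one product aspect. The toy vectors below are hypothetical stand-ins for word embeddings, and the greedy grouping rule is an illustrative simplification, not the authors' exact procedure.

```python
import numpy as np

def group_aspects(vecs, threshold=0.8):
    """Greedy grouping: each mention joins the first existing group whose
    centroid it matches above the cosine threshold, else starts a new group."""
    groups = []
    for name, v in vecs.items():
        v = v / np.linalg.norm(v)
        for g in groups:
            c = g["sum"] / np.linalg.norm(g["sum"])  # current group centroid
            if float(c @ v) >= threshold:
                g["sum"] += v
                g["members"].append(name)
                break
        else:
            groups.append({"sum": v.copy(), "members": [name]})
    return [g["members"] for g in groups]

# Toy vectors standing in for word embeddings (hypothetical values):
vecs = {
    "camera":  np.array([1.0, 0.1, 0.0]),
    "photo":   np.array([0.9, 0.2, 0.1]),
    "picture": np.array([0.95, 0.15, 0.05]),
    "battery": np.array([0.0, 1.0, 0.2]),
}
```

Grouped this way, the aspect distribution is computed over canonical aspects rather than surface strings, so "photo" and "picture" no longer count as separate aspects when selecting a representative review subset.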
