2017
Web-Scale Language-Independent Cataloging of Noisy Product Listings for E-Commerce
Pradipto Das | Yandi Xia | Aaron Levine | Giuseppe Di Fabbrizio | Ankur Datta
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
The cataloging of product listings through taxonomy categorization is a fundamental problem for any e-commerce marketplace, with applications ranging from personalized search recommendations to query understanding. However, manual and rule-based approaches to categorization do not scale. In this paper, we compare several classifiers for categorizing listings in both English and Japanese product catalogs. We show empirically that a combination of words from product titles, navigational breadcrumbs, and list prices, when available, significantly improves results. We outline a novel method using correspondence topic models and a lightweight manual process to reduce noise from mislabeled data in the training set. We contrast linear models, gradient boosted trees (GBTs), and convolutional neural networks (CNNs), and show that GBTs and CNNs yield the largest error reductions. Finally, we show that GBTs applied in a language-agnostic way to a large-scale Japanese e-commerce dataset improve taxonomy categorization performance over the current state of the art based on deep belief network models.
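The following is an illustrative sketch, not the authors' implementation: a gradient boosted tree classifier over bag-of-words features from product titles and navigational breadcrumbs, with the list price appended as a numeric feature, as the abstract describes. The listings, field names, and category labels are hypothetical placeholders.

```python
# Hedged sketch: GBT categorization over title + breadcrumb words plus price.
# Data and feature choices are illustrative, not from the paper.
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import GradientBoostingClassifier

listings = [
    {"title": "mens running shoes size 10", "breadcrumb": "Shoes > Athletic", "price": 59.99},
    {"title": "4k ultra hd smart tv 55 inch", "breadcrumb": "Electronics > TVs", "price": 499.00},
    {"title": "womens trail running sneakers", "breadcrumb": "Shoes > Athletic", "price": 72.50},
    {"title": "oled television 65 inch", "breadcrumb": "Electronics > TVs", "price": 1299.00},
]
labels = ["Shoes", "Electronics", "Shoes", "Electronics"]

# Concatenate title and breadcrumb text so a single vectorizer covers both sources.
text = [f'{x["title"]} {x["breadcrumb"]}' for x in listings]
X_text = CountVectorizer().fit_transform(text)

# Append list price as one extra numeric column.
X_price = csr_matrix([[x["price"]] for x in listings])
X = hstack([X_text, X_price]).toarray()

clf = GradientBoostingClassifier(n_estimators=50)
clf.fit(X, labels)
print(clf.predict(X))
```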
Large-Scale Categorization of Japanese Product Titles Using Neural Attention Models
Yandi Xia | Aaron Levine | Pradipto Das | Giuseppe Di Fabbrizio | Keiji Shinzato | Ankur Datta
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
We propose a variant of Convolutional Neural Network (CNN) models, the Attention CNN (ACNN), for large-scale categorization of millions of Japanese items into thirty-five product categories. Compared to a state-of-the-art Gradient Boosted Tree (GBT) classifier, the proposed model reduces training time from three weeks to three days while maintaining more than 96% accuracy. Additionally, our proposed model characterizes products by placing attentive focus on word tokens in a language-agnostic way. The attention words are observed to be highly semantically correlated with the predicted categories, offering a form of automatic feature extraction for downstream processing.
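Below is a minimal sketch of an attention-style CNN text classifier in the spirit of the abstract; it is not the paper's ACNN architecture, and the vocabulary size, dimensions, and the thirty-five-category output are placeholder assumptions.

```python
# Hedged sketch: a CNN over token embeddings with a per-token attention pooling
# layer, so the attention weights highlight which words drive the prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionCNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, num_filters=32, num_classes=35):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # 1-D convolution over the token sequence (kernel width 3).
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=3, padding=1)
        # One attention score per position, computed from the convolved features.
        self.attn = nn.Linear(num_filters, 1)
        self.fc = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.embed(token_ids)                 # (batch, seq_len, embed_dim)
        h = F.relu(self.conv(x.transpose(1, 2)))  # (batch, num_filters, seq_len)
        h = h.transpose(1, 2)                     # (batch, seq_len, num_filters)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)  # (batch, seq_len)
        pooled = (h * weights.unsqueeze(-1)).sum(dim=1)           # (batch, num_filters)
        return self.fc(pooled), weights           # logits and per-token attention

model = AttentionCNN()
dummy = torch.randint(1, 5000, (2, 20))           # two fake titles of 20 token ids
logits, attention = model(dummy)
print(logits.shape, attention.shape)              # (2, 35) and (2, 20)
```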
2015
SemEval-2015 Task 8: SpaceEval
James Pustejovsky | Parisa Kordjamshidi | Marie-Francine Moens | Aaron Levine | Seth Dworman | Zachary Yocum
Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)