Vincent Gao
2022
Leveraging Seq2seq Language Generation for Multi-level Product Issue Identification
Yang Liu | Varnith Chordia | Hua Li | Siavash Fazeli Dehkordy | Yifei Sun | Vincent Gao | Na Zhang
Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5)
In a leading e-commerce business, we receive hundreds of millions of pieces of customer feedback from different text communication channels such as product reviews. The feedback can contain rich information regarding customers’ dissatisfaction with the quality of goods and services. To harness such information to better serve customers, in this paper, we create a machine learning approach to automatically identify product issues and uncover root causes from the customer feedback text. We identify issues at two levels: coarse grained (L-Coarse) and fine grained (L-Granular). We formulate this multi-level product issue identification problem as a seq2seq language generation problem. Specifically, we utilize transformer-based seq2seq models due to their versatility and strong transfer-learning capability. We demonstrate that our approach is label efficient and outperforms traditional approaches such as a multi-class multi-label classification formulation. Based on human evaluation, our fine-tuned model achieves 82.1% and 95.4% human-level performance for L-Coarse and L-Granular issue identification, respectively. Furthermore, our experiments illustrate that the model can generalize to identify unseen L-Granular issues.
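The abstract's key idea is to emit both issue levels as one generated string rather than as multi-label classifier outputs. A minimal sketch of that data formulation, assuming an illustrative target schema (the label names, task prefix, and separator here are assumptions, not the paper's actual format):

```python
# Hypothetical seq2seq formulation: serialize both issue levels into a
# single target string that a text-to-text model (e.g. T5) learns to
# generate from the feedback text. The schema below is illustrative.

def build_target(l_coarse: str, l_granular: str) -> str:
    """Serialize multi-level issue labels into one generation target."""
    return f"coarse: {l_coarse} | granular: {l_granular}"

def parse_target(generated: str) -> tuple[str, str]:
    """Recover the two issue levels from a generated string."""
    coarse_part, granular_part = generated.split(" | ")
    return (coarse_part.removeprefix("coarse: "),
            granular_part.removeprefix("granular: "))

# One hypothetical training pair for an encoder-decoder model:
source = "identify issue: The zipper broke after two days of use."
target = build_target("product quality", "broken zipper")

assert parse_target(target) == ("product quality", "broken zipper")
```

Because the target is free text rather than a fixed label set, a fine-tuned model can in principle generate L-Granular issue descriptions it never saw during training, which is consistent with the generalization result the abstract reports.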
2021
Improving Factual Consistency of Abstractive Summarization on Customer Feedback
Yang Liu | Yifei Sun | Vincent Gao
Proceedings of the 4th Workshop on e-Commerce and NLP
E-commerce stores collect customer feedback to let sellers learn about customer concerns and enhance the customer order experience. Because customer feedback often contains redundant information, a concise summary of the feedback can be generated to help sellers better understand the issues causing customer dissatisfaction. Previous state-of-the-art abstractive text summarization models make two major types of factual errors when producing summaries from customer feedback: wrong entity detection (WED) and incorrect product-defect description (IPD). In this work, we introduce a set of methods to enhance the factual consistency of abstractive summarization on customer feedback. We augment the training data with artificially corrupted summaries and use them as counterparts of the target summaries. We add a contrastive loss term to the training objective so that the model learns to avoid certain factual errors. Evaluation results show that a large portion of WED and IPD errors are alleviated for BART and T5. Furthermore, our approaches do not depend on the structure of the summarization model and are thus generalizable to any abstractive summarization system.
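The contrastive objective described above can be sketched as a hinge-style penalty that pushes the model to score the reference summary higher than its artificially corrupted counterpart. This is a hedged sketch, not the paper's exact loss: the margin, the weighting factor `alpha`, and the scoring interface are assumptions, and the log-probabilities would in practice come from the summarization model (e.g. BART or T5):

```python
# Hypothetical contrastive term: penalize the model when a corrupted
# summary is not scored at least `margin` log-prob below the reference.
# Values and interface are illustrative assumptions.

def contrastive_loss(logp_reference: float,
                     logp_corrupted: float,
                     margin: float = 1.0) -> float:
    """Hinge penalty on the reference-vs-corrupted score gap."""
    return max(0.0, margin - (logp_reference - logp_corrupted))

def total_loss(nll_reference: float,
               logp_reference: float,
               logp_corrupted: float,
               alpha: float = 0.5) -> float:
    """Standard NLL plus a weighted contrastive term (alpha is assumed)."""
    return nll_reference + alpha * contrastive_loss(logp_reference,
                                                    logp_corrupted)

# The pair is already separated by more than the margin: no penalty.
assert contrastive_loss(-2.0, -5.0) == 0.0
# The corrupted summary scores too close to the reference: penalized.
assert contrastive_loss(-2.0, -2.5) == 0.5
```

Because the term depends only on sequence-level scores of the reference and corrupted summaries, not on any internal layers, it can be bolted onto different abstractive summarizers, matching the model-agnostic claim in the abstract.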
Co-authors
- Yang Liu 2
- Yifei Sun 2
- Varnith Chordia 1
- Hua Li 1
- Siavash Fazeli Dehkordy 1
- Na Zhang 1