Smart “Chef”: Verifying the Effect of Role-based Paraphrasing for Aspect Term Extraction

Jiaxiang Chen, Yu Hong, Qingting Xu, Jianmin Yao


Abstract
We tackle Aspect Term Extraction (ATE), the task of automatically extracting aspect terms from sentences. Current Pretrained Language Model (PLM) based extractors have achieved significant improvements, primarily by benefiting from context-aware encoding. However, a considerable number of sentences in ATE corpora contain uninformative or low-quality contexts. Such sentences frequently act as “troublemakers” at test time. In this study, we explore a context-oriented quality-improvement method. Specifically, we propose to automatically rewrite sentences from the perspectives of virtual experts with different roles, such as a “chef” in the restaurant domain. On this basis, we perform ATE over the paraphrased sentences at test time, using the well-trained extractors without any change. In the experiments, we leverage ChatGPT to determine virtual experts in the considered domains, and induce ChatGPT to generate paraphrases conditioned on the roles of those virtual experts. We experiment on the benchmark SemEval datasets, including the Laptop-domain L14 and Restaurant-domain R14-16. The experimental results show that our approach effectively recalls inconspicuous aspect terms like “al di la”, although it reduces precision. In addition, it is shown that our approach can be substantially improved by redundancy elimination and multi-role voting. More importantly, our approach can be used to expand the predictions obtained on the original sentences. This yields state-of-the-art performance (i.e., F1-scores of 86.2%, 89.3%, 77.7%, and 82.7% on L14 and R14-16) without retraining or fine-tuning the baseline extractors.
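The abstract describes combining predictions from multiple role-conditioned paraphrases via multi-role voting, then using the result to expand the predictions on the original sentence. A minimal sketch of that aggregation step is shown below; the function name, the vote threshold, and the list-of-term-lists representation are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

def vote_and_expand(original_terms, role_terms, min_votes=2):
    """Multi-role voting sketch: keep aspect terms proposed by at least
    `min_votes` role-based paraphrases, then union them with the terms
    extracted from the original sentence (prediction expansion)."""
    # Count how many roles' paraphrases yielded each term (one vote per role).
    votes = Counter(term for terms in role_terms for term in set(terms))
    voted = {term for term, n in votes.items() if n >= min_votes}
    return set(original_terms) | voted

# Hypothetical example: extractor output on three role-based paraphrases
# of one restaurant-domain sentence, plus the original-sentence output.
roles = [["service", "al di la"], ["al di la", "menu"], ["al di la"]]
merged = vote_and_expand(["service"], roles)
```

Here “al di la” (proposed by all three roles) survives voting, while the single-vote “menu” is dropped; the union with the original prediction then recalls the inconspicuous term without discarding what the baseline extractor already found.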
Anthology ID:
2023.findings-emnlp.144
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2190–2197
URL:
https://aclanthology.org/2023.findings-emnlp.144
DOI:
10.18653/v1/2023.findings-emnlp.144
Cite (ACL):
Jiaxiang Chen, Yu Hong, Qingting Xu, and Jianmin Yao. 2023. Smart “Chef”: Verifying the Effect of Role-based Paraphrasing for Aspect Term Extraction. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2190–2197, Singapore. Association for Computational Linguistics.
Cite (Informal):
Smart “Chef”: Verifying the Effect of Role-based Paraphrasing for Aspect Term Extraction (Chen et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.144.pdf