Multi-View Active Learning for Short Text Classification in User-Generated Data

Payam Karisani, Negin Karisani, Li Xiong


Abstract
Mining user-generated data often suffers from a lack of labeled data, short document lengths, and informal user language. In this paper, we propose a novel active learning model to overcome these obstacles in tasks centered on query phrases, e.g., detecting positive reports of natural disasters. Our model has three novelties: 1) It is the first approach to employ multi-view active learning in this domain. 2) It uses the Parzen-Rosenblatt window method to integrate a representativeness measure into multi-view active learning. 3) It employs a query-by-committee strategy, based on the agreement between predictors, to cope with the typically noisy language of documents in this domain. We evaluate our model on four publicly available Twitter datasets with distinctly different applications, and we compare it with a wide range of baselines, including those using multiple classifiers. The experiments show that our model is highly consistent and outperforms existing models.
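
The abstract names two building blocks, a Parzen-Rosenblatt (kernel density) representativeness measure and a query-by-committee rule driven by the agreement between per-view predictors. The sketch below illustrates, in generic terms, how such scores can be combined to rank unlabeled documents; the two-classifier committee, the bandwidth, the product-based selection rule, and all variable names are assumptions made for this example, not the paper's exact algorithm.

```python
# Illustrative sketch (not the paper's exact method): combine a
# Parzen-Rosenblatt (kernel density) representativeness score with a
# two-view committee-disagreement score to rank unlabeled documents.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Toy data: two feature "views" of the same documents (e.g., lexical
# vs. contextual features), a small labeled seed set, and an unlabeled pool.
n_labeled, n_pool, d = 40, 500, 16
Xl_v1, Xl_v2 = rng.normal(size=(n_labeled, d)), rng.normal(size=(n_labeled, d))
y_l = rng.integers(0, 2, size=n_labeled)
Xp_v1, Xp_v2 = rng.normal(size=(n_pool, d)), rng.normal(size=(n_pool, d))

# One classifier per view forms the committee.
clf_v1 = LogisticRegression(max_iter=1000).fit(Xl_v1, y_l)
clf_v2 = LogisticRegression(max_iter=1000).fit(Xl_v2, y_l)
p1 = clf_v1.predict_proba(Xp_v1)[:, 1]
p2 = clf_v2.predict_proba(Xp_v2)[:, 1]

# Committee disagreement: how much the two views differ on each pool document.
disagreement = np.abs(p1 - p2)

# Parzen-Rosenblatt (KDE) representativeness: density of each pool document
# within the pool, estimated on the concatenated views.
X_pool = np.hstack([Xp_v1, Xp_v2])
kde = KernelDensity(kernel="gaussian", bandwidth=1.0).fit(X_pool)
representativeness = np.exp(kde.score_samples(X_pool))

# Rank by the product of the two scores and query the top-k documents.
k = 10
scores = disagreement * representativeness
query_idx = np.argsort(-scores)[:k]
print("Documents to send to the annotator:", query_idx)
```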
Anthology ID:
2022.findings-emnlp.481
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6441–6453
URL:
https://aclanthology.org/2022.findings-emnlp.481
DOI:
10.18653/v1/2022.findings-emnlp.481
Cite (ACL):
Payam Karisani, Negin Karisani, and Li Xiong. 2022. Multi-View Active Learning for Short Text Classification in User-Generated Data. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6441–6453, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Multi-View Active Learning for Short Text Classification in User-Generated Data (Karisani et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.481.pdf