Deep Active Learning for Sequence Labeling Based on Diversity and Uncertainty in Gradient

Yekyung Kim


Abstract
Recently, several studies have investigated active learning (AL) for natural language processing tasks to alleviate data dependency. However, for query selection, most of these studies mainly rely on uncertainty-based sampling, which generally does not exploit the structural information of the unlabeled data. This leads to a sampling bias in the batch active learning setting, which selects several samples at once. In this work, we demonstrate that the amount of labeled training data can be reduced using active learning when it incorporates both uncertainty and diversity in the sequence labeling task. We examine the effects of our sequence-based approach, which selects weighted diverse samples in the gradient embedding space, across multiple tasks, datasets, and models, and show that it consistently outperforms classic uncertainty-based sampling and diversity-based sampling.
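The abstract describes batch query selection that combines uncertainty and diversity in a gradient-embedding space. Below is a minimal sketch of that general idea (in the spirit of BADGE-style sampling adapted to sequences), not the paper's exact implementation; all function names, the sum-over-tokens aggregation, and the k-means++-style selection are illustrative assumptions.

```python
import numpy as np

def gradient_embedding(token_probs, token_hidden):
    """Gradient of the token-level cross-entropy loss w.r.t. the output layer,
    using the model's own prediction as a pseudo-label.

    token_probs:  (T, C) softmax outputs for one unlabeled sequence
    token_hidden: (T, H) final hidden states feeding the output layer
    Returns a (C * H,) embedding; the sequence is summarized by summing
    token-level gradients (one simple aggregation choice, assumed here).
    """
    pseudo = token_probs.argmax(axis=1)              # predicted labels
    grad_logits = token_probs.copy()                 # dL/dlogits = p - onehot(y)
    grad_logits[np.arange(len(pseudo)), pseudo] -= 1.0
    # Per-token outer products, summed over the sequence -> (C, H), flattened.
    return np.einsum("tc,th->ch", grad_logits, token_hidden).ravel()

def kmeanspp_select(embeddings, k, seed=None):
    """k-means++-style seeding: iteratively pick points far (in squared
    distance) from everything already selected, which favors embeddings that
    are both large in norm (uncertain) and mutually diverse."""
    rng = np.random.default_rng(seed)
    n = len(embeddings)
    first = int(np.argmax(np.linalg.norm(embeddings, axis=1)))  # most uncertain
    selected = [first]
    d2 = np.sum((embeddings - embeddings[first]) ** 2, axis=1)
    while len(selected) < k:
        nxt = int(rng.choice(n, p=d2 / d2.sum()))
        selected.append(nxt)
        d2 = np.minimum(d2, np.sum((embeddings - embeddings[nxt]) ** 2, axis=1))
    return selected

# Toy usage: 10 unlabeled sequences, 5 tokens each, 4 labels, hidden size 8.
rng = np.random.default_rng(0)
pool = [(rng.dirichlet(np.ones(4), size=5), rng.normal(size=(5, 8)))
        for _ in range(10)]
embs = np.stack([gradient_embedding(p, h) for p, h in pool])
print("queried sequence indices:", kmeanspp_select(embs, k=3, seed=0))
```

The key design point sketched here is that the gradient embedding couples uncertainty (gradient magnitude) with representation information, so a farthest-point/k-means++ selection over these embeddings yields a batch that is both informative and diverse.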
Anthology ID:
2020.lifelongnlp-1.1
Volume:
Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems
Month:
December
Year:
2020
Address:
Suzhou, China
Editors:
William M. Campbell, Alex Waibel, Dilek Hakkani-Tur, Timothy J. Hazen, Kevin Kilgour, Eunah Cho, Varun Kumar, Hadrien Glaude
Venue:
lifelongnlp
Publisher:
Association for Computational Linguistics
Pages:
1–8
URL:
https://aclanthology.org/2020.lifelongnlp-1.1
Cite (ACL):
Yekyung Kim. 2020. Deep Active Learning for Sequence Labeling Based on Diversity and Uncertainty in Gradient. In Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems, pages 1–8, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Deep Active Learning for Sequence Labeling Based on Diversity and Uncertainty in Gradient (Kim, lifelongnlp 2020)
PDF:
https://aclanthology.org/2020.lifelongnlp-1.1.pdf
Data
ATIS, CoNLL 2003