Active learning and negative evidence for language identification

Thomas Lippincott, Ben Van Durme


Abstract
Language identification (LID), the task of determining the natural language of a given text, is an essential first step in most NLP pipelines. While generally a solved problem for documents of sufficient length and languages with ample training data, the proliferation of microblogs and other social media has made it increasingly common to encounter use-cases that *don’t* satisfy these conditions. In these situations, the fundamental difficulty is the lack of, and cost of gathering, labeled data: unlike some annotation tasks, no single “expert” can quickly and reliably identify more than a handful of languages. This leads to a natural question: can we gain useful information when annotators are only able to *rule out* languages for a given document, rather than supply a positive label? What are the optimal choices for gathering and representing such *negative evidence* as a model is trained? In this paper, we demonstrate that using negative evidence can improve the performance of a simple neural LID model. This improvement is sensitive to the policies for how the evidence is represented in the loss function and for deciding which annotators to employ given the instance and model state. We consider simple policies and report experimental results that indicate the optimal choices for this task. We conclude with a discussion of future work to determine if and how the results generalize to other classification tasks.
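To make the notion of negative evidence in a loss function concrete, here is a minimal sketch of one plausible formulation (an assumption for illustration; the paper's actual loss is not reproduced here): a positive label incurs standard cross-entropy, while an annotator who rules out a set of languages incurs a penalty on the total probability mass the model places on those ruled-out classes.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the candidate languages."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def positive_loss(logits, label):
    """Standard cross-entropy when an annotator supplies the true language."""
    p = softmax(logits)
    return -np.log(p[label])

def negative_loss(logits, ruled_out):
    """One way to score negative evidence (hypothetical formulation):
    penalize the probability mass assigned to languages the annotator
    has ruled out, i.e. maximize the mass left on the remaining candidates."""
    p = softmax(logits)
    return -np.log(1.0 - p[list(ruled_out)].sum())

# Toy example: 4 candidate languages; one annotator labels the document
# as language 0, another can only rule out languages 1 and 3.
logits = np.array([2.0, 0.5, 1.0, -1.0])
print(positive_loss(logits, 0))
print(negative_loss(logits, {1, 3}))
```

Note that the negative-evidence term is weaker supervision than a positive label: it shrinks the feasible label set without committing to a single class, which is precisely what makes it cheap to collect from non-expert annotators.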
Anthology ID:
2021.dash-1.8
Volume:
Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances
Month:
June
Year:
2021
Address:
Online
Venues:
DaSH | NAACL
Publisher:
Association for Computational Linguistics
Pages:
47–51
URL:
https://aclanthology.org/2021.dash-1.8
DOI:
10.18653/v1/2021.dash-1.8
Cite (ACL):
Thomas Lippincott and Ben Van Durme. 2021. Active learning and negative evidence for language identification. In Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances, pages 47–51, Online. Association for Computational Linguistics.
Cite (Informal):
Active learning and negative evidence for language identification (Lippincott & Van Durme, DaSH 2021)
PDF:
https://aclanthology.org/2021.dash-1.8.pdf