Thomas Lippincott


Active learning and negative evidence for language identification
Thomas Lippincott | Ben Van Durme
Proceedings of the Second Workshop on Data Science with Human in the Loop: Language Advances

Language identification (LID), the task of determining the natural language of a given text, is an essential first step in most NLP pipelines. While generally a solved problem for documents of sufficient length and languages with ample training data, the proliferation of microblogs and other social media has made it increasingly common to encounter use-cases that *don’t* satisfy these conditions. In these situations, the fundamental difficulty is the lack of, and cost of gathering, labeled data: unlike some annotation tasks, no single “expert” can quickly and reliably identify more than a handful of languages. This leads to a natural question: can we gain useful information when annotators are only able to *rule out* languages for a given document, rather than supply a positive label? What are the optimal choices for gathering and representing such *negative evidence* as a model is trained? In this paper, we demonstrate that using negative evidence can improve the performance of a simple neural LID model. This improvement is sensitive to the policies for representing the evidence in the loss function and for deciding which annotators to employ given the instance and model state. We consider simple policies and report experimental results that indicate the optimal choices for this task. We conclude with a discussion of future work to determine if and how the results generalize to other classification tasks.
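The abstract does not specify how the negative evidence enters the loss, but one plausible formulation (a sketch, not the paper's actual method) is to penalize the probability mass a classifier assigns to the languages an annotator has ruled out. The function name and the choice of penalty below are illustrative assumptions:

```python
import numpy as np

def softmax(logits):
    """Stable softmax over a 1-D array of class logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def negative_evidence_loss(logits, ruled_out, eps=1e-12):
    """Illustrative loss term for a document with no positive label,
    only a set of ruled-out language indices: penalize the total
    probability mass the model places on the excluded languages,
    via -log(1 - mass). Zero excluded mass gives zero loss; mass
    approaching 1 gives an unbounded penalty."""
    p = softmax(logits)
    excluded_mass = p[sorted(ruled_out)].sum()
    return -np.log(1.0 - excluded_mass + eps)

# Usage: 5 candidate languages; an annotator ruled out indices 0 and 3.
logits = np.array([2.0, 0.5, 1.0, -1.0, 0.0])
loss = negative_evidence_loss(logits, {0, 3})
```

Under this formulation the gradient pushes probability mass off the excluded languages and redistributes it over the remaining candidates, which is one way partial annotations could still drive learning.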


Unsupervised Morphology-Based Vocabulary Expansion
Mohammad Sadegh Rasooli | Thomas Lippincott | Nizar Habash | Owen Rambow
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)


Learning Syntactic Verb Frames using Graphical Models
Thomas Lippincott | Anna Korhonen | Diarmuid Ó Séaghdha
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)