Trina Chatterjee
2022
Beyond Counting Datasets: A Survey of Multilingual Dataset Construction and Necessary Resources
Xinyan Yu | Trina Chatterjee | Akari Asai | Junjie Hu | Eunsol Choi
Findings of the Association for Computational Linguistics: EMNLP 2022
While the NLP community is generally aware of resource disparities among languages, we lack research that quantifies the extent and types of such disparity. Prior surveys estimating the availability of resources based on the number of datasets can be misleading as dataset quality varies: many datasets are automatically induced or translated from English data. To provide a more comprehensive picture of language resources, we examine the characteristics of 156 publicly available NLP datasets. We manually annotate how they are created (including input text and label sources, and the tools used to build them) and what they study (the tasks they address and the motivations for their creation). After quantifying the qualitative NLP resource gap across languages, we discuss how to improve data collection in low-resource languages. We survey language-proficient NLP researchers and crowd workers per language, finding that their estimated availability correlates with dataset availability. Through crowdsourcing experiments, we identify strategies for collecting high-quality multilingual data on the Mechanical Turk platform. We conclude by making macro and micro-level suggestions to the NLP community and individual researchers for future multilingual data development.
longhorns at DADC 2022: How many linguists does it take to fool a Question Answering model? A systematic approach to adversarial attacks.
Venelin Kovatchev | Trina Chatterjee | Venkata S Govindarajan | Jifan Chen | Eunsol Choi | Gabriella Chronis | Anubrata Das | Katrin Erk | Matthew Lease | Junyi Jessy Li | Yating Wu | Kyle Mahowald
Proceedings of the First Workshop on Dynamic Adversarial Data Collection
Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability. Here, we describe the approach of the team “longhorns” on Task 1 of the First Workshop on Dynamic Adversarial Data Collection (DADC), which asked teams to manually fool a model on an Extractive Question Answering task. Our team finished first (pending validation), with a model error rate of 62%. We advocate for a systematic, linguistically informed approach to formulating adversarial questions, and we describe the results of our pilot experiments, as well as our official submission.
Co-authors
- Eunsol Choi 2
- Xinyan Yu 1
- Akari Asai 1
- Junjie Hu 1
- Venelin Kovatchev 1