Aditya Singhal
2023
Common Law Annotations: Investigating the Stability of Dialog System Output Annotations
Seunggun Lee | Alexandra DeLucia | Nikita Nangia | Praneeth Ganedi | Ryan Guan | Rubing Li | Britney Ngaw | Aditya Singhal | Shalaka Vaidya | Zijun Yuan | Lining Zhang | João Sedoc
Findings of the Association for Computational Linguistics: ACL 2023
Metrics for Inter-Annotator Agreement (IAA), like Cohen’s Kappa, are crucial for validating annotated datasets. Although high agreement is often used to show the reliability of annotation procedures, it is insufficient to ensure validity or reproducibility. While researchers are encouraged to increase annotator agreement, this can lead to specific and tailored annotation guidelines. We hypothesize that this may result in diverging annotations from different groups. To study this, we first propose the Lee et al. Protocol (LEAP), a standardized and codified annotation protocol. LEAP strictly enforces transparency in the annotation process, which ensures reproducibility of annotation guidelines. Using LEAP to annotate a dialog dataset, we empirically show that while research groups may create reliable guidelines by raising agreement, this can cause divergent annotations across different research groups, thus questioning the validity of the annotations. Therefore, we caution NLP researchers against using reliability as a proxy for reproducibility and validity.
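Cohen’s Kappa, the IAA metric named in the abstract, corrects observed agreement for agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch of the computation for two annotators follows; the example labels are illustrative and not drawn from the paper’s dataset:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's Kappa for two annotators labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each annotator's label marginals.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: probability the two annotators coincide if each
    # labels independently according to their own label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((freq_a[k] / n) * (freq_b[k] / n) for k in freq_a.keys() & freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Two annotators rating dialog outputs; agreement is 0.75 observed,
# 0.5 by chance, giving kappa = 0.5.
print(cohens_kappa(["good", "bad", "good", "good"],
                   ["good", "bad", "bad", "good"]))
```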
2022
Task Transfer and Domain Adaptation for Zero-Shot Question Answering
Xiang Pan | Alex Sheng | David Shimshoni | Aditya Singhal | Sara Rosenthal | Avirup Sil
Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing
Pretrained language models have shown success in various areas of natural language processing, including reading comprehension tasks. However, when applying machine learning methods to new domains, labeled data may not always be available. To address this, we use supervised pretraining on source-domain data to reduce sample complexity on domain-specific downstream tasks. We evaluate zero-shot performance on domain-specific reading comprehension tasks by combining task transfer with domain adaptation to fine-tune a pretrained model with no labeled data from the target task. Our approach outperforms Domain-Adaptive Pretraining on downstream domain-specific reading comprehension tasks in 3 out of 4 domains.
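The two-stage recipe the abstract describes, supervised pretraining on labeled source-domain QA followed by zero-shot application to a target domain, can be sketched with the Hugging Face transformers pipeline. This is a hedged illustration, not the paper’s implementation: the model checkpoint (a RoBERTa fine-tuned on SQuAD 2.0) stands in for the task-transfer stage, and the passage and question are invented examples:

```python
from transformers import pipeline

# Stage 1 (task transfer): fine-tune a pretrained LM on labeled
# source-domain QA data. Here we load a checkpoint already
# fine-tuned on SQuAD 2.0 as a stand-in for that step.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

# Stage 2 (zero-shot evaluation): apply the model to a
# domain-specific passage with no labeled target-task data.
context = (
    "Domain-adaptive pretraining continues masked language modeling "
    "on unlabeled in-domain text before downstream fine-tuning."
)
result = qa(question="What does domain-adaptive pretraining continue?",
            context=context)
print(result["answer"], result["score"])
```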