Jana Thompson
2022
QuALITY: Question Answering with Long Input Texts, Yes!
Richard Yuanzhe Pang | Alicia Parrish | Nitish Joshi | Nikita Nangia | Jason Phang | Angelica Chen | Vishakh Padmakumar | Johnny Ma | Jana Thompson | He He | Samuel Bowman
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
To enable building and testing models on long-document comprehension, we introduce QuALITY, a multiple-choice QA dataset with context passages in English that have an average length of about 5,000 tokens, much longer than typical current models can process. Unlike in prior work with passages, our questions are written and validated by contributors who have read the entire passage, rather than relying on summaries or excerpts. In addition, only half of the questions are answerable by annotators working under tight time constraints, indicating that skimming and simple search are not enough to consistently perform well. Our baseline models perform poorly on this task (55.4%) and significantly lag behind human performance (93.5%).
BBQ: A hand-built bias benchmark for question answering
Alicia Parrish | Angelica Chen | Nikita Nangia | Vishakh Padmakumar | Jason Phang | Jana Thompson | Phu Mon Htut | Samuel Bowman
Findings of the Association for Computational Linguistics: ACL 2022
It is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine social dimensions relevant for U.S. English-speaking contexts. Our task evaluates model responses at two levels: (i) given an under-informative context, we test how strongly responses reflect social biases, and (ii) given an adequately informative context, we test whether the model’s biases override a correct answer choice. We find that models often rely on stereotypes when the context is under-informative, meaning the model’s outputs consistently reproduce harmful biases in this setting. Though models are more accurate when the context provides an informative answer, they still rely on stereotypes and average up to 3.4 percentage points higher accuracy when the correct answer aligns with a social bias than when it conflicts, with this difference widening to over 5 points on examples targeting gender for most models tested.