Improving QA Generalization by Concurrent Modeling of Multiple Biases

Mingzhu Wu, Nafise Sadat Moosavi, Andreas Rücklé, Iryna Gurevych


Abstract
Existing NLP datasets contain various biases that models can easily exploit to achieve high performance on the corresponding evaluation sets. However, focusing on dataset-specific biases limits a model's ability to learn more generalizable knowledge about the task from more general data patterns. In this paper, we investigate the impact of debiasing methods on generalization and propose a general framework for improving performance on both in-domain and out-of-domain datasets by concurrently modeling multiple biases in the training data. Our framework weights each example based on the biases it contains and the strength of those biases in the training data. It then uses these weights in the training objective so that the model relies less on examples with high bias weights. We extensively evaluate our framework on extractive question answering with training data from various domains containing multiple biases of different strengths. We perform the evaluations in two settings, in which the model is trained on a single domain or on multiple domains simultaneously, and show its effectiveness in both settings compared to state-of-the-art debiasing methods.
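The core idea described above — down-weighting training examples that a bias can already solve, in proportion to how strong that bias is, and combining the effect of several biases at once — can be sketched as follows. This is an illustrative NumPy sketch under our own assumptions, not the authors' released implementation (see the linked repository for that); all function names and the product-based combination rule are hypothetical choices for exposition.

```python
import numpy as np

def bias_weights(bias_probs, strength):
    """Per-example weight for a single bias.

    bias_probs: probability a bias-only predictor assigns to the gold
        answer for each training example (high = example is solvable
        by the bias alone).
    strength: scalar in [0, 1] estimating how predictive this bias is
        on the whole training set.

    Examples the bias gets right are down-weighted, scaled by how
    strong the bias is overall.
    """
    return 1.0 - strength * np.asarray(bias_probs, dtype=float)

def combine_weights(per_bias_weights):
    """Combine weights from several biases concurrently.

    One simple choice (an assumption, not necessarily the paper's):
    take the product across biases, so an example flagged by several
    strong biases is down-weighted the most.
    """
    return np.prod(np.asarray(per_bias_weights, dtype=float), axis=0)

def weighted_nll(gold_probs, weights):
    """Bias-weighted negative log-likelihood of the gold answers.

    gold_probs: main model's probability of the gold answer span.
    weights: per-example weights from combine_weights.
    """
    gold_probs = np.asarray(gold_probs, dtype=float)
    return float(np.mean(weights * -np.log(gold_probs)))
```

For example, an example that a strong position bias answers with probability 0.9 gets a much smaller weight than one the bias answers with probability 0.1, so the main model's loss is dominated by examples that cannot be solved by the bias alone.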
Anthology ID:
2020.findings-emnlp.74
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | Findings
Publisher:
Association for Computational Linguistics
Pages:
839–853
URL:
https://aclanthology.org/2020.findings-emnlp.74
DOI:
10.18653/v1/2020.findings-emnlp.74
PDF:
https://aclanthology.org/2020.findings-emnlp.74.pdf
Video:
 https://slideslive.com/38940113
Code
 UKPLab/qa-generalization-concurrent-debiasing
Data
DROP | DuoRC | HotpotQA | Natural Questions | NewsQA | RACE | SQuAD | TriviaQA