Not to Overfit or Underfit the Source Domains? An Empirical Study of Domain Generalization in Question Answering

Md Arafat Sultan, Avi Sil, Radu Florian


Abstract
Machine learning models are prone to overfitting their training (source) domains, which is commonly believed to be the reason why they falter in novel target domains. Here we examine the contrasting view that multi-source domain generalization (DG) is first and foremost a problem of mitigating source domain underfitting: models not adequately learning the signal already present in their multi-domain training data. Experiments on a reading comprehension DG benchmark show that as a model learns its source domains better—using familiar methods such as knowledge distillation (KD) from a bigger model—its zero-shot out-of-domain utility improves at an even faster pace. Improved source domain learning also demonstrates superior out-of-domain generalization over three popular existing DG approaches that aim to limit overfitting. Our implementation of KD-based domain generalization is available via PrimeQA at: https://ibm.biz/domain-generalization-with-kd.
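As a concrete illustration of the KD recipe the abstract alludes to, below is a minimal sketch of a distillation loss for extractive QA, where a student matches a bigger teacher's tempered start/end position distributions alongside the usual hard-label loss. The function name, temperature, and loss weighting are illustrative assumptions for exposition, not the paper's actual configuration (see the PrimeQA link above for the authors' implementation).

```python
# Minimal sketch of a knowledge-distillation loss for extractive QA.
# Hyperparameters (temperature, alpha) and the function itself are
# hypothetical; they do not reproduce the paper's exact setup.
import torch
import torch.nn.functional as F

def kd_qa_loss(student_start, student_end, teacher_start, teacher_end,
               gold_start, gold_end, temperature=2.0, alpha=0.5):
    """Combine hard-label cross-entropy with soft-label KL divergence.

    student_* / teacher_*: [batch, seq_len] logits over answer start/end positions.
    gold_*: [batch] gold start/end token indices.
    """
    # Hard-label loss on the gold answer span.
    ce = (F.cross_entropy(student_start, gold_start)
          + F.cross_entropy(student_end, gold_end)) / 2

    # Soft-label loss: match the (bigger) teacher's tempered distributions.
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd = 0.0
    for s_logits, t_logits in ((student_start, teacher_start),
                               (student_end, teacher_end)):
        kd = kd + F.kl_div(
            F.log_softmax(s_logits / temperature, dim=-1),
            F.softmax(t_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * temperature ** 2
    kd = kd / 2

    return alpha * ce + (1 - alpha) * kd
```

In this sketch, higher temperatures soften the teacher's span distributions so the student also learns from plausible-but-wrong spans, which is the mechanism by which KD can improve source domain learning beyond what the hard labels alone provide.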
Anthology ID:
2022.emnlp-main.247
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3752–3761
URL:
https://aclanthology.org/2022.emnlp-main.247
DOI:
10.18653/v1/2022.emnlp-main.247
Cite (ACL):
Md Arafat Sultan, Avi Sil, and Radu Florian. 2022. Not to Overfit or Underfit the Source Domains? An Empirical Study of Domain Generalization in Question Answering. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3752–3761, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Not to Overfit or Underfit the Source Domains? An Empirical Study of Domain Generalization in Question Answering (Sultan et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.247.pdf