Assessing Out-of-Domain Language Model Performance from Few Examples

Prasann Singhal, Jarad Forristal, Xi Ye, Greg Durrett


Abstract
While pretrained language models have exhibited impressive generalization capabilities, they still behave unpredictably under certain domain shifts. In particular, a model may learn a reasoning process on in-domain training data that does not hold for out-of-domain test data. We address the task of predicting out-of-domain (OOD) performance in a few-shot fashion: given a few target-domain examples and a set of models with similar training performance, can we understand how these models will perform on OOD test data? We benchmark the performance on this task when looking at model accuracy on the few-shot examples, then investigate how to incorporate analysis of the models’ behavior using feature attributions to better tackle this problem. Specifically, we explore a set of factors designed to reveal model agreement with certain pathological heuristics that may indicate worse generalization capabilities. On textual entailment, paraphrase recognition, and a synthetic classification task, we show that attribution-based factors can help rank relative model OOD performance. However, accuracy on a few-shot test set is a surprisingly strong baseline, particularly when the system designer does not have in-depth prior knowledge about the domain shift.
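
The core baseline the abstract describes — scoring each candidate model on a handful of target-domain examples and using that to rank expected OOD performance — is easy to simulate. Below is a minimal sketch, not the authors' implementation: all data is synthetic, and the names (true_ood_acc, few_shot_acc) and the choice of Spearman correlation as the ranking metric are illustrative assumptions.

```python
# Sketch of the few-shot accuracy baseline for OOD performance prediction.
# Each model is scored on only a few target-domain examples; we then check
# how well that noisy ranking recovers the true OOD accuracy ranking.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_models, n_few_shot = 10, 20  # hypothetical pool size and few-shot budget

# Hypothetical per-model probability of a correct prediction on OOD data.
true_ood_acc = rng.uniform(0.6, 0.9, size=n_models)

# Few-shot accuracy: each model is evaluated on n_few_shot OOD examples,
# so the estimate is noisy but cheap to obtain.
few_shot_acc = np.array(
    [rng.binomial(n_few_shot, p) / n_few_shot for p in true_ood_acc]
)

# Rank correlation between the few-shot estimate and the true OOD accuracy.
rho, _ = spearmanr(few_shot_acc, true_ood_acc)
print(f"Spearman correlation (few-shot vs. true OOD accuracy): {rho:.2f}")
```

In this toy setup, the few-shot ranking usually correlates well with the true ranking, which mirrors the abstract's finding that few-shot accuracy is a surprisingly strong baseline; the paper's attribution-based factors are meant to supplement this signal when the few examples alone are uninformative.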
Anthology ID: 2023.eacl-main.175
Volume: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Month: May
Year: 2023
Address: Dubrovnik, Croatia
Editors: Andreas Vlachos, Isabelle Augenstein
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 2385–2397
URL: https://aclanthology.org/2023.eacl-main.175
DOI: 10.18653/v1/2023.eacl-main.175
Cite (ACL): Prasann Singhal, Jarad Forristal, Xi Ye, and Greg Durrett. 2023. Assessing Out-of-Domain Language Model Performance from Few Examples. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2385–2397, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal): Assessing Out-of-Domain Language Model Performance from Few Examples (Singhal et al., EACL 2023)
PDF: https://aclanthology.org/2023.eacl-main.175.pdf
Video: https://aclanthology.org/2023.eacl-main.175.mp4