On the Limits of Minimal Pairs in Contrastive Evaluation

Jannis Vamvas, Rico Sennrich


Abstract
Minimal sentence pairs are frequently used to analyze the behavior of language models. It is often assumed that model behavior on contrastive pairs is predictive of model behavior at large. We argue that two conditions are necessary for this assumption to hold: First, a tested hypothesis should be well-motivated, since experiments show that contrastive evaluation can lead to false positives. Second, test data should be chosen so as to minimize distributional discrepancy between evaluation time and deployment time. For a good approximation of deployment-time decoding, we recommend that minimal pairs be created based on machine-generated text, as opposed to human-written references. We present a contrastive evaluation suite for English–German MT that implements this recommendation.
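The contrastive evaluation described in the abstract scores a model on pairs consisting of a correct sentence and a minimally perturbed incorrect variant; the model passes an item when it assigns the correct member a higher score. A minimal sketch of this protocol, where the toy unigram scorer and the example pairs are illustrative assumptions rather than the paper's models or data:

```python
import math

def sequence_logprob(tokens, unigram_probs):
    # Toy stand-in for a language-model or MT-model score:
    # sum of log unigram probabilities, with a small floor
    # for tokens the model has never seen.
    return sum(math.log(unigram_probs.get(t, 1e-6)) for t in tokens)

def contrastive_accuracy(pairs, score):
    # Each pair is (correct_sentence, contrast_sentence); an item
    # counts as passed when the correct member scores higher.
    passed = sum(score(good) > score(bad) for good, bad in pairs)
    return passed / len(pairs)

# Hypothetical unigram probabilities and minimal pairs (German
# article-noun agreement), purely for illustration.
unigram_probs = {"der": 0.05, "die": 0.06, "hund": 0.01, "katze": 0.01,
                 "bellt": 0.004, "miaut": 0.003}
pairs = [
    (["der", "hund", "bellt"], ["die", "hund", "bellt"]),
    (["die", "katze", "miaut"], ["der", "katze", "miaut"]),
]
score = lambda sent: sequence_logprob(sent, unigram_probs)
print(contrastive_accuracy(pairs, score))  # → 0.5
```

The toy scorer ignores context entirely, so it fails the first pair (it simply prefers the more frequent article), which illustrates the paper's point that a high or low contrastive score reflects the model's preference only on the specific distribution the pairs are drawn from.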
Anthology ID:
2021.blackboxnlp-1.5
Volume:
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venues:
BlackboxNLP | EMNLP
Publisher:
Association for Computational Linguistics
Pages:
58–68
URL:
https://aclanthology.org/2021.blackboxnlp-1.5
PDF:
https://aclanthology.org/2021.blackboxnlp-1.5.pdf
Code
 zurichnlp/distil-lingeval