On the Limits of Minimal Pairs in Contrastive Evaluation

Jannis Vamvas, Rico Sennrich


Abstract
Minimal sentence pairs are frequently used to analyze the behavior of language models. It is often assumed that model behavior on contrastive pairs is predictive of model behavior at large. We argue that two conditions are necessary for this assumption to hold: First, a tested hypothesis should be well-motivated, since experiments show that contrastive evaluation can lead to false positives. Second, test data should be chosen so as to minimize distributional discrepancy between evaluation time and deployment time. For a good approximation of deployment-time decoding, we recommend that minimal pairs be created based on machine-generated text, as opposed to human-written references. We present a contrastive evaluation suite for English–German MT that implements this recommendation.
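As background, contrastive evaluation typically checks whether a model assigns a higher score to the correct member of a minimal pair than to the minimally perturbed incorrect one. The sketch below illustrates this setup; the function names and the toy scorer are illustrative stand-ins, not from the paper, and a real setup would use an MT or language model's log-probability.

```python
# Minimal sketch of contrastive evaluation: a model "passes" a minimal
# pair if it assigns a higher score to the correct variant.

def evaluate_pairs(score, pairs):
    """score: callable mapping a sentence to a (pseudo) log-probability.
    pairs: list of (correct, incorrect) sentence pairs.
    Returns the fraction of pairs where the correct variant scores higher.
    """
    wins = sum(score(good) > score(bad) for good, bad in pairs)
    return wins / len(pairs)

# Toy stand-in scorer for subject-verb agreement pairs: prefers "has"
# over "have". A real evaluation would replace this with model scores.
def toy_score(sentence):
    return 1.0 if " has " in f" {sentence} " else 0.0

pairs = [
    ("The dog has a bone.", "The dog have a bone."),
    ("She has left early.", "She have left early."),
]
print(evaluate_pairs(toy_score, pairs))  # → 1.0
```

The paper's point is that a high accuracy under such a scheme is only meaningful if the scored sentences resemble what the model actually produces at deployment time.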
Anthology ID:
2021.blackboxnlp-1.5
Volume:
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
58–68
URL:
https://aclanthology.org/2021.blackboxnlp-1.5
DOI:
10.18653/v1/2021.blackboxnlp-1.5
Cite (ACL):
Jannis Vamvas and Rico Sennrich. 2021. On the Limits of Minimal Pairs in Contrastive Evaluation. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 58–68, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
On the Limits of Minimal Pairs in Contrastive Evaluation (Vamvas & Sennrich, BlackboxNLP 2021)
PDF:
https://aclanthology.org/2021.blackboxnlp-1.5.pdf
Video:
https://aclanthology.org/2021.blackboxnlp-1.5.mp4
Code:
zurichnlp/distil-lingeval