How does Punctuation Affect Neural Models in Natural Language Inference

Adam Ek, Jean-Philippe Bernardy, Stergios Chatzikyriakidis


Abstract
Natural Language Inference models have reached almost human-level performance, but their generalisation capabilities have not yet been fully characterized. In particular, sensitivity to small changes in the data is a current area of investigation. In this paper, we focus on the effect of punctuation on such models. Our findings can be broadly summarized as follows: (1) irrelevant changes in punctuation are correctly ignored by recent transformer models (BERT), while older RNN-based models are sensitive to them; (2) all models, both transformer- and RNN-based, are incapable of taking into account small but relevant changes in punctuation.
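The probe described in the abstract, checking whether a model's NLI prediction changes when punctuation is perturbed, can be sketched as follows. This is an illustrative Python sketch, not the authors' code: the strip-all-punctuation perturbation, the `predict_label` interface, and the `toy_predict` stand-in are assumptions for demonstration; any NLI classifier (e.g. a BERT model fine-tuned on SNLI/MultiNLI) could be plugged in as `predict_label`.

```python
# Illustrative sketch (not the authors' code): measuring an NLI model's
# sensitivity to punctuation by perturbing premise/hypothesis pairs.
import string

def strip_punctuation(text: str) -> str:
    """Remove all punctuation characters -- an 'irrelevant' perturbation
    when the punctuation does not change the sentence's meaning."""
    return text.translate(str.maketrans("", "", string.punctuation))

def punctuation_sensitivity(predict_label, pairs):
    """Fraction of premise/hypothesis pairs whose predicted NLI label
    changes when punctuation is stripped from both sentences."""
    changed = 0
    for premise, hypothesis in pairs:
        original = predict_label(premise, hypothesis)
        perturbed = predict_label(strip_punctuation(premise),
                                  strip_punctuation(hypothesis))
        changed += original != perturbed
    return changed / len(pairs)

if __name__ == "__main__":
    # Toy stand-in for a real NLI model (an assumption for this sketch).
    def toy_predict(premise, hypothesis):
        return "entailment" if hypothesis.rstrip(".") in premise else "neutral"

    pairs = [("A man is sleeping.", "A man is sleeping"),
             ("A dog runs, barking.", "A cat sleeps.")]
    print(punctuation_sensitivity(toy_predict, pairs))  # 0.0 for the toy model
```

A robust model should score near zero on such irrelevant perturbations; the paper's second finding concerns the converse case, where a punctuation change does alter meaning and the prediction should change but does not.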
Anthology ID:
2020.pam-1.15
Volume:
Proceedings of the Probability and Meaning Conference (PaM 2020)
Month:
June
Year:
2020
Address:
Gothenburg
Editors:
Christine Howes, Stergios Chatzikyriakidis, Adam Ek, Vidya Somashekarappa
Venue:
PaM
Publisher:
Association for Computational Linguistics
Pages:
109–116
URL:
https://aclanthology.org/2020.pam-1.15
Cite (ACL):
Adam Ek, Jean-Philippe Bernardy, and Stergios Chatzikyriakidis. 2020. How does Punctuation Affect Neural Models in Natural Language Inference. In Proceedings of the Probability and Meaning Conference (PaM 2020), pages 109–116, Gothenburg. Association for Computational Linguistics.
Cite (Informal):
How does Punctuation Affect Neural Models in Natural Language Inference (Ek et al., PaM 2020)
PDF:
https://aclanthology.org/2020.pam-1.15.pdf
Data
MultiNLI
SNLI