Probing for Understanding of English Verb Classes and Alternations in Large Pre-trained Language Models

David Yi, James Bruno, Jiayu Han, Peter Zukerman, Shane Steinert-Threlkeld


Abstract
We investigate the extent to which verb alternation classes, as described by Levin (1993), are encoded in the embeddings of Large Pre-trained Language Models (PLMs) such as BERT, RoBERTa, ELECTRA, and DeBERTa using selectively constructed diagnostic classifiers for word and sentence-level prediction tasks. We follow and expand upon the experiments of Kann et al. (2019), which aim to probe whether static embeddings encode frame-selectional properties of verbs. At both the word and sentence level, we find that contextual embeddings from PLMs not only outperform non-contextual embeddings, but achieve astonishingly high accuracies on tasks across most alternation classes. Additionally, we find evidence that the middle-to-upper layers of PLMs achieve better performance on average than the lower layers across all probing tasks.
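The diagnostic-classifier ("probing") setup described in the abstract can be illustrated with a minimal sketch: a linear classifier is trained on frozen embeddings to predict a binary property (here, membership in a verb alternation class), and its accuracy is read as evidence of how linearly decodable that property is. This is not the authors' code; the embeddings below are synthetic stand-ins, and the probe is a plain logistic regression written from scratch for self-containedness.

```python
import numpy as np

def train_probe(X, y, lr=0.1, epochs=200):
    """Train a logistic-regression diagnostic probe on frozen embeddings.

    X: (n_examples, dim) array of embeddings; y: (n_examples,) 0/1 labels.
    The embeddings are never updated -- only the probe's weights are learned.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad_w = X.T @ (p - y) / len(y)           # gradient of log loss
        grad_b = float(np.mean(p - y))
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def probe_accuracy(X, y, w, b):
    preds = (X @ w + b) > 0
    return float(np.mean(preds == y))

# Toy stand-in for two verb classes: embeddings drawn from Gaussians
# shifted in opposite directions (a linearly decodable distinction).
rng = np.random.default_rng(1)
X0 = rng.normal(loc=-1.0, size=(100, 8))
X1 = rng.normal(loc=+1.0, size=(100, 8))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b = train_probe(X, y)
acc = probe_accuracy(X, y, w, b)
```

In the paper's actual setting, `X` would be word- or sentence-level representations extracted from a specific layer of a PLM such as BERT or RoBERTa, and comparing probe accuracy across layers yields the kind of layer-wise finding the abstract reports.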
Anthology ID:
2022.blackboxnlp-1.12
Volume:
Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates (Hybrid)
Editors:
Jasmijn Bastings, Yonatan Belinkov, Yanai Elazar, Dieuwke Hupkes, Naomi Saphra, Sarah Wiegreffe
Venue:
BlackboxNLP
Publisher:
Association for Computational Linguistics
Pages:
142–152
URL:
https://aclanthology.org/2022.blackboxnlp-1.12
DOI:
10.18653/v1/2022.blackboxnlp-1.12
Cite (ACL):
David Yi, James Bruno, Jiayu Han, Peter Zukerman, and Shane Steinert-Threlkeld. 2022. Probing for Understanding of English Verb Classes and Alternations in Large Pre-trained Language Models. In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 142–152, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Probing for Understanding of English Verb Classes and Alternations in Large Pre-trained Language Models (Yi et al., BlackboxNLP 2022)
PDF:
https://aclanthology.org/2022.blackboxnlp-1.12.pdf