Probing the Category of Verbal Aspect in Transformer Language Models

Anisia Katinskaia, Roman Yangarber


Abstract
We investigate how pretrained language models (PLMs) encode the grammatical category of verbal aspect in Russian. The encoding of aspect in transformer LMs has not previously been studied for any language. A particular challenge is posed by "alternative contexts", where either the perfective or the imperfective aspect is grammatically and semantically suitable. We perform probing using BERT and RoBERTa on alternative and non-alternative contexts. First, we assess the models' performance on aspect prediction via behavioral probing. Next, we examine the models' performance when their contextual representations are substituted with counterfactual representations, via causal probing. These counterfactuals alter the value of "boundedness", a semantic feature that characterizes the action described in the context. Experiments show that BERT and RoBERTa do encode aspect, mostly in their final layers. The counterfactual interventions affect the perfective and the imperfective in opposite ways, which is consistent with the grammar: the perfective is positively affected by adding the meaning of boundedness, and vice versa. A practical implication of our probing results is that fine-tuning only the last layers of BERT on aspect prediction is faster and more effective than fine-tuning the whole model. The model shows high predictive uncertainty about aspect in alternative contexts, which tend to lack explicit cues about the boundedness of the described action.
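The abstract's practical takeaway, that fine-tuning only the last layers of BERT on aspect prediction is faster than (and competitive with) full fine-tuning, can be illustrated with a minimal sketch. This is not the authors' code: the checkpoint name, the number of unfrozen layers, the label coding, and the toy batch below are all illustrative assumptions.

# Minimal sketch (not the paper's implementation) of fine-tuning only the
# final encoder layers of BERT on binary aspect classification.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # assumption: any BERT checkpoint covering Russian
NUM_UNFROZEN_LAYERS = 2                      # assumption: "last layers" per the abstract

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # perfective vs. imperfective
)

# Freeze everything, then unfreeze only the top encoder layers and the
# classification head, so gradients update a small parameter subset.
for param in model.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[-NUM_UNFROZEN_LAYERS:]:
    for param in layer.parameters():
        param.requires_grad = True
for param in model.classifier.parameters():
    param.requires_grad = True

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)

# One illustrative training step on a toy batch; real training data would be
# sentences labeled with the aspect of the target verb.
batch = tokenizer(["Она читала книгу."], return_tensors="pt")
labels = torch.tensor([0])  # assumed coding: 0 = imperfective, 1 = perfective
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()

Because the frozen lower layers need no gradient computation for their parameters, each step is cheaper than full fine-tuning, which is the practical point the abstract makes.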
Anthology ID: 2024.findings-naacl.212
Volume: Findings of the Association for Computational Linguistics: NAACL 2024
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Kevin Duh, Helena Gomez, Steven Bethard
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3347–3366
URL: https://aclanthology.org/2024.findings-naacl.212
DOI: 10.18653/v1/2024.findings-naacl.212
Cite (ACL): Anisia Katinskaia and Roman Yangarber. 2024. Probing the Category of Verbal Aspect in Transformer Language Models. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3347–3366, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Probing the Category of Verbal Aspect in Transformer Language Models (Katinskaia & Yangarber, Findings 2024)
PDF: https://aclanthology.org/2024.findings-naacl.212.pdf