What Does BERT actually Learn about Event Coreference? Probing Structural Information in a Fine-Tuned Dutch Language Model

Loic De Langhe, Orphee De Clercq, Veronique Hoste


Abstract
We probe structural and discourse aspects of coreferential relationships in a fine-tuned Dutch BERT event coreference model. Previous research has suggested that no such knowledge is encoded in BERT-based models and that the classification of coreferential relationships ultimately rests on outward lexical similarity. While we show that BERT can encode a (very) limited number of these discourse aspects (thus disproving assumptions in earlier research), we also note that knowledge of many structural features of coreferential relationships is absent from the encodings generated by the fine-tuned BERT model.
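To make the probing setup concrete, the sketch below shows one common way such an analysis can be run: freeze a Dutch BERT checkpoint, encode mention pairs, and train a simple linear probe to predict a structural property of the pair. This is not the authors' code; the model name (GroNLP/bert-base-dutch-cased), the probed feature (same-sentence vs. cross-sentence mentions), and the toy data are illustrative assumptions.

```python
# Minimal probing sketch (illustrative, not the paper's implementation).
# Assumption: a Dutch BERT checkpoint and a "same sentence" structural label.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

MODEL_NAME = "GroNLP/bert-base-dutch-cased"  # assumed checkpoint; the paper uses a fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def pair_embedding(mention_a: str, mention_b: str) -> np.ndarray:
    """Encode a mention pair and use the [CLS] vector as the frozen pair representation."""
    inputs = tokenizer(mention_a, mention_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[0, 0].numpy()  # [CLS] token embedding

# Toy probing data: (mention_a, mention_b, structural_label)
# label 1 = mentions occur in the same sentence, 0 = different sentences (assumed feature).
examples = [
    ("de aanslag", "de explosie", 1),
    ("de verkiezing", "de stemming", 0),
    # real probing would use annotated event coreference data
]

X = np.stack([pair_embedding(a, b) for a, b, _ in examples])
y = np.array([label for _, _, label in examples])

# Linear probe: if a simple classifier recovers the structural feature from the
# frozen embeddings, that feature is (at least linearly) encoded in the model.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy:", accuracy_score(y, probe.predict(X)))
```

If the probe performs no better than a majority-class baseline, the feature is taken to be absent from (or not linearly recoverable in) the fine-tuned encodings, which is the pattern the abstract reports for many structural features.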
Anthology ID:
2023.insights-1.13
Volume:
Proceedings of the Fourth Workshop on Insights from Negative Results in NLP
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Shabnam Tafreshi, Arjun Akula, João Sedoc, Aleksandr Drozd, Anna Rogers, Anna Rumshisky
Venues:
insights | WS
Publisher:
Association for Computational Linguistics
Pages:
103–108
URL:
https://aclanthology.org/2023.insights-1.13
Cite (ACL):
Loic De Langhe, Orphee De Clercq, and Veronique Hoste. 2023. What Does BERT actually Learn about Event Coreference? Probing Structural Information in a Fine-Tuned Dutch Language Model. In Proceedings of the Fourth Workshop on Insights from Negative Results in NLP, pages 103–108, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
What Does BERT actually Learn about Event Coreference? Probing Structural Information in a Fine-Tuned Dutch Language Model (De Langhe et al., insights-WS 2023)
PDF:
https://aclanthology.org/2023.insights-1.13.pdf