Investigating the Effect of Discourse Connectives on Transformer Surprisal: Language Models Understand Connectives, Even So They Are Surprised

Yan Cong, Emmanuele Chersoni, Yu-Yin Hsu, Philippe Blache


Abstract
As neural language models (NLMs) based on Transformers become increasingly dominant in natural language processing, several studies have proposed analyzing their semantic and pragmatic abilities. In our study, we investigated the effect of discourse connectives on Transformer Surprisal scores, focusing on the English stimuli of an experimental dataset in which the expectations about an event in a discourse fragment could be reversed by a concessive or a contrastive connective. Comparing the Surprisal scores of several NLMs, we found that larger NLMs show patterns similar to human behavioral data when a concessive connective is used, while connective-related effects tend to disappear with a contrastive one. We additionally validated our findings with GPT-Neo on an extended dataset, and the results mostly show a consistent pattern.
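For readers unfamiliar with Surprisal, the quantity compared in the paper is -log P(word | left context) under a causal language model. The following is a minimal sketch of how such scores can be obtained, assuming the Hugging Face transformers library and the public EleutherAI/gpt-neo-125m checkpoint; the model sizes, stimuli, and scoring pipeline actually used by the authors may differ.

```python
# Minimal sketch of Surprisal computation with a causal LM (not the authors' exact pipeline).
# Assumes the Hugging Face transformers library and the public EleutherAI/gpt-neo-125m checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/gpt-neo-125m"  # assumption: any GPT-style checkpoint would work here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def surprisal(context: str, continuation: str) -> float:
    """Return the total surprisal (in bits) of `continuation` given `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, cont_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-probability of each token given its left context.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    token_logp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the continuation region and convert nats to bits.
    cont_logp = token_logp[:, ctx_ids.shape[1] - 1:]
    return (-cont_logp.sum() / torch.log(torch.tensor(2.0))).item()

# Hypothetical example: an expectation-reversing vs. expectation-consistent continuation
# after a concessive/contrastive connective.
context = "The children loved the dessert. However,"
print(surprisal(context, " they asked for a second helping."))
print(surprisal(context, " they refused to eat another bite."))
```

A continuation that the connective makes pragmatically coherent should receive lower Surprisal than one it makes incoherent; the paper's analyses compare such scores across connective conditions and model sizes.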
Anthology ID:
2023.blackboxnlp-1.17
Volume:
Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month:
December
Year:
2023
Address:
Singapore
Editors:
Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi
Venues:
BlackboxNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
222–232
URL:
https://aclanthology.org/2023.blackboxnlp-1.17
DOI:
10.18653/v1/2023.blackboxnlp-1.17
Cite (ACL):
Yan Cong, Emmanuele Chersoni, Yu-Yin Hsu, and Philippe Blache. 2023. Investigating the Effect of Discourse Connectives on Transformer Surprisal: Language Models Understand Connectives, Even So They Are Surprised. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 222–232, Singapore. Association for Computational Linguistics.
Cite (Informal):
Investigating the Effect of Discourse Connectives on Transformer Surprisal: Language Models Understand Connectives, Even So They Are Surprised (Cong et al., BlackboxNLP-WS 2023)
PDF:
https://aclanthology.org/2023.blackboxnlp-1.17.pdf