Implicit representations of event properties within contextual language models: Searching for “causativity neurons”

Esther Seyffarth, Younes Samih, Laura Kallmeyer, Hassan Sajjad


Abstract
This paper addresses the question of the extent to which neural contextual language models such as BERT implicitly represent complex semantic properties. More concretely, the paper shows that the neuron activations obtained from processing an English sentence provide discriminative features from which a simple linear classifier can predict the (non-)causativity of the event denoted by the verb. A layer-wise analysis reveals that the relevant properties are mostly learned in the higher layers. Moreover, further experiments show that approximately 10% of the neuron activations already suffice to predict causativity with relatively high accuracy.
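
As a rough illustration of the probing setup described in the abstract, the sketch below extracts per-layer BERT activations for the verb token and trains a simple linear classifier on them, then keeps only the top-weighted neurons. This is a minimal sketch, not the authors' pipeline: the model name, the toy sentences, the verb_activations helper, and the top-10% selection heuristic are all illustrative assumptions; the repository linked under "Code" contains the actual implementation.

import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def verb_activations(sentence, verb_word_index):
    """Activations for the verb's first subword: (num_layers + 1, hidden_size)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    # word_ids maps each subword position to its word index (None for specials)
    tok_pos = enc.word_ids(0).index(verb_word_index)
    return np.stack([h[0, tok_pos].numpy() for h in out.hidden_states])

# Hypothetical toy data: (sentence, word index of the verb, 1 = causative use)
data = [
    ("The sun melted the snow.", 2, 1),
    ("The snow melted.", 2, 0),
    ("She broke the window.", 1, 1),
    ("The window broke.", 2, 0),
]
feats = np.stack([verb_activations(s, i) for s, i, _ in data])
labels = np.array([y for _, _, y in data])

# Layer-wise probing: fit one linear classifier per layer and compare.
for layer in range(feats.shape[1]):
    X = feats[:, layer, :]
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer}: train accuracy = {clf.score(X, labels):.2f}")

# Neuron selection: keep the ~10% of neurons with the largest probe weights
# (a crude stand-in for the paper's neuron-ranking analysis).
top_k = int(0.10 * X.shape[1])
top_neurons = np.argsort(-np.abs(clf.coef_[0]))[:top_k]
clf_small = LogisticRegression(max_iter=1000).fit(X[:, top_neurons], labels)
print(f"top {top_k} neurons: train accuracy = {clf_small.score(X[:, top_neurons], labels):.2f}")

In the actual experiments one would of course evaluate on held-out data; the toy training accuracies above only illustrate the mechanics of the probe.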
Anthology ID:
2021.iwcs-1.11
Volume:
Proceedings of the 14th International Conference on Computational Semantics (IWCS)
Month:
June
Year:
2021
Address:
Groningen, The Netherlands (online)
Editors:
Sina Zarrieß, Johan Bos, Rik van Noord, Lasha Abzianidze
Venue:
IWCS
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
110–120
URL:
https://aclanthology.org/2021.iwcs-1.11
Cite (ACL):
Esther Seyffarth, Younes Samih, Laura Kallmeyer, and Hassan Sajjad. 2021. Implicit representations of event properties within contextual language models: Searching for “causativity neurons”. In Proceedings of the 14th International Conference on Computational Semantics (IWCS), pages 110–120, Groningen, The Netherlands (online). Association for Computational Linguistics.
Cite (Informal):
Implicit representations of event properties within contextual language models: Searching for “causativity neurons” (Seyffarth et al., IWCS 2021)
PDF:
https://aclanthology.org/2021.iwcs-1.11.pdf
Code:
eseyffarth/predicting-causativity-iwcs-2021