Common Sense Bias in Semantic Role Labeling

Heather Lent, Anders Søgaard


Abstract
Large-scale language models such as ELMo and BERT have pushed the horizon of what is possible in semantic role labeling (SRL), solving the out-of-vocabulary problem and enabling end-to-end systems, but they have also introduced significant biases. We evaluate three SRL parsers on very simple transitive sentences with verbs usually associated with animate subjects and objects, such as “Mary babysat Tom”: a state-of-the-art parser based on BERT, an older parser based on GloVe, and an even older parser from before the days of word embeddings. When arguments are word forms predominantly used as person names, aligning with common sense expectations of animacy, the BERT-based parser is unsurprisingly superior; yet with abstract or random nouns, the opposite picture emerges. We refer to this as “common sense bias” and present a challenge dataset for evaluating the extent to which parsers are sensitive to such a bias. Our code and challenge dataset are available at github.com/coastalcph/comte
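The probing setup the abstract describes can be reproduced against an off-the-shelf BERT-based SRL parser. The sketch below is a minimal illustration, assuming AllenNLP with the allennlp-models package and its public BERT SRL model; the model URL and the abstract-noun example sentence are our assumptions for illustration, not the authors' materials, whose actual code and challenge dataset live in the linked repository.

    # Probe a BERT-based SRL parser with simple transitive sentences whose
    # arguments either match or violate the verb's animacy expectations.
    # Assumes AllenNLP plus allennlp-models; the model URL below and the
    # abstract-noun sentence are illustrative assumptions.
    import allennlp_models.structured_prediction  # registers the SRL model and predictor
    from allennlp.predictors.predictor import Predictor

    predictor = Predictor.from_path(
        "https://storage.googleapis.com/allennlp-public-models/"
        "structured-prediction-srl-bert.2020.12.15.tar.gz"
    )

    sentences = [
        "Mary babysat Tom.",        # person-name arguments: matches animacy expectations
        "Honesty babysat wisdom.",  # abstract arguments: violates animacy expectations
    ]

    for sentence in sentences:
        output = predictor.predict(sentence=sentence)
        for frame in output["verbs"]:
            # 'description' renders the labeled spans,
            # e.g. "[ARG0: Mary] [V: babysat] [ARG1: Tom]"
            print(frame["description"])

If the parser exhibits the common sense bias the paper describes, the ARG0/ARG1 spans should be assigned correctly for the person-name sentence but degrade for the abstract-noun one.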
Anthology ID: 2021.wnut-1.14
Volume: Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
Month: November
Year: 2021
Address: Online
Editors: Wei Xu, Alan Ritter, Tim Baldwin, Afshin Rahimi
Venue: WNUT
Publisher: Association for Computational Linguistics
Pages: 114–119
URL: https://aclanthology.org/2021.wnut-1.14
DOI: 10.18653/v1/2021.wnut-1.14
Cite (ACL): Heather Lent and Anders Søgaard. 2021. Common Sense Bias in Semantic Role Labeling. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 114–119, Online. Association for Computational Linguistics.
Cite (Informal): Common Sense Bias in Semantic Role Labeling (Lent & Søgaard, WNUT 2021)
PDF: https://aclanthology.org/2021.wnut-1.14.pdf