Measuring Context-Word Biases in Lexical Semantic Datasets

Qianchu Liu, Diana McCarthy, Anna Korhonen


Abstract
State-of-the-art pretrained contextualized models (PCMs), e.g. BERT, use tasks such as WiC and WSD to evaluate their word-in-context representations. This inherently assumes that performance on these tasks reflects how well a model represents the coupled word and context semantics. We question this assumption by presenting the first quantitative analysis of the context-word interaction being tested in major contextual lexical semantic tasks. To achieve this, we run probing baselines on masked input, and propose measures to calculate and visualize the degree of context or word bias in existing datasets. The analysis is performed on both models and humans. Our findings demonstrate that models are usually not tested for word-in-context semantics in the same way as humans are in these tasks, which helps us better understand the model-human gap. Specifically, for PCMs, most existing datasets fall into the extreme ends: the retrieval-based tasks exhibit strong target-word bias, while WiC-style tasks and WSD show strong context bias. In comparison, humans are less biased and achieve much better performance when both word and context are available than with masked input. We recommend our framework for understanding and controlling these biases in model interpretation and future task design.
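The masked-input probing baselines mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the function names and the `[MASK]` token are our assumptions. One probe hides the target word (context-only input), the other hides everything but the target word (word-only input); comparing model performance under each probe reveals which signal a dataset actually rewards.

```python
def mask_target(tokens, target_idx, mask_token="[MASK]"):
    """Context-only probe: hide the target word, keep its context."""
    out = list(tokens)
    out[target_idx] = mask_token
    return out

def mask_context(tokens, target_idx, mask_token="[MASK]"):
    """Word-only probe: keep the target word, hide all context tokens."""
    return [tok if i == target_idx else mask_token
            for i, tok in enumerate(tokens)]

sentence = ["She", "sat", "by", "the", "bank", "of", "the", "river"]
print(mask_target(sentence, 4))   # context-only input for "bank"
print(mask_context(sentence, 4))  # word-only input for "bank"
```

A dataset is context-biased if scores barely drop under the context-only probe, and target-word-biased if they barely drop under the word-only probe.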
Anthology ID:
2022.emnlp-main.173
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2699–2713
URL:
https://aclanthology.org/2022.emnlp-main.173
DOI:
10.18653/v1/2022.emnlp-main.173
Cite (ACL):
Qianchu Liu, Diana McCarthy, and Anna Korhonen. 2022. Measuring Context-Word Biases in Lexical Semantic Datasets. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2699–2713, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Measuring Context-Word Biases in Lexical Semantic Datasets (Liu et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.173.pdf