Implications of Annotation Artifacts in Edge Probing Test Datasets

Sagnik Ray Choudhury, Jushaan Kalra


Abstract
Edge probing (EP) tests are classification tasks that probe for grammatical knowledge encoded in the token representations produced by contextual encoders such as large language models (LLMs). Many LLM encoders have shown high performance on EP tests, leading to conjectures about their ability to encode linguistic knowledge. However, a large body of research argues that the tests do not necessarily measure the LLM’s capacity to encode knowledge, but rather reflect the classifiers’ ability to learn the task. Much of this criticism stems from the observation that the classifiers often achieve very similar accuracy whether an LLM encoder or a random encoder is used. Consequently, several modifications to the tests have been suggested, including information-theoretic probes. We show that commonly used EP test datasets contain various biases, including memorization. When these biases are removed, the LLM encoders do show a significant difference from the random ones, even with simple, non-information-theoretic probes.
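To make the setup described in the abstract concrete, here is a minimal sketch of an edge probing experiment: a small classifier ("probe") is trained on frozen token representations, and the same probe is also trained on representations from a randomly initialized encoder for comparison. This is an illustration only, not the authors' released code; the hidden size, label count, probe architecture, and toy data below are all hypothetical.

```python
import torch
import torch.nn as nn

HIDDEN = 768          # assumed encoder hidden size (e.g. BERT-base); hypothetical
NUM_LABELS = 5        # hypothetical number of edge-probing labels

class SpanProbe(nn.Module):
    """A simple (non-information-theoretic) probe: mean-pool a span, then classify."""
    def __init__(self, hidden, num_labels):
        super().__init__()
        self.clf = nn.Linear(hidden, num_labels)

    def forward(self, token_reprs, span):
        # token_reprs: (seq_len, hidden) frozen representations; span: (start, end)
        pooled = token_reprs[span[0]:span[1]].mean(dim=0)
        return self.clf(pooled)

def train_probe(reprs, spans, labels, epochs=10):
    """Train a probe on frozen representations; only the probe's weights are updated."""
    probe = SpanProbe(HIDDEN, NUM_LABELS)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for r, s, y in zip(reprs, spans, labels):
            opt.zero_grad()
            loss = loss_fn(probe(r, s).unsqueeze(0), y.unsqueeze(0))
            loss.backward()
            opt.step()
    return probe

# Toy comparison: "LLM" representations vs. a random-encoder baseline.
# In the paper's setting, the former would come from a real contextual encoder.
torch.manual_seed(0)
spans = [(1, 3)] * 20
labels = torch.randint(0, NUM_LABELS, (20,))
llm_reprs = [torch.randn(8, HIDDEN) for _ in range(20)]     # placeholder for real encoder output
random_reprs = [torch.randn(8, HIDDEN) for _ in range(20)]  # random-encoder control
train_probe(llm_reprs, spans, labels)
train_probe(random_reprs, spans, labels)
```

The criticism summarized in the abstract is that the two probes above often reach similar test accuracy; the paper's claim is that after removing dataset biases such as memorization, the gap between them becomes significant even for this kind of simple probe.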
Anthology ID:
2023.conll-1.39
Volume:
Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)
Month:
December
Year:
2023
Address:
Singapore
Editors:
Jing Jiang, David Reitter, Shumin Deng
Venue:
CoNLL
Publisher:
Association for Computational Linguistics
Pages:
575–586
URL:
https://aclanthology.org/2023.conll-1.39
DOI:
10.18653/v1/2023.conll-1.39
Cite (ACL):
Sagnik Ray Choudhury and Jushaan Kalra. 2023. Implications of Annotation Artifacts in Edge Probing Test Datasets. In Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL), pages 575–586, Singapore. Association for Computational Linguistics.
Cite (Informal):
Implications of Annotation Artifacts in Edge Probing Test Datasets (Ray Choudhury & Kalra, CoNLL 2023)
PDF:
https://aclanthology.org/2023.conll-1.39.pdf
Software:
2023.conll-1.39.Software.zip