DirectProbe: Studying Representations without Classifiers

Yichu Zhou, Vivek Srikumar


Abstract
Understanding how linguistic structure is encoded in contextualized embeddings could help explain their impressive performance across NLP tasks. Existing approaches for probing them usually call for training classifiers and use the accuracy, mutual information, or complexity as a proxy for the representation’s goodness. In this work, we argue that doing so can be unreliable because different representations may need different classifiers. We develop a heuristic, DirectProbe, that directly studies the geometry of a representation by building upon the notion of a version space for a task. Experiments with several linguistic tasks and contextualized embeddings show that, even without training classifiers, DirectProbe can shed light on how an embedding space represents labels and can also anticipate classifier performance for the representation.
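To give a flavor of what "studying the geometry of a representation without a classifier" can mean, the sketch below computes a crude separability signal on labeled embeddings: the ratio of the smallest distance between points of different labels to the largest distance between points sharing a label. This is an illustrative toy, not the DirectProbe heuristic itself (which builds convex regions per label via clustering); the function name and synthetic data are assumptions for the example.

```python
import math
import random

def geometric_separability(points, labels):
    """Crude classifier-free signal on an embedding space.

    Returns (min inter-label distance) / (max intra-label distance).
    Ratios above 1 suggest the label regions are well separated.
    Hypothetical helper for illustration; not the actual DirectProbe
    algorithm, which clusters points into convex label regions.
    """
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    max_intra = 0.0
    min_inter = math.inf
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            d = dist(points[i], points[j])
            if labels[i] == labels[j]:
                max_intra = max(max_intra, d)
            else:
                min_inter = min(min_inter, d)
    return min_inter / max_intra

# Two tight, well-separated synthetic "label regions" in 8 dimensions.
random.seed(0)
cluster_a = [[random.gauss(0.0, 0.1) for _ in range(8)] for _ in range(20)]
cluster_b = [[random.gauss(5.0, 0.1) for _ in range(8)] for _ in range(20)]
X = cluster_a + cluster_b
y = [0] * 20 + [1] * 20

print(geometric_separability(X, y) > 1.0)  # prints True for separated clusters
```

The point of the sketch is the design choice the abstract argues for: the number comes directly from distances in the embedding space, so no probing classifier is trained and no classifier family has to be chosen.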
Anthology ID:
2021.naacl-main.401
Volume:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
June
Year:
2021
Address:
Online
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
5070–5083
URL:
https://aclanthology.org/2021.naacl-main.401
DOI:
10.18653/v1/2021.naacl-main.401
Cite (ACL):
Yichu Zhou and Vivek Srikumar. 2021. DirectProbe: Studying Representations without Classifiers. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5070–5083, Online. Association for Computational Linguistics.
Cite (Informal):
DirectProbe: Studying Representations without Classifiers (Zhou & Srikumar, NAACL 2021)
PDF:
https://aclanthology.org/2021.naacl-main.401.pdf
Code:
utahnlp/DirectProbe