Interpretable Entity Representations through Large-Scale Typing

Yasumasa Onoe, Greg Durrett


Abstract
In standard methodology for natural language processing, entities in text are typically embedded in dense vector spaces with pre-trained models. The embeddings produced this way are effective when fed into downstream models, but they require end-task fine-tuning and are fundamentally difficult to interpret. In this paper, we present an approach to creating entity representations that are human readable and achieve high performance on entity-related tasks out of the box. Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types, indicating the confidence of a typing model’s decision that the entity belongs to the corresponding type. We obtain these representations using a fine-grained entity typing model, trained either on supervised ultra-fine entity typing data (Choi et al. 2018) or distantly-supervised examples from Wikipedia. On entity probing tasks involving recognizing entity identity, our embeddings used in parameter-free downstream models achieve competitive performance with ELMo- and BERT-based embeddings in trained models. We also show that it is possible to reduce the size of our type set in a learning-based way for particular domains. Finally, we show that these embeddings can be post-hoc modified through a small number of rules to incorporate domain knowledge and improve performance.
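To make the core idea concrete, below is a minimal sketch (not the authors' code) of how such an interpretable embedding is formed: a typing model scores a mention against every type in a fixed vocabulary, and the entity representation is the vector of independent per-type sigmoid posteriors. The type list, logits, and the `entity_embedding` helper here are hypothetical placeholders; in the paper the probabilities come from a BERT-based typing model over roughly 10k ultra-fine types or 60k Wikipedia categories.

```python
# Minimal sketch, assuming a typing model that emits one logit per type.
# The logits below are hypothetical placeholders so the example runs
# self-contained; the paper obtains them from a trained typing model.
import numpy as np

# Tiny illustrative type vocabulary (the paper's vocabularies are far larger).
TYPES = ["person", "athlete", "politician", "location", "city", "organization"]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def entity_embedding(logits):
    """Map per-type logits to an interpretable vector: each dimension is
    the posterior probability that the mention belongs to that type,
    treated as an independent binary decision."""
    return sigmoid(np.asarray(logits, dtype=float))

# Hypothetical logits a typing model might produce for two mentions.
mention_a = entity_embedding([4.1, -3.0, 3.8, -5.0, -4.2, -2.5])
mention_b = entity_embedding([4.3, 3.9, -4.1, -5.2, -4.8, -2.9])

# Every value is human readable: P(type | mention).
for t, p in zip(TYPES, mention_a.round(3)):
    print(f"{t:>12}: {p}")

# One plausible parameter-free downstream scorer, in the spirit of the
# paper's experiments: compare two entities via a dot product of their
# probability vectors, with no task-specific training.
print("similarity:", float(mention_a @ mention_b))
```

Because each dimension is a calibrated type probability rather than an opaque coordinate, the vectors can be inspected directly, and post-hoc rules (e.g., zeroing or boosting particular type dimensions) can inject domain knowledge without retraining.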
Anthology ID:
2020.findings-emnlp.54
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
612–624
URL:
https://aclanthology.org/2020.findings-emnlp.54
DOI:
10.18653/v1/2020.findings-emnlp.54
Bibkey:
onoe-durrett-2020-interpretable
Cite (ACL):
Yasumasa Onoe and Greg Durrett. 2020. Interpretable Entity Representations through Large-Scale Typing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 612–624, Online. Association for Computational Linguistics.
Cite (Informal):
Interpretable Entity Representations through Large-Scale Typing (Onoe & Durrett, Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.54.pdf