Calibrated Seq2seq Models for Efficient and Generalizable Ultra-fine Entity Typing

Yanlin Feng, Adithya Pratapa, David Mortensen


Abstract
Ultra-fine entity typing plays a crucial role in information extraction by predicting fine-grained semantic types for entity mentions in text. However, this task poses significant challenges due to the massive number of entity types in the output space. The current state-of-the-art approaches, based on standard multi-label classifiers or cross-encoder models, suffer from poor generalization or slow inference. In this paper, we present CASENT, a seq2seq model designed for ultra-fine entity typing that predicts ultra-fine types with calibrated confidence scores. Our model takes an entity mention as input and employs constrained beam search to generate multiple types autoregressively. The raw sequence probabilities associated with the predicted types are then transformed into confidence scores using a novel calibration method. We conduct extensive experiments on the UFET dataset, which contains over 10k types. Our method outperforms the previous state of the art in both F1 score and calibration error, while achieving an inference speedup of over 50 times. We further demonstrate the generalization capabilities of our model by evaluating it in zero-shot and few-shot settings on five specialized-domain entity typing datasets unseen during training. Remarkably, our model outperforms large language models with 10 times more parameters in the zero-shot setting, and when fine-tuned on 50 examples, it significantly outperforms ChatGPT on all datasets.
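The abstract names two technical ingredients: constrained beam search that can only emit strings from the type vocabulary, and a calibration step that maps raw sequence probabilities to confidence scores. The following is a minimal sketch of that general recipe using HuggingFace Transformers. The model name, the toy five-type vocabulary, and the Platt-style logistic mapping are illustrative assumptions; the paper's actual calibration method is novel and is not reproduced here.

# Illustrative sketch: constrained decoding over a type vocabulary, then
# mapping raw sequence log-probabilities to confidence scores.
# Assumptions: "t5-small" as a stand-in model, a tiny hypothetical type
# list, and Platt-style scaling in place of the paper's calibration.
import math
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical slice of the type vocabulary (UFET has over 10k types).
candidate_types = ["person", "athlete", "organization", "location", "event"]

# Build a trie of token-id sequences so beam search can only produce
# strings that spell out a known type.
trie = {}
for t in candidate_types:
    node = trie
    for tok in tokenizer(t, add_special_tokens=False).input_ids + [tokenizer.eos_token_id]:
        node = node.setdefault(tok, {})

def allowed_tokens(batch_id, input_ids):
    # Walk the trie along the tokens generated so far; T5 decoding starts
    # with the pad token, which we skip.
    node = trie
    for tok in input_ids.tolist()[1:]:
        node = node.get(tok, {})
    return list(node.keys()) or [tokenizer.eos_token_id]

mention = "type the entity: [Serena Williams] won the tournament."
inputs = tokenizer(mention, return_tensors="pt")
out = model.generate(
    **inputs,
    num_beams=5,
    num_return_sequences=5,
    prefix_allowed_tokens_fn=allowed_tokens,
    output_scores=True,
    return_dict_in_generate=True,
)

# out.sequences_scores are length-normalized log-probabilities under the
# default length penalty. A Platt-style logistic map (a, b would be fit on
# dev data) stands in for the paper's calibration method.
a, b = 1.0, 0.0  # hypothetical calibration parameters
for seq, score in zip(out.sequences, out.sequences_scores):
    label = tokenizer.decode(seq, skip_special_tokens=True)
    confidence = 1.0 / (1.0 + math.exp(-(a * score.item() + b)))
    print(f"{label}: {confidence:.3f}")

In the paper's setting, the trie would cover the full 10k+ UFET type vocabulary, and predicted types would be kept or discarded by thresholding the calibrated confidence.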
Anthology ID: 2023.findings-emnlp.1040
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 15550–15560
URL: https://aclanthology.org/2023.findings-emnlp.1040
DOI: 10.18653/v1/2023.findings-emnlp.1040
Cite (ACL): Yanlin Feng, Adithya Pratapa, and David Mortensen. 2023. Calibrated Seq2seq Models for Efficient and Generalizable Ultra-fine Entity Typing. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15550–15560, Singapore. Association for Computational Linguistics.
Cite (Informal): Calibrated Seq2seq Models for Efficient and Generalizable Ultra-fine Entity Typing (Feng et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.1040.pdf