Prompt-based Text Entailment for Low-Resource Named Entity Recognition

Dongfang Li, Baotian Hu, Qingcai Chen


Abstract
Pre-trained Language Models (PLMs) have been applied to many NLP tasks and have achieved promising results. Nevertheless, the fine-tuning procedure needs labeled data from the target domain, making it difficult to learn in low-resource scenarios where labeled data is scarce. To address these challenges, we propose Prompt-based Text Entailment (PTE) for low-resource named entity recognition, which better leverages the knowledge in PLMs. We first reformulate named entity recognition as a text entailment task: the original sentence, paired with entity type-specific prompts, is fed into the PLM to obtain an entailment score for each candidate, and the entity type with the top score is selected as the final label. We then inject tagging labels into the prompts and treat words, rather than n-gram spans, as the basic units, which reduces the time complexity of candidate generation by avoiding n-gram enumeration. Experimental results demonstrate that the proposed method, PTE, achieves competitive performance on the CoNLL03 dataset and outperforms fine-tuned counterparts on the MIT Movie and Few-NERD datasets in low-resource settings.
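The core loop described above (one type-specific prompt per entity type, scored by an entailment model, argmax over types) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt template, the entity type list, and the `entailment_score` stand-in are all assumptions; in the actual method a PLM (e.g. one trained on MNLI, which the paper lists as data) would supply the entailment probabilities.

```python
# Sketch of prompt-based entailment scoring for NER.
# All names and templates here are illustrative assumptions.

ENTITY_TYPES = ["person", "location", "organization", "other"]


def entailment_score(premise: str, hypothesis: str) -> float:
    """Stand-in for a PLM entailment head. A real scorer would return
    P(entailment | premise, hypothesis) from the model; here a tiny
    hard-coded gazetteer is used purely so the sketch runs."""
    toy_gazetteer = {"Paris": "location", "Obama": "person", "Google": "organization"}
    for span, etype in toy_gazetteer.items():
        # The hypothesis begins with the candidate span (see template below),
        # so agree only when the gazetteer type matches the prompted type.
        if hypothesis.startswith(span) and etype in hypothesis:
            return 0.9
    return 0.1


def label_candidate(sentence: str, candidate: str) -> str:
    """Build one type-specific prompt per entity type, score each against
    the sentence with the entailment model, and pick the top-scoring type."""
    scores = {}
    for etype in ENTITY_TYPES:
        # Hypothetical prompt template; the paper injects tagging labels
        # into prompts of this general shape.
        hypothesis = f"{candidate} is a {etype} entity."
        scores[etype] = entailment_score(sentence, hypothesis)
    return max(scores, key=scores.get)


print(label_candidate("Obama visited Paris.", "Paris"))
```

Scoring per candidate word (rather than every n-gram span) is what keeps candidate generation linear in sentence length, as the abstract notes: a sentence of n words yields n candidates instead of O(n²) spans.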
Anthology ID:
2022.coling-1.164
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
1896–1903
URL:
https://aclanthology.org/2022.coling-1.164
Cite (ACL):
Dongfang Li, Baotian Hu, and Qingcai Chen. 2022. Prompt-based Text Entailment for Low-Resource Named Entity Recognition. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1896–1903, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Prompt-based Text Entailment for Low-Resource Named Entity Recognition (Li et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.164.pdf
Data
Few-NERD, MultiNLI