Shan Zhang
2024
KCL: Few-shot Named Entity Recognition with Knowledge Graph and Contrastive Learning
Shan Zhang | Bin Cao | Jing Fan
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Named Entity Recognition (NER), a crucial subtask in natural language processing (NLP), often suffers from a shortage of labeled samples (a.k.a. few-shot). Metric-based meta-learning methods aim to learn a semantic space and assign each entity to its nearest label based on the similarity of their representations. However, these methods struggle with semantic space learning, which leads to suboptimal performance. Specifically, the label name or its description is widely used for label semantic representation learning, but the label information that can be extracted from existing label descriptions is limited. In addition, these methods focus on reducing the distance between an entity and its corresponding label, which may also reduce the distance between labels and thus cause misclassification. In this paper, we propose KCL, a few-shot NER method that harnesses the power of Knowledge Graphs and Contrastive Learning to improve prototypical semantic space learning. First, KCL leverages knowledge graphs to provide rich and structured label information for label semantic representation learning. Then, KCL introduces the idea of contrastive learning to learn the label semantic representations. These representations help push apart the label clusters in the prototypical semantic space, reducing misclassification. Extensive experiments show that KCL achieves significant improvement over the state-of-the-art methods.
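The two ingredients the abstract describes, nearest-prototype classification and a contrastive term that keeps label clusters apart, can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the Euclidean metric, and the margin-based separation penalty are all assumptions standing in for KCL's actual objective.

```python
import numpy as np

def prototypes(support_emb, support_labels, num_labels):
    # Mean support embedding per label: the class prototype,
    # as in standard prototypical networks.
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(num_labels)])

def classify(query_emb, protos):
    # Assign each query token to its nearest prototype (Euclidean distance).
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def label_separation_loss(protos, margin=1.0):
    # Toy contrastive-style penalty (assumed form, not KCL's exact loss):
    # any two prototypes closer than `margin` are pushed apart,
    # mimicking the goal of distancing label clusters.
    loss, k = 0.0, len(protos)
    for i in range(k):
        for j in range(i + 1, k):
            dist = np.linalg.norm(protos[i] - protos[j])
            loss += max(0.0, margin - dist) ** 2
    return loss
```

With well-separated clusters the penalty is zero; it only activates when two label prototypes drift within the margin of each other, which is exactly the misclassification-prone regime the abstract points at.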
2023
Task-adaptive Label Dependency Transfer for Few-shot Named Entity Recognition
Shan Zhang | Bin Cao | Tianming Zhang | Yuqi Liu | Jing Fan
Findings of the Association for Computational Linguistics: ACL 2023
Named Entity Recognition (NER), a crucial subtask in natural language processing (NLP), suffers from limited labeled samples (a.k.a. few-shot). Meta-learning methods are widely used for few-shot NER, but existing methods overlook the importance of label dependency for NER, resulting in suboptimal performance. Moreover, applying meta-learning to label dependency learning faces a special challenge: because label sets differ across domains, label dependencies cannot be transferred directly from one domain to another. In this paper, we propose the Task-adaptive Label Dependency Transfer (TLDT) method, which makes label dependency transferable and adapts effectively to new tasks with only a few samples. TLDT improves existing optimization-based meta-learning methods by learning a general initialization and individual parameter update rules for label dependency. Extensive experiments show that TLDT achieves significant improvement over the state-of-the-art methods.
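The core mechanism described here, a meta-learned initialization for label-dependency parameters plus an individual (per-parameter) update rule, can be sketched in a few lines. This is a hedged illustration, not TLDT itself: the transition-matrix formulation, the `alpha` step-size matrix, and the sequence-scoring function are assumptions modeled on how a linear-chain CRF would use label dependencies.

```python
import numpy as np

def adapt_transitions(T_init, alpha, grad):
    # One inner-loop adaptation step. T_init is a meta-learned
    # initialization of the label-transition matrix; alpha is a
    # hypothetical per-parameter step-size matrix (the "individual
    # update rule"), so each dependency adapts at its own rate.
    # All arguments are (num_labels, num_labels) arrays.
    return T_init - alpha * grad

def sequence_score(emissions, T, labels):
    # Score of a label sequence: per-token emission scores plus the
    # learned transition (label-dependency) term between adjacent labels,
    # as in a linear-chain CRF.
    s = emissions[0, labels[0]]
    for t in range(1, len(labels)):
        s += T[labels[t - 1], labels[t]] + emissions[t, labels[t]]
    return s
```

Because `alpha` is itself meta-learned, transitions that generalize across domains can receive small step sizes (staying near the shared initialization) while domain-specific dependencies receive large ones, which is one plausible reading of how a task-adaptive update rule makes label dependency transferable.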