Learn from Relation Information: Towards Prototype Representation Rectification for Few-Shot Relation Extraction

Yang Liu, Jinpeng Hu, Xiang Wan, Tsung-Hui Chang


Abstract
Few-shot relation extraction aims to adapt quickly to novel relation classes with only a few samples, after training on known relation classes. Most existing methods introduce relation information (i.e., relation labels or descriptions) only implicitly to constrain prototype representation learning, e.g., through contrastive learning, graphs, or specially designed attention mechanisms, which may add redundant or even harmful parameters. Moreover, because this implicit constraint is weak, such approaches struggle to handle outlier samples far from the class center. In this paper, we propose an effective and parameter-free Prototype Rectification Method (PRM) for few-shot relation extraction, in which a prototype rectification module explicitly rectifies the original prototypes with relation information. Specifically, PRM consists of two gate mechanisms: one gate decides how much of the original prototype to retain, and the other updates the retained prototype with relation information. In this way, better and more stable global relation information can be captured to guide prototype representations, allowing PRM to deal with outliers robustly. We also extend PRM to the none-of-the-above (NOTA) and domain adaptation scenarios. Experimental results on the FewRel 1.0 and 2.0 datasets demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance.
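The two-gate rectification described above can be illustrated with a minimal sketch. The gate forms below are assumptions reconstructed from the abstract alone (element-wise sigmoid gates with no trainable parameters, consistent with the parameter-free claim); the exact equations are in the paper and the released code (lylylylylyly/prm-fsre).

import torch

def rectify_prototype(proto: torch.Tensor, rel: torch.Tensor) -> torch.Tensor:
    """Hypothetical parameter-free sketch of PRM's two-gate rectification.

    proto: [N, d] class prototypes (e.g., support-set averages)
    rel:   [N, d] encoded relation information (label / description)
    """
    # Gate 1 (assumed form): decide element-wise how much of the
    # original prototype to retain.
    keep = torch.sigmoid(proto * rel)
    retained = keep * proto
    # Gate 2 (assumed form): update the retained prototype with
    # relation information via a sigmoid-weighted blend.
    update = torch.sigmoid(retained * rel)
    return update * retained + (1.0 - update) * rel

Because both gates are computed directly from the prototype and relation representations, this sketch adds no learnable parameters, matching the spirit of the method, though not necessarily its exact formulation.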
Anthology ID:
2022.findings-naacl.139
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1822–1831
URL:
https://aclanthology.org/2022.findings-naacl.139
DOI:
10.18653/v1/2022.findings-naacl.139
Cite (ACL):
Yang Liu, Jinpeng Hu, Xiang Wan, and Tsung-Hui Chang. 2022. Learn from Relation Information: Towards Prototype Representation Rectification for Few-Shot Relation Extraction. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1822–1831, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Learn from Relation Information: Towards Prototype Representation Rectification for Few-Shot Relation Extraction (Liu et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-naacl.139.pdf
Software:
 2022.findings-naacl.139.software.zip
Code:
 lylylylylyly/prm-fsre
Data:
FewRel, FewRel 2.0