Zhang Yongxin


2022

Abstains from Prediction: Towards Robust Relation Extraction in Real World
Zhao Jun | Zhang Yongxin | Xu Nuo | Gui Tao | Zhang Qi | Chen Yunwen | Gao Xiang
Proceedings of the 21st Chinese National Conference on Computational Linguistics

“Supervised learning is a classic paradigm of relation extraction (RE). However, a well-performing model can still confidently make arbitrarily wrong predictions when exposed to samples of unseen relations. In this work, we propose a relation extraction method with a rejection option to improve robustness to unseen relations. To enable the classifier to reject unseen relations, we introduce contrastive learning techniques and carefully design a set of class-preserving transformations to improve the discriminability between known and unseen relations. Based on the learned representation, inputs of unseen relations are assigned a low confidence score and rejected. Off-the-shelf open relation extraction (OpenRE) methods can then be adopted to discover the potential relations among these rejected inputs. In addition, we find that the rejection can be further improved via readily available distantly supervised data. Experiments on two public datasets demonstrate the effectiveness of our method in capturing discriminative representations for unseen relation rejection.”
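To illustrate the rejection step described in the abstract, below is a minimal sketch of confidence-thresholded abstention. It assumes cosine similarity to per-relation prototypes in the learned representation space as the confidence score and a hypothetical threshold `tau`; the paper's actual scoring function and threshold selection are not specified here, so treat the names and values as illustrative assumptions only.

```python
import numpy as np

def reject_unseen(features, prototypes, tau=0.7):
    """Confidence-based rejection sketch (illustrative, not the paper's exact method).

    features:   (n, d) L2-normalized input representations
    prototypes: (k, d) L2-normalized prototypes of known relations
    tau:        hypothetical rejection threshold

    Returns the predicted known-relation index per input, or -1 when the
    maximum cosine similarity falls below tau (treated as an unseen relation
    and handed off to an OpenRE module for relation discovery).
    """
    sims = features @ prototypes.T      # cosine similarity to each known relation
    conf = sims.max(axis=1)             # confidence = best similarity
    pred = sims.argmax(axis=1)
    pred[conf < tau] = -1               # abstain from prediction on low confidence
    return pred

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(5, 8))
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)
    protos = rng.normal(size=(3, 8))
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    print(reject_unseen(feats, protos))
```

In this sketch the contrastive training that shapes the representation space is assumed to have already happened; only the downstream accept/reject decision is shown.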