Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation

Peiyi Wang, Yifan Song, Tianyu Liu, Binghuai Lin, Yunbo Cao, Sujian Li, Zhifang Sui


Abstract
Continual relation extraction (CRE) aims to continually learn new relations from a class-incremental data stream. A CRE model typically suffers from the catastrophic forgetting problem, i.e., its performance on old relations degrades severely as it learns new relations. Most previous work attributes catastrophic forgetting to the corruption of the learned representations as new relations arrive, under the implicit assumption that the CRE models have adequately learned the old relations. In this paper, we argue through empirical studies that this assumption may not hold, and that an important cause of catastrophic forgetting is that the learned representations are not robust to the appearance of analogous relations in the subsequent learning process. To address this issue, we encourage the model to learn more precise and robust representations through a simple yet effective adversarial class augmentation mechanism (ACA), which is easy to implement and model-agnostic. Experimental results show that ACA consistently improves the performance of state-of-the-art CRE models on two popular benchmarks.
Anthology ID:
2022.emnlp-main.420
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6264–6278
URL:
https://aclanthology.org/2022.emnlp-main.420
DOI:
10.18653/v1/2022.emnlp-main.420
Cite (ACL):
Peiyi Wang, Yifan Song, Tianyu Liu, Binghuai Lin, Yunbo Cao, Sujian Li, and Zhifang Sui. 2022. Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6264–6278, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Learning Robust Representations for Continual Relation Extraction via Adversarial Class Augmentation (Wang et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.420.pdf