Teamwork Is Not Always Good: An Empirical Study of Classifier Drift in Class-incremental Information Extraction

Minqian Liu, Lifu Huang


Abstract
Class-incremental learning (CIL) aims to develop a learning system that can continually learn new classes from a data stream without forgetting previously learned ones. When classes are learned incrementally, the classifier must be constantly updated to incorporate new classes, and the resulting drift in the decision boundary may lead to severe forgetting. This fundamental challenge, however, has not yet been studied extensively, especially in the setting where no samples from old classes are stored for rehearsal. In this paper, we take a closer look at how classifier drift leads to forgetting and accordingly design four simple yet effective solutions to alleviate it: an Individual Classifiers with Frozen Feature Extractor (ICE) framework, where we train a separate classifier for each learning session, and its three variants ICE-PL, ICE-O, and ICE-PL&O, which further constrain the learning of new classifiers with the logits of previously learned classes from old sessions or a constant logit for an Other class. Extensive experiments and analysis on 6 class-incremental information extraction tasks demonstrate that our solutions, especially ICE-O, consistently and significantly outperform previous state-of-the-art approaches, with up to 44.7% absolute F-score gain, providing a strong baseline and insights for future research on class-incremental learning.
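The ICE-O idea in the abstract can be sketched in a few lines. The following is an illustrative toy implementation, not the authors' code: the feature extractor, weight values, and class names are all invented for the example. Each session trains its own linear classifier over a frozen feature extractor, and at inference the logits from every session's classifier are compared against a constant Other logit, so a sample scoring below that threshold everywhere falls back to Other.

```python
import math

def features(x):
    # Stand-in for the frozen feature extractor. In the paper this would be
    # a frozen pretrained encoder; here it is a fixed toy mapping so the
    # example is self-contained.
    return [math.tanh(v) for v in x]

class SessionClassifier:
    """A linear classifier trained only on one session's classes (ICE)."""
    def __init__(self, class_names, weights, biases):
        self.class_names = class_names
        self.weights = weights  # one weight vector per class
        self.biases = biases    # one bias per class

    def logits(self, feats):
        return [sum(w_i * f_i for w_i, f_i in zip(w, feats)) + b
                for w, b in zip(self.weights, self.biases)]

# Constant logit for the Other class (the ICE-O constraint): each session's
# classifier is calibrated against this fixed value rather than against
# classes from other sessions, so combining sessions needs no joint update.
OTHER_LOGIT = 0.0

def predict(session_classifiers, x):
    # Concatenate the logits of all individually trained classifiers and
    # take the argmax; if no class logit exceeds the constant Other logit,
    # predict Other.
    feats = features(x)
    best_name, best_score = "Other", OTHER_LOGIT
    for clf in session_classifiers:
        for name, score in zip(clf.class_names, clf.logits(feats)):
            if score > best_score:
                best_name, best_score = name, score
    return best_name
```

Because each session's classifier is trained in isolation against the fixed Other logit, adding a new session never moves the decision boundaries of earlier sessions, which is the drift the paper identifies as a cause of forgetting.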
Anthology ID:
2023.findings-acl.141
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2241–2257
URL:
https://aclanthology.org/2023.findings-acl.141
DOI:
10.18653/v1/2023.findings-acl.141
Bibkey:
Cite (ACL):
Minqian Liu and Lifu Huang. 2023. Teamwork Is Not Always Good: An Empirical Study of Classifier Drift in Class-incremental Information Extraction. In Findings of the Association for Computational Linguistics: ACL 2023, pages 2241–2257, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Teamwork Is Not Always Good: An Empirical Study of Classifier Drift in Class-incremental Information Extraction (Liu & Huang, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.141.pdf
Video:
https://aclanthology.org/2023.findings-acl.141.mp4