Distill and Replay for Continual Language Learning

Jingyuan Sun, Shaonan Wang, Jiajun Zhang, Chengqing Zong


Abstract
Accumulating knowledge to tackle new tasks without necessarily forgetting the old ones is a hallmark of human-like intelligence. But the current dominant paradigm of machine learning is still to train a model that works well on static datasets. When learning tasks in a stream where the data distribution may fluctuate, fitting on new tasks often leads to forgetting the previous ones. We propose a simple yet effective framework that continually learns natural language understanding tasks with one model. Our framework distills knowledge and replays experience from previous tasks when fitting on a new task, and is thus named DnR (distill and replay). The framework is based on language models and can be smoothly built with different language model architectures. Experimental results demonstrate that DnR outperforms previous state-of-the-art models in continually learning tasks of the same type but from different domains, as well as tasks of different types. With the distillation method, we further show that it is possible for DnR to incrementally compress the model size while still outperforming most of the baselines. We hope that DnR can promote the empirical application of continual language learning and contribute to building human-level language intelligence minimally affected by catastrophic forgetting.
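The abstract's core idea, training on a new task while distilling from a frozen copy of the previous model on replayed examples, can be sketched as a combined loss. This is a minimal illustration, not the paper's exact formulation: the function names, the temperature `T`, and the weighting `lam` are all assumptions made for the sketch.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over a list of raw logits.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the previous model's (teacher's) softened
    # outputs and the current model's (student's) outputs on a
    # replayed example; minimized when the two distributions match.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))

def dnr_step_loss(new_task_ce, student_logits, teacher_logits,
                  lam=0.5, T=2.0):
    # Total per-step loss: cross-entropy on the new task plus a
    # weighted distillation term on examples replayed from the
    # memory of earlier tasks (lam balances the two objectives).
    return new_task_ce + lam * distill_loss(student_logits, teacher_logits, T)
```

In a sketch like this, matching the teacher's softened distribution on replayed data is what counteracts forgetting, while the plain cross-entropy term drives learning on the new task.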
Anthology ID:
2020.coling-main.318
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
3569–3579
URL:
https://aclanthology.org/2020.coling-main.318
DOI:
10.18653/v1/2020.coling-main.318
Cite (ACL):
Jingyuan Sun, Shaonan Wang, Jiajun Zhang, and Chengqing Zong. 2020. Distill and Replay for Continual Language Learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3569–3579, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
Distill and Replay for Continual Language Learning (Sun et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.318.pdf
Data
QA-SRL, SQuAD, WikiSQL, decaNLP