Enhancing Lifelong Language Learning by Improving Pseudo-Sample Generation

Kasidis Kanwatchara, Thanapapas Horsuwan, Piyawat Lertvittayakumjorn, Boonserm Kijsirikul, Peerapon Vateekul


Abstract
To achieve lifelong language learning, pseudo-rehearsal methods leverage samples generated from a language model to refresh the knowledge of previously learned tasks. Without proper controls, however, these methods can fail to retain the knowledge of complex tasks with longer texts, since most of the generated samples are low in quality. To overcome this problem, we make three specific contributions. First, we utilize two language models, each of which specializes in a specific part of the input, to produce high-quality pseudo samples. Second, we reduce the number of parameters used by applying adapter modules to enhance training efficiency. Third, we further improve the overall quality of pseudo samples using temporal ensembling and sample regeneration. The results show that our framework achieves significant improvements over baselines on multiple task sequences. Also, our pseudo-sample analysis reveals helpful insights for designing even better pseudo-rehearsal methods in the future.
Anthology ID:
2022.cl-4.12
Volume:
Computational Linguistics, Volume 48, Issue 4 - December 2022
Month:
December
Year:
2022
Address:
Cambridge, MA
Venue:
CL
Publisher:
MIT Press
Pages:
819–848
URL:
https://aclanthology.org/2022.cl-4.12
DOI:
10.1162/coli_a_00449
Cite (ACL):
Kasidis Kanwatchara, Thanapapas Horsuwan, Piyawat Lertvittayakumjorn, Boonserm Kijsirikul, and Peerapon Vateekul. 2022. Enhancing Lifelong Language Learning by Improving Pseudo-Sample Generation. Computational Linguistics, 48(4):819–848.
Cite (Informal):
Enhancing Lifelong Language Learning by Improving Pseudo-Sample Generation (Kanwatchara et al., CL 2022)
PDF:
https://aclanthology.org/2022.cl-4.12.pdf