%0 Conference Proceedings
%T Learning to Solve NLP Tasks in an Incremental Number of Languages
%A Castellucci, Giuseppe
%A Filice, Simone
%A Croce, Danilo
%A Basili, Roberto
%Y Zong, Chengqing
%Y Xia, Fei
%Y Li, Wenjie
%Y Navigli, Roberto
%S Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
%D 2021
%8 August
%I Association for Computational Linguistics
%C Online
%F castellucci-etal-2021-learning
%X In real scenarios, a multilingual model trained to solve NLP tasks on a set of languages can be required to support new languages over time. Unfortunately, straightforward retraining on a dataset containing annotated examples for all the languages is both expensive and time-consuming, especially as the number of target languages grows. Moreover, the original annotated material may no longer be available due to storage or business constraints. Retraining only on the new language data inevitably results in Catastrophic Forgetting of previously acquired knowledge. We propose a Continual Learning strategy that updates a model to support new languages over time, while maintaining consistent results on previously learned languages. We define a Teacher-Student framework in which the existing model “teaches” a student model its knowledge about the languages it already supports, while the student is also trained on a new language. We report an experimental evaluation on several tasks, including Sentence Classification, Relational Learning, and Sequence Labeling.
%R 10.18653/v1/2021.acl-short.106
%U https://aclanthology.org/2021.acl-short.106
%U https://doi.org/10.18653/v1/2021.acl-short.106
%P 837-847