Domain adapted machine translation: What does catastrophic forgetting forget and why?

Danielle Saunders, Steve DeNeefe

Abstract
Neural Machine Translation (NMT) models can be specialized by domain adaptation, often involving fine-tuning on a dataset of interest. This process risks catastrophic forgetting: rapid loss of generic translation quality. Forgetting has been widely observed, with many mitigation methods proposed. However, the causes of forgetting and the relationship between forgetting and adaptation data are underexplored. This paper takes a novel approach to understanding catastrophic forgetting during NMT adaptation by investigating the impact of the data. We provide a first investigation of what is forgotten, and why. We examine the relationship between forgetting and the in-domain data, and show that the amount and type of forgetting is linked to that data’s target vocabulary coverage. Our findings pave the way toward better-informed NMT domain adaptation.
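
The abstract's key variable, target vocabulary coverage, can be made concrete with a small sketch. The Python below is our own illustration, not the paper's measurement: the function name, whitespace tokenization, and type-level (rather than subword-level) counting are all assumptions. It estimates the fraction of a generic corpus's target-side vocabulary that also appears in the adaptation data; by the abstract's finding, lower coverage would leave more of the generic vocabulary unseen during fine-tuning.

def target_vocab_coverage(generic_targets, in_domain_targets):
    """Fraction of generic target-side token types also seen in the in-domain data.

    Illustrative only: assumes whitespace tokenization; the paper's own
    measurement may operate over subword units instead.
    """
    generic_vocab = {tok for sent in generic_targets for tok in sent.split()}
    domain_vocab = {tok for sent in in_domain_targets for tok in sent.split()}
    if not generic_vocab:
        return 0.0
    return len(generic_vocab & domain_vocab) / len(generic_vocab)

# Hypothetical toy corpora: a generic target-side sample and a medical
# adaptation sample sharing almost no vocabulary (low coverage).
generic = ["the cat sat on the mat", "stocks fell sharply today"]
medical = ["the patient received a dose", "symptoms improved after treatment"]
print(f"coverage: {target_vocab_coverage(generic, medical):.2f}")  # -> 0.11
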
Anthology ID:
2024.emnlp-main.704
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
12660–12671
URL:
https://aclanthology.org/2024.emnlp-main.704
Cite (ACL):
Danielle Saunders and Steve DeNeefe. 2024. Domain adapted machine translation: What does catastrophic forgetting forget and why?. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 12660–12671, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Domain adapted machine translation: What does catastrophic forgetting forget and why? (Saunders & DeNeefe, EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.704.pdf