No Train but Gain: Language Arithmetic for training-free Language Adapters enhancement

Mateusz Klimaszewski, Piotr Andruszkiewicz, Alexandra Birch


Abstract
Modular deep learning is the state-of-the-art solution for lifting the curse of multilinguality, mitigating negative interference and enabling cross-lingual transfer in Multilingual Pre-trained Language Models. However, a trade-off of this approach is the reduction in positive transfer learning from closely related languages. In response, we introduce a novel method called language arithmetic, which enables training-free post-processing to address this limitation. Extending the task arithmetic framework, we apply learning via addition to the language adapters, transitioning the framework from a multi-task to a multilingual setup. We demonstrate the effectiveness of the proposed solution on three downstream tasks in a MAD-X-based set of cross-lingual schemes, where it acts as a post-processing procedure. Language arithmetic consistently improves the baselines with significant gains, especially in the most challenging case of zero-shot application. Our code and models are available at https://github.com/mklimasz/language-arithmetic.
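To make the "learning via addition" idea concrete, the sketch below illustrates how two language adapters might be combined by weighted parameter addition in the spirit of task arithmetic. This is a minimal illustration only: the function name, the interpolation coefficient, and the use of plain PyTorch state dicts are assumptions for exposition, not the paper's exact implementation (see the linked repository for that).

```python
import torch

def combine_language_adapters(target_adapter: dict, related_adapter: dict,
                              lam: float = 0.5) -> dict:
    """Hypothetical sketch: enhance a target language adapter by adding a
    related language's adapter weights, scaled by `lam` (training-free)."""
    combined = {}
    for name, target_param in target_adapter.items():
        related_param = related_adapter[name]
        # Weighted sum of the two adapters' parameters.
        combined[name] = (1.0 - lam) * target_param + lam * related_param
    return combined


# Hypothetical usage with toy state dicts standing in for two trained
# MAD-X language adapters (e.g., a low-resource target and a related language).
if __name__ == "__main__":
    target = {"down.weight": torch.randn(48, 768), "up.weight": torch.randn(768, 48)}
    related = {"down.weight": torch.randn(48, 768), "up.weight": torch.randn(768, 48)}
    enhanced = combine_language_adapters(target, related, lam=0.3)
```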
Anthology ID: 2025.coling-main.737
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 11121–11134
URL: https://aclanthology.org/2025.coling-main.737/
Cite (ACL): Mateusz Klimaszewski, Piotr Andruszkiewicz, and Alexandra Birch. 2025. No Train but Gain: Language Arithmetic for training-free Language Adapters enhancement. In Proceedings of the 31st International Conference on Computational Linguistics, pages 11121–11134, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): No Train but Gain: Language Arithmetic for training-free Language Adapters enhancement (Klimaszewski et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.737.pdf