Generalization Measures for Zero-Shot Cross-Lingual Transfer

Saksham Bassi, Duygu Ataman, Kyunghyun Cho


Abstract
Building robust and reliable machine learning systems requires models with the capacity to generalize their knowledge to interpret unseen inputs with different characteristics. Traditional language model evaluation tasks lack informative metrics about model generalization, and applicability in new settings is often measured using task- and language-specific downstream performance, which is unavailable for many languages and tasks. To address this gap, we explore a set of efficient and reliable measures that could aid in computing more information related to the generalization capability of language models, particularly in cross-lingual zero-shot settings. Our central hypothesis is that the sharpness of a model's loss landscape, i.e., the representation of loss values over its weight space, can indicate its generalization potential, with a flatter landscape suggesting better generalization. We propose a novel and stable algorithm to reliably compute the sharpness of a model optimum, and demonstrate its correlation with successful cross-lingual transfer.
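The paper's own algorithm is given in the full text; purely as a generic illustration of the idea, sharpness around an optimum is commonly estimated as the largest loss increase under small random perturbations of the weights. The function names, the perturbation radius, and the toy quadratic losses below are all assumptions for the sketch, not the authors' method:

```python
import numpy as np

def sharpness(loss_fn, weights, rho=0.05, n_samples=200, seed=0):
    """Monte Carlo estimate of sharpness: the maximum increase in loss
    over random perturbations of the weights on an L2 sphere of radius rho."""
    rng = np.random.default_rng(seed)
    base = loss_fn(weights)
    worst = 0.0
    for _ in range(n_samples):
        d = rng.normal(size=weights.shape)
        d *= rho / np.linalg.norm(d)  # scale the direction to length rho
        worst = max(worst, loss_fn(weights + d) - base)
    return worst

# Toy quadratic losses, both minimized at w = 0: one flat, one sharp.
flat = lambda w: 0.5 * np.sum(w ** 2)
sharp = lambda w: 50.0 * np.sum(w ** 2)

w0 = np.zeros(10)
print(sharpness(flat, w0) < sharpness(sharp, w0))  # flatter optimum, lower sharpness
```

Under the paper's hypothesis, a model checkpoint with a lower sharpness estimate at its optimum would be expected to transfer better to unseen languages.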
Anthology ID:
2024.mrl-1.25
Volume:
Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Jonne Sälevä, Abraham Owodunni
Venue:
MRL
Publisher:
Association for Computational Linguistics
Pages:
298–309
URL:
https://aclanthology.org/2024.mrl-1.25
Cite (ACL):
Saksham Bassi, Duygu Ataman, and Kyunghyun Cho. 2024. Generalization Measures for Zero-Shot Cross-Lingual Transfer. In Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024), pages 298–309, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Generalization Measures for Zero-Shot Cross-Lingual Transfer (Bassi et al., MRL 2024)
PDF:
https://aclanthology.org/2024.mrl-1.25.pdf