Saksham Bassi
2024
Generalization Measures for Zero-Shot Cross-Lingual Transfer
Saksham Bassi | Duygu Ataman | Kyunghyun Cho
Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)
Building robust and reliable machine learning systems requires models with the capacity to generalize their knowledge to interpret unseen inputs with different characteristics. Traditional language model evaluation tasks lack informative metrics about model generalization, and their applicability in new settings is often measured using task- and language-specific downstream performance, which is unavailable for many languages and tasks. To address this gap, we explore a set of efficient and reliable measures that could aid in computing more information related to the generalization capability of language models, particularly in cross-lingual zero-shot settings. Our central hypothesis is that the sharpness of a model’s loss landscape, i.e., the representation of loss values over its weight space, can indicate its generalization potential, with a flatter landscape suggesting better generalization. We propose a novel and stable algorithm to reliably compute the sharpness of a model optimum, and demonstrate its correlation with successful cross-lingual transfer.
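The paper's own sharpness algorithm is not reproduced here. As a rough illustration of the underlying idea only, the sketch below estimates sharpness as the average loss increase when a trained model's weights are perturbed by small Gaussian noise, a common proxy for the flatness of an optimum. The model, data, noise scale `sigma`, and sample count are all illustrative assumptions, not details from the paper.

```python
# Generic perturbation-based sharpness proxy (NOT the paper's algorithm):
# average loss increase under small random weight perturbations. A flatter
# optimum should show a smaller increase.
import copy
import torch
import torch.nn as nn


def _mean_loss(model, loss_fn, data_loader) -> float:
    """Mean loss of `model` over `data_loader`, computed without gradients."""
    total, count = 0.0, 0
    with torch.no_grad():
        for inputs, targets in data_loader:
            total += loss_fn(model(inputs), targets).item() * len(targets)
            count += len(targets)
    return total / count


def sharpness_proxy(model: nn.Module, loss_fn, data_loader,
                    sigma: float = 1e-3, n_samples: int = 10) -> float:
    """Average loss increase when weights are perturbed by N(0, sigma^2) noise."""
    model.eval()
    base_loss = _mean_loss(model, loss_fn, data_loader)
    increases = []
    for _ in range(n_samples):
        perturbed = copy.deepcopy(model)
        with torch.no_grad():
            for p in perturbed.parameters():
                p.add_(sigma * torch.randn_like(p))  # nudge each weight tensor
        increases.append(_mean_loss(perturbed, loss_fn, data_loader) - base_loss)
    return float(sum(increases) / len(increases))


if __name__ == "__main__":
    # Toy usage on random data, purely to show the call pattern.
    model = nn.Linear(8, 2)
    xs, ys = torch.randn(64, 8), torch.randint(0, 2, (64,))
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(xs, ys), batch_size=16)
    print("sharpness proxy:", sharpness_proxy(model, nn.CrossEntropyLoss(), loader))
```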
2023
Findings of the 1st Shared Task on Multi-lingual Multi-task Information Retrieval at MRL 2023
Francesco Tinner | David Ifeoluwa Adelani | Chris Emezue | Mammad Hajili | Omer Goldman | Muhammad Farid Adilazuarda | Muhammad Dehan Al Kautsar | Aziza Mirsaidova | Müge Kural | Dylan Massey | Chiamaka Chukwuneke | Chinedu Mbonu | Damilola Oluwaseun Oloyede | Kayode Olaleye | Jonathan Atala | Benjamin A. Ajibade | Saksham Bassi | Rahul Aralikatte | Najoung Kim | Duygu Ataman
Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)