A Closer Look at Parameter Contributions When Training Neural Language and Translation Models

Raúl Vázquez, Hande Celikkanat, Vinit Ravishankar, Mathias Creutz, Jörg Tiedemann


Abstract
We analyze the learning dynamics of neural language and translation models using Loss Change Allocation (LCA), a measure that enables a fine-grained analysis of parameter updates while optimizing the loss function. In other words, it lets us observe the contributions of different network components at training time. In this article, we systematically study masked language modeling, causal language modeling, and machine translation. We show that the choice of training objective leads to distinctive optimization procedures, even when performed on comparable Transformer architectures. We demonstrate how the various Transformer parameters are used during training, providing evidence that the feed-forward components of each layer are the main contributors to the optimization procedure. Finally, we find that the learning dynamics are not affected by data size and distribution but are instead determined by the learning objective.
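To make the idea behind LCA concrete, the following is a minimal sketch of a first-order per-parameter allocation, where the loss change at each update is approximated as the dot product between the gradient and the parameter change. The function names (loss_fn, lca_step) and the grouping by named parameter tensors are illustrative assumptions, not the authors' implementation.

import torch

def lca_step(model, loss_fn, batch, optimizer):
    # One training step that also records a first-order Loss Change
    # Allocation estimate per parameter tensor:
    #   LCA_i ~= grad_i(theta_t) * (theta_{t+1, i} - theta_{t, i})
    # Summing LCA over all parameters approximates the total loss change
    # caused by this update; negative values mean a parameter tensor
    # helped reduce the loss. (Sketch only, not the paper's code.)

    # Snapshot parameters before the update.
    before = {n: p.detach().clone() for n, p in model.named_parameters()}

    loss = loss_fn(model, batch)      # assumed helper returning a scalar loss
    optimizer.zero_grad()
    loss.backward()
    grads = {n: p.grad.detach().clone()
             for n, p in model.named_parameters() if p.grad is not None}
    optimizer.step()

    # Allocate the observed loss change to each parameter tensor.
    lca = {}
    for n, p in model.named_parameters():
        if n in grads:
            delta = p.detach() - before[n]
            lca[n] = torch.sum(grads[n] * delta).item()
    return loss.item(), lca

Accumulating these per-tensor values over training steps and grouping them by component (e.g., attention vs. feed-forward sublayers) yields the kind of component-level contribution curves the paper analyzes.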
Anthology ID:
2022.coling-1.424
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
4788–4800
URL:
https://aclanthology.org/2022.coling-1.424
Cite (ACL):
Raúl Vázquez, Hande Celikkanat, Vinit Ravishankar, Mathias Creutz, and Jörg Tiedemann. 2022. A Closer Look at Parameter Contributions When Training Neural Language and Translation Models. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4788–4800, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
A Closer Look at Parameter Contributions When Training Neural Language and Translation Models (Vázquez et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.424.pdf