Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks

Sunit Bhattacharya, Ondřej Bojar


Abstract
Recent research suggests that the feed-forward module within Transformers can be viewed as a collection of key-value memories, where the keys learn to capture specific patterns from the input based on the training examples, and the values combine the outputs of these ‘memories’ to generate predictions about the next token. This yields an incremental prediction process that gradually converges towards the final token choice near the output layers. This perspective raises questions about how multilingual models leverage the mechanism. Specifically, for autoregressive models trained on two or more languages, do all neurons (across layers) respond equally to all languages? No! Our hypothesis is that during pre-training, certain model parameters learn strong language-specific features, while others learn more language-agnostic (shared across languages) features. To validate this, we conduct experiments using parallel corpora in two languages on which the model was pre-trained. Our findings reveal that the layers closest to the network’s input or output exhibit more language-specific behaviour than the layers in the middle.
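To make the layer-wise probing idea concrete, below is a minimal sketch (not the authors' implementation) of how FFN activations could be recorded per language on a parallel sentence pair and compared across layers. The model name, the GPT-2-style module path, the example sentences, and the top-k neuron-overlap measure are all illustrative assumptions; the paper's own model and specificity measure may differ.

```python
# Illustrative sketch: record post-activation FFN outputs ("memory coefficients")
# per language and compare which neurons fire most strongly at each layer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in; in practice a multilingual autoregressive checkpoint would be used
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Forward hooks on the FFN activation module of each block.
# The module path below matches GPT-2-style blocks; other architectures differ.
acts = {}
def make_hook(layer_idx):
    def hook(module, inputs, output):
        acts[layer_idx] = output.detach()  # shape: (batch, seq_len, d_ff)
    return hook

handles = [block.mlp.act.register_forward_hook(make_hook(i))
           for i, block in enumerate(model.transformer.h)]

def top_neurons(sentence, k=100):
    """Return, per layer, the k FFN neurons with the highest mean activation."""
    acts.clear()
    with torch.no_grad():
        model(**tok(sentence, return_tensors="pt"))
    return {l: set(a.mean(dim=(0, 1)).topk(k).indices.tolist()) for l, a in acts.items()}

# Hypothetical parallel pair; per-layer overlap of top neurons gives a rough
# language-specificity signal (lower overlap = more language-specific neurons).
en = top_neurons("The cat sat on the mat.")
de = top_neurons("Die Katze saß auf der Matte.")
for layer in sorted(en):
    overlap = len(en[layer] & de[layer]) / len(en[layer])
    print(f"layer {layer:2d}: top-neuron overlap = {overlap:.2f}")

for h in handles:
    h.remove()
```

Under this sketch, low overlap at the first and last layers and higher overlap in the middle would correspond to the layer-wise pattern the abstract describes.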
Anthology ID:
2023.blackboxnlp-1.9
Volume:
Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month:
December
Year:
2023
Address:
Singapore
Editors:
Yonatan Belinkov, Sophie Hao, Jaap Jumelet, Najoung Kim, Arya McCarthy, Hosein Mohebbi
Venues:
BlackboxNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
120–126
URL:
https://aclanthology.org/2023.blackboxnlp-1.9
DOI:
10.18653/v1/2023.blackboxnlp-1.9
Cite (ACL):
Sunit Bhattacharya and Ondřej Bojar. 2023. Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 120–126, Singapore. Association for Computational Linguistics.
Cite (Informal):
Unveiling Multilinguality in Transformer Models: Exploring Language Specificity in Feed-Forward Networks (Bhattacharya & Bojar, BlackboxNLP-WS 2023)
PDF:
https://aclanthology.org/2023.blackboxnlp-1.9.pdf