Developmentally Plausible Multimodal Language Models Are Highly Modular

Alina Klerings, Christian Bartelt, Aaron Mueller


Abstract
Large language models demonstrate emergent modularity, where functionally specialized components and circuits arise to handle specific tasks or task formats. If similar modules arise in models trained on more cognitively plausible datasets, it could inform debates surrounding what would be learnable given more human-like language learning signals. In this paper, we describe a multimodal vision-language model submitted to the BabyLM Challenge. Our model achieves similar performance to the best-performing architectures from last year, though visual information does not improve performance on text-only tasks over text-only models (in accordance with prior findings). To better understand how the model processes the evaluation tasks of the BabyLM Challenge, we leverage causal interpretability methods to locate the neurons that contribute to the model’s final decisions. We find that the models we train are highly modular: distinct components arise to process related tasks. Furthermore, on text-and-image tasks, adding or removing visual inputs causes the model to use distinct components to process the same textual inputs. This suggests that modal and task-specific specialization is efficiently learned, and that a high degree of functional specialization arises in even small-scale language models.
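The causal interpretability approach the abstract mentions can be illustrated with activation patching: cache activations from a "clean" run, splice them one unit at a time into a "corrupted" run, and measure how much each unit restores the model's original decision. The sketch below is a minimal toy version with a random one-layer network, not the paper's actual model or metric; all names and shapes here are hypothetical.

```python
import numpy as np

# Toy "model": one hidden layer with fixed random weights (hypothetical,
# stands in for a trained network purely for illustration).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # input -> 4 hidden units
W2 = rng.normal(size=(4, 2))   # hidden -> 2 output logits

def forward(x, patch=None):
    """Run the toy model; optionally overwrite one hidden unit's activation.

    patch: (neuron_index, value) spliced into the hidden layer, or None.
    """
    h = np.tanh(x @ W1)
    if patch is not None:
        i, v = patch
        h = h.copy()
        h[i] = v
    return h @ W2

clean = rng.normal(size=8)       # input where the model behaves as desired
corrupted = rng.normal(size=8)   # minimally different input

h_clean = np.tanh(clean @ W1)                    # cache clean activations
base = forward(corrupted)
metric = lambda logits: logits[0] - logits[1]    # logit difference

# Effect of restoring each neuron's clean activation in the corrupted run:
# large values flag neurons that causally contribute to the decision.
effects = []
for i in range(4):
    patched = forward(corrupted, patch=(i, h_clean[i]))
    effects.append(metric(patched) - metric(base))

print(effects)
```

In a real model the same loop runs over layers and positions (typically via forward hooks), and neurons whose patches recover most of the clean-run metric are candidate members of a task-specific module.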
Anthology ID: 2024.conll-babylm.10
Volume: The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning
Month: November
Year: 2024
Address: Miami, FL, USA
Editors: Michael Y. Hu, Aaron Mueller, Candace Ross, Adina Williams, Tal Linzen, Chengxu Zhuang, Leshem Choshen, Ryan Cotterell, Alex Warstadt, Ethan Gotlieb Wilcox
Venues: CoNLL | BabyLM | WS
Publisher: Association for Computational Linguistics
Pages: 118–139
URL: https://aclanthology.org/2024.conll-babylm.10/
Cite (ACL): Alina Klerings, Christian Bartelt, and Aaron Mueller. 2024. Developmentally Plausible Multimodal Language Models Are Highly Modular. In The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning, pages 118–139, Miami, FL, USA. Association for Computational Linguistics.
Cite (Informal): Developmentally Plausible Multimodal Language Models Are Highly Modular (Klerings et al., CoNLL-BabyLM 2024)
PDF: https://aclanthology.org/2024.conll-babylm.10.pdf