2024
The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning
Michael Y. Hu | Aaron Mueller | Candace Ross | Adina Williams | Tal Linzen | Chengxu Zhuang | Leshem Choshen | Ryan Cotterell | Alex Warstadt | Ethan Gotlieb Wilcox
Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Michael Y. Hu | Aaron Mueller | Candace Ross | Adina Williams | Tal Linzen | Chengxu Zhuang | Ryan Cotterell | Leshem Choshen | Alex Warstadt | Ethan Gotlieb Wilcox
The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning
The BabyLM Challenge is a community effort to close the data-efficiency gap between human and computational language learners. Participants compete to optimize language model training on a fixed language data budget of 100 million words or less. This year, we released improved text corpora, as well as a vision-and-language corpus to facilitate research into cognitively plausible vision-language models. Submissions were compared on evaluation tasks targeting grammatical ability, (visual) question answering, pragmatic abilities, and grounding, among other abilities. Participants could submit to a 10M-word text-only track, a 100M-word text-only track, and/or a 100M-word text-and-image multimodal track. From 31 submissions employing diverse methods, a hybrid causal-masked language model architecture outperformed other approaches. No submissions outperformed the baselines in the multimodal track. In follow-up analyses, we found a strong relationship between training FLOPs and average performance across tasks, and that the best-performing submissions proposed changes to the training data, training objective, and model architecture. This year’s BabyLM Challenge shows that there is still significant room for innovation in this setting, in particular for image-text modeling, and that community-driven research can yield actionable insights about effective strategies for small-scale language modeling.
Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling
Chengxu Zhuang | Evelina Fedorenko | Jacob Andreas
Findings of the Association for Computational Linguistics: ACL 2024
Today’s most accurate language models are trained on orders of magnitude more language data than human language learners receive, but with no supervision from other sensory modalities that play a crucial role in human learning. Can we make LMs’ representations and predictions more accurate (and more human-like) with more ecologically plausible supervision? This paper describes LexiContrastive Grounding (LCG), a grounded language learning procedure that leverages visual supervision to improve textual representations. LexiContrastive Grounding combines a next-token prediction strategy with a contrastive visual grounding objective, focusing on early-layer representations that encode lexical information. Across multiple word-learning and sentence-understanding benchmarks, LexiContrastive Grounding not only outperforms standard language-only models in terms of learning efficiency in small and developmentally plausible data regimes, but also improves upon vision-and-language learning procedures including CLIP, GIT, Flamingo, and Vokenization. Moreover, LexiContrastive Grounding improves perplexity by around 5% on multiple language modeling tasks compared to other models trained on the same amount of text data. This work underscores the potential of incorporating visual grounding into language models, aligning more closely with the multimodal nature of human language acquisition.
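The abstract describes the method at a high level: a next-token prediction loss combined with a contrastive visual-grounding objective applied to early-layer representations. The sketch below is a rough, hypothetical illustration of that combination, not the authors' released implementation; the toy model, the grounded layer index, the caption-level mean pooling (the actual method grounds lexicon-level representations), and the loss weight alpha are all assumptions introduced for illustration.

```python
# Hypothetical sketch of an LCG-style training objective; all hyperparameters
# and architectural choices below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyGroundedLM(nn.Module):
    """Small causal transformer whose early-layer states are also used for grounding."""
    def __init__(self, vocab_size=1000, d_model=128, n_layers=4, image_dim=512, ground_layer=1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True) for _ in range(n_layers)]
        )
        self.lm_head = nn.Linear(d_model, vocab_size)
        self.ground_proj = nn.Linear(d_model, image_dim)  # maps early-layer states into image space
        self.ground_layer = ground_layer

    def forward(self, tokens):
        T = tokens.size(1)
        causal_mask = torch.triu(torch.full((T, T), float("-inf"), device=tokens.device), diagonal=1)
        h = self.embed(tokens)
        early = None
        for i, layer in enumerate(self.layers):
            h = layer(h, src_mask=causal_mask)
            if i == self.ground_layer:
                early = h  # early-layer representations, assumed to carry lexical information
        return self.lm_head(h), early

def lcg_style_loss(model, tokens, image_embs, alpha=0.5, tau=0.07):
    """Next-token prediction plus a contrastive grounding term (InfoNCE over the batch)."""
    logits, early = model(tokens)
    # Causal LM loss: predict token t+1 from the prefix up to t.
    lm_loss = F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
    )
    # Contrastive grounding: align a pooled early-layer caption representation with
    # its paired image embedding; matched pairs are positives, other batch items negatives.
    text_vec = F.normalize(model.ground_proj(early.mean(dim=1)), dim=-1)
    img_vec = F.normalize(image_embs, dim=-1)
    sims = text_vec @ img_vec.t() / tau
    targets = torch.arange(sims.size(0), device=sims.device)
    ground_loss = F.cross_entropy(sims, targets)
    return lm_loss + alpha * ground_loss

# Toy usage: random token ids and precomputed image embeddings for 8 caption-image pairs.
model = ToyGroundedLM()
tokens = torch.randint(0, 1000, (8, 16))
image_embs = torch.randn(8, 512)
lcg_style_loss(model, tokens, image_embs).backward()
```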
Visual Grounding Helps Learn Word Meanings in Low-Data Regimes
Chengxu Zhuang | Evelina Fedorenko | Jacob Andreas
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Modern neural language models (LMs) are powerful tools for modeling human sentence production and comprehension, and their internal representations are remarkably well-aligned with representations of language in the human brain. But to achieve these results, LMs must be trained in distinctly un-human-like ways, requiring orders of magnitude more language data than children receive during development, and without perceptual or social context. Do models trained more naturalistically, with grounded supervision, exhibit more human-like language learning? We investigate this question in the context of word learning, a key sub-task in language acquisition. We train a diverse set of LM architectures, with and without auxiliary visual supervision, on datasets of varying scales. We then evaluate these models’ learning of syntactic categories, lexical relations, semantic features, word similarity, and alignment with human neural representations. We find that visual supervision can indeed improve the efficiency of word learning. However, these improvements are limited: they are present almost exclusively in the low-data regime, and are sometimes canceled out by the inclusion of rich distributional signals from text. The information conveyed by text and images is not redundant: models mainly driven by visual information yield qualitatively different representations from those mainly driven by word co-occurrences. However, our results suggest that current multimodal modeling approaches fail to effectively leverage visual information to build human-like word representations from human-scale data.
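The evaluation described above compares word representations learned with and without visual grounding along several axes, including correlation with human word-similarity judgments. As an illustration only (the embeddings, judgment triples, and lookup helper below are placeholders, not the paper's benchmarks or code), a minimal version of such a word-similarity evaluation might look like:

```python
# Hypothetical sketch of a word-similarity evaluation: Spearman correlation
# between model cosine similarities and human similarity judgments.
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def word_similarity_score(get_word_vector, judgments):
    """Correlate model word-pair similarities with human ratings.

    get_word_vector: callable mapping a word to a 1-D numpy embedding
    judgments: iterable of (word1, word2, human_similarity) triples
    """
    model_sims, human_sims = [], []
    for w1, w2, human in judgments:
        model_sims.append(cosine(get_word_vector(w1), get_word_vector(w2)))
        human_sims.append(human)
    rho, _ = spearmanr(model_sims, human_sims)
    return rho

# Toy usage: random vectors stand in for a trained model's lexical representations,
# and the triples stand in for a human similarity benchmark.
rng = np.random.default_rng(0)
toy_vectors = {w: rng.normal(size=64) for w in ["cat", "dog", "car", "truck"]}
toy_judgments = [("cat", "dog", 0.9), ("car", "truck", 0.85), ("cat", "car", 0.2)]
print(word_similarity_score(toy_vectors.__getitem__, toy_judgments))
```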
2023
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning
Alex Warstadt | Aaron Mueller | Leshem Choshen | Ethan Wilcox | Chengxu Zhuang | Juan Ciro | Rafael Mosquera | Bhargavi Paranjabe | Adina Williams | Tal Linzen | Ryan Cotterell
Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora
Alex Warstadt | Aaron Mueller | Leshem Choshen | Ethan Wilcox | Chengxu Zhuang | Juan Ciro | Rafael Mosquera | Bhargavi Paranjabe | Adina Williams | Tal Linzen | Ryan Cotterell
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning