LongTail-Swap: benchmarking language models’ abilities on rare words
Robin Algayres, Charles-Éric Saint-James, Mahi Luthra, Jiayi Shen, Youssef Benchekroun, Dongyan Lin, Rashel Moritz, Juan Pino, Emmanuel Dupoux
Findings of the Association for Computational Linguistics: EMNLP 2025
Children learn to speak from a small amount of data and can be taught new words on a few-shot basis, making them particularly data-efficient learners. The BabyLM challenge aims at exploring language model (LM) training in the low-data regime but uses metrics that concentrate on the head of the word distribution. Here, we introduce LongTail-Swap (LT-Swap), a benchmark that focuses on the tail of the distribution, i.e., measures the ability of LMs to learn new words with very little exposure, like infants do. LT-Swap is a pretraining corpus-specific test set of acceptable versus unacceptable sentence pairs that isolate semantic and syntactic usage of rare words. Models are evaluated in a zero-shot fashion by computing the average log probabilities over the two members of each pair. We built two such test sets, associated with the 10M-word and 100M-word BabyLM training sets respectively, and evaluated 16 models from the BabyLM leaderboard. Our results not only highlight the poor performance of language models on rare words but also reveal that performance differences across LM architectures are much more pronounced in the long tail than in the head. This offers new insights into which architectures are better at handling rare word generalization. We have also made the code publicly available on GitHub, enabling the generation of LT-Swap benchmarks from any English text corpus.
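As a rough illustration of the zero-shot scoring described in the abstract, the sketch below is a minimal, hypothetical example (not the authors' released code), assuming a Hugging Face causal LM: it computes the average per-token log probability of each sentence in a pair and prefers the member with the higher average. The model checkpoint and the example sentence pair are placeholders.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; any causal LM from the BabyLM leaderboard could be substituted.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def avg_logprob(sentence: str) -> float:
    """Average log probability per token of `sentence` under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Predict token t from tokens < t: shift logits and targets by one position.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp.mean().item()

# Hypothetical minimal pair isolating the usage of a rare word.
acceptable = "The ornithologist ringed the rare warbler at dawn."
unacceptable = "The ornithologist rained the rare warbler at dawn."
print(avg_logprob(acceptable) > avg_logprob(unacceptable))  # True if the acceptable sentence scores higher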