Siyuan Song
2026
BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data
Jaap Jumelet | Abdellah Fourtassi | Akari Haga | Bastian Bunzeck | Bhargav Shandilya | Diana Galvan-Sosa | Faiz Ghifari Haznitrama | Francesca Padovani | Francois Meyer | Hai Hu | Julen Etxaniz | Laurent Prevot | Linyang He | María Grandury | Mila Marcheva | Negar Foroutan | Nikitas Theodoropoulos | Pouya Sadeghi | Siyuan Song | Suchir Salhan | Susana Zhou | Yurii Paniv | Ziyin Zhang | Arianna Bisazza | Alex Warstadt | Leshem Choshen
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
We present BabyBabelLM, a multilingual collection of datasets modeling the language a person observes from birth until they acquire a native language. We curate developmentally plausible pretraining data aiming to cover the equivalent of 100M English words of content in each of 45 languages. We compile evaluation suites and train baseline models in each language. BabyBabelLM aims to facilitate multilingual pretraining and cognitive modeling.
What Can String Probability Tell Us About Grammaticality?
Jennifer Hu | Ethan Gotlieb Wilcox | Siyuan Song | Kyle Mahowald | Roger P. Levy
Transactions of the Association for Computational Linguistics, Volume 14
What have language models (LMs) learned about grammar? This question remains hotly debated, with major ramifications for linguistic theory. However, since probability and grammaticality are distinct notions in linguistics, it is not obvious what string probabilities can reveal about an LM’s underlying grammatical knowledge. We present a theoretical analysis of the relationship between grammar, meaning, and string probability, based on simple assumptions about the generative process of corpus data. Our framework makes three predictions, which we validate empirically using 280K sentence pairs in English and Chinese: (1) correlation between the probability of strings within minimal pairs, i.e., string pairs with minimal semantic differences; (2) correlation between models’ and humans’ deltas within minimal pairs; and (3) poor separation in probability space between unpaired grammatical and ungrammatical strings. Our analyses give theoretical grounding for using probability to learn about LMs’ structural knowledge, and suggest directions for future work in LM grammatical evaluation.
2024
Do Large Language Models Understand Conversational Implicature? A Case Study with a Chinese Sitcom
Shisen Yue | Siyuan Song | Xinyuan Cheng | Hai Hu
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Understanding the non-literal meaning of an utterance is critical for large language models (LLMs) to become human-like social communicators. In this work, we introduce SwordsmanImp, the first Chinese multi-turn-dialogue-based dataset aimed at conversational implicature, sourced from dialogues in the Chinese sitcom My Own Swordsman. It includes 200 carefully handcrafted questions, all annotated on which Gricean maxims have been violated. We test eight closed-source and open-source LLMs under two tasks: a multiple-choice question task and an implicature explanation task. Our results show that GPT-4 attains human-level accuracy (94%) on multiple-choice questions. CausalLM demonstrates a 78.5% accuracy, following GPT-4. Other models, including GPT-3.5 and several open-source models, demonstrate a lower accuracy ranging from 20% to 60% on multiple-choice questions. Human raters were asked to rate the explanations of the implicatures generated by LLMs on their reasonability, logic, and fluency. While all models generate largely fluent and self-consistent text, their explanations score low on reasonability except for GPT-4, suggesting that most LLMs cannot produce satisfactory explanations of the implicatures in the conversation. Moreover, we find LLMs' performance does not vary significantly by Gricean maxim, suggesting that LLMs do not seem to process implicatures derived from different maxims differently. Our data and code are available at https://github.com/sjtu-compling/llm-pragmatics.
Co-authors
- Hai Hu 2
- Arianna Bisazza 1
- Bastian Bunzeck 1
- Xinyuan Cheng 1
- Leshem Choshen 1
- Julen Etxaniz 1
- Negar Foroutan 1
- Abdellah Fourtassi 1
- Diana Galván-Sosa 1
- María Grandury 1
- Akari Haga 1
- Faiz Ghifari Haznitrama 1
- Linyang He 1
- Jennifer Hu 1
- Jaap Jumelet 1
- Roger Levy 1
- Kyle Mahowald 1
- Mila Marcheva 1
- Francois Meyer 1
- Francesca Padovani 1
- Yurii Paniv 1
- Laurent Prévot 1
- Pouya Sadeghi 1
- Suchir Salhan 1
- Bhargav Shandilya 1
- Nikitas Theodoropoulos 1
- Alex Warstadt 1
- Ethan Gotlieb Wilcox 1
- Shisen Yue 1
- Ziyin Zhang 1
- Susana Zhou 1