Siyuan Song


2026

We present BabyBabelLM, a multilingual collection of datasets modeling the language a person observes from birth until they acquire a native language. We curate developmentally plausible pretraining data aiming to cover the equivalent of 100M English words of content in each of 45 languages. We compile evaluation suites and train baseline models in each language. BabyBabelLM aims to facilitate multilingual pretraining and cognitive modeling.
What have language models (LMs) learned about grammar? This question remains hotly debated, with major ramifications for linguistic theory. However, since probability and grammaticality are distinct notions in linguistics, it is not obvious what string probabilities can reveal about an LM’s underlying grammatical knowledge. We present a theoretical analysis of the relationship between grammar, meaning, and string probability, based on simple assumptions about the generative process of corpus data. Our framework makes three predictions, which we validate empirically using 280K sentence pairs in English and Chinese: (1) correlation between the probability of strings within minimal pairs, i.e., string pairs with minimal semantic differences; (2) correlation between models’ and humans’ deltas within minimal pairs; and (3) poor separation in probability space between unpaired grammatical and ungrammatical strings. Our analyses give theoretical grounding for using probability to learn about LMs’ structural knowledge, and suggest directions for future work in LM grammatical evaluation.
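As a rough illustration of the minimal-pair comparison this framework relies on, the sketch below scores both members of a pair with an off-the-shelf causal LM and reports the within-pair log-probability delta. This is not the paper's code; the model name ("gpt2") and the example sentence pair are assumptions chosen only for illustration.

```python
# Minimal sketch: score a grammatical/ungrammatical minimal pair with a causal LM.
# Model choice and example sentences are illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of token log-probabilities of the sentence under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Each position predicts the next token; the first token has no left context.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return log_probs.gather(1, targets.unsqueeze(1)).sum().item()

grammatical = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."
delta = sentence_logprob(grammatical) - sentence_logprob(ungrammatical)
print(f"log p(grammatical) - log p(ungrammatical) = {delta:.2f}")
```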

2024

Understanding the non-literal meaning of an utterance is critical for large language models (LLMs) to become human-like social communicators. In this work, we introduce SwordsmanImp, the first Chinese multi-turn-dialogue-based dataset aimed at conversational implicature, sourced from dialogues in the Chinese sitcom My Own Swordsman. It includes 200 carefully handcrafted questions, all annotated for which Gricean maxims have been violated. We test eight closed-source and open-source LLMs on two tasks: a multiple-choice question task and an implicature explanation task. Our results show that GPT-4 attains human-level accuracy (94%) on multiple-choice questions. CausalLM demonstrates 78.5% accuracy, following GPT-4. Other models, including GPT-3.5 and several open-source models, demonstrate lower accuracy, ranging from 20% to 60% on multiple-choice questions. Human raters were asked to rate the explanations of the implicatures generated by LLMs on their reasonability, logic, and fluency. While all models generate largely fluent and self-consistent text, their explanations score low on reasonability except for GPT-4, suggesting that most LLMs cannot produce satisfactory explanations of the implicatures in the conversation. Moreover, we find that LLMs' performance does not vary significantly by Gricean maxim, suggesting that LLMs do not process implicatures derived from different maxims differently. Our data and code are available at https://github.com/sjtu-compling/llm-pragmatics.
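The sketch below illustrates how the multiple-choice task described above might be scored. It is a hypothetical harness, not the released evaluation code: the field names ("dialogue", "options", "answer"), the file name, and the query_model() wrapper are all assumptions standing in for whatever schema and LLM API one actually uses.

```python
# Hypothetical multiple-choice evaluation loop; field names, file name, and
# query_model() are assumptions, not the released SwordsmanImp schema or code.
def query_model(prompt: str) -> str:
    """Stand-in for a call to the LLM being evaluated; returns its reply."""
    raise NotImplementedError("Replace with a real API or local-model call.")

def evaluate_multiple_choice(items) -> float:
    """Return accuracy over items, each a dict with dialogue, options, and a gold answer letter."""
    correct = 0
    for item in items:
        options = "\n".join(f"{label}. {text}" for label, text in item["options"].items())
        prompt = (
            f"{item['dialogue']}\n\n"
            f"What does the speaker imply?\n{options}\n"
            "Answer with a single letter."
        )
        reply = query_model(prompt).strip()
        if reply and reply[0].upper() == item["answer"]:
            correct += 1
    return correct / len(items)

# Example usage (file name is hypothetical):
# import json
# items = json.load(open("swordsmanimp.json"))
# print(f"Accuracy: {evaluate_multiple_choice(items):.1%}")
```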