Madhur Jindal
2026
Do LLMs model human linguistic variation? A case study in Hindi-English Verb code-mixing
Mukund Choudhary | Madhur Jindal | Gaurja Aeron | Monojit Choudhury
Findings of the Association for Computational Linguistics: EACL 2026
Mukund Choudhary | Madhur Jindal | Gaurja Aeron | Monojit Choudhury
Findings of the Association for Computational Linguistics: EACL 2026
Do large language models (LLMs) model linguistic variation? We investigate this question through Hindi-English (Hinglish) verb code-mixing, where speakers can use either a Hindi verb or an English verb with the light verb karna (’do’). Both forms are grammatical, but speakers show unexplained variation in language choice for the verb. We compare human preferences on controlled code-mixed minimal pairs to LLM perplexities spanning families, sizes, and training language compositions. We find that current LLMs do not reliably classify verb language preferences to match native speaker judgments. We also see that with specific supervision, some models do predict human preference to an extent. We release native speaker acceptability judgments on 30 verb pairs, perplexity ratios for 4,279 verb pairs across 7 models, and experimental materials.
2025
SAGE: A Generic Framework for LLM Safety Evaluation
Madhur Jindal | Hari Shrawgi | Parag Agrawal | Sandipan Dandapat
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Madhur Jindal | Hari Shrawgi | Parag Agrawal | Sandipan Dandapat
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
As Large Language Models are rapidly deployed across diverse applications from healthcare to financial advice, safety evaluation struggles to keep pace. Current benchmarks focus on single-turn interactions with generic policies, failing to capture the conversational dynamics of real-world usage and the application-specific harms that emerge in context. Such potential oversights can lead to harms that go unnoticed in standard safety benchmarks and other current evaluation methodologies. To address these needs for robust AI safety evaluation, we introduce SAGE (Safety AI Generic Evaluation), an automated modular framework designed for customized and dynamic harm evaluations. SAGE employs prompted adversarial agents with diverse personalities based on the Big Five model, enabling system-aware multi-turn conversations that adapt to target applications and harm policies. We evaluate seven state-of-the-art LLMs across three applications and harm policies. Multi-turn experiments show that harm increases with conversation length, model behavior varies significantly when exposed to different user personalities and scenarios, and some models minimize harm via high refusal rates that reduce usefulness. We also demonstrate policy sensitivity within a harm category where tightening a child-focused sexual policy substantially increases measured defects across applications. These results motivate adaptive, policy-aware, and context-specific testing for safer real-world deployment.