Alan Saji
2026
The Reasoning Lingua Franca: A Double-Edged Sword for Multilingual AI
Alan Saji | Raj Dabre | Anoop Kunchukuttan | Ratish Puduppully
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)
Large Reasoning Models (LRMs) achieve strong performance on mathematical, scientific, and other question-answering tasks, but their multilingual reasoning abilities remain underexplored. When presented with non-English questions, LRMs often default to reasoning in English, raising concerns about interpretability and the handling of linguistic and cultural nuances. We systematically compare an LRM’s reasoning in English versus the language of the question. Our evaluation spans two tasks: MGSM and GPQA Diamond. Beyond measuring answer accuracy, we also analyze cognitive attributes in the reasoning traces. We find that English reasoning traces exhibit a substantially higher presence of these cognitive attributes, and that reasoning in English generally yields higher final-answer accuracy, with the performance gap increasing as tasks become more complex. However, this English-centric strategy is susceptible to a key failure mode: getting “Lost in Translation,” where translation steps lead to errors that would have been avoided by reasoning in the language of the question.
RiddleBench: A New Generative Reasoning Benchmark for LLMs
Deepon Halder | Alan Saji | Thanmay Jayakumar | Anoop Kunchukuttan | Ratish Puduppully | Raj Dabre
Findings of the Association for Computational Linguistics: EACL 2026
While Large Language Models (LLMs) show remarkable capabilities, their complex reasoning skills require deeper investigation. We introduce RiddleBench, a new benchmark of 1,737 challenging puzzles designed to test reasoning beyond simple pattern matching. Our evaluation of state-of-the-art models reveals significant limitations, including hallucination cascades (uncritically accepting flawed peer reasoning) and poor self-correction due to strong self-confirmation bias. We also find that model performance is fragile, degrading when faced with reordered constraints or irrelevant information. RiddleBench serves as a resource for diagnosing these issues and guiding the development of more robust LLMs.
2025
RomanLens: The Role Of Latent Romanization In Multilinguality In LLMs
Alan Saji | Jaavid Aktar Husain | Thanmay Jayakumar | Raj Dabre | Anoop Kunchukuttan | Ratish Puduppully
Findings of the Association for Computational Linguistics: ACL 2025
Large Language Models (LLMs) exhibit strong multilingual performance despite being predominantly trained on English-centric corpora. This raises a fundamental question: How do LLMs achieve such multilingual capabilities? Focusing on languages written in non-Roman scripts, we investigate the role of Romanization—the representation of non-Roman scripts using Roman characters—as a potential bridge in multilingual processing. Using mechanistic interpretability techniques, we analyze next-token generation and find that intermediate layers frequently represent target words in Romanized form before transitioning to native script, a phenomenon we term Latent Romanization. Further, through activation patching experiments, we demonstrate that LLMs encode semantic concepts similarly across native and Romanized scripts, suggesting a shared underlying representation. Additionally, for translation into non-Roman script languages, our findings reveal that when the target language is in Romanized form, its representations emerge earlier in the model’s layers compared to native script. These insights contribute to a deeper understanding of multilingual representation in LLMs and highlight the implicit role of Romanization in facilitating language transfer.