Priya Mishra
2026
Chandomitra: Towards Generating Structured Sanskrit Poetry from Natural Language Inputs
Manoj Balaji Jagadeeshan | Samarth Bhatia | Pretam Ray | Harshul Raj Surana | Akhil Rajeev P | Priya Mishra | Annarao Kulkarni | Ganesh Ramakrishnan | Prathosh Ap | Pawan Goyal
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Text generation has achieved remarkable performance with large language models. Recent work has also shown that these models are capable of creative generation tasks, though predominantly for high-resource languages. This prompts a fundamental question: can these (large) language models be used for structured poetry generation in a low-resource language such as Sanskrit? We present Chandomitra, a dataset for translating English inputs into structured Sanskrit poetry, specifically adhering to the Anushtubh meter. We benchmark various open and closed models and scrutinize specialized techniques, such as constrained decoding and instruction fine-tuning, for the proposed task. Our constrained decoding methodology achieves 99.86% syntactic accuracy in generating metrically valid Sanskrit poetry, outperforming GPT-4o (1-shot: 31.24%). Our best-performing instruction-tuned model, on the other hand, achieves better semantic coherence with the English input, at the expense of slightly lower syntactic accuracy. Human evaluation further reveals that the instruction fine-tuned model better captures the poetic aspects.
2025
GuideQ: Framework for Guided Questioning for progressive informational collection and classification
Priya Mishra | Suraj Racha | Kaustubh Ponkshe | Adit Akarsh | Ganesh Ramakrishnan
Findings of the Association for Computational Linguistics: NAACL 2025
The veracity of a factoid is largely independent of the language it is written in. However, language models are inconsistent in their ability to answer the same factual question across languages. This raises questions about how LLMs represent a given fact across languages. We explore multilingual factual knowledge through two aspects: the model's ability to answer a query consistently across languages, and its ability to "store" answers in a representation shared across several languages. We propose a methodology to measure the extent of representation sharing across languages by repurposing knowledge editing methods. We examine LLMs with various multilingual configurations using a new multilingual dataset. We reveal that high consistency does not necessarily imply shared representation, particularly for languages with different scripts. Moreover, we find that script similarity is a dominant factor in representation sharing. Finally, we observe that if LLMs could fully share knowledge across languages, their accuracy in their best-performing language could benefit from an increase of up to 150% on average. These findings highlight the need for improved multilingual knowledge representation in LLMs and suggest a path for the development of more robust and consistent multilingual LLMs.