Sonal Khosla


2025

pdf bib
OVQA: A Dataset for Visual Question Answering and Multimodal Research in Odia Language
Shantipriya Parida | Shashikanta Sahoo | Sambit Sekhar | Kalyanamalini Sahoo | Ketan Kotwal | Sonal Khosla | Satya Ranjan Dash | Aneesh Bose | Guneet Singh Kohli | Smruti Smita Lenka | Ondřej Bojar
Proceedings of the First Workshop on Natural Language Processing for Indo-Aryan and Dravidian Languages

This paper introduces OVQA, the first multimodal dataset designed for visual question answering (VQA), visual question elicitation (VQE), and multimodal research for the low-resource Odia language. The dataset was created by manually translating 6,149 English question-answer pairs, each associated with one of 6,149 unique images from the Visual Genome dataset. This effort resulted in 27,809 English-Odia parallel sentences that semantically match the corresponding visual information. Several baseline experiments were conducted on the dataset, including visual question answering and visual question elicitation. As the first VQA dataset for the low-resource Odia language, it will be released for multimodal research purposes and will also help researchers extend this work to other low-resource languages.

pdf bib
LLM Compression: How Far Can We Go in Balancing Size and Performance?
Sahil Sk | Debashish Dhal | Sonal Khosla | Akash Dhaka | Shantipriya Parida | Sk Shahid | Sambit Shekhar | Dilip Prasad | Ondřej Bojar
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era

Quantization is an essential and popular technique for improving the accessibility of large language models (LLMs): it reduces memory usage and computational cost while largely maintaining performance. In this study, we apply 4-bit Group Scaling Quantization (GSQ) and Generative Pretrained Transformer Quantization (GPTQ) to LLaMA 1B, Qwen 0.5B, and PHI 1.5B, evaluating their impact across multiple NLP tasks. We benchmark these models on the MS MARCO (Information Retrieval), BoolQ (Boolean Question Answering), and GSM8K (Mathematical Reasoning) datasets, assessing both accuracy and efficiency. The study measures the trade-offs between model compression and task performance, analyzing key evaluation metrics, namely accuracy, inference latency, and throughput. It provides insights into the suitability of low-bit quantization for real-world deployment, highlighting the trade-offs between memory, compute, and latency in such settings and helping users make informed decisions.
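The core idea behind group-wise low-bit quantization, as studied in this paper, can be illustrated with a minimal sketch: weights are split into small groups, each group gets its own floating-point scale, and the values are rounded to a 4-bit integer grid. This is a hypothetical illustration of the general technique, not the paper's actual GSQ or GPTQ implementation (function names and the group size of 8 are assumptions for the example).

```python
# Minimal sketch of symmetric group-wise 4-bit quantization.
# Hypothetical illustration only; not the paper's GSQ/GPTQ code.
import numpy as np

def quantize_groups(weights, group_size=8, bits=4):
    """Quantize a 1-D float vector in groups, one scale per group."""
    qmax = 2 ** (bits - 1) - 1            # symmetric 4-bit range: [-8, 7]
    w = weights.reshape(-1, group_size)   # one row per group
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0             # avoid division by zero
    q = np.clip(np.round(w / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize_groups(q, scales, shape):
    """Recover an approximate float vector from codes and group scales."""
    return (q * scales).reshape(shape).astype(np.float32)

rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
q, s = quantize_groups(w)
w_hat = dequantize_groups(q, s, w.shape)
max_err = float(np.abs(w - w_hat).max())  # bounded by half the largest scale
```

Per-group scales keep the rounding error proportional to each group's own magnitude, which is why group-wise schemes lose far less accuracy than a single per-tensor scale at the same bit width; the memory saving comes from storing 4-bit codes plus one scale per group instead of full-precision weights.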