Rafiq Ali
2026
Do Clinical Question Answering Systems Really Need Specialised Medical Fine Tuning?
Sushant Kumar Ray | Gautam Siddharth Kashyap | Sahil Tripathi | Nipun Joshi | Vijay Govindarajan | Rafiq Ali | Jiechao Gao | Usman Naseem
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 5: Industry Track)
Clinical Question-Answering (CQA) industry systems increasingly rely on Large Language Models (LLMs), yet their deployment is often guided by the assumption that domain-specific fine-tuning is essential. Although specialised medical LLMs such as BioBERT, BioGPT, and PubMedBERT remain popular, they face practical limitations including narrow coverage, high retraining costs, and limited adaptability. Efforts based on Supervised Fine-Tuning (SFT) have attempted to address these limitations but continue to reinforce what we term the SPECIALISATION FALLACY—the belief that specialised medical LLMs are inherently superior for CQA. To address this assumption, we introduce MEDASSESS-X, a deployment-oriented industry CQA framework that applies alignment at inference time rather than through SFT. MEDASSESS-X uses lightweight steering vectors to guide model activations toward medically consistent reasoning without updating model weights or requiring domain-specific retraining. This inference-time alignment layer stabilises CQA performance across both general-purpose and specialised medical LLMs, thereby resolving the SPECIALISATION FALLACY. Empirically, MEDASSESS-X delivers consistent gains across all LLM families, improving Accuracy by up to +6%, Factual Consistency by +7%, and reducing Safety Error Rate by as much as 50%.
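The core mechanism here—adding a fixed steering vector to a model's hidden activations at inference time, with no weight updates—can be illustrated with a minimal sketch. Everything below (the `steer` function, the toy hidden state, the values of `ALPHA` and `STEER_VEC`) is a hypothetical illustration of the general technique, not the paper's actual implementation.

```python
# Toy sketch of inference-time activation steering: shift a hidden
# state along a fixed direction. All names and values are illustrative.

ALPHA = 0.8  # hypothetical steering strength
STEER_VEC = [0.1, -0.2, 0.05, 0.3]  # hypothetical "medically consistent" direction

def steer(hidden, vec=STEER_VEC, alpha=ALPHA):
    """Return hidden + alpha * vec, elementwise.

    No model weights change: the adjustment is applied only to the
    activations at inference time, which is the key point.
    """
    return [h + alpha * v for h, v in zip(hidden, vec)]

hidden_state = [0.5, 0.5, 0.5, 0.5]
steered = steer(hidden_state)
```

In a real model this shift would be applied inside a hook on one or more transformer layers, so the same base weights serve both steered and unsteered queries.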
Do Large Language Models Reflect Demographic Pluralism in Safety?
Usman Naseem | Gautam Siddharth Kashyap | Sushant Kumar Ray | Rafiq Ali | Ebad Shabbir | Abdullah Mohammad
Findings of the Association for Computational Linguistics: EACL 2026
Large Language Model (LLM) safety is inherently pluralistic, reflecting variations in moral norms, cultural expectations, and demographic contexts. Yet, existing alignment datasets such as Anthropic-HH and DICES rely on demographically narrow annotator pools, overlooking variation in safety perception across communities. Demo-SafetyBench addresses this gap by modeling demographic pluralism directly at the prompt level, decoupling value framing from responses. In Stage I, prompts from DICES are reclassified into 14 safety domains (adapted from BeaverTails) using Mistral-7B-Instruct-v0.3, retaining demographic metadata and expanding low-resource domains via Llama-3.1-8B-Instruct with SimHash-based deduplication, yielding 43,050 samples. In Stage II, pluralistic sensitivity is evaluated using LLMs-as-Raters—Gemma-7B, GPT-4o, and LLaMA-2-7B—under zero-shot inference. Balanced thresholds (delta = 0.5, tau = 10) achieve high reliability (ICC = 0.87) and low demographic sensitivity (DS = 0.12), confirming that pluralistic safety evaluation can be both scalable and demographically robust. Code and data available at: https://github.com/usmaann/Demo-SafetyBench
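The SimHash-based deduplication step mentioned above can be sketched in a few lines: each sample gets a locality-sensitive fingerprint, and a new sample is kept only if its fingerprint is far (in Hamming distance) from every fingerprint already kept. This is a generic SimHash sketch, not the paper's pipeline; the `threshold` value and the use of MD5 as the token hash are assumptions.

```python
import hashlib

def simhash(text, bits=64):
    """64-bit SimHash over whitespace tokens (MD5 as the token hash)."""
    weights = [0] * bits
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, w in enumerate(weights) if w > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def dedup(samples, threshold=3):
    """Keep a sample only if its SimHash is far from all kept ones."""
    kept, sigs = [], []
    for s in samples:
        sig = simhash(s)
        if all(hamming(sig, t) > threshold for t in sigs):
            kept.append(s)
            sigs.append(sig)
    return kept
```

Because near-duplicate texts share most tokens, their fingerprints differ in only a few bits, so this catches paraphrase-level duplicates that exact-match dedup would miss.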
2025
TSR@CASE 2025: Low Dimensional Multimodal Fusion Using Multiplicative Fine Tuning Modules
Sushant Kr. Ray | Rafiq Ali | Abdullah Mohammad | Ebad Shabbir | Samar Wazir
Proceedings of the 8th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Texts
This study describes our submission to the CASE 2025 shared task on multimodal hate event detection, which frames hate detection, hate target identification, stance determination, and humour detection on text-embedded images as classification challenges. Our submission contains entries in all of the subtasks. We propose FIMIF, a lightweight and efficient classification model that leverages frozen CLIP encoders. We utilise a feature interaction module that allows the model to exploit multiplicative interactions between features without any manual engineering. Our results demonstrate that the model achieves performance comparable or superior to larger models, despite having a significantly smaller parameter count.
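One common way to realise "multiplicative interactions between features" from frozen encoders is to augment the concatenated text and image features with their elementwise product before the classifier head. The sketch below shows that generic fusion pattern; it is an assumption about the style of interaction, not FIMIF's actual module.

```python
def multiplicative_fusion(text_feat, image_feat):
    """Fuse two equal-length feature vectors from frozen encoders.

    Returns [text ; image ; text * image]: the elementwise product
    term lets a linear classifier exploit feature interactions
    without any hand-engineered cross features.
    """
    product = [t * v for t, v in zip(text_feat, image_feat)]
    return text_feat + image_feat + product

fused = multiplicative_fusion([1.0, 2.0], [3.0, 4.0])
```

The product term is what gives a downstream linear layer access to second-order (bilinear-style) interactions at negligible parameter cost.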
Truth, Trust, and Trouble: Medical AI on the Edge
Mohammad Anas Azeez | Rafiq Ali | Ebad Shabbir | Zohaib Hasan Siddiqui | Gautam Siddharth Kashyap | Jiechao Gao | Usman Naseem
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Large Language Models (LLMs) hold significant promise for transforming digital health by enabling automated medical question answering. However, ensuring these models meet critical industry standards for factual accuracy, usefulness, and safety remains a challenge, especially for open-source solutions. We present a rigorous benchmarking framework built on a dataset of over 1,000 health questions. We assess model performance across honesty, helpfulness, and harmlessness. Our results highlight trade-offs between factual reliability and safety among evaluated models—Mistral-7B, BioMistral-7B-DARE, and AlpaCare-13B. AlpaCare-13B achieves the highest accuracy (91.7%) and harmlessness (0.92), while domain-specific tuning in BioMistral-7B-DARE boosts safety (0.90) despite its smaller scale. Few-shot prompting improves accuracy from 78% to 85%, and all models show reduced helpfulness on complex queries, highlighting challenges in clinical QA. Our code is available at: https://github.com/AnasAzeez/TTT
LLMs on a Budget? Say HOLA
Zohaib Hasan Siddiqui | Jiechao Gao | Ebad Shabbir | Mohammad Anas Azeez | Rafiq Ali | Gautam Siddharth Kashyap | Usman Naseem
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Running Large Language Models (LLMs) on edge devices is constrained by high compute and memory demands—posing a barrier for real-time applications in industries like healthcare, education, and embedded systems. Current solutions such as quantization, pruning, and Retrieval-Augmented Generation (RAG) offer only partial optimizations and often compromise on speed or accuracy. We introduce HOLA, an end-to-end optimization framework for efficient LLM deployment. Internally, it leverages Hierarchical Speculative Decoding (HSD) for faster inference without quality loss. Externally, AdaComp-RAG adjusts retrieval complexity based on context needs. Together with Lo-Bi, which blends structured pruning (LoRA) and quantization, HOLA delivers significant gains: +17.6% EMA on GSM8K, +10.5% MCA on ARC, and reduced latency and memory on edge devices like Jetson Nano—proving the framework both scalable and production-ready. Our code is available at: https://github.com/zohaibhasan066/HOLA_Codebase
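The speculative-decoding idea that HSD builds on—draft several tokens with a cheap model, then verify them in one pass with the expensive model, keeping the accepted prefix—can be sketched with toy deterministic models. Both `draft_model` and `target_model` below are hypothetical lookup-table stand-ins, not the paper's models.

```python
# Toy sketch of (non-hierarchical) speculative decoding with
# deterministic stand-in models. Accepted draft tokens each save
# one expensive target-model call.

def draft_model(prefix):
    """Cheap model: guesses the next token from a small lookup."""
    guesses = {"": "the", "the": "cat", "the cat": "sat", "the cat sat": "down"}
    return guesses.get(prefix, "<eos>")

def target_model(prefix):
    """Expensive model: the output the final text must match."""
    truth = {"": "the", "the": "cat", "the cat": "sat", "the cat sat": "<eos>"}
    return truth.get(prefix, "<eos>")

def speculative_decode(k=3):
    """Draft k tokens, verify against the target, repeat until <eos>."""
    out = []
    while True:
        drafts = []
        for _ in range(k):
            drafts.append(draft_model(" ".join(out + drafts)))
        accepted = []
        for tok in drafts:
            truth = target_model(" ".join(out + accepted))
            if tok == truth:
                accepted.append(tok)       # draft confirmed, no rework
            else:
                accepted.append(truth)     # first mismatch: take truth, stop
                break
        out.extend(accepted)
        if out[-1] == "<eos>":
            return out[:-1]
```

The output is always identical to greedy decoding with the target model alone; the speed-up comes from batching the verification of several drafted tokens, and a hierarchical variant chains drafters of increasing size.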