Aizan Zafar


2025

MedEx: Enhancing Medical Question-Answering with First-Order Logic based Reasoning and Knowledge Injection
Aizan Zafar | Kshitij Mishra | Asif Ekbal
Proceedings of the 31st International Conference on Computational Linguistics

In medical question-answering, traditional knowledge triples often fail due to superfluous data and their inability to capture complex relationships between symptoms and treatments across diseases. This limits models’ ability to provide accurate, contextually relevant responses. To overcome this, we introduce MedEx, which employs First-Order Logic (FOL)-based reasoning to model intricate relationships between diseases and treatments. We construct FOL-based triplets that encode the interplay of symptoms, diseases, and treatments, capturing not only surface-level data but also the logical constraints of the medical domain. MedEx encodes the discourse (questions and context) using a transformer-based unit, enhancing context comprehension. These encodings are processed by a Knowledge Injection Cell that integrates knowledge graph triples via a Graph Attention Network. The Logic Fusion Cell then combines medical-specific logical rule triples (e.g., co-occurrence, causation, diagnosis) with knowledge triples and extracts answers through a feed-forward layer. Our analysis demonstrates MedEx’s effectiveness and generalization across medical question-answering tasks. By merging logical reasoning with knowledge, MedEx provides precise medical answers and adapts its logical rules based on training data nuances.

2024

MedLogic-AQA: Enhancing Medicare Question Answering with Abstractive Models Focusing on Logical Structures
Aizan Zafar | Kshitij Mishra | Asif Ekbal
Findings of the Association for Computational Linguistics: EMNLP 2024

In Medicare question-answering (QA) tasks, the need for effective systems is pivotal in delivering accurate responses to intricate medical queries. However, existing approaches often struggle to grasp the intricate logical structures and relationships inherent in medical contexts, thus limiting their capacity to furnish precise and nuanced answers. In this work, we address this gap by proposing a novel abstractive QA system, MedLogic-AQA, that harnesses first-order logic-based rules extracted from both context and questions to generate well-grounded answers. Through initial experimentation, we identified six pertinent first-order logical rules, which were then used to train a Logic-Understanding (LU) model capable of generating logical triples for a given context, question, and answer. These logic triples are then integrated into the training of MedLogic-AQA, enabling coherent, well-reasoned answer generation. This distinctive fusion of logical reasoning with abstractive question answering equips our system to produce answers that are logically sound, relevant, and engaging. Evaluation with both automated and human-based metrics demonstrates the robustness of MedLogic-AQA against strong baselines. Through empirical assessments and case studies, we validate the efficacy of MedLogic-AQA in elevating the quality and comprehensiveness of answers in terms of reasoning as well as informativeness.

2022

CDialog: A Multi-turn Covid-19 Conversation Dataset for Entity-Aware Dialog Generation
Deeksha Varshney | Aizan Zafar | Niranshu Behera | Asif Ekbal
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing