Pranav Rajpurkar
2026
The Doctor Will Agree With You Now: Sycophancy of Large Language Models in Multi-Turn Medical Conversations
Taeil Matthew Kim | Luyang Luo | Sung Eun Kim | Arjun Kumar Manrai | Eric Topol | Pranav Rajpurkar
Proceedings of the 1st Workshop on Linguistic Analysis for Health (HeaLing 2026)
Large language models (LLMs) increasingly exhibit sycophancy—the tendency to conform to user beliefs rather than provide factually accurate information—posing significant risks in healthcare applications where reliability is paramount. We evaluate sycophantic behavior in ten LLMs from OpenAI, Google, and Anthropic across multi-turn medical conversations using an escalatory pushback framework. To enable fine-grained analysis, we introduce Resistance, a metric that measures nonconformity to user stances at each conversational turn, providing insights beyond existing flip-based metrics. Evaluating on MedCaseReasoning (open-ended diagnostic questions) and PubMedQA (clear-answer biomedical questions), we find that Gemini models exhibit the highest Resistance, followed by OpenAI and Claude models. We further observe that response patterns ("Yes, but..." vs. "Yes, and...") may be more predictive of sycophancy than specific phrases. Notably, all models are more easily persuaded to change their answers on clear multiple-choice questions than on ambiguous diagnostic cases. Our findings highlight critical vulnerabilities in deploying LLMs for clinical decision support and suggest that training toward contradiction-maintaining response patterns may serve as a potential mitigation strategy.
Do Mixed-Vendor Multi-Agent LLMs Improve Clinical Diagnosis?
Grace Chang Yuan | Xiaoman Zhang | Sung Eun Kim | Pranav Rajpurkar
Proceedings of the 1st Workshop on Linguistic Analysis for Health (HeaLing 2026)
Multi-agent large language model (LLM) systems have emerged as a promising approach for clinical diagnosis, leveraging collaboration among agents to refine medical reasoning. However, most existing frameworks rely on single-vendor teams (e.g., multiple agents from the same model family), which risk correlated failure modes that reinforce shared biases rather than correcting them. We investigate the impact of vendor diversity by comparing Single-LLM, Single-Vendor, and Mixed-Vendor Multi-Agent Conversation (MAC) frameworks. Using three doctor agents instantiated with o4-mini, Gemini-2.5-Pro, and Claude-4.5-Sonnet, we evaluate performance on RareBench and DiagnosisArena. Mixed-vendor configurations consistently outperform single-vendor counterparts, achieving state-of-the-art recall and accuracy. Overlap analysis reveals the underlying mechanism: mixed-vendor teams pool complementary inductive biases, surfacing correct diagnoses that individual models or homogeneous teams collectively miss. These results highlight vendor diversity as a key design principle for robust clinical diagnostic systems.
2023
Style-Aware Radiology Report Generation with RadGraph and Few-Shot Prompting
Benjamin Yan | Ruochen Liu | David Kuo | Subathra Adithan | Eduardo Reis | Stephen Kwak | Vasantha Venugopal | Chloe O’Connell | Agustina Saenz | Pranav Rajpurkar | Michael Moor
Findings of the Association for Computational Linguistics: EMNLP 2023
Automatically generated reports from medical images promise to improve the workflow of radiologists. Existing methods consider an image-to-report modeling task by directly generating a fully-fledged report from an image. However, this conflates the content of the report (e.g., findings and their attributes) with its style (e.g., format and choice of words), which can lead to clinically inaccurate reports. To address this, we propose a two-step approach for radiology report generation. First, we extract the content from an image; then, we verbalize the extracted content into a report that matches the style of a specific radiologist. For this, we leverage RadGraph—a graph representation of reports—together with large language models (LLMs). In our quantitative evaluations, we find that our approach leads to beneficial performance. Our human evaluation with clinical raters highlights that the AI-generated reports are indistinguishably tailored to the style of individual radiologists despite leveraging only a few examples as context.
Exploring the Boundaries of GPT-4 in Radiology
Qianchu Liu | Stephanie Hyland | Shruthi Bannur | Kenza Bouzid | Daniel Castro | Maria Wetscherek | Robert Tinn | Harshita Sharma | Fernando Pérez-García | Anton Schwaighofer | Pranav Rajpurkar | Sameer Khanna | Hoifung Poon | Naoto Usuyama | Anja Thieme | Aditya Nori | Matthew Lungren | Ozan Oktay | Javier Alvarez-Valle
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
The recent success of general-domain large language models (LLMs) has significantly changed the natural language processing paradigm towards a unified foundation model across domains and applications. In this paper, we focus on assessing the performance of GPT-4, the most capable LLM so far, on text-based applications for radiology reports, comparing against state-of-the-art (SOTA) radiology-specific models. Exploring various prompting strategies, we evaluated GPT-4 on a diverse range of common radiology tasks and found that it either outperforms or is on par with current SOTA radiology models. With zero-shot prompting, GPT-4 already obtains substantial gains (≈ 10% absolute improvement) over radiology models in temporal sentence similarity classification (accuracy) and natural language inference (F1). For tasks that require learning dataset-specific style or schema (e.g. findings summarisation), GPT-4 improves with example-based prompting and matches supervised SOTA. Our extensive error analysis with a board-certified radiologist shows that GPT-4 has a sufficient level of radiology knowledge, with only occasional errors in complex contexts that require nuanced domain knowledge. For findings summarisation, GPT-4 outputs are found to be overall comparable with existing manually-written impressions.
2020
Combining Automatic Labelers and Expert Annotations for Accurate Radiology Report Labeling Using BERT
Akshay Smit | Saahil Jain | Pranav Rajpurkar | Anuj Pareek | Andrew Ng | Matthew Lungren
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
The extraction of labels from radiology text reports enables large-scale training of medical imaging models. Existing approaches to report labeling typically rely either on sophisticated feature engineering based on medical domain knowledge or on manual annotations by experts. In this work, we introduce a BERT-based approach to medical image report labeling that exploits both the scale of available rule-based systems and the quality of expert annotations. We demonstrate superior performance of a biomedically pretrained BERT model first trained on annotations of a rule-based labeler and then finetuned on a small set of expert annotations augmented with automated backtranslation. We find that our final model, CheXbert, is able to outperform the previous best rule-based labeler with statistical significance, setting a new SOTA for report labeling on one of the largest datasets of chest X-rays.
2018
Know What You Don’t Know: Unanswerable Questions for SQuAD
Pranav Rajpurkar | Robin Jia | Percy Liang
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context. Existing datasets either focus exclusively on answerable questions, or use automatically generated unanswerable questions that are easy to identify. To address these weaknesses, we present SQuADRUn, a new dataset that combines the existing Stanford Question Answering Dataset (SQuAD) with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuADRUn, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. SQuADRUn is a challenging natural language understanding task for existing models: a strong neural system that gets 86% F1 on SQuAD achieves only 66% F1 on SQuADRUn. We release SQuADRUn to the community as the successor to SQuAD.
2016
SQuAD: 100,000+ Questions for Machine Comprehension of Text
Pranav Rajpurkar | Jian Zhang | Konstantin Lopyrev | Percy Liang
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Co-authors
- Sung Eun Kim 2
- Percy Liang 2
- Matthew Lungren 2
- Subathra Adithan 1
- Javier Alvarez-Valle 1
- Shruthi Bannur 1
- Kenza Bouzid 1
- Daniel Castro 1
- Stephanie Hyland 1
- Saahil Jain 1
- Robin Jia 1
- Sameer Khanna 1
- Taeil Matthew Kim 1
- David Kuo 1
- Stephen Kwak 1
- Ruochen Liu 1
- Qianchu Liu 1
- Konstantin Lopyrev 1
- Luyang Luo 1
- Arjun Kumar Manrai 1
- Michael Moor 1
- Andrew Y. Ng 1
- Aditya Nori 1
- Ozan Oktay 1
- Chloe O’Connell 1
- Anuj Pareek 1
- Hoifung Poon 1
- Fernando Pérez-García 1
- Eduardo Reis 1
- Agustina Saenz 1
- Anton Schwaighofer 1
- Harshita Sharma 1
- Akshay Smit 1
- Anja Thieme 1
- Robert Tinn 1
- Eric Topol 1
- Naoto Usuyama 1
- Vasantha Venugopal 1
- Maria Wetscherek 1
- Benjamin Yan 1
- Grace Chang Yuan 1
- Jian Zhang 1
- Xiaoman Zhang 1