Denis McInerney
2024
Towards Reducing Diagnostic Errors with Interpretable Risk Prediction
Denis McInerney | William Dickinson | Lucy Flynn | Andrea Young | Geoffrey Young | Jan-Willem van de Meent | Byron Wallace
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Many diagnostic errors occur because clinicians cannot easily access relevant information in patient Electronic Health Records (EHRs). In this work we propose a method to use LLMs to identify pieces of evidence in patient EHR data that indicate increased or decreased risk of specific diagnoses; our ultimate aim is to increase access to evidence and reduce diagnostic errors. In particular, we propose a Neural Additive Model to make predictions backed by evidence with individualized risk estimates at time-points where clinicians are still uncertain, aiming to specifically mitigate delays in diagnosis and errors stemming from an incomplete differential. To train such a model, it is necessary to infer temporally fine-grained retrospective labels of eventual “true” diagnoses. We do so with LLMs, to ensure that the input text is from before a confident diagnosis can be made. We use an LLM to retrieve an initial pool of evidence, but then refine this set of evidence according to correlations learned by the model. We conduct an in-depth evaluation of the usefulness of our approach by simulating how it might be used by a clinician to decide between a pre-defined list of differential diagnoses.
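To make the modeling idea in the abstract concrete, below is a minimal sketch of a Neural Additive Model in PyTorch: each piece of LLM-extracted evidence feeds its own small network, and the per-evidence contributions are summed into a single risk score, so every prediction decomposes into interpretable parts. The network sizes, feature inputs, and overall wiring here are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical Neural Additive Model sketch (not the paper's exact architecture).
# Each evidence feature gets its own small MLP; the per-feature outputs are summed
# with a bias, so the predicted risk decomposes into per-evidence contributions.
import torch
import torch.nn as nn


class FeatureNet(nn.Module):
    """Small MLP mapping one scalar evidence feature to one additive contribution."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class NeuralAdditiveModel(nn.Module):
    """Sum of per-feature networks plus a bias; a sigmoid gives the diagnosis risk."""

    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        self.feature_nets = nn.ModuleList(FeatureNet(hidden) for _ in range(num_features))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor):
        # x: (batch, num_features) -> contributions: (batch, num_features)
        contributions = torch.cat(
            [net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)], dim=1
        )
        logits = contributions.sum(dim=1) + self.bias
        # Return both the overall risk and the per-evidence attributions.
        return torch.sigmoid(logits), contributions


# Example: 5 evidence features (e.g. LLM-scored findings) for a batch of 4 patients.
model = NeuralAdditiveModel(num_features=5)
risk, per_evidence = model(torch.randn(4, 5))
```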
2023
CHiLL: Zero-shot Custom Interpretable Feature Extraction from Clinical Notes with Large Language Models
Denis McInerney | Geoffrey Young | Jan-Willem van de Meent | Byron Wallace
Findings of the Association for Computational Linguistics: EMNLP 2023
We propose CHiLL (Crafting High-Level Latents), an approach for natural-language specification of features for linear models. CHiLL prompts LLMs with expert-crafted queries to generate interpretable features from health records. The resulting noisy labels are then used to train a simple linear classifier. Generating features based on queries to an LLM can empower physicians to use their domain expertise to craft features that are clinically meaningful for a downstream task of interest, without having to manually extract these from raw EHR. We are motivated by a real-world risk prediction task, but as a reproducible proxy, we use MIMIC-III and MIMIC-CXR data and standard predictive tasks (e.g., 30-day readmission) to evaluate this approach. We find that linear models using automatically extracted features are comparably performant to models using reference features, and provide greater interpretability than linear models using “Bag-of-Words” features. We verify that learned feature weights align well with clinical expectations.
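The pipeline the abstract describes can be sketched in a few lines: expert-written yes/no queries are answered by an LLM over each note, the (noisy) answers become binary features, and a simple linear classifier is trained on them. The query texts and the `ask_llm` helper below are hypothetical placeholders, not an interface from the paper.

```python
# Hedged sketch of a CHiLL-style pipeline: LLM answers to expert-crafted yes/no
# queries become binary features for an interpretable linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Example expert-crafted queries (illustrative, not taken from the paper).
QUERIES = [
    "Does the patient have a history of heart failure?",
    "Is the patient currently on anticoagulants?",
    "Does the note describe shortness of breath?",
]


def ask_llm(note: str, query: str) -> int:
    """Placeholder for an LLM call returning 1 for 'yes' and 0 for 'no'."""
    raise NotImplementedError("Plug in an LLM of your choice here.")


def featurize(notes: list[str]) -> np.ndarray:
    """Turn each note into a binary feature vector, one column per expert query."""
    return np.array([[ask_llm(note, q) for q in QUERIES] for note in notes])


def train(notes: list[str], labels: list[int]) -> LogisticRegression:
    X = featurize(notes)
    clf = LogisticRegression()  # interpretable: one weight per named query
    clf.fit(X, labels)
    return clf
```

Because each column corresponds to a named clinical question, the fitted coefficients can be read directly as the model's learned weight for that question, which is the interpretability property the abstract emphasizes.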