Atieh Pajouhi
2022
A Cross-document Coreference Dataset for Longitudinal Tracking across Radiology Reports
Surabhi Datta | Hio Cheng Lam | Atieh Pajouhi | Sunitha Mogalla | Kirk Roberts
Proceedings of the Thirteenth Language Resources and Evaluation Conference
This paper proposes a new cross-document coreference resolution (CDCR) dataset for identifying co-referring radiological findings and medical devices across a patient's radiology reports. Our annotated corpus contains 5872 mentions (findings and devices) spanning 638 MIMIC-III radiology reports across 60 patients, covering multiple imaging modalities and anatomies. There are a total of 2292 mention chains. We describe the annotation process in detail, highlighting the complexities involved in creating a sizable and realistic dataset for radiology CDCR. We apply two baseline methods, string matching and transformer language models (BERT), to identify cross-report coreferences. Our results indicate that further model development, targeting a better understanding of domain language and context, is required to address this challenging and unexplored task. The dataset can serve as a resource for developing more advanced natural language processing methods for CDCR in the future. This is one of the first attempts focusing on CDCR in the clinical domain, and it holds potential to benefit physicians and clinical research through long-term tracking of radiology findings.
RadQA: A Question Answering Dataset to Improve Comprehension of Radiology Reports
Sarvesh Soni | Meghana Gudala | Atieh Pajouhi | Kirk Roberts
Proceedings of the Thirteenth Language Resources and Evaluation Conference
We present a radiology question answering dataset, RadQA, with 3074 questions posed against radiology reports and annotated by physicians with their corresponding answer spans (resulting in a total of 6148 question-answer evidence pairs). The questions are manually created from the clinical referral sections of the reports, which reflect the actual information needs of ordering physicians and eliminate bias from seeing the answer context (and, further, organically create unanswerable questions). The answer spans are marked within the Findings and Impressions sections of a report. The dataset aims to satisfy complex clinical requirements by including complete (yet concise) answer phrases, which are not just entities and can span multiple lines. We conduct a thorough analysis of the proposed dataset, examining the broad categories of annotation disagreement (providing insights into the errors humans make) and the reasoning required to answer a question (uncovering the heavy dependence on medical knowledge). Advanced transformer language models achieve a best F1 score of 63.55 on the test set, whereas the best human performance is 90.31 (with an average of 84.52). This demonstrates the challenging nature of RadQA, which leaves ample scope for future research on methods.
Co-authors
- Kirk Roberts 2
- Surabhi Datta 1
- Hio Cheng Lam 1
- Sunitha Mogalla 1
- Sarvesh Soni 1
Venues
- LREC 2