A Hybrid Deep Learning Approach for Spatial Trigger Extraction from Radiology Reports

Surabhi Datta, Kirk Roberts


Abstract
Radiology reports contain important clinical information about patients, which is often tied together through spatial expressions. Spatial expressions (or triggers) are mainly used to describe the positioning of radiographic findings or medical devices with respect to anatomical structures. Because these expressions result from the radiologist's mental visualization of their interpretations, they are varied and complex. The focus of this work is to automatically identify spatial expression terms in reports from three different radiology sub-domains. We propose a hybrid deep learning-based NLP method that 1) generates a set of candidate spatial triggers by exact match with the trigger terms known from the training data, 2) applies domain-specific constraints to filter the candidate triggers, and 3) uses a BERT-based classifier to predict whether a candidate trigger is a true spatial trigger. The results are promising, with an improvement of 24 points in average F1 measure over a standard BERT-based sequence labeler.
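The first two steps of the pipeline described in the abstract can be sketched in miniature. This is an illustrative sketch only, not the authors' implementation: the lexicon entries and the "in keeping" filtering constraint are hypothetical stand-ins for the training-data triggers and the paper's domain-specific constraints, and step 3 (the BERT classifier) is omitted.

```python
import re


def build_trigger_lexicon(training_triggers):
    """Step 1 setup: collect the unique spatial trigger phrases
    observed in the training annotations."""
    return {t.lower() for t in training_triggers}


def generate_candidates(report_text, lexicon):
    """Step 1: propose candidate triggers by exact, word-bounded
    match against the lexicon; returns (start, end, phrase) spans."""
    lowered = report_text.lower()
    candidates = []
    for trigger in lexicon:
        pattern = r"\b" + re.escape(trigger) + r"\b"
        for m in re.finditer(pattern, lowered):
            candidates.append((m.start(), m.end(), trigger))
    return sorted(candidates)


def filter_candidates(report_text, candidates, blocked_right=("keeping",)):
    """Step 2: apply a (hypothetical) domain constraint -- drop a
    candidate when the next word signals a non-spatial usage,
    e.g. 'in' inside 'in keeping with pneumonia'."""
    kept = []
    for start, end, trigger in candidates:
        right_words = report_text[end:].lower().split()
        if right_words and right_words[0] in blocked_right:
            continue
        kept.append((start, end, trigger))
    return kept
```

Step 3 would then feed each surviving span, with its sentence context, to a binary BERT-based classifier that accepts or rejects it as a true spatial trigger.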
Anthology ID:
2020.splu-1.6
Volume:
Proceedings of the Third International Workshop on Spatial Language Understanding
Month:
November
Year:
2020
Address:
Online
Venue:
SpLU
Publisher:
Association for Computational Linguistics
Pages:
50–55
URL:
https://aclanthology.org/2020.splu-1.6
DOI:
10.18653/v1/2020.splu-1.6
Cite (ACL):
Surabhi Datta and Kirk Roberts. 2020. A Hybrid Deep Learning Approach for Spatial Trigger Extraction from Radiology Reports. In Proceedings of the Third International Workshop on Spatial Language Understanding, pages 50–55, Online. Association for Computational Linguistics.
Cite (Informal):
A Hybrid Deep Learning Approach for Spatial Trigger Extraction from Radiology Reports (Datta & Roberts, SpLU 2020)
PDF:
https://aclanthology.org/2020.splu-1.6.pdf
Video:
https://slideslive.com/38940080