A Dual-View Approach to Classifying Radiology Reports by Co-Training

Yutong Han, Yan Yuan, Lili Mou


Abstract
Radiology report analysis provides valuable information that can aid with public health initiatives, and has been attracting increasing attention from the research community. In this work, we present a novel insight that the structure of a radiology report (namely, the Findings and Impression sections) offers different views of a radiology scan. Based on this intuition, we further propose a co-training approach, where two machine learning models are built upon the Findings and Impression sections, respectively, and use each other’s information to boost performance with massive unlabeled data in a semi-supervised manner. We conducted experiments in a public health surveillance study, and results show that our co-training approach is able to improve performance using the dual views and surpass competing supervised and semi-supervised methods.
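The co-training idea described in the abstract — two view-specific classifiers that pseudo-label unlabeled data for each other — can be sketched as follows. This is a minimal illustration, not the paper's actual system: the synthetic two-view features (standing in for Findings and Impression text representations), the feature dimensions, and the `LogisticRegression` base learners are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two views of each report:
# view A ~ "Findings" features, view B ~ "Impression" features.
n_labeled, n_unlabeled, d = 40, 200, 10
y_lab = rng.integers(0, 2, n_labeled)

def make_view(y, n):
    # Class-conditional Gaussian features (purely synthetic).
    return y[:, None] + rng.normal(size=(n, d))

Xa_lab, Xb_lab = make_view(y_lab, n_labeled), make_view(y_lab, n_labeled)
y_unl = rng.integers(0, 2, n_unlabeled)          # hidden ground truth
Xa_unl, Xb_unl = make_view(y_unl, n_unlabeled), make_view(y_unl, n_unlabeled)

def cotrain(Xa, Xb, y, Ua, Ub, rounds=5, k=10):
    """Each round, every view's model pseudo-labels its k most
    confident unlabeled examples; both views' picks are added to
    the shared labeled set, so each model benefits from the other."""
    Xa, Xb, y = Xa.copy(), Xb.copy(), y.copy()
    pool = np.arange(len(Ua))                    # unlabeled indices left
    model_a, model_b = LogisticRegression(), LogisticRegression()
    for _ in range(rounds):
        model_a.fit(Xa, y)
        model_b.fit(Xb, y)
        if len(pool) == 0:
            break
        proba_a = model_a.predict_proba(Ua[pool])
        proba_b = model_b.predict_proba(Ub[pool])
        # Most confident = highest max class probability.
        top_a = np.argsort(-proba_a.max(axis=1))[:k]
        top_b = np.argsort(-proba_b.max(axis=1))[:k]
        new_idx = np.concatenate([pool[top_a], pool[top_b]])
        new_lab = np.concatenate([proba_a[top_a].argmax(axis=1),
                                  proba_b[top_b].argmax(axis=1)])
        # The same example may be picked by both views; keep one copy.
        new_idx, keep = np.unique(new_idx, return_index=True)
        new_lab = new_lab[keep]
        Xa = np.vstack([Xa, Ua[new_idx]])
        Xb = np.vstack([Xb, Ub[new_idx]])
        y = np.concatenate([y, new_lab])
        pool = np.setdiff1d(pool, new_idx)
    model_a.fit(Xa, y)                           # final fit on augmented set
    model_b.fit(Xb, y)
    return model_a, model_b

model_a, model_b = cotrain(Xa_lab, Xb_lab, y_lab, Xa_unl, Xb_unl)
acc_a = (model_a.predict(Xa_unl) == y_unl).mean()
```

The key design point, mirroring the abstract, is that the two views are conditionally informative on their own, so confident pseudo-labels from one view act as extra supervision for the other.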
Anthology ID:
2024.lrec-main.42
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
477–483
URL:
https://aclanthology.org/2024.lrec-main.42
Cite (ACL):
Yutong Han, Yan Yuan, and Lili Mou. 2024. A Dual-View Approach to Classifying Radiology Reports by Co-Training. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 477–483, Torino, Italia. ELRA and ICCL.
Cite (Informal):
A Dual-View Approach to Classifying Radiology Reports by Co-Training (Han et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.42.pdf