Cross-modal Contrastive Attention Model for Medical Report Generation

Xiao Song, Xiaodan Zhang, Junzhong Ji, Ying Liu, Pengxu Wei


Abstract
Automatic medical report generation has recently gained increasing interest as a way to help radiologists write reports more efficiently. However, this image-to-text task is challenging due to typical data biases: 1) normal physiological structures dominate the images, with abnormalities confined to tiny regions; 2) normal descriptions accordingly dominate the reports. Existing methods have attempted to address these biases, but they neglect useful information from similar historical cases. In this paper, we propose a novel Cross-modal Contrastive Attention (CMCA) model to capture both visual and semantic information from similar cases, with two main modules: a Visual Contrastive Attention Module that refines the unique abnormal regions relative to the retrieved case images, and a Cross-modal Attention Module that matches positive semantic information from the case reports. Extensive experiments on two widely used benchmarks, IU X-Ray and MIMIC-CXR, demonstrate that the proposed model outperforms state-of-the-art methods on almost all metrics. Further analyses also validate that our model improves reports with more accurate abnormal findings and richer descriptions.
Anthology ID:
2022.coling-1.210
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
2388–2397
URL:
https://aclanthology.org/2022.coling-1.210
Cite (ACL):
Xiao Song, Xiaodan Zhang, Junzhong Ji, Ying Liu, and Pengxu Wei. 2022. Cross-modal Contrastive Attention Model for Medical Report Generation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2388–2397, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Cross-modal Contrastive Attention Model for Medical Report Generation (Song et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.210.pdf
Data
CheXpert, MIMIC-CXR