Towards Visual Dialog for Radiology
Olga Kovaleva | Chaitanya Shivade | Satyananda Kashyap | Karina Kanjaria | Joy Wu | Deddeh Ballah | Adam Coy | Alexandros Karargyris | Yufan Guo | David Beymer | Anna Rumshisky | Vandana Mukherjee
Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, 2020
Current research in machine learning for radiology is focused mostly on images. There exists limited work investigating intelligent interactive systems for radiology. To address this limitation, we introduce a realistic and information-rich task of Visual Dialog in radiology, specific to chest X-ray images. Using MIMIC-CXR, an openly available database of chest X-ray images, we construct both a synthetic and a real-world dataset and provide baseline scores achieved by state-of-the-art models. We show that incorporating the medical history of the patient leads to better performance in answering questions than a conventional visual question answering model that looks only at the image. While our experiments show promising results, they indicate that the task is extremely challenging, with significant scope for improvement. We make both datasets (synthetic and gold standard) and the associated code publicly available to the research community.
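The contrast drawn in the abstract, between a conventional image-only visual question answering model and one that also conditions on the patient's medical history, can be illustrated with a minimal late-fusion sketch. This is not the paper's implementation: the module choices, feature dimensions, and answer-classification setup below are illustrative assumptions only.

```python
# Minimal sketch (assumed architecture, not the authors' code): an image-only
# VQA answerer vs. one that additionally encodes medical history / prior dialog.
import torch
import torch.nn as nn


class ImageOnlyVQA(nn.Module):
    """Predicts an answer from image and question features alone."""

    def __init__(self, img_dim=2048, q_dim=512, hidden=512, num_answers=100):
        super().__init__()
        self.fuse = nn.Linear(img_dim + q_dim, hidden)
        self.classifier = nn.Linear(hidden, num_answers)

    def forward(self, img_feat, q_feat):
        x = torch.relu(self.fuse(torch.cat([img_feat, q_feat], dim=-1)))
        return self.classifier(x)


class HistoryAwareVQA(nn.Module):
    """Additionally conditions on an encoding of the patient's medical history."""

    def __init__(self, img_dim=2048, q_dim=512, hist_dim=512, hidden=512, num_answers=100):
        super().__init__()
        self.fuse = nn.Linear(img_dim + q_dim + hist_dim, hidden)
        self.classifier = nn.Linear(hidden, num_answers)

    def forward(self, img_feat, q_feat, hist_feat):
        x = torch.relu(self.fuse(torch.cat([img_feat, q_feat, hist_feat], dim=-1)))
        return self.classifier(x)


if __name__ == "__main__":
    img = torch.randn(1, 2048)   # e.g. CNN features of a chest X-ray (assumed size)
    q = torch.randn(1, 512)      # encoded question
    hist = torch.randn(1, 512)   # encoded medical history / earlier dialog turns
    print(ImageOnlyVQA()(img, q).shape)           # torch.Size([1, 100])
    print(HistoryAwareVQA()(img, q, hist).shape)  # torch.Size([1, 100])
```

The only design difference is the extra history vector concatenated before fusion; the paper's finding is that this additional context helps, compared with looking at the image alone.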