Cross-Modal Conceptualization in Bottleneck Models
Danis Alukaev, Semen Kiselev, Ilya Pershin, Bulat Ibragimov, Vladimir Ivanov, Alexey Kornaev, Ivan Titov
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Concept Bottleneck Models (CBMs) assume that training examples (e.g., x-ray images) are annotated with high-level concepts (e.g., types of abnormalities) and perform classification by first predicting the concepts, then predicting the label from these concepts. The primary challenge in employing CBMs, however, lies in the requirement of defining concepts predictive of the label and annotating training examples with these concepts. In our approach, we adopt a more moderate assumption and instead use text descriptions (e.g., radiology reports) accompanying the images to guide the induction of concepts. Our cross-modal approach treats concepts as discrete latent variables and promotes concepts that (1) are predictive of the label, and (2) can be predicted reliably from both the image and text. Through experiments conducted on datasets ranging from synthetic (e.g., synthetic images with generated descriptions) to realistic medical imaging data, we demonstrate that cross-modal learning encourages the induction of interpretable concepts while also facilitating disentanglement.
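To make the described architecture concrete, below is a minimal, illustrative PyTorch sketch of a concept-bottleneck-style model with cross-modal concept prediction. Module names, feature dimensions, and the straight-through Bernoulli-style discretization used here are assumptions for illustration only; they are not the authors' implementation.

```python
import torch
import torch.nn as nn


class CrossModalConceptBottleneck(nn.Module):
    """Sketch: image and text features each predict a shared set of discrete
    concepts; the label is predicted from the concepts alone (the bottleneck)."""

    def __init__(self, img_dim=512, txt_dim=768, n_concepts=16, n_classes=5):
        super().__init__()
        # Separate heads map each modality's features to the shared concept space.
        self.img_to_concepts = nn.Linear(img_dim, n_concepts)
        self.txt_to_concepts = nn.Linear(txt_dim, n_concepts)
        # The label depends only on the concepts.
        self.concepts_to_label = nn.Linear(n_concepts, n_classes)

    def forward(self, img_feats, txt_feats=None):
        img_logits = self.img_to_concepts(img_feats)
        # Discrete concepts via a straight-through estimator (one possible choice,
        # assumed here; gradients flow through the soft probabilities).
        probs = torch.sigmoid(img_logits)
        hard = (probs > 0.5).float()
        concepts = hard + probs - probs.detach()
        label_logits = self.concepts_to_label(concepts)
        # During training, text features give a second concept prediction that
        # can be encouraged (e.g., via an agreement loss) to match the image-based one.
        txt_logits = self.txt_to_concepts(txt_feats) if txt_feats is not None else None
        return label_logits, img_logits, txt_logits


# Toy usage with pre-extracted image and text features for a batch of 4 examples.
model = CrossModalConceptBottleneck()
img = torch.randn(4, 512)
txt = torch.randn(4, 768)
label_logits, img_concepts, txt_concepts = model(img, txt)
print(label_logits.shape, img_concepts.shape, txt_concepts.shape)
```

The design choice to route the label prediction exclusively through the concept layer is what makes the model interpretable: each concept can be inspected and, in principle, intervened on at test time.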