Unifying Text, Tables, and Images for Multimodal Question Answering

Haohao Luo, Ying Shen, Yang Deng


Abstract
Multimodal question answering (MMQA), which aims to derive the answer from multiple knowledge modalities (e.g., text, tables, and images), has received increasing attention due to its broad applications. Current approaches to MMQA often rely on single-modal or bi-modal QA models, which limits their ability to effectively integrate information across all modalities and leverage the power of pre-trained language models. To address these limitations, we propose a novel framework called UniMMQA, which unifies three different input modalities into a text-to-text format by employing position-enhanced table linearization and diversified image captioning techniques. Additionally, we enhance cross-modal reasoning by incorporating a multimodal rationale generator, which produces textual descriptions of cross-modal relations for adaptation into the text-to-text generation process. Experimental results on three MMQA benchmark datasets show the superiority of UniMMQA in both supervised and unsupervised settings.
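The paper's exact serialization scheme is not reproduced here; as a rough illustration of the kind of position-enhanced table linearization the abstract refers to, the Python sketch below flattens a table into a plain-text sequence with explicit row/column markers so it can be consumed by a text-to-text model. The tag format and function names are hypothetical, not the authors' actual scheme.

```python
# Hypothetical sketch of position-enhanced table linearization:
# each cell is rendered with explicit row/column position tags so a
# text-to-text model can recover the table structure from plain text.
# The tag syntax below is illustrative only.

def linearize_table(header, rows):
    """Flatten a table into one text string with position markers."""
    pieces = []
    for col_idx, col_name in enumerate(header):
        pieces.append(f"<col {col_idx}> {col_name}")
    for row_idx, row in enumerate(rows):
        for col_idx, cell in enumerate(row):
            pieces.append(f"<cell row={row_idx} col={col_idx}> {cell}")
    return " ".join(pieces)


if __name__ == "__main__":
    header = ["Country", "Capital"]
    rows = [["France", "Paris"], ["Japan", "Tokyo"]]
    print(linearize_table(header, rows))
    # <col 0> Country <col 1> Capital <cell row=0 col=0> France ...
```

The linearized table text can then be concatenated with the question, passage text, and image captions to form a single text-to-text input, which is the unification idea the abstract describes.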
Anthology ID:
2023.findings-emnlp.626
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9355–9367
URL:
https://aclanthology.org/2023.findings-emnlp.626
DOI:
10.18653/v1/2023.findings-emnlp.626
Cite (ACL):
Haohao Luo, Ying Shen, and Yang Deng. 2023. Unifying Text, Tables, and Images for Multimodal Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9355–9367, Singapore. Association for Computational Linguistics.
Cite (Informal):
Unifying Text, Tables, and Images for Multimodal Question Answering (Luo et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-emnlp.626.pdf