Kunal Kartik


2024

Knowledge-Aware Reasoning over Multimodal Semi-structured Tables
Suyash Mathur | Jainit Bafna | Kunal Kartik | Harshita Khandelwal | Manish Shrivastava | Vivek Gupta | Mohit Bansal | Dan Roth
Findings of the Association for Computational Linguistics: EMNLP 2024

Existing datasets for tabular question answering typically focus exclusively on text within cells. However, real-world data is inherently multimodal, often blending images such as symbols, faces, icons, patterns, and charts with textual content in tables. With the evolution of AI models capable of multimodal reasoning, it is pertinent to assess their efficacy in handling such structured data. This study investigates whether current AI models can perform knowledge-aware reasoning on multimodal structured data. We explore their ability to reason on tables that integrate both images and text, introducing MMTabQA, a new dataset designed for this purpose. Our experiments highlight substantial challenges for current AI models in effectively integrating and interpreting multiple text and image inputs, understanding visual context, and comparing visual content across images. These findings establish our dataset as a robust benchmark for advancing AI’s comprehension and capabilities in analyzing multimodal structured data.