Jia Qing Tan
2022
OpenCQA: Open-ended Question Answering with Charts
Shankar Kantharaj | Xuan Long Do | Rixie Tiffany Leong | Jia Qing Tan | Enamul Hoque | Shafiq Joty
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Charts are a popular way to analyze data and convey important insights. People often analyze visualizations to answer open-ended questions that require explanatory answers. Answering such questions is often difficult and time-consuming, as it requires substantial cognitive and perceptual effort. To address this challenge, we introduce a new task called OpenCQA, where the goal is to answer an open-ended question about a chart with descriptive text. We present the annotation process and an in-depth analysis of our dataset. We implement and evaluate a set of baselines under three practical settings. In the first setting, a chart and the accompanying article are provided as input to the model. The second setting provides only the paragraph(s) relevant to the chart instead of the entire article, whereas the third setting requires the model to generate an answer solely based on the chart. Our analysis of the results shows that the top-performing models generally produce fluent and coherent text but struggle to perform complex logical and arithmetic reasoning.
ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning
Ahmed Masry | Xuan Long Do | Jia Qing Tan | Shafiq Joty | Enamul Hoque
Findings of the Association for Computational Linguistics: ACL 2022
Charts are very popular for analyzing data. When exploring charts, people often ask a variety of complex reasoning questions that involve several logical and arithmetic operations. They also commonly refer to visual features of a chart in their questions. However, most existing datasets do not focus on such complex reasoning questions, as their questions are template-based and their answers come from a fixed vocabulary. In this work, we present a large-scale benchmark covering 9.6K human-written questions as well as 23.1K questions generated from human-written chart summaries. To address the unique challenges in our benchmark involving visual and logical reasoning over charts, we present two transformer-based models that combine visual features and the data table of the chart in a unified way to answer questions. While our models achieve state-of-the-art results on the previous datasets as well as on our benchmark, the evaluation also reveals several challenges in answering complex reasoning questions.
Co-authors
- Xuan Long Do 2
- Enamul Hoque 2
- Shafiq Joty 2
- Shankar Kantharaj 1
- Rixie Tiffany Leong 1
- Ahmed Masry 1