TVQACML: Benchmarking Text-Centric Visual Question Answering in Multilingual Chinese Minority Languages

Sha Jiu, Yu Weng, Mengxiao Zhu, Chong Feng, Zheng Liu, Jialedongzhu


Abstract
Text-Centric Visual Question Answering (TEC-VQA) is a critical research area that requires semantic interactions between objects and scene texts. However, most existing TEC-VQA benchmarks focus on high-resource languages like English and Chinese. Although a few works expand multilingual QA pairs in non-text-centric VQA datasets through translation, this approach encounters a substantial “visual-textual misalignment” problem when applied to TEC-VQA. Moreover, the open-source nature of these benchmarks and the broad sources of training data for MLLMs have inevitably led to benchmark contamination, resulting in unreliable evaluation results. To alleviate this issue, we propose a contamination-free and more challenging TEC-VQA benchmark called Text-Centric Visual Question Answering in Multilingual Chinese Minority Languages (TVQACML), which involves eight languages, including Standard Chinese, Korean, and six minority languages. TVQACML supports a wide range of tasks, such as Text Recognition, Scene Text-Centric VQA, Document-Oriented VQA, Key Information Extraction (KIE), and Handwritten Mathematical Expression Recognition (HMER), featuring 32,000 question-answer pairs across 8,000 images. Extensive experiments with multiple MLLMs on TVQACML demonstrate its effectiveness for evaluating MLLMs and show that fine-tuning enhances multilingual TEC-VQA performance.
Anthology ID:
2025.emnlp-main.705
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
13968–13978
URL:
https://aclanthology.org/2025.emnlp-main.705/
Cite (ACL):
Sha Jiu, Yu Weng, Mengxiao Zhu, Chong Feng, Zheng Liu, and Jialedongzhu. 2025. TVQACML: Benchmarking Text-Centric Visual Question Answering in Multilingual Chinese Minority Languages. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 13968–13978, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
TVQACML: Benchmarking Text-Centric Visual Question Answering in Multilingual Chinese Minority Languages (Jiu et al., EMNLP 2025)
PDF:
https://aclanthology.org/2025.emnlp-main.705.pdf
Checklist:
2025.emnlp-main.705.checklist.pdf