Yuanxing Xu
2024
AC-EVAL: Evaluating Ancient Chinese Language Understanding in Large Language Models
Yuting Wei | Yuanxing Xu | Xinru Wei | Yangsimin Yangsimin | Yangfu Zhu | Yuqing Li | Di Liu | Bin Wu
Findings of the Association for Computational Linguistics: EMNLP 2024
Given the importance of ancient Chinese in capturing the essence of a rich historical and cultural heritage, the rapid advancement of Large Language Models (LLMs) necessitates benchmarks that can effectively evaluate their understanding of ancient contexts. To meet this need, we present AC-EVAL, an innovative benchmark designed to assess the advanced knowledge and reasoning capabilities of LLMs within the context of ancient Chinese. AC-EVAL is structured across three levels of difficulty reflecting different facets of language comprehension: general historical knowledge, short text understanding, and long text comprehension. The benchmark comprises 13 tasks spanning historical facts, geography, social customs, art, philosophy, and classical poetry and prose, providing a comprehensive assessment framework. Our extensive evaluation of top-performing LLMs, tailored for both English and Chinese, reveals substantial room for improvement in ancient text comprehension. By highlighting the strengths and weaknesses of LLMs, AC-EVAL aims to advance their development and application in ancient Chinese language education and scholarly research.
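As a concrete illustration of how a multiple-choice benchmark of this kind is typically consumed, the sketch below renders one item as a zero-shot prompt for an LLM. The item schema (`question` and `options` fields) and the sample content are hypothetical, not AC-EVAL's actual data format.

```python
# Hypothetical prompt builder for an AC-EVAL-style multiple-choice item.
# Field names and the sample question are illustrative assumptions,
# not the benchmark's real schema or data.

def build_prompt(item: dict) -> str:
    """Render one multiple-choice question as a zero-shot prompt."""
    lines = [f"Question: {item['question']}"]
    for label, option in zip("ABCD", item["options"]):
        lines.append(f"{label}. {option}")
    lines.append("Answer:")
    return "\n".join(lines)

item = {
    "question": "In which dynasty did the poet Li Bai live?",  # placeholder item
    "options": ["Qin", "Han", "Tang", "Song"],
}
print(build_prompt(item))
```

The model's completion (a single option letter) can then be compared against the gold label to score accuracy per task and difficulty level.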
Exploring Question Guidance and Answer Calibration for Visually Grounded Video Question Answering
Yuanxing Xu | Yuting Wei | Shuai Zhong | Xinming Chen | Jinsheng Qi | Bin Wu
Findings of the Association for Computational Linguistics: EMNLP 2024
Video Question Answering (VideoQA) tasks require not only correct answers but also visual evidence. The “localize-then-answer” strategy, while improving accuracy and interpretability, faces challenges because VideoQA datasets lack temporal localization labels. Existing methods often train models’ localization capabilities indirectly from QA labels, leading to inaccurate localization. Moreover, our experiments show that despite high accuracy, current models rely too heavily on language shortcuts or spurious correlations with irrelevant visual context. To address these issues, we propose a Question-Guided and Answer-Calibrated TRansformer (QGAC-TR), which guides and calibrates localization using question and option texts without localization labels. Furthermore, we design two self-supervised learning tasks to further refine the model’s localization capabilities. Extensive experiments on three public datasets focused on temporal and causal reasoning show that our model not only achieves accuracy comparable to large-scale pretrained models but also excels at localization. Code will be available on GitHub.
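To make the localize-then-answer idea concrete, here is a minimal PyTorch sketch of question-guided soft temporal localization: a question embedding attends over frame features to produce a soft temporal mask, and the masked video representation scores the answer options. This is only an illustration of the general strategy, not the paper's QGAC-TR architecture; all module names, dimensions, and tensor layouts are assumptions.

```python
import torch
import torch.nn as nn

class QuestionGuidedLocalizer(nn.Module):
    """Toy localize-then-answer head (illustrative, not QGAC-TR): the
    question embedding attends over frame features to yield a soft
    temporal mask; the masked video representation scores each option."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)  # projects the question embedding
        self.v_proj = nn.Linear(dim, dim)  # projects per-frame features

    def forward(self, frames, question, options):
        # frames: (B, T, D), question: (B, D), options: (B, K, D)
        q = self.q_proj(question).unsqueeze(1)            # (B, 1, D)
        v = self.v_proj(frames)                           # (B, T, D)
        scores = (q * v).sum(-1) / v.size(-1) ** 0.5      # (B, T) frame relevance
        mask = torch.softmax(scores, dim=-1)              # soft temporal "localization"
        grounded = (mask.unsqueeze(-1) * frames).sum(1)   # (B, D) pooled visual evidence
        logits = (options * grounded.unsqueeze(1)).sum(-1)  # (B, K) option scores
        return logits, mask

# Usage with random tensors standing in for encoder outputs.
B, T, K, D = 2, 16, 5, 256
model = QuestionGuidedLocalizer(D)
logits, mask = model(torch.randn(B, T, D), torch.randn(B, D), torch.randn(B, K, D))
print(logits.shape, mask.shape)  # torch.Size([2, 5]) torch.Size([2, 16])
```

Because the mask is produced only from question-frame interactions and trained end to end from answer supervision, it is exactly the kind of indirectly learned localization the abstract argues can go wrong without further guidance and calibration.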
Co-authors
- Yuting Wei 2
- Bin Wu 2
- Xinru Wei 1
- Yangsimin Yangsimin 1
- Yangfu Zhu 1