Zhijian Xu
2024
Revisiting Automated Evaluation for Long-form Table Question Answering
Yuqi Wang | Lyuhao Chen | Songcheng Cai | Zhijian Xu | Yilun Zhao
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
In the era of data-driven decision-making, Long-Form Table Question Answering (LFTQA) is essential for integrating structured data with complex reasoning. Despite recent advancements in Large Language Models (LLMs) for LFTQA, evaluating their effectiveness remains a significant challenge. We introduce LFTQA-Eval, a meta-evaluation dataset comprising 2,988 human-annotated examples, to rigorously assess how effectively current automated metrics evaluate LLM-based LFTQA systems, with a focus on faithfulness and comprehensiveness. Our findings reveal that existing automatic metrics correlate poorly with human judgments and fail to consistently differentiate between factually accurate responses and those that are coherent but factually incorrect. Additionally, our in-depth examination of the limitations of automated evaluation methods provides essential insights for improving LFTQA automated evaluation.
OpenT2T: An Open-Source Toolkit for Table-to-Text Generation
Haowei Zhang | Shengyun Si | Yilun Zhao | Lujing Xie | Zhijian Xu | Lyuhao Chen | Linyong Nan | Pengcheng Wang | Xiangru Tang | Arman Cohan
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Table data is pervasive in various industries, and its comprehension and manipulation demand significant time and effort from users seeking to extract relevant information. Consequently, an increasing number of studies have been directed towards table-to-text generation tasks. However, most existing methods are benchmarked on only a limited number of datasets with varying configurations, leading to a lack of unified, standardized, fair, and comprehensive comparison between methods. This paper presents OpenT2T, the first open-source toolkit for table-to-text generation, designed to reproduce existing large language models (LLMs) for performance comparison and to expedite the development of new models. We have implemented and compared a wide range of LLMs under zero- and few-shot settings on 9 table-to-text generation datasets, covering data insight generation, table summarization, and free-form table question answering. Additionally, we maintain a public leaderboard to provide insights for future work into how to choose appropriate table-to-text generation systems for real-world scenarios.
Co-authors
- Lyuhao Chen 2
- Yilun Zhao 2
- Yuqi Wang 1
- Songcheng Cai 1
- Haowei Zhang 1