Is This a Bad Table? A Closer Look at the Evaluation of Table Generation from Text

Pritika Ramu, Aparna Garimella, Sambaran Bandyopadhyay


Abstract
Understanding whether a generated table is of good quality is important for using it in document creation and editing with automatic methods. In this work, we show that existing measures of table quality fail to capture the overall semantics of tables, and sometimes unfairly penalize good tables and reward bad ones. We propose TabEval, a novel table evaluation strategy that captures table semantics by first breaking a table down into a list of natural-language atomic statements and then comparing them with ground-truth statements using entailment-based measures. To validate our approach, we curate a dataset comprising text descriptions for 1,250 diverse Wikipedia tables, covering a range of topics and structures, in contrast to the limited scope of existing datasets. We compare TabEval with existing metrics using unsupervised and supervised text-to-table generation methods, demonstrating its stronger correlation with human judgments of table quality across four datasets.
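The pipeline described above — decompose a table into atomic statements, then score predicted statements against ground-truth statements with entailment — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cell-to-statement template and the F1-style aggregation are assumptions, and the token-overlap `entailment_score` is a stand-in for the trained entailment model the paper would use.

```python
def table_to_statements(header, rows):
    """Flatten a table into atomic natural-language statements, one per
    data cell, using the first column as each row's subject (an assumed
    decomposition template for illustration)."""
    statements = []
    for row in rows:
        subject = row[0]
        for col, cell in zip(header[1:], row[1:]):
            statements.append(f"The {col} of {subject} is {cell}.")
    return statements

def entailment_score(premise, hypothesis):
    """Stand-in for an NLI model: fraction of hypothesis tokens that
    appear in the premise. A real system would call a trained
    entailment model here."""
    p = set(premise.lower().split())
    h = set(hypothesis.lower().split())
    return len(p & h) / len(h) if h else 0.0

def tabeval_score(pred_statements, gold_statements):
    """Compare statement lists both ways (precision and recall via
    best-match entailment) and combine as F1."""
    def avg_best_match(sources, targets):
        return sum(
            max(entailment_score(t, s) for t in targets) for s in sources
        ) / len(sources)
    precision = avg_best_match(pred_statements, gold_statements)
    recall = avg_best_match(gold_statements, pred_statements)
    total = precision + recall
    return 2 * precision * recall / total if total else 0.0
```

For example, a predicted table identical to the reference yields a score of 1.0, while a table with a wrong cell value loses credit only on the affected statements, rather than being penalized for surface-level formatting differences.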
Anthology ID:
2024.emnlp-main.1239
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
22206–22216
URL:
https://aclanthology.org/2024.emnlp-main.1239
DOI:
10.18653/v1/2024.emnlp-main.1239
Cite (ACL):
Pritika Ramu, Aparna Garimella, and Sambaran Bandyopadhyay. 2024. Is This a Bad Table? A Closer Look at the Evaluation of Table Generation from Text. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 22206–22216, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Is This a Bad Table? A Closer Look at the Evaluation of Table Generation from Text (Ramu et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1239.pdf