Large Language Models are few(1)-shot Table Reasoners

Wenhu Chen


Abstract
Recent literature has shown that large language models (LLMs) are excellent few-shot reasoners on text reasoning tasks. However, the capability of LLMs on table reasoning tasks is yet to be explored. In this paper, we aim to understand how well LLMs can perform table-related tasks with few-shot in-context learning. Specifically, we evaluate LLMs on popular table QA and fact verification datasets such as WikiTableQuestions, FetaQA, TabFact, and FEVEROUS and find that LLMs are competent at complex reasoning over table structures, even though they are not pre-trained on any table corpus. When combined with chain-of-thought prompting, LLMs achieve very strong performance with only a one-shot demonstration, even on par with some SoTA models. We show that LLMs are even more competent at generating comprehensive long-form answers on FetaQA than a fine-tuned T5-large. We further manually study the reasoning chains elicited from LLMs and find that they are highly consistent with the underlying semantic forms. We believe that LLMs can serve as a simple yet generic baseline for future research. The code and data are released at https://github.com/wenhuchen/TableCoT.
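The core recipe the abstract describes, linearizing a table into plain text and prepending a single worked chain-of-thought demonstration, can be illustrated with a minimal sketch. This is not the authors' exact code or prompt format (see the linked TableCoT repository for that); the linearization scheme, helper names, and example table below are all illustrative assumptions.

```python
# Minimal sketch of 1-shot chain-of-thought prompting over a table.
# The prompt format, demonstration, and table are hypothetical,
# not the exact format used in the TableCoT repository.

def linearize_table(header, rows):
    """Flatten a table into row-major text so an LLM can read it."""
    lines = [" | ".join(header)]
    lines += [" | ".join(str(cell) for cell in row) for row in rows]
    return "\n".join(lines)

# One worked demonstration: a question, a table, and a reasoning
# chain that walks through the relevant cells before answering.
DEMO = """Read the table and answer the question.

Table:
Player | Team | Goals
Alice | Red | 12
Bob | Blue | 9

Question: Who scored more goals, Alice or Bob?
Let's think step by step. Alice scored 12 goals and Bob scored 9.
12 is greater than 9, so Alice scored more.
Answer: Alice
"""

def build_prompt(header, rows, question):
    """Prepend the 1-shot demonstration, then pose the new question."""
    table_text = linearize_table(header, rows)
    return (
        f"{DEMO}\n"
        "Read the table and answer the question.\n\n"
        f"Table:\n{table_text}\n\n"
        f"Question: {question}\n"
        "Let's think step by step."
    )

if __name__ == "__main__":
    prompt = build_prompt(
        header=["City", "Country", "Population"],
        rows=[["Zagreb", "Croatia", 806341], ["Dubrovnik", "Croatia", 42615]],
        question="Which Croatian city in the table has the larger population?",
    )
    print(prompt)  # send this string to any LLM completion endpoint
```

The elicited reasoning chain ("Let's think step by step ...") is what the paper compares against the underlying semantic forms; the final line after "Answer:" is extracted as the prediction.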
Anthology ID:
2023.findings-eacl.83
Volume:
Findings of the Association for Computational Linguistics: EACL 2023
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1120–1130
URL:
https://aclanthology.org/2023.findings-eacl.83
DOI:
10.18653/v1/2023.findings-eacl.83
Cite (ACL):
Wenhu Chen. 2023. Large Language Models are few(1)-shot Table Reasoners. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1120–1130, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
Large Language Models are few(1)-shot Table Reasoners (Chen, Findings 2023)
PDF:
https://aclanthology.org/2023.findings-eacl.83.pdf
Video:
https://aclanthology.org/2023.findings-eacl.83.mp4