TuringQ: Benchmarking AI Comprehension in Theory of Computation

Pardis Zahraei, Ehsaneddin Asgari


Abstract
We present TuringQ, the first benchmark designed to evaluate the reasoning capabilities of large language models (LLMs) in the theory of computation. TuringQ consists of 4,006 undergraduate and graduate-level question-answer pairs, categorized into four difficulty levels and covering seven core theoretical areas. We evaluate several open-source LLMs, as well as GPT-4, using Chain of Thought prompting and expert human assessment. Additionally, we propose an automated LLM-based evaluation system that achieves accuracy competitive with human evaluation. Fine-tuning a Llama3-8B model on TuringQ yields measurable improvements in reasoning ability and in out-of-domain tasks such as algebra. TuringQ serves as both a benchmark and a resource for enhancing LLM performance in complex computational reasoning tasks. Our analysis offers insights into LLM capabilities and helps advance AI comprehension of theoretical computer science.
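
To make the evaluation setup concrete, below is a minimal sketch of a Chain-of-Thought answering loop paired with an LLM-based grader, in the spirit of the approach described above. The prompt wording, the field names (question, reference_answer), and the query_llm stub are illustrative assumptions, not the authors' exact pipeline or prompts; replace query_llm with a real API or local-inference call to reproduce an evaluation run.

# Sketch of CoT answering + LLM-as-judge grading (assumptions noted above).
from dataclasses import dataclass


@dataclass
class TuringQItem:
    question: str
    reference_answer: str
    difficulty: int   # 1-4, matching the four difficulty levels
    area: str         # one of the seven core theoretical areas


def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM (e.g., an open-source model or GPT-4)."""
    return "Step 1: ...\nStep 2: ...\nFinal answer: ..."


def cot_prompt(item: TuringQItem) -> str:
    # Chain-of-Thought prompting: ask the model to reason step by step.
    return (
        "You are an expert in the theory of computation.\n"
        f"Question: {item.question}\n"
        "Think step by step, then state your final answer."
    )


def judge_prompt(item: TuringQItem, model_answer: str) -> str:
    # Automated grading of the candidate answer against the reference answer.
    return (
        "Grade the candidate answer against the reference answer.\n"
        f"Question: {item.question}\n"
        f"Reference answer: {item.reference_answer}\n"
        f"Candidate answer: {model_answer}\n"
        "Reply with a single word: CORRECT or INCORRECT."
    )


def evaluate(items: list[TuringQItem]) -> float:
    correct = 0
    for item in items:
        answer = query_llm(cot_prompt(item))
        verdict = query_llm(judge_prompt(item, answer)).upper()
        if "CORRECT" in verdict and "INCORRECT" not in verdict:
            correct += 1
    return correct / len(items) if items else 0.0


if __name__ == "__main__":
    sample = [
        TuringQItem(
            question="Is the language {a^n b^n | n >= 0} regular?",
            reference_answer="No; it requires unbounded counting, which no DFA "
                             "can perform (pumping lemma).",
            difficulty=1,
            area="Regular languages",
        )
    ]
    print(f"Accuracy: {evaluate(sample):.2f}")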
Anthology ID:
2024.findings-emnlp.715
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12267–12280
URL:
https://aclanthology.org/2024.findings-emnlp.715
Cite (ACL):
Pardis Zahraei and Ehsaneddin Asgari. 2024. TuringQ: Benchmarking AI Comprehension in Theory of Computation. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 12267–12280, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
TuringQ: Benchmarking AI Comprehension in Theory of Computation (Zahraei & Asgari, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-emnlp.715.pdf
Data:
2024.findings-emnlp.715.data.zip