TRIGO: Benchmarking Formal Mathematical Proof Reduction for Generative Language Models
Jing Xiong, Jianhao Shen, Ye Yuan, Haiming Wang, Yichun Yin, Zhengying Liu, Lin Li, Zhijiang Guo, Qingxing Cao, Yinya Huang, Chuanyang Zheng, Xiaodan Liang, Ming Zhang, Qun Liu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Automated theorem proving (ATP) has become an appealing domain for exploring the reasoning ability of recent successful generative language models. However, current ATP benchmarks mainly focus on symbolic inference and rarely involve the understanding of complex reasoning over combinations of numbers. In this work, we propose TRIGO, an ATP benchmark that not only requires a model to reduce a trigonometric expression with a step-by-step proof but also evaluates a generative LM’s reasoning ability on formulas and its capability to manipulate, group, and factor number terms. We gather trigonometric expressions and their reduced forms from the web, annotate the simplification process manually, and translate it into the “Lean” formal language system. We then automatically generate additional examples from the annotated samples to expand the dataset. Furthermore, we create three automatically generated training and testing datasets of varying difficulty and distributions. Our extensive experiments show that our proposed TRIGO poses a new challenge for advanced generative LMs, including GPT-4, which is pre-trained on a considerable amount of open-source formal theorem-proving language data, and provides a new tool to study a generative LM’s ability at both formal and mathematical reasoning.
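To make the task concrete, a TRIGO-style problem asks the prover to reduce a trigonometric expression to a simpler closed form inside Lean. The following is a minimal illustrative sketch (not an example drawn from the dataset), assuming Lean 3 with mathlib, where the Pythagorean identity `real.sin_sq_add_cos_sq` and numeric normalization close the goal:

```lean
import analysis.special_functions.trigonometric.basic

-- Hypothetical TRIGO-style goal: reduce sin x ^ 2 + cos x ^ 2 + 1 to 2.
-- Step 1: rewrite with the Pythagorean identity sin² x + cos² x = 1.
-- Step 2: discharge the remaining numeric equality 1 + 1 = 2.
example (x : ℝ) : real.sin x ^ 2 + real.cos x ^ 2 + 1 = 2 :=
by rw [real.sin_sq_add_cos_sq]; norm_num
```

A generative LM tackling such a goal must both select the right library lemma for the formula-level rewrite and handle the arithmetic on number terms, which is the combination of skills the benchmark is designed to probe.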