TextGraphs-16 Natural Language Premise Selection Task: Zero-Shot Premise Selection with Prompting Generative Language Models

Liubov Kovriguina, Roman Teucher, Robert Wardenga


Abstract
Automated theorem proving can benefit substantially from methods employed in natural language processing, knowledge graphs, and information retrieval: this non-trivial task combines formal language understanding, reasoning, and similarity search. We tackle it by enhancing semantic similarity ranking with prompt engineering, which has become a new paradigm in natural language understanding. None of our approaches requires additional training. Despite encouraging results reported for prompt engineering approaches on a range of NLP tasks, vanilla re-ranking by prompting GPT-3 does not outperform semantic similarity ranking with SBERT on the premise selection task; however, merging the two rankings yields better results.
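The abstract reports that merging the SBERT similarity ranking with the GPT-3 re-ranking improves over either alone, but does not specify the fusion method. A minimal, hypothetical sketch of one common way to combine two rankings is reciprocal rank fusion (RRF); the premise ids and both input orderings below are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: merging two premise rankings with
# reciprocal rank fusion (RRF). Each ranking is a list of
# premise ids ordered best-first; a premise's fused score is
# the sum of 1 / (k + rank) over the rankings it appears in.
def rrf_merge(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, premise in enumerate(ranking, start=1):
            scores[premise] = scores.get(premise, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Invented example orderings (not from the paper):
sbert_ranking = ["p3", "p1", "p2"]  # e.g. SBERT cosine-similarity order
gpt_ranking = ["p3", "p1", "p4"]    # e.g. GPT-3 prompt-based re-ranking
merged = rrf_merge([sbert_ranking, gpt_ranking])
```

Premises ranked highly by both sources rise to the top of the merged list, which is one plausible reading of why a fused ranking can beat either input ranking alone.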
Anthology ID:
2022.textgraphs-1.15
Volume:
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Dmitry Ustalov, Yanjun Gao, Alexander Panchenko, Marco Valentino, Mokanarangan Thayaparan, Thien Huu Nguyen, Gerald Penn, Arti Ramesh, Abhik Jana
Venue:
TextGraphs
Publisher:
Association for Computational Linguistics
Pages:
127–132
URL:
https://aclanthology.org/2022.textgraphs-1.15
Cite (ACL):
Liubov Kovriguina, Roman Teucher, and Robert Wardenga. 2022. TextGraphs-16 Natural Language Premise Selection Task: Zero-Shot Premise Selection with Prompting Generative Language Models. In Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing, pages 127–132, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Cite (Informal):
TextGraphs-16 Natural Language Premise Selection Task: Zero-Shot Premise Selection with Prompting Generative Language Models (Kovriguina et al., TextGraphs 2022)
PDF:
https://aclanthology.org/2022.textgraphs-1.15.pdf