PromISe: Releasing the Capabilities of LLMs with Prompt Introspective Search

Minzheng Wang, Nan Xu, Jiahao Zhao, Yin Luo, Wenji Mao


Abstract
The development of large language models (LLMs) raises the importance of assessing the fairness and completeness of various evaluation benchmarks. Regrettably, these benchmarks predominantly utilize uniform manual prompts, which may not fully capture the expansive capabilities of LLMs—potentially leading to an underestimation of their performance. To unlock the potential of LLMs, researchers have turned to automated prompt search methods, which employ LLMs as optimizers to discover optimal prompts. However, previous methods generate their solutions implicitly, which overlooks the underlying thought process and lacks explicit feedback. In this paper, we propose a novel prompt introspective search framework, namely PromISe, to better release the capabilities of LLMs. It converts the process of optimizing prompts into an explicit chain of thought, through a step-by-step procedure that integrates self-introspection and self-refinement. Extensive experiments, conducted over 73 tasks on two major benchmarks, demonstrate that our proposed PromISe significantly boosts the performance of 12 well-known LLMs compared to the baseline approach. Moreover, our study offers enhanced insights into the interaction between humans and LLMs, potentially serving as a foundation for future designs and implementations.
Keywords: large language models, prompt search, self-introspect, self-refine
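The introspect-then-refine loop described in the abstract can be sketched as follows. This is an illustrative mock-up, not the authors' implementation: the `introspect`, `refine`, and `evaluate` functions are hypothetical stubs standing in for real optimizer-LLM calls and validation-set scoring, and the search simply keeps the best-scoring prompt found so far.

```python
def introspect(prompt: str, score: float) -> str:
    """Self-introspect step (hypothetical stub): the optimizer LLM would
    critique the current prompt given its score; here we return a canned
    critique string in place of a real model call."""
    return f"The prompt '{prompt}' scored {score:.2f}; it may be too terse."

def refine(prompt: str, critique: str) -> str:
    """Self-refine step (hypothetical stub): the optimizer LLM would
    rewrite the prompt in light of the critique; here we append a fixed
    clarifying instruction to simulate a revision."""
    return prompt + " Think step by step."

def evaluate(prompt: str) -> float:
    """Stand-in for scoring a prompt on a validation set; the score here
    simply grows with prompt length to keep the demo deterministic."""
    return min(1.0, len(prompt) / 100)

def promise_search(seed_prompt: str, steps: int = 3) -> tuple[str, float]:
    """Iterative search: each step critiques the current prompt (explicit
    feedback) and then revises it, keeping the best candidate seen."""
    best_prompt, best_score = seed_prompt, evaluate(seed_prompt)
    prompt = seed_prompt
    for _ in range(steps):
        critique = introspect(prompt, evaluate(prompt))  # explicit feedback
        prompt = refine(prompt, critique)                # revised candidate
        score = evaluate(prompt)
        if score > best_score:
            best_prompt, best_score = prompt, score
    return best_prompt, best_score
```

In a real setting, each stub would be a separate LLM call, so the critique produced by `introspect` makes the optimization trace an explicit chain of thought rather than a single opaque rewrite.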
Anthology ID:
2024.lrec-main.1149
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
13120–13130
URL:
https://aclanthology.org/2024.lrec-main.1149
Cite (ACL):
Minzheng Wang, Nan Xu, Jiahao Zhao, Yin Luo, and Wenji Mao. 2024. PromISe: Releasing the Capabilities of LLMs with Prompt Introspective Search. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 13120–13130, Torino, Italia. ELRA and ICCL.
Cite (Informal):
PromISe: Releasing the Capabilities of LLMs with Prompt Introspective Search (Wang et al., LREC-COLING 2024)
PDF:
https://aclanthology.org/2024.lrec-main.1149.pdf
Optional supplementary material:
2024.lrec-main.1149.OptionalSupplementaryMaterial.zip