AMPO: Automatic Multi-Branched Prompt Optimization
Sheng Yang, Yurong Wu, Yan Gao, Zineng Zhou, Bin Zhu, Xiaodi Sun, Jian-Guang Lou, Zhiming Ding, Anbang Hu, Yuan Fang, Yunsong Li, Junyan Chen, Linjun Yang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Prompt engineering is essential for enhancing the performance of large language models (LLMs). When tackling complex tasks, prompt engineers tend to distill multiple patterns from examples and inject the relevant solutions into the prompt, achieving satisfying results. However, existing automatic prompt optimization techniques are limited to producing single-flow instructions and struggle to handle diverse patterns. In this paper, we present AMPO, an automatic prompt optimization method that iteratively develops a multi-branched prompt using failure cases as feedback. Our goal is to explore a novel way of structuring prompts with multiple branches to better handle the varied patterns in complex tasks; to this end, we introduce three modules: Pattern Recognition, Branch Adjustment, and Branch Pruning. In experiments across five tasks, AMPO consistently achieves the best results. Moreover, our method demonstrates significant optimization efficiency owing to its minimal search strategy.
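To make the abstract's idea of a multi-branched prompt more concrete, the following is a minimal illustrative sketch. All names here (the `Branch`/`MultiBranchPrompt` classes, the single-step `optimize` function, and the stub `recognize_pattern`/`propose_instruction` callbacks) are assumptions for illustration only, not the paper's actual implementation of its Pattern Recognition, Branch Adjustment, and Branch Pruning modules.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    # A detected input pattern plus the instruction that handles it.
    pattern: str
    instruction: str

@dataclass
class MultiBranchPrompt:
    base_instruction: str
    branches: list = field(default_factory=list)

    def render(self) -> str:
        # Serialize the base instruction followed by one line per branch.
        lines = [self.base_instruction]
        for b in self.branches:
            lines.append(f"If the input {b.pattern}: {b.instruction}")
        return "\n".join(lines)

def optimize(prompt, failures, recognize_pattern, propose_instruction,
             max_branches=4):
    """One hypothetical optimization step: identify the dominant failure
    pattern, add a branch for it, and prune when the branch budget is
    exceeded (a stand-in for Pattern Recognition, Branch Adjustment,
    and Branch Pruning)."""
    pattern = recognize_pattern(failures)                 # Pattern Recognition
    instruction = propose_instruction(pattern, failures)  # Branch Adjustment
    prompt.branches.append(Branch(pattern, instruction))
    if len(prompt.branches) > max_branches:               # Branch Pruning
        prompt.branches.pop(0)  # drop the oldest branch
    return prompt
```

In a real pipeline the two callbacks would be LLM calls that inspect the failure cases; here they are left as parameters so the loop structure stands on its own.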