Failures Pave the Way: Enhancing Large Language Models through Tuning-free Rule Accumulation

Zeyuan Yang, Peng Li, Yang Liu


Abstract
Large Language Models (LLMs) have showcased impressive performance. However, because they cannot capture relationships among samples, these frozen LLMs inevitably keep repeating similar mistakes. In this work, we propose the Tuning-free Rule Accumulation (TRAN) framework, which guides LLMs to improve their performance by learning from previous mistakes. As data arrives sequentially, the LLM gradually accumulates rules from incorrect cases, forming a rule collection. These rules are then used by the LLM to avoid similar mistakes when processing subsequent inputs. Moreover, the rules remain independent of the primary prompts, so they seamlessly complement prompt design strategies. Experimentally, we show that TRAN improves over recent baselines by a large margin.
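To make the accumulation loop described in the abstract concrete, below is a minimal illustrative sketch in Python. It assumes a generic llm(prompt) helper and a simple exact-match correctness check; the prompt wording, rule format, and function names are assumptions for illustration, not the paper's actual implementation or prompts.

```python
# Minimal sketch of a TRAN-style rule-accumulation loop (illustrative only).
# Assumptions: `llm(prompt) -> str` is any chat/completion call; correctness is
# judged by exact match against the label; rule texts are produced by the LLM itself.

from typing import Callable, List, Tuple


def tran_loop(
    llm: Callable[[str], str],
    stream: List[Tuple[str, str]],   # sequentially arriving (input, label) pairs
    task_prompt: str,                # the primary task prompt, kept unchanged
) -> List[str]:
    rules: List[str] = []            # accumulated rule collection

    for text, label in stream:
        # Rules stay separate from the primary prompt and are simply prepended.
        rule_block = "\n".join(f"- {r}" for r in rules)
        prompt = (
            f"{task_prompt}\n\n"
            f"Rules learned from past mistakes:\n{rule_block}\n\n"
            f"Input: {text}\nAnswer:"
        )
        prediction = llm(prompt).strip()

        # Incorrect case: ask the LLM to distill a short, general rule from the failure.
        if prediction.lower() != label.lower():
            reflection = (
                "You answered the following input incorrectly.\n"
                f"Input: {text}\nYour answer: {prediction}\nCorrect answer: {label}\n"
                "Write one short, general rule that would prevent this mistake."
            )
            rules.append(llm(reflection).strip())

    return rules
```

Because the rules live outside the primary prompt, this loop can wrap any existing prompting strategy without modifying it.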
Anthology ID: 2023.emnlp-main.109
Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 1751–1777
URL: https://aclanthology.org/2023.emnlp-main.109
DOI: 10.18653/v1/2023.emnlp-main.109
Cite (ACL): Zeyuan Yang, Peng Li, and Yang Liu. 2023. Failures Pave the Way: Enhancing Large Language Models through Tuning-free Rule Accumulation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1751–1777, Singapore. Association for Computational Linguistics.
Cite (Informal): Failures Pave the Way: Enhancing Large Language Models through Tuning-free Rule Accumulation (Yang et al., EMNLP 2023)
PDF: https://aclanthology.org/2023.emnlp-main.109.pdf
Video: https://aclanthology.org/2023.emnlp-main.109.mp4