ExpNote: Black-box Large Language Models are better Task Solvers with Experience Notebook

Wangtao Sun, Xuanqing Yu, Shizhu He, Jun Zhao, Kang Liu


Abstract
Black-box Large Language Models (LLMs) have shown great power in solving various tasks and are considered general problem solvers. However, LLMs still fail on many specific tasks even though they understand the task instructions. In this paper, we focus on the problem of boosting the ability of black-box LLMs to solve downstream tasks. We propose ExpNote, an automated framework that helps LLMs better adapt to unfamiliar tasks by reflecting on and noting experiences from training data and retrieving them from external memory during testing. We evaluate ExpNote on multiple tasks, and the experimental results demonstrate that the proposed method significantly improves the performance of black-box LLMs. The data and code are available at https://github.com/forangel2014/ExpNote.
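The abstract describes a two-stage workflow: during training the LLM reflects on each case and writes an experience note into an external notebook, and at test time relevant notes are retrieved and supplied alongside the query. The sketch below is a minimal illustration of that general idea only, not the authors' implementation (see the linked repository for that). The `call_llm` interface is a hypothetical black-box LLM wrapper, and keyword-overlap retrieval is assumed here as a stand-in for whatever retriever ExpNote actually uses.

```python
# Minimal sketch of an ExpNote-style note-then-retrieve loop.
# NOT the authors' implementation; `call_llm` is a hypothetical interface.
from typing import Callable, List, Tuple

def build_notebook(train_data: List[Tuple[str, str]],
                   call_llm: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Training stage: attempt each case, then ask the LLM to reflect and
    distill a reusable experience note, keyed by the input text."""
    notebook = []
    for question, answer in train_data:
        prediction = call_llm(f"Solve the task:\n{question}")
        reflection_prompt = (
            f"Task input: {question}\n"
            f"Your answer: {prediction}\n"
            f"Correct answer: {answer}\n"
            "Write one short, general note that would help on similar cases."
        )
        note = call_llm(reflection_prompt)
        notebook.append((question, note))
    return notebook

def solve_with_notes(question: str,
                     notebook: List[Tuple[str, str]],
                     call_llm: Callable[[str], str],
                     top_k: int = 2) -> str:
    """Testing stage: retrieve the most relevant notes (by keyword overlap
    in this sketch) and prepend them to the query prompt."""
    q_tokens = set(question.lower().split())
    scored = sorted(notebook,
                    key=lambda kv: len(q_tokens & set(kv[0].lower().split())),
                    reverse=True)
    notes = "\n".join(note for _, note in scored[:top_k])
    return call_llm(
        f"Relevant experience notes:\n{notes}\n\nSolve the task:\n{question}"
    )
```

With a real LLM client plugged in as `call_llm`, the notebook is built once from labeled training data and then reused, unchanged, for every test query.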
Anthology ID: 2023.findings-emnlp.1034
Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
Month: December
Year: 2023
Address: Singapore
Editors: Houda Bouamor, Juan Pino, Kalika Bali
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 15470–15481
URL: https://aclanthology.org/2023.findings-emnlp.1034
DOI: 10.18653/v1/2023.findings-emnlp.1034
Cite (ACL): Wangtao Sun, Xuanqing Yu, Shizhu He, Jun Zhao, and Kang Liu. 2023. ExpNote: Black-box Large Language Models are better Task Solvers with Experience Notebook. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15470–15481, Singapore. Association for Computational Linguistics.
Cite (Informal): ExpNote: Black-box Large Language Models are better Task Solvers with Experience Notebook (Sun et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-emnlp.1034.pdf