MoT: Memory-of-Thought Enables ChatGPT to Self-Improve

Xiaonan Li, Xipeng Qiu


Abstract
Large Language Models (LLMs) have shown impressive abilities on various tasks. However, fundamentally improving them depends on high-quality datasets or computationally expensive fine-tuning. In contrast, humans can easily improve themselves by self-thinking and memory, without external resources. In this paper, we propose a framework, **MoT**, to let the LLM self-improve through **M**emory **o**f **T**houghts, without annotated datasets or parameter updates. Specifically, MoT is divided into two stages: (1) before the test stage, the LLM pre-thinks on an unlabeled dataset and saves the high-confidence thoughts as external memory; (2) during the test stage, given a test question, the LLM recalls relevant memory to help itself reason and answer it. Experimental results show that MoT helps ChatGPT significantly improve its abilities in arithmetic reasoning, commonsense reasoning, factual reasoning, and natural language inference. Further analyses show that each component contributes critically to the improvements and that MoT leads to consistent improvements across various CoT methods and LLMs.
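The two-stage description above maps naturally to a small pipeline. Below is a minimal Python sketch of how such a pipeline might look, not the paper's released implementation: `llm(prompt)`, `retriever.top_k`, `extract_answer`, and the agreement threshold are all illustrative assumptions, and majority agreement across sampled chains merely stands in for whatever confidence filter the paper actually uses to select "high-confidence thoughts".

```python
# Hypothetical sketch of the two MoT stages described in the abstract.
# `llm` is any prompt -> text completion function; `retriever` is any
# similarity-search object with a top_k(query, items, k) method.
from collections import Counter

def extract_answer(thought: str) -> str:
    """Illustrative parser: take the text after 'The answer is'."""
    return thought.rsplit("The answer is", 1)[-1].strip(" .")

def pre_think(llm, unlabeled_questions, n_samples=5, min_agreement=0.8):
    """Stage 1: pre-think on unlabeled questions; keep only thoughts
    whose answers most of the sampled chains agree on."""
    memory = []
    for q in unlabeled_questions:
        thoughts = [llm(f"Q: {q}\nA: Let's think step by step.")
                    for _ in range(n_samples)]
        answers = [extract_answer(t) for t in thoughts]
        answer, count = Counter(answers).most_common(1)[0]
        if count / n_samples >= min_agreement:  # confidence proxy
            best = next(t for t, a in zip(thoughts, answers) if a == answer)
            memory.append({"question": q, "thought": best, "answer": answer})
    return memory

def answer_with_memory(llm, retriever, memory, test_question, k=4):
    """Stage 2: recall relevant memory entries and prepend them as
    in-context demonstrations for the test question."""
    recalled = retriever.top_k(test_question, memory, k)
    demos = "\n\n".join(
        f"Q: {m['question']}\nA: {m['thought']} The answer is {m['answer']}."
        for m in recalled
    )
    return llm(f"{demos}\n\nQ: {test_question}\nA: Let's think step by step.")
```

Note that the saved memory plays the role of self-generated few-shot demonstrations, so no gold labels or parameter updates are involved at either stage.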
Anthology ID:
2023.emnlp-main.392
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6354–6374
URL:
https://aclanthology.org/2023.emnlp-main.392
DOI:
10.18653/v1/2023.emnlp-main.392
Cite (ACL):
Xiaonan Li and Xipeng Qiu. 2023. MoT: Memory-of-Thought Enables ChatGPT to Self-Improve. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6354–6374, Singapore. Association for Computational Linguistics.
Cite (Informal):
MoT: Memory-of-Thought Enables ChatGPT to Self-Improve (Li & Qiu, EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.392.pdf
Video:
https://aclanthology.org/2023.emnlp-main.392.mp4