Continual Learning Using Only Large Language Model Prompting

Jiabao Qiu, Zixuan Ke, Bing Liu


Abstract
We introduce CLOB, a novel continual learning (CL) paradigm in which a large language model (LLM) is regarded as a black box. Learning is done incrementally through verbal prompting alone: CLOB does not fine-tune any part of the LLM or add any trainable parameters to it, which makes it particularly suitable for LLMs that are accessible only via APIs. We also propose a new CL technique, called CIS, based on incremental summarization, which also overcomes the LLM's input length limit. Experiments show that CIS outperforms the baselines by a very large margin.
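To make the paradigm concrete, below is a minimal sketch of how CLOB-style learning with CIS-style incremental summarization might be implemented around a black-box LLM. The `PromptOnlyContinualLearner` class, the `llm` callable, and the prompt wordings are illustrative assumptions, not the authors' actual prompts or code; see the paper PDF linked below for the real method.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical black-box LLM interface: takes a prompt, returns generated text.
# In practice this would wrap an API call; no model weights are ever updated.
LLM = Callable[[str], str]


class PromptOnlyContinualLearner:
    """Illustrative CLOB-style learner: all knowledge lives in natural-language
    class summaries that are updated incrementally by prompting alone."""

    def __init__(self, llm: LLM):
        self.llm = llm
        self.summaries: Dict[str, str] = {}  # class label -> running summary

    def learn_task(self, examples: List[Tuple[str, str]]) -> None:
        """Absorb a new task's (text, label) pairs by updating each class's
        summary. Summarizing keeps the stored knowledge short, so the
        accumulated context stays within the LLM's input length limit."""
        for text, label in examples:
            prior = self.summaries.get(label, "(no prior summary)")
            prompt = (
                f"Current summary of class '{label}':\n{prior}\n\n"
                f"New example of class '{label}':\n{text}\n\n"
                "Write an updated, concise summary of this class that "
                "incorporates the new example. Reply with the summary only."
            )
            self.summaries[label] = self.llm(prompt).strip()

    def classify(self, text: str) -> str:
        """Classify by prompting with the summaries of all classes seen so far."""
        catalog = "\n\n".join(
            f"Class '{label}': {summary}"
            for label, summary in self.summaries.items()
        )
        prompt = (
            f"Known classes and their summaries:\n{catalog}\n\n"
            f"Text to classify:\n{text}\n\n"
            "Reply with the single most likely class label."
        )
        return self.llm(prompt).strip()


if __name__ == "__main__":
    # Dry run with a stub LLM; a real deployment would call an actual API.
    learner = PromptOnlyContinualLearner(llm=lambda prompt: "positive")
    learner.learn_task([("The movie was thrilling.", "positive")])
    print(learner.classify("A delightful film."))  # prints the stub's reply
```

Storing short per-class summaries instead of raw training examples is what keeps the accumulated context bounded, mirroring the motivation for CIS stated in the abstract.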
Anthology ID: 2025.coling-main.402
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 6014–6023
URL: https://aclanthology.org/2025.coling-main.402/
Cite (ACL): Jiabao Qiu, Zixuan Ke, and Bing Liu. 2025. Continual Learning Using Only Large Language Model Prompting. In Proceedings of the 31st International Conference on Computational Linguistics, pages 6014–6023, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Continual Learning Using Only Large Language Model Prompting (Qiu et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.402.pdf