Tree Prompting: Efficient Task Adaptation without Fine-Tuning

Chandan Singh, John Morris, Alexander Rush, Jianfeng Gao, Yuntian Deng


Abstract
Prompting language models (LMs) is the main interface for applying them to new tasks. However, for smaller LMs, prompting yields lower accuracy than gradient-based fine-tuning. Tree Prompting is an approach to prompting that builds a decision tree of prompts, linking multiple prompt-LM calls together to solve a task. At inference time, each call to the LM is determined by efficiently routing the outcome of the previous call using the tree. Experiments on classification datasets show that Tree Prompting improves accuracy over competing methods and is competitive with fine-tuning. We also show that variants of Tree Prompting allow inspection of a model’s decision-making process.
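To make the routing idea in the abstract concrete, below is a minimal sketch of tree-based prompt routing at inference time, assuming binary yes/no prompts at internal nodes and class labels at leaves. The Node fields, the predict function, the prompt template, and the query_lm callable are illustrative assumptions for this sketch, not the paper's actual implementation or API.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    prompt: Optional[str] = None    # prompt asked at an internal node
    left: Optional["Node"] = None   # branch followed when the LM answers "no"
    right: Optional["Node"] = None  # branch followed when the LM answers "yes"
    label: Optional[str] = None     # class label stored at a leaf

def predict(tree: Node, text: str, query_lm: Callable[[str], str]) -> str:
    """Route the input through the tree; each LM call picks the next branch."""
    node = tree
    while node.label is None:
        answer = query_lm(f"{node.prompt}\nInput: {text}\nAnswer:")
        node = node.right if answer.strip().lower().startswith("yes") else node.left
    return node.label

# Usage with a one-level tree and a stub LM (a real system would call a model):
tree = Node(
    prompt="Does the following review express a positive sentiment?",
    left=Node(label="negative"),
    right=Node(label="positive"),
)
fake_lm = lambda p: "yes" if "great" in p else "no"
print(predict(tree, "This movie was great!", fake_lm))  # -> positive

Note that only one root-to-leaf path of prompts is evaluated per input, which is what makes the routing efficient relative to querying every prompt.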
Anthology ID:
2023.emnlp-main.384
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6253–6267
URL:
https://aclanthology.org/2023.emnlp-main.384
DOI:
10.18653/v1/2023.emnlp-main.384
Cite (ACL):
Chandan Singh, John Morris, Alexander Rush, Jianfeng Gao, and Yuntian Deng. 2023. Tree Prompting: Efficient Task Adaptation without Fine-Tuning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6253–6267, Singapore. Association for Computational Linguistics.
Cite (Informal):
Tree Prompting: Efficient Task Adaptation without Fine-Tuning (Singh et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.384.pdf
Video:
https://aclanthology.org/2023.emnlp-main.384.mp4