Revisiting Automated Prompting: Are We Actually Doing Better?

Yulin Zhou, Yiren Zhao, Ilia Shumailov, Robert Mullins, Yarin Gal


Abstract
Current literature demonstrates that Large Language Models (LLMs) are strong few-shot learners, and that prompting significantly increases their performance on a range of downstream tasks in a few-shot learning setting. Attempts to automate human-led prompting followed, with some progress; in particular, subsequent work demonstrates that automated prompting can outperform fine-tuning in certain K-shot learning scenarios. In this paper, we revisit techniques for automated prompting on six downstream tasks and a larger range of K-shot learning settings. We find that automated prompting does not consistently outperform simple manual prompting. Our work suggests that, in addition to fine-tuning, manual prompting should be used as a baseline in this line of research.
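For readers unfamiliar with the setup, the manual-prompting baseline the abstract refers to can be pictured as follows. This is a minimal sketch, not code from the paper: a hand-written cloze-style template is filled with K labeled demonstrations plus the query, and the model's completion is read off as the prediction. All task strings and names below are hypothetical illustrations.

# A minimal sketch (not the paper's code) of a manual K-shot prompt:
# a hand-written template is filled with K labeled demonstrations plus
# the query, and the LLM's completion is read off as the label.
# All strings and names here are hypothetical.

def build_manual_prompt(demos, query):
    """Compose a K-shot prompt from (text, label) demonstrations."""
    template = "Review: {text}\nSentiment: {label}\n\n"
    prompt = "".join(template.format(text=t, label=l) for t, l in demos)
    # Leave the label slot open for the query; the model fills it in.
    return prompt + f"Review: {query}\nSentiment:"

k_shot_demos = [
    ("A moving, beautifully acted film.", "positive"),
    ("Dull plot and wooden dialogue.", "negative"),
]  # K = 2 here; the paper evaluates a wider range of K

print(build_manual_prompt(k_shot_demos, "An unexpected delight."))

Automated prompting methods replace the hand-written template (and sometimes the verbalized labels) with searched or learned alternatives; the paper's finding is that this simple baseline is already hard to beat consistently.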
Anthology ID:
2023.acl-short.155
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1822–1832
URL:
https://aclanthology.org/2023.acl-short.155
DOI:
10.18653/v1/2023.acl-short.155
Cite (ACL):
Yulin Zhou, Yiren Zhao, Ilia Shumailov, Robert Mullins, and Yarin Gal. 2023. Revisiting Automated Prompting: Are We Actually Doing Better? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1822–1832, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Revisiting Automated Prompting: Are We Actually Doing Better? (Zhou et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-short.155.pdf
Video:
https://aclanthology.org/2023.acl-short.155.mp4