A Prompting Assignment for Exploring Pretrained LLMs

Carolyn Anderson


Abstract
As the scale of publicly available large language models (LLMs) has increased, so has interest in few-shot prompting methods. This paper presents an assignment that asks students to explore three aspects of large language model capabilities (commonsense reasoning, factuality, and wordplay) with a prompt engineering focus. The assignment consists of three tasks designed to share a common programming framework, so that students can reuse and adapt code from earlier tasks. Two of the tasks also involve dataset construction: students are asked to construct a simple dataset for the wordplay task and a more challenging dataset for the factuality task. In addition, the assignment includes reflection questions that ask students to think critically about what they observe.
Anthology ID: 2024.teachingnlp-1.12
Volume: Proceedings of the Sixth Workshop on Teaching NLP
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Sana Al-azzawi, Laura Biester, György Kovács, Ana Marasović, Leena Mathur, Margot Mieskes, Leonie Weissweiler
Venues: TeachingNLP | WS
Publisher: Association for Computational Linguistics
Pages: 81–84
URL: https://aclanthology.org/2024.teachingnlp-1.12
Cite (ACL): Carolyn Anderson. 2024. A Prompting Assignment for Exploring Pretrained LLMs. In Proceedings of the Sixth Workshop on Teaching NLP, pages 81–84, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): A Prompting Assignment for Exploring Pretrained LLMs (Anderson, TeachingNLP-WS 2024)
PDF: https://aclanthology.org/2024.teachingnlp-1.12.pdf