Exploring Automated Keyword Mnemonics Generation with Large Language Models via Overgenerate-and-Rank

Jaewook Lee, Hunter McNichols, Andrew Lan


Abstract
In this paper, we study an under-explored area of language and vocabulary learning: keyword mnemonics, a technique for memorizing vocabulary by building memorable associations with a target word via a verbal cue. Creating verbal cues typically requires extensive human effort and is quite time-consuming, necessitating a more scalable, automated method. We propose a novel overgenerate-and-rank method that prompts large language models (LLMs) to generate verbal cues and then ranks them according to psycholinguistic measures and takeaways from a pilot user study. To assess cue quality, we conduct both an automated evaluation of imageability and coherence and a human evaluation involving English teachers and learners. Results show that LLM-generated mnemonics are comparable to human-generated ones in terms of imageability, coherence, and perceived usefulness, but there remains plenty of room for improvement given the diversity in backgrounds and preferences among language learners.
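
The overgenerate-and-rank pipeline described in the abstract can be pictured as a small loop: sample many candidate cues from an LLM at high temperature, score each candidate, and keep the best. Below is a minimal sketch in Python, assuming access to an OpenAI-style chat completion API; the `imageability` and `coherence` scorers are hypothetical placeholders standing in for the psycholinguistic measures and user-study-derived criteria the paper actually uses, and the model name and score weights are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of an overgenerate-and-rank pipeline for keyword mnemonics.
# Assumptions (not from the paper): an OpenAI-style chat API, a hypothetical
# model name, and placeholder scoring functions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def overgenerate(target_word: str, n: int = 10) -> list[str]:
    """Prompt the LLM n times for candidate verbal cues (overgeneration step)."""
    prompt = (
        f"Create a short keyword mnemonic that links the word '{target_word}' "
        f"to a vivid, memorable image via a similar-sounding keyword."
    )
    cues = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical model choice
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,  # high temperature encourages diverse candidates
        )
        cues.append(resp.choices[0].message.content.strip())
    return cues


def imageability(cue: str) -> float:
    """Placeholder: in practice, score against word-level imageability norms."""
    return 0.0


def coherence(cue: str) -> float:
    """Placeholder: in practice, score sentence coherence (e.g., with an LM)."""
    return 0.0


def rank(cues: list[str]) -> list[str]:
    """Rank candidates by a combined score; equal weights are illustrative."""
    return sorted(
        cues,
        key=lambda c: 0.5 * imageability(c) + 0.5 * coherence(c),
        reverse=True,
    )


best_cue = rank(overgenerate("ameliorate"))[0]
```

Overgenerating at high temperature trades extra API calls for candidate diversity, which the ranking stage then filters down to the most imageable and coherent cue.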
Anthology ID: 2024.findings-emnlp.316
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 5521–5542
URL: https://aclanthology.org/2024.findings-emnlp.316
Cite (ACL): Jaewook Lee, Hunter McNichols, and Andrew Lan. 2024. Exploring Automated Keyword Mnemonics Generation with Large Language Models via Overgenerate-and-Rank. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 5521–5542, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Exploring Automated Keyword Mnemonics Generation with Large Language Models via Overgenerate-and-Rank (Lee et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-emnlp.316.pdf