Empirical Study of Zero-shot Keyphrase Extraction with Large Language Models

Byungha Kang, Youhyun Shin


Abstract
This study investigates the effectiveness of Large Language Models (LLMs) for zero-shot keyphrase extraction (KE). We propose and evaluate four prompting strategies: vanilla prompting, role prompting, candidate-based prompting, and hybrid prompting. Experiments conducted on six widely used KE benchmark datasets demonstrate that Llama3-8B-Instruct with vanilla prompting outperforms the state-of-the-art unsupervised method PromptRank by an average of 9.43%, 7.68%, and 4.82% in F1@5, F1@10, and F1@15, respectively. Hybrid prompting, which combines the strengths of vanilla and candidate-based prompting, further enhances overall performance. Moreover, role prompting, which assigns a task-related role to the LLM, consistently improves performance across the various prompting strategies. We also explore the impact of model size and of different LLM series: GPT-4o, Gemma2, and Qwen2. Results show that Llama3 and Gemma2 demonstrate the strongest zero-shot KE performance, with hybrid prompting consistently enhancing results across most LLMs. We hope this study offers insights for researchers exploring LLMs in KE tasks, as well as practical guidance for model selection in real-world applications. Our code is available at https://github.com/kangnlp/Zero-shot-KPE-with-LLMs.
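
For illustration, the minimal Python sketch below shows one way the four prompting strategies described in the abstract could be instantiated as prompt templates. The prompt wordings, the candidate list, and helper names such as vanilla_prompt and with_role are assumptions made for this sketch, not the paper's actual templates or released code.

```python
# Illustrative sketch (assumed wordings, not the paper's exact prompts) of the
# four zero-shot KE prompting strategies: vanilla, role, candidate-based, hybrid.

ROLE = "You are an expert keyphrase extractor."  # assumed role description


def vanilla_prompt(document: str, k: int = 10) -> str:
    # Vanilla prompting: ask the LLM directly for keyphrases from the document.
    return (f"Extract the {k} most important keyphrases from the document below.\n"
            f"Document: {document}\nKeyphrases:")


def candidate_prompt(document: str, candidates: list[str], k: int = 10) -> str:
    # Candidate-based prompting: supply pre-extracted candidate phrases
    # (e.g., noun-phrase chunks) and ask the LLM to select among them.
    return (f"From the candidate phrases {candidates}, select the {k} that best "
            f"summarize the document below.\nDocument: {document}\nKeyphrases:")


def hybrid_prompt(document: str, candidates: list[str], k: int = 10) -> str:
    # Hybrid prompting: combine the vanilla instruction with the candidate list,
    # so the model can both pick candidates and extract phrases directly.
    return (f"Extract the {k} most important keyphrases from the document below. "
            f"You may use these candidate phrases as hints: {candidates}.\n"
            f"Document: {document}\nKeyphrases:")


def with_role(prompt: str) -> str:
    # Role prompting: prepend a task-related role description to any prompt above.
    return f"{ROLE}\n{prompt}"


if __name__ == "__main__":
    doc = "Large language models enable zero-shot keyphrase extraction ..."
    cands = ["large language models", "zero-shot keyphrase extraction"]
    print(with_role(hybrid_prompt(doc, cands)))
```

In an evaluation like the one reported here, the model's generated list would then be parsed and compared against gold keyphrases to compute F1@5, F1@10, and F1@15.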
Anthology ID:
2025.coling-main.248
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
3670–3686
URL:
https://aclanthology.org/2025.coling-main.248/
Cite (ACL):
Byungha Kang and Youhyun Shin. 2025. Empirical Study of Zero-shot Keyphrase Extraction with Large Language Models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 3670–3686, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
Empirical Study of Zero-shot Keyphrase Extraction with Large Language Models (Kang & Shin, COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.248.pdf