Pragmatic Metacognitive Prompting Improves LLM Performance on Sarcasm Detection

Joshua Lee, Wyatt Fong, Alexander Le, Sur Shah, Kevin Han, Kevin Zhu


Abstract
Sarcasm detection is a significant challenge in sentiment analysis because sarcastic language is nuanced and context-dependent. We introduce Pragmatic Metacognitive Prompting (PMP), which leverages principles from pragmatics and metacognitive reflection to help Large Language Models (LLMs) interpret implied meanings, consider contextual cues, and reflect on discrepancies in order to identify sarcasm. Applied to state-of-the-art LLMs such as LLaMA-3-8B, GPT-4o, and Claude 3.5 Sonnet, PMP achieves state-of-the-art performance with GPT-4o on the MUStARD and SemEval2018 benchmarks. This study demonstrates that integrating pragmatic reasoning and metacognitive strategies into prompting significantly enhances LLMs’ ability to detect sarcasm, offering a promising direction for future research in sentiment analysis.
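The abstract describes a two-part idea: a pragmatic analysis step (literal vs. implied meaning, contextual cues) followed by a metacognitive reflection step. The paper's exact prompts are not reproduced on this page, so the staging, wording, and function names below are illustrative assumptions, not the authors' implementation; this is only a minimal sketch of how such a pipeline might assemble its prompts.

```python
# Illustrative sketch of a PMP-style two-stage prompt pipeline.
# All prompt wording and names here are assumptions for illustration;
# the paper's actual prompts may differ.

PRAGMATIC_STAGE = (
    "Analyze the utterance pragmatically:\n"
    "1. What is its literal meaning?\n"
    "2. What meaning is implied by the context?\n"
    "3. What contextual cues (tone, situation, speaker intent) are present?\n"
    "Utterance: {utterance}\n"
    "Context: {context}"
)

REFLECTION_STAGE = (
    "Reflect on the following analysis:\n{analysis}\n"
    "Is there a discrepancy between the literal and implied meanings?\n"
    "Based on this reflection, answer: is the utterance sarcastic? (yes/no)"
)

def build_pmp_prompts(utterance: str, context: str, analysis: str = ""):
    """Return (stage1, stage2) prompts. Stage 1 asks the model for a
    pragmatic analysis; stage 2 embeds that analysis and asks the model
    to reflect before emitting a final yes/no sarcasm label."""
    stage1 = PRAGMATIC_STAGE.format(utterance=utterance, context=context)
    stage2 = REFLECTION_STAGE.format(analysis=analysis)
    return stage1, stage2
```

In use, stage 1 would be sent to the LLM first, and its response passed as `analysis` when building the stage-2 prompt, so the model reflects on its own pragmatic analysis before committing to a label.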
Anthology ID:
2025.chum-1.7
Volume:
Proceedings of the 1st Workshop on Computational Humor (CHum)
Month:
January
Year:
2025
Address:
Online
Editors:
Christian F. Hempelmann, Julia Rayz, Tiansi Dong, Tristan Miller
Venues:
chum | WS
Publisher:
Association for Computational Linguistics
Pages:
63–70
URL:
https://aclanthology.org/2025.chum-1.7/
Cite (ACL):
Joshua Lee, Wyatt Fong, Alexander Le, Sur Shah, Kevin Han, and Kevin Zhu. 2025. Pragmatic Metacognitive Prompting Improves LLM Performance on Sarcasm Detection. In Proceedings of the 1st Workshop on Computational Humor (CHum), pages 63–70, Online. Association for Computational Linguistics.
Cite (Informal):
Pragmatic Metacognitive Prompting Improves LLM Performance on Sarcasm Detection (Lee et al., chum 2025)
PDF:
https://aclanthology.org/2025.chum-1.7.pdf