Exploring Euphemism Detection in Few-Shot and Zero-Shot Settings

Sedrick Scott Keh


Abstract
This work builds upon the Euphemism Detection Shared Task proposed in the EMNLP 2022 FigLang Workshop and extends it to few-shot and zero-shot settings. We demonstrate a few-shot and zero-shot formulation using the dataset from the shared task, and we conduct experiments in these settings using RoBERTa and GPT-3. Our results show that language models can classify euphemistic terms relatively well, even for terms unseen during training, indicating that they are able to capture higher-level concepts related to euphemisms.
Anthology ID:
2022.flp-1.24
Volume:
Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates (Hybrid)
Editors:
Debanjan Ghosh, Beata Beigman Klebanov, Smaranda Muresan, Anna Feldman, Soujanya Poria, Tuhin Chakrabarty
Venue:
Fig-Lang
Publisher:
Association for Computational Linguistics
Pages:
167–172
URL:
https://aclanthology.org/2022.flp-1.24
DOI:
10.18653/v1/2022.flp-1.24
Cite (ACL):
Sedrick Scott Keh. 2022. Exploring Euphemism Detection in Few-Shot and Zero-Shot Settings. In Proceedings of the 3rd Workshop on Figurative Language Processing (FLP), pages 167–172, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Exploring Euphemism Detection in Few-Shot and Zero-Shot Settings (Keh, Fig-Lang 2022)
PDF:
https://aclanthology.org/2022.flp-1.24.pdf