Improvements and Extensions on Metaphor Detection

Weicheng Ma, Ruibo Liu, Lili Wang, Soroush Vosoughi


Abstract
Metaphors are ubiquitous in human language. The metaphor detection (MD) task aims to detect and interpret metaphors in written language, which is crucial for natural language understanding (NLU) research. In this paper, we first introduce a pre-trained Transformer-based model into MD; it outperforms the previous state-of-the-art models by large margins in our evaluations, with relative improvements in F1 score ranging from 5.33% to 28.39%. Second, we extend MD to a classification task over the metaphoricity of an entire piece of text, making MD applicable in more general NLU settings. Finally, we clean up the improper or outdated annotations in one of the MD benchmark datasets and re-benchmark it with our Transformer-based model. This approach could be applied to the other existing MD datasets as well, since their metaphoricity annotations may be similarly outdated. Future research efforts are also needed to build an up-to-date, well-annotated dataset consisting of longer and more complex texts.
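To make the two task framings in the abstract concrete, the following is a minimal sketch of MD as token-level classification with a pre-trained Transformer, using the HuggingFace Transformers library. The backbone model, label set, and example sentence are illustrative assumptions, not the authors' actual architecture or training setup, and the classification head below is randomly initialized (a real system would be fine-tuned on an MD benchmark such as VUA).

    # Sketch: token-level metaphor detection with a pre-trained Transformer.
    # Assumptions (not from the paper): backbone "bert-base-uncased",
    # binary labels 0 = literal, 1 = metaphorical.
    import torch
    from transformers import AutoTokenizer, AutoModelForTokenClassification

    MODEL_NAME = "bert-base-uncased"  # placeholder backbone

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForTokenClassification.from_pretrained(
        MODEL_NAME, num_labels=2  # head is untrained here; fine-tune on MD data
    )

    sentence = "She shot down all of my arguments."
    inputs = tokenizer(sentence, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, 2)

    preds = logits.argmax(dim=-1).squeeze(0)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    for token, label in zip(tokens, preds.tolist()):
        print(f"{token}\t{'metaphor' if label == 1 else 'literal'}")

The text-level extension described in the abstract would correspond to swapping in AutoModelForSequenceClassification, so that a single metaphoricity label is predicted for the whole input rather than one label per token.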
Anthology ID:
2021.unimplicit-1.5
Volume:
Proceedings of the 1st Workshop on Understanding Implicit and Underspecified Language
Month:
August
Year:
2021
Address:
Online
Editors:
Michael Roth, Reut Tsarfaty, Yoav Goldberg
Venue:
unimplicit
Publisher:
Association for Computational Linguistics
Pages:
33–42
URL:
https://aclanthology.org/2021.unimplicit-1.5
DOI:
10.18653/v1/2021.unimplicit-1.5
Cite (ACL):
Weicheng Ma, Ruibo Liu, Lili Wang, and Soroush Vosoughi. 2021. Improvements and Extensions on Metaphor Detection. In Proceedings of the 1st Workshop on Understanding Implicit and Underspecified Language, pages 33–42, Online. Association for Computational Linguistics.
Cite (Informal):
Improvements and Extensions on Metaphor Detection (Ma et al., unimplicit 2021)
PDF:
https://aclanthology.org/2021.unimplicit-1.5.pdf