An analysis of language models for metaphor recognition

Arthur Neidlein, Philip Wiesenbach, Katja Markert


Abstract
We conduct a linguistic analysis of recent metaphor recognition systems, all of which are based on language models. We show that their performance, although reaching high F-scores, has considerable gaps from a linguistic perspective. First, they perform substantially worse on unconventional metaphors than on conventional ones. Second, they struggle with handling rarer word types. These two findings together suggest that a large part of the systems’ success is due to optimising the disambiguation of conventionalised, metaphoric word senses for specific words instead of modelling general properties of metaphors. As a positive result, the systems show increasing capabilities to recognise metaphoric readings of unseen words if synonyms or morphological variations of these words have been seen before, leading to enhanced generalisation beyond word sense disambiguation.
Anthology ID:
2020.coling-main.332
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
3722–3736
URL:
https://aclanthology.org/2020.coling-main.332
DOI:
10.18653/v1/2020.coling-main.332
Cite (ACL):
Arthur Neidlein, Philip Wiesenbach, and Katja Markert. 2020. An analysis of language models for metaphor recognition. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3722–3736, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
An analysis of language models for metaphor recognition (Neidlein et al., COLING 2020)
PDF:
https://aclanthology.org/2020.coling-main.332.pdf