Predicting Humorousness and Metaphor Novelty with Gaussian Process Preference Learning

Edwin Simpson, Erik-Lân Do Dinh, Tristan Miller, Iryna Gurevych


Abstract
The inability to quantify key aspects of creative language is a frequent obstacle to natural language understanding. To address this, we introduce novel tasks for evaluating the creativeness of language—namely, scoring and ranking text by humorousness and metaphor novelty. To sidestep the difficulty of assigning discrete labels or numeric scores, we learn from pairwise comparisons between texts. We introduce a Bayesian approach for predicting humorousness and metaphor novelty using Gaussian process preference learning (GPPL), which achieves a Spearman’s ρ of 0.56 against gold using word embeddings and linguistic features. Our experiments show that given sparse, crowdsourced annotation data, ranking using GPPL outperforms best–worst scaling. We release a new dataset for evaluating humour containing 28,210 pairwise comparisons of 4,030 texts, and make our software freely available.
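The abstract reports model quality as a Spearman's ρ of 0.56 against gold rankings. As a brief, illustrative sketch of that metric (this is not the authors' code, and the scores below are invented), Spearman's ρ correlates the *ranks* of predicted and gold scores; with no ties it reduces to 1 − 6Σd²/(n(n²−1)):

```python
# Illustrative computation of Spearman's rank correlation (no ties assumed),
# the metric used to compare predicted and gold humorousness rankings.

def rank(values):
    """Return the rank (1 = smallest) of each value in the list."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman_rho(gold, predicted):
    """Spearman's rho via the no-ties formula: 1 - 6*sum(d^2) / (n*(n^2-1))."""
    assert len(gold) == len(predicted)
    n = len(gold)
    d2 = sum((g - p) ** 2 for g, p in zip(rank(gold), rank(predicted)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Example with hypothetical humorousness scores for five texts.
gold = [0.1, 0.4, 0.2, 0.9, 0.7]
pred = [0.2, 0.3, 0.1, 0.8, 0.9]
print(spearman_rho(gold, pred))  # prints 0.8
```

In practice one would use `scipy.stats.spearmanr`, which also handles tied ranks; the pure-Python version above is only meant to make the reported metric concrete.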
Anthology ID: P19-1572
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2019
Address: Florence, Italy
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 5716–5728
URL: https://aclanthology.org/P19-1572
DOI: 10.18653/v1/P19-1572
Cite (ACL): Edwin Simpson, Erik-Lân Do Dinh, Tristan Miller, and Iryna Gurevych. 2019. Predicting Humorousness and Metaphor Novelty with Gaussian Process Preference Learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5716–5728, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Predicting Humorousness and Metaphor Novelty with Gaussian Process Preference Learning (Simpson et al., ACL 2019)
PDF: https://aclanthology.org/P19-1572.pdf
Video: https://vimeo.com/385226013
Code: ukplab/acl2019-GPPL-humour-metaphor