Human Judgement as a Compass to Navigate Automatic Metrics for Formality Transfer

Huiyuan Lai, Jiali Mao, Antonio Toral, Malvina Nissim


Abstract
Although text style transfer has witnessed rapid development in recent years, there is as yet no established standard for its evaluation, which is typically performed with a variety of automatic metrics, since resorting to human judgement is not always possible. We focus on the task of formality transfer, and on the three aspects that are usually evaluated: style strength, content preservation, and fluency. To cast light on how these aspects are assessed by common and new metrics, we run a human-based evaluation and perform a rich correlation analysis. We then offer recommendations on the use of such metrics for formality transfer, also with an eye to their generalisability (or not) to related tasks.
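
As a rough illustration of the kind of correlation analysis described in the abstract, the Python sketch below correlates an automatic metric's scores with human ratings of the same system outputs using Pearson and Spearman coefficients. All values and variable names are invented for illustration; they are not taken from the paper or its repository.

# Illustrative sketch (not from the paper's code): correlating an
# automatic metric's scores with human judgements of the same outputs.
from scipy.stats import pearsonr, spearmanr

# Hypothetical scores, one per system output (values invented for illustration).
metric_scores = [0.72, 0.41, 0.88, 0.55, 0.63, 0.91, 0.34, 0.77]
human_ratings = [4.0, 2.5, 4.5, 3.0, 3.5, 5.0, 2.0, 4.0]  # e.g. a 1-5 Likert scale

pearson_r, pearson_p = pearsonr(metric_scores, human_ratings)
spearman_rho, spearman_p = spearmanr(metric_scores, human_ratings)

print(f"Pearson r    = {pearson_r:.3f} (p = {pearson_p:.3f})")
print(f"Spearman rho = {spearman_rho:.3f} (p = {spearman_p:.3f})")
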
Anthology ID:
2022.humeval-1.9
Volume:
Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Anya Belz, Maja Popović, Ehud Reiter, Anastasia Shimorina
Venue:
HumEval
Publisher:
Association for Computational Linguistics
Pages:
102–115
URL:
https://aclanthology.org/2022.humeval-1.9
DOI:
10.18653/v1/2022.humeval-1.9
Cite (ACL):
Huiyuan Lai, Jiali Mao, Antonio Toral, and Malvina Nissim. 2022. Human Judgement as a Compass to Navigate Automatic Metrics for Formality Transfer. In Proceedings of the 2nd Workshop on Human Evaluation of NLP Systems (HumEval), pages 102–115, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Human Judgement as a Compass to Navigate Automatic Metrics for Formality Transfer (Lai et al., HumEval 2022)
PDF:
https://aclanthology.org/2022.humeval-1.9.pdf
Video:
https://aclanthology.org/2022.humeval-1.9.mp4
Code
laihuiyuan/eval-formality-transfer
Data
GYAFC