Gradations of Error Severity in Automatic Image Descriptions
Emiel van Miltenburg, Wei-Ting Lu, Emiel Krahmer, Albert Gatt, Guanyi Chen, Lin Li, Kees van Deemter
Proceedings of the 13th International Conference on Natural Language Generation (INLG), 2020
Earlier research has shown that evaluation metrics based on textual similarity (e.g., BLEU, CIDEr, Meteor) do not correlate well with human evaluation scores for automatically generated text. We carried out an experiment with Chinese speakers, in which we systematically manipulated image descriptions to contain different kinds of errors. Because our manipulated descriptions form minimal pairs with the reference descriptions, we are able to assess the impact of different kinds of errors on the perceived quality of the descriptions. Our results show that different kinds of errors elicit significantly different evaluation scores, even though all erroneous descriptions differ in only one character from the reference descriptions. Evaluation metrics based solely on textual similarity are unable to capture these differences, which (at least partially) explains their poor correlation with human judgments. Our work lays the foundation for future research, in which we aim to understand why different errors are seen as more or less severe.
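The abstract's central claim lends itself to a quick demonstration. The sketch below (Python with NLTK; not the authors' code, and the Chinese sentences are invented for illustration, not drawn from the paper's dataset) computes character-level BLEU for two descriptions that each differ from a reference in exactly one character: one swaps a person, the other a location. BLEU scores the two identically, while the paper's human ratings distinguish error types significantly.

```python
# Minimal sketch (not the authors' code): character-level BLEU assigns the
# same score to two single-character manipulations of a Chinese reference,
# regardless of the kind of error involved. The example sentences are
# invented for illustration only.
from nltk.translate.bleu_score import sentence_bleu

reference = list("一个男人在公园里遛狗")     # "A man is walking a dog in the park"
person_error = list("一个男孩在公园里遛狗")  # 人 -> 孩: "man" becomes "boy"
scene_error = list("一个男人在公车里遛狗")   # 园 -> 车: "park" becomes "bus"

for label, hypothesis in [("person error", person_error),
                          ("scene error", scene_error)]:
    score = sentence_bleu([reference], hypothesis)
    print(f"{label}: BLEU = {score:.3f}")

# Both edits sit at interior positions and break the same number of
# character n-grams, so BLEU prints the same score for both hypotheses,
# while human judges rate the two errors differently.
```

Any metric that scores only surface overlap is blind to this difference in severity, which is the explanation the abstract offers for the poor correlation with human judgments.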