Kuan-Chieh Lo
2022
Learning to Rank Visual Stories From Human Ranking Data
Chi-Yang Hsu | Yun-Wei Chu | Vincent Chen | Kuan-Chieh Lo | Chacha Chen | Ting-Hao Huang | Lun-Wei Ku
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Visual storytelling (VIST) is a typical vision-and-language task that has seen extensive development in natural language generation research. However, it remains unclear whether conventional automatic evaluation metrics for text generation are applicable to VIST. In this paper, we present the VHED (VIST Human Evaluation Data) dataset, which is the first to re-purpose human evaluation results for automatic evaluation; building on it, we develop Vrank (VIST Ranker), a novel reference-free VIST metric for story evaluation. We first show that the results from commonly adopted automatic metrics for text generation correlate little with those obtained from human evaluation, which motivates us to learn the automatic evaluation model directly from human evaluation results. In our experiments, we evaluate generated stories by predicting story ranks with our model as well as with other reference-based and reference-free metrics. Results show that Vrank's predictions align significantly better with human evaluation than those of other metrics, with almost 30% higher accuracy when ranking story pairs. Moreover, only Vrank shows human-like behavior: it is especially good at identifying the better story when the quality gap between two stories is large. Finally, we demonstrate Vrank's generalizability to purely textual stories, and conclude that this reuse of human evaluation results puts Vrank in a strong position for continued future advances.
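To make the pairwise evaluation concrete, below is a minimal sketch of the kind of pairwise ranking accuracy the abstract refers to: the fraction of story pairs on which a metric prefers the story that human annotators ranked higher. All names and the toy scorer are illustrative assumptions, not the authors' actual code or the VHED data format.

```python
# Hypothetical sketch: pairwise ranking accuracy for comparing a story-evaluation
# metric (e.g. a Vrank-style reference-free scorer) against human ranking data.
from typing import Callable, List, Tuple


def pairwise_ranking_accuracy(
    pairs: List[Tuple[str, str]],     # (story_a, story_b) text pairs
    human_prefers_a: List[bool],      # True if annotators ranked story_a higher
    score: Callable[[str], float],    # reference-free metric assigning a score to a story
) -> float:
    """Fraction of story pairs on which the metric agrees with the human ranking."""
    correct = 0
    for (story_a, story_b), prefer_a in zip(pairs, human_prefers_a):
        metric_prefers_a = score(story_a) > score(story_b)
        correct += int(metric_prefers_a == prefer_a)
    return correct / len(pairs)


if __name__ == "__main__":
    # Toy usage with a stand-in scorer (story length); a real scorer would be
    # a model trained on human ranking data such as VHED.
    toy_pairs = [("a short story.", "a longer, more detailed story about the photo album.")]
    print(pairwise_ranking_accuracy(toy_pairs, human_prefers_a=[False], score=len))
```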