%0 Conference Proceedings
%T Agreement is overrated: A plea for correlation to assess human evaluation reliability
%A Amidei, Jacopo
%A Piwek, Paul
%A Willis, Alistair
%Y van Deemter, Kees
%Y Lin, Chenghua
%Y Takamura, Hiroya
%S Proceedings of the 12th International Conference on Natural Language Generation
%D 2019
%8 oct–nov
%I Association for Computational Linguistics
%C Tokyo, Japan
%F amidei-etal-2019-agreement
%X Inter-Annotator Agreement (IAA) is used as a means of assessing the quality of NLG evaluation data, in particular, its reliability. According to existing scales of IAA interpretation – see, for example, Lommel et al. (2014), Liu et al. (2016), Sedoc et al. (2018) and Amidei et al. (2018a) – most data collected for NLG evaluation fail the reliability test. We confirmed this trend by analysing papers published over the last 10 years in NLG-specific conferences (in total 135 papers that included some sort of human evaluation study). Following Sampson and Babarczy (2008), Lommel et al. (2014), Joshi et al. (2016) and Amidei et al. (2018b), such phenomena can be explained in terms of irreducible human language variability. Using three case studies, we show the limits of considering IAA as the only criterion for checking evaluation reliability. Given human language variability, we propose that for human evaluation of NLG, correlation coefficients and agreement coefficients should be used together to obtain a better assessment of the evaluation data reliability. This is illustrated using the three case studies.
%R 10.18653/v1/W19-8642
%U https://aclanthology.org/W19-8642
%U https://doi.org/10.18653/v1/W19-8642
%P 344–354