2024
ReproHum #0033-3: Comparable Relative Results with Lower Absolute Values in a Reproduction Study
Yiru Li | Huiyuan Lai | Antonio Toral | Malvina Nissim
Proceedings of the Fourth Workshop on Human Evaluation of NLP Systems (HumEval) @ LREC-COLING 2024
In the context of the ReproHum project, which aims to assess the reliability of human evaluation, we replicated the human evaluation conducted in “Generating Scientific Definitions with Controllable Complexity” by August et al. (2022). Specifically, humans were asked to assess the fluency of scientific definitions automatically generated by three different models, with output complexity varying according to the target audience. Evaluation conditions were kept as close as possible to the original study, except for necessary and minor adjustments. Our results, despite yielding lower absolute performance, show that relative performance across the three tested systems remains comparable to what was observed in the original paper. On the basis of the lower inter-annotator agreement and the feedback received from annotators in our experiment, we also observe that the ambiguity of the concept being evaluated may play a substantial role in human assessment.
2023
Same Trends, Different Answers: Insights from a Replication Study of Human Plausibility Judgments on Narrative Continuations
Yiru Li | Huiyuan Lai | Antonio Toral | Malvina Nissim
Proceedings of the 3rd Workshop on Human Evaluation of NLP Systems
We reproduced the human-based evaluation of the narrative continuation task presented by Chakrabarty et al. (2022). This experiment was performed as part of the ReproNLP Shared Task on Reproducibility of Evaluations in NLP (Track C). Our main goal is to reproduce the original study under conditions as similar as possible. Specifically, we follow the original experimental design and perform human evaluations of the data from the original study, while describing the differences between the two studies. We then present the results of both studies together with an analysis of the similarities between them. Inter-annotator agreement (Krippendorff’s alpha) in the reproduction study is lower than in the original study, but the human evaluation results of both studies show the same trends; that is, our results support the findings of the original study.
Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP
Anya Belz | Craig Thomson | Ehud Reiter | Gavin Abercrombie | Jose M. Alonso-Moral | Mohammad Arvan | Anouck Braggaar | Mark Cieliebak | Elizabeth Clark | Kees van Deemter | Tanvi Dinkar | Ondřej Dušek | Steffen Eger | Qixiang Fang | Mingqi Gao | Albert Gatt | Dimitra Gkatzia | Javier González-Corbelle | Dirk Hovy | Manuela Hürlimann | Takumi Ito | John D. Kelleher | Filip Klubicka | Emiel Krahmer | Huiyuan Lai | Chris van der Lee | Yiru Li | Saad Mahamood | Margot Mieskes | Emiel van Miltenburg | Pablo Mosteiro | Malvina Nissim | Natalie Parde | Ondřej Plátek | Verena Rieser | Jie Ruan | Joel Tetreault | Antonio Toral | Xiaojun Wan | Leo Wanner | Lewis Watson | Diyi Yang
Proceedings of the Fourth Workshop on Insights from Negative Results in NLP
We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more or less reproducible. We present our results and findings, which include that just 13% of papers had both (i) sufficiently low barriers to reproduction and (ii) enough obtainable information to be considered for reproduction, and that all but one of the experiments we selected for reproduction were found to have flaws that made the meaningfulness of conducting a reproduction questionable. As a result, we had to change our coordinated study design from a reproduce approach to a standardise-then-reproduce-twice approach. Our overall (negative) finding that the great majority of human evaluations in NLP are not repeatable and/or not reproducible and/or too flawed to justify reproduction paints a dire picture, but presents an opportunity for a rethink of how to design and report human evaluations in NLP.