Luou Wen
2023
Barriers and enabling factors for error analysis in NLG research
Emiel van Miltenburg | Miruna Clinciu | Ondřej Dušek | Dimitra Gkatzia | Stephanie Inglis | Leo Leppänen | Saad Mahamood | Stephanie Schoch | Craig Thomson | Luou Wen
Northern European Journal of Language Technology, Volume 9
Earlier research has shown that few studies in Natural Language Generation (NLG) evaluate their system outputs using an error analysis, despite known limitations of automatic evaluation metrics and human ratings. This position paper takes the stance that error analyses should be encouraged, and discusses several ways to do so. This paper is based on our shared experience as authors as well as a survey we distributed as a means of public consultation. We provide an overview of existing barriers to carrying out error analyses, and propose changes to improve error reporting in the NLG literature.
2021
Underreporting of errors in NLG output, and what to do about it
Emiel van Miltenburg | Miruna Clinciu | Ondřej Dušek | Dimitra Gkatzia | Stephanie Inglis | Leo Leppänen | Saad Mahamood | Emma Manning | Stephanie Schoch | Craig Thomson | Luou Wen
Proceedings of the 14th International Conference on Natural Language Generation
We observe a severe under-reporting of the different kinds of errors that Natural Language Generation systems make. This is a problem, because mistakes are an important indicator of where systems should still be improved. If authors only report overall performance metrics, the research community is left in the dark about the specific weaknesses that are exhibited by ‘state-of-the-art’ research. Next to quantifying the extent of error under-reporting, this position paper provides recommendations for error identification, analysis and reporting.