%0 Conference Proceedings %T A General Benchmarking Framework for Text Generation %A Moussallem, Diego %A Kaur, Paramjot %A Ferreira, Thiago %A van der Lee, Chris %A Shimorina, Anastasia %A Conrads, Felix %A Röder, Michael %A Speck, René %A Gardent, Claire %A Mille, Simon %A Ilinykh, Nikolai %A Ngonga Ngomo, Axel-Cyrille %Y Castro Ferreira, Thiago %Y Gardent, Claire %Y Ilinykh, Nikolai %Y van der Lee, Chris %Y Mille, Simon %Y Moussallem, Diego %Y Shimorina, Anastasia %S Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+) %D 2020 %8 December %I Association for Computational Linguistics %C Dublin, Ireland (Virtual) %F moussallem-etal-2020-general %X The RDF-to-text task has recently gained substantial attention due to the continuous growth of RDF knowledge graphs in number and size. Recent studies have focused on systematically comparing RDF-to-text approaches on benchmarking datasets such as WebNLG. Although some evaluation tools have already been proposed for text generation, none of the existing solutions abides by the Findability, Accessibility, Interoperability, and Reusability (FAIR) principles or involves RDF data for the knowledge extraction task. In this paper, we present BENG, a FAIR benchmarking platform for Natural Language Generation (NLG) and Knowledge Extraction systems with a focus on RDF data. BENG builds upon the successful benchmarking platform GERBIL, is open-source, and is publicly available along with the data it contains. %U https://aclanthology.org/2020.webnlg-1.3 %P 27-33