%0 Conference Proceedings
%T NUIG-DSI’s submission to The GEM Benchmark 2021
%A Pasricha, Nivranshu
%A Arcan, Mihael
%A Buitelaar, Paul
%Y Bosselut, Antoine
%Y Durmus, Esin
%Y Gangal, Varun Prashant
%Y Gehrmann, Sebastian
%Y Jernite, Yacine
%Y Perez-Beltrachini, Laura
%Y Shaikh, Samira
%Y Xu, Wei
%S Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)
%D 2021
%8 August
%I Association for Computational Linguistics
%C Online
%F pasricha-etal-2021-nuig
%X This paper describes the submission by NUIG-DSI to the GEM benchmark 2021. We participate in the modeling shared task where we submit outputs on four datasets for data-to-text generation, namely, DART, WebNLG (en), E2E and CommonGen. We follow an approach similar to the one described in the GEM benchmark paper where we use the pre-trained T5-base model for our submission. We train this model on additional monolingual data where we experiment with different masking strategies specifically focused on masking entities, predicates and concepts as well as a random masking strategy for pre-training. In our results we find that random masking performs the best in terms of automatic evaluation metrics, though the results are not statistically significantly different compared to other masking strategies.
%R 10.18653/v1/2021.gem-1.13
%U https://aclanthology.org/2021.gem-1.13
%U https://doi.org/10.18653/v1/2021.gem-1.13
%P 148-154