Self-supervised Graph Masking Pre-training for Graph-to-Text Generation

Jiuzhou Han, Ehsan Shareghi


Abstract
Large-scale pre-trained language models (PLMs) have advanced Graph-to-Text (G2T) generation by processing the linearised version of a graph. However, linearisation is known to discard structural information. Additionally, PLMs are typically pre-trained on free text, which introduces a domain mismatch between pre-training and downstream G2T generation tasks. To address these shortcomings, we propose graph masking pre-training strategies that neither require supervision signals nor adjust the architecture of the underlying pre-trained encoder-decoder model. When used with a pre-trained T5, our approach achieves new state-of-the-art results on the WebNLG+2020 and EventNarrative G2T generation datasets. Our method is also shown to be very effective in the low-resource setting.
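The abstract only sketches the idea at a high level. As a purely illustrative aid, the snippet below shows one way a self-supervised masking objective over a linearised graph could be constructed for a T5-style encoder-decoder. The <H>/<R>/<T> markers, the component-level masking rate, and the function name are assumptions made for illustration, not the authors' implementation; only the <extra_id_*> sentinel tokens follow standard T5 conventions.

```python
import random

def mask_linearised_graph(triples, mask_prob=0.3, seed=0):
    """Linearise (head, relation, tail) triples as '<H> head <R> relation <T> tail ...'
    and mask randomly chosen components with T5-style sentinel tokens, yielding a
    (source, target) pair for self-supervised pre-training.
    Markers and masking rate are illustrative choices, not the paper's exact setup."""
    rng = random.Random(seed)
    source, target = [], []
    sentinel = 0
    for head, relation, tail in triples:
        for marker, component in (("<H>", head), ("<R>", relation), ("<T>", tail)):
            source.append(marker)
            if rng.random() < mask_prob:
                # Replace the graph component with the next unused sentinel token
                # and move the original text to the target sequence.
                source.append(f"<extra_id_{sentinel}>")
                target.extend([f"<extra_id_{sentinel}>", component])
                sentinel += 1
            else:
                source.append(component)
    return " ".join(source), " ".join(target)

# Example: masking components of two WebNLG-style triples.
triples = [
    ("Alan_Bean", "occupation", "Test_pilot"),
    ("Alan_Bean", "mission", "Apollo_12"),
]
src, tgt = mask_linearised_graph(triples)
print(src)
print(tgt)
```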
Anthology ID: 2022.emnlp-main.321
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 4845–4853
URL: https://aclanthology.org/2022.emnlp-main.321
DOI: 10.18653/v1/2022.emnlp-main.321
Cite (ACL): Jiuzhou Han and Ehsan Shareghi. 2022. Self-supervised Graph Masking Pre-training for Graph-to-Text Generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 4845–4853, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Self-supervised Graph Masking Pre-training for Graph-to-Text Generation (Han & Shareghi, EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.321.pdf