Structural Information Preserving for Graph-to-Text Generation

Linfeng Song, Ante Wang, Jinsong Su, Yue Zhang, Kun Xu, Yubin Ge, Dong Yu


Abstract
The task of graph-to-text generation aims to produce sentences that preserve the meaning of input graphs. A crucial defect of current state-of-the-art models is that they may garble or even drop the core structural information of input graphs when generating outputs. We propose to tackle this problem by leveraging richer training signals that guide our model to preserve input information. In particular, we introduce two types of autoencoding losses, each focusing on a different aspect (a.k.a. view) of the input graphs. These losses are then back-propagated to better calibrate our model via multi-task training. Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline.
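To illustrate the multi-task objective the abstract describes, here is a minimal PyTorch-style sketch: a generation loss combined with two auxiliary autoencoding losses, one per graph view, back-propagated jointly. All names (the model's methods, batch fields, and the loss weights alpha/beta) are hypothetical illustrations, not the paper's actual implementation.

```python
import torch

def training_step(model, batch, alpha=1.0, beta=1.0):
    """One multi-task update: generation loss plus two view-reconstruction losses.

    All method and attribute names below are hypothetical; the paper's
    architecture and exact view definitions differ in detail.
    """
    # Primary graph-to-text loss (e.g., cross-entropy over target tokens).
    gen_loss = model.generation_loss(batch.graph, batch.text)

    # Two autoencoding losses, each reconstructing a different view of the
    # input graph from the shared encoder states (e.g., a node/concept view
    # and an edge/relation view).
    ae_loss_view1 = model.reconstruct_view1(batch.graph)
    ae_loss_view2 = model.reconstruct_view2(batch.graph)

    # Multi-task training: back-propagate the weighted sum of all losses so
    # the auxiliary signals calibrate the shared encoder.
    loss = gen_loss + alpha * ae_loss_view1 + beta * ae_loss_view2
    loss.backward()
    return loss.item()
```

The key design point is that the autoencoding losses share the encoder with the generator, so gradients from reconstruction push the encoder to retain the structural information that generation alone might discard.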
Anthology ID:
2020.acl-main.712
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7987–7998
URL:
https://aclanthology.org/2020.acl-main.712
DOI:
10.18653/v1/2020.acl-main.712
Cite (ACL):
Linfeng Song, Ante Wang, Jinsong Su, Yue Zhang, Kun Xu, Yubin Ge, and Dong Yu. 2020. Structural Information Preserving for Graph-to-Text Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7987–7998, Online. Association for Computational Linguistics.
Cite (Informal):
Structural Information Preserving for Graph-to-Text Generation (Song et al., ACL 2020)
PDF:
https://aclanthology.org/2020.acl-main.712.pdf
Video:
http://slideslive.com/38928740
Code:
Soistesimmer/AMR-multiview
Data:
WebNLG