Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs

Martin Schmitt, Leonardo F. R. Ribeiro, Philipp Dufter, Iryna Gurevych, Hinrich Schütze


Abstract
We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation. With our graph self-attention, the encoding of a node relies on all nodes in the input graph (not only its direct neighbors), facilitating the detection of global patterns. We represent the relation between two nodes as the length of the shortest path between them. Graformer learns to weight these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph. We evaluate Graformer on two popular graph-to-text generation benchmarks, AGENDA and WebNLG, where it achieves strong performance while using many fewer parameters than other approaches.
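The node-node relation described in the abstract (the length of the shortest path between two nodes) can be computed with a breadth-first search from each node. The sketch below is illustrative only: the function name and the use of -1 for disconnected pairs are assumptions, and the paper's exact distance definition (e.g. direction handling or distance bucketing) may differ.

```python
from collections import deque

def shortest_path_lengths(num_nodes, edges):
    """All-pairs shortest-path lengths via BFS from each node.

    Unreachable pairs get -1, which a model could map to a
    dedicated "disconnected" relation embedding (assumption,
    not necessarily the paper's choice).
    """
    adj = [[] for _ in range(num_nodes)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)  # treat the graph as undirected for distances
    dist = [[-1] * num_nodes for _ in range(num_nodes)]
    for src in range(num_nodes):
        dist[src][src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[src][v] == -1:
                    dist[src][v] = dist[src][u] + 1
                    queue.append(v)
    return dist

# Tiny knowledge graph: edges 0-1 and 1-2, plus an isolated node 3
d = shortest_path_lengths(4, [(0, 1), (1, 2)])
```

The resulting distance matrix plays the role of a relative-position table: each attention head can then learn its own scalar weight per distance value, yielding the "differently connected views" of the graph mentioned above.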
Anthology ID:
2021.textgraphs-1.2
Volume:
Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15)
Month:
June
Year:
2021
Address:
Mexico City, Mexico
Venues:
NAACL | TextGraphs
Publisher:
Association for Computational Linguistics
Pages:
10–21
URL:
https://aclanthology.org/2021.textgraphs-1.2
DOI:
10.18653/v1/2021.textgraphs-1.2
Bibkey:
Cite (ACL):
Martin Schmitt, Leonardo F. R. Ribeiro, Philipp Dufter, Iryna Gurevych, and Hinrich Schütze. 2021. Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs. In Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15), pages 10–21, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs (Schmitt et al., TextGraphs 2021)
PDF:
https://aclanthology.org/2021.textgraphs-1.2.pdf
Data
AGENDA | DBpedia | WebNLG