%0 Conference Proceedings
%T Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs
%A Schmitt, Martin
%A Ribeiro, Leonardo F. R.
%A Dufter, Philipp
%A Gurevych, Iryna
%A Schütze, Hinrich
%S Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15)
%D 2021
%8 June
%I Association for Computational Linguistics
%C Mexico City, Mexico
%F schmitt-etal-2021-modeling
%X We present Graformer, a novel Transformer-based encoder-decoder architecture for graph-to-text generation. With our novel graph self-attention, the encoding of a node relies on all nodes in the input graph (not only direct neighbors), facilitating the detection of global patterns. We represent the relation between two nodes as the length of the shortest path between them. Graformer learns to weight these node-node relations differently for different attention heads, thus virtually learning differently connected views of the input graph. We evaluate Graformer on two popular graph-to-text generation benchmarks, AGENDA and WebNLG, where it achieves strong performance while using many fewer parameters than other approaches.
%R 10.18653/v1/2021.textgraphs-1.2
%U https://aclanthology.org/2021.textgraphs-1.2
%U https://doi.org/10.18653/v1/2021.textgraphs-1.2
%P 10-21