%0 Conference Proceedings
%T Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation
%A Zhang, Yan
%A Guo, Zhijiang
%A Teng, Zhiyang
%A Lu, Wei
%A Cohen, Shay B.
%A Liu, Zuozhu
%A Bing, Lidong
%Y Webber, Bonnie
%Y Cohn, Trevor
%Y He, Yulan
%Y Liu, Yang
%S Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
%D 2020
%8 November
%I Association for Computational Linguistics
%C Online
%F zhang-etal-2020-lightweight
%X AMR-to-text generation transduces Abstract Meaning Representation (AMR) structures into text. A key challenge in this task is to efficiently learn effective graph representations. Previously, Graph Convolutional Networks (GCNs) were used to encode input AMRs; however, vanilla GCNs follow a local (first-order) information aggregation scheme and therefore cannot capture non-local information. Addressing this requires larger and deeper GCN models to capture more complex interactions. In this paper, we introduce a dynamic fusion mechanism and propose Lightweight Dynamic Graph Convolutional Networks (LDGCNs), which capture richer non-local interactions by synthesizing higher-order information from the input graphs. We further develop two novel parameter-saving strategies, based on group graph convolutions and weight-tied convolutions, to reduce memory usage and model complexity. With these strategies, we can train models with fewer parameters while maintaining model capacity. Experiments demonstrate that LDGCNs outperform state-of-the-art models on two benchmark datasets for AMR-to-text generation with significantly fewer parameters.
%R 10.18653/v1/2020.emnlp-main.169
%U https://aclanthology.org/2020.emnlp-main.169
%U https://doi.org/10.18653/v1/2020.emnlp-main.169
%P 2162-2172
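
For readers skimming this record, a minimal NumPy sketch of the gated higher-order fusion idea the abstract describes follows. It is an illustration under stated assumptions, not the authors' released code: the names DynamicFusionLayer, W_gate, and K, and the sigmoid gating form, are all hypothetical. It shows only one of the two parameter-saving strategies, weight tying (a single W shared across all K neighborhood orders); the group graph convolution strategy (splitting feature channels into groups with smaller per-group weights) is omitted for brevity.

import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, a common GCN preprocessing step.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

class DynamicFusionLayer:
    # Illustrative sketch (hypothetical names, not the paper's implementation):
    # aggregates 1..K-hop neighborhoods with one shared weight matrix W
    # (weight tying across orders), then fuses the orders with
    # input-dependent sigmoid gates so non-local information can flow
    # through a single layer.
    def __init__(self, dim, K=3, seed=0):
        rng = np.random.default_rng(seed)
        self.K = K
        self.W = rng.normal(0.0, 0.1, (dim, dim))     # shared across all K orders
        self.W_gate = rng.normal(0.0, 0.1, (dim, K))  # one gate per order

    def __call__(self, A_norm, H):
        gates = 1.0 / (1.0 + np.exp(-(H @ self.W_gate)))  # (n, K), each in (0, 1)
        A_k, out = A_norm, np.zeros_like(H)
        for k in range(self.K):
            out += gates[:, k:k + 1] * (A_k @ H @ self.W)  # gated k-th-order term
            A_k = A_k @ A_norm                             # next adjacency power
        return np.maximum(out, 0.0)                        # ReLU

# Toy usage: a 4-node path graph with 8-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(1).normal(size=(4, 8))
layer = DynamicFusionLayer(dim=8, K=3)
print(layer(normalize_adj(A), H).shape)  # (4, 8)

Because W is reused for every order, the per-layer parameter count stays at that of a single first-order GCN layer (plus the small gating matrix), which is the kind of capacity-for-parameters trade-off the abstract claims.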