%0 Conference Proceedings
%T Compositional Generalization via Semantic Tagging
%A Zheng, Hao
%A Lapata, Mirella
%Y Moens, Marie-Francine
%Y Huang, Xuanjing
%Y Specia, Lucia
%Y Yih, Scott Wen-tau
%S Findings of the Association for Computational Linguistics: EMNLP 2021
%D 2021
%8 November
%I Association for Computational Linguistics
%C Punta Cana, Dominican Republic
%F zheng-lapata-2021-compositional-generalization
%X Although neural sequence-to-sequence models have been successfully applied to semantic parsing, they fail at compositional generalization, i.e., they are unable to systematically generalize to unseen compositions of seen components. Motivated by traditional semantic parsing where compositionality is explicitly accounted for by symbolic grammars, we propose a new decoding framework that preserves the expressivity and generality of sequence-to-sequence models while featuring lexicon-style alignments and disentangled information processing. Specifically, we decompose decoding into two phases where an input utterance is first tagged with semantic symbols representing the meaning of individual words, and then a sequence-to-sequence model is used to predict the final meaning representation conditioning on the utterance and the predicted tag sequence. Experimental results on three semantic parsing datasets show that the proposed approach consistently improves compositional generalization across model architectures, domains, and semantic formalisms.
%R 10.18653/v1/2021.findings-emnlp.88
%U https://aclanthology.org/2021.findings-emnlp.88
%U https://doi.org/10.18653/v1/2021.findings-emnlp.88
%P 1022-1032