%0 Conference Proceedings
%T An Auto-Encoder Matching Model for Learning Utterance-Level Semantic Dependency in Dialogue Generation
%A Luo, Liangchen
%A Xu, Jingjing
%A Lin, Junyang
%A Zeng, Qi
%A Sun, Xu
%Y Riloff, Ellen
%Y Chiang, David
%Y Hockenmaier, Julia
%Y Tsujii, Jun’ichi
%S Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
%D 2018
%8 oct nov
%I Association for Computational Linguistics
%C Brussels, Belgium
%F luo-etal-2018-auto
%X Generating semantically coherent responses is still a major challenge in dialogue generation. Different from conventional text generation tasks, the mapping between inputs and responses in conversations is more complicated, which highly demands the understanding of utterance-level semantic dependency, a relation between the whole meanings of inputs and outputs. To address this problem, we propose an Auto-Encoder Matching (AEM) model to learn such dependency. The model contains two auto-encoders and one mapping module. The auto-encoders learn the semantic representations of inputs and responses, and the mapping module learns to connect the utterance-level representations. Experimental results from automatic and human evaluations demonstrate that our model is capable of generating responses of high coherence and fluency compared to baseline models.
%R 10.18653/v1/D18-1075
%U https://aclanthology.org/D18-1075
%U https://doi.org/10.18653/v1/D18-1075
%P 702-707