Learning to Abstract for Memory-augmented Conversational Response Generation

Zhiliang Tian, Wei Bi, Xiaopeng Li, Nevin L. Zhang


Abstract
Neural generative models for open-domain chit-chat conversations have become an active area of research in recent years. A critical issue with most existing generative models is that the generated responses lack informativeness and diversity. A few researchers have attempted to leverage the results of retrieval models to strengthen generative models, but these approaches are limited by the quality of the retrieval results. In this work, we propose a memory-augmented generative model, which learns to abstract from the training corpus and saves useful information to a memory to assist response generation. Our model clusters query-response samples, extracts characteristics of each cluster, and learns to utilize these characteristics for response generation. Experimental results show that our model outperforms other competitive baselines.
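The pipeline the abstract describes (cluster query-response pairs, extract each cluster's characteristics into a memory, then look up that memory for a new query) can be illustrated with a toy sketch. Everything below is a hypothetical illustration, not the paper's actual model: the bag-of-words features, naive k-means clustering, and "top response words" characteristic are simplifying assumptions standing in for the learned neural components.

```python
# Toy sketch of the cluster-then-memorize idea (illustrative assumptions only;
# the paper uses learned neural representations, not bag-of-words k-means).
from collections import Counter
import math

def bow(text):
    # Bag-of-words vector as a Counter over lowercase tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_pairs(pairs, k, iters=10):
    # Naive k-means over query bag-of-words vectors (deterministic init:
    # the first k queries serve as initial centers).
    vecs = [bow(q) for q, _ in pairs]
    centers = [Counter(v) for v in vecs[:k]]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for i, v in enumerate(vecs):
            best = max(range(k), key=lambda c: cosine(v, centers[c]))
            groups[best].append(i)
        for c, idxs in enumerate(groups):
            if idxs:
                merged = Counter()
                for i in idxs:
                    merged.update(vecs[i])
                centers[c] = merged
    clusters = [[pairs[i] for i in g] for g in groups]
    return clusters, centers

def cluster_memory(cluster, top_n=3):
    # A cluster's "characteristic": its most frequent response words.
    merged = Counter()
    for _, response in cluster:
        merged.update(bow(response))
    return [w for w, _ in merged.most_common(top_n)]

def lookup_memory(query, centers, memories):
    # Map a new query to its nearest cluster; return that cluster's memory,
    # which a generator could then condition on.
    v = bow(query)
    best = max(range(len(centers)), key=lambda c: cosine(v, centers[c]))
    return memories[best]

pairs = [
    ("do you like pizza", "yes pizza is delicious food"),
    ("what food do you eat", "i eat pizza and pasta food"),
    ("is it raining today", "the weather today is rainy"),
    ("how is the weather", "the weather is sunny today"),
]
clusters, centers = cluster_pairs(pairs, k=2)
memories = [cluster_memory(c) for c in clusters]
mem = lookup_memory("what is your favorite food", centers, memories)
```

In the paper the analogous memory lookup supplies cluster-level characteristics to the decoder, so the response can draw on information shared across similar training samples rather than on a single retrieved result.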
Anthology ID:
P19-1371
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3816–3825
URL:
https://aclanthology.org/P19-1371
DOI:
10.18653/v1/P19-1371
Cite (ACL):
Zhiliang Tian, Wei Bi, Xiaopeng Li, and Nevin L. Zhang. 2019. Learning to Abstract for Memory-augmented Conversational Response Generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3816–3825, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Learning to Abstract for Memory-augmented Conversational Response Generation (Tian et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1371.pdf
Code:
tianzhiliang/MemoryAugDialog