Pretrained Language Models for Dialogue Generation with Multiple Input Sources

Yu Cao, Wei Bi, Meng Fang, Dacheng Tao

Abstract
Large-scale pretrained language models have achieved outstanding performance on natural language understanding tasks. However, it is still under investigating how to apply them to dialogue generation tasks, especially those with responses conditioned on multiple sources. Previous work simply concatenates all input sources or averages information from different input sources. In this work, we study dialogue models with multiple input sources adapted from the pretrained language model GPT2. We explore various methods to fuse multiple separate attention information corresponding to different sources. Our experimental results show that proper fusion methods deliver higher relevance with dialogue history than simple fusion baselines.
Anthology ID:
2020.findings-emnlp.81
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
909–917
URL:
https://aclanthology.org/2020.findings-emnlp.81
DOI:
10.18653/v1/2020.findings-emnlp.81
Cite (ACL):
Yu Cao, Wei Bi, Meng Fang, and Dacheng Tao. 2020. Pretrained Language Models for Dialogue Generation with Multiple Input Sources. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 909–917, Online. Association for Computational Linguistics.
Cite (Informal):
Pretrained Language Models for Dialogue Generation with Multiple Input Sources (Cao et al., Findings 2020)
PDF:
https://aclanthology.org/2020.findings-emnlp.81.pdf
Code:
caoyu-noob/Multi-GPT2