Yujie Xing
2022
Balancing Multi-Domain Corpora Learning for Open-Domain Response Generation
Yujie Xing | Jinglun Cai | Nils Barlaug | Peng Liu | Jon Atle Gulla
Findings of the Association for Computational Linguistics: NAACL 2022
Open-domain conversational systems are expected to generate equally good responses across multiple domains. Previous work has achieved good performance on a single corpus, but training and evaluating on multiple corpora from different domains remain less studied. This paper explores methods of generating relevant responses for each of multiple multi-domain corpora. We first examine interleaved learning, which intermingles multiple corpora, as the baseline. We then investigate two multi-domain learning methods, labeled learning and multi-task labeled learning, which encode each corpus through a unique corpus embedding. Furthermore, we propose Domain-specific Frequency (DF), a novel word-level importance weight that measures the relative importance of a word for a specific corpus compared to other corpora. Based on DF, we propose weighted learning, a method that integrates DF into the loss function. We also adopt DF as a new evaluation metric. Extensive experiments show that our methods gain significant improvements on both automatic and human evaluation. We share our code and data for reproducibility.
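The abstract describes DF as a word-level weight comparing a word's importance in one corpus against the others. As a rough illustration (the paper's exact formula may differ), one natural formulation is the word's relative frequency in a corpus divided by its average relative frequency across all corpora; the sketch below, with made-up corpus names, follows that assumption:

```python
from collections import Counter

def domain_specific_frequency(corpora):
    """Compute a DF-style weight for every word in every corpus.

    `corpora` maps a corpus name to a list of tokenized sentences.
    The weight of word w in corpus c is its relative frequency in c
    divided by its average relative frequency over all corpora --
    an illustrative formulation, not necessarily the paper's exact one.
    """
    counts, totals = {}, {}
    for name, sents in corpora.items():
        c = Counter(tok for sent in sents for tok in sent)
        counts[name] = c
        totals[name] = sum(c.values())

    vocab = set().union(*counts.values())
    df = {name: {} for name in corpora}
    for w in vocab:
        # Relative frequency of w in each corpus (Counter returns 0 if absent).
        rel = {n: counts[n][w] / totals[n] for n in corpora}
        avg = sum(rel.values()) / len(rel)
        for n in corpora:
            df[n][w] = rel[n] / avg if avg > 0 else 0.0
    return df

# Hypothetical two-domain toy data: "plot" is movie-specific, "the" is shared.
corpora = {
    "movies": [["the", "plot", "twist"], ["great", "plot"]],
    "sports": [["the", "match", "score"], ["great", "match"]],
}
weights = domain_specific_frequency(corpora)
# "plot" appears only in the movies corpus, so its movies-side weight is
# above 1, while a shared word like "the" gets weight 1 in both corpora.
```

A weight like this could then scale each token's contribution to the training loss (the paper's weighted learning) or score how domain-relevant a generated response is (the DF evaluation metric).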
2018
Automatic Evaluation of Neural Personality-based Chatbots
Yujie Xing | Raquel Fernández
Proceedings of the 11th International Conference on Natural Language Generation
Stylistic variation is critical to render the utterances generated by conversational agents natural and engaging. In this paper, we focus on sequence-to-sequence models for open-domain dialogue response generation and propose a new method to evaluate the extent to which such models are able to generate responses that reflect different personality traits.