Jia Cheng
2024
DiffusionDialog: A Diffusion Model for Diverse Dialog Generation with Latent Space
Jianxiang Xiang | Zhenhua Liu | Haodong Liu | Yin Bai | Jia Cheng | Wenliang Chen
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
In real-life conversations, content is diverse, and a given context can map to many valid responses, requiring diverse generation (the one-to-many problem). Previous studies attempted to address the one-to-many problem by introducing discrete or Gaussian-based latent variables, but the resulting diversity is limited. Recently, diffusion models have made breakthroughs in computer vision, and some attempts have been made to apply them in natural language processing. In this paper, we propose DiffusionDialog, a novel approach that enhances the diversity of dialogue generation with the help of a diffusion model. In our approach, we introduce continuous latent variables through the diffusion model instead of the discrete latent variables or VAEs often used in previous studies. The difficulty of using latent variables in the dialog task lies in building an effective prior over the latent space and an inference process that infers a proper latent given the context. By combining an encoder with a latent-based diffusion model, we encode the prior of the response latent in a continuous space, instead of the fixed Gaussian distribution of a VAE or a simply discrete prior, and we infer the latent by denoising step by step with the diffusion model. The experimental results show that our model greatly enhances the diversity of dialog responses while maintaining coherence. In further analysis, we find that our diffusion model achieves high inference efficiency, which is the main challenge in applying diffusion models to natural language processing.
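The abstract describes sampling a continuous response latent by iterative denoising conditioned on the encoded dialog context. The following is a minimal, hypothetical sketch of a DDPM-style reverse process over such a latent; the `denoiser` network, shapes, and noise schedule are illustrative assumptions, not the paper's actual implementation.

```python
import torch

@torch.no_grad()
def sample_response_latent(denoiser, context_emb, steps=50, dim=64):
    """Sketch: denoise a dialog-response latent step by step (assumed API).

    `denoiser(z, t, context_emb)` is assumed to predict the noise added
    to the latent at step t, conditioned on the encoded context.
    """
    betas = torch.linspace(1e-4, 0.02, steps)      # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    z = torch.randn(1, dim)                        # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = denoiser(z, torch.tensor([t]), context_emb)   # predicted noise
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        z = (z - coef * eps) / torch.sqrt(alphas[t])        # posterior mean
        if t > 0:                                  # add noise except at last step
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)
    return z   # denoised latent; a decoder conditions on it to generate the reply
```

Different noise seeds yield different denoised latents for the same context, which is where the diversity of generated responses would come from under this scheme.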
MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking
Tianwen Tang | Tong Zhu | Haodong Liu | Yin Bai | Jia Cheng | Wenliang Chen
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Zero-shot dialogue state tracking (DST) transfers knowledge to unseen domains, reducing the cost of annotating new datasets. Previous zero-shot DST models mainly suffer from domain transfer and partial prediction problems. To address these challenges, we propose Mixture of Prefix Experts (MoPE), which establishes connections between similar slots in different domains, strengthening transfer performance in unseen domains. Empirical results demonstrate that MoPE-DST achieves a joint goal accuracy of 57.13% on MultiWOZ2.1 and 55.4%.
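As a rough illustration of the mixture-of-prefix-experts idea described above, here is a hypothetical PyTorch sketch that routes each slot to the most similar prefix expert so that similar slots in different domains share a prefix. The routing by cosine similarity to learnable centroids, and all names and shapes, are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

class MixtureOfPrefixExperts(torch.nn.Module):
    """Sketch: route a slot to a shared prefix expert (assumed design)."""

    def __init__(self, num_experts=8, prefix_len=10, hidden=768):
        super().__init__()
        # one learnable prefix (prepended soft-prompt embeddings) per expert
        self.prefixes = torch.nn.Parameter(
            torch.randn(num_experts, prefix_len, hidden))
        # expert centroids used to match slot descriptions to experts
        self.centroids = torch.nn.Parameter(torch.randn(num_experts, hidden))

    def forward(self, slot_emb):
        # slot_emb: (hidden,) embedding of the slot description
        sims = F.cosine_similarity(slot_emb.unsqueeze(0), self.centroids, dim=-1)
        expert = sims.argmax()        # similar slots pick the same expert,
        return self.prefixes[expert]  # even when they come from unseen domains
```

Because routing depends only on the slot embedding, an unseen domain's slot can reuse the prefix learned from similar slots in training domains, which is the intuition behind the transfer gains claimed in the abstract.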