Mitigating Contradictions in Dialogue Based on Contrastive Learning

Weizhao Li, Junsheng Kong, Ben Liao, Yi Cai


Abstract
Chatbot models have achieved remarkable progress in recent years but still tend to produce contradictory responses. In this paper, we exploit contrastive learning to mitigate this issue. To endow the model with the ability to discriminate contradictory patterns, we minimize the similarity between the target response and a contradiction-related negative example. The negative example is generated with learnable latent noise, which receives contradiction-related feedback from a pretrained critic. Experimental results show that our method helps avoid contradictions in response generation while preserving response fluency, outperforming existing methods on both automatic and human evaluation.
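The core idea in the abstract — pulling the target response toward the dialogue context while pushing a noise-perturbed, contradiction-related negative away — can be sketched as a margin-based contrastive loss. The toy embeddings, margin value, and additive perturbation below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negative, margin=0.5):
    # Encourage sim(anchor, positive) to exceed sim(anchor, negative)
    # by at least `margin`; zero loss once the gap is large enough.
    return max(0.0,
               margin
               - cosine_sim(anchor, positive)
               + cosine_sim(anchor, negative))

# Hypothetical embeddings: the negative is the target response
# perturbed by a (here fixed, in the paper learnable) latent noise vector.
context  = [1.0, 0.0, 0.5]
target   = [0.9, 0.1, 0.4]
noise    = [-1.5, 1.0, 0.2]
negative = [t + n for t, n in zip(target, noise)]

print(contrastive_loss(context, target, negative))
```

In the paper the noise is learnable and trained against feedback from a pretrained critic that detects contradictions; here it is a fixed vector purely to make the loss computation concrete.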
Anthology ID:
2022.findings-acl.219
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venues:
ACL | Findings
Publisher:
Association for Computational Linguistics
Pages:
2781–2788
URL:
https://aclanthology.org/2022.findings-acl.219
DOI:
10.18653/v1/2022.findings-acl.219
Cite (ACL):
Weizhao Li, Junsheng Kong, Ben Liao, and Yi Cai. 2022. Mitigating Contradictions in Dialogue Based on Contrastive Learning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2781–2788, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Mitigating Contradictions in Dialogue Based on Contrastive Learning (Li et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-acl.219.pdf
Software:
2022.findings-acl.219.software.zip