DialAug: Mixing up Dialogue Contexts in Contrastive Learning for Robust Conversational Modeling

Lahari Poddar, Peiyao Wang, Julia Reinspach


Abstract
Retrieval-based conversational systems learn to rank response candidates for a given dialogue context by computing the similarity between their vector representations. However, training on a single textual form of the multi-turn context limits the ability of a model to learn representations that generalize to natural perturbations seen during inference. In this paper we propose a framework that incorporates augmented versions of a dialogue context into the learning objective. We utilize contrastive learning as an auxiliary objective to learn robust dialogue context representations that are invariant to perturbations injected through the augmentation method. We experiment with four benchmark dialogue datasets and demonstrate that our framework combines well with existing augmentation methods and can significantly improve over baseline BERT-based ranking architectures. Furthermore, we propose a novel data augmentation method, ConMix, that adds token level perturbations through stochastic mixing of tokens from other contexts in the batch. We show that our proposed augmentation method outperforms previous data augmentation approaches, and provides dialogue representations that are more robust to common perturbations seen during inference.
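The core ConMix idea described in the abstract, stochastically replacing tokens in a dialogue context with tokens drawn from another context in the same batch, can be sketched as follows. The function name, the mixing probability, and the partner-sampling strategy here are illustrative assumptions, not the paper's exact implementation.

```python
import random

def conmix(batch, mix_prob=0.3, seed=0):
    """Sketch of ConMix-style augmentation: for each tokenized context,
    replace each token with probability `mix_prob` by a token sampled
    from another context in the same batch."""
    rng = random.Random(seed)
    augmented = []
    for i, tokens in enumerate(batch):
        # Pick a partner context different from the current one.
        partner = batch[rng.choice([j for j in range(len(batch)) if j != i])]
        mixed = [
            rng.choice(partner) if rng.random() < mix_prob else tok
            for tok in tokens
        ]
        augmented.append(mixed)
    return augmented

batch = [
    "how do i reset my password".split(),
    "my order has not arrived yet".split(),
]
print(conmix(batch))
```

In the paper's framework, the augmented context serves as a positive view of the original in an auxiliary contrastive objective, so the encoder learns representations invariant to such token-level perturbations.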
Anthology ID:
2022.coling-1.35
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
441–450
URL:
https://aclanthology.org/2022.coling-1.35
Cite (ACL):
Lahari Poddar, Peiyao Wang, and Julia Reinspach. 2022. DialAug: Mixing up Dialogue Contexts in Contrastive Learning for Robust Conversational Modeling. In Proceedings of the 29th International Conference on Computational Linguistics, pages 441–450, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
DialAug: Mixing up Dialogue Contexts in Contrastive Learning for Robust Conversational Modeling (Poddar et al., COLING 2022)
PDF:
https://aclanthology.org/2022.coling-1.35.pdf
Data
DSTC7 Task 1