DialogConv: A Lightweight Fully Convolutional Network for Multi-view Response Selection

Yongkang Liu, Shi Feng, Wei Gao, Daling Wang, Yifei Zhang


Abstract
Current end-to-end retrieval-based dialogue systems are mainly based on Recurrent Neural Networks or Transformers with attention mechanisms. Although promising results have been achieved, these models often suffer from slow inference or a huge number of parameters. In this paper, we propose a novel lightweight fully convolutional architecture, called DialogConv, for response selection. DialogConv is built exclusively on top of convolution to extract matching features of context and response. Dialogues are modeled in 3D views, where DialogConv performs convolution operations on the embedding view, word view and utterance view to capture richer semantic information from multiple contextual views. On four benchmark datasets, compared with state-of-the-art baselines, DialogConv is on average about 8.5x smaller in size, and 79.39x and 10.64x faster on CPU and GPU devices, respectively. At the same time, DialogConv achieves competitive response selection effectiveness.
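The abstract describes convolving over a dialogue represented as a 3D tensor along three different views (embedding, word, and utterance). Below is a minimal numpy sketch of that multi-view idea, not the paper's implementation: the kernel, tensor sizes, and the helper `conv1d_along_axis` are all illustrative assumptions.

```python
import numpy as np

def conv1d_along_axis(x, kernel, axis):
    """Valid-mode 1D convolution of `kernel` along one axis of `x` (illustrative helper)."""
    x = np.moveaxis(x, axis, -1)
    k = len(kernel)
    out = np.stack(
        [np.tensordot(x[..., i:i + k], kernel, axes=([-1], [0]))
         for i in range(x.shape[-1] - k + 1)],
        axis=-1,
    )
    return np.moveaxis(out, -1, axis)

# Toy dialogue tensor: (num_utterances, num_words, embedding_dim)
rng = np.random.default_rng(0)
dialogue = rng.standard_normal((4, 6, 8))
kernel = np.array([0.25, 0.5, 0.25])

# Convolutions along each of the three views
utterance_view = conv1d_along_axis(dialogue, kernel, axis=0)  # shape (2, 6, 8)
word_view = conv1d_along_axis(dialogue, kernel, axis=1)       # shape (4, 4, 8)
embedding_view = conv1d_along_axis(dialogue, kernel, axis=2)  # shape (4, 6, 6)
```

Each view slides the same small kernel along a different axis of the dialogue tensor, so matching features can be extracted across utterances, across words within an utterance, and across embedding dimensions.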
Anthology ID:
2022.emnlp-main.828
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
12086–12098
URL:
https://aclanthology.org/2022.emnlp-main.828
DOI:
10.18653/v1/2022.emnlp-main.828
Cite (ACL):
Yongkang Liu, Shi Feng, Wei Gao, Daling Wang, and Yifei Zhang. 2022. DialogConv: A Lightweight Fully Convolutional Network for Multi-view Response Selection. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 12086–12098, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
DialogConv: A Lightweight Fully Convolutional Network for Multi-view Response Selection (Liu et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.828.pdf