How to Represent Context Better? An Empirical Study on Context Modeling for Multi-turn Response Selection

Jiazhan Feng, Chongyang Tao, Chang Liu, Rui Yan, Dongyan Zhao


Abstract
Building retrieval-based dialogue models that can select appropriate responses based on an understanding of multi-turn context is a challenging problem. Early models usually concatenate all utterances or encode each dialogue turn independently, which may lead to an inadequate understanding of the dialogue state. Although a few researchers have noted the importance of context modeling in multi-turn response selection, there is no systematic comparison of how to model context effectively, nor a framework that unifies those methods. In this paper, instead of designing new architectures, we investigate how to improve existing models with better context modeling. Specifically, we heuristically summarize three categories of turn-aware context modeling strategies, which model the context from the perspectives of sequential relationships, local relationships, and a query-aware manner, respectively. A Turn-Aware Context Modeling (TACM) layer is explored to flexibly adapt and unify these strategies within several advanced response selection models. Evaluation results on three public datasets indicate that employing any individual context modeling strategy, or multiple strategies together, consistently improves the performance of existing models.
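To make the three strategy families concrete, below is a minimal PyTorch sketch of one plausible shape for such a turn-aware layer, operating on per-turn utterance vectors. The specific operators (a GRU for the sequential view, a 1-D convolution over adjacent turns for the local view, and dot-product attention with the final utterance as query for the query-aware view) and all names (TACMLayer, local_window, fuse) are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TACMLayer(nn.Module):
    """Sketch of a turn-aware context modeling layer (hypothetical).

    Input: per-turn utterance vectors of shape (batch, num_turns, hidden).
    Stand-in operators for the three strategy families:
      * sequential  -> a GRU over the turn axis
      * local       -> a 1-D convolution over adjacent turns
      * query-aware -> dot-product attention with the last turn as query
    """

    def __init__(self, hidden: int, local_window: int = 3):
        super().__init__()
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.conv = nn.Conv1d(hidden, hidden, kernel_size=local_window,
                              padding=local_window // 2)
        self.fuse = nn.Linear(3 * hidden, hidden)

    def forward(self, turns: torch.Tensor) -> torch.Tensor:
        # Sequential relationship: contextualize turns in order.
        seq, _ = self.gru(turns)                                # (B, T, H)

        # Local relationship: mix each turn with its neighbors.
        loc = self.conv(turns.transpose(1, 2)).transpose(1, 2)  # (B, T, H)

        # Query-aware: reweight turns by relevance to the last utterance.
        query = turns[:, -1:, :]                                # (B, 1, H)
        scores = torch.matmul(query, turns.transpose(1, 2))     # (B, 1, T)
        weights = F.softmax(scores / turns.size(-1) ** 0.5, dim=-1)
        qry = weights.transpose(1, 2) * turns                   # (B, T, H)

        # Fuse the three turn-aware views into one context representation.
        return self.fuse(torch.cat([seq, loc, qry], dim=-1))    # (B, T, H)

# Usage: a batch of 2 dialogues, 5 turns each, 256-dim turn vectors.
# out = TACMLayer(hidden=256)(torch.randn(2, 5, 256))  # -> (2, 5, 256)

Because the layer maps (batch, turns, hidden) back to the same shape, any of the three views can also be used alone or swapped in per the paper's per-strategy and combined-strategy comparisons.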
Anthology ID:
2022.findings-emnlp.539
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7285–7298
URL:
https://aclanthology.org/2022.findings-emnlp.539
DOI:
10.18653/v1/2022.findings-emnlp.539
Cite (ACL):
Jiazhan Feng, Chongyang Tao, Chang Liu, Rui Yan, and Dongyan Zhao. 2022. How to Represent Context Better? An Empirical Study on Context Modeling for Multi-turn Response Selection. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 7285–7298, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
How to Represent Context Better? An Empirical Study on Context Modeling for Multi-turn Response Selection (Feng et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.539.pdf