Dialogue State Tracking with Incremental Reasoning

Lizi Liao, Le Hong Long, Yunshan Ma, Wenqiang Lei, Tat-Seng Chua


Abstract
Tracking dialogue states to better interpret user goals and feed downstream policy learning is a bottleneck in dialogue management. Common practice has been to treat it either as a problem of classifying dialogue content into a set of pre-defined slot-value pairs, or as one of generating values for different slots given the dialogue history. Both approaches have limitations in capturing the dependencies that occur within dialogues and lack reasoning capabilities. This paper proposes to track dialogue states incrementally, reasoning over dialogue turns with the help of back-end data. Empirical results demonstrate that our method outperforms state-of-the-art methods in joint belief accuracy on MultiWOZ 2.1, a large-scale, multi-domain human–human dialogue dataset.
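To make the task formulation in the abstract concrete, below is a minimal, hypothetical Python sketch of dialogue state tracking as an incremental, turn-by-turn update of slot-value pairs, together with the standard joint accuracy metric mentioned above (a turn is correct only if the full predicted state matches the gold state). This is not the paper's model; names such as predict_turn_update are placeholders.

```python
# Hypothetical sketch of the general DST setup: the belief state is a set of
# slot-value pairs accumulated incrementally over dialogue turns.
from typing import Dict, List

DialogueState = Dict[str, str]  # e.g. {"restaurant-food": "italian"}


def predict_turn_update(system_utt: str, user_utt: str,
                        prev_state: DialogueState) -> DialogueState:
    """Placeholder for a model that reads one turn (plus the previous state)
    and returns only the slots whose values changed in this turn."""
    raise NotImplementedError  # the actual predictor is model-specific


def track_dialogue(turns: List[Dict[str, str]]) -> List[DialogueState]:
    """Incrementally accumulate the belief state, turn by turn."""
    state: DialogueState = {}
    states_per_turn: List[DialogueState] = []
    for turn in turns:
        update = predict_turn_update(turn["system"], turn["user"], state)
        state = {**state, **update}  # carry over old slots, apply new values
        states_per_turn.append(dict(state))
    return states_per_turn


def joint_accuracy(pred: List[DialogueState], gold: List[DialogueState]) -> float:
    """Joint (belief) accuracy: a turn counts as correct only if every
    slot-value pair in the predicted state exactly matches the gold state."""
    correct = sum(p == g for p, g in zip(pred, gold))
    return correct / max(len(gold), 1)
```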
Anthology ID:
2021.tacl-1.34
Volume:
Transactions of the Association for Computational Linguistics, Volume 9
Year:
2021
Address:
Cambridge, MA
Editors:
Brian Roark, Ani Nenkova
Venue:
TACL
Publisher:
MIT Press
Pages:
557–569
URL:
https://aclanthology.org/2021.tacl-1.34
DOI:
10.1162/tacl_a_00384
Cite (ACL):
Lizi Liao, Le Hong Long, Yunshan Ma, Wenqiang Lei, and Tat-Seng Chua. 2021. Dialogue State Tracking with Incremental Reasoning. Transactions of the Association for Computational Linguistics, 9:557–569.
Cite (Informal):
Dialogue State Tracking with Incremental Reasoning (Liao et al., TACL 2021)
PDF:
https://aclanthology.org/2021.tacl-1.34.pdf