MoNET: Tackle State Momentum via Noise-Enhanced Training for Dialogue State Tracking

Haoning Zhang, Junwei Bao, Haipeng Sun, Youzheng Wu, Wenye Li, Shuguang Cui, Xiaodong He


Abstract
Dialogue state tracking (DST) aims to convert the dialogue history into dialogue states, which consist of slot-value pairs. As condensed structural information that memorizes all history information, the dialogue state from the previous turn is typically adopted by DST models as the input for predicting the current state. However, these models tend to keep the predicted slot values unchanged, which is defined as state momentum in this paper. Specifically, the models struggle to update slot values that need to be changed and to correct slot values wrongly predicted in the previous turn. To this end, we propose MoNET to tackle state momentum via noise-enhanced training. First, the previous state of each turn in the training data is noised by replacing some of its slot values. Then, the noised previous state is used as the input to learn to predict the current state, improving the model's ability to update and correct slot values. Furthermore, a contrastive context-matching framework is designed to narrow the representation distance between a state and its corresponding noised variant, which reduces the impact of the noised state and helps the model better understand the dialogue history. Experimental results on MultiWOZ datasets show that MoNET outperforms previous DST methods. Ablations and analysis verify the effectiveness of MoNET in alleviating state momentum issues and improving anti-noise ability.
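The abstract describes two mechanisms: noising a previous dialogue state by replacing some of its slot values, and a contrastive context-matching objective that pulls a state's representation toward that of its noised variant. The sketch below illustrates both ideas in PyTorch under stated assumptions; the function names, the replacement probability p_replace, the candidate-value pools, and the in-batch InfoNCE-style loss form are all illustrative choices, not the paper's exact implementation.

```python
# Minimal sketch of state noising and a contrastive context-matching loss.
# All names and hyperparameters here are illustrative assumptions; the
# paper's actual noising scheme and contrastive objective may differ.
import random
import torch
import torch.nn.functional as F

def noise_state(state, candidate_values, p_replace=0.3):
    """Return a copy of a dialogue state (slot -> value dict) where each
    slot value is replaced, with probability p_replace, by a different
    value sampled from that slot's candidate pool."""
    noised = dict(state)
    for slot, value in state.items():
        if random.random() < p_replace:
            alternatives = [v for v in candidate_values.get(slot, []) if v != value]
            if alternatives:
                noised[slot] = random.choice(alternatives)
    return noised

def context_matching_loss(h_clean, h_noised, temperature=0.1):
    """In-batch contrastive loss: pull each context representation built
    from the clean previous state toward the one built from its noised
    variant, pushing apart mismatched pairs in the batch.
    h_clean, h_noised: (batch, dim) encoder outputs."""
    z1 = F.normalize(h_clean, dim=-1)
    z2 = F.normalize(h_noised, dim=-1)
    logits = z1 @ z2.t() / temperature                     # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=logits.device)  # matches on the diagonal
    return F.cross_entropy(logits, targets)

# Example of the noising step on a MultiWOZ-style state:
state = {"hotel-area": "centre", "hotel-stars": "4"}
pool = {"hotel-area": ["centre", "north", "south"], "hotel-stars": ["3", "4", "5"]}
print(noise_state(state, pool))  # e.g. {'hotel-area': 'north', 'hotel-stars': '4'}
```

Training on such noised previous states forces the model to actually read the dialogue history rather than copy the prior state forward, while the matching loss keeps the noised input from distorting the learned context representation.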
Anthology ID:
2023.findings-acl.33
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
520–534
URL:
https://aclanthology.org/2023.findings-acl.33
DOI:
10.18653/v1/2023.findings-acl.33
Cite (ACL):
Haoning Zhang, Junwei Bao, Haipeng Sun, Youzheng Wu, Wenye Li, Shuguang Cui, and Xiaodong He. 2023. MoNET: Tackle State Momentum via Noise-Enhanced Training for Dialogue State Tracking. In Findings of the Association for Computational Linguistics: ACL 2023, pages 520–534, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
MoNET: Tackle State Momentum via Noise-Enhanced Training for Dialogue State Tracking (Zhang et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.33.pdf
Video:
https://aclanthology.org/2023.findings-acl.33.mp4