Haofen Wang
2024
Rewarding What Matters: Step-by-Step Reinforcement Learning for Task-Oriented Dialogue
Huifang Du | Shuqin Li | Minghao Wu | Xuejing Feng | Yuan-Fang Li | Haofen Wang
Findings of the Association for Computational Linguistics: EMNLP 2024
Reinforcement learning (RL) is a powerful approach to enhancing task-oriented dialogue (TOD) systems. However, existing RL methods tend to focus mainly on generation tasks, such as dialogue policy learning (DPL) or response generation (RG), while neglecting dialogue state tracking (DST) for understanding. This narrow focus prevents systems from achieving globally optimal performance, as it overlooks the interdependence between understanding and generation. Additionally, RL methods face challenges with sparse and delayed rewards, which complicate training and optimization. To address these issues, we extend RL to both understanding and generation tasks by introducing step-by-step rewards throughout token generation. The understanding reward increases as more slots are correctly filled in DST, while the generation reward grows with the accurate inclusion of user requests. Our approach provides balanced optimization aligned with task completion. Experimental results demonstrate that our approach effectively enhances the performance of TOD systems and achieves new state-of-the-art results on three widely used datasets: MultiWOZ2.0, MultiWOZ2.1, and In-Car. Our approach also shows superior few-shot ability in low-resource settings compared to current models.
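The reward scheme described in the abstract can be illustrated with a minimal sketch: the understanding reward grows with the fraction of correctly filled DST slots, and the generation reward grows as user-requested values appear in the response. The function names, signatures, and reward formulas below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of step-by-step rewards for TOD (illustrative only).

def understanding_reward(predicted_slots: dict, gold_slots: dict) -> float:
    """Fraction of gold dialogue-state slots filled with the correct value."""
    if not gold_slots:
        return 1.0
    correct = sum(1 for k, v in gold_slots.items()
                  if predicted_slots.get(k) == v)
    return correct / len(gold_slots)

def generation_reward(response_tokens: list, requested_values: list) -> float:
    """Fraction of user-requested values included in the generated response."""
    if not requested_values:
        return 1.0
    included = sum(1 for v in requested_values if v in response_tokens)
    return included / len(requested_values)
```

As generation proceeds token by token, such partial rewards increase each time another slot or requested value is covered, giving denser feedback than a single end-of-dialogue reward.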
2015
The GuanXi network: a new multilingual LLOD for Language Learning applications
Ismail El Maarouf | Hatem Mousselly-Sergieh | Eugene Alferov | Haofen Wang | Zhijia Fang | Doug Cooper
Proceedings of the Second Workshop on Natural Language Processing and Linked Open Data