%0 Conference Proceedings
%T Imperfect also Deserves Reward: Multi-Level and Sequential Reward Modeling for Better Dialog Management
%A Hou, Zhengxu
%A Liu, Bang
%A Zhao, Ruihui
%A Ou, Zijing
%A Liu, Yafei
%A Chen, Xi
%A Zheng, Yefeng
%Y Toutanova, Kristina
%Y Rumshisky, Anna
%Y Zettlemoyer, Luke
%Y Hakkani-Tur, Dilek
%Y Beltagy, Iz
%Y Bethard, Steven
%Y Cotterell, Ryan
%Y Chakraborty, Tanmoy
%Y Zhou, Yichao
%S Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
%D 2021
%8 June
%I Association for Computational Linguistics
%C Online
%F hou-etal-2021-imperfect
%X For task-oriented dialog systems, training a Reinforcement Learning (RL) based Dialog Management module suffers from low sample efficiency and slow convergence speed due to the sparse rewards in RL. To solve this problem, many strategies have been proposed to give proper rewards when training RL, but their rewards lack interpretability and cannot accurately estimate the distribution of state-action pairs in real dialogs. In this paper, we propose a multi-level reward modeling approach that factorizes a reward into a three-level hierarchy: domain, act, and slot. Based on inverse adversarial reinforcement learning, our designed reward model can provide more accurate and explainable reward signals for state-action pairs. Extensive evaluations show that our approach can be applied to a wide range of reinforcement learning-based dialog systems and significantly improves both the performance and the speed of convergence.
%R 10.18653/v1/2021.naacl-main.238
%U https://aclanthology.org/2021.naacl-main.238
%U https://doi.org/10.18653/v1/2021.naacl-main.238
%P 2993-3001