Generalization in Text-based Games via Hierarchical Reinforcement Learning

Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, Chengqi Zhang


Abstract
Deep reinforcement learning provides a promising approach to studying natural language communication between humans and artificial agents through text-based games. However, generalization remains a major challenge, as agents depend critically on the complexity and variety of their training tasks. In this paper, we address this problem by introducing a hierarchical framework built upon a knowledge graph (KG)-based RL agent. At the high level, a meta-policy decomposes the whole game into a set of subtasks specified by textual goals and selects one of them based on the KG. At the low level, a sub-policy then performs goal-conditioned reinforcement learning. We conduct experiments on games at various difficulty levels and show that the proposed method enjoys favorable generalizability.
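The two-level control loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy environment, the `meta_policy` and `sub_policy` functions, and the KG dictionary format are all hypothetical stand-ins for the learned components and the real KG.

```python
import random

random.seed(0)

class ToyTextEnv:
    """Hypothetical stand-in for a text-based game environment."""
    def __init__(self):
        self.kg = {"objects": ["key", "lamp"]}  # toy knowledge graph
        self.steps = 0

    def reset(self):
        self.steps = 0
        return "You are in a room.", self.kg

    def step(self, action):
        self.steps += 1
        done = self.steps >= 3  # pretend the subtask completes after 3 steps
        return f"You {action}.", self.kg, done

def meta_policy(kg):
    """High level: decompose the game into textual subtask goals
    derived from the KG, and select one of them."""
    subtasks = [f"acquire {obj}" for obj in kg.get("objects", [])] or ["explore"]
    return random.choice(subtasks)

def sub_policy(obs, goal):
    """Low level: goal-conditioned policy mapping (observation, goal)
    to a text action."""
    return f"move toward '{goal}'"

env = ToyTextEnv()
obs, kg = env.reset()
goal = meta_policy(kg)          # meta-policy picks a subtask from the KG
done = False
trace = []
while not done:
    action = sub_policy(obs, goal)  # sub-policy acts toward that goal
    obs, kg, done = env.step(action)
    trace.append(action)
```

In the paper, both levels are learned RL policies and the KG is built from game observations; here the subtask selection is random and the environment terminates each subtask after a fixed number of steps, purely to show the control flow.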
Anthology ID:
2021.findings-emnlp.116
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1343–1353
URL:
https://aclanthology.org/2021.findings-emnlp.116
DOI:
10.18653/v1/2021.findings-emnlp.116
Cite (ACL):
Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, and Chengqi Zhang. 2021. Generalization in Text-based Games via Hierarchical Reinforcement Learning. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1343–1353, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Generalization in Text-based Games via Hierarchical Reinforcement Learning (Xu et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.116.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.116.mp4
Code:
yunqiuxu/h-kga