Reader: Model-based language-instructed reinforcement learning

Nicola Dainese, Pekka Marttinen, Alexander Ilin


Abstract
We explore how to build accurate world models that are partially specified by language, and how to plan with them in the face of novelty and uncertainty. We propose the first model-based reinforcement learning approach for the environment Read To Fight Monsters (RTFM; Zhong et al., 2019), a grounded policy learning problem. In RTFM an agent must reason jointly over a set of rules and a goal, both described in a language manual, and its observations, while accounting for the uncertainty arising from the stochasticity of the environment, in order to successfully generalize its policy to test episodes. We demonstrate the superior performance and sample efficiency of our model-based approach compared to the existing model-free SOTA agents on eight variants of RTFM. Furthermore, we show how the agent's plans can be inspected, which represents progress towards more interpretable agents.
Anthology ID:
2023.emnlp-main.1032
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
16583–16599
URL:
https://aclanthology.org/2023.emnlp-main.1032
DOI:
10.18653/v1/2023.emnlp-main.1032
Cite (ACL):
Nicola Dainese, Pekka Marttinen, and Alexander Ilin. 2023. Reader: Model-based language-instructed reinforcement learning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16583–16599, Singapore. Association for Computational Linguistics.
Cite (Informal):
Reader: Model-based language-instructed reinforcement learning (Dainese et al., EMNLP 2023)
PDF:
https://aclanthology.org/2023.emnlp-main.1032.pdf
Video:
https://aclanthology.org/2023.emnlp-main.1032.mp4