Production-based Cognitive Models as a Test Suite for Reinforcement Learning Algorithms
Adrian Brasoveanu and Jakub Dotlačil
November 2020
In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, edited by Emmanuele Chersoni, Cassandra Jacobs, Yohei Oseki, Laurent Prévot, and Enrico Santus
Association for Computational Linguistics, Online
Abstract: We introduce a framework in which production-rule-based computational cognitive modeling and Reinforcement Learning (RL) can systematically interact and inform each other. We focus on linguistic applications because the sophisticated rule-based cognitive models needed to capture linguistic behavioral data promise to provide a stringent test suite for RL algorithms, connecting them to both accuracy and reaction-time experimental data. We thus open a path toward assembling an experimentally rigorous and cognitively realistic benchmark for RL algorithms. We extend our previous work on lexical decision tasks and tabular RL algorithms (Brasoveanu and Dotlačil, 2020b) with a discussion of neural-network-based approaches and of how parsing can be formalized as an RL problem.
Anthology ID: brasoveanu-dotlacil-2020-production
DOI: 10.18653/v1/2020.cmcl-1.3
URL: https://aclanthology.org/2020.cmcl-1.3
Pages: 28–37