%0 Conference Proceedings
%T Comparing Character-level Neural Language Models Using a Lexical Decision Task
%A Le Godais, Gaël
%A Linzen, Tal
%A Dupoux, Emmanuel
%Y Lapata, Mirella
%Y Blunsom, Phil
%Y Koller, Alexander
%S Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
%D 2017
%8 April
%I Association for Computational Linguistics
%C Valencia, Spain
%F le-godais-etal-2017-comparing
%X What is the information captured by neural network models of language? We address this question in the case of character-level recurrent neural language models. These models do not have explicit word representations; do they acquire implicit ones? We assess the lexical capacity of a network using the lexical decision task common in psycholinguistics: the system is required to decide whether or not a string of characters forms a word. We explore how accuracy on this task is affected by the architecture of the network, focusing on cell type (LSTM vs. SRN), depth and width. We also compare these architectural properties to a simple count of the parameters of the network. The overall number of parameters in the network turns out to be the most important predictor of accuracy; in particular, there is little evidence that deeper networks are beneficial for this task.
%U https://aclanthology.org/E17-2020
%P 125-130