%0 Conference Proceedings
%T Like a Baby: Visually Situated Neural Language Acquisition
%A Ororbia, Alexander
%A Mali, Ankur
%A Kelly, Matthew
%A Reitter, David
%Y Korhonen, Anna
%Y Traum, David
%Y Màrquez, Lluís
%S Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
%D 2019
%8 July
%I Association for Computational Linguistics
%C Florence, Italy
%F ororbia-etal-2019-like
%X We examine the benefits of visual context in training neural language models to perform next-word prediction. A multi-modal neural architecture is introduced that outperforms its equivalent trained on language alone, with a 2% decrease in perplexity, even when no visual context is available at test time. Fine-tuning the embeddings of a pre-trained state-of-the-art bidirectional language model (BERT) in the language modeling framework yields a 3.5% improvement. The advantage of training with visual context when testing without it is robust across different languages (English, German, and Spanish) and different models (GRU, LSTM, Delta-RNN, as well as those that use BERT embeddings). Thus, language models perform better when they learn like a baby, i.e., in a multi-modal environment. This finding is compatible with the theory of situated cognition: language is inseparable from its physical context.
%R 10.18653/v1/P19-1506
%U https://aclanthology.org/P19-1506
%U https://doi.org/10.18653/v1/P19-1506
%P 5127-5136