%0 Conference Proceedings %T Evaluating Textual Representations through Image Generation %A Spinks, Graham %A Moens, Marie-Francine %Y Linzen, Tal %Y Chrupała, Grzegorz %Y Alishahi, Afra %S Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP %D 2018 %8 November %I Association for Computational Linguistics %C Brussels, Belgium %F spinks-moens-2018-evaluating %X We present a methodology for determining the quality of textual representations through the ability to generate images from them. Continuous representations of textual input are ubiquitous in modern Natural Language Processing techniques, either at the core of machine learning algorithms or as a by-product at any given layer of a neural network. While current techniques to evaluate such representations focus on their performance on particular tasks, they do not provide a clear understanding of the level of informational detail stored within them, especially their ability to represent spatial information. The central premise of this paper is that visual inspection or analysis is the most convenient method to quickly and accurately determine information content. Through the use of text-to-image neural networks, we propose a new technique to compare the quality of textual representations by visualizing their information content. The method is illustrated on a medical dataset where the correct representation of spatial information and shorthands is of particular importance. For four different well-known textual representations, we show with a quantitative analysis that some representations are consistently able to deliver higher-quality visualizations of the information content. Additionally, we show that the quantitative analysis technique correlates with the judgment of a human expert evaluator in terms of alignment. %R 10.18653/v1/W18-5405 %U https://aclanthology.org/W18-5405 %U https://doi.org/10.18653/v1/W18-5405 %P 30-39