%0 Conference Proceedings
%T The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for NLP Models
%A Tenney, Ian
%A Wexler, James
%A Bastings, Jasmijn
%A Bolukbasi, Tolga
%A Coenen, Andy
%A Gehrmann, Sebastian
%A Jiang, Ellen
%A Pushkarna, Mahima
%A Radebaugh, Carey
%A Reif, Emily
%A Yuan, Ann
%Y Liu, Qun
%Y Schlangen, David
%S Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
%D 2020
%8 October
%I Association for Computational Linguistics
%C Online
%F tenney-etal-2020-language
%X We present the Language Interpretability Tool (LIT), an open-source platform for visualization and understanding of NLP models. We focus on core questions about model behavior: Why did my model make this prediction? When does it perform poorly? What happens under a controlled change in the input? LIT integrates local explanations, aggregate analysis, and counterfactual generation into a streamlined, browser-based interface to enable rapid exploration and error analysis. We include case studies for a diverse set of workflows, including exploring counterfactuals for sentiment analysis, measuring gender bias in coreference systems, and exploring local behavior in text generation. LIT supports a wide range of models—including classification, seq2seq, and structured prediction—and is highly extensible through a declarative, framework-agnostic API. LIT is under active development, with code and full documentation available at https://github.com/pair-code/lit.
%R 10.18653/v1/2020.emnlp-demos.15
%U https://aclanthology.org/2020.emnlp-demos.15
%U https://doi.org/10.18653/v1/2020.emnlp-demos.15
%P 107-118