%0 Conference Proceedings
%T T3-Vis: visual analytic for Training and fine-Tuning Transformers in NLP
%A Li, Raymond
%A Xiao, Wen
%A Wang, Lanjun
%A Jang, Hyeju
%A Carenini, Giuseppe
%Y Adel, Heike
%Y Shi, Shuming
%S Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
%D 2021
%8 November
%I Association for Computational Linguistics
%C Online and Punta Cana, Dominican Republic
%F li-etal-2021-t3
%X Transformers are the dominant architecture in NLP, but their training and fine-tuning is still very challenging. In this paper, we present the design and implementation of a visual analytic framework for assisting researchers in such process, by providing them with valuable insights about the model’s intrinsic properties and behaviours. Our framework offers an intuitive overview that allows the user to explore different facets of the model (e.g., hidden states, attention) through interactive visualization, and allows a suite of built-in algorithms that compute the importance of model components and different parts of the input sequence. Case studies and feedback from a user focus group indicate that the framework is useful, and suggest several improvements. Our framework is available at: https://github.com/raymondzmc/T3-Vis.
%R 10.18653/v1/2021.emnlp-demo.26
%U https://aclanthology.org/2021.emnlp-demo.26
%U https://doi.org/10.18653/v1/2021.emnlp-demo.26
%P 220-230