%0 Conference Proceedings
%T TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing
%A Wang, Xiao
%A Liu, Qin
%A Gui, Tao
%A Zhang, Qi
%A Zou, Yicheng
%A Zhou, Xin
%A Ye, Jiacheng
%A Zhang, Yongxin
%A Zheng, Rui
%A Pang, Zexiong
%A Wu, Qinzhuo
%A Li, Zhengyan
%A Zhang, Chong
%A Ma, Ruotian
%A Fei, Zichu
%A Cai, Ruijian
%A Zhao, Jun
%A Hu, Xingwu
%A Yan, Zhiheng
%A Tan, Yiding
%A Hu, Yuan
%A Bian, Qiyuan
%A Liu, Zhihua
%A Qin, Shan
%A Zhu, Bolin
%A Xing, Xiaoyu
%A Fu, Jinlan
%A Zhang, Yue
%A Peng, Minlong
%A Zheng, Xiaoqing
%A Zhou, Yaqian
%A Wei, Zhongyu
%A Qiu, Xipeng
%A Huang, Xuanjing
%Y Ji, Heng
%Y Park, Jong C.
%Y Xia, Rui
%S Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations
%D 2021
%8 August
%I Association for Computational Linguistics
%C Online
%F wang-etal-2021-textflint
%X TextFlint is a multilingual robustness evaluation toolkit for NLP tasks that incorporates universal text transformation, task-specific transformation, adversarial attack, subpopulation, and their combinations to provide comprehensive robustness analyses. This enables practitioners to automatically evaluate their models from various aspects, or to customize their evaluations as desired, with just a few lines of code. TextFlint also generates complete analytical reports as well as targeted augmented data to address the shortcomings of the model in terms of its robustness. To guarantee acceptability, all the text transformations are linguistically based, and all the transformed data selected (up to 100,000 texts) scored highly under human evaluation. To validate the utility, we performed large-scale empirical evaluations (over 67,000 evaluations) on state-of-the-art deep learning models, classic supervised methods, and real-world systems. The toolkit is already available at https://github.com/textflint, with all the evaluation results demonstrated at textflint.io.
%R 10.18653/v1/2021.acl-demo.41
%U https://aclanthology.org/2021.acl-demo.41
%U https://doi.org/10.18653/v1/2021.acl-demo.41
%P 347-355