%0 Conference Proceedings
%T Alignment Rationale for Natural Language Inference
%A Jiang, Zhongtao
%A Zhang, Yuanzhe
%A Yang, Zhao
%A Zhao, Jun
%A Liu, Kang
%Y Zong, Chengqing
%Y Xia, Fei
%Y Li, Wenjie
%Y Navigli, Roberto
%S Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
%D 2021
%8 August
%I Association for Computational Linguistics
%C Online
%F jiang-etal-2021-alignment
%X Deep learning models have achieved great success on the task of Natural Language Inference (NLI), yet few attempts have been made to explain their behavior. Existing explanation methods usually pick prominent features such as words or phrases from the input text. For NLI, however, alignments between words or phrases are more enlightening clues for explaining a model. To this end, this paper presents AREC, a post-hoc approach that generates alignment rationale explanations for co-attention based NLI models. The explanation is based on feature selection, which keeps few but sufficient alignments while maintaining the target model's prediction. Experimental results show that our method is more faithful and human-readable than many existing approaches. We further study and re-evaluate three typical models through our explanation beyond accuracy, and propose a simple method that greatly improves model robustness.
%R 10.18653/v1/2021.acl-long.417
%U https://aclanthology.org/2021.acl-long.417
%U https://doi.org/10.18653/v1/2021.acl-long.417
%P 5372-5387