Extending Logic Explained Networks to Text Classification

Rishabh Jain, Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Davide Buffelli, Pietro Lio


Abstract
Recently, Logic Explained Networks (LENs) have been proposed as explainable-by-design neural models that provide logic explanations for their predictions. However, these models have only been applied to vision and tabular data, and they mostly favour the generation of global explanations, while local ones tend to be noisy and verbose. For these reasons, we propose LENp, which improves local explanations by perturbing input words, and we test it on text classification. Our results show that (i) LENp provides better local explanations than LIME in terms of sensitivity and faithfulness, and (ii) its logic explanations are more useful and user-friendly than the feature scoring provided by LIME, as attested by a human survey.
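As a rough illustration of the word-perturbation idea mentioned in the abstract (not the authors' actual LENp implementation), the sketch below removes one input word at a time and measures how much a black-box classifier's output changes; `classify`, `word_importances`, and the toy example are hypothetical stand-ins introduced here for illustration only.

```python
# Minimal sketch of word-level perturbation for local explanations.
# Illustrative only: `classify` is assumed to be any callable that maps a
# sentence (str) to a score for the positive class.

def word_importances(sentence, classify):
    """Score each word by how much its removal changes the prediction."""
    words = sentence.split()
    base = classify(sentence)
    scores = []
    for i, word in enumerate(words):
        # Perturb the input by dropping the i-th word and re-run the classifier.
        perturbed = " ".join(words[:i] + words[i + 1:])
        scores.append((word, base - classify(perturbed)))
    # Words whose removal changes the prediction most are locally most important.
    return sorted(scores, key=lambda pair: -abs(pair[1]))


if __name__ == "__main__":
    # Toy classifier: fraction of words drawn from a small "positive" vocabulary.
    positive = {"useful", "friendly", "great"}
    toy_classify = lambda s: sum(w in positive for w in s.split()) / max(len(s.split()), 1)
    print(word_importances("the explanations are useful and friendly", toy_classify))
```

Perturbation-based scores of this kind are what local explanation methods such as LIME build on; the paper's contribution is to turn such local evidence into logic explanations rather than raw feature scores.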
Anthology ID: 2022.emnlp-main.604
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 8838–8857
URL: https://aclanthology.org/2022.emnlp-main.604
DOI: 10.18653/v1/2022.emnlp-main.604
Cite (ACL): Rishabh Jain, Gabriele Ciravegna, Pietro Barbiero, Francesco Giannini, Davide Buffelli, and Pietro Lio. 2022. Extending Logic Explained Networks to Text Classification. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8838–8857, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Extending Logic Explained Networks to Text Classification (Jain et al., EMNLP 2022)
PDF: https://aclanthology.org/2022.emnlp-main.604.pdf