Are self-explanations from Large Language Models faithful?

Andreas Madsen, Sarath Chandar, Siva Reddy


Abstract
Instruction-tuned Large Language Models (LLMs) excel at many tasks and will even explain their reasoning, so-called self-explanations. However, convincing but wrong self-explanations can lead to unsupported confidence in LLMs, thus increasing risk. Therefore, it is important to measure whether self-explanations truly reflect the model's behavior. Such a measure is called interpretability-faithfulness and is challenging to perform since the ground truth is inaccessible, and many LLMs only have an inference API. To address this, we propose employing self-consistency checks to measure faithfulness. For example, if an LLM says a set of words is important for making a prediction, then it should not be able to make its prediction without these words. While self-consistency checks are a common approach to faithfulness, they have not previously been successfully applied to LLM self-explanations for counterfactual, feature attribution, and redaction explanations. Our results demonstrate that faithfulness is explanation-, model-, and task-dependent, showing self-explanations should not be trusted in general. For example, with sentiment classification, counterfactuals are more faithful for Llama2, feature attribution for Mistral, and redaction for Falcon 40B.
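The self-consistency check described in the abstract can be made concrete. Below is a minimal Python sketch of the feature-attribution variant, assuming only access to an inference API; the query_llm function, the prompts, and the redaction token are hypothetical illustrations, not the authors' implementation (see the paper PDF for the actual protocol).

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM inference API call."""
    raise NotImplementedError("plug in a real inference client here")

def self_consistency_check(text: str) -> bool:
    # Step 1: get the model's prediction (sentiment classification example).
    prediction = query_llm(
        f"Classify the sentiment of this review as positive or negative:\n{text}"
    )
    # Step 2: ask the model to explain itself by naming the important words
    # (a feature-attribution style self-explanation).
    important = query_llm(
        "Which words in the review were most important for that answer? "
        f"Reply with a comma-separated list:\n{text}"
    ).split(",")
    # Step 3: redact the words the model claimed were important.
    redacted = text
    for word in (w.strip() for w in important):
        if word:
            redacted = redacted.replace(word, "[REDACTED]")
    # Step 4: re-query on the redacted input. If the self-explanation is
    # faithful, the model should no longer recover its original prediction.
    new_prediction = query_llm(
        f"Classify the sentiment of this review as positive or negative:\n{redacted}"
    )
    return new_prediction.strip().lower() != prediction.strip().lower()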
Anthology ID: 2024.findings-acl.19
Volume: Findings of the Association for Computational Linguistics ACL 2024
Month: August
Year: 2024
Address: Bangkok, Thailand and virtual meeting
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 295–337
URL: https://aclanthology.org/2024.findings-acl.19
Cite (ACL): Andreas Madsen, Sarath Chandar, and Siva Reddy. 2024. Are self-explanations from Large Language Models faithful?. In Findings of the Association for Computational Linguistics ACL 2024, pages 295–337, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal): Are self-explanations from Large Language Models faithful? (Madsen et al., Findings 2024)
PDF: https://aclanthology.org/2024.findings-acl.19.pdf