CREST: A Joint Framework for Rationalization and Counterfactual Text Generation

Marcos Treviso, Alexis Ross, Nuno M. Guerreiro, André Martins


Abstract
Selective rationales and counterfactual examples have emerged as two effective, complementary classes of interpretability methods for analyzing and training NLP models. However, prior work has not explored how these methods can be integrated to combine their complementary advantages. We overcome this limitation by introducing CREST (ContRastive Edits with Sparse raTionalization), a joint framework for selective rationalization and counterfactual text generation, and show that this framework leads to improvements in counterfactual quality, model robustness, and interpretability. First, CREST generates valid counterfactuals that are more natural than those produced by previous methods, and subsequently can be used for data augmentation at scale, reducing the need for human-generated examples. Second, we introduce a new loss function that leverages CREST counterfactuals to regularize selective rationales and show that this regularization improves both model robustness and rationale quality, compared to methods that do not leverage CREST counterfactuals. Our results demonstrate that CREST successfully bridges the gap between selective rationales and counterfactual examples, addressing the limitations of existing methods and providing a more comprehensive view of a model’s predictions.
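The abstract mentions a loss function that uses CREST counterfactuals to regularize selective rationales. The paper's exact formulation is not reproduced here; below is a minimal, illustrative PyTorch sketch of the general idea, assuming paired rationale probabilities for an original example and its counterfactual edit that have been aligned to the same length. The function names, the L1 agreement form, and the weighting scheme are assumptions made for illustration, not the published method.

import torch

def rationale_agreement_loss(z_orig, z_cf, token_mask):
    # Hypothetical regularizer (not the paper's exact loss): penalize
    # disagreement between rationale probabilities extracted from an
    # original example (z_orig) and from its counterfactual edit (z_cf).
    # Shapes: (batch, seq_len), values in [0, 1]; token_mask is 1 for
    # real tokens and 0 for padding. Assumes each pair has been aligned
    # and padded to the same length.
    disagreement = (z_orig - z_cf).abs() * token_mask
    return disagreement.sum(-1) / token_mask.sum(-1).clamp(min=1)

def training_loss(task_loss, z_orig, z_cf, token_mask, lam=1.0):
    # Task loss (e.g., cross-entropy over factual and counterfactual
    # examples) plus the counterfactual-guided rationale penalty,
    # weighted by a hyperparameter lam.
    return task_loss + lam * rationale_agreement_loss(z_orig, z_cf, token_mask).mean()

In this sketch, setting lam to zero recovers standard rationale training, while larger values push the rationalizer toward explanations that remain consistent across an example and its contrastive edit.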
Anthology ID:
2023.acl-long.842
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
15109–15126
URL:
https://aclanthology.org/2023.acl-long.842
DOI:
10.18653/v1/2023.acl-long.842
Cite (ACL):
Marcos Treviso, Alexis Ross, Nuno M. Guerreiro, and André Martins. 2023. CREST: A Joint Framework for Rationalization and Counterfactual Text Generation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15109–15126, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
CREST: A Joint Framework for Rationalization and Counterfactual Text Generation (Treviso et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.842.pdf
Video:
https://aclanthology.org/2023.acl-long.842.mp4