Rationalizing Transformer Predictions via End-To-End Differentiable Self-Training

Marc Brinner, Sina Zarrieß


Abstract
We propose an end-to-end differentiable training paradigm for stable training of a rationalized transformer classifier. Our approach results in a single model that simultaneously classifies a sample and scores input tokens based on their relevance to the classification. To this end, we build on the widely used three-player game for training rationalized models, which typically relies on training a rationale selector, a classifier, and a complement classifier. We simplify this approach by making a single model fulfill all three roles, leading to a more efficient training paradigm that is not susceptible to the common training instabilities that plague existing approaches. Further, we extend this paradigm to produce class-wise rationales while incorporating recent advances in parameterizing and regularizing the resulting rationales, thus achieving substantially improved, state-of-the-art alignment with human annotations without any explicit supervision.
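The following is a minimal sketch, in PyTorch-style pseudocode, of the general idea described in the abstract: a single transformer plays all three roles of the three-player game by scoring tokens for relevance and classifying the full input, the soft-masked rationale, and the complement. It is not the authors' implementation; all class names, model sizes, loss terms, and weights are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RationalizedClassifier(nn.Module):
    # Hypothetical single model that both classifies and rates token relevance.
    def __init__(self, vocab_size=30522, d_model=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.token_scorer = nn.Linear(d_model, 1)      # per-token relevance score
        self.classifier = nn.Linear(d_model, num_classes)

    def forward(self, input_ids, token_mask=None):
        x = self.embed(input_ids)
        if token_mask is not None:
            x = x * token_mask.unsqueeze(-1)           # soft, differentiable masking
        h = self.encoder(x)
        relevance = torch.sigmoid(self.token_scorer(h)).squeeze(-1)
        logits = self.classifier(h.mean(dim=1))
        return logits, relevance

def self_training_step(model, input_ids, labels, sparsity_weight=0.01):
    # The same model fills the selector, classifier, and complement-classifier
    # roles, so the whole step stays end-to-end differentiable.
    logits_full, relevance = model(input_ids)
    logits_rationale, _ = model(input_ids, token_mask=relevance)
    logits_complement, _ = model(input_ids, token_mask=1.0 - relevance)

    # The full input and the rationale alone should both support the prediction.
    loss = F.cross_entropy(logits_full, labels) + F.cross_entropy(logits_rationale, labels)

    # The complement should be uninformative: as a simple stand-in for the
    # adversarial complement classifier, push its prediction toward uniform.
    log_p_comp = F.log_softmax(logits_complement, dim=-1)
    uniform = torch.full_like(log_p_comp, 1.0 / log_p_comp.size(-1))
    loss = loss + F.kl_div(log_p_comp, uniform, reduction="batchmean")

    # Sparsity regularization keeps the selected rationale short.
    loss = loss + sparsity_weight * relevance.mean()
    return loss

# Illustrative usage:
# model = RationalizedClassifier()
# loss = self_training_step(model, torch.randint(0, 30522, (8, 64)), torch.randint(0, 2, (8,)))
# loss.backward()

Using soft (sigmoid) relevance scores as a mask is one way to keep gradient flow through the selection step; the paper's actual parameterization and regularization of rationales may differ.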
Anthology ID:
2024.emnlp-main.664
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11894–11907
URL:
https://aclanthology.org/2024.emnlp-main.664
Cite (ACL):
Marc Brinner and Sina Zarrieß. 2024. Rationalizing Transformer Predictions via End-To-End Differentiable Self-Training. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 11894–11907, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Rationalizing Transformer Predictions via End-To-End Differentiable Self-Training (Brinner & Zarrieß, EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.664.pdf