ER-Test: Evaluating Explanation Regularization Methods for Language Models

Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, Xiang Ren

Abstract
By explaining how humans would solve a given task, human rationales can provide a strong learning signal for neural language models (NLMs). Explanation regularization (ER) aims to improve NLM generalization by pushing the NLM’s machine rationales (Which input tokens did the NLM focus on?) to align with human rationales (Which input tokens would humans focus on?). Though prior work primarily studies ER via in-distribution (ID) evaluation, out-of-distribution (OOD) generalization is often more critical in real-world scenarios, yet ER’s effect on OOD generalization has been underexplored. In this paper, we introduce ER-Test, a framework for evaluating ER models’ OOD generalization along three dimensions: unseen datasets, contrast set tests, and functional tests. Using ER-Test, we comprehensively analyze how ER models’ OOD generalization varies with the rationale alignment criterion (loss function), human rationale type (instance-level vs. task-level), number and choice of rationale-annotated instances, and time budget for rationale annotation. Across two tasks and six datasets, we show that ER has little impact on ID performance but yields large OOD performance gains, with the best ER criterion being task-dependent. Also, ER can improve OOD performance even with task-level or few human rationales. Finally, we find that rationale annotation is more time-efficient than label annotation for improving OOD performance. Our results with ER-Test help demonstrate ER’s utility and establish best practices for using ER effectively.
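To make the ER setup concrete, the sketch below combines a standard task loss with a rationale alignment loss between the model’s token importance scores and a human rationale mask. This is a minimal illustration, not the paper’s exact implementation: it assumes token-level importance scores in [0, 1] and binary human rationale masks, uses mean squared error as one plausible alignment criterion (the paper compares several), and the function name `er_loss` and weight `lam` are hypothetical.

```python
# Minimal sketch of an explanation regularization (ER) objective.
# Assumes token-level machine importance scores and binary human
# rationale masks; `er_loss` and `lam` are illustrative names.
import torch
import torch.nn.functional as F

def er_loss(logits, labels, machine_rationale, human_rationale, lam=1.0):
    """Task loss plus a rationale alignment penalty (one ER criterion).

    logits:            (batch, num_classes) task predictions
    labels:            (batch,) gold class labels
    machine_rationale: (batch, seq_len) token importance scores from the NLM
    human_rationale:   (batch, seq_len) binary human rationale mask
    lam:               weight on the alignment term
    """
    task_loss = F.cross_entropy(logits, labels)
    # MSE between machine and human rationales is one common alignment
    # criterion; the paper studies how the choice of criterion matters.
    align_loss = F.mse_loss(machine_rationale, human_rationale.float())
    return task_loss + lam * align_loss

# Toy usage: 2 examples, 3 classes, 5 tokens each.
logits = torch.randn(2, 3)
labels = torch.tensor([0, 2])
machine = torch.rand(2, 5)  # e.g., normalized attribution scores
human = torch.tensor([[1, 1, 0, 0, 0],
                      [0, 0, 1, 1, 0]])
print(er_loss(logits, labels, machine, human))
```

In practice the machine rationale would come from an attribution method applied to the NLM (e.g., gradient- or attention-based scores), and the total loss would be minimized jointly with the task objective during fine-tuning.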
Anthology ID:
2022.findings-emnlp.242
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3315–3336
URL:
https://aclanthology.org/2022.findings-emnlp.242
DOI:
10.18653/v1/2022.findings-emnlp.242
Cite (ACL):
Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, and Xiang Ren. 2022. ER-Test: Evaluating Explanation Regularization Methods for Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3315–3336, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
ER-Test: Evaluating Explanation Regularization Methods for Language Models (Joshi et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.242.pdf