Robustifying Sentiment Classification by Maximally Exploiting Few Counterfactuals

Maarten De Raedt, Fréderic Godin, Chris Develder, Thomas Demeester


Abstract
Finetuned language models perform remarkably well on text classification tasks. Yet, they tend to rely on spurious patterns in the training data, which limits their performance on out-of-distribution (OOD) test data. Among recent methods aiming to mitigate this spurious-pattern problem, adding extra counterfactual samples to the training data has proven very effective. However, counterfactual data generation is costly, since it relies on human annotation. We therefore propose a novel solution that requires annotating only a small fraction (e.g., 1%) of the original training data, and automatically generates extra counterfactuals in an encoding vector space. We demonstrate the effectiveness of our approach on sentiment classification, using IMDb data for training and other sets for OOD testing (i.e., Amazon, SemEval, and Yelp). We achieve noticeable accuracy improvements by adding only 1% manual counterfactuals: +3% compared to adding +100% in-distribution training samples, and +1.3% compared to alternative counterfactual approaches.
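To make the abstract's core idea concrete, here is a minimal illustrative sketch (not the paper's actual algorithm; all names and the offset-based generation scheme are assumptions): given a few pairs of encoded originals and their manually written counterfactuals, one can estimate an average "label-flip" offset in the encoding space and apply it to the remaining, unannotated training encodings to synthesize extra counterfactual vectors.

```python
# Hypothetical sketch of counterfactual generation in an encoding space.
# NOT the authors' method: the mean-offset heuristic and random
# "encoder outputs" below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # toy encoding dimension

# A few annotated examples (~1% of the data): encoder outputs for the
# original reviews and for their human-written counterfactuals.
orig_enc = rng.normal(size=(5, dim))
counter_enc = orig_enc + 1.0  # toy stand-in for real counterfactual encodings

# Average offset between counterfactual and original encodings:
# a single "label-flip" direction in the vector space.
flip_direction = (counter_enc - orig_enc).mean(axis=0)

# Apply that offset to the remaining (unannotated) training encodings to
# generate synthetic counterfactual vectors; their labels are flipped.
unlabeled_enc = rng.normal(size=(100, dim))
synthetic_counterfactuals = unlabeled_enc + flip_direction
```

The synthetic vectors could then be added to the training set of a lightweight classifier operating on the frozen encodings, which is what makes an annotation budget of only 1% plausible.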
Anthology ID:
2022.emnlp-main.783
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11386–11400
URL:
https://aclanthology.org/2022.emnlp-main.783
DOI:
10.18653/v1/2022.emnlp-main.783
Cite (ACL):
Maarten De Raedt, Fréderic Godin, Chris Develder, and Thomas Demeester. 2022. Robustifying Sentiment Classification by Maximally Exploiting Few Counterfactuals. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11386–11400, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Robustifying Sentiment Classification by Maximally Exploiting Few Counterfactuals (De Raedt et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.783.pdf