Outlier-Aware Training for Improving Group Accuracy Disparities

Li-Kuang Chen, Canasai Kruengkrai, Junichi Yamagishi


Abstract
Methods addressing spurious correlations such as Just Train Twice (JTT, Liu et al. 2021) involve reweighting a subset of the training set to maximize the worst-group accuracy. However, the reweighted set of examples may potentially contain unlearnable examples that hamper the model’s learning. We propose mitigating this by detecting outliers to the training set and removing them before reweighting. Our experiments show that our method achieves competitive or better accuracy compared with JTT and can detect and remove annotation errors in the subset being reweighted in JTT.
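The procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes outliers in the upweighted (error) set are flagged by z-scoring per-example losses from an initially trained model, and all function and parameter names (`upweight_without_outliers`, `z_thresh`, `upweight`) are hypothetical.

```python
def upweight_without_outliers(losses, misclassified, z_thresh=3.0, upweight=5.0):
    """Return per-example training weights.

    As in JTT, misclassified examples are upweighted -- except those
    whose loss is a statistical outlier (a proxy for unlearnable or
    mislabeled examples), which are dropped (weight 0) instead.
    """
    n = len(losses)
    mean = sum(losses) / n
    var = sum((l - mean) ** 2 for l in losses) / n
    std = var ** 0.5 or 1.0  # guard against zero variance

    weights = []
    for loss, wrong in zip(losses, misclassified):
        z = (loss - mean) / std
        if wrong and z <= z_thresh:
            weights.append(upweight)   # in the error set, not an outlier
        elif wrong:
            weights.append(0.0)        # outlier: likely annotation error
        else:
            weights.append(1.0)        # correctly classified
    return weights
```

The outlier criterion here (a loss z-score threshold) is a stand-in; the paper's actual detection method may differ.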
Anthology ID:
2022.aacl-srw.8
Volume:
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop
Month:
November
Year:
2022
Address:
Online
Editors:
Yan Hanqi, Yang Zonghan, Sebastian Ruder, Wan Xiaojun
Venues:
AACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
54–60
URL:
https://aclanthology.org/2022.aacl-srw.8
Cite (ACL):
Li-Kuang Chen, Canasai Kruengkrai, and Junichi Yamagishi. 2022. Outlier-Aware Training for Improving Group Accuracy Disparities. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop, pages 54–60, Online. Association for Computational Linguistics.
Cite (Informal):
Outlier-Aware Training for Improving Group Accuracy Disparities (Chen et al., AACL-IJCNLP 2022)
PDF:
https://aclanthology.org/2022.aacl-srw.8.pdf