Measuring and Mitigating Local Instability in Deep Neural Networks

Arghya Datta, Subhrangshu Nandi, Jingcheng Xu, Greg Ver Steeg, He Xie, Anoop Kumar, Aram Galstyan


Abstract
Deep Neural Networks (DNNs) are becoming integral components of real-world services relied upon by millions of users. Unfortunately, architects of these systems can find it difficult to ensure reliable performance, as irrelevant details like random initialization can unexpectedly change the outputs of a trained system with potentially disastrous consequences. We formulate the model stability problem by studying how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process. For Natural Language Understanding (NLU) tasks, we find instability in predictions for a significant fraction of queries. We formulate principled metrics, like per-sample “label entropy” across training runs or within a single training run, to quantify this phenomenon. Intriguingly, we find that unstable predictions do not appear at random, but rather appear to be clustered in data-specific ways. We study data-agnostic regularization methods to improve stability and propose new data-centric methods that exploit our local stability estimates. We find that our localized data-specific mitigation strategy dramatically outperforms data-agnostic methods, and comes within 90% of the gold standard, achieved by ensembling, at a fraction of the computational cost.
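As a rough illustration of the per-sample "label entropy" metric mentioned above, the sketch below computes the entropy of the empirical distribution of labels predicted for a single query across several independently retrained models. The function name, the example intent labels, and the number of runs are illustrative assumptions, not the authors' implementation.

import math
from collections import Counter

def label_entropy(predicted_labels):
    # Entropy (in bits) of the empirical label distribution for one sample
    # across multiple training runs; 0 means the prediction is perfectly stable.
    counts = Counter(predicted_labels)
    total = len(predicted_labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Example: labels predicted for the same query by 5 independently retrained models.
print(label_entropy(["play_music"] * 5))                        # 0.0  (stable)
print(label_entropy(["play_music"] * 3 + ["get_weather"] * 2))  # ~0.97 (unstable)

Samples with high label entropy are the "locally unstable" ones the paper targets with its data-centric mitigation strategies.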
Anthology ID:
2023.findings-acl.176
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2810–2823
URL:
https://aclanthology.org/2023.findings-acl.176
DOI:
10.18653/v1/2023.findings-acl.176
Cite (ACL):
Arghya Datta, Subhrangshu Nandi, Jingcheng Xu, Greg Ver Steeg, He Xie, Anoop Kumar, and Aram Galstyan. 2023. Measuring and Mitigating Local Instability in Deep Neural Networks. In Findings of the Association for Computational Linguistics: ACL 2023, pages 2810–2823, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Measuring and Mitigating Local Instability in Deep Neural Networks (Datta et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.176.pdf