DROWN: Towards Tighter LiRPA-based Robustness Certification

Yunruo Zhang, Tianyu Du, Shouling Ji, Shanqing Guo


Abstract
The susceptibility of deep neural networks (DNNs) to adversarial attacks is a well-established concern. Robustness certification has been proposed to address this problem, but existing methods suffer from precision or scalability issues. In this paper, we present DROWN (Dual CROWN), a novel method for certifying the robustness of DNNs. DROWN tightens classic LiRPA-based methods while maintaining similar scalability: the tightening comes from refining pre-activation bounds using two pairs of linear bounds derived from different relaxations of the ReLU units in earlier layers. Extensive evaluations show that DROWN achieves up to 83.39% higher certified robust accuracy than the baseline on CNNs and up to 4.68 times larger certified radii than the baseline on Transformers, while its running time is about twice that of the baseline.
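Background sketch (an editor's illustration, not code from the paper): LiRPA/CROWN-style certifiers bound each ReLU by a pair of linear functions over the neuron's pre-activation interval [l, u]. For an unstable neuron (l < 0 < u), the chord through (l, 0) and (u, u) is a sound upper bound, and any line a*x with slope a in [0, 1] is a sound lower bound; CROWN typically picks slope 0 or 1. The "dual" idea the abstract describes is to carry two such relaxations and combine the bounds they induce. The minimal Python sketch below (all names hypothetical) shows the standard relaxation and a toy intersection of the two induced intervals:

def crown_relu_relaxation(l, u, lower_slope):
    """Linear bounds a_lo*x + b_lo <= relu(x) <= a_up*x + b_up on [l, u].

    Sound for any lower_slope in [0, 1] when the neuron is unstable
    (l < 0 < u); stable neurons get exact bounds.
    """
    if u <= 0:                       # always inactive: relu(x) = 0
        return 0.0, 0.0, 0.0, 0.0
    if l >= 0:                       # always active: relu(x) = x
        return 1.0, 0.0, 1.0, 0.0
    a_up = u / (u - l)               # chord through (l, 0) and (u, u)
    b_up = -l * u / (u - l)
    return lower_slope, 0.0, a_up, b_up

l, u = -1.0, 2.0                     # example pre-activation interval
intervals = []
for slope in (0.0, 1.0):             # the two "dual" relaxations
    a_lo, b_lo, a_up, b_up = crown_relu_relaxation(l, u, slope)
    lo = min(a_lo * l + b_lo, a_lo * u + b_lo)   # concretize lower bound
    hi = max(a_up * l + b_up, a_up * u + b_up)   # concretize upper bound
    intervals.append((lo, hi))
# Each interval is sound, so their intersection is sound and tighter.
tight = (max(lo for lo, _ in intervals), min(hi for _, hi in intervals))
print(tight)                         # (0.0, 2.0) for this example

On a single neuron, one relaxation may dominate (here the slope-0 pair already yields [0, 2]); per the abstract, the gain comes from propagating both pairs through subsequent layers, where neither relaxation uniformly dominates, so intersecting the resulting pre-activation bounds tightens certification.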
Anthology ID:
2025.coling-main.415
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
6212–6229
URL:
https://aclanthology.org/2025.coling-main.415/
Cite (ACL):
Yunruo Zhang, Tianyu Du, Shouling Ji, and Shanqing Guo. 2025. DROWN: Towards Tighter LiRPA-based Robustness Certification. In Proceedings of the 31st International Conference on Computational Linguistics, pages 6212–6229, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
DROWN: Towards Tighter LiRPA-based Robustness Certification (Zhang et al., COLING 2025)
PDF:
https://aclanthology.org/2025.coling-main.415.pdf