Dim-Krum: Backdoor-Resistant Federated Learning for NLP with Dimension-wise Krum-Based Aggregation

Zhiyuan Zhang, Qi Su, Xu Sun


Abstract
Despite the potential of federated learning, it is known to be vulnerable to backdoor attacks. Many robust federated aggregation methods have been proposed to reduce the potential backdoor risk. However, they are mainly validated in the CV field. In this paper, we find that NLP backdoors are harder to defend against than CV backdoors, and we provide a theoretical analysis showing that the malicious update detection error probabilities are determined by the relative backdoor strengths. NLP attacks tend to have small relative backdoor strengths, which may result in the failure of robust federated aggregation methods for NLP attacks. Inspired by the theoretical results, we can choose some dimensions with higher backdoor strengths to address this issue. We propose a novel federated aggregation algorithm, Dim-Krum, for NLP tasks, and experimental results validate its effectiveness.
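The abstract's core idea can be sketched as follows: apply a Krum-style selection not to whole client updates, but dimension by dimension, so that dimensions with higher relative backdoor strength can be filtered more reliably. The sketch below is a minimal illustration of that idea, assuming NumPy; the function names and the simplified selection rule are illustrative and do not reproduce the paper's exact algorithm.

```python
import numpy as np

def krum_select(values, n_malicious):
    """Simplified Krum selection over scalar candidates.

    For each candidate, sum the squared distances to its k nearest
    other candidates (k = n - n_malicious - 2) and return the
    candidate with the smallest score. Illustrative sketch only.
    """
    values = np.asarray(values, dtype=float)
    n = len(values)
    k = max(n - n_malicious - 2, 1)  # neighbours to sum over
    scores = []
    for i in range(n):
        dists = np.sort((values - values[i]) ** 2)
        # dists[0] is the zero self-distance; sum the k nearest others
        scores.append(dists[1:k + 1].sum())
    return values[int(np.argmin(scores))]

def dim_krum(updates, n_malicious):
    """Apply the Krum-style selection independently per dimension.

    updates: (n_clients, dim) array of client model updates.
    Returns a (dim,) aggregated update.
    """
    updates = np.asarray(updates, dtype=float)
    return np.array([krum_select(updates[:, d], n_malicious)
                     for d in range(updates.shape[1])])
```

For example, with three benign clients clustered near (0, 1) and one malicious client at (5, -5), `dim_krum(updates, n_malicious=1)` selects values from the benign cluster in each coordinate, discarding the outlier's contribution.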
Anthology ID:
2022.findings-emnlp.25
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
339–354
URL:
https://aclanthology.org/2022.findings-emnlp.25
DOI:
10.18653/v1/2022.findings-emnlp.25
Bibkey:
Cite (ACL):
Zhiyuan Zhang, Qi Su, and Xu Sun. 2022. Dim-Krum: Backdoor-Resistant Federated Learning for NLP with Dimension-wise Krum-Based Aggregation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 339–354, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Dim-Krum: Backdoor-Resistant Federated Learning for NLP with Dimension-wise Krum-Based Aggregation (Zhang et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-emnlp.25.pdf
Video:
https://aclanthology.org/2022.findings-emnlp.25.mp4