Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning

Yunchao Zhang, Zonglin Di, Kaiwen Zhou, Cihang Xie, Xin Wang


Abstract
Federated embodied agent learning protects the data privacy of individual visual environments by keeping data locally at each client (the individual environment) during training. However, since the local data is inaccessible to the server under federated learning, attackers may easily poison the training data of a local client to implant a backdoor in the agent without notice. Deploying such an agent raises the risk of potential harm to humans, as attackers can navigate and control the agent as they wish via the backdoor. Towards Byzantine-robust federated embodied agent learning, in this paper we study attack and defense for the task of vision-and-language navigation (VLN), where the agent is required to follow natural language instructions to navigate indoor environments. First, we introduce a simple but effective attack strategy, Navigation as Wish (NAW), in which a malicious client manipulates its local trajectory data to implant a backdoor into the global model. Results on two VLN datasets (R2R and RxR) show that NAW lets the attacker steer the deployed VLN agent regardless of the language instruction, without affecting its performance on normal test sets. Then, we propose a new Prompt-Based Aggregation (PBA) defense against the NAW attack in federated VLN, which provides the server with a "prompt" of the vision-and-language alignment variance between the benign and malicious clients so that they can be distinguished during training. We validate the effectiveness of PBA in protecting the global model from the NAW attack; it outperforms other state-of-the-art defense methods by a large margin on the defense metrics for R2R and RxR.
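To make the aggregation setting concrete, the sketch below shows one way a server could score client updates and filter suspected backdoored clients before averaging. This is a minimal, illustrative toy and not the paper's released implementation: the functions alignment_score and prompt_based_aggregation, the cosine-similarity scoring against a server-side "probe" direction, and the z-score filtering threshold are all assumptions introduced here for illustration; the actual PBA method distinguishes clients through their vision-and-language alignment variance.

import numpy as np

def alignment_score(update, probe):
    # Hypothetical stand-in for the paper's vision-and-language alignment "prompt":
    # here we simply take the cosine similarity between a client's model update
    # and a server-side probe direction.
    u, p = update.ravel(), probe.ravel()
    return float(u @ p / (np.linalg.norm(u) * np.linalg.norm(p) + 1e-8))

def prompt_based_aggregation(client_updates, probe, z_thresh=2.0):
    # Score every client, drop clients whose score deviates strongly from the
    # rest (suspected backdoored/NAW clients), and average the remainder.
    scores = np.array([alignment_score(u, probe) for u in client_updates])
    z = np.abs(scores - scores.mean()) / (scores.std() + 1e-8)
    keep = z < z_thresh
    kept = [u for u, k in zip(client_updates, keep) if k]
    return np.mean(kept, axis=0), keep

# Toy usage: nine benign clients plus one simulated poisoned client whose
# update is strongly shifted toward the probe direction.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=(8, 8)) for _ in range(9)]
poisoned = rng.normal(1.0, 0.1, size=(8, 8))
probe = np.ones((8, 8))
aggregated, kept = prompt_based_aggregation(benign + [poisoned], probe)
print("kept clients:", kept)

In this toy run the poisoned client's score is an outlier relative to the benign clients, so it is excluded from the average, mirroring the intuition that the server can separate benign from malicious clients before aggregation.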
Anthology ID:
2024.naacl-long.57
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1002–1016
URL:
https://aclanthology.org/2024.naacl-long.57
Cite (ACL):
Yunchao Zhang, Zonglin Di, Kaiwen Zhou, Cihang Xie, and Xin Wang. 2024. Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1002–1016, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning (Zhang et al., NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.57.pdf
Copyright:
2024.naacl-long.57.copyright.pdf