Gradient Inversion Attack in Federated Learning: Exposing Text Data through Discrete Optimization

Ying Gao, Yuxin Xie, Huanghao Deng, Zukun Zhu


Abstract
Federated learning has emerged as a potential solution to the bottleneck posed by the near exhaustion of public text data for training large language models. By exchanging gradients instead of raw data, it is claimed to enable training on text data that contains private information. Although recent studies demonstrate that data can be reconstructed from gradients, the threat to text data has appeared relatively small because text is sensitive to even a few token errors. However, we propose a novel attack method, FET, showing that it is possible to Fully Expose Text data from gradients. Unlike previous methods that optimize continuous embedding vectors, we directly search for a text sequence whose gradients match the known gradients. First, we infer the total number of tokens and the set of unique tokens in the target text from the gradients of the embedding layer. We then develop a discrete optimization algorithm that combines global and local search strategies: it explores the solution space globally and precisely refines the obtained solution. We also find that the gradients of the fully connected layer are dominant, providing sufficient guidance for the optimization process. Our experiments show a significant improvement in attack performance, with average increases in exact match rate of 39% for TinyBERT-6, 20% for BERT-base, and 15% for BERT-large across three datasets. These findings highlight serious privacy risks to text data and suggest that using smaller models is not an effective privacy-preserving strategy.
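
The first step described in the abstract, inferring the unique tokens from the embedding layer's gradients, rests on a well-known property: rows of the embedding matrix receive nonzero gradient only for tokens that occur in the input. Below is a minimal PyTorch sketch of that idea; the function name recover_unique_tokens and the threshold eps are our illustrative choices, not from the paper, and the paper's additional inference of the total token count is not reproduced here.

    import torch

    def recover_unique_tokens(embedding_grad: torch.Tensor, eps: float = 1e-9):
        # embedding_grad: [vocab_size, hidden_dim] gradient of the token
        # embedding matrix, as shared in a federated update.
        # Rows with (near-)zero norm belong to tokens absent from the input.
        row_norms = embedding_grad.norm(dim=1)            # [vocab_size]
        token_ids = torch.nonzero(row_norms > eps).squeeze(1)
        return token_ids, row_norms[token_ids]

    # Toy check: only the rows of tokens present in the input get gradient.
    vocab, dim = 100, 16
    emb = torch.nn.Embedding(vocab, dim)
    ids = torch.tensor([[5, 17, 17, 42]])
    emb(ids).sum().backward()
    print(recover_unique_tokens(emb.weight.grad)[0])      # tensor([ 5, 17, 42])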
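The core of the attack is discrete optimization over token sequences, guided by the distance between candidate gradients and the observed ones. FET combines global exploration with local refinement; the sketch below shows only a simplified hill-climbing local search, under assumed interfaces (model is a callable mapping token ids to logits; params are the parameters whose observed gradients target_grads were shared; all names are hypothetical, not the paper's API).

    import torch

    def grad_distance(model, loss_fn, seq, labels, target_grads, params):
        # L2 distance between the gradients induced by a candidate sequence
        # and the gradients observed from the federated update.
        loss = loss_fn(model(seq), labels)
        grads = torch.autograd.grad(loss, params)
        return sum((g - t).pow(2).sum() for g, t in zip(grads, target_grads))

    def local_search(model, loss_fn, labels, target_grads, params,
                     candidate_ids, seq_len, iters=500):
        # Hill climbing: start from a random sequence drawn from the
        # recovered token set, then keep single-token swaps that reduce
        # the gradient distance.
        seq = candidate_ids[torch.randint(len(candidate_ids), (1, seq_len))]
        best = grad_distance(model, loss_fn, seq, labels, target_grads, params)
        for _ in range(iters):
            pos = torch.randint(seq_len, (1,)).item()
            tok = candidate_ids[torch.randint(len(candidate_ids), (1,))].item()
            trial = seq.clone()
            trial[0, pos] = tok
            d = grad_distance(model, loss_fn, trial, labels, target_grads, params)
            if d < best:
                seq, best = trial, d
        return seq, best

The candidate_ids here would come from the embedding-gradient step above, and seq_len from the paper's inferred total token count; the paper's actual algorithm replaces this naive swap loop with coordinated global and local search.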
Anthology ID: 2025.coling-main.176
Volume: Proceedings of the 31st International Conference on Computational Linguistics
Month: January
Year: 2025
Address: Abu Dhabi, UAE
Editors: Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 2582–2591
URL: https://aclanthology.org/2025.coling-main.176/
Cite (ACL): Ying Gao, Yuxin Xie, Huanghao Deng, and Zukun Zhu. 2025. Gradient Inversion Attack in Federated Learning: Exposing Text Data through Discrete Optimization. In Proceedings of the 31st International Conference on Computational Linguistics, pages 2582–2591, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal): Gradient Inversion Attack in Federated Learning: Exposing Text Data through Discrete Optimization (Gao et al., COLING 2025)
PDF: https://aclanthology.org/2025.coling-main.176.pdf