Zijie Zhang
2024
Fisher Information-based Efficient Curriculum Federated Learning with Large Language Models
Ji Liu | Jiaxiang Ren | Ruoming Jin | Zijie Zhang | Yang Zhou | Patrick Valduriez | Dejing Dou
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
As a promising paradigm for collaboratively training models with decentralized data, Federated Learning (FL) can be exploited to fine-tune Large Language Models (LLMs). While LLMs are huge in size, the scale of the training data also increases significantly, which leads to tremendous computation and communication costs. In addition, the training data is generally non-Independent and Identically Distributed (non-IID), which requires adaptive data processing within each device. Although Low-Rank Adaptation (LoRA) can significantly reduce the number of parameters to update during fine-tuning, it still takes an unaffordable amount of time to transfer the low-rank parameters of all the layers in an LLM. In this paper, we propose a Fisher Information-based Efficient Curriculum Federated Learning framework (FibecFed) with two novel methods, i.e., adaptive federated curriculum learning and efficient sparse parameter update. First, we propose a Fisher information-based method to adaptively sample data within each device to improve the effectiveness of the FL fine-tuning process. Second, we dynamically select the proper layers for global aggregation and sparse parameters for local update with LoRA so as to improve the efficiency of the FL fine-tuning process. Extensive experimental results on 10 datasets demonstrate that FibecFed yields excellent performance (up to 45.35% higher accuracy) and superb fine-tuning speed (up to 98.61% faster) compared with 17 baseline approaches.
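The abstract's core idea of scoring training samples by Fisher information can be illustrated with a minimal sketch. The empirical Fisher information of a sample is commonly approximated by the squared norm of the per-sample gradient of the log-likelihood; the toy logistic model, the variable names, and the "lowest score first" curriculum ordering below are all illustrative assumptions, not the actual FibecFed procedure, which operates on LoRA parameters of a real LLM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic model standing in for an LLM (hypothetical stand-in).
w = rng.normal(size=3)                 # model parameters
X = rng.normal(size=(8, 3))            # 8 local samples on one device
y = rng.integers(0, 2, size=8)         # binary labels

def per_sample_fisher_scores(w, X, y):
    """Empirical Fisher score per sample: squared norm of the
    per-sample gradient of the log-likelihood."""
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted P(y=1 | x)
    grads = (y - p)[:, None] * X       # per-sample d log-lik / dw
    return np.sum(grads ** 2, axis=1)  # ||g_i||^2 for each sample i

scores = per_sample_fisher_scores(w, X, y)
# One possible curriculum: present low-score (less informative) samples
# first — the actual sampling rule in FibecFed may differ.
order = np.argsort(scores)
```

The same scoring would run independently on each device, so no raw data leaves the client, which is what makes it compatible with the FL setting described above.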
2021
Adversarial Attack against Cross-lingual Knowledge Graph Alignment
Zeru Zhang | Zijie Zhang | Yang Zhou | Lingfei Wu | Sixing Wu | Xiaoying Han | Dejing Dou | Tianshi Che | Da Yan
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Recent literature has shown that knowledge graph (KG) learning models are highly vulnerable to adversarial attacks. However, there is still a paucity of vulnerability analyses of cross-lingual entity alignment under adversarial attacks. This paper proposes an adversarial attack model with two novel attack techniques that perturb the KG structure and degrade the quality of deep cross-lingual entity alignment. First, an entity density maximization method is employed to hide the attacked entities in dense regions of the two KGs, so that the derived perturbations are unnoticeable. Second, an attack signal amplification method is developed to reduce the gradient vanishing issue in the adversarial attack process, further improving attack effectiveness.
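The intuition behind targeting entities in dense regions can be sketched with a simple local-density heuristic: score each entity by the inverse mean distance to its k nearest neighbours and pick the densest ones as perturbation candidates. The 2-D embeddings, the k-NN density measure, and the target-selection rule below are illustrative assumptions; the paper's entity density maximization method operates on learned cross-lingual KG embeddings and optimizes density jointly with the attack objective.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D entity embeddings standing in for a KG embedding space
# (hypothetical stand-in). The first 8 points form a tight cluster.
emb = rng.normal(size=(20, 2))
emb[:8] = rng.normal(scale=0.2, size=(8, 2)) + 5.0

def density_scores(emb, k=3):
    """Local density per entity: inverse of the mean distance to its
    k nearest neighbours (higher score = denser neighbourhood)."""
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # exclude self-distance
    knn = np.sort(d, axis=1)[:, :k]    # k smallest distances per entity
    return 1.0 / knn.mean(axis=1)

scores = density_scores(emb)
# Entities with the highest density scores are candidate attack targets,
# since perturbations around them are harder to notice.
targets = np.argsort(scores)[::-1][:5]
```

In a real attack, the structural perturbations (edge additions or deletions) around these high-density entities would then be chosen to degrade alignment quality while remaining inconspicuous.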
Co-authors
- Yang Zhou 2
- Dejing Dou 2
- Ji Liu 1
- Jiaxiang Ren 1
- Ruoming Jin 1