Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration

Kangxi Wu, Liang Pang, Huawei Shen, Xueqi Cheng


Abstract
The black-box nature of large language models (LLMs) poses challenges in interpreting results, impacting issues such as data intellectual property protection and hallucination tracing. Training data attribution (TDA) methods are considered effective solutions to address these challenges. Most recent TDA methods rely on influence functions, assuming the model achieves minimized empirical risk. However, achieving this criterion is difficult, and sourcing accuracy can be compromised by fitting errors during model training. In this paper, we introduce a novel TDA method called Debias and Denoise Attribution (DDA), which enhances influence functions by addressing fitting errors. Specifically, the debias strategy improves the performance of influence functions by eliminating the knowledge bias present in the base model before fine-tuning, while the denoise strategy reduces discrepancies in influence scores arising from varying degrees of fitting during training through smoothing techniques. Experimental results demonstrate that our method significantly outperforms existing approaches, achieving an average AUC of 91.64%. Moreover, DDA exhibits strong generality and scalability across various sources and models of different scales, such as LLaMA2, Qwen2, and Mistral.
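The abstract's two strategies can be illustrated with a minimal sketch. This is not the paper's exact formulation; it assumes a common first-order approximation in which the influence of a training example on a test example is the inner product of their loss gradients, with debiasing realized as subtracting the score computed on the base (pre-fine-tuning) model and denoising realized as averaging scores across training checkpoints. All function names here are hypothetical.

```python
import numpy as np

def influence(train_grad, test_grad):
    """First-order influence approximation: inner product of the
    training-example and test-example loss gradients."""
    return float(np.dot(train_grad, test_grad))

def debiased_influence(train_grad_ft, test_grad_ft,
                       train_grad_base, test_grad_base):
    """Debias strategy (sketch): subtract the influence computed on the
    base model so that knowledge already present before fine-tuning does
    not inflate the attribution score."""
    return influence(train_grad_ft, test_grad_ft) \
        - influence(train_grad_base, test_grad_base)

def denoised_influence(scores_over_checkpoints):
    """Denoise strategy (sketch): smooth influence scores over several
    training checkpoints to reduce variance caused by uneven fitting
    during training. Plain averaging stands in for the paper's
    smoothing technique."""
    return float(np.mean(scores_over_checkpoints))
```

In practice the gradients would come from per-example backpropagation through the fine-tuned and base LLMs; the arrays here are stand-ins for those gradient vectors.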
Anthology ID:
2024.emnlp-main.782
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
14131–14143
URL:
https://aclanthology.org/2024.emnlp-main.782
DOI:
10.18653/v1/2024.emnlp-main.782
Cite (ACL):
Kangxi Wu, Liang Pang, Huawei Shen, and Xueqi Cheng. 2024. Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 14131–14143, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration (Wu et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.782.pdf