Data Contamination Calibration for Black-box LLMs

Wentao Ye, Jiaqi Hu, Liyao Li, Haobo Wang, Gang Chen, Junbo Zhao


Abstract
The rapid advancement of Large Language Models (LLMs) is tightly associated with the expansion of training data size. However, unchecked ultra-large-scale training sets introduce a series of potential risks such as data contamination, i.e., benchmark data being used for training. In this work, we propose a holistic method named Polarized Augment Calibration (PAC), along with a new to-be-released dataset, to detect contaminated data and diminish the contamination effect. PAC extends the popular Membership Inference Attack (MIA) from the machine learning community by forming a more global target of detecting training data, so as to clarify invisible training data. As a pioneering work, PAC is plug-and-play and can be integrated with most (if not all) current white- and black-box LLMs. In extensive experiments, PAC outperforms existing methods by at least 4.5% on data contamination detection across more than 4 dataset formats and more than 10 base LLMs. Moreover, our application in real-world scenarios highlights the prominent presence of contamination and related issues.
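For context, MIA-style contamination detection typically scores a candidate text by how confidently the model predicts its tokens. The sketch below shows a generic loss-based baseline of this kind (a Min-K%-style score over token log-probabilities); it is an illustrative assumption, not the paper's PAC method, and the function names and threshold are hypothetical.

```python
def min_k_percent_score(token_logprobs, k=0.2):
    """MIA-style score: average log-probability of the k fraction of
    least likely tokens. Higher (less negative) scores suggest the
    text may have been seen during training. This is a generic
    baseline, not PAC itself."""
    if not token_logprobs:
        raise ValueError("need at least one token log-probability")
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]
    return sum(lowest) / n

def flag_contaminated(token_logprobs, threshold=-2.0, k=0.2):
    """Flag a sample as likely contaminated when its score exceeds a
    calibrated threshold (the value -2.0 is purely illustrative)."""
    return min_k_percent_score(token_logprobs, k) > threshold
```

In a black-box setting, the per-token log-probabilities would come from an API that exposes them; the decision threshold must be calibrated on texts known to be inside or outside the training data.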
Anthology ID:
2024.findings-acl.644
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10845–10861
URL:
https://aclanthology.org/2024.findings-acl.644
Cite (ACL):
Wentao Ye, Jiaqi Hu, Liyao Li, Haobo Wang, Gang Chen, and Junbo Zhao. 2024. Data Contamination Calibration for Black-box LLMs. In Findings of the Association for Computational Linguistics ACL 2024, pages 10845–10861, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Data Contamination Calibration for Black-box LLMs (Ye et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.644.pdf