Proceedings of the 1st Workshop on Data Contamination (CONDA)
Oscar Sainz | Iker García-Ferrero | Eneko Agirre | Jon Ander Campos | Alon Jacovi | Yanai Elazar | Yoav Goldberg
Evaluating Chinese Large Language Models on Discipline Knowledge Acquisition via Memorization and Robustness Assessment
Chuang Liu | Renren Jin | Mark Steedman | Deyi Xiong
Chinese LLMs demonstrate impressive performance on NLP tasks, particularly on discipline knowledge benchmarks, with some results approaching those of GPT-4. Previous research has suggested that these advancements may partly result from data contamination or leakage, prompting efforts to develop new detection methods and to address evaluation issues in LLM benchmarks. However, a comprehensive assessment of how Chinese LLMs have evolved in this respect has been lacking. To address this gap, this paper offers a thorough investigation of Chinese LLMs on discipline knowledge evaluation, examining the progress of various LLMs, including a group of related models and others. Specifically, we conduct six assessments ranging from knowledge memorization to comprehension-oriented robustness, encompassing tasks such as predicting incomplete questions and options, identifying behaviors induced by fine-tuning on contaminated data, and answering rephrased questions. Experimental findings indicate a positive correlation between the release time of LLMs and their memorization capabilities, yet the models struggle with variations of the original question-option pairs. Additionally, our findings suggest that question descriptions have a more significant impact on LLMs’ performance.
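To make the memorization probes mentioned above concrete, here is a minimal sketch of one such assessment: asking a model to complete the held-out remainder of a benchmark question from its prefix. The model name and the example question are placeholders, not the paper's actual setup.

```python
# Minimal sketch of a memorization probe: let a model complete a benchmark
# question from its prefix and compare against the held-out remainder.
# The model name and question text are placeholders, not the paper's setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2-0.5B"  # placeholder; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

question = (
    "Which dynasty established the imperial examination system? "
    "A. Sui  B. Tang  C. Song  D. Ming"
)
# Hide the second half of the question/options and let the model fill it in.
cut = len(question) // 2
prefix, held_out = question[:cut], question[cut:]

inputs = tokenizer(prefix, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40, do_sample=False)
completion = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

# Position-wise character match with the held-out half; a high ratio
# suggests the item may have been seen during pretraining.
overlap = sum(a == b for a, b in zip(completion, held_out)) / max(len(held_out), 1)
print(f"held-out: {held_out!r}\ncompletion: {completion!r}\noverlap: {overlap:.2f}")
```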
Confounders in Instance Variation for the Analysis of Data Contamination
Behzad Mehrbakhsh | Dario Garigliotti | Fernando Martínez-Plumed | Jose Hernandez-Orallo
Test contamination is a serious problem for the evaluation of large language models (LLMs) because it leads to overestimation of their performance and to rapid saturation of benchmarks, even before the underlying capability is actually achieved. One strategy to address this issue is the (adversarial) generation of variations, by including different exemplars and different rephrasings of the questions. However, these two interventions can produce instances that are more difficult (compounding the expected loss of performance that comes from partly removing the contamination) but also instances that are less difficult (cancelling the expected loss of performance), which would make contamination undetectable. Understanding these two phenomena in terms of instance difficulty is critical for determining and measuring contamination. In this paper we conduct a comprehensive analysis of these two interventions on an addition task with fine-tuned LLAMA-2 models.
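As a concrete illustration of the two interventions described above (different exemplars and different rephrasings of the questions), the following sketch builds prompt variants for an addition task; the templates, numbers, and seeds are illustrative and not the paper's actual prompts.

```python
# Sketch of the two interventions applied to an addition task:
# (1) swap the few-shot exemplars, (2) rephrase the target question.
# Templates and numbers are illustrative, not the paper's actual prompts.
import random

REPHRASINGS = [
    "What is {a} + {b}?",
    "Compute the sum of {a} and {b}.",
    "If you add {a} to {b}, what do you get?",
]

def make_exemplars(k: int, rng: random.Random) -> str:
    """Build k solved addition exemplars for the few-shot context."""
    lines = []
    for _ in range(k):
        a, b = rng.randint(100, 999), rng.randint(100, 999)
        lines.append(f"Q: What is {a} + {b}?\nA: {a + b}")
    return "\n".join(lines)

def make_variant(a: int, b: int, seed: int) -> str:
    """One prompt variant: fresh exemplars plus a rephrased target question."""
    rng = random.Random(seed)
    context = make_exemplars(k=3, rng=rng)
    question = rng.choice(REPHRASINGS).format(a=a, b=b)
    return f"{context}\nQ: {question}\nA:"

# A contaminated model may solve the memorized phrasing yet fail on variants;
# comparing accuracy across seeds separates the two effects.
for seed in range(3):
    print(make_variant(482, 317, seed), end="\n\n")
```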
A Taxonomy for Data Contamination in Large Language Models
Medha Palavalli | Amanda Bertsch | Matthew Gormley
Large language models pretrained on extensive web corpora demonstrate remarkable performance across a wide range of downstream tasks. However, a growing concern is data contamination, where evaluation datasets may unintentionally be contained in the pretraining corpus, inflating model performance. Decontamination, the process of detecting and removing such data, is a potential solution; yet these contaminants may originate from altered versions of the test set, evading detection during decontamination. How different types of contamination impact the performance of language models on downstream tasks is not fully understood. We present a taxonomy that categorizes the various types of contamination encountered by LLMs during the pretraining phase and identify which types pose the highest risk. We analyze the impact of contamination on two key NLP tasks—summarization and question answering—revealing how different types of contamination influence task performance during evaluation.
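As a rough illustration of what "different types of contamination" can mean in a controlled pretraining study, the sketch below injects a test instance into a corpus in three generic ways; these modes are illustrative stand-ins, not the categories of the paper's taxonomy.

```python
# Illustrative sketch of injecting different contamination types into a
# pretraining corpus for a controlled study. The three modes below are
# generic examples, not the paper's actual taxonomy categories.
from dataclasses import dataclass
from typing import List

@dataclass
class QAExample:
    question: str
    answer: str

def inject(corpus: List[str], test_set: List[QAExample], mode: str) -> List[str]:
    """Append contaminated text to a copy of the pretraining corpus."""
    contaminated = list(corpus)
    for ex in test_set:
        if mode == "input_only":          # test inputs leak without labels
            contaminated.append(ex.question)
        elif mode == "input_and_label":   # full evaluation pairs leak
            contaminated.append(f"{ex.question} {ex.answer}")
        elif mode == "altered":           # trivially altered copy standing in
            contaminated.append(           # for a paraphrase that evades
                f"Reworded: {ex.question.lower()} -> {ex.answer}"  # exact match
            )
        else:
            raise ValueError(f"unknown contamination mode: {mode}")
    return contaminated

corpus = ["Some ordinary pretraining document."]
test_set = [QAExample("What is the capital of France?", "Paris")]
print(inject(corpus, test_set, mode="input_and_label"))
```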
Data Contamination Report from the 2024 CONDA Shared Task
Oscar Sainz | Iker García-Ferrero | Alon Jacovi | Jon Ander Campos | Yanai Elazar | Eneko Agirre | Yoav Goldberg | Wei-Lin Chen | Jenny Chim | Leshem Choshen | Luca D’Amico-Wong | Melissa Dell | Run-Ze Fan | Shahriar Golchin | Yucheng Li | Pengfei Liu | Bhavish Pahwa | Ameya Prabhu | Suryansh Sharma | Emily Silcock | Kateryna Solonko | David Stap | Mihai Surdeanu | Yu-Min Tseng | Vishaal Udandarao | Zengzhi Wang | Ruijie Xu | Jinglin Yang
The 1st Workshop on Data Contamination (CONDA 2024) focuses on all relevant aspects of data contamination in natural language processing, where data contamination is understood as any situation in which evaluation data are included in the pre-training corpora used to train large-scale models, compromising evaluation results. The workshop fostered a shared task to collect evidence of data contamination in currently available datasets and models. The goal of the shared task and the associated database is to assist the community in understanding the extent of the problem and to help researchers avoid reporting evaluation results on known contaminated resources. The shared task provides a structured, centralized public database for the collection of contamination evidence, open to contributions from the community via GitHub pull requests. This first compilation paper is based on 566 reported entries over 91 contaminated sources from a total of 23 contributors. The details of the individual contamination events are available on the platform, which remains online and open to contributions from the community.
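For readers who want to gather contamination evidence of their own, a common starting point is a simple n-gram overlap check between an evaluation instance and candidate corpus documents. The sketch below is a generic illustration of that idea and is not the shared task's detection protocol or reporting format.

```python
# Generic n-gram overlap check of the kind often used as contamination
# evidence; a simple illustration, not the shared task's protocol or format.
from typing import Set, Tuple

def ngrams(text: str, n: int = 8) -> Set[Tuple[str, ...]]:
    """Set of word n-grams of a lowercased, whitespace-tokenized text."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_ratio(eval_instance: str, corpus_doc: str, n: int = 8) -> float:
    """Fraction of the evaluation instance's n-grams found in a corpus document."""
    eval_ngrams = ngrams(eval_instance, n)
    if not eval_ngrams:
        return 0.0
    return len(eval_ngrams & ngrams(corpus_doc, n)) / len(eval_ngrams)

doc = "... a web-crawled document that happens to quote the benchmark item verbatim ..."
item = "a web-crawled document that happens to quote the benchmark item"
print(f"overlap: {overlap_ratio(item, doc, n=5):.2f}")  # high overlap -> possible contamination
```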