Hanxu Hu
2024
CLEAN–EVAL: Clean Evaluation on Contaminated Large Language Models
Wenhong Zhu | Hongkun Hao | Zhiwei He | Yun-Ze Song | Jiao Yueyang | Yumeng Zhang | Hanxu Hu | Yiran Wei | Rui Wang | Hongyuan Lu
Findings of the Association for Computational Linguistics: NAACL 2024
We are currently in an era of fierce competition among various large language models (LLMs), continuously pushing the boundaries of benchmark performance. However, genuinely assessing the capabilities of these LLMs has become a challenging and critical issue due to potential data contamination. In this paper, we propose a novel and valuable method, Clean-Eval, which mitigates the issue of data contamination and evaluates LLMs more cleanly. Clean-Eval employs a neural-based model to paraphrase and back-translate the contaminated data into a candidate set, generating expressions with the same meaning but different surface forms. A semantic detector is then used to filter out low-quality generated samples and narrow down the candidate set. Candidates with moderate BLEURT scores against the original samples are selected as the final evaluation set. According to human assessment, this set is almost semantically equivalent to the original contaminated set but expressed differently. We conduct experiments on 20 existing benchmarks across diverse tasks, and the results demonstrate that Clean-Eval substantially restores the actual evaluation results on contaminated LLMs under both few-shot learning and fine-tuning scenarios.
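As a rough illustration of the pipeline the abstract describes, here is a minimal Python sketch. The paraphrase, back-translation, semantic-filtering, and BLEURT-scoring helpers are passed in as hypothetical callables, and the score thresholds are illustrative assumptions; this is a sketch of the described procedure, not the authors' released implementation.

```python
# Minimal sketch of the Clean-Eval candidate pipeline (hypothetical helpers,
# not the authors' released code).
from typing import Callable

def build_clean_eval_set(
    contaminated: list[str],
    paraphrase: Callable[[str], list[str]],    # e.g. an LLM prompted to rephrase
    back_translate: Callable[[str], str],      # e.g. an en -> de -> en round trip
    semantic_ok: Callable[[str, str], bool],   # semantic detector: drops low-quality outputs
    bleurt_score: Callable[[str, str], float], # BLEURT(candidate, original)
    low: float = 0.4,                          # illustrative thresholds for a
    high: float = 0.7,                         # "moderate" BLEURT band
) -> list[str]:
    """Generate same-meaning, different-surface-form variants of each
    contaminated sample and keep one with a moderate BLEURT score."""
    clean_set: list[str] = []
    for sample in contaminated:
        # Build a candidate set of paraphrases plus a back-translation.
        candidates = paraphrase(sample) + [back_translate(sample)]
        # Narrow the set with the semantic detector.
        candidates = [c for c in candidates if semantic_ok(sample, c)]
        # Keep a candidate whose BLEURT score against the original is moderate:
        # similar enough in meaning, different enough in wording.
        moderate = [c for c in candidates if low <= bleurt_score(c, sample) <= high]
        if moderate:
            clean_set.append(moderate[0])
    return clean_set
```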
2023
Improving User Controlled Table-To-Text Generation Robustness
Hanxu Hu | Yunqing Liu | Zhongyi Yu | Laura Perez-Beltrachini
Findings of the Association for Computational Linguistics: EACL 2023
In this work we study user-controlled table-to-text generation, where users explore the content of a table by selecting cells and reading a natural language description thereof, automatically produced by a natural language generator. Such generation models usually learn from carefully selected cell combinations (clean cell selections); in practice, however, users may select unexpected, redundant, or incoherent cell combinations (noisy cell selections). In experiments, we find that models perform well on test sets drawn from the same distribution as the training data, but their performance drops when evaluated on realistic noisy user inputs. We propose a fine-tuning regime with additional user-simulated noisy cell selections. Models fine-tuned with the proposed regime gain 4.85 BLEU points on noisy user test cases and 1.4 BLEU points on clean test cases, and achieve comparable state-of-the-art performance on the ToTTo dataset.
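A minimal sketch of what user-simulated noisy cell selections might look like, assuming cells are (row, column) pairs; the drop/add probabilities and noise operations here are illustrative assumptions, not the paper's exact simulation procedure.

```python
# Minimal sketch of user-simulated noisy cell selections (illustrative noise
# operations and probabilities, not the paper's exact procedure).
import random

Cell = tuple[int, int]  # (row, column)

def simulate_noisy_selection(
    clean_selection: list[Cell],
    table_cells: list[Cell],
    p_drop: float = 0.1,   # chance of dropping a selected cell (incoherent selection)
    p_extra: float = 0.2,  # chance of adding an unselected cell (redundant/unexpected)
) -> list[Cell]:
    """Perturb a clean cell selection the way a real user might."""
    kept = [c for c in clean_selection if random.random() > p_drop]
    extras = [c for c in table_cells
              if c not in clean_selection and random.random() < p_extra]
    return kept + extras
```

Fine-tuning on selections perturbed this way exposes the model to the redundant and incoherent inputs it will see at test time, which is the intuition behind the reported robustness gains.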
Meta-learning For Vision-and-language Cross-lingual Transfer
Hanxu Hu | Frank Keller
Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)