Ziche Liu
2024
Humans or LLMs as the Judge? A Study on Judgement Bias
Guiming Hardy Chen | Shunian Chen | Ziche Liu | Feng Jiang | Benyou Wang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Adopting humans and large language models (LLMs) as judges (*a.k.a.* human- and LLM-as-a-judge) for evaluating the performance of LLMs has recently gained attention. Nonetheless, this approach concurrently introduces potential biases from humans and LLMs, calling into question the reliability of the evaluation results. In this paper, we propose a novel framework, free from referencing ground-truth annotations, for investigating **Misinformation Oversight Bias**, **Gender Bias**, **Authority Bias**, and **Beauty Bias** in LLM and human judges. We curate a dataset based on the revised Bloom’s Taxonomy and conduct thousands of evaluations. Results show that human and LLM judges are vulnerable to perturbations to various degrees, and that even cutting-edge judges possess considerable biases. We further exploit these biases to conduct attacks on LLM judges. We hope that our work alerts the community to the bias and vulnerability of human- and LLM-as-a-judge, as well as the urgency of developing robust evaluation systems.
AceGPT, Localizing Large Language Models in Arabic
Huang Huang | Fei Yu | Jianqing Zhu | Xuening Sun | Hao Cheng | Song Dingjie | Zhihong Chen | Mosen Alharthi | Bang An | Juncai He | Ziche Liu | Junying Chen | Jianquan Li | Benyou Wang | Lian Zhang | Ruoyu Sun | Xiang Wan | Haizhou Li | Jinchao Xu
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics that are inadequately addressed by current mainstream models. Significant concerns emerge when addressing cultural sensitivity and local values. To address this, the paper proposes a comprehensive solution that includes further pre-training on Arabic texts, Supervised Fine-Tuning (SFT) using native Arabic instructions and GPT-4 responses in Arabic, and Reinforcement Learning with AI Feedback (RLAIF) employing a reward model attuned to local culture and values. The goal is to cultivate culturally cognizant and value-aligned Arabic LLMs capable of accommodating the diverse, application-specific needs of Arabic-speaking communities. Comprehensive evaluations reveal that the resulting model, dubbed ‘AceGPT’, sets the state-of-the-art standard for open Arabic LLMs across various benchmarks. Code, data, and models are available at https://github.com/FreedomIntelligence/AceGPT.