R-Judge: Benchmarking Safety Risk Awareness for LLM Agents
Tongxin Yuan | Zhiwei He | Lingzhong Dong | Yiming Wang | Ruijie Zhao | Tian Xia | Lizhen Xu | Binglin Zhou | Fangqi Li | Zhuosheng Zhang | Rui Wang | Gongshen Liu
Findings of the Association for Computational Linguistics: EMNLP 2024
Large language models (LLMs) have exhibited great potential in autonomously completing tasks across real-world applications. Despite this, LLM agents introduce unexpected safety risks when operating in interactive environments. Unlike most prior studies, which center on the harmlessness of LLM-generated content, this work addresses the imperative need for benchmarking the behavioral safety of LLM agents within diverse environments. We introduce R-Judge, a benchmark crafted to evaluate the proficiency of LLMs in judging and identifying safety risks given agent interaction records. R-Judge comprises 569 records of multi-turn agent interaction, encompassing 27 key risk scenarios among 5 application categories and 10 risk types. The records are of high-quality curation, with annotated safety labels and risk descriptions. Evaluation of 11 LLMs on R-Judge shows considerable room for enhancing the risk awareness of LLMs: the best-performing model, GPT-4o, achieves 74.42%, while no other model significantly exceeds random performance. Moreover, we reveal that risk awareness in open agent scenarios is a multi-dimensional capability involving knowledge and reasoning, and is thus challenging for LLMs. With further experiments, we find that fine-tuning on safety judgment significantly improves model performance, while straightforward prompting mechanisms fail. R-Judge is publicly available at Anonymous.