Yuxiang Zheng
2024
OpenResearcher: Unleashing AI for Accelerated Scientific Research
Yuxiang Zheng | Shichao Sun | Lin Qiu | Dongyu Ru | Cheng Jiayang | Xuefeng Li | Jifan Lin | Binjie Wang | Yun Luo | Renjie Pan | Yang Xu | Qingkai Min | Zizhao Zhang | Yiwen Wang | Wenjie Li | Pengfei Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
The rapid growth of scientific literature poses significant challenges for researchers striving to stay current with the latest advancements in their fields and to delve into new areas. We introduce OpenResearcher, an innovative platform that leverages Artificial Intelligence (AI) techniques to accelerate the research process by answering diverse questions from researchers. OpenResearcher is built on Retrieval-Augmented Generation (RAG) to integrate Large Language Models (LLMs) with up-to-date, domain-specific knowledge. Moreover, we develop various tools for OpenResearcher to understand researchers' queries, search the scientific literature, filter retrieved information, provide accurate and comprehensive answers, and self-refine these answers. OpenResearcher can flexibly use these tools to balance efficiency and effectiveness. As a result, OpenResearcher enables researchers to save time and increases their potential to discover new insights and drive scientific breakthroughs. Demo, video, and code are available at: https://github.com/GAIR-NLP/OpenResearcher.
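To make the retrieve-filter-answer flow described above concrete, here is a minimal, self-contained sketch of such a RAG loop. It uses a toy lexical retriever and a stubbed LLM call; all names (Passage, retrieve, filter_passages, answer_with_llm) are hypothetical illustrations and are not taken from the OpenResearcher codebase.

from dataclasses import dataclass

@dataclass
class Passage:
    paper_id: str
    text: str

def retrieve(query: str, corpus: list[Passage], k: int = 3) -> list[Passage]:
    """Toy lexical retriever: rank passages by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(q & set(p.text.lower().split())),
                    reverse=True)
    return ranked[:k]

def filter_passages(query: str, passages: list[Passage]) -> list[Passage]:
    """Keep only passages that share at least one word with the query."""
    q = set(query.lower().split())
    return [p for p in passages if q & set(p.text.lower().split())]

def answer_with_llm(query: str, passages: list[Passage]) -> str:
    """Stand-in for an LLM call: assemble a grounded prompt.
    In a real system this prompt would be sent to a language model,
    and the draft answer could then be self-refined in a second pass."""
    context = "\n".join(f"[{p.paper_id}] {p.text}" for p in passages)
    return f"Question: {query}\nContext:\n{context}\nAnswer: (generated by the LLM)"

corpus = [
    Passage("paper-1", "Retrieval-augmented generation grounds LLM answers in retrieved documents."),
    Passage("paper-2", "Self-refinement lets a model critique and revise its own output."),
]
query = "How does retrieval-augmented generation work?"
print(answer_with_llm(query, filter_passages(query, retrieve(query, corpus))))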
SAFETY-J: Evaluating Safety with Critique
Yixiu Liu | Yuxiang Zheng | Shijie Xia | Jiajun Li | Yi Tu | Chaoling Song | Pengfei Liu
Findings of the Association for Computational Linguistics: EMNLP 2024
The deployment of Large Language Models (LLMs) in content generation raises significant safety concerns, particularly regarding the transparency and interpretability of content evaluations. Current methods, primarily focused on binary safety classifications, lack mechanisms for detailed critique, limiting their utility for model improvement and user trust. To address these limitations, we introduce SAFETY-J, a bilingual generative safety evaluator for English and Chinese with critique-based judgment. SAFETY-J utilizes a robust training dataset that includes diverse dialogues and augmented query-response pairs to comprehensively assess safety across various scenarios. We establish an automated meta-evaluation benchmark that objectively assesses the quality of critiques with minimal human intervention, facilitating scalable and continuous improvement. Additionally, SAFETY-J employs an iterative preference learning technique to dynamically refine safety assessments based on meta-evaluations and critiques. Our evaluations demonstrate that SAFETY-J provides more nuanced and accurate safety evaluations, thereby enhancing both critique quality and predictive reliability in complex content scenarios. To facilitate further research and application, we have released SAFETY-J's training protocols, datasets, and code at https://github.com/GAIR-NLP/Safety-J.
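The critique-based judgment described above can be illustrated with a small sketch: the evaluator first produces a natural-language critique of a response, then commits to a safe/unsafe label derived from that critique. The function names and the keyword heuristic below are purely illustrative assumptions, not SAFETY-J's actual model or training procedure.

def critique(query: str, response: str) -> str:
    """Stand-in for the generative evaluator: point out potentially unsafe content."""
    flagged = [w for w in ("weapon", "self-harm", "exploit") if w in response.lower()]
    if flagged:
        return f"The response mentions {', '.join(flagged)}, which may be unsafe in this context."
    return "No obviously unsafe content was found in the response."

def judge(query: str, response: str) -> dict:
    """Critique first, then derive a binary safety label from the critique."""
    c = critique(query, response)
    label = "unsafe" if "may be unsafe" in c else "safe"
    return {"critique": c, "label": label}

print(judge("How do I stay safe online?", "Use strong passwords and enable 2FA."))

In the actual system, the critique text itself is what the meta-evaluation benchmark scores, and preference learning iteratively improves both the critiques and the resulting labels.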