Yuan Cheng
2024
ULMR: Unlearning Large Language Models via Negative Response and Model Parameter Average
Shaojie Shi | Xiaoyu Tan | Xihe Qiu | Chao Qu | Kexin Nie | Yuan Cheng | Wei Chu | Xu Yinghui | Yuan Qi
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
In recent years, large language models (LLMs) have attracted significant interest from the research community due to their broad applicability in many language-oriented tasks, and they are now widely used in numerous areas of production and daily life. One source of the powerful capabilities of LLMs is the massive scale of their pre-training datasets. However, these pre-training datasets contain a great deal of outdated, harmful, and personally sensitive information, which is inevitably memorized by the LLM during pre-training. Eliminating this undesirable data is crucial for ensuring the model’s safety and enhancing the user experience. However, the cost of extensively cleaning the pre-training dataset and retraining the model from scratch is very high. In this work, we propose ULMR, an unlearning framework for LLMs, which first uses carefully designed prompts to rewrite the instructions in the specified dataset and generate corresponding negative responses. Subsequently, to ensure that the model does not deviate excessively after training, we perform model parameter averaging to preserve the performance of the original LLM. We conduct experiments on two public datasets, TOFU and RWKU, demonstrating that our method can effectively forget specified information while retaining the capabilities of the original LLM.
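The parameter-averaging step mentioned in the abstract can be illustrated with a minimal sketch: interpolate each weight of the unlearned model with the corresponding weight of the original model to limit post-training drift. The function name and the interpolation coefficient `alpha` are assumptions for illustration, not details taken from the paper; plain Python numbers stand in for model tensors.

```python
def average_parameters(original_state, unlearned_state, alpha=0.5):
    """Sketch of model parameter averaging between two state dicts.

    Returns a new state dict where each parameter is
    alpha * unlearned + (1 - alpha) * original, so alpha = 1.0 keeps
    the unlearned model and alpha = 0.0 keeps the original model.
    (alpha and the function name are illustrative assumptions.)
    """
    return {
        name: alpha * unlearned_state[name] + (1 - alpha) * original_state[name]
        for name in original_state
    }

# Toy usage with scalar "parameters" in place of real tensors:
original = {"w": 1.0, "b": 0.0}
unlearned = {"w": 3.0, "b": 2.0}
averaged = average_parameters(original, unlearned, alpha=0.5)
# averaged == {"w": 2.0, "b": 1.0}
```

With real LLM checkpoints the same elementwise interpolation would be applied to each tensor in the models' state dicts.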
2021
CRSLab: An Open-Source Toolkit for Building Conversational Recommender System
Kun Zhou | Xiaolei Wang | Yuanhang Zhou | Chenzhan Shang | Yuan Cheng | Wayne Xin Zhao | Yaliang Li | Ji-Rong Wen
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations
In recent years, conversational recommender systems (CRSs) have drawn wide attention in the research community; they focus on providing high-quality recommendations to users via natural language conversations. However, due to diverse scenarios and data formats, existing studies on CRSs lack unified and standardized implementations or comparisons. To tackle this challenge, we release an open-source toolkit, CRSLab, which provides a unified and extensible framework with highly decoupled modules to develop CRSs. Based on this framework, we collect 6 commonly used human-annotated CRS datasets and implement 19 models that include advanced techniques such as graph neural networks and pre-training models. Besides, our toolkit provides a series of automatic evaluation protocols and a human-machine interaction interface to evaluate and compare different CRS methods. The project and documents are released at https://github.com/RUCAIBox/CRSLab.