Cheng-Hsun Hsueh
2024
Editing the Mind of Giants: An In-Depth Exploration of Pitfalls of Knowledge Editing in Large Language Models
Cheng-Hsun Hsueh | Paul Kuo-Ming Huang | Tzu-Han Lin | Che-Wei Liao | Hung-Chieh Fang | Chao-Wei Huang | Yun-Nung Chen
Findings of the Association for Computational Linguistics: EMNLP 2024
Knowledge editing is a rising technique for efficiently updating factual knowledge in large language models (LLMs) with minimal alteration of parameters. However, recent studies have identified side effects that emerge after editing, such as knowledge distortion and the deterioration of general abilities. Despite these findings, evaluations of the pitfalls of knowledge editing often rely on inconsistent metrics and benchmarks, lacking a uniform standard. In response, this survey presents a comprehensive study of these side effects, providing a unified perspective on the challenges of knowledge editing in LLMs by conducting experiments with consistent metrics and benchmarks. Additionally, we review related works and outline potential research directions to address these limitations. Our survey highlights the limitations of current knowledge editing methods, emphasizing the need for a deeper understanding of the inner knowledge structures of LLMs and for improved knowledge editing methods. To foster future research, we have publicly released the complementary materials (https://github.com/MiuLab/EditLLM-Survey).
2020
Semantic Guidance of Dialogue Generation with Reinforcement Learning
Cheng-Hsun Hsueh | Wei-Yun Ma
Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue
Neural encoder-decoder models have shown promising performance for human-computer dialogue systems over the past few years. However, due to the maximum-likelihood objective for the decoder, the generated responses are often so universal and safe that they lack meaningful information and are no longer relevant to the post. To address this, we propose semantic guidance using reinforcement learning to ensure that the generated responses include the given or predicted semantics and that these semantics do not appear repeatedly in the response. Synsets, which comprise sets of manually defined synonyms, are used as the form of assigned semantics. For a given, assigned, or predicted synset, only one of its synonyms should appear in the generated response; this constitutes a simple but effective semantic-control mechanism. We conduct both quantitative and qualitative evaluations, which show that the generated responses are not only of higher quality but also reflect the assigned semantic controls.