Lingyu Li
2024
ESC-Eval: Evaluating Emotion Support Conversations in Large Language Models
Haiquan Zhao | Lingyu Li | Shisong Chen | Shuqi Kong | Jiaan Wang | Kexin Huang | Tianle Gu | Yixu Wang | Jian Wang | Liang Dandan | Zhixu Li | Yan Teng | Yanghua Xiao | Yingchun Wang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Emotion Support Conversation (ESC) is a crucial application that aims to reduce human stress, offer emotional guidance, and ultimately enhance human mental and physical well-being. With the advancement of Large Language Models (LLMs), many researchers have employed LLMs as ESC models. However, the evaluation of these LLM-based ESCs remains uncertain. In this paper, we propose ESC-Eval, which uses a role-playing agent to interact with ESC models, followed by manual evaluation of the interactive dialogues. In detail, we first re-organize 2,801 role-playing cards from seven existing datasets to define the roles of the role-playing agent. Second, we train a specific role-playing model called ESC-Role, which behaves more like a confused person than GPT-4 does. Third, using ESC-Role and the organized role cards, we systematically conduct experiments with 14 LLMs as ESC models, including general AI-assistant LLMs (e.g., ChatGPT) and ESC-oriented LLMs (e.g., ExTES-Llama). We conduct comprehensive human annotations on the interactive multi-turn dialogues of the different ESC models. The results show that ESC-oriented LLMs exhibit superior ESC abilities compared to general AI-assistant LLMs, but they still fall short of human performance. Moreover, to automate the scoring process for future ESC models, we develop ESC-RANK, trained on the annotated data, which surpasses the scoring performance of GPT-4 by 35 points.
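The evaluation loop the abstract describes (a role-card-conditioned agent conversing with the ESC model under test, with the dialogue saved for human annotation) can be sketched as follows. This is a minimal illustration, not the paper's code: RoleCard, RolePlayAgent, ESCModel, and collect_dialogue are hypothetical stand-ins, and the replies are stubbed rather than produced by ESC-Role or a real ESC model.

```python
from dataclasses import dataclass

@dataclass
class RoleCard:
    """One re-organized role-playing card: a persona and their stressor."""
    persona: str
    problem: str

class RolePlayAgent:
    """Stand-in for ESC-Role: plays the help-seeker defined by a role card."""
    def __init__(self, card: RoleCard):
        self.card = card

    def reply(self, history: list[str]) -> str:
        # A real implementation would prompt a fine-tuned role-playing LLM.
        return f"(as {self.card.persona}) I'm still struggling with {self.card.problem}."

class ESCModel:
    """Stand-in for the ESC model under evaluation (e.g., an LLM assistant)."""
    def reply(self, history: list[str]) -> str:
        return "That sounds hard. Can you tell me more about how that feels?"

def collect_dialogue(card: RoleCard, esc: ESCModel, turns: int = 5) -> list[str]:
    """Run a multi-turn seeker/supporter exchange for later human annotation."""
    seeker, history = RolePlayAgent(card), []
    for _ in range(turns):
        history.append("seeker: " + seeker.reply(history))
        history.append("supporter: " + esc.reply(history))
    return history

if __name__ == "__main__":
    card = RoleCard(persona="a stressed graduate student", problem="thesis deadlines")
    for line in collect_dialogue(card, ESCModel(), turns=2):
        print(line)
```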
Can We Statically Locate Knowledge in Large Language Models? Financial Domain and Toxicity Reduction Case Studies
Jordi Armengol-Estapé | Lingyu Li | Sebastian Gehrmann | Achintya Gopal | David Rosenberg | Gideon Mann | Mark Dredze
Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Current large language model (LLM) evaluations rely on benchmarks to assess model capabilities and their encoded knowledge. However, these evaluations cannot reveal where a model encodes its knowledge, and thus little is known about which weights contain specific information. We propose a method to statically (without forward or backward passes) locate topical knowledge in the weight space of an LLM, building on a prior insight that parameters can be decoded into interpretable tokens. If parameters can be mapped into the embedding space, it should be possible to directly search for knowledge via embedding similarity. We study the validity of this assumption across several LLMs for a variety of concepts in the financial domain and a toxicity detection setup. Our analysis yields an improved understanding of the promises and limitations of static knowledge location in real-world scenarios.
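The search procedure the abstract alludes to can be illustrated with a short sketch: decode MLP output ("value") vectors into the shared token-embedding space and rank them by cosine similarity to a concept token, with no forward or backward passes. This is an assumption-laden illustration, not the paper's implementation: the model (gpt2), the probe token (" bank"), and the restriction to mlp.c_proj rows are all illustrative choices.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2Tokenizer.from_pretrained("gpt2")

E = model.transformer.wte.weight.detach()      # (vocab, hidden) token embeddings
concept_id = tok.encode(" bank")[0]            # an illustrative financial-domain token
concept = E[concept_id] / E[concept_id].norm() # unit vector in embedding space

hits = []
for layer, block in enumerate(model.transformer.h):
    # Each row of c_proj.weight is one MLP "value vector" written into the
    # residual stream, so it can be compared to token embeddings directly.
    W = block.mlp.c_proj.weight.detach()       # (intermediate, hidden)
    sims = (W / W.norm(dim=1, keepdim=True)) @ concept
    score, idx = sims.max(dim=0)
    hits.append((score.item(), layer, idx.item()))

# The highest-scoring (layer, neuron) pairs are candidate static locations of
# the concept; the paper studies how well such rankings hold up in practice.
for score, layer, idx in sorted(hits, reverse=True)[:5]:
    print(f"layer {layer:2d}, neuron {idx:4d}, cos={score:.3f}")
```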
Co-authors
- Haiquan Zhao 1
- Shisong Chen 1
- Shuqi Kong 1
- Jiaan Wang 1
- Kexin Huang 1
- show all...