Yimeng Zhang


2025

Can Large Language Models Understand You Better? An MBTI Personality Detection Dataset Aligned with Population Traits
Bohan Li | Jiannan Guan | Longxu Dou | Yunlong Feng | Dingzirui Wang | Yang Xu | Enbo Wang | Qiguang Chen | Bichen Wang | Xiao Xu | Yimeng Zhang | Libo Qin | Yanyan Zhao | Qingfu Zhu | Wanxiang Che
Proceedings of the 31st International Conference on Computational Linguistics

The Myers-Briggs Type Indicator (MBTI) is one of the most influential personality theories, reflecting individual differences in thinking, feeling, and behaving. MBTI personality detection has garnered considerable research interest and has evolved significantly over the years. However, reported progress on this task tends to be overly optimistic, as existing benchmarks do not align well with the natural distribution of personality traits in the population. Specifically, the self-reported labels in existing datasets cause data quality issues, and their hard labels fail to capture the full range of population personality distributions. In this paper, we address these issues by constructing MBTIBench, the first manually annotated MBTI personality detection dataset with soft labels, built under the guidance of psychologists. Our experimental results confirm that soft labels provide greater benefits to downstream psychological tasks than hard labels. We also highlight LLMs' polarized predictions and biases as key directions for future research.
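
To make the hard-label vs. soft-label distinction concrete, here is a minimal sketch of the soft-label idea: each MBTI dimension gets a probability aggregated from multiple annotators' judgments rather than a single binary pole. The binary voting scale and the simple vote-fraction aggregation are illustrative assumptions of this sketch, not the paper's actual annotation protocol.

```python
# Soft labels for MBTI: a probability per dimension instead of a hard binary type.
# Aggregation rule (vote fraction) is an assumption, not MBTIBench's exact protocol.
DIMENSIONS = ["E/I", "S/N", "T/F", "J/P"]

def soft_labels(votes_per_dimension):
    """Aggregate annotator votes into one soft label per MBTI dimension.

    votes_per_dimension: dict mapping a dimension (e.g. "E/I") to a list of
    annotator votes, each 0 (first pole) or 1 (second pole).
    Returns the fraction of annotators choosing the second pole per dimension.
    """
    labels = {}
    for dim in DIMENSIONS:
        votes = votes_per_dimension[dim]
        labels[dim] = sum(votes) / len(votes)
    return labels

# Example: five annotators judge one user.
example = {
    "E/I": [1, 1, 1, 1, 0],   # leaning Introvert
    "S/N": [0, 1, 1, 0, 1],   # slightly iNtuitive
    "T/F": [0, 0, 0, 1, 0],   # mostly Thinking
    "J/P": [1, 1, 0, 1, 1],   # mostly Perceiving
}
print(soft_labels(example))  # {'E/I': 0.8, 'S/N': 0.6, 'T/F': 0.2, 'J/P': 0.8}
```

A hard label would collapse the 0.6 on S/N into a confident "N", which is exactly the kind of borderline population-level variation the abstract argues soft labels preserve.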

2024

SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning
Jinghan Jia | Yihua Zhang | Yimeng Zhang | Jiancheng Liu | Bharat Runwal | James Diffenderfer | Bhavya Kailkhura | Sijia Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Large Language Models (LLMs) have highlighted the necessity of effective unlearning mechanisms to comply with data regulations and ethical AI practices. LLM unlearning aims to remove undesired data influences and the associated model capabilities without compromising utility beyond the scope of unlearning. While interest in studying LLM unlearning is growing, the impact of optimizer choice on LLM unlearning remains unexplored. In this work, we shed light for the first time on the significance of optimizer selection in LLM unlearning, establishing a clear connection between second-order optimization and influence unlearning (a classical approach that uses influence functions to update the model and remove data influence). This insight propels us to develop a second-order optimization-based LLM unlearning framework, termed Second-Order UnLearning (SOUL), which extends the static, one-shot model update of influence unlearning into a dynamic, iterative unlearning process. Our extensive experiments show that SOUL consistently outperforms conventional first-order methods across various unlearning tasks, models, and metrics, indicating that second-order optimization offers an effective and broadly applicable solution for LLM unlearning.
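
The abstract's key move, replacing the one-shot influence-function (Newton-style) update with an iterative, second-order-preconditioned unlearning loop, can be sketched roughly as follows. This is an illustrative toy, not the authors' released implementation: the diagonal squared-gradient EMA standing in for the Hessian (in the spirit of Sophia-style optimizers), the forget/retain gradient-difference objective, and all hyperparameters are assumptions of this sketch.

```python
# Toy sketch of iterative second-order unlearning: ascend on the forget loss,
# descend on the retain loss, preconditioning each step with a cheap diagonal
# Hessian proxy (squared-gradient EMA) plus per-coordinate clipping.
import torch
import torch.nn as nn

def second_order_unlearning(model, forget_data, retain_data,
                            steps=50, lr=1e-2, beta=0.99, eps=1e-12, clip=1.0):
    loss_fn = nn.CrossEntropyLoss()
    # Running diagonal Hessian proxy per parameter (an assumption of this sketch).
    hess = [torch.zeros_like(p) for p in model.parameters()]
    for _ in range(steps):
        xf, yf = forget_data
        xr, yr = retain_data
        # Gradient-difference objective: maximize forget loss, preserve retain loss.
        loss = -loss_fn(model(xf), yf) + loss_fn(model(xr), yr)
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p, h in zip(model.parameters(), hess):
                # EMA of squared gradients as a diagonal second-order estimate.
                h.mul_(beta).add_((1 - beta) * p.grad.pow(2))
                # Preconditioned, clipped update (a Newton-like step per coordinate).
                update = (p.grad / (h + eps)).clamp_(-clip, clip)
                p.add_(-lr * update)
    return model

# Usage on a toy classifier: "unlearn" class 0 while retaining classes 1 and 2.
torch.manual_seed(0)
model = nn.Linear(8, 3)
forget = (torch.randn(32, 8), torch.zeros(32, dtype=torch.long))
retain = (torch.randn(64, 8), torch.randint(1, 3, (64,)))
second_order_unlearning(model, forget, retain)
```

The contrast with influence unlearning is that the latter applies a single closed-form update using the inverse Hessian, whereas the loop above reuses the same second-order information at every step, which is the static-to-dynamic extension the abstract describes.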