Qixuan Zhang
2024
Visual Prompting in LLMs for Enhancing Emotion Recognition
Qixuan Zhang | Zhifeng Wang | Dylan Zhang | Wenjia Niu | Sabrina Caldwell | Tom Gedeon | Yang Liu | Zhenyue Qin
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Vision Large Language Models (VLLMs) are transforming the intersection of computer vision and natural language processing; however, the potential of visual prompts for emotion recognition in these models remains largely unexplored. Traditional methods in VLLMs struggle with spatial localization and often discard valuable global context. We propose a novel Set-of-Vision prompting (SoV) approach that enhances zero-shot emotion recognition by using spatial information, such as bounding boxes and facial landmarks, to mark targets precisely. SoV improves accuracy in face counting and emotion categorization while preserving the enriched image context. Through comprehensive experiments and analyses of recent commercial and open-source VLLMs, we evaluate SoV's ability to help these models comprehend facial expressions in natural environments. Our findings demonstrate the effectiveness of integrating spatial visual prompts into VLLMs for improving emotion recognition performance.
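As the abstract describes, SoV overlays spatial markers (bounding boxes and facial landmarks) on the image before querying the VLLM zero-shot. A minimal sketch of that idea is below, assuming face detections are already available from an off-the-shelf detector; the marker design, prompt wording, and downstream VLLM call are illustrative, not the authors' implementation.

```python
# Sketch of Set-of-Vision-style prompting: draw a numbered box and landmark
# dots for each detected face, then ask a VLLM about the marked faces.
from PIL import Image, ImageDraw

def annotate_faces(image: Image.Image, detections) -> Image.Image:
    """Overlay numbered bounding boxes and landmark dots on a copy of `image`.

    `detections` is a list of (box, landmarks) pairs, where box is
    (x0, y0, x1, y1) and landmarks is a list of (x, y) points.
    """
    annotated = image.copy()
    draw = ImageDraw.Draw(annotated)
    for idx, (box, landmarks) in enumerate(detections, start=1):
        draw.rectangle(box, outline="red", width=3)
        draw.text((box[0], box[1] - 12), str(idx), fill="red")  # face index
        for x, y in landmarks:
            draw.ellipse((x - 2, y - 2, x + 2, y + 2), fill="yellow")
    return annotated

image = Image.open("crowd.jpg")
# Toy detection: one face box with three landmark points (eyes, nose).
detections = [((40, 30, 120, 140), [(60, 70), (100, 70), (80, 110)])]
annotated = annotate_faces(image, detections)
prompt = ("Each face in the image is marked with a numbered red box. "
          "For each numbered face, name its emotion (happy, sad, angry, "
          "surprised, fearful, disgusted, or neutral).")
# `annotated` and `prompt` would then be sent to a VLLM (e.g., a GPT-4V-style
# API); the spatial markers let the model refer to faces by index while the
# full image context is preserved.
```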
Can Machine Unlearning Reduce Social Bias in Language Models?
Omkar Dige | Diljot Arneja | Tsz Fung Yau | Qixuan Zhang | Mohammad Bolandraftar | Xiaodan Zhu | Faiza Khan Khattak
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
Mitigating bias in language models (LMs) has become a critical problem due to the widespread deployment of LMs in industry and customer-facing applications. Numerous approaches revolve around data pre-processing and subsequent fine-tuning of language models, tasks that can be both time-consuming and computationally demanding. As alternatives, machine unlearning techniques are being explored, yet there is a notable lack of comparative studies evaluating their effectiveness. In this work, we study two machine unlearning methods, Partitioned Contrastive Gradient Unlearning (PCGU) applied to decoder models and Negation via Task Vector, and compare them with Direct Preference Optimization (DPO) for reducing social biases in open-source LMs such as LLaMA-2 and OPT. We also implement distributed PCGU for large models. Quantitative and qualitative analyses show that the Negation via Task Vector method outperforms PCGU and is comparable to DPO in debiasing models, with minimal deterioration in model performance and perplexity. Negation via Task Vector reduces the bias score by 25.5% for LLaMA-2 and by up to 40% for OPT models. Moreover, unlike DPO, it can be easily tuned to balance the trade-off between bias reduction and generation quality.
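Negation via Task Vector follows the task-arithmetic recipe: fine-tune a copy of the model on the behavior to be removed (here, biased text), take the weight difference as a task vector, and subtract a scaled copy of it from the base weights. A minimal PyTorch sketch under that reading follows; the checkpoint paths and the scaling value are placeholders, not the paper's exact setup.

```python
# Negation via Task Vector: tau = theta_biased - theta_base, then
# theta_debiased = theta_base - lambda * tau.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")
# Hypothetical checkpoint fine-tuned on text exhibiting the unwanted bias.
biased = AutoModelForCausalLM.from_pretrained("path/to/opt-1.3b-biased-finetune")

lam = 0.6  # scaling factor, tunable to trade bias reduction vs. generation quality
with torch.no_grad():
    for (name, p_base), (_, p_biased) in zip(
        base.named_parameters(), biased.named_parameters()
    ):
        tau = p_biased - p_base  # task vector encoding the biased behavior
        p_base -= lam * tau      # negate: subtract the scaled task vector

base.save_pretrained("opt-1.3b-debiased")
```

Because the edit is a single weight-space subtraction, sweeping `lam` gives the cheap bias/quality trade-off knob the abstract contrasts with DPO.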