Linear Layer Extrapolation for Fine-Grained Emotion Classification

Mayukh Sharma, Sean O’Brien, Julian McAuley


Abstract
Certain abilities of Transformer-based language models consistently emerge in their later layers. Previous research has leveraged this phenomenon to improve factual accuracy through self-contrast, penalizing early-exit predictions based on the premise that later-layer updates are more factually reliable than earlier-layer associations. We observe a similar pattern for fine-grained emotion classification in text, demonstrating that self-contrast can enhance encoder-based text classifiers. Additionally, we reinterpret self-contrast as a form of linear extrapolation, which motivates a refined approach that dynamically adjusts the contrastive strength based on the selected intermediate layer. Experiments across multiple models and emotion classification datasets show that our method outperforms standard classification techniques in fine-grained emotion classification tasks.
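The abstract describes contrasting final-layer predictions with early-exit predictions from an intermediate layer, with the contrastive strength tied to which layer is selected. Below is a minimal PyTorch sketch of that idea as linear extrapolation; the function name, the pooling choice, and the layer-dependent `alpha` schedule are illustrative assumptions, not the authors' exact formulation.

```python
import torch

def extrapolated_logits(hidden_states, classifier_head, layer_idx, num_layers, base_alpha=1.0):
    """Self-contrast viewed as linear extrapolation from an early-exit layer to the final layer.

    hidden_states: sequence of pooled per-layer representations, e.g. the [CLS]
        vectors from an encoder run with output_hidden_states=True
        (num_layers + 1 entries: embeddings first, final layer last).
    classifier_head: the trained classification head, reused here as an
        early-exit probe on the intermediate layer.
    layer_idx: index of the selected intermediate layer (1 .. num_layers - 1).
    """
    z_final = classifier_head(hidden_states[-1])          # logits from the last layer
    z_early = classifier_head(hidden_states[layer_idx])   # early-exit logits

    # Illustrative schedule (an assumption, not necessarily the paper's exact formula):
    # scale the contrastive strength by how far the chosen layer sits below the top.
    alpha = base_alpha * (num_layers - layer_idx) / num_layers

    # Linear extrapolation: start at the early-exit prediction, pass through the
    # final prediction, and continue along that direction by a factor of alpha.
    # Algebraically equal to the self-contrast form (1 + alpha) * z_final - alpha * z_early.
    return z_final + alpha * (z_final - z_early)
```

With a Hugging Face encoder, `hidden_states` could be built as `[h[:, 0] for h in model(**inputs, output_hidden_states=True).hidden_states]`; setting `base_alpha=0` recovers the standard final-layer classifier.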
Anthology ID:
2024.emnlp-main.1161
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
20880–20888
URL:
https://aclanthology.org/2024.emnlp-main.1161
DOI:
10.18653/v1/2024.emnlp-main.1161
Cite (ACL):
Mayukh Sharma, Sean O’Brien, and Julian McAuley. 2024. Linear Layer Extrapolation for Fine-Grained Emotion Classification. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 20880–20888, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Linear Layer Extrapolation for Fine-Grained Emotion Classification (Sharma et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.1161.pdf