Simple Temperature Cool-down in Contrastive Framework for Unsupervised Sentence Representation Learning

Yoo Hyun Jeong, Myeong Soo Han, Dong-Kyu Chae


Abstract
In this paper, we propose a simple yet effective trick to improve sentence representations learned via unsupervised contrastive learning. Although contrastive learning has achieved strong performance in both visual representation learning (VRL) and sentence representation learning (SRL), we focus on the fact that there is a gap between the characteristics and training dynamics of VRL and SRL. We first examine the role of temperature in bridging this gap and find temperature-dependent behaviors in SRL; i.e., a higher temperature causes overfitting of uniformity while improving alignment in the earlier phase of training. Based on this observation, we design a temperature cool-down technique that helps PLMs become more suitable for contrastive learning by preparing a uniform representation space. Our experimental results on widely-used benchmarks demonstrate the effectiveness and extensibility of our method.
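The idea in the abstract — train with a higher temperature early to improve alignment, then lower ("cool down") the temperature to avoid overfitting uniformity — can be sketched as a schedule applied to the InfoNCE temperature. The linear schedule, the specific temperature values (0.1 → 0.05), and the SimCSE-style in-batch-negative loss below are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def cooled_temperature(step, total_steps, tau_start=0.1, tau_end=0.05):
    """Anneal the temperature from tau_start down to tau_end over training.

    A linear ramp is one plausible instantiation of "temperature cool-down";
    the paper's exact schedule and endpoints may differ.
    """
    frac = min(step / max(total_steps, 1), 1.0)
    return tau_start + frac * (tau_end - tau_start)

def info_nce_loss(z1, z2, tau):
    """SimCSE-style InfoNCE with in-batch negatives and cosine similarity.

    z1[i] and z2[i] are embeddings of two views of sentence i; each row's
    positive is its diagonal entry, all other rows serve as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                      # (N, N) scaled similarities
    # Numerically stable log-softmax along each row
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the positive (diagonal) pairs
    return -np.mean(np.diag(log_prob))
```

In a training loop, `tau = cooled_temperature(step, total_steps)` would be recomputed each step and passed to the loss, so early steps use the larger temperature and later steps the smaller one.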
Anthology ID:
2024.findings-eacl.37
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
550–559
URL:
https://aclanthology.org/2024.findings-eacl.37
Cite (ACL):
Yoo Hyun Jeong, Myeong Soo Han, and Dong-Kyu Chae. 2024. Simple Temperature Cool-down in Contrastive Framework for Unsupervised Sentence Representation Learning. In Findings of the Association for Computational Linguistics: EACL 2024, pages 550–559, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Simple Temperature Cool-down in Contrastive Framework for Unsupervised Sentence Representation Learning (Jeong et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-eacl.37.pdf