Composition-contrastive Learning for Sentence Embeddings

Sachin Chanchani, Ruihong Huang


Abstract
Vector representations of natural language are ubiquitous in search applications. Recently, various methods based on contrastive learning have been proposed to learn textual representations from unlabelled data: they maximize alignment between minimally perturbed embeddings of the same text while encouraging a uniform distribution of embeddings across a broader corpus. In contrast, we propose maximizing alignment between texts and a composition of their phrasal constituents. We consider several realizations of this objective and elaborate on the impact on representations in each case. Experimental results on semantic textual similarity tasks show improvements over baselines that are comparable with state-of-the-art approaches. Moreover, this work is the first to do so without incurring costs in auxiliary training objectives or additional network parameters.
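
Below is a minimal sketch of one possible realization of the objective described in the abstract, for orientation only: an InfoNCE-style contrastive loss in which each sentence embedding is pulled toward a composition of its phrase embeddings, with in-batch compositions as negatives. The mean-pooling composition, function name, and temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def composition_contrastive_loss(sent_emb, phrase_embs, temperature=0.05):
    """sent_emb: (batch, dim) embeddings of full sentences.
    phrase_embs: list of (num_phrases_i, dim) tensors, the embeddings
    of each sentence's phrasal constituents."""
    # Compose each sentence's constituents; mean-pooling is one plausible
    # choice (the paper considers several realizations of composition).
    comp_emb = torch.stack([p.mean(dim=0) for p in phrase_embs])  # (batch, dim)

    # Cosine similarity between every sentence and every composed embedding.
    sent = F.normalize(sent_emb, dim=-1)
    comp = F.normalize(comp_emb, dim=-1)
    sim = sent @ comp.t() / temperature  # (batch, batch)

    # InfoNCE: the matching composition is the positive; compositions of
    # the other sentences in the batch serve as negatives.
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```

Note that, consistent with the abstract's claim, a loss of this shape reuses the same encoder for sentences and phrases and so adds no auxiliary objectives or extra network parameters.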
Anthology ID:
2023.acl-long.882
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
15836–15848
URL:
https://aclanthology.org/2023.acl-long.882
DOI:
10.18653/v1/2023.acl-long.882
Cite (ACL):
Sachin Chanchani and Ruihong Huang. 2023. Composition-contrastive Learning for Sentence Embeddings. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15836–15848, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Composition-contrastive Learning for Sentence Embeddings (Chanchani & Huang, ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.882.pdf
Video:
https://aclanthology.org/2023.acl-long.882.mp4